
Navigating the AI Revolution: Transforming Business Strategies, Copyright Controversies, and a Personal AI Adventure in Melbourne

Navigating the AI Revolution
Ana Welch
Andrew Welch
Chris Huntingford
William Dorrington

FULL SHOW NOTES
https://podcast.nz365guy.com/541 

Imagine your daily grind transformed by artificial intelligence—think bespoke walking tours and streamlined business operations. That's what we're unpacking, along with our esteemed guests, Andrew and Ana, as we delve into the burgeoning AI landscape of 2024. We're peeling back the layers on how businesses are tossing aside hesitation and jumping headfirst into the world of AI, with Cloud Lighthouse leading the charge in guiding leaders from mere IT management to becoming strategic visionaries. And for a personal touch, you won't want to miss the tale of my unforgettable day in Melbourne, all thanks to an AI assistant that knew just what I needed.

But it's not all smooth sailing on the digital sea; we navigate through the choppy waters of copyright controversy, pondering if tech giants owe a hat tip—or more—to the original creators whose content trains their AI. This episode isn't just about embracing the future; it's a critical look at the present, discussing how technologies like Microsoft's Copilot push the boundaries of citation practices and user engagement. Join us as we explore the intricate dance of math, ethics, and law that makes up the ever-evolving AI domain, ensuring you're informed and ready for what's on the horizon.

AgileXRM 
AgileXRM - The integrated BPM for Microsoft Power Platform

90 Day Mentoring Challenge April 1st 2024
https://ako.nz365guy.com
Use the code PODCAST at checkout for a 10% discount

Support the show

If you want to get in touch with me, you can message me here on LinkedIn.

Thanks for listening 🚀 - Mark Smith

Chapters

00:00 - Preparing for AI in 2024

10:44 - AI Integration in Everyday Life

18:31 - Legalities and Technology

Transcript

Mark Smith: Welcome to the Power Platform Show. Thanks for joining me today. I hope today's guest inspires and educates you on the possibilities of the Microsoft Power Platform. Now let's get on with the show. Hey, welcome everybody to the Ecosystems Podcast by Cloud Lighthouse. We've got some exciting announcements to make around Cloud Lighthouse today. But before we get started, why don't we turn over to the two gentlemen on the left? But in your case they might be on the right. So we don't know if we'll start with the left or the right. But, Mr Dorrington, Mr Huntingford, the floor is yours.

William Dorrington: Hello, hi. Enough of that, on to Andrew and Ana. Right or left, right or left.

Andrew Welch: It's a wonder they're not sitting on each other's laps.

Mark Smith: Oh no, they were. They were. You asked us not to, yes.

Andrew Welch: I am Groot.

Mark Smith: Always a perfect cuddle. You can see, it's the start of the year, folks. We're trying to get back into the groove of things. Ana, what were you saying? That you loved the Groot

Ana Demeny: mug.

Mark Smith: Oh, the Groot mug. Yes, the Groot mug, it's very cool, it's so sweet. I think, though, that he really should be drinking from between the eyeballs, right? The bit above it.

William Dorrington: I will. I don't know why, it just unnerves me.

Mark Smith: Drink off the here. It's like a weird bridge. It really bothered Will.

William Dorrington: He sucked on it, though you can hear him sucking on it.

Mark Smith: Yeah, well, you can see you've got to keep the suction to stop it dripping through the cracks.

William Dorrington: It's like chocolate coming out of a sippy cup.

Andrew Welch: This was a big problem for Will in the pre-recording.

Mark Smith: Mm-hmm. Yes.

Ana Demeny: I'm not surprised. We all read Chris's status message from today that he goes to bed at like 9 and he's like super happy to be a middle-aged man now.

Chris Huntingford: A MAWG. Middle-aged white guy. What's up?

Mark Smith: Is that right? Is that what you call a MAWG, eh?

Ana Demeny: Right.

Mark Smith: Awesome. Right, awesome. So today we're talking about AI, AI for 2024. Yeah, I think we're in for a big year, that's right.

Ana Demeny: And how are we preparing for artificial intelligence in 2024?

Mark Smith: That's a good question. Let's go around and ask that question. Ana, you first.

Ana Demeny: I think we're preparing for artificial intelligence in 2024 by, first of all, taking care of our data. We're consolidating data, we are making sure that we've got governance, that everything is ready for artificial intelligence. And then, frankly, we're using tools. Like, I've seen a lot of stuff going on on the Internet. There's a lot of quick wins, Andrew likes to call it. How do you call it, Andrew? High-impact, low-complexity stuff.

Andrew Welch: Those.

Ana Demeny: And yeah, that's what artificial intelligence looks like for me in 2024: being able to bring some value and getting people used to it.

Mark Smith: Awesome.

Ana Demeny: But honestly, the tooling does bring a lot of benefit a lot of value. You just need to know a little bit how to use it. That's all, and there's tons of information on the internet, chris.

Chris Huntingford: So I'm seeing it as this hugely focused optimization tool, right, like the end business productivity tool. A lot of orgs I'm chatting to now, there's less fear and more excitement, which is really cool. We're chatting to a legal firm at the moment and, yeah, obviously being careful, but they're pumped, right? Like, they're really excited to get stuck in, really excited to start using it, and not just Copilot, like actual AI across the network, right, which is super exciting. And it's not only one company I'm talking to, we're talking to a few. The other thing that I've started to find is that people are, yes, watching the news, yes, reading the social sphere, but they're less inclined to believe what people are saying. So I've started to find it actually quite interesting to see that it's not just about the fear factor anymore. People are actually getting really excited, and therefore I'm getting really excited. It's about productivity and optimization for me, I guess.

Mark Smith: Interesting Andrew.

Andrew Welch: I am preparing for AI in 2024. We're taking Cloud Lighthouse to the next level, right. So up until now, Cloud Lighthouse, we've been bringing you the Ecosystems podcast. There's a lot of content on there that we've shared. But starting now, Cloud Lighthouse is going to be serving clients, serving organizations, and helping CIOs, CDOs, leaders of organizations to make this transition from being, what I always say, a superintendent of a utility company, right, where you're really just trying to operate the IT infrastructure, to being strategic leaders of their organization. So Cloud Lighthouse is in business, and that is what I'm doing full-time now, working with folks on this.

William Dorrington: I like it. I like it, Mr D. Hello. So it's absolutely everything everybody has already said. It's from, you know, knowing data, making sure we have a healthy pool of data and we've got our data hygiene intact, all the way through to then looking at our governance layers of that, our regulatory layers of that, through to then, now we've got that, and we've probably got quite a good amount of BI behind that as well, we're finally going to get the intelligence we need. It's then, how do we add more value to it, how do we mine value from it, and that's where AI really comes in. But your question was, how are we getting ready for AI in 2024?

William Dorrington: I think, for one, we can answer that in a multifaceted way, which is not just about business. Sitting around the Christmas table, the amount of my partner's family that were talking about how they use ChatGPT and other AI productivity tools.

William Dorrington: It's just phenomenal. So, for one, we don't just have to answer it from a business view and how we're personally preparing the CIOs, the CTOs, CDOs, but actually the fact that we're seeing this leak inside, you know, our family and friendship circles as well. Just to cap that off, then, the three aspects of that mining value is exactly how I might divide it: adopting, so, you know, democratized AI through copilots and everything else; extending, so how do we actually extend that out and make it more contextual to our business; and then making your own, which is how you build your own copilot, your own large language models. With so much more advancement, from how we're doing prompt engineering to how we're actually optimizing large language models, I mean, the research coming out at the moment is just phenomenal, and I'm sure we'll touch on that a bit later with direct preference optimization and items such as that.

Mark Smith: I like it. I'm glad you went there, because I find that there's a lot of rhetoric around the theory of AI, and of course we're in the Microsoft camp and therefore we know Copilot is being forcibly fed to us in all different types of formats, and one Copilot doesn't mean, you know, all Copilots. And I'm really big myself on what I'm terming practical AI. How can I use it myself to increase what I know? Kind of like in the white paper that Andrew's just released, it would fit in the category of incremental AI, but at a personal level. So I'll give you an example.

Mark Smith: I was in Melbourne the other day, and there was a time delay before me going to the airport, and I would typically have just jumped in and, like, what can we do in the area? And why is everyone laughing?

Mark Smith: This is actually taken directly from the white paper that Andrew has written, and it gave a really good example of how practically you could use it on a daily basis. But what I find is that people, and I don't know if it's necessary, but there needs to be a defaulting to it, I think, making sure it's much more part of what we do all the time. Like when I walk into a room, I don't go, do I need to turn on the lights if it's dark? I go turn it on, right? It's there, it's ambient around me, like the use of energy that way. And I think that we need to, on purpose, till it becomes natural to a degree, start going, how can AI be infused into everything that I do, so that I get comfortable with it, that I can test it, I can validate it through my experience, not just abstract, you know, and how we talk to customers?

William Dorrington: But that's how we're seeing the drive of AI at the moment, which is like an iFaaF service, intelligent functionality as a feature. So the quickest way to get people to adopt it is exactly the way Microsoft's done it with Copilot, which is, let's make customer service leverage it, make sales leverage it, so it just becomes part of your day-to-day. It's triggered as soon as you go into a pane, and it starts reading contextually what you're doing and helping you immediately. Sorry, Andrew, the caffeine's kicked in.

Andrew Welch: I was going to add to that, that we celebrated Christmas with my mother. She came over to London, and somehow, I feel like AI is all my family wants to talk to me about right now. And our daughter and my mother asked me, you know, what are you using? How are you using AI, versus just Googling something? And I thought about it for a minute and I said, listen, I Google something when I want to know a piece of information, right? I want to know what the pound-to-euro exchange rate is right now. I chat with Copilot or with Bing or with ChatGPT or whatever your flavor is here, when I want to understand something, when I want the AI to do some of the lifting around assimilating information that I can go a little bit deeper on. So, for me, I'm now using traditional search to just know a quick fact, and I'm using AI to help me process a lot of information and assimilate it so that I can understand it a little bit better.

Ana Demeny: That also works if you're looking for something that has changed meaning, right? A lot of the products have changed names, which can be very, very confusing, especially if you're not in the field. But if you've heard about something that happened in, like, the data field or the AI field or the infrastructure field, whatever, but you've heard about it last year, it's very likely that Microsoft has changed the naming, because it's just Microsoft's hobby, really. That's just their hobby right now. So AI is really, like, super helpful with that. But AI is also very helpful when you just want to achieve something, like, very quick. For example, today I was trying to put together a good reasoning why I should change my British passport first, before my Romanian passport.

Ana Demeny: You know, because every embassy tells you, like, oh, you've got a different citizenship, let them do the hard work first. Exactly, make it their problem. So in order for you to be able to actually get that done, you need a lot of information. So AI helps you a lot to detangle the complicated information, as well as realize what current information meant for last year's you. That's super useful. That saves a lot of time.

Andrew Welch: The example that Mark was sharing earlier from the white paper, right? I think that was my big turning point for use of AI in my own life. And to finish that story, Ana and I were sitting in Melbourne, Australia. We had maybe six hours before we needed to go catch our flight. We wondered, well, what should we do with our day today? And I could have spent 45 minutes Googling things and put a plan together.

Andrew Welch: Instead, I pulled out Bing Chat, well, now it's been rebranded again, Copilot, but I pulled this thing out. I said, listen, my wife and I are in Melbourne, Australia. We're sitting having breakfast at this place. We need to be back here to get our bags from our hotel by six o'clock tonight. We enjoy, you know, kind of urban outdoors sort of spaces, architecture, that type of thing. What should we do? And Bing came back and said, you should do these things, and even put them in order to make it into a good walking tour. And within 20 seconds we had a plan for the day, and we reclaimed that 45 minutes back. So that was my big moment about, just do this.

Ana Demeny: That's exactly what happened. And all of that was, like, started by the fact that we planned our entire honeymoon based on AI, you know. So we've done all that.

Mark Smith: The entire honeymoon.

Andrew Welch: In fairness, we did ask Mark Smith what the best time of year was to go to poly.

Ana Demeny: Right, but like, honestly, so one day we're in Melbourne and Andrew's like, I'm going to look for something for us to do, and four and a half minutes later he's like, this is our trajectory for the day. And I'm like, please, you just used Copilot, come on, I know you 100%. You're incapable of doing that for us.

William Dorrington: It's a new level of plagiarization now, isn't it? It's not plagiarizing other people, it's plagiarizing bots and all sorts. It's brilliant, yeah.

Ana Demeny: Yeah, oh, please, I plagiarize everything, will? You must know this. I copy everything.

Chris Huntingford: I copy everything.

Andrew Welch: There is some... there's a developing story right now about, I believe it's the New York Times, taking issue with OpenAI for having trained its models.

William Dorrington: It's not just them as well, there's many.

Andrew Welch: There's many. Who else is in on this? What others?

William Dorrington: a couple I've seen I'll pull out, but it's because you know there's a load of artists also getting frustrated about it from absorbing their text. But what is your opinion on that, Andrew? You know you've always got a good, crafted opinion on these things. What would be your opinion on? You know, when we look at GPT generative, pre-trained transform the pre-trained element is so important for large language models, as we know, going through the nuances, et cetera, applying, you know, attention rates to them, et cetera. We need that public source.

Andrew Welch: Should they be allowed to train on it? I think that could be a topic in itself. Yeah, that is one that we might want to pick up for an episode, or have a guest who's really specializing in those sorts of legal issues. The legal and regulatory regime around, in this case, copyright really needs a fresh look, because if we deny this technology that information, that knowledge to be trained on, then we're really doing a huge disservice. We're doing a huge disservice to ourselves.

Mark Smith: But I think you're on point. I think you're on point. The rules of yesterday were not designed for the world of today. It doesn't mean that we go, no, we need to enforce yesterday's rules. You know, should horses have priority on all our roads because they had priority before the motor car, right? Should we stay in the dark ages and not move forward and progress as a society? I think those are the rules that need to be understood, because the risk is we do stay in the dark ages.

Mark Smith: We don't advance with this technology because somebody, you know, under the legally protected system of today, or based on yesterday's, is trying to apply it to the future. And my first thing was that, you know, the literature, everything that's been trained on, has, in my understanding, not been behind a paywall or behind a gated scenario, right? That's not the scenario we're seeing playing out. If you go back and look at the history of Google, and I remember reading a book on Google years ago, Google went out and bought massive libraries of content, like written books and things like that, owned the rights to them, et cetera, because they wanted it to train the future and the searchability of those assets, et cetera.

Mark Smith: Now, did they pay an incremental royalty back to the authors of those books? I doubt it, you know. I doubt that that was part of the mix, but that was, if you read the history, what went on. I just think that there's folks going, hey, let's make a quick buck off these big tech companies at the moment, you know. And America, being as litigious as it is, according to every movie I've seen, I assume that that's what's happening here. But I'm hoping that the lawmakers will see sense and go, you know what, these laws need to be recrafted for the future, as they so often do.

Andrew Welch: I mean, who, what class of people on earth sees sense more readily than a lawmaker? I think one simple thing that Microsoft does, that Bing and Copilot now do pretty well, right, is they provide a citation for you. You know, where have we sourced the knowledge that we've used to produce this information? And I actually find myself frequently clicking on the citation, not because I feel a need to verify it, but because I want to read more, right? So, to me, I'm finding that some of these AI tools actually drive me towards the source content in a pretty effective way.

Mark Smith: Yeah, what frustrates you about it, though? And I'll go first. It's already trying to police it too much, in my opinion. I take a photo of this mug, I upload it and say, can you please, you know, redraw that. And it goes, oh sorry, PII, we can't use a real image of a person. It doesn't say that exactly, I'm interpreting what it says.

Andrew Welch: It doesn't do, it, it won't do it.

Mark Smith: It won't do it. What I do is that I have to message Will Dorrington and go, what app did you use to do that? Because at least it looks like you, you know, dressed up as Ken and Barbie.

Mark Smith: Oh yeah, that was the thing, you know, right. I'm hating that limitation already, and Microsoft need to, because they're going to protect themselves legally. But what I do is, like, you know, I go, nah, I'm going to use Stability AI's models, you know, where it runs on my desktop, their large language model, and the tweaking in real time that you get. You know, if you want to go further than just the productionized apps that are out there, you've got, I think, to go to downloading them, getting them on your own machine, if you really want to create some creative stuff, and that's just to get around that kind of legal risk that the big corporations are worried about if they reproduce your face.

William Dorrington: You know, correct. We're covering multiple topics here, though, because one is around oversensitivity in European law thanks to GDPR.

William Dorrington: The other part is actually a lot of people making decisions on something that has spooked them because they're not aware of how it works. And that's the bit, I think, if people really sat down and understood how large language models work and the beauty behind them, if you really sat down and looked at how all the various networks get put together, the feedforward networks, the attention mechanisms, all the maths behind it, they'd realize, actually, yes, it is bloody clever, but it's nothing to be scared of. And I think that's part of this concern for me, where we've suddenly seen this emergence of "AI specialists", in quotations, which they are, but at a functional level, not actually understanding. I've got a friend who I'd love to bring on at some point, I'm sure we will, called Dr Paul Von Lone. He's a doctor of actuarial mathematics. He now heads up part of Amazon's exploratory data science area. But this stuff has been going on for years. It's just the fact that we've made it more public. But it is beautiful maths behind the scenes, and I think the moment we get a grip on that, the moment a lot of this will calm down.

William Dorrington: However, there are still arguments around what data we should use to train the models. And I think, you know, using a horse scenario or analogy here isn't actually appropriate to what we're saying around a person's business or IP, but I do think the betterment argument that Andrew hinted at is absolutely appropriate for the here and now. And you could argue, just to extend on your Google search part, that actually Google and search engines have been representing bodies of text on their page for a long time. You don't have to click on a Google link to understand what that article is about. They summarize it for you. So the argument should have been there a long time ago if there was concern.

Mark Smith: Yeah, very valid, very valid.

Chris Huntingford: So here's an interesting thing around legalities around tech. Okay, so go and look up a dude called Kevin Mitnick. Kevin Mitnick was one of the first ever hackers that was actually locked up for fraud. There's a movie about him, right?

Chris Huntingford: Yeah. And in the beginning, when he started committing the crimes, because it was based on Gallo-Roman law, they weren't actually able to convict him, right? It was a struggle, because, I guess, the way the legal rules worked didn't actually even match the crime. It's like trying to lock somebody up for committing something that's so far-fetched, they didn't know what to do. So they gave him 12 months, because they didn't actually have the definition of what cybercrime was. And what's happening now is that, even when you look at the legalities around AI in Europe and the stuff that's come out, I'll give you an example. One of the requirements is that you have to prove and evidence how the decision was made and how the contract was generated. So when you look at the GPTs, it's a black box. It's really hard to evidence a lot of that stuff, okay, it'll just be generic.

William Dorrington: It's like, this is how it works, that's all we can tell you. Yeah, it's the same.

Mark Smith: So the lineage. You're looking for the lineage of it.

Chris Huntingford: That's correct. And it's not like ML, right? With ML, you can see the path. This is different. So when they're asking the question, somebody will get pulled out on this, especially for things like ageism, all sorts of things like that. Ableism, all that. It's going to happen. And when I say it's going to happen, I don't think that companies are ready to actually show how it works. And when you look at what happened with GDPR, like subject access requests, that's the same shit, man. That was, show us your data. Now it's going to be, show us how you reasoned over this. Yeah, couldn't agree more.

William Dorrington: Yeah.

Ana Demeny: We're going to have some interesting scenarios coming up in the future. But from a corporate perspective, and using your own data, Microsoft does provide a way, or at least a diagram, of how it works broadly. This is how it works, and this is what people are interested in. I think, like, if you're working with corporate data and you're like, oh my God, all of that's gonna happen to me, actually, no, all of the data that we're gonna use for AI and copilots is gonna come from your own estate, and this is how it works. And I think, Andrew, you do have a part of that shown in your white paper as well, right?

Andrew Welch: Yeah, the paper and what you're talking about here, right. What we can absolutely do today is show how the technology works at a macro level. What I think we're going to see are increasing calls from regulators and, in some cases, from society in a less structured way, right, to understand not how does this work at a macro level, how does the technology work, but, quite pointedly, how did the AI reason its way to this particular response, to this particular prompt? And that, I think, is something that we really struggle with, at least today.

William Dorrington: And the weird answer to this is actually in the output itself, which is a load of development around prompt engineering, which, by the way, guys, is not just writing natural language into a model.

William Dorrington: There's a lot more science than just that. It is around looking at the reasoning aspect and getting it to detail that, so chain-of-thought commands, et cetera. It's just to try and bypass it. But my concern isn't actually Microsoft. Actually, if you look at Microsoft's history of investment, even before OpenAI, even before Elon Musk and the others got together to create that, they always invested in frameworks and models to look at responsible AI, because they knew that this would be coming and they knew they had to get on the front foot. My other one is more the emerging areas that we've seen over the last 10 years, around things like Hugging Face and other aspects where we can allow developers to go in, same with OpenAI to a point, but you can leverage Microsoft's privacy there now. But the developers are building this without putting that thought in. That's where other risk comes along, and I think that's where EU laws, UK laws and that will hinder us, because of that sensitivity.

Mark Smith: Yeah, and I think it might have been in Tools and Weapons, that Brad Smith book. He mentioned there that, you know, if a hostile country decides to disregard the law, or the ethics of war, sorry, war, not law, and they advance their AI to do something, they don't care that the EU has a standard or that the US has a standard, right, if they're competing or fighting against those countries. And so I think that we're definitely going to see... the risk is that you hamper, if you like, the white hat because you think you're going to stop the black hat from doing what they're going to do, and they're going to do it anyhow, law or no law, they're going to do it.

Chris Huntingford: We had this user group, this AI user group, recently in Bletchley Park, where the agreement was signed, and it was quite interesting, right, because this guy stood up and he was really upset and, like, complaining about this. And I said, look, dude, this is the human race, man. We'll turn a walking stick into a weapon, like, that's what we do. We're idiots, we mess things up. And unfortunately, because of that, they have to take, like, over-precautionary protocols. But you're right, you can't hinder growth because of the black hat. No, yeah, absolutely.

William Dorrington: You just can't. We'll always take something and make it better, and we'll always take something and make it worse. And we've just got to make sure that we're in the middle, trying to make sure one of those things can happen and one of them can't. It's just the way we work.

Mark Smith: Yeah. Well, it's been great talking to you guys. We'll see you on the next show, and we'll drill into a bit more detail around the new white paper released by Cloud Lighthouse and Andrew Welch. Thanks everyone, thanks everyone, thanks everyone. Hey, thanks for listening. I'm your host, Business Applications MVP Mark Smith, otherwise known as the nz365guy. If there's a guest you'd like to see on the show, please message me on LinkedIn. If you want to be a supporter of the show, please check out buymeacoffee.com/nz365guy. Stay safe out there and shoot for the stars.


Andrew Welch

Andrew Welch is a Microsoft MVP for Business Applications serving as Vice President and Director, Cloud Application Platform practice at HSO. His technical focus is on cloud technology in large global organizations and on adoption, management, governance, and scaled development with Power Platform. He’s the published author of the novel “Field Blends” and the forthcoming novel “Flickan”, co-author of the “Power Platform Adoption Framework”, and writer on topics such as “Power Platform in a Modern Data Platform Architecture”.


Chris Huntingford

Chris Huntingford is a geek and is proud to admit it! He is also a rather large, talkative South African who plays the drums, wears horrendous Hawaiian shirts, and has an affinity for engaging in as many social gatherings as humanly possible because, well… Chris wants to experience as much as possible and connect with as many different people as he can! He is, unapologetically, himself! His zest for interaction and collaboration has led to a fixation on community and an understanding that ANYTHING can be achieved by bringing people together in the right environment.


William Dorrington

William Dorrington is the Chief Technology Officer at Kerv Digital. He has been part of the Power Platform community since the platform's release and has evangelized it ever since – through doing this he has also earned the title of Microsoft MVP.


Ana Welch

Partner CTO and Senior Cloud Architect with Microsoft, Ana Demeny guides partners in creating their digital and app innovation, data, AI, and automation practices. In this role, she has built technical capabilities around Azure, Power Platform, Dynamics 365, and—most recently—Fabric, which have resulted in multi-million wins for partners in new practice areas. She applies this experience as a frequent speaker at technical conferences across Europe and the United States and as a collaborator with other cloud technology leaders on market-making topics such as enterprise architecture for cloud ecosystems, strategies to integrate business applications and the Azure data platform, and future-ready AI strategies. Most recently, she launched the “Ecosystems” podcast alongside Will Dorrington (CTO @ Kerv Digital), Andrew Welch (CTO @ HSO), Chris Huntingford (Low Code Lead @ ANS), and Mark Smith (Cloud Strategist @ IBM). Before joining Microsoft, she served as the Engineering Lead for strategic programs at Vanquis Bank in London, where she led teams driving technical transformation and navigating regulatory challenges across affordability, loans, and open banking domains. Her prior experience includes service as a senior technical consultant and engineer at Hitachi, FelineSoft, and Ipsos, among others.