Accelerate your career with the 90 Day Mentoring Challenge → Learn More

Get ready to rethink everything you know about AI governance

Get ready to rethink everything you know about AI governance
Ana Welch
Andrew Welch
Chris Huntingford
William Dorrington

Send me a Text Message here

FULL SHOW NOTES
https://podcast.nz365guy.com/597

Get ready to rethink everything you know about AI governance as we dive into the compelling world of chaos engineering in AI. Today's episode of the Ecosystem Show promises to challenge traditional mindsets and spotlight the innovative approaches necessary for managing AI-infused container apps and custom large language models (LLMs). Through real-world examples and ethical discussions, we tackle the complexities of governing non-deterministic AI solutions and emphasize the critical role of transparency and rigorous testing as dictated by the EU AI Act. We also grapple with the thorny issues of liability and accountability when AI-generated content goes awry.

In our next segment, we shine a spotlight on the indispensable role of good data and observability in AI and AIOps, with special insights from Alistair Pugin. You'll learn about the dangers of data exfiltration, the nuances between observability and monitoring, and the imperative for continuous improvement in an AI-driven landscape. As we navigate through the complexities of establishing AI standards in light of new regulations, like the European AI Act, we also delve into the evolving role of citizen developers and how they integrate into this fast-paced world of AI.

Finally, we explore the strategies individuals use to stay updated in the ever-evolving world of AI. From balancing busy jobs and personal lives to leveraging informal learning and engaging with knowledgeable colleagues, we uncover the various ways to keep pace with rapid technological advancements. We introduce "Napkin AI," a tool that transforms written content into visual diagrams, enhancing comprehension and presentations. Wrapping up, we emphasize the importance of retaining skilled low-code Power Platform professionals and dispel the misconception that citizen developers only need advanced Excel skills to succeed. Tune in for a demonstration of Napkin AI and insights into the exciting, yet complex, future of AI governance and collaboration.

Support the show (https://www.buymeacoffee.com/nz365guy

90 Day Mentoring Challenge  10% off code use MBAP at checkout https://ako.nz365guy.com

Support the show

If you want to get in touch with me, you can message me here on Linkedin.

Thanks for listening 🚀 - Mark Smith

Chapters

00:01 - Exploring Chaos Engineering in AI Governance

07:57 - Navigating AI Governance Standards and Training

15:09 - Navigating AI Learning Strategies

25:08 - Navigating AI Co-Pilot Chaos

34:53 - Revolutionizing AI Governance With LLM

Transcript

Mark Smith: Welcome to the Ecosystem Show. We're thrilled to have you with us here. We challenge traditional mindsets and explore innovative approaches to maximizing the value of your software estate. We don't expect you to agree with everything. Challenge us, share your thoughts and let's grow together. Now let's dive in. It's showtime. Okay, welcome back, beautiful people. We're in the house for another episode of the Ecosystem Show. We're excited to be here. We're back into the rhythm for season two. It's the three of us. Today.

Mark Smith: Will is once again gallivanting around the world like Will does. If you follow his Instagram feed, you will find it is just full of massive parties. And when I say parties, I'm talking about, you know, coachella level parties like full on. I don't know how he does it. I know how he stays awake, but he's in the prime of his youth, so it's kind of understandable. And then our deaf friend Andrew is in Bangkok. Now, as far as I know, he's not staying at the Bangkok Hilton probably not the best place to stay in Bangkok, but he's there all the same, working, working, working With that, mr Huntingford, what's on the agenda today? What's on the run sheet?

Chris Huntingford: Yeah, man, that's a good question. And actually Anna prompted this because anna and I after the last show um mark, you brought up the chaos engineering discussion, okay and obviously I was like blown away I thought it was the coolest term I've ever heard and then anna was, we had shared some stuff around like black hacking on Copilot and other things. I've had this thing in my head around when building out AI solutions. What is okay, because there's just no end to what can happen. This is Anna and I going back to Microsoft days type stuff.

Mark Smith: I love it.

Ana Welch: Yeah, absolutely. We got a bit obsessed, like obsessed, and we started researching and I fell in love, deeply in love, but I am married so I'm not going to pursue it with this guy. He does a lot of chaos, engineering on AI, and he has the best talks and demonstrations and I showed them to Chris and then Chris, since he only needs like three hours of sleep, he ran with it, really like, really ran with it so for those of you, just if you're listening to audio of this, we do this as a video as well, so therefore, chris is going to share some stuff on screen, so head over to YouTube.

Mark Smith: if you want to share some stuff on screen, so head over to YouTube. If you want to check that out on my channel, the NZ365 Guide channel, chris, the floor is yours.

Chris Huntingford: Thanks, man. Yeah, so I've had this in my mind for a while and I think it took that chaos engineering discussion to kind of start getting me to author this properly and I have authored it before internally, but I'm going to read it to you real quick. So I said I keep hearing that organizations want to build AI-infused container apps and custom LLMs, slash co-pilots or whatever. Okay, now I use the terminology container apps on purpose here, because those are effectively custom solutions that are built in Azure inside, like Kubernetes, structures and things like that. Okay, so I said that's all awesome and I love the creativity, but I've been thinking about the non-deterministic nature of GBTs versus the deterministic nature of automation as we understand it. This is going to be hard to govern and manage. Some loosely formulated thoughts as to why. Okay, I've also been reading about the EU AI Act, the link we'll provide, obviously, in the show notes. Okay, so if you're and I'm using the word app extremely loosely because people call them apps it's more than that. It's like the under the undercarriage, the interaction layer. So, just whenever I say the word app in the scenario because you know I hate the term, I'm just using it because people know what it is Okay, and you can also apply this to power platform solutions, microsoft Dynamics. So think about the wider stack. So if your app that you make is infused with GPT-AI functionality, this drives non-deterministic results. That's what GPTs do to an extent, if they accept the prompt that is grounded with data, and then security that guides the output. Okay. So just the example there to break it down, is that if I go into a GPT and I say, show me a packet of red tomatoes and there is no data in that, llm that talks about red tomatoes. Okay, that it's been trained on. It's not going to work right, and I've actually got evidence of this in another website.

Chris Huntingford: So how do you prove transparency? Because the EU AI legislation requires transparency. So if you build something, you have to be transparent about how you've red teamed it. So how do you provide transparency? What form of chaos engineering has been trialed? Has the solution been correctly red teamed? Red teamed is terminology that comes from the old hacking days, where what banks used to do is hire people and they used to ethically hack banks to find out vulnerabilities. So it's easy to prove transparency in a deterministic model like automation or process, because the solution has been built with preferred outputs. You know what to test for.

Chris Huntingford: So think about buying something from Amazon. You go in, you go and buy something. They know the pathway you're going to take. They know your user journey. So if I make a co-pilot and put it on someone's website and that generative AI so imagine this thing is grounded in whatever data gives me incorrect information and guidance because it isn't grounded in great data or the prompt isn't well thought out, who is liable? So if I go into a website and say, hey, I'm going to go and teach me how to steal a car and like, I don't know, bob the Cabbage Man's website has got an LLM sorry, he's got a co-pilot on it and it gives me the info. Who is liable for that? Okay, Like the Air Canada thing, they had to refund a bunch of flights. I would have gone straight back to the consultant and given them a hard time, okay. So here's the thing.

Chris Huntingford: Now, if I think about agentification and the combination of agents calling agents within a closed ecosystem, what if one of the agents isn't grounded correctly and hallucinates which is a user problem, by the way, this happens with users every day. What happens to the linked agents? I know this is a broken telephone conundrum in real life, but the proliferation of bad data could be pretty intense. In this scenario, I'm starting to see the real need for a deployment safety board, a DSB, and well-thought-out AI governance boards in all partners and customers. I think we need to be better at responsible AI. Okay, so, like I posted that after Anna and I went on our tirade about you know, black hat hacking and ethical hacking and things, and the comments are insane, like people are really getting into this man, and obviously one of the greatest comments is from Yuka, but I'll leave that for people to read.

Ana Welch: Yuka absolutely gave you the greatest comment and it was so realistic and it was so a thing that we may deal with very soon.

Mark Smith: I don't know. I was going to bring up Yuka because he's done a couple of posts over the last month around, like he's really cranked into Copilot and the risk profile that it raises, and so it's interesting because I saw your post, I read the actual post but I haven't got into the thread right of every people's inputs below, which is epic. I'm glad he's jumped on because I don't know if you've noticed about yuka but since he handed in his mvp them yeah, he's got gloves off, man he's got gloves off he's just calling shit the way shit is and I'm loving it.

Ana Welch: I'm here for it he's also, like, incredibly intelligent and very thorough, and I'm sure this doesn't happen to everybody, but I do need to read his post two or three times to really understand what's going on.

Chris Huntingford: It's incredible.

Chris Huntingford: I do too. There's another person who I really need you all to look out for, and that's a friend of mine from South Africa, alistair Pugin. Okay, so, al's an absolute legend, right, and he's also a straight shooter. But one of the things he called out long ago, long ago before all of this hit the fan was around grounding in good data, right, and what's interesting is he's put four points here. He says, bro, you're on the money we're seeing with customers today.

Chris Huntingford: Number one how reliable is the data you get back? So if you think about like a co-pilot just M365 co-pilot in your ecosystem and you have data exfiltration, what that means is that, anna, I'm going to email you a document. Okay, I'm not going to share a document, I'm going to email you, copy the document and you make changes. Okay, that is data exfiltration. It's bad, it's really bad, and this happens in legal firms and things. The second thing is that grounding from RAG how accurate is it? So, retrieval, augmented generation, what is the data coming back and how accurate is that data? Because if you've got bad data exfiltration, that RAG is going to be grossly incorrect. Those responses from RAG God rails to data security, that's obvious. And then continuous improvements and do you know who else called this out? It was Ioana, so Ioana Tanasso is doing I'm saying her name horribly wrong, I guess.

Ana Welch: You're doing good.

Chris Huntingford: She puts up with me, but she called out the concept of continuous improvement for RIE standards, and what that means is that if you release a system into an open or closed ecosystem, the concept of observability needs to happen in the background in order for you to know what's going on, and then you need to continuously improve and edit the model.

Ana Welch: But observability, not monitoring, two very different things.

Chris Huntingford: Correct? Yes, because observability is very different. Observability requires interaction, where monitoring may not Correct me if I'm wrong, anna.

Ana Welch: Yeah.

Chris Huntingford: Yeah, okay. So it's really interesting, right, and what this is turning into is a thing called AIOps, right, it's really interesting, right, and what this is turning into is a thing called AIOps, right. Then I noticed, if you go to the Microsoft certification website, this is wild, and my friend, david Graham, pointed this out to me. He's like dude, aiops is actually in the AI app specialization. I'll tell you the name now if you give me one second.

Chris Huntingford: So AIOps is in your digital app innovation and they call it the AZ-400, a DevOps engineer support, but that's being angled more onto that. So I'm seeing a shift in how this is being classified. So, anyway, that, all being aside, this whole RYTHIC, this whole deployment safety board which I call now a digital safety board because I believe it's wider than just deployments is going to be huge, and I feel like if you are a partner and if you are a customer, you should absolutely be looking at setting up AI standards in your business, like today, because the European AI Act went live on the 1st of August or the 4th of August, I can't remember.

Ana Welch: I would be curious to see how many audit firms or little consultancies were born on August 1st, when this regulation came out.

Mark Smith: Yeah, because surely some people knew about it.

Ana Welch: Surely there is somebody out there Because, okay, fine, the EU said this is the rule. What actually tells us that the EU knows how to check?

Chris Huntingford: But that's it. And can I tell you, I don't believe that consultancies, like risk consultancies, will actually go live with something that says you're right or wrong, and I'll tell you why. About liability once again, because if you go into a company and you do an audit on their RISE standards and they get fined that €35,000 or 7.5% of their revenue, then you are the one that is held liable because you did the audit right. So well, let me rephrase that If they get found out and you've done an audit and says they're okay Sorry, I should have said that in the beginning, apologies.

Mark Smith: Right, right, and you didn't yeah.

Ana Welch: I mean, it's true, but when you come out with an audit, you actually follow a set of rules and you're saying these are the set of rules that I'm going to follow. If you, a regulator, in the meantime decide to amend your rules which you may well do then I will change my product as well.

Ana Welch: So I do believe, actually, that there will be audit companies ready to do this, and I also believe and I thought about this this week in light of chaos, engineering and all of the things that are good and less good, that came out of artificial, that could come out of artificial intelligence the fact that we have a brand new wave of what we used to call citizen developers.

Chris Huntingford: Yes, hear me out, you're right.

Ana Welch: I'm not saying citizen developers necessarily do not. No, no, no use for heart attacks. But I'll give you an example. I've got a friend. She is pro dev, went to university with me. She worked in like Greek projects. She knows how to do full stack development and DevOps and Azure yeah, so like fully fledged. And right now she's saying do you know what this is like? Such hard work to keep you know up to date with my skills and everything.

Ana Welch: And also I feel like I wanted to see, I would like to see the bigger picture. I think that going into Power Platform would allow me to do this with my background. Show me some stuff. And then I then I was like, okay, what do I show this person? Because I will not insult her with app in a day, like it's just not happening. You know, yeah, like what do I do? So after a conversation I go into wait a second, the whole AI tracks may be useful for you. And all of a sudden I find like 17 learning paths on Microsoft Learn on what is AI, on grounding, on rag patterns, on prompt engineering, on, et cetera, et cetera. Do you not think that the way we have regulations for GDPR, let's just say, because that's the biggest out of the EU. We would have something similar and a learning path for what you're talking about, chris.

Mark Smith: So this is interesting because I thought you know I was saying before we jumped on here that I had a very sleepless night and I was thinking about this session coming up today. And one of my questions to both of you were since January this year to this point, what are you learning? Because we're on this massive accelerated track, I feel, with AI, and then you bring up that there's these learning paths that are there. So, anna, maybe you can just flick us and we'll get that in the show notes for this episode. Here's the path. So, people, if you're on Spotify or whatever, you can go click and follow these paths and get your training game on. But my question to both of you is what are you specifically learning yourself? And I'm not talking about learning for customers or learning for consulting, but what are you doing yourselves to stay at, to, kind of, I feel like, keep your head above water in a way, with so much that is coming?

Mark Smith: You know I, six months months ago, as I, you know I said to to folks in the community, you know five prompts a day. You should be doing a five minimum of five. I think I'm over a hundred on prompts a day. Nowadays, my prompts are sometimes um, multiple page prompts. I'm talking about full landscape for length prompts. They're like detailed as and I'm creating this whole library because I never like. Why? Why would you have a prompt library? Well, really, what I'm doing is, you know, I'm creating rag patterns, right? I'm injecting a whole bunch of data, as a you know, into the context window as a starting point for whatever I'm doing. So what are you guys doing? How are you and how do you find what you're going to learn on in this AI world?

Ana Welch: I'm going to start because Chris will have like a whole plethora of things. I definitely am not doing enough and I recognize that in myself. That's because I have a very busy job and everybody, I think, would empathize with me and that and it's really really easy to blame it on the job and like we travel a lot and often I'm home alone with my toddler. But overall I feel really really very tired. After 7 pm I feel like I cannot function anymore. So these are the things that I have to fight every day until I figure out what the hell's wrong with me. But in between, the only thing that I feel like I can do to help my brain who needs to understand what's going on on is go to the essence.

Ana Welch: If you've noticed, I've always been talking about data. What can we do with data? Why is data important? So I'm reading all these books that are more or less connected to AI to understand how other people think and how information that's out there may or may not influence you know AI results and other than that, my five to 10 to 20 prompts a day. Of course, that's a minimum and learning a little bit about you know how grounding works and how the whole thing works. That's the only thing that I do so far, but I do feel like a lot of information is out there right now. That wasn't there like three months ago even.

Mark Smith: Yeah.

Ana Welch: Like the very thing that we've warned everybody. What's going to happen has happened. These waves are just like crashing onto the shore every day, and I think it's going to get a lot easier for me to pick up more information. Yeah, but I'm sure Chris lives and breathes AI. Uses AI more than electricity.

Chris Huntingford: I don't think so, because I am really crummy at reading articles and things. Like I'm just I just suck at it, like I have to be really, really, really, really into it to actually read a full-on article, but like, so I've just found that you need speechify. Yeah, I do, I actually have it. Yeah, um, I have been randomly directed to a lot of content, but actually it's more down the roots of curiosity, and I talk to a lot of people daily and when I say a lot, I really do mean a lot of people that understand AI quite well. So today I was having a chat with a colleague called David Graham and he was walking me through containerization and how that works and how the Kubernetes structures work underneath and how you can get these little containers to talk to one another, and actually they've been doing identification in cloud and azure world for a lot longer than we have in co-pilot, right. So it's cool to learn. It's cool to learn like that, and what I've learned is that actually, the more I have these conversations like these, the more it forces me down the routes to go and learning.

Chris Huntingford: So people like Ioana, donna, dave, you guys, like there's a big crowd of folks who I literally talk to every day and they're just like, hey, read that article. And then I do and it sends me down the spiral of random learning. So it's never planned, but I do plan to actually go and do one of the Azure courses that's available. So if you look at the, I think it's the, the az204, the developer associates it's. I haven't written code in a long time but actually I'm so interested in how you would infuse ai into the apps, because once what I figured out about myself is once I know the tech, then I can disseminate that against real world and then build governance around it. I can't do governance without understanding the depth, and then that forces me into a learning pattern. That's a really confusing answer, but like it's the only way I can explain it.

Ana Welch: Sure, there's also an event that I have just signed on for, and I wish they would tell me the name of the event again.

Mark Smith: Just while Anna's looking for that event, I'm going to give you something that you are going to thank me for.

Chris Huntingford: Oh goody.

Mark Smith: Napkin as in. You like napkin at a restaurant to wipe up napkinai. Thank me later. Okay, that shit is gonna change the game for you, particularly in your presentations.

Mark Smith: Oh, that's something that I need in my life it's an ai that diagrams out whatever you've written and the diagrams are next level. Epic man, they are epic. So, like I take a blog post, I write, I give it, just it doesn't take the whole post, that you go this paragraph, give me an illustration, and it designs on the fly an illustration using ai for that paragraph, and of course, you know how visual. I just did a thing. What did I want to do? I wanted to um the difference between oh, hackathons and um innovation challenges, and so it did. This diagram hackathons versus innovate, and it just capsulated it. It is an epic diagramming tool that will take whatever input you give it and you know, particularly if you've written a paragraph or two and create a really intelligent diagram for it. Amazing, it is epic. It's not like five-finger type human drawing, that type of mid-journey experience. This is designed specifically and it builds awesome diagrams. Amazing. I will take payments later.

Chris Huntingford: I'm on it already. I'm on it already. I'm on it already.

Mark Smith: It's free, it's 100% free. There's, no, there's, no, it's in beta it's 100% free. It's freaking epic man. Yeah, see all of these tools.

Ana Welch: There's just so, so much, and I so admire people who can just go on all of these tools and and and imagine or like, just work with what they've got. I need to understand what's going on. So for that respect, you're gonna laugh, but what I'm gonna do is I'm gonna do a half-day workshop, one of those Microsoft workshops, and it's called build and extendend AI-Powered Copilots with Copilot Studio. I want to do that, you know, because I want to see how they do it.

Chris Huntingford: Yeah, yeah, you should and.

Ana Welch: I want to because normally the way because I've written and presented some of these courses as well, and Chris has also and they guide you down paths that you wouldn't normally go into and you feel like, why are you making me present this in this way? And it's because of the pipeline. Normally it's because you know, they know what's coming, yeah, yeah, so you should learning, you should learn it in this manner, and it's so useful, so I recommend it. And now I'm just going to share a link as well.

Mark Smith: Yeah, share the link. That is epic. You know what? I was talking to somebody this week in the product team and I think Microsoft have got over 100 co-pilots now and they've got this concept of co-pilot in where, or co-pilot on right, which is, you know, um, and it goes back to the build versus buy story. Do you want to just take a co-pilot off the shelf? You're looking at co-pilot for service or co-pilot for sales or something like that, right, and then you go into this. You're loving that, aren't you, chris?

Chris Huntingford: my dude this is so good, this is so good um, anyhow and so, but what I hear?

Mark Smith: what I hear is that the gold is co-pilot studio. That's, if you just could only invest in either co-pilot co-pilot studio yeah, or any of the pre-built or azure ai studio. Invest your time in co-pilot studio. That's where the goods are at and that's where I think you'll get the biggest bang for buck. That's my take on things right now.

Chris Huntingford: But dude, this is exactly what I was saying to. I won't mention companies' names, but I'm like look after your low-code people, because they are AI extensibility people now and they're like, oh no, it's cool, we'll just let them all get fired and go somewhere else or do their own thing. It is, in my opinion, it is the dumbest move to release good low-code power platform people from your business right now. If you do, okay, let me tell you something else Technical power platform people so not functional people who make Canvas apps. I'm talking like actual technical people who understand layers of the Azure stack. You can talk the M365 story. You can understand a bit more about how things work.

Mark Smith: Good functionals will know that man Good functionals will know that.

Chris Huntingford: Yeah, well, well, no, the quality of a functional consultant has deteriorated based on the fact that citizen devs have kind of arrived and you know, there's a whole story to be had I think.

Mark Smith: I think that's why I you know, I have a allergic reaction every time I hear the word citizen dev. Yeah, I think a citizen dev is an excel user. It's not. It's not. I'm building an app for the organization user, it's the excel. I just needed to do a calculation. That's your citizen dev. I think we've bought them too much across the, the, the, the line into oh you can build a business app. No, no, you can't. You're probably not going to that's. That's not a citizen dev, and I think a lot of people that call themselves citizen devs aren't right. They're much more around the functional analyst type person than a citizen dev.

Ana Welch: But I feel like many people believe just that. I think, including Chris and Chris please correct me if I'm wrong, if I misunderstood one of your sessions. Chris goes in and talks about governance and it's a really, really cool presentation, very entertaining, of course, but he talks about how. He talks about how we do not ask IT to look after our Excel, why do we ask IT to look after our Power Apps or our Power Platform. 100% beautiful analogy beautiful.

Mark Smith: There we go. Maybe that's where we are heading our Power Apps or our Power Platform.

Ana Welch: A hundred percent Beautiful analogy Beautiful, there we go. Maybe that's where we are heading Exactly. You know exactly like you're saying.

Chris Huntingford: You read into it perfectly.

Mark Smith: Back on track. What were you going to?

Chris Huntingford: say I don't remember. I've now gotten this tool in my hand. Dude, this is so freaking cool.

Mark Smith: Napkinai man, it is the goodies man.

Ana Welch: Like man it is. It is the goodies man like. Honestly, last last time we we talked about chaos engineering and now like this tool. This can only be surfaced at the end of a show, because otherwise we've lost chris you know it's so cool.

Mark Smith: Though it's so cool it's mega right, I tell you what it's just so cool massive consumption phase at the moment. Just consuming so much, trying so much, like I'm like you know what? I realize I'm not getting good at um prompting around image generation, and you see some people, what they're creating from images, oh my gosh, it's amazing. But once again it comes back. Really, I think they're all bad. Oh no, no, there's some stuff that you cannot tell whether it's a real human person or not, and it all comes down to the prompt. If you're a shitty prompter, you're going to get a shitty image and it's going to look like an AI image. If you're a good prompter, you're like where did you photograph that?

Chris Huntingford: She's going to show you my creation. Yeah, she's going to show you my creation. Yeah, she's going to show you my creation, please do. It's just so cool. I'm not going to show you the text, but I'm going to show you. No good, I mean, this is so cool.

Mark Smith: This is napkin dot, right? You've just done this on napkin since I said it.

Chris Huntingford: It's literally next level.

Ana Welch: So this I've run through a God.

Chris Huntingford: But the thing is so you run a thing called a flash right and then the flash scans through it, then you get these options and then you can kind of like dick about with the options.

Mark Smith: Yeah, you can go right in and change out any of those icons in there inside, what you just did then, but isn't it an epic tool, right? Isn't it such an epic tool?

Chris Huntingford: Bro, it's actually wild.

Ana Welch: Okay, Isn't it such an epic tool, Bro?

Chris Huntingford: it's actually wild Okay so then you, there we go, there's one so then, what do you do? You just hit the little flash button.

Mark Smith: Yeah, and it scans your text and then it goes here's a bunch of ways that we could visualize this, so it gives you about 15 different ways of visualizing it. Here they are on the left-hand side, yeah, and then, once you've selected how you want to visualize it, then you can select how you want to theme it. It's so cool and then you ultimately can go down and change any of the elements. Then, after it's produced, yeah.

Mark Smith: So up the top there there's a little purple button that allows you to download it as an SVG, a PDF or a PNG. Got it.

Chris Huntingford: I'm going to use my one f word for this podcast this is cool as fuck. Like, really, this is. This is so cool, honestly. Oh, this is so cool yeah, yeah I literally don't know what I was talking about.

Ana Welch: You shouldn't have given me the shiny thing we were talking about agents and about how they could, and just to. I know we're not gonna go through like yuka's post or like whatever, but his comment talks about how. Because in going back to chris's post, he's saying that once you have like two or more agents that you cannot really control, means that it's really hard to visualize or to observe, rather than monitor, what they're doing. So when they're getting biased information, aggressive information, wrong information in general, you don't know.

Ana Welch: You don't know, because they're going to be able to make decisions. And then Yuga comes and says Chris, you do not even need two or more agents. In fact, if somebody asks just normal co-pilot a question and that co-pilot hasn't been prompted correctly or has responded in a way that it shouldn't have, or it's just like wrong information, but this person has copy pasted it into like a document and then saved it on SharePoint, yeah, that's then human content.

Mark Smith: Yes, so that's what I was talking about.

Ana Welch: So, therefore, the agent can grow up to be whatever they want to be. And now take that, times 15 competent individuals in your organization and if your organization has like a thousand, for sure you have 50 people who are not going to check the response of Copilot and they are going to insert this information in either Excel spreadsheets or documents. And then you have a co-pilot agent writing a whole different story.

Chris Huntingford: But this is the point I was getting across in the beginning. It's not like the human error. Is security by obscurity right? So you don't have that break you might like. This is why shit like insider risk management exists in tools like purview, because the concept of data exfiltration is not just about like oh, the inside has got a bad, the inside has turned red and been bad.

Mark Smith: It's actually like people just make dumb mistakes because they don't know you accidentally send some an attachment to an email address because that was one that auto-completed, and you go send and oh fuck, that's not who I meant it for and it's got confidential data in it. Right purviews prevents that stuff from happening if you configure it right if it's been classified as not being allowed outside your firewalls or your domain structure or something like that this is the scary part.

Chris Huntingford: Right, because right, because what happens there is, it is proliferated through the automation layer in the agentic frameworks. So, like people, so Yuka's version is right, but like I took it to the next layer. So Yuka is exactly right in saying co-pilot will surface the information way quicker, but the human will never know. The agent will then never, never know. And then you've got automation wrapped up in this whole thing and then, like you've got agents turned bad and like I don't, I personally. So I drew a very, very can I show you this is hilarious. Actually, I'm going to share a screen. I'm going to share off a screen. I've been talking about this. I called Mark and Andrew from a certain place in Seattle and I was like dude, this is important. So when you have a group of agents this is what I was talking about it's called Agents Armageddon. So if these were people, this would take a lot longer.

Mark Smith: Yeah.

Chris Huntingford: Okay, but because they aren't necessarily people. The proliferation of data is way, way, way, way way more chaotic right, and then you effectively infect other agents. So what are we doing about governance?

Mark Smith: Yeah.

Chris Huntingford: Is my question.

Mark Smith: Like the speed of the stuff changes everything. I don't know. Did any of you see the Eric Schmidt video that got taken down this week, where you're speaking, I think, at Harvard or something like that? I can't remember the university.

Mark Smith: So Eric Schmidtmidt, you know, was the guy that probably made google as big as what google became. You know it's yeah, it's kind of like he took over from larry and sergey brin and really made it the mega company it was. And in that he said you know that agents is going to be such a big deal that you could go, you know what they're're going to shut down TikTok. He used this as an example and you could say listen, reproduce me TikTok, not with all the data, but the actual app that does that. And because the ability of multiple agents working together to kind of like, once you test a QA, that type of thing, it's going to fire back and say, hey, I don't like this, you need to rewrite this. But now that's happening almost at the speed of light, that kind of interaction that they can correct each other, and therefore you're looking at potentially years of iterations happening in hours yes, yes, exactly, and this is dude.

Chris Huntingford: This is the thing that I was trying. Do you remember when I phoned you from seattle when I was at microsoft? I was like this is the shit that's going to change everything. Yeah, okay, this is what I mean. I don't think people quite grasp the fact that you know, I'm going to reverse it to say that whole like containerization of applications and things like that on custom LLMs. That is just one thing. That is just one thing.

Chris Huntingford: But now democratizing it into a layer, like Anna put on the it's not a sit dev layer, but democratizing it into a layer where everyone can do this shit, now that changes it. Because at least when you're doing this inside the Azure framework and you've got somebody that is technical, they know the digital boundaries about where and not to do things right, things right. But when you're putting this in a layer where, like somebody who doesn't have tech experience or doesn't have governance experience, can use Copilot Studio that easily, do agents chaining that easily and start building out these things, and you have no governance, this is different. This is so different, yeah.

Ana Welch: I think that another phenomenon that we'll see is a set of organizations who are still on-prem. There is such a thing still. They're now going to say no man, we're just going to buy a bunch of servers, we're never putting our data in the cloud.

Chris Huntingford: Yeah, this is why I think on-prem cloud will make a comeback.

Ana Welch: And I think that's when on-prem cloud will make a comeback. On-prem cloud will make a comeback, and I think that's when on-prem cloud will make a comeback.

Chris Huntingford: On-prem cloud will make a comeback. I do think that people are going to shit themselves, try and ring fence their data from those layers of updates, and I'm not surprised. But at the same time, this is good. I don't know where this is going to, teeter, and I've really thought about this. Anna, I'm so glad you brought that up, because I've really really thought about this. Like is ai the era that sends everyone back to on-prem cloud or puts everyone in the cloud for sure because they want to, you know, be better than everyone else?

Mark Smith: I don't, yeah I think that you know where apple is going with their on device being able to execute llm on device-device.

Chris Huntingford: Yes, I'm sure.

Mark Smith: And you know, the latest Surface from Microsoft is really getting that way and I mean, we're in V1, right, and you'd think, well, it's going to be in just, probably two years it's going to be, you know, mount your FaceTime capabilities, where that will all be on device rather than in cloud. I think there'll always be an extension to the cloud, but, yeah, it's hard to say, right, people are going to get super worried about their data. One of the other things is that I think I heard in that Eric video. You know, people are worried about licensing, like when works of art and stuff are being used and how are they being compensated? And great model why don't they?

Mark Smith: Why don't we apply the um ai sorry, the spotify model or the netflix model to anything that you've created personal works, anything that you've created, whether it's you know, and, and if an llm uses it, it needs to compensate you for the creation. Like, if that's a bit of training data that you know, with blockchain, it wouldn't be hard to kind of um key everything that was created that was created by you know, living, breathing um intelligence, as in um that has blood in their veins that you could get compensated for all your great content, right and it would show which I think is lineage right. That whole, where did this? Where was its birthplace? What did it come with? What was in its DNA?

Chris Huntingford: at its starting point.

Mark Smith: Can we trace it? And blockchain has the ability to track lineage.

Chris Huntingford: That is true, I have not thought of that in the least.

Ana Welch: Yeah, I felt the same and how most of their before iPhone, after iPhones as well most of their revenue didn't actually come from devices, it came from music.

Chris Huntingford: It did. Yeah, yes, you're absolutely right.

Mark Smith: With that we're at time, we will leave it. Man, I look forward to doing these podcasts every time because I feel like I learned so much and me too, the speed of change is just phenomenal.

Mark Smith: it's exciting. Um yeah, if you're listening or watching, tell us what you want us to, you know to drill into, or and if you can add, you know, tools to the mix, like napkinai. If you've discovered something, share the love people. We want to hear about it, we want to see how epic it is, and if you've created something, even better. But with that, it's goodbye from me, anna Bye.

Ana Welch: Your turn, chris, say goodbye, bye. Sorry, I was reading something.

Chris Huntingford: The thing is, this napkin thing has killed me. You do this to me every time. It's like a wormhole.

Mark Smith: Thanks for tuning into the Ecosystem Show. We hope you found today's discussion insightful and thought-provoking, and maybe you had a laugh or two. Remember your feedback and challenges help us all grow, so don't hesitate to share your perspective. Stay connected with us for more innovative ideas and strategies to enhance your software estate. Until next time, keep pushing the boundaries and creating value. See you on the next episode.

Chris Huntingford Profile Photo

Chris Huntingford

Chris Huntingford is a geek and is proud to admit it! He is also a rather large, talkative South African who plays the drums, wears horrendous Hawaiian shirts, and has an affinity for engaging in as many social gatherings as humanly possible because, well… Chris wants to experience as much as possible and connect with as many different people as he can! He is, unapologetically, himself! His zest for interaction and collaboration has led to a fixation on community and an understanding that ANYTHING can be achieved by bringing people together in the right environment.

William Dorrington Profile Photo

William Dorrington

William Dorrington is the Chief Technology Officer at Kerv Digital. He has been part of the Power Platform community since the platform's release and has evangelized it ever since – through doing this he has also earned the title of Microsoft MVP.

Andrew Welch Profile Photo

Andrew Welch

Andrew Welch is a Microsoft MVP for Business Applications serving as Vice President and Director, Cloud Application Platform practice at HSO. His technical focus is on cloud technology in large global organizations and on adoption, management, governance, and scaled development with Power Platform. He’s the published author of the novel “Field Blends” and the forthcoming novel “Flickan”, co-author of the “Power Platform Adoption Framework”, and writer on topics such as “Power Platform in a Modern Data Platform Architecture”.

Ana Welch Profile Photo

Ana Welch

Partner CTO and Senior Cloud Architect with Microsoft, Ana Demeny guide partners in creating their digital and app innovation, data, AI, and automation practices. In this role, she has built technical capabilities around Azure, Power Platform, Dynamics 365, and—most recently—Fabric, which have resulted in multi-million wins for partners in new practice areas. She applies this experience as a frequent speaker at technical conferences across Europe and the United States and as a collaborator with other cloud technology leaders on market-making topics such as enterprise architecture for cloud ecosystems, strategies to integrate business applications and the Azure data platform, and future-ready AI strategies. Most recently, she launched the “Ecosystems” podcast alongside Will Dorrington (CTO @ Kerv Digital), Andrew Welch (CTO @ HSO), Chris Huntingford (Low Code Lead @ ANS), and Mark Smith (Cloud Strategist @ IBM). Before joining Microsoft, she served as the Engineering Lead for strategic programs at Vanquis Bank in London where she led teams driving technical transformation and navigating regulatory challenges across affordability, loans, and open banking domains. Her prior experience includes service as a senior technical consultant and engineer at Hitachi, FelineSoft, and Ipsos, among others.