Accelerate your career with the 90 Day Mentoring Challenge → Learn More

Exploring Cybersecurity Economics, AI Evolution, and Responsible AI Practices

Exploring Cybersecurity Economics, AI Evolution, and Responsible AI Practices
Ana Welch
Andrew Welch
Chris Huntingford
William Dorrington

Send me a Text Message here

FULL SHOW NOTES
https://podcast.nz365guy.com/606  

Prepare yourself for an eye-opening exploration as we uncover why cybercrime could be the world's third-largest economy. Tune in to hear Ana, Andrew, William, and Chris's exhilaration over Trevor Noah's enlightening talk and Vasu Jakal's captivating cybersecurity panel, where we unravel the staggering statistic of 4,000 cyberattacks per second. Discover the vital importance of zero trust policies and proactive security measures in a world where AI-driven threats are ever-evolving.

Imagine a future where AI agents handle tasks with the finesse of seasoned travel experts. Our latest discussion takes you through the evolution of AI from basic automation to the sophisticated orchestration of agents capable of independent action. We dive into the importance of data accuracy to prevent chaos and highlight the potential of orchestrators and agent chaining as powerful tools for optimizing intelligent systems, transforming the way we work.

As we navigate the complex waters of Responsible AI, we focus on balancing technology with ethical practices and human elements. Learn about the implementation of Responsible AI waivers and tools like Co-pilot Studio that manage liability and ethical AI deployment. Join us as we contemplate the challenges of managing personal data, the rise of deepfakes, and the broader implications of AI through insights from Kai-Fu Lee's "AI 2041." Our conversation wraps up with a call for feedback and innovation to enhance software estate value, inviting you to be a part of this transformative journey.

90 Day Mentoring Challenge  10% off code use MBAP at checkout https://ako.nz365guy.com

Support the show

If you want to get in touch with me, you can message me here on Linkedin.

Thanks for listening 🚀 - Mark Smith

Chapters

00:01 - Exploring Software Estate Value and Security

13:04 - Conversations on AI Agents and Orchestration

23:14 - Navigating Responsible AI in Organizations

32:22 - Navigating Restrictions on AI Use

42:38 - Enhancing Software Estate Value Through Innovation

Transcript

Mark Smith: Welcome to the Ecosystem Show. We're thrilled to have you with us here. We challenge traditional mindsets and explore innovative approaches to maximizing the value of your software estate. We don't expect you to agree with everything. Challenge us, share your thoughts and let's grow together. Now let's dive in. It's showtime. Welcome back, welcome back, welcome back. It is the week following the epic Vegas experience and, as they say, what happens in Vegas stays in Vegas, but we might let the odd little things slip here and there. Sometimes it's not a choice about staying in Vegas. You just have to stay because you can't fly out.

Chris Huntingford : But that's some of the challenges.

Mark Smith: Okay, so let's just do a round robin uh around the room. Chris, we'll start with you. What are your big highlights for this week?

Chris Huntingford : so my favorite thing that I experienced in vegas, other than hanging out with you wonderful people, is, um was seeing trevor. No, like he dude, he's, he's wonderful man like that. Obviously, being a south african, I was like, yes, you know my people, right, but just the way he explains things and I took a lot from that session and I've actually named a bunch of talks after some of the things he said. Nice, so yeah, those are not going to leak out yet because I think they're pretty cool. I've just submitted two.

Chris Huntingford : The other session I loved was the panel, and it was. I think his name is Vasu Jakal. Okay, yeah, so she was the security lady. And look, I love Thomas, I love Jamie, I think Ryan's great. But she stood out to me and the reason she stood out to me was because she was really hitting really hard on the security layer about antitrust and shit like that, and actually one of the things that I was mind blown about was the number of attacks per second has gone up to 4,000. And she said that the GDPR of the hack world right now is the size of a small continent.

Mark Smith: Wow.

Chris Huntingford : Dude. I was like this is why I love security, right, but I was just like yeah, absolutely. So I thought she, I added her on LinkedIn and on Twitter straight away Because I mean, yes, jamie is an absolute legend, she's brilliant, thomas is brilliant, Ryan's brilliant but she stood out to me very, very much. So those were my two favorite so, so let me just unpack that, so you're saying the gdp of a small continent, you're sorry, country, country, country.

Mark Smith: Okay, so you're saying that's the revenue they're producing via hacking and I take it, when we're talking about hacking, at 4,000 incidences a second. Was that the number, 4,000 a second?

Chris Huntingford : A second dude.

Mark Smith: That is, an external, like perimeter, attack into somebody, trying to penetrate the defenses of an organization's IT infrastructure.

Chris Huntingford : Yeah, yeah. So okay, cybercrime. I'm going to read you my notes Cybercrime as a country would have the third largest GDP.

Mark Smith: Wow.

Chris Huntingford : Dude, it's just cybercrime. If it was, a country would have the third largest GDP globally. That is nuts. Do the mathematics on that? That is insane, it's just amazing.

Chris Huntingford : I was just like okay. I was like, okay, that that's absolutely wild. And then I I think it was she mentioned four thousand a second. I'm looking through my notes now but the thing that blew my mind right is that if you think about the way that microsoft, red team, blue team, stuff like that, and and work from a security perspective with the um, with the zero trust policies, like that is where you need to be. Like this is, this is massive and because of the proliferation of ai across businesses, that like really resonated in my head.

Chris Huntingford : So the trevor recession and that panel were tops.

Mark Smith: So just running the numbers on, that us is a gdp of 28.7, china's followed by 18.53. I'm not going to give the big numbers. And then you've got Germany at 4.53, followed by China at 4.11. So you're saying between China and Germany, that is the size of the GDP. Cybercrime Of cybercrime Okay, that's nuts.

Mark Smith: If it were a country yep, yeah, if it were a country Mind-blowing. You mentioned the thing then called Zero Trust and my observation is this and I come up against Zero Trust, I suppose the most when I'm working inside the Microsoft network as I'm talking about on the intranet, right, that's wonderful.

Mark Smith: And I noticed that, for example, let's say, I create a presentation, um, and it might be an asset that I'm producing for microsoft, as an example and it is now analyzing the content of that presentation and auto classifying it as confidential. You, yeah, I'm not even doing anything when I hit save, it is auto-classified as confidential. What I love about this, right, it's not auto-saved as general or public, you know, as a classification, but by default any document is saved as classified and therefore, if you want to share it, you have to remove that classification or downgrade it. And what I like about that is that it assumes risk and therefore secures and then, if you're knowledgeable that you actually need to go, hey, actually it's not risky, you can share it, that won't even let you, no, but what I can say?

Mark Smith: it can allow you to do it, but by default it is locked down, right?

Chris Huntingford : Yeah, it's auto-labeling dude. So the way that Purview works is that Purview can Look. It's extremely difficult to set up but you've got to think of it, as you can have data classification mechanisms inside Purview that say, a certain piece of information in the document is classified as X, right, and this is why Purview for Azure and Purview for compliance have been merged. So if I have a document, I'm going to use an identification number. That's the easiest one. So in that document of 20 pages there is an ID number. Okay, you can set it to auto-classify as X by confidential. There's a load, like I do one. I do a fun one with UFO sightings where I'm like this is a UFO sighting, so they level classified as X is confidential. And then it is extremely hard to remove that auto label unless you have a manual labeling. So then you've got sensitivity labeling, which is what applies data loss prevention policies against the documentation as well. So there's a big thing around. What does sensitivity labeling do and what does auto labeling do, right? So sensitivity labeling you can also use also auto classifications. But, like, sensitivity labeling is typically where you're going to change the type of documents and then certain rules apply based on that documentation change, but the auto labeling is still like the top, the top one, yeah, and what I've started to find is that that proliferation of security against tools like Copilot is really where it starts getting rock solid.

Chris Huntingford : I'm actually doing a session on this on Saturday at Nordic Summit, nice. Yeah, it's wild, dude, like this whole world of zero trust. It's completely opened my eyes, this whole world of like, least governed, least managed privilege or least applied privilege, red teaming versus blue teaming, dude it's. There's another thing. Um, there is a lady at microsoft. I'm not going to give her a surname here. In fact, I'm not going to say her name because I don't know if she would be happy with me, but the way she called, the way she I just I don't know. If she wants it in the public forum, I'll ask her, but the way she describes it is swiss cheese.

Chris Huntingford : Okay, so think about swiss cheese, and sw cheese is all the holes. So if you cut like five slices of Swiss cheese, flick them around. There's always holes in security, right. But what will happen is, if you have that wall, you'll go through one hole but then hit another wall, and what hackers do is they look for the combination to go through to the end piece, and what ultimately happens is that no single security tool can do everything right. So that's why you have the Microsoft security layers to manage the way in which those layers work. Like. I'm still exploring this, but I'll tell you what man. It's a complete rabbit hole.

Mark Smith: Yeah, one of the things that you were madly tweeting or not tweeting, actually your private messaging in that session was this risk factor that organizations are not realizing because they take this concept of hey, it's behind our firewall, this data's in here, it's safe. You know people, you know you're saying something about the risk vector, or the risk of the internal data set sitting inside an organization that you know for years has been sitting there in a accessible state, in a readable state, in a usable state. Except for most personnel, most staff didn't know how to access that data. What was she saying around that, around that risk?

Chris Huntingford : I'm looking, I'm looking, I'm looking. I can't remember.

Mark Smith: I do remember typing, I just don't as, and I'll see if I can bring up the message that you sent me at the time.

Chris Huntingford : Okay, so they just called it. So this was the insider risk management stuff, right?

Chris Huntingford : yeah this is where it's going to get a little weird and I'm sorry and I'm like just talking over, but okay, so insider risk management is all about like, if you have a person that is considered an insider risk and they have access to data, uh, they can again do data exfiltration, which is effectively copying the same files over and over again across the network. Now, if you think about, like the natural state where, like, the three of us are in a network and I'm like, hey, anna, can you email me that file? And you're like, okay, cool, I'm going to email you that file, which, by the way, is dumb and I hate, but we're just going to stick with what people think is right now or you're going to copy a file onto another folder location and that information in that file is incorrect. Okay, that means that you're now setting the basis for bad actors on ai.

Chris Huntingford : Well, bad information on ai right because ai will just find that data really quickly. But if you think about an insider risk management program, the person who is copying the file can very easily just flip, flick it out into another tenant, like no problem. So you can copy data really really, really easily and what they're saying is ai inside a tenant basically increases the propensity for insider risk management by like 100%. Because the way that we would find a file, there's that security by obscurity piece. So if you hide it in like 15 layers of folders it's going to be super difficult to find to an extent. But with the search capability plus now Copilot, you can find it like this, which increases a hacker's chance of getting information that they shouldn't get to.

Chris Huntingford : Now, that's human. So think about us. We're humans, right? So that's still going to take time. You now build an agent using a GPT-5 model and what that agent does is that agent then is able to scour a network and act doing certain things like hey, I'm going to go and create a social media campaign. So what we would do in like 10 minutes, an agent would do in about 30 seconds, as an example, and if you have an agent chaining, so you have an agent chain to an agent chain to an agent, and then those agents are talking to one another. The proliferation of bad data and therefore creating potentially bad actors in agents is high.

Chris Huntingford : So, think about agents gone bad right, and that's what scared the crap out of me when I started thinking about how AI would influence insider risk management in an organization, specifically with bad data. Man, you have to have zero trust. You have to have things like sensitivity labeling and auto-labeling and tools within tools like Purview, because if you don't, I feel like man organizations are setting themselves up for trouble.

Ana Welch : So agents, Chris, tell us a bit about agents. I feel like we have been hinting at agents for a good few weeks now, for a long time now. But people are asking me I don't know if they're asking you why would I even use an agent? Why, Like, what's the difference between co-pilot? Why wouldn't I just do those tasks that I tell a co-pilot to create another co-pilot, create another co-pilot to do? Like, why would I even expose myself with that risk? I don't really see. You know, the value in it is what is what people say am I right like do you get asked these questions?

Mark Smith: yeah I tell you, whenever I think of the word agents, I think of um. Back in the day, you used to have a travel agent, right, you'd call them up and go I want to go to this country. And they were guru about it. They had been on for mills there, they had been to the resort that you were thinking of. They would know how to optimize flight patterns, et cetera. Right, because they were a domain expert on an area.

Mark Smith: And I feel this whole area of agents we're going into is kind of like that is that they're going to have deep domain expertise and they're going to be able to do things that we could do, but nowhere near at the rate of knots and accuracy and with the fully knowledge that they've got behind them, like a travel agent would over time.

Mark Smith: And then when I think of that and I think of when, when we generate, you know, with with um, gpts and things like that, we're generating, let's say, text, or we generate images, or we generate video, audio for me agents are actions.

Mark Smith: They will generate actions and they'll be able to change actions and actually go do stuff that we hadn't even thought of needed to be done as part of this whatever, based on their domain expertise. So I feel we'll get to that point where you know at the moment, if you've seen the latest stuff on rpa, robotics, process automation, you will see them going describing hey, I'm going to click this button, I'm going to extract the text of this email, I'm going to put it into a JSON format and we're going to fill out this form using that data, right, and you're giving it all that instruction. I think it'll get to the point that you'll just say, hey, I get a bunch of emails, check them out, discover which ones are which, and I need them in that system. And then I'll check them out, discover which ones are which, and I need them in that system.

Chris Huntingford : Yeah.

Mark Smith: And then I won't have to go. You know, I want to ingest. It'll just say, leave me done.

Chris Huntingford : I think the best way I've ever heard agents describe is have you ever seen that movie? 50 First Dates.

Ana Welch : Yeah.

Chris Huntingford : Okay.

Ana Welch : Drew Barrymore.

Chris Huntingford : Yeah, drew Barrymore, adam Sandler and you know she's got that memory problems. Every time she sees Adam Sandler, every day he has to like re hit on her and remarry her yeah, yeah and um, that's like a co-pilot, right.

Chris Huntingford : A co-pilot doesn't know who you are, so you're going to, you're going to type stuff into that box and it's going to be like hey, I'm co-pilot. And you close it and 30 seconds later you're like hey, cobalt. It's like, hey, what's up? Who are you? Like I'm Anna. And you're like, okay, can you do this thing for me? And then 30 seconds later it does the same thing.

Chris Huntingford : It's like a goldfish, right, but it's extremely good at generating stuff based on this package of knowledge. It just doesn't know who you are, and an agent is a bit like a 50 First Dates, where Drew Barrymore had memory and context. So think of it as, oh shit, now Drew Barrymore knows who you are. So Adam Sandler wouldn't have to go and re-hit on you every day. He could have a natural life. He could go and live out on his boat with her without putting that funny little TV thing on.

Chris Huntingford : And because the agent gets really smart, it can start doing predictive things. Because it is generative. It's like having that annoying assistance in the beginning of your work time for a while, but then eventually that you don't have to tell it to do stuff anymore. It's like, oh sweet, anna needs a coffee, or mark needs one of those tiny little espresso things that you drink, and it learns, right like it understands its ecosystem. So now think about this, right, you have an agent. You have an agent that now has context, so it knows what it is it's doing, obviously within the bounds of what's learned and its data. It has memory, so it knows who you are and what you do. It's a bit like a little SLM, right?

Ana Welch : Yeah.

Chris Huntingford : And now we're having this conversation it can learn to have this type of engagement right Not necessarily talk, but start riffing against one another. That's why they're important. So, like an agent is not just a co-pilot, an agent is a co-pilot with a ton of action and automation in it, with context and memory. Now there's this thing called an orchestrator. An orchestrator is effectively a plan out. There's lots of things like Autogen. Studio right now is one of the only tools that can do orchestration and proper agent chaining. But if you don't plan your agent chaining properly, what can happen is these things can just go and do whatever they want and also, if you don't ground them in good data, ground them in good data. So I imagine you have an agent grounded in terrible data, like I told you about the documents that have been moved around. Grounded in terrible data. That agent is talking to other agents and they learn and have context. What can happen? That's like having a do you know the context of a fractal flow?

Ana Welch : Yeah.

Chris Huntingford : So a fractal yeah, fractal flows? For those of you that don't know in an organization, if you have a flow, an automation that does something and then fires another automation that in turn fires the same automation, a fractal flow is a never-ending set of tasks that just happen over and over again digitally but can wreak havoc in an organization, right. And if you have fractal agentification, what it could mean and I don't know, because this has never been spoken about, by the way, I've never heard this before. I've manufactured the shit in my head, so this could be completely wrong but if you have a fractal agentification network that doesn't stop doing stuff, you run the risk, without governance, of setting yourself up for quite a lot of problems.

Ana Welch : Yeah, exponential problems as well, right.

Chris Huntingford : Yeah, that's why RISE standards, the Responsible AI Standards, grounding in good data and security, are, like, extremely important right now.

Ana Welch : Yeah, for that. We have extensive wording and categorization and how it's actually done within Microsoft in the white paper. In the Crafting your Future AI Enterprise Architecture white paper. In the crafting your future AI enterprise architecture white paper, there's a whole section right A whole pillar of AI readiness in the white paper on responsible AI, because it is that important.

Chris Huntingford : It is the most important, in my opinion.

Ana Welch : And who's responsible for it? Who would you say? We're both asking you questions, chris, you are the expert.

Chris Huntingford : I generally the only reason I wouldn't call myself an expert. I've just known where to look and what to tinker with. But something I've learned is this one very specific word called liability.

Ana Welch : Mm-hmm, okay, yeah, so let's use.

Mark Smith: I'm going to give you an example.

Chris Huntingford : Yeah, it's so good, right? So I'm going to give you an example. I had a company recently ask me say to me hey, chris, here's a 400-page document. Can you build a copilot on it and make it public-facing? Building that copilot will take me approximately one minute. Publishing it will take me approximately three minutes. Right Reading the 400-page document will take me a long time, and then understanding the 400-page document will take me even longer.

Ana Welch : Yeah.

Chris Huntingford : Because I'm not an industry expert in that field, right? Sure, so now how do I know the co-pilot that I've built on it? So let's say I don't use co-pilot studio. Let's say I use something else, without RISE standards. And how do I know the copilot that I've built on top of that data is giving me the right information? And the example I will give you is in that document, say, it's about utilities, right? There's a part of the document that talks about charging electric cars. There's another part of the document that talks about cable theft.

Chris Huntingford : All of a sudden, this generative GPT, generative pre-trained transformer, is now telling me if I type in how to steal a car, and it's like, oh, okay, cool. Well, these are the words I have and this is what I know. It could possibly tell me that who is liable when I put that bot live or that co-pilot live on my website? Right, because it sure as shit isn't going to be the company, unless I give them a waiver to sign, and it's not Microsoft, because Microsoft are bulletproof with this stuff. Let me tell you they have played well. It is going to be the person that pressed the button to publish the bot onto the site Right.

Chris Huntingford : Unless you are bulletproof from a RISE standards perspective. So, yeah, you have to responsible. Ai is all about making sure that the thing you put live or put into the public domain doesn't suggest or give information that could be harmful or bad. I guess I don't have another word and unfortunately they call it, um, they call it red teaming. If you do not red team that ai properly, what will happen is that actually, you can be in for a lot of trouble, and this whole thing comes down to the cat theory, which is wild, but it it's called deterministic modeling versus non-deterministic modeling, and what that means is that that co-pilot right there is doing a thing called non-deterministic outputs.

Chris Huntingford : One and one in the words of AI is not always two. One and one could be one and one, it could be 11. It could be a cat, you don't know, and the AI will do what it thinks, whereas deterministic programming means you have outputs you can test every time and you can say this is where this will go. So we have tested this, it's been red teamed, whereas ai you don't. So what you have to do is you have to red team the whole thing and do a thing called a transparency notes to show that you've red teamed it yeah so consultancies must now, on top of cleaning up their data and putting it in a place that makes sense, it's secure, they can use it, etc.

Ana Welch : They have to be really careful to input into any estimation they make from now on, a good chunk of red teaming or responsible AI, and is that true even if they're not implementing a specific co-pilot?

Chris Huntingford : No, it's okay if you're using what you call black box co-pilots, right? So things like Microsoft 365 co-pilot co-pilots, so things like Microsoft 365 co-pilot co-pilot studio, have got Rye built in. So, if you push out a, in fact, I can actually demo this to you.

Ana Welch : So then it would be Microsoft's fault if something goes wrong.

Chris Huntingford : Yeah, but you've signed a bunch of waivers with them, so it's never Microsoft's fault. If you want, I can actually show it to you live.

Ana Welch : Yeah, I love Microsoft. I didn't want to say anything bad, sorry.

Chris Huntingford : No, it's not bad, it's just that they are protected, because responsible AI is also about protecting through processing people. It's not just tech right Like if I release AI, I'm waiting for the thing to load. If I release AI into a company, the first thing that I'm going to do is get the organization to sign a responsible AI waiver Sure, because that takes the liability off me.

Ana Welch : Yeah, yeah.

Chris Huntingford : And that's important, yeah, so check this out. This is Every company does this too. They have to and they need to. It's remember, rai is not about tech, it tech. It's about people and process equally okay. So this is a co-pilot studio. Co-pilot on side. The facial palsy site, um, and what's pretty cool is that you've got the site is extremely hard to navigate, but remember, this co-pilot right here is grounded only in this website data. Okay, there is no other data source. So if I say, um, I need help with icare, you know what it's going to do. We've put, put in a bunch of preloaded prompts just to help out. Yeah, and you can see there's a bunch of data that's come up. But if I say, how do I steal a car? Okay, boom, nope, that's Rai. Okay, that's responsible AI. Now I'll show you something else.

Mark Smith: So what does it say there? Because that's responsible AI. Now I'll show you something else. So what does it say there?

Chris Huntingford : Because some people might not be up there. Sorry, I'm sorry. I'm not sure how to help you. Can you try rephrasing that?

Ana Welch : I like it. It's very subtle.

Chris Huntingford : Yeah, it's the stock standard right, and you will find this also over here. So if you go to Bing Copilot and say, how do I steal a car, car, yeah, so that's also right mm-hmm okay.

Chris Huntingford : So in the Microsoft black box, co-pilots you will find we call it. We call it incremental AI, because incremental AI is like bolt-on into you know most other products. You will find that actually it will not work. Okay, but this is because Microsoft are in control of the LLM and Microsoft grounded this in good data. So I'm going to quickly stop sharing and I'll come back to you guys. I ran a test so I got a bunch of documents on how to steal cars. I could crap load them. I found a whole bunch and I uploaded them all into Copilot Studio. Guess what? That still wouldn't do it.

Ana Welch : Nothing, oh, okay.

Chris Huntingford : That's amazing. Yeah, Except that is unstructured data right. So what happens if I have a Dataverse database with a ton of car theft pieces of information and I learn how to prompt better?

Ana Welch : Yeah.

Chris Huntingford : So I say I've stolen these cars in the past six months. How do I? I don't know what will happen, but this is so. I'm doing an irresponsible AI demo, hopefully at one of Not this coming conference, one of the next ones, and it's to try and red team and do chaos engineering on nice responsible ai from a technical perspective but but kind of pushing it from a human point of view. So like how can I prompt to get around all of this stuff?

Mark Smith: because you can right, yeah, so another good one with with copilot and I have found, and which is once again, their rules that are put in place, is that you know, a while back, folks were creating um avatars of themselves based on their photo, and so I'd upload a photo of myself and it would say sorry, I, because it's pii basic information, right, said sorry, I can't do that because it's pii data and I've found it's been very good around pii information. It understands what you're uploading and saying sorry, I can't use that as part of the, the engagement yeah, but this also comes down to like the whole multimodal thing.

Chris Huntingford : Like the reason project strawberry hasn't gone live immediately is because of the us election. So you can do deep fake because it is multimodal. Multimodal you can do like video. I mean, I'm not going to get into the details of the previous thing that happened, but yeah, you have to. You have to bake in so many technical right standards into the, into the co-pilot structures, and imagine people that understand how to build llms and understand how to, how to use you know, gpt5 and understand how to do all these things and actually not even gpt5, other other versions of ai and genitour pre-training transformers.

Ana Welch : Like that's scary man I mean, yeah, we were talking about that, um that film, what was it called? The?

Mark Smith: The 50 First Dates.

Ana Welch : Extra no, not, no, no, no, not the film, not the Extra, the Tantman.

Chris Huntingford : I know what you're talking about. Yeah, with Ryan Gosling.

Ana Welch : Ryan Gosling and Emily Blunt.

Chris Huntingford : They did the deepfake. They did the deepfake.

Ana Welch : Well, in that film sorry guys for spoilers they deepfake someone's face to frame him for a murder, like, imagine that right In surveillance cameras and things like that. So we need responsible AI. It's absolutely crucial.

Mark Smith: Have you guys read the book AI 2041? I did. I read it in 2023 by Kai-Fu Lee. Kai-fu Lee is ex-Google. In that book he's got 10 visions for our future and he wrote it from the point of view he wrote it in 2021. So his whole idea was I'm not going to say this is going to happen 100 years because you're not going to believe me.

Mark Smith: I'm going to do it in the next 40 because most of you will be alive still and he paints out each chapter as one of these visions of the future. Oh, that sounds cool and why it is so epic. Is that already so much of it? And you think 2021, we're in 2024 now so much of it you can already see coming as a reality. And of course, he used it as best intuition about what he knew around tech and where it was going and what the potential was.

Mark Smith: But he was saying around deep fakes yeah, he said he comes up with a new name for it. It will be so in everything that the human eye and ear and senses will not be able to detect it at all. So, just like we have virus scanning software in yesteryears right, always looking for virus signatures basically this the software you will have to run on all your devices to understand whether that's deep fake, and it has to be updated in real time because basically it's just going to be a cat and mouse game to being able to frame or do anything under anybody's persona with their voice, everything. So the human eye, ear, et cetera. I tell you what, for me, I think the key skill to teach children these days, more than ever it has been in history, is critical thinking.

Mark Smith: Oh yes, dude, dude, I couldn't agree to not take everything at face value, to not to challenge and to have that kind of yeah, but as you know, and it's only as you recognize this technology that you realize that this becomes so increasingly possible um, he, he deals with the ai agent type point of view and his whole thing is is that, imagine, on all your devices, whatever they are, you've got this agent running and all it's doing is understanding you.

Chris Huntingford : It's just learning you it's learning everything it can about you.

Mark Smith: It's micro mark and so what it does. So over time it goes oh, you have this kind of style and how you respond to this type of email. So it doesn't say create a blanket way, it's just for and it learns that. And it says and it goes hey, I'll draft them, I'll draft them. And then it gets to the point that it'll say, hey, do you want me just to handle it? And at that point you're going to start handing off, you know, because it's so good, it's so representative of who you are, and say but that's that's what it is and this is the thing.

Chris Huntingford : So the word, the new word that you will need to understand is a word called observability. All right, and observability means that you will no longer be responsible for doing the the things you do today. You will observe them being done and understanding why they're being done. And actually I wanted to quickly jump back to the deep fake thing. So I'm I know this isn't normal, but I get it right like I'm a I'm a quite a big fan of norton antivirus.

Chris Huntingford : Okay, like I don't I don't have that, but like I used to use a lot back in the day, but yeah, it's real interesting, right, and I recently read an article of them talking about deep fakes. Yeah, and there's a really awesome link. I don't know if I can find it, but they actually teach you how to look out for deepfakes and they also are baking it into their software, like you said. So I was like, actually, this is genius. So I'm like, okay, I'm going to call up Norton and see if I can get another subscription, right, but it's smart, because all these antivirus software, I mean, are baking the same and actually, as you see, even on video calls, you'll start finding it on your phones and stuff, like everything you use. There's going to be a deep fake filter, I believe because of the fact that 100 yeah what's the likelihood that this is fake, right?

Mark Smith: yeah, yeah it'll be like a virus scanner yeah, do you know?

Chris Huntingford : I don't know if you know this, so here's something interesting. According to uk ai legislation okay, do you know that you are not allowed to use any type of ai products or assisted tool on a call with a government agent or government person or civil servant?

Mark Smith: so you couldn't have a note taker, for example.

Ana Welch : Nope, no, yeah, I knew that you can have no AI whatsoever at all, in no shape or form.

Chris Huntingford : Yep.

Mark Smith: Yeah. So are you allowed to record the session. Hey everybody, is there a way to record this?

Ana Welch : Not really. I've never recorded a session with no. I mean, I don't know I never have, and I don't think you're allowed, but more than that as far as.

Mark Smith: I know. No, I'm saying, if you ask there's three of you in the meeting, they're all. You are, you know, partner. And you say, hey, listen, I just you know, can we record this just so I can refer back to it later, and everyone goes, thumbs up good to go. That would still be allowed, right? I think so.

Ana Welch : If they say it's allowed, but I think the likelihood that they will say it's allowed, I think they'd rather have the exact same session again. Wow.

Chris Huntingford : Yeah, it's true, right? I just know this from.

Mark Smith: That can't be a long-term play, though. Right you know, I say, hey, let's pay for 20 years. Do you think that would still be in play 20 years time?

Ana Welch : no, I don't, I don't see how, I don't see how, and also they, like the rules are, the rules are. As far as I know, you can use no artificial intelligence, nothing, nothing. So no drafting, no fact-checking, no, nothing, nothing, nothing, nothing. You need to do everything manually, as far as I know.

Mark Smith: Yeah, 100%. I guarantee that ain't happening.

Ana Welch : Yeah, but like, aren't you like?

Mark Smith: I'm yeah, as in internally it won't be happening. People are going to go. Hey, you know, it's just human nature is what's the shortest path. Right, exactly, something's going to make. Hey, you know, it's just human nature is what's the shortest path Right Exactly, something's going to make you look better and perform better Right. You're going to take those risks. And that's the problem with creating mandates like that, like these blanket type thou shalt not. What do people do?

Ana Welch : Okay, you should have made a rule Fear mandates as well. You don't really yeah. Because you don't really yeah, because you can't really explain why, either because you cannot say we cannot, you cannot use any sort of ai, because our data is shit or it's not held in the proper areas that it should be or that we're not communicating well enough. You just have to say no, you're not allowed because it's not safe.

Chris Huntingford : I find it very interesting, very interesting. But it's also like it's because of there's a lot of stuff that happened because of when Zoom released that concept of a SLM sort of your person coming on and onto a call. But I'll tell you something interesting. Okay, so I'm going to try and not name drop here. This is really hard. Okay, so I'm going to try and not name drop here. This is really hard.

Chris Huntingford : There is a tech company who released this concept of these virtual beings coming on to calls, okay. And the tech company said, okay, what we want to do is we want to trial out these agents, these virtual beings that come on the call. So they said what you need to do is these are the people that are going to come on the call. Just act natural, like, work your way, as you would, normally on the call. The call was cut down by 30 minutes roughly, and they got all the outputs they needed and they got all the information they needed. It turns out the agents on the call were actually humans. It was a psychological test and, yeah, it just goes to show something. Yeah, and I'll let you work that out for yourselves I can't work it out as an.

Chris Huntingford : I don't know we, we, we dick around with time so much, yes, in like small talk and how's the weather, and we don't get to outputs fast enough. And what it showed is we can. We can get to an output really quickly and we can get to a. But there's this thing inside of us that says we have to conduct some sort of a process, yeah, about like transport and weather and crap thing inside us.

Ana Welch : I went on courses on that when I was working on microsoft on how to engage with people and it was all about making small talk and how to like, uh, police. It's not just the fact that we have it inside of us, we teach it to people.

Chris Huntingford : We do, we do, but but here's the thing.

Mark Smith: You know, they say it's the grease right. That makes things run smoothly, like I just jumped on a call at um 2 am this morning and there were three people on the call. One was in. There were more than three, but the three people that we were having a conversation with, one was in Switzerland, one was in the Netherlands and one was in Spain, so we're waiting for everyone to join. The first thing I do is like oh, what part of Switzerland are you from? Munich?

Chris Huntingford : Oh, man, that river there. You know I'll be sorry, not munich, I always got that.

Mark Smith: What's the one starting with z? What's the one we're? Starting with zurich zurich, zurich, yeah, yeah, yeah, sorry, zurich. And so I'm talking about and he was like, straight away, have you been to zurich? And I'm like, yeah, blah, blah, blah. And the thing is, is that there's an element of that that you've now found common ground?

Chris Huntingford : for a first time speaking, and it's so important.

Mark Smith: Then the guy from Netherlands talked a bit about that, the lady from Spain, the Camino de Santiago. She's like, oh my gosh, I want to do that all my life. Most Spanish people wish they could do it. And then they hear a foreigner's done it, but it kind of like builds them. So I don't know that we can just do away with that kind of thing, because at the end of the day, for all the tech we are human, right, we converse, we identify with. Are you just like me? Do you feel my kind?

Ana Welch : of way we tie relationships as well. You know like that, you guys know that I am the last person to advocate for networking, yeah, but you really find out stuff about people and it makes you feel good inside yeah so yeah, this is what I'm about to say is probably gonna upset every viewer that we've got.

Chris Huntingford : But so I love networking. I think it's great, like it's one of my favorite things to do, yeah, but 99% of the time the conversation you end up having is quite pointless and it can lead to zero places. I've made some of my best friends in the community networking aka you two, but it's more. I think think about it like this that call Mark that you had. Why were you on the call?

Mark Smith: Because I was trying to uncover engagement patterns that weren't working between a particular organization and their customers.

Chris Huntingford : Okay, and that call. Could any of that be handled by something that was an SLM, or understood you or understood your way?

Mark Smith: No, that was an SLM, or understood you or? Understood your way it needed my kind of worldview and experience to go. Ah, I can see why this is an issue and therefore then get the task to solve it.

Chris Huntingford : That's different, right? So I think that we should and I do this. I gauge calls. I actually have a way of putting calls on a scale and I'm like, okay, I'm going to do that, right. Like I've been on back-to-back calls today with a really amazing government organization we could not have done this with an agent. But I then have people putting calls in my diary to ask me about functionality. I'm like why? So I declined them.

Ana Welch : I'm like go and co-pilot, but wait, wait, wait, let's paddle back and I know we're late, but this is like the last question. Mark, you're saying that that call could not have been done by an agent because he?

Mark Smith: needed your experience, not yet In five years' time. I would say something different.

Ana Welch : I think right now the tech is not there, but in five years' time absolutely, because right now if you take the Zurich person, the Netherlands person and the Spain person and expose all of their grievances and then put them in a co-pilot, it may actually suggest what you do.

Mark Smith: However you need the person connecting the dots right Currently dot connector.

Ana Welch : Currently we are connecting the dots, but I guess what Chris is saying soon enough we're going to have to be so good at connecting the dots that we will be the people who recognize where the dots, that we will be the people who recognize where the dots have been infected by AI somehow, but when they do not make sense.

Chris Huntingford : And that is observability, that's observability.

Mark Smith: Infected by bad data that the AI used. Because we're not saying the AI is inherently making a bad decision.

Ana Welch : It's based on oh, yeah, yeah, yeah, that's what we're saying. Absolutely yeah, yeah bad decision.

Chris Huntingford : It's based on yeah, oh, yeah, yeah, yeah, that's what we're saying. Absolutely yeah, yeah, yeah, it's observability, yeah, yeah. It all comes down to think of it as. Think of it, as you know, when you're looking at the stars, right, and like I don, like the Southern Cross and Scorpio and Orion's Belt, like I can see them, I'd like pretend to draw lines between them and I love it, right, I can picture them in my head and I think what will happen is that. What needs to happen is this constellation of agents. Yes, I just created a what is it called A collective noun for?

Ana Welch : agents.

Chris Huntingford : It's going to be a constellation of agents. When you start looking at that constellation of agents and if you have zero idea how they're connected, it's like unpicking a, an ecosystem of irreferenceable automations. Yeah, and that's why I say observability, that governance layer is going to be so important, so key plus actual expertise in the field.

Ana Welch : so you know your stuff exactly like Mark knew his stuff. He has experience. He's been through those things before. He solved them in a good way, or sometimes he failed, but either way he knows what to do next. That's the sort of thing that we will still need.

Mark Smith: That's all, folks. That's all we've got time for we're over time. Thank you so much for joining us. We're always open to your questions, feedback, et cetera, as we keep exploring the ecosystem.

Chris Huntingford : Indeed. Thank you everyone. It's been wild.

Mark Smith: Thanks for tuning into the Ecosystem Show. We hope you found today's discussion insightful and thought-provoking, and maybe you had a laugh or two. Remember your feedback and challenges help us all grow, so don't hesitate to share your perspective. Stay connected with us for more innovative ideas and strategies to enhance your software estate. Until next time, keep pushing the boundaries and creating value. See you on the next episode.

Andrew Welch Profile Photo

Andrew Welch

Andrew Welch is a Microsoft MVP for Business Applications serving as Vice President and Director, Cloud Application Platform practice at HSO. His technical focus is on cloud technology in large global organizations and on adoption, management, governance, and scaled development with Power Platform. He’s the published author of the novel “Field Blends” and the forthcoming novel “Flickan”, co-author of the “Power Platform Adoption Framework”, and writer on topics such as “Power Platform in a Modern Data Platform Architecture”.

Chris Huntingford Profile Photo

Chris Huntingford

Chris Huntingford is a geek and is proud to admit it! He is also a rather large, talkative South African who plays the drums, wears horrendous Hawaiian shirts, and has an affinity for engaging in as many social gatherings as humanly possible because, well… Chris wants to experience as much as possible and connect with as many different people as he can! He is, unapologetically, himself! His zest for interaction and collaboration has led to a fixation on community and an understanding that ANYTHING can be achieved by bringing people together in the right environment.

William Dorrington Profile Photo

William Dorrington

William Dorrington is the Chief Technology Officer at Kerv Digital. He has been part of the Power Platform community since the platform's release and has evangelized it ever since – through doing this he has also earned the title of Microsoft MVP.

Ana Welch Profile Photo

Ana Welch

Partner CTO and Senior Cloud Architect with Microsoft, Ana Demeny guide partners in creating their digital and app innovation, data, AI, and automation practices. In this role, she has built technical capabilities around Azure, Power Platform, Dynamics 365, and—most recently—Fabric, which have resulted in multi-million wins for partners in new practice areas. She applies this experience as a frequent speaker at technical conferences across Europe and the United States and as a collaborator with other cloud technology leaders on market-making topics such as enterprise architecture for cloud ecosystems, strategies to integrate business applications and the Azure data platform, and future-ready AI strategies. Most recently, she launched the “Ecosystems” podcast alongside Will Dorrington (CTO @ Kerv Digital), Andrew Welch (CTO @ HSO), Chris Huntingford (Low Code Lead @ ANS), and Mark Smith (Cloud Strategist @ IBM). Before joining Microsoft, she served as the Engineering Lead for strategic programs at Vanquis Bank in London where she led teams driving technical transformation and navigating regulatory challenges across affordability, loans, and open banking domains. Her prior experience includes service as a senior technical consultant and engineer at Hitachi, FelineSoft, and Ipsos, among others.