Data, Ethics, and the Future of AI in Europe
Ana Welch
Andrew Welch
Chris Huntingford
William Dorrington
FULL SHOW NOTES
https://podcast.nz365guy.com/618
Chris Huntingford joins us on the Ecosystem Show to dissect the rapidly evolving landscape of AI regulation in Europe, focusing on the anticipated launch of DSBs in February 2025. We promise you a deep understanding of the implications for high-risk AI solutions and how EU scrutiny might reshape the industry. From distinguishing between rules-based and autonomous AI agents to exploring Microsoft's latest offerings, we shine a light on AI's integration into the workforce and the crucial need for reliable data to avoid unintended consequences. Our discussion isn't just about technology; it's a thoughtful examination of AI's role in transforming how we work and live.
As we turn our attention to the future of Power Automate, we envision a world where intelligent agents operate in the background, enhancing productivity while maintaining the essential human touch. Ethical concerns surrounding digital equity and the traditional economic model challenge us to consider a more inclusive future. We also explore a future where humanoid robots efficiently maintain urban spaces, raising important questions about manufacturing and control. Join us as we emphasize the importance of embracing digital equity and adapting to technological change, while regulatory frameworks like the EU's AI regulations guide our global trajectory. This episode is a compelling blend of caution and opportunity, urging you to ride the wave of innovation wisely.
90 Day Mentoring Challenge 10% off code use MBAP at checkout https://ako.nz365guy.com
If you want to get in touch with me, you can message me here on Linkedin.
Thanks for listening 🚀 - Mark Smith
00:01 - Exploring AI and Autonomous Agents
09:35 - Future of Power Automate and Agents
17:51 - Navigating the Future of AI
Mark Smith: Welcome to the Ecosystem Show. We're thrilled to have you with us here. We challenge traditional mindsets and explore innovative approaches to maximizing the value of your software estate. We don't expect you to agree with everything. Challenge us, share your thoughts and let's grow together. Now let's dive in. It's showtime. Welcome back, welcome back. As you can see, it's no longer just me this time. I'm back with Mr Chris Huntingford in the house. Chris, I was actually meant to tell you a bit of news before we started the show, and I don't know whether I can say it now because it was a private message to me, literally just before you talked to me. But here's what I'm gonna do.
Chris Huntingford : At the end of this recording, I need to tell you what the message is. Okay, this always scares me, because with you, if you're excited about something, it's either sketchy or hilarious or absolutely amazing.
Mark Smith: It's amazing. It involves Dynamics Minds and an absolutely epic speaker. If you could get an epic speaker out of Microsoft... I've been given three names and asked which one I would go for, and one's like a no-brainer.
Chris Huntingford : Okay, I'm keen. I would love to know.
Mark Smith: Yeah, anyhow, that aside. That aside, sir, I just feel like the world's exploding at the moment around AI and I mean I'm for want of a better term balls deep in it. It seems. At the moment, everything is about AI and what I'm doing, and I know that in Europe there's some big changes coming in this space. What do you know?
Chris Huntingford : So I am going to give a shout-out to Ioana, who we should include in the show notes. So, Ioana Tanasa, I hope I said her surname correctly. She is doing a lot of stuff with Microsoft on the DSB. Now, if you don't know what a DSB is, I'm not going to tell you, right, because that's going to be another episode. But the DSB is really important to them, as it is one of the central mechanisms for the shipping room to allow a solution to go through into the public. So, anyway, what I found out is that this kicks off in Feb 25, right. And what's really interesting is that they're not playing games. This is not GDPR, right? This is not like, oh okay, you are kind of okay with your data. What I understood from the notes that were fired over to me is that a public-facing AI solution can actually get hauled out by people in the EU for testing, especially high-risk AI solutions.
Chris Huntingford : So imagine this: you create a, I'm going to call it a bot, because I don't think it's a copilot. You create a bot. The bot is both deterministic and non-deterministic. So you're looking at a solution that has both defined outputs and non-defined generative outputs, and you don't ground that solution in good data, like the Air Canada story where the bot was giving away free flights. You build your own SLM slash LLM, you don't wrap RAI, or responsible AI, standards around it, and then the bot starts going a little rogue.
Chris Huntingford : Right, like it doesn't actually go rogue; remember, it is generative. It is like a very, very enthusiastic intern who has no ethics training. So it starts answering questions, because, remember, all it does is vector matching. It's just giving you the next word in the sentence. It doesn't know. And what will happen is that that gets popped up. As an example, if I say how do I steal a car, and then I go and steal a car, who is liable? Is it the customer, like in the flights case? What are the rules, right? So we're starting to find that there's going to be some focus needed around how people build these things out and, as I said, if it is a high-risk solution dealing in certain data, you can actually get hauled out by the EU for it. I don't know if there's like an EU hit squad or whatever, but what I do know is that it can happen.
Mark Smith: Yeah, well, with the, what was it, 7% of revenue as one of the fines, 30 mil or 7%, you could see how it could become a nice revenue generator, right, as in to put a police force in to track this stuff down, so to speak.
Mark Smith: "Yeah, but AI is taking our jobs, Mark." Ha! Honestly, I laugh in a way, and then in a way I'm like, for me, in the last week it's become crystal clear, the further I've drilled into the concepts of agentic AI. And, you know, Microsoft, of course, have made a move in public around agents. And when I look at the agents that Microsoft is touting, and I gave an example from McKinsey this week that dropped, I'm like, that's the baby rattle of agents.
Mark Smith: It's the little toy of agents. And they're kind of excited about it and they're calling it autonomous, and I'm a bit like, okay, that's a stretch, right? It's a massive stretch because, in the research I've done, agents come in three different formats. You have a rules-based agent, which I feel is what this is, which ultimately, if you distill it down, is an if-then type agent.
Chris Huntingford : That's all it is, dude. We're not close to actual autonomous agents. You know what it is? It's Power Automate with a text front end.
Mark Smith: Yeah, yep, yep. So you can say, do this, and it'll build it out. Which, I'm glad you said that, because I did a video post last night on this and on what I see coming like a freight train towards us around Power Automate; more on that in a moment. So the second type, of course, is semi-autonomous. It requires a human in the loop, always, as part of it, and it's not so much a human in the loop from a compliance perspective, but it needs human input.
Mark Smith: Right, it needs human input in some form, some way. It might be just the very final step, but it needs a nod. And then you've got autonomous. And if you look at autonomous, or agentic, the definition of it is that it is self-learning. It will have parameters that it needs to operate in, but it will learn from errors, mistakes, unique nuances, and it will become more; in other words, it'll build its knowledge over time.
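The three formats Mark outlines, rules-based, semi-autonomous, and autonomous, can be sketched in a few lines of code. This is a hypothetical Python illustration only, not any vendor's actual API; all the function and class names here are invented:

```python
def rules_based_agent(request: str) -> str:
    """Rules-based: distilled down, it's just if-then branching."""
    if "refund" in request.lower():
        return "Route to refunds queue"
    if "invoice" in request.lower():
        return "Send copy of invoice"
    return "Escalate to a human"

def semi_autonomous_agent(request: str, human_approve) -> str:
    """Semi-autonomous: drafts an answer, but always needs the human nod."""
    draft = f"Proposed reply to: {request}"
    return draft if human_approve(draft) else "Held for human input"

class AutonomousAgent:
    """Autonomous/agentic: operates within set parameters and builds
    its knowledge over time from requests it has not seen before."""
    def __init__(self, parameters):
        self.parameters = parameters
        self.knowledge = {}  # grows with each novel request

    def handle(self, request: str) -> str:
        if request in self.knowledge:
            return self.knowledge[request]
        answer = f"Solved within parameters {self.parameters}: {request}"
        self.knowledge[request] = answer  # the self-learning step
        return answer
```

The distinguishing feature is the last one: it writes what it learned back into its own knowledge, so the same novel request is handled from memory the second time.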
Mark Smith: So imagine a call center gets a very unique request that wasn't in the help manual. It goes, okay, let's solve that for the future, let's get that help built out, let's do whatever needs to be done, and so, therefore, it becomes more intelligent over time. What I see, and this is one of the goals I'm aiming for myself, is: how do I introduce agents into every part of my life, and how do I do it without waiting until mid-2025 or 2026, when this is just going to be everything, right, that is going on in business? And so what I've started doing is writing standard operating procedures, down to the detail, for a thing that I do. And then what I am doing is handing that SOP off to a VA, a virtual assistant, so a real, live human person in another country, and for the next three months they are running that every single day.
Mark Smith: They have to, and, at the end of the day, they tell me where they have updated the SOP, why they made the change and why it improved it. Now, in six months' time, let's say agents are at that point. I'm going to take that SOP, which has been tried and tested and validated with humans, and I'm going to feed it in, and all of a sudden an agent's going to do that. And then OpenAI brought out the concept of a swarm API last week, and so I can say, okay, agent one, now multiply yourself by 100, and I want you all to run in parallel.
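Mechanically, the "multiply yourself by 100" idea is plain fan-out. Here's a minimal sketch using only Python's standard library; `run_sop` is a hypothetical stand-in for a real agent, and this is not OpenAI's actual Swarm API:

```python
from concurrent.futures import ThreadPoolExecutor

def run_sop(task_id: int) -> str:
    """Hypothetical agent: executes one tried-and-tested standard
    operating procedure end to end and reports back."""
    return f"agent-{task_id}: SOP complete"

# "Agent one, now multiply yourself by 100 and run in parallel."
with ThreadPoolExecutor(max_workers=100) as pool:
    results = list(pool.map(run_sop, range(100)))

print(len(results))  # 100 parallel runs of the same validated SOP
```

The SOP that was refined by a human VA becomes the prompt or procedure each copy runs; the swarm layer is just the scheduling around it.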
Chris Huntingford : Do you remember that drawing that I did for you a few months ago when I was at Microsoft, with the little colorful people and agents behaving badly? I said to you, this is what it's going to look like if you get the standard operating procedures right. That agentic rules layer that we drew in the middle, I remember, had all the agents; I'll see if we can put it in the chat for this episode. And I said this is the thing that people are going to need to invent. That's effectively your standard operating procedures, but you digitize that and you make it the thing that governs the agents. So it's like an agent that governs agents.
Mark Smith: So, yeah, an orchestration agent is going to be needed as well, right, as in one that will be overseeing all those agents running. So I'm quite excited about what that means. But just back on Power Automate. I said in the post yesterday that I believe Power Automate is going to explode. But for everybody looking at it going, oh God, I'm going to build a career around Power Automate: in five years' time I don't think we'll ever talk about Power Automate. It will be a tool that runs in the background and it will be fully automated. All the automations will be built out by an agent that will act on our behalf. So I don't see Power Automate going away.
Mark Smith: I see it becoming hyperscale, but I don't see it being you and I building out automations. I see our agents building out our automations on our behalf.
Chris Huntingford : Yeah, we'll need to contextualize the automations.
Chris Huntingford : So there'll be the concept of observability we know that and then there'll be the concept of context.
Chris Huntingford : Right, the context of contextualized data and contextualized Power Automate is really where I think it's going to be at, and it kind of does it now to an extent. So if you think about the history of what happened, right, you had these little templates that you could use in Power Automate, and then you'd go generate a template and provide context. Now you can use Copilot to generate a Power Automate flow, but you still have to provide context. You can have an agent that generates many chained Power Automate flows, but you still need to provide context. And I feel like, you know, Lewis Baybutt actually, Lewis and I have been speaking about this a lot: the power of contextualized information. And when I say information, it could be data, metadata, whatever you want to call it, right? That's going to be where we become really important, because, although the copilot or the agent may have some context, we have all the context, and I feel like that's what's going to make it different.
Mark Smith: Like, your USP is being a human with context. Yeah, and I think the human element is going to be ever more important. But one of the other nuances I've worked out, and it's something that I riffed off what Satya said in London, is this: you'll have personal agents, you'll have team agents, you'll have company agents and you'll have industry agents, and each will be endowed with, you know, applicability. But here's the thing: let's take that swarm scenario, where all of a sudden I've got 100 agents doing this one thing for me. I can see businesses, and it goes back to a podcast that we probably had nine or so months ago where I said the capitalist concept needs to evolve, because if it's just the corporation that benefits off a new tool, that means we can pay workers less or reduce their hours and make the profits off that, and that's not going to be fair and equitable.
Mark Smith: Going forward, there needs to be a model where, if you can get the same output from a human at 50% of the input time, you should still compensate them for the full 100% and give them the freedom to spend their time elsewhere. Which is a real funny thing, because what I'm noticing in the enterprise deals that have been done in the market is that the term productivity is not a selling point for AI, right? One of the comments I heard from a CXO was this: so this person shaves 45 minutes off their day because they're using AI; does that mean they're going to spend another 45 minutes around the water cooler in the office, right?
Chris Huntingford : Yeah.
Mark Smith: It's not benefiting me as an organization. Which, I can see it, but I'm also like, that's the capitalist talking right there, right? That's the, oh, we need to suck up all the money that we can make off our human resources. Now this tells me about the future-state world: if you can be a freelance or independent consultant and you can understand agents and scale them for yourself, could I decide to reduce my work week to 20 hours but produce 40 hours of output? And when I say output, I mean, let's just say, 40 hours of revenue equivalence.
Chris Huntingford : Yeah, you can. I know somebody that does it already.
Mark Smith: But then could I produce 500 hours of revenue equivalence a week?
Chris Huntingford : This is where the ethics slash blah-blah discussion comes in, and I think, okay. So somebody said to me the other day, can I generate a virtual staff force that gets paid? And I was like, well, ethically it's probably not a good idea, because you're not going to receive that many paychecks. But it goes back to the story of the boat in the dock, right? They couldn't get this boat out of the dock. There was something wrong. They sent many consultants in. Nobody could fix it.
Chris Huntingford : They called Bob. He walked in with a spanner, whacked it twice, billed them for a hundred grand, and they said, but you only spent one minute doing it. And he said, well, but I have the expertise. And this is where I think you need to get smart, right? I actually did this to an extent when I was at Microsoft: I automated my role there. Yeah, I really did. I did hackathons all the time because I basically automated my washing-my-face-and-brushing-my-teeth processes. And I think if you can, as a smart person, automate the generic generative stuff, and if other people haven't thought to do it, that's their problem.
Mark Smith: Here's the problem: it shouldn't be just their problem, because what we're going to do is create a massive divide. The concept of knowledge as power is going to become even more of a thing, and you think there's a lot of people in the world who haven't had the digital exposure that we've had. Are we just like, oh well, you're screwed? We can't do that, right? As a human race, we can't take that stance. And of course, I can already hear the people going, oh, you're a commo, or you're a socialist, with that type of thinking. But I tell you what, those people that say that will be bawling their eyes out when they realize the wave that is coming is going to sweep them away as well, and their ability to earn. Because, do you know what? I don't think AI is the big deal. Robotics is the big deal. Because if we all of a sudden have a robotic workforce doing human labor, and I'm talking about humanoid robots, and they get produced for, let's say, ten thousand dollars, you're not going to buy two or three of those, right? At the moment, when I open my browser, I have six tabs open every time, and there's six different LLMs by different providers, blah, blah, blah, and I can switch, because the cost of premium access to them is negligible in my daily operating model. So I'm like, I own them all. You're not going to own five different robots by five different providers, right? You're going to choose one and you're going to hold onto it for five to ten years, potentially. But I tell you what: so much of the physical labor stuff in the world that we do is literally going to be able to be done by that robot, and therefore you just think of the implications. I think of West Hampstead, where I used to live in London, and all the things that involve physical labor in West Hampstead: the rubbish truck goes down the road, the inspectors, all that kind of stuff.
Mark Smith: All of a sudden that all becomes robotic humanoids, and they are tasked with, let's say, not picking up the rubbish on the street or collecting the garbage in the garbage trucks; they're tasked with making sure the street stays immaculate. So when they knock over that rubbish bin, they don't leave it on the ground like you'd currently see now; they pick up every bit. Their infrared scanner picks up that the street sign is about to fail because the bolt's out on it, and guess what, they automatically create a work order for somebody to come through after hours and fix it. They see that bit of graffiti and they've got the device to clean that graffiti on the spot. All of a sudden you're in a different world, right, of what is possible. But then here's the other thing: who's going to create all those robots?
Chris Huntingford : Yeah, who's going to make the robots man?
Mark Smith: Is it going to be an American company or is it going to be a Chinese company? And then, if you had a labor force of, let's say, 1,000, let's say a million robots, is there going to be a concern about who the manufacturer is? And do they have a nefarious program instruction that could be onboarded to that robot force that is now in a foreign country? I don't know man. It raises some crazy stuff right when you talk about red teaming.
Chris Huntingford : Yeah, it's right there, but I don't know. This whole thing right now is moving at an incredible pace. Somebody asked me recently, actually asked me today, are you scared of it? I said, well, no, I'm not. You know what, dude, think of it like this, man: media has made this way more sensational than it really is. I mean, I agree AI is a little scary, and the products of AI will be a little scary, but ultimately it's just about taking it in our stride, man, and remembering digital equity, remembering that we need to be looking after people. I love what Trevor Noah said at the Power Platform Conference. He's like, stop protecting jobs and start protecting people.
Mark Smith: Yeah, it's a different paradigm shift, isn't it?
Chris Huntingford : Mate, this is the thing. So I said on Twitter one and a half years ago, if I was you all, I would start learning Power Virtual Agents. This is me telling you that this is going to happen. And I've gone to companies I've worked with and I'm like, I'm telling you that this is what you need to learn; these are the things you need to understand. And, unfortunately, those who haven't listened are now scrambling, yeah.
Chris Huntingford : I did my best. I told them what to do. I gave them the information I had. And albeit with the imposter syndrome being pretty big, going, oh shit, like, am I saying the right thing? Turns out, yes, right. But I think that there's a very big gap between digital inequity and laziness.
Mark Smith: Yeah.
Chris Huntingford : Interesting, and refusing to understand or acknowledge what's happening around us, whether it be robots or AI or steam cars turning into petrol cars, right? To me, designing the defaults for the next bunch of people is up to us, right, and all we have to do is educate. But if people won't listen, man, I'm not gonna sit there and bang my head against the rock. Like one of the orgs that I work with, and one person in particular in that organization, and, you know, the story I got told: I was arrogant, apparently, because I refused to sell the thing when they didn't go through a DSB and responsible AI, and this person is now on stage presenting about responsible AI. I'm like, oh no. But I just find it interesting. So respect to them, because they haven't been lazy, although I don't think they know what it means or how to do it.
Chris Huntingford : What I do think is important, though, is that these people are acknowledging it. And even at the Power Platform Conference, dude, I did not hear a single person talking about RAI. No, nothing. Actually, you and I were the only people in that entire crowd who got up and said something about it, and it's that type of thing that is removing the ambiguity of what's happening around us. So again, I'll go back to the fact that, sure, I believe in the digital equity piece, but I think there's such a thing as refusing to learn, and laziness, and I cannot be responsible for those people. Yeah, and that's why, honestly, I see a very different future, and it's an exciting future.
Mark Smith: But I tell you what, you need to apply yourself now. Because, as I say, I feel it is like a wave, and you can catch a wave at the right time and you're going to get an epic ride, or you can be too far in and you're just going to get that wave dumping on you, right, and you're going to go, what the fuck happened? I'm being tossed around like crazy. And that's why I think a wave is a good analogy. I think there's a window of time now to really, if you're in our space, apply yourself, folks, and get skilled, because there's a massive opportunity.
Chris Huntingford : I'm going to read this to you. Okay, I know you've read this, but I want people to read this.
Mark Smith: Yeah, yeah.
Chris Huntingford : The EU AI Act applies to any organization that puts an AI system into service in the European Union, regardless of where it is based; any organization that uses an AI system in the EU; providers and users outside of the EU where the output of the AI is intended to be used in the European Union; and, importantly, distributors of AI systems and product manufacturers. Timeline, and this blew my brain up: Feb 25, the sentiment analysis prohibition. That's four months away. Four months. All these people doing CSAT, with their surveys out and doing sentiment analysis, well, good luck. How do I know this? I choose to read things. And then enforcement for general-purpose AI models in August 25. Okay, so here's the thing: we are connected to the people that understand this. We have chosen to read this. We understand this, right? So the companies we work with are therefore affected by the knowledge that we have, because we are curious people and read things.
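The scope rules Chris reads out are, at heart, a short list of conditions, any one of which pulls you in. As a rough illustration only, not legal advice, and with every parameter name invented for the sketch:

```python
def eu_ai_act_applies(puts_system_into_service_in_eu: bool,
                      uses_system_in_eu: bool,
                      output_intended_for_use_in_eu: bool,
                      is_distributor_or_product_manufacturer: bool) -> bool:
    """Paraphrase of the scope list above: providers placing systems
    on the EU market (wherever they are based), users of AI systems
    in the EU, non-EU providers/users whose output is intended for
    use in the EU, plus distributors and product manufacturers."""
    return any([
        puts_system_into_service_in_eu,
        uses_system_in_eu,
        output_intended_for_use_in_eu,
        is_distributor_or_product_manufacturer,
    ])

# A company outside the EU whose bot's answers are consumed in the EU
# is still in scope.
print(eu_ai_act_applies(False, False, True, False))  # True
```

The point of the sketch is the `any()`: you don't have to be based in the EU, or even operate there, for the Act to reach you.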
Mark Smith: Yeah.
Chris Huntingford : Mark, what happens to the people that don't read this and refuse to understand what's happening?
Mark Smith: Yeah, yeah, this is what I feel about companies; sorry, I'll give you a use case. A company that I heard about this week said they don't want to do AI, and the reason they don't want to do AI is that their IT department knows their data is in such a state that their employees would find out stuff that they didn't want them to know. So I was like, okay, this sounds like classic security by obscurity. It's all exposed there, you know it is. And they're like, listen, if we took on AI, we'd have an 18-month project of having to clean up our systems. Oh my gosh, people, do you think that's going away? Do you think that your staff are not going to BYOD AI into your organization, and do you think that you're protected because of this?
Chris Huntingford : I'm just like, oh, that's a crazy level of ignorance; like, oh, let's just bury our heads in the sand and it'll go away. The thing that makes me laugh, dude, and I mean laugh in an "oh no" type of sense, is that you're just spot on. But the thing is, this is only going to get worse, right? The AI right now is optional, yeah, it's optional in the things you're using. We're lucky we have a prompt box, okay, because that will go away eventually as well. You will just have agents behind the scenes doing everything, okay. So imagine when these organizations are smashed with AI that lives under the hood and just does everything for them, but they have terrible data. You thought Power BI was a loudhailer for shit data.
Chris Huntingford : So if I were to put this in a summary right now, and I had to say to people, you know, learn and get educated, I would say, well, sort your data out first. Now, like, don't wait. This was said by many people: we're in year one, okay. We were given this year as a grace period to figure things out, right? If you haven't figured it out after this year, you're going to have some problems next year.
Mark Smith: so I I can't believe the timing Microsoft had with Fabric, like really thinking it in that broad sense, and how rapidly the story around Fabric has grown. That's pretty mind blowing to me.
Chris Huntingford : It's data centricity, man. That's all they've been trying to do. OneLake, right? Yeah, like it's genius.
Mark Smith: Very smart, very smart. It is.
Mark Smith: The whole idea of the show, of course, wasn't to be morbid. I'm bloody excited about this future, and the implications of it are mind-blowing. I mean, Sam Altman said in February this year that we are probably going to see, in our lifetime, a billionaire created who didn't employ any staff, so a billionaire of one who has built a billion-dollar company without any staff. And I believe absolutely that's going to be possible. And so, therefore, I think in front of us lies a whole new opportunity about how we earn and create money, how we create wealth.
Mark Smith: And I honestly believe that we're going into an extreme abundance period in history, abundance like we've never known before. And I find that exciting because, if you let your mind riff off, right, most people limit themselves in life because, oh, I can't afford it, right, it's one of the most limiting mindsets and behaviors. I can't afford that. Imagine you go to a scenario where you don't even think like that anymore. There's no need to think like that because of the abundance that's there. But you know, yeah, we've got an amazing opportunity right in front of us.
Chris Huntingford : I agree. I couldn't agree with you more, and, honestly, man, I feel very passionate about this with you, right. I know we have to wrap up, but I think I've never felt so driven to do something different in tech as I have right now, and that's why I'm enjoying working with the customers we get to work with. I'm not going to mention names, but we do get to work with some pretty damn amazing orgs, and I'm seeing the results of what they're doing in society around me. So it's wild.
Mark Smith: Great timing. Cheers, man. Thank you. We'll see you on the next one.
Chris Huntingford : Rock and roll.
Mark Smith: Thanks for tuning into the Ecosystem Show. We hope you found today's discussion insightful and thought-provoking, and maybe you had a laugh or two. Remember your feedback and challenges help us all grow, so don't hesitate to share your perspective. Stay connected with us for more innovative ideas and strategies to enhance your software estate. Until next time, keep pushing the boundaries and creating value. See you on the next episode.
Andrew Welch is a Microsoft MVP for Business Applications serving as Vice President and Director, Cloud Application Platform practice at HSO. His technical focus is on cloud technology in large global organizations and on adoption, management, governance, and scaled development with Power Platform. He’s the published author of the novel “Field Blends” and the forthcoming novel “Flickan”, co-author of the “Power Platform Adoption Framework”, and writer on topics such as “Power Platform in a Modern Data Platform Architecture”.
Chris Huntingford is a geek and is proud to admit it! He is also a rather large, talkative South African who plays the drums, wears horrendous Hawaiian shirts, and has an affinity for engaging in as many social gatherings as humanly possible because, well… Chris wants to experience as much as possible and connect with as many different people as he can! He is, unapologetically, himself! His zest for interaction and collaboration has led to a fixation on community and an understanding that ANYTHING can be achieved by bringing people together in the right environment.
William Dorrington is the Chief Technology Officer at Kerv Digital. He has been part of the Power Platform community since the platform's release and has evangelized it ever since – through doing this he has also earned the title of Microsoft MVP.
Partner CTO and Senior Cloud Architect with Microsoft, Ana Demeny guides partners in creating their digital and app innovation, data, AI, and automation practices. In this role, she has built technical capabilities around Azure, Power Platform, Dynamics 365, and—most recently—Fabric, which have resulted in multi-million wins for partners in new practice areas. She applies this experience as a frequent speaker at technical conferences across Europe and the United States and as a collaborator with other cloud technology leaders on market-making topics such as enterprise architecture for cloud ecosystems, strategies to integrate business applications and the Azure data platform, and future-ready AI strategies. Most recently, she launched the “Ecosystems” podcast alongside Will Dorrington (CTO @ Kerv Digital), Andrew Welch (CTO @ HSO), Chris Huntingford (Low Code Lead @ ANS), and Mark Smith (Cloud Strategist @ IBM). Before joining Microsoft, she served as the Engineering Lead for strategic programs at Vanquis Bank in London where she led teams driving technical transformation and navigating regulatory challenges across affordability, loans, and open banking domains. Her prior experience includes service as a senior technical consultant and engineer at Hitachi, FelineSoft, and Ipsos, among others.