Democratizing AI: Ashish Bhatia's Journey from Microsoft to Power Automate and the Evolution of AI Builder

Democratizing AI
Ashish Bhatia

FULL SHOW NOTES
https://podcast.nz365guy.com/513 

We are thrilled to have on board Ashish Bhatia, a principal product manager at Microsoft, who is on a mission to democratize AI solutions. Through a fascinating exchange, Ashish sheds light on his journey into the realm of AI and his focus on GPT-based capabilities within Power Automate. He articulates his vision for AI Builder, a tool that empowers low-code citizen makers to build intelligent applications, and how it has evolved over time.

The conversation then delves into the intriguing world of AI, exploring how context, in the form of metadata and location information, can enhance precision and accuracy. We introduce the concept of RAG (retrieval-augmented generation), a technique for retrieving the right information and prompting the LLM to generate precise answers. Prompt engineering, a practice that reduces hallucination in AI models, also takes center stage in our discussion.

But how does AI impact our careers and learning? We thrash out this question in detail, discussing how AI has become a critical part of many jobs, enhancing performance. We examine different ways of learning about AI, from using it as a writing assistant to critiquing work. And how can we overlook AI safety? It's a hot topic that we analyze in-depth, looking at the ethical implications of AI and its potential for both positive and negative outcomes. Join us in this captivating exploration of the AI landscape and how it's shaping our lives and careers.

AgileXRM 
AgileXRM - The integrated BPM for Microsoft Power Platform
90-Day Mentoring Challenge 2024 https://ako.nz365guy.com

OTHER RESOURCES:
Azure OpenAI in AI Builder: https://learn.microsoft.com/en-us/shows/ai-show/azure-openai-in-ai-builder 
AI Show: https://learn.microsoft.com/en-us/shows/ai-show/ai-builder-a-world-of-ai-at-your-fingertips

Support the show

If you want to get in touch with me, you can message me here on Linkedin.

Thanks for listening 🚀 - Mark Smith

Transcript

Mark Smith: Welcome to the Power 365 show, where we interview staff at Microsoft across the Power Platform and Dynamics 365 technology stack. I hope you find this podcast educational and that it inspires you to do more with this great technology. Now let's get on with the show. Today's guest is from Bedford, Massachusetts in the United States. He works at Microsoft as a principal product manager. His mission is to democratize no-code AI solutions, making intelligent technology accessible to everyone. He has successfully led multiple virtual teams and established strategic partnerships to drive innovation and growth. You can find links to his bio and social media in the show notes. By the way, his social media is worth following, particularly on LinkedIn; he talks a lot about AI. Welcome to the show, Ashish.

Ashish Bhatia: Thank you, Mark, good to be here.

Mark Smith: Good to have you on the show. I'm excited that you're here because you're one of these people that I see on LinkedIn constantly putting out great content around AI. I think it's so, so important because, if you look at the Gartner hype cycle we've gone through, I feel that at the start of the year there was a lot of hype around AI. Now I think we're just over the crest, and it's now that the real work gets done for the long tail of what's going to happen in this AI space for the years ahead. So I'm excited about having you on the show. I always like to get to know my guests, to start with a bit of food, family and fun. What do you do when you're not doing your tech role? What do they mean to you?

Ashish Bhatia: When I'm not doing my tech role, I spend a lot of time with family. I'm an outdoor person, so we do our sports a lot. I'm an avid cook at home, so whenever I get the opportunity I just cook something, and it's super relaxing for me. Put some music on, get a good meal on the table.

Mark Smith: Nice, tell me about your journey into AI, artificial intelligence.

Ashish Bhatia: I started at Microsoft about nine years ago as a product manager, and I'd spent a bunch of time doing dev work, project management and, for the most part, product management in my prior roles. But coming to Microsoft as a product manager, I was part of an AI team. This was a pro-dev, PhD data scientist team. That was my first introduction to AI. It was super fascinating; everybody was just super talented. For a long, long period of time I had this imposter syndrome myself: what am I doing here, am I in the right place? I tried to teach myself AI for some time, tried to do online courses and things like that, but then I hit pause. It was a kind of self-reflection moment for me: am I trying to be a data scientist, or am I trying to understand AI to fit my role? I thought the latter was the better position for me to be in, where I understand AI to the extent that I know where to apply it, when to apply it, and what use cases it solves. As a product manager, that is super important for me to get a good grasp on, rather than knowing how to build a model or train a model myself. That was not important, because my team was established to do that; they were trained to do that. My job was something else. So that was my journey, that was my learning curve, I would say.

Mark Smith: Where do you sit currently inside Microsoft? What team, what org are you in? What's your current focus?

Ashish Bhatia: I'm a product manager for a tool called AI Builder that sits in the Power Platform. The Power Platform is a low-code platform within Microsoft. We serve several Power Platform tools: Power Apps, Power Automate and others in the Power Platform ecosystem. We also serve a bunch of internal Dynamics teams, who are all again based on the same horizontal capabilities. As for my own work, I focus a lot on GPT-based capabilities within Power Automate, so we're trying to bring these large language models to our makers. Citizen developers, makers, you can use those interchangeably. The goal is to simplify that experience. In itself, working with a large language model is simple: you're interacting in a natural form factor, you're using text to work with these models. But removing all the other clutter and making it use-case and scenario oriented is the focus. That's what we work on.

Mark Smith: Has anything around GPT in the context of AI Builder GA'd in that space yet, or are we still in preview? I was in Atlanta when AI Builder was first announced. Was that like 2018?

Ashish Bhatia: Yeah, it was before my time in the team. Yeah, for sure.

Mark Smith: I remember it coming out, and I think there were four models in it at the time: image recognition, there was text, you know, looking for what I call I/O, on/off scenarios, and I forget what the other two were. It really was that point of bringing AI to the masses. But it was the AI that we knew about pre the world changing at around the start of this year, right? Pre generative AI and large language models. So it was machine learning based and, of course, all based on Azure Cognitive Services, things like that. Where are we at, from the pre-AI Builder of that time to the use of large language models as part of AI Builder?

Ashish Bhatia: I would say our ethos is still the same. Our goal is still to enable low-code citizen makers to build intelligent applications with foundational and state-of-the-art AI technology. I always say we are ourselves not an AI team; we are facilitating whatever AI exists within Microsoft and bringing it to this platform and this audience in the simplest form, and bringing it closer to their use case. What I mean by that is: if you imagine the Microsoft AI stack as a layer cake, the bottom layer is your platform where data scientists can build their models, and this is your Azure Machine Learning environments and workspaces where pro data scientists work. If you go on top, the middle layer is where all the Cognitive Services sit. A lot of them are based on the same technology that pro data scientists would use with Azure Machine Learning, but there is a lot of ready-to-use stuff there. You can imagine Custom Vision or Computer Vision, speech APIs, form-processing capabilities. A lot of those are ready to use as well, but that is also a layer where, just working with APIs, pro developers are going to be using them. Our goal is to take a lot of that goodness and transition it to this low-code space and make it very scenario oriented. So you'll see capabilities like invoice processing, identity document parsing, or just document processing in itself, with invoice and receipt understanding and things like that. They're scenario focused, use-case focused, but again running on the same state-of-the-art models. With GPT, a little bit of that is changing, because GPT is kind of a mixed bag. It is a pre-trained model, but you can set your own goals, you can give it your own instructions and make it work for you. So even though you can't change the underlying model, you can change its behavior by instructing it, by prompting it, right?
So that is something that is super powerful for makers, because again, this is a no-code interaction. You could just tell the model what you wanted to do, and you could define the outputs. And it is a net new model. So, for example, we don't have a summarization model in AI Builder. You could use GPT to act as a summarization model, right? You can say: I want you to summarize this email or this incoming message from my customer and give me talking points about it. So in that sense it's super powerful and customizable as well. Our thinking going forward is to bring it even closer to scenarios and use cases and make it super actionable, because again, it's still a general-purpose model. You can ask it a bunch of things, but the more targeted, the closer to the end scenario we take it, the better the uptake from developers, or citizen makers in this case.

Mark Smith: How do I bring my own data to the mix, right? So the large language model is in place and let's say, to keep the scenario simple, that I have a large knowledge base around all the products my organization has. How can I, with AI Builder, then provide that data set to be consumed, whether it be, for example, for summarization or because I want to Q&A it? There may be a range of examples: I might be out on a job and need to fix something, and I need the precise information on this fix from the data that we have back at the office, so to speak. How do I bring my own data to that mix?

Ashish Bhatia: So some of that is possible through the AI Builder capability even today, but with caveats; it's kind of limited, right? So I'll explain both of them. First of all, the GPT model is your base model, and you do two things: you give it an instruction and you give it a context. Any prompt will have a combination of these two, unless you're asking it to do some creative task, generate a poem or generate song lyrics or whatnot. Otherwise, most of the time, you're giving it an instruction: summarize this email, or generate a response to this message, right? So you're giving it an instruction and you're giving it a context. A lot of times that context is your own data. It could be a set of emails or a set of rows coming from a data source, and you say: hey, using this, create me a response to this incoming email. Something like that. So that is how you can use it. Most of the Power Platform tools already sit on top of a data layer called Dataverse, so your data already lives there. When you're incorporating this model in your flows or your apps, you can query that data to match the new incoming data and pick the right rows, then bring them into the context of the GPT model and prompt the model saying: hey, here's the new data, here's my instruction, here's the existing data. I want you to use all of that information and give me this unique response or unique summary. We're going to make some of that super simple so that you wouldn't have to get involved in querying; we'll take care of some of that workload for you. But again, those are things that will come a little bit later.
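
The "instruction plus context" prompt shape Ashish describes can be sketched roughly like this. This is an illustrative sketch only, not AI Builder's actual implementation; the `build_prompt` function and the row fields are made up for the example.

```python
# Hypothetical sketch: pack rows retrieved from a data source (here,
# Dataverse-style dicts) into a prompt alongside an instruction.
def build_prompt(instruction, context_rows):
    """Combine an instruction with retrieved context rows into one prompt."""
    context = "\n".join(f"- {row}" for row in context_rows)
    return (
        "You are assisting with a business task.\n"
        f"Context data:\n{context}\n\n"
        f"Instruction: {instruction}"
    )

rows = [
    {"customer": "Contoso", "issue": "late shipment", "status": "open"},
    {"customer": "Contoso", "issue": "billing query", "status": "closed"},
]
prompt = build_prompt("Draft a reply to Contoso's latest email.", rows)
print(prompt)
```

The resulting string, instruction and context together, is what would be sent to the model in a flow or app.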

Mark Smith: So you talked about context there, and context being my data. How about context being things like the device I'm using, the time of day, my geography? Let's actually just take those three. Is that considered context, or would we label it differently?

Ashish Bhatia: It depends on the scenario that you're trying to hit. With Power Apps, you will see a lot of that context is present: what kind of device you're using, your compass, your location. So if your scenario is, say, I'm creating a location-aware application and the response in my user experience can be personalized based on the location of my user, you could do that. For example, you have some kind of field frontline staff and they're addressing a broken device or something. Then you could definitely capture that location information and give them awareness of where they are, what devices they're interacting with or the facility they're working in, and thereby you can bring in that location-aware context for them.

Mark Smith: Yeah, so would we call that context? That still would sit under the heading of context?

Ashish Bhatia: I would say you will use some of that metadata to find the right context. So, for example, I have a data source table where my prior equipment-break records are, and I know that my frontline staff is working in a given location. I can filter down my data source rows based on that location and say: here is all the breakage in the past year that I've had. Then I further narrow down that scenario to give them the right information that they need to complete the job that they are after.
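
The metadata-driven filtering in that frontline-staff example might look something like the following sketch. The record shape, field names and dates are all invented for illustration; in a real flow this narrowing would be a query against Dataverse rather than an in-memory filter.

```python
# Illustrative only: narrow historical equipment-break records to the
# technician's current site and the past year before handing them to the model.
from datetime import date, timedelta

records = [
    {"site": "Plant A", "device": "pump-3", "date": date(2023, 5, 2)},
    {"site": "Plant B", "device": "fan-1", "date": date(2023, 6, 9)},
    {"site": "Plant A", "device": "pump-3", "date": date(2022, 1, 15)},
]

def relevant_breaks(records, site, within_days=365, today=date(2023, 9, 1)):
    """Keep only breakage records for this site from the past `within_days`."""
    cutoff = today - timedelta(days=within_days)
    return [r for r in records if r["site"] == site and r["date"] >= cutoff]

# Only the 2023 Plant A record survives both filters.
print(relevant_breaks(records, "Plant A"))
```

The reduced row set is then what gets packed into the prompt as context.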

Mark Smith: Yeah, so the use case that's going through my mind is that somebody is logged into a computer as a bank teller, right? So we would know about this person: what's their role, their privileges, what they are allowed to do, what's their day-to-day role. We would have an understanding of that based on their authentication, right? We know where they are: they work in this bank, at this location, in this geography, and they're a teller, so they're going to be frontline staff in front of customers taking questions. A customer walks up to the counter and says: what's your fixed-term rate for this type of mortgage? They've got to give a very precise, accurate answer, right? Whatever the fixed rate is, x percent, and whatever the T's and C's, et cetera, around it. If they were to query that, let's say, through a Power Virtual Agent on their computer terminal, would that metadata, digital exhaust, whatever you want to call it, be ingested into the prompt? Who they are, their geography and, obviously, talking to a customer, they're wanting the rate for this. Even though our data set has every rate going back to when the bank started, we don't want an old rate, right? And the challenge I hear under that use case is that if that's just a SharePoint repository of all that data and I query what's the fixed rate for this mortgage, I am going to get 5,000 results with links to archaic, outdated documents that happened to have part of my prompt in them. So can that be fed into the context of the data, and then a very precise answer come back, based on today's date, where we are located, what our current rates were advertised at in the market, all that kind of stuff, so that they could, with full authority, give the correct data?

Ashish Bhatia: Yeah, great question. I would say no, and to explain why, I'm going to introduce a new concept called RAG, which is retrieval-augmented generation, right? What it does is break the task into two steps. So, for the question that you're asking, the way we would perform that is: the first step is understanding the intent of the question being asked, and then, what is the metadata that I'm working with? The location, the teller, the account, the type of account and all of that, right? All of the metadata about the end user that you're trying to cater to. Take that metadata, go search my Dataverse tables, find the relevant information. That relevant information may be spread across 50 rows or whatnot. The goal is to make a query to narrow down that data as much as possible, and then, to peel out the right answer, that's the job we offload to the LLM: say, hey, here is all the context data, here is the instruction I want you to follow, which is the original intent of the question. Go find me the right summary, the right answer, the right rate for a long-term deposit or whatever. And that's how we achieve that scenario. So: retrieve, which is the first step, like in the map-reduce kind of world if you want to make a parallel. Retrieve the right information, give it to the LLM, and let the LLM generate an augmented result based on that context data.
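
The two-step RAG flow described above, retrieve first, then generate, can be sketched minimally like this. Everything here is illustrative: `retrieve`, `answer` and the rate table are invented, and `call_llm` is a stand-in for whatever model API is actually used.

```python
# Minimal RAG sketch: (1) retrieve rows matching the question's metadata,
# (2) offload answer generation to an LLM over that reduced context.
def retrieve(table, **metadata):
    """Step 1: narrow the table to rows matching the query metadata."""
    return [row for row in table
            if all(row.get(k) == v for k, v in metadata.items())]

def answer(question, table, call_llm, **metadata):
    """Step 2: hand the reduced context plus the original intent to the LLM."""
    context = retrieve(table, **metadata)
    prompt = f"Context: {context}\nQuestion: {question}\nAnswer precisely."
    return call_llm(prompt)

rates = [
    {"branch": "Wellington", "product": "2yr fixed", "rate": "6.85%"},
    {"branch": "Wellington", "product": "1yr fixed", "rate": "7.15%"},
]

# Stand-in "LLM" that just echoes the context line, to show the data flow.
fake_llm = lambda prompt: prompt.splitlines()[0]
print(answer("What is the 2yr fixed rate?", rates, fake_llm,
             branch="Wellington", product="2yr fixed"))
```

Only the single matching row ever reaches the model, which is what keeps the answer current and precise rather than drowned in 5,000 stale results.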

Mark Smith: So what I heard, and correct me if I'm wrong, if I play back what you just said: the request is really handed to a brilliant prompt engineer that takes into account all that metadata and the intent, and it crafts a very precise request then to the LLM, GPT-4 in this case, let's say. And therefore, because it was such a precise prompt, it then retrieves the correct data.

Ashish Bhatia: Yep. So again, step one is the query: query for the data, for the reduced set of data. And the second step is that precise prompt, with the original intent and this reduced set of data, offloaded to an LLM to create the final answer, that reduced answer, for you. And that's how a lot of that large-data problem can be solved. When you ask the question, can I bring my own data? Yes. Be precise; cut down on hallucination. It's a great tool to cut down hallucination, because the model has this tendency to answer questions that it doesn't know. But the more data you pack in to educate the model, the more it helps cut down hallucination.

Mark Smith: We've talked now about prompt engineering, and I like what you said earlier on. You said that, looking at your career path, you started to go down a data scientist path and then realized: no, actually there are people that specialize in that; I don't need to go down that path, but I do need to understand AI. And I feel I've been on the same journey. I thought, oh my gosh, I have to become a data scientist and, just quietly, that bores me to death. That's just not the way I'm wired. But as I have learned, I realize, and I don't think I'm the only one in this boat, that there's so much to learn. But what do I need to learn to be really good? Because, you know, I come from a company that has many years of experience in the AI space, very publicly, and I look at the training resources available to me behind the scenes, and in my mind I go: that's all old AI, right? It's not where the world is now. That's all yesteryear AI, and I don't want to spend my time learning it. Honestly, if I looked at an AI course now and saw its creation date was pre-2023, that alone might stop me going deeper into the contents. And saying that, the company I work for is IBM, so obviously it's got a big brand with Watson. They've brought out watsonx. Hopefully I won't get a smack for this, but I would never have named it after a dying product brand and reinvented it with another one. But they did. And I've done a lot of that type of training, and I'm just like, yeah, it's good fundamentals and good stuff for how AI was. But if I was to learn AI today, what kind of resources do you recommend that folks go out and learn from, in the post-2023 world that we're in?

And really, from that perspective: I know as a consultant that everything I do is going to have an AI component in the future, no matter what. I can have 20 years' experience in Dynamics, and my experience in the Power Platform from the day it came out as a concept, but I think all of those now have to include AI fundamentally, for where our world is going. So, in that context, how would you advise people, encourage people, to look at AI in the context of their career or their learning paths? Because what I've noticed: Google, they've got their AI program. Microsoft on LinkedIn Learning has their AI program and stuff. Great, they're good. My wife's gone through the Google one, and she's ex-Google herself, and it's great. And then it gets really technical, and all your business stakeholders are like: I'm out, now it's over my head, this is not the stuff I need to know.

Ashish Bhatia: I will start by saying that almost every job will get impacted by AI. That doesn't necessarily mean AI will replace the job, but AI will become an intrinsic part of your job. We're seeing this as the new school year is starting: teachers are trying to figure out how to work with AI. There was a time when everybody resisted, maybe not everybody, but a lot of the education community resisted kids playing around with AI. They're now trying to understand how to live with it, how to use it as a resource, as a tool, to educate better, reach more students and work with them. Similarly, as a product manager, I'm thinking about how I could use AI as a resource to do my job better. The question always comes down to how you use it as a tool to do your job better, and the people who figure that out will excel in the paths they have chosen. How do you learn? Again, there are two paths there. For people who deal day to day with AI, that learning path is different. We're learning about AI safety all the time; I'm posting a ton about it, because that is something that I am deep in and learning right now. But for almost everybody else who's going to use AI as a tool to do their job better: start using it, start understanding the scenarios where the AI augments you and where you bring your own intuition to perform the job well. Just cracking that code is going to be the recipe for success, at least in my mind.

Mark Smith: Yeah, that word augment, I think, is critically important. I find I use it a lot for finding the gaps in my thinking. I will say: this is what I'm doing, and I'll give it the whole picture, this is what's important, and then, you know, almost play devil's advocate and ask: where is my thinking wrong? What am I missing? And what's blown me away: I've used it in a couple of contexts. I've used it in the context of my will. I took the legal document from the lawyer and I pasted it in and said: in the context of New Zealand law, because I'm New Zealand based, what is missing from this will? It came up with four sections that the lawyer had totally not brought up, not addressed, but which were all very relevant, and I was like, hey, that's brilliant, right? It didn't mean I just copied and pasted what it said, but it gave me context to go back and further the discussion as to why these things weren't in it. And then I often find, from a training perspective, I can give it, let's say, the table of contents of a training scenario and say: what would the gaps be in this? What didn't I cover? And I'll give you a simple one. I do a course on communications in the context of Microsoft and the tech we work in, and I asked: what additional module should have been there? And it said: you didn't cover listening. You talked all about talking, written and verbal communication, but you never did anything about listening. And I was like, ah, so obvious. But I find that's the beauty of having this augmented experience: when you're giving it the prompt to look for that, it's allowing you to uncover your blind spots in many situations.

Ashish Bhatia: I myself use it in three predominant scenarios in my work life. One is as a writing assistant. I use it a lot to take whatever I'm writing, maybe specs or requirements and things like that, and make it more holistic. I also use it as a critique, right, the stuff you were talking about: here is what I'm thinking about; what are the gaps? What am I not thinking about? Or how could it go south? So as a critique, I use it a lot. And then the third part is ideation: here are three things I'm thinking about; what else could I be thinking? Or: this is an industry and we're thinking about AI and this specific piece of it; what are the other scenarios there? So idea generation, critique and writing: those are at least three ways that I use it a lot in my own work.

Mark Smith: Yeah, I like it. You knew this podcast was coming up; is there anything else that you'd like to add? I want to ask you a bit about how you're using AI personally: how you're incorporating it outside of your role at Microsoft, how you're finding practical application, as I say, either in your personal life or in a side hobby or anything like that. Or are you pretty much staying straight in the wheelhouse of how you use it for your nine-to-five job, so to speak?

Ashish Bhatia: I would say the current investment of time that I'm putting in, other than work, is just learning more about AI safety. I'm reading a ton of papers and I sometimes try to summarize those. So I'm not so much incorporating AI into my personal life or time; I'm spending a lot of time reading about it and learning outside work, just for context. And there are two reasons. A: I'm passionate about that space, about AI safety, bias and how we bring AI in an equitable way. B: there is so much going on in that space, and I feel it is going to be a keenly contested area in the next few years, because there is a lot of room for goodness with AI, but there is also the possibility to do wrong. And again, it's in the hands of people who are thinking about AI safety in general, which levers we use and how much we use them. There's a ton happening in that space, and it's super exciting, because that will dictate how all of this transitions, right? Good way, bad way, however that ends up. So it's interesting to watch that space and learn from it, and there are good takeaways, I feel.

Ashish Bhatia

As a product leader with expertise in AI and machine learning, Ashish Bhatia is passionate about driving strategic initiatives, launching innovative V1 products, and building strategic partnerships to create a smarter, more connected future. His mission is to democratize no-code AI solutions, making intelligent technology accessible to everyone.

With a global career spanning India, Finland, and the USA, he has collaborated with diverse cross-functional teams to deliver high-impact results. In his current role, he is integrating Azure OpenAI into AI Builder, Power Automate, and Power Apps, with tangible and game-changing outcomes for citizen developers.

Ashish’s experience in energy, sustainability, retail, manufacturing, healthcare, and finance has honed his skills in designing AI-powered solutions to address complex challenges. As a global Product Manager, he has successfully led multiple virtual teams and established strategic partnerships to drive innovation and growth.

With a strong foundation in Systems of Intelligence and machine learning for automated decision-making, he is well-versed in the latest AI advancements. As we move forward, Ashish remains dedicated to driving strategic initiatives, launching cutting-edge products, and shaping a smarter world through intelligent, no-code AI solutions.