When It Comes To AI, It’s The Little Things (For Now)
For more information on using AI on IBM i, watch these videos from Profound Logic:
Integrating GenAI with your Legacy Applications
Enhancing IBM i applications with AI is now simple!
AI on IBM i Demo
This transcript is edited for clarity.
Peg Tuttle: Today we are thrilled to have Brian May from Profound Logic with us. Brian is a published author, award-winning speaker, and the vice president of product management at Profound Logic, where he leads the charge in transforming legacy systems for the future. Profound Logic has been at the forefront of modernizing and futurizing IT systems for over two decades, helping teams embrace tomorrow with cutting-edge solutions. In this episode, we’ll explore the revolutionary world of AI assistants, their impact on legacy applications, and how Profound Logic is making it easier than ever to integrate AI into your business. Stay tuned as we uncover AI’s potential to transform how we work and interact with technology. Hey Brian, welcome to the show, so good to have you here today.
Brian May: Oh, thank you for having me.
Peg: Yeah absolutely. So, Brian, why don’t you just give everybody a quick update on you—and you know what? Here we are in the new year. What have you been up to so far, these first couple of months of 2024?
Brian: So, my beginning of the year has been solely concentrated on our new AI offering here at Profound Logic. So in addition to being the VP of product management, I’m specifically the owner of our AI and API solutions. Since there’s a new AI product coming down the pipe, that is taking 100% of my attention right now.
Peg: I can only imagine. We’re going to talk a lot about AI today, so let’s just launch right into it. I’ve got a few questions Brian, and I’m hoping that we can dive a little deeper into AI and what it means to our IBM i users and this platform, maybe even those who are just interested in learning more about AI because I know you’ve been focused on it. You’re bringing a lot of good information to our listeners, so go ahead and start talking about AI. Maybe you can talk a little bit about how AI is currently being utilized in the IBM i marketspace on the system, with the platform in this environment. Maybe you can just start talking a little bit about that.
Brian: Sure. So a big part of my job is of course talking to our customers and other IBM i shops that are interested in AI, and what I find is that there’s a common thread among all of them that they’re all in the early stages—which is not surprising at all, right?
Peg: Yeah.
Brian: AI is rapidly evolving and everyone’s really just trying to get their arms around this thing. So most of the IBM i shops that I’m talking to are really in those early stages where they want to have conversations with companies like ours: we know we want to do something, but we’re not sure what. So that’s where it helps to have a partner you can lean on, someone who can sit down and say okay, these are some of the things other customers are doing, here are some ideas, here are some things we’re looking into. Everyone is really in the experimentation stage at this point.
Peg: Yeah.
Brian: I think that the IBM i industry as a whole though is actually in a slightly better space than some others, which may go against what others might think.
Peg: How so?
Brian: One of the things I think we have going for us when it comes to AI for our IBM i customers is that they are generally running applications that they’ve been running for a really long time, which means they have a lot of historical data. And having all that historical data can actually give you more information for AI to be able to leverage in trying to come up with answers to complex questions. So that is an area where we’re rather unique. You know a lot of our customers have been running the same applications for decades—
Peg: Decades, yeah.
Brian: And so, unless they’ve had to go through major purges, they have decades of data that can be utilized by an AI. So it gives us a unique advantage there. Now the flip side of that is we also have decades old applications, right? So one of the things we’re doing in our solutions is we are really laser focused on helping customers be able to integrate the new features of AI into their existing applications. We don’t want them to have to throw out what they’ve done. We’re coming up with ways to make it simple for them to bring AI into what they already have and keep that legacy that they have and be able to utilize it to get a competitive advantage.
Peg: I like to say proven over legacy [laughs]. Those applications are proven, they’ve worked every day. They keep showing up—you know, proven applications—but in some cases, yeah, they are legacy and it is time for a change. So with the rapid advancement of AI and this technology, I mean it’s been what? 14 months since we’ve really seen it take off. I’m just kind of curious. What strategies do you recommend for companies that are trying to integrate AI into their environment?
Brian: Sure. You know what I advise our customers right now is it’s the small things. Everyone wants to think oh, how can we come up with this amazing solution that’s going to do all of these things? That’s great to have that as an end goal, but right now, start small. I have some customers that I’m working with that are just finding ways to use AI to add small efficiencies—those quick wins, right? To say okay, this is all new—and let’s be honest it’s kind of scary. So let’s do something small, be successful at it, measure the value that we’re getting out of it and then let’s scale that up.
Peg: Yeah, and then that gives you an opportunity to make changes, iterations, and adjust as you go forward.
Brian: Absolutely.
Peg: We’re going to talk about security, but I want to talk about ethics and responsibility when it comes to AI. What are your thoughts on that?
Brian: Well, that is an area that’s different when it comes to AI, right?
Peg: Yeah.
Brian: And you know you said we’re going to talk security and we will, and I’ll reiterate this when we get there, but I don’t talk to customers about security when I talk about AI. I talk to them about safety. It’s more holistic, right? It’s more than just making sure that someone doesn’t access some data that they shouldn’t. It’s also about the privacy side of things. It’s about the ethical usage. There’s a whole lot more to it. So when I’m talking to customers about that, the first thing I think you have to have is an acceptable use policy for your company. Period. You can’t expect your end users to use AI responsibly if you haven’t defined what that means, right?
Peg: Right.
Brian: So it’s having that acceptable use policy and educating the users on what that policy really means. Now there’s also putting guard rails around that, and we’ll talk more about that later as well, but yeah, that’s the big thing. That’s what you’ve got to do first from a leadership standpoint. You’ve got to say okay, we’re going to go down this road but we’ve got to go ahead and put in some rules about how this is going to be used. Because unfortunately, what can happen is that an end user can do something—you know, can go off and do something with AI that is detrimental to the company, can expose things or even can be against the law. So it’s very important that you define all of those things up front.
Peg: Yeah, yeah, I agree, and I know we’ve seen some of those things in the news in the last 12 months, you know, code that’s been accidentally shared with ChatGPT or something, and unfortunately the employee then loses their job and the company is in trouble. And so just in that vein, I know that for the workforce using ChatGPT, or AI in general, it’s going to change the landscape of the job and the skills that they need. I’m kind of curious. In your research, what have you discovered with regards to that area?
Brian: I think that it’s going to change, especially on the IT side of things, and I think that IBM i shops are very adaptable to this, I guess is the way to put it. But I think you’re going to see developers shifting more into business analysts, which is really common in the IBM i world anyway. With being able to build AI agents within our solutions or with anything else, you’re able to build these solutions using natural language, so syntax kind of goes out the window. You don’t have to be an expert in laying out an algorithm to get to a result. You can just tell the AI what it is you’re trying to do, and it can figure all of the bits out on its own. So when that happens, your coding skills—and there will always be some coding to do, don’t get me wrong—become, I won’t say less important, but less day to day. Then your understanding of the business becomes more and more important, because that’s the knowledge you’re going to lean on when you’re building these AI capabilities: describing the business case to your generative AI solution so that it can then help you build out features. So I don’t think developers are going to lose their jobs. No, they’re just going to adjust their jobs. And again, this has always kind of been the case in the IBM i world, right? We’ve always had a lot of business knowledge within our organizations anyway, so we’re very naturally suited to make the shift. But what’s going to become a really important part of it is the business knowledge and the prompt engineering side of it.
Peg: Yeah, yeah, absolutely. Very well said. You have to learn the business to really understand what you need to develop. Yeah, I think that’s absolutely spot on, so very good, very good. Let’s tackle security because it is paramount and when it comes to AI implementations, I want to hear your personal opinion, and maybe what Profound is going to take as their opinion as they go forward with AI. So maybe talk a little bit about security, you know your best practices and how you guys are going to implement that as a part of your solution for AI.
Brian: So yeah, for me personally, and I mentioned this before—it’s more about safety than security. It’s making sure you’re taking a holistic look at it. It’s not just locking down authorization to access the AI, and it’s not just controlling data access—although that is part of it, right? It’s really making sure you understand all of the areas that are important in order to use AI safely. We’ve tried to take that into account as we’re building solutions for AI. So if I were to create a list here, off the top of my head, of what it means to use AI safely: obviously, things like authentication are important, right? Prove who you are. Data access control is absolutely important, and in fact in our solutions, we’ve come up with a method where the AI doesn’t have direct access to the database. We’re kind of in between there as a buffer. You have to understand, and this isn’t even a technical thing, the privacy policies and data retention policies of the models you’re going to use, right?
Peg: Yeah.
Brian: That’s important. Everyone’s worried about, well, if I release data into this large language model, are they going to use it to train? Are they going to reveal it to my competitors? You have to understand, when you’re signing up for a large language model, what the implications are and whether you’re under a plan that protects your data. The cheap ones don’t, but the enterprise-level plans do. So you have to make sure that you understand that. You’ve got to have some sort of validation in place for the requests that are made of the AI. You’ve got to understand what’s happening there. One of the things that we do in our solutions, which I think is somewhat unique, is that we actually have control of the prompts. If you just give an end user access to ChatGPT, they can go and tell the AI to do anything in the world that they want.
Peg: Yeah, yeah.
Brian: So one of the things that we do to help our customers put those guard rails in place is that IT actually creates the prompts for the agent, and then the end users can only interact with the chat interface of that agent. So it allows IT to say okay, this agent is designed to do this and it cannot do this, this, this, or this. So if I were building a prompt for an agent, I could say okay, you are an agent designed to help with HR requests, for example, and you can only answer questions that are related to HR and the documents that are uploaded to the AI.
Peg: Right.
Brian: Any other requests are forbidden, and you will apologize for not being able to answer those questions. So you just put up a major guardrail, and then since your end users can’t access the prompt, they can’t circumvent it. So that’s one of the ways that we’re trying to build a solution that helps our customers to control that AI usage. They can embed an AI agent in a business application that is context aware, so we can then have it aware of the context of the screen that they’re on for example, and then also put guard rails around—okay, you can’t do anything that doesn’t pertain to the customer that is up on the screen right now, or you can in the prompt say you can answer questions about this customer or any customer. That’s completely in IT’s control. And then the last thing that we’re focused on from a safety standpoint is logging. We’re making sure that we’re logging everything: data access, all the requests that are made of the AI, all of the responses coming back from the large language models. We’re making sure that we have all of that logged so it can be audited. Again, it all comes down to just making sure. A lot of our customers are wary. They are not fully comfortable that it won’t go off and do something it’s not supposed to do. So a lot of what we’re doing within our solutions is not just making sure that we have ways to prevent things from going rogue, but also ways to go back and find out what happened if that does happen for some reason.
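To make the guardrail and logging ideas Brian describes a little more concrete, here is a minimal sketch of an IT-owned system prompt wrapped around a chat call, with every request and response appended to an audit log. It is an illustration only, not Profound AI’s actual implementation; the HR scenario, model name, and log file are assumptions, and the example uses the public OpenAI Node SDK for brevity.

```typescript
// Illustration only: an IT-owned system prompt with guardrails, plus audit logging.
// The HR scenario, model name, and log file path are assumptions for this sketch.
import OpenAI from "openai";
import { appendFileSync } from "node:fs";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// IT writes and owns this prompt; end users only ever see the chat interface.
const HR_AGENT_SYSTEM_PROMPT = `You are an agent designed to help with HR requests.
You may only answer questions related to HR and the documents provided to you.
Any other request is forbidden; apologize and explain that you cannot answer it.`;

export async function askHrAgent(userQuestion: string): Promise<string> {
  const response = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [
      { role: "system", content: HR_AGENT_SYSTEM_PROMPT },
      { role: "user", content: userQuestion },
    ],
  });

  const answer = response.choices[0].message.content ?? "";

  // Log every request and response so AI usage can be audited later.
  appendFileSync(
    "ai-audit.log",
    JSON.stringify({ at: new Date().toISOString(), userQuestion, answer }) + "\n",
  );

  return answer;
}
```

Because the system prompt lives in code the end user never touches, the only thing a user controls is the question typed into the chat interface, which is the point Brian is making about IT-controlled prompts.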
Peg: I like your idea about just starting small, just baby steps. You know, just start with a small project, see how it works, and then go. Safe and small.
Brian: Yes.
Peg: When you guys talk about your new platform—I did a little research and I saw that you actually have an easy three-step process for deploying these AI assistants. I’m kind of wondering if you can take a few minutes just to walk through that process and how it simplifies the integration of AI into proven legacy applications.
Brian: Sure. Now, we’ve taken as many steps as we can to streamline the process. So the first step is you need to choose what large language model you’re going to use. Right now, we have support, or are adding support, for basically all major models: open-source models, commercial models, self-hosted models, models in the cloud, whatever you want to do. That is one way that we’re simplifying this for our customers, in that we’re working out the interface with the large language model so that the customers don’t have to know exactly how to communicate with it. They can just use the tool to build out the agent and we’ll handle the plumbing. So in doing that, it gives them not only the ability to just choose what model they want to use and get going quickly, it also makes it easy for them to change models down the road.
Peg: I was just going to ask that. What if it’s not the right model? What if it’s not the right one?
Brian: So what we’ve done with our tool is we’ve made sure that we support multiple models. We basically have a sort of universal interface behind the scenes, so you could be using, I don’t know, Claude today, and tomorrow you decide, you know what? We need to start using OpenAI. It’s literally a drop-down within the agent configuration to have it use a different model. So you’ve got your model and you’ve decided on that, but you’re not locked into it. You can change it later, not a problem. So the next thing you have to do is configure your agent. What does that mean? Well, that means using natural language to build your prompt, right, to describe what this agent should and shouldn’t do. You can add context from the screen that you’re going to have this agent embedded into, and then you give the agent access to any data—you know, database data, unstructured data inside of documents, any APIs that you’d like for it to call, or you could even write low-code routines for more complex tasks that you’d like the AI to be able to do. Then you’re just testing it as you configure, because in our IDE, you select your model and the IDE is connected to the model at that point, so you have a mock chat window right there in front of you as you’re developing. You can be conversing with the large language model as you’re changing the prompt and changing settings, so you can say, well, that wasn’t really the response I was looking for. Let me make a tweak to my prompt and ask again.
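The “swap the model from a drop-down” idea can be pictured as a thin, provider-agnostic layer: the agent is described by a plain configuration object, and only the adapter behind a common interface changes when the model selection changes. The sketch below is hypothetical; the interface names and provider identifiers are assumptions, not Profound AI’s actual design.

```typescript
// Hypothetical sketch of a provider-agnostic agent configuration. Interface and
// provider names are assumptions, not Profound AI's actual design.
interface ChatModel {
  complete(systemPrompt: string, userMessage: string): Promise<string>;
}

interface AgentConfig {
  name: string;
  model: "openai:gpt-4o" | "anthropic:claude-3-5-sonnet"; // the "drop-down" value
  systemPrompt: string; // natural-language description of what the agent may do
  dataSources: string[]; // documents, tables, or APIs the agent is allowed to use
}

// Each adapter hides one provider's SDK behind the common ChatModel interface.
// The bodies are placeholders; a real adapter would call the provider's API.
const adapters: Record<AgentConfig["model"], ChatModel> = {
  "openai:gpt-4o": {
    complete: async (sys, msg) => `[OpenAI would answer "${msg}" under: ${sys}]`,
  },
  "anthropic:claude-3-5-sonnet": {
    complete: async (sys, msg) => `[Claude would answer "${msg}" under: ${sys}]`,
  },
};

// Changing providers is just a change to config.model; the calling code is untouched.
async function runAgent(config: AgentConfig, userMessage: string): Promise<string> {
  return adapters[config.model].complete(config.systemPrompt, userMessage);
}
```

Switching from one provider to another then amounts to changing the `model` field, which is the effect the drop-down in the agent configuration would have.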
Peg: Yeah, yeah.
Brian: And so it allows you to streamline that whole process. Once you’ve gone through that, and you’ve gone through your testing cycle and gotten everything the way you want it, the third and final step is just to deploy it. When you’re ready to deploy, you choose the deploy option within the tool and it provides you with the needed code that you just copy and paste directly into the UI of your application—you know, depending on whether you want to deploy it behind an existing button, or you want an embedded icon, or you’re trying to export it to a Slack bot or an OpenAI plug-in.
Peg: Oh sure. Yeah.
Brian: I just got Microsoft Copilot on my PC when I upgraded last night, so if I wanted to deploy to something like that, those are all things that we’re looking into: adding deployment options for whatever your UI is. For a web UI especially, it will actually just provide you with a little bit of JavaScript to put in your onload or your onclick, whichever is appropriate. You just copy and paste it in there and it’s ready to go.
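For a web UI, the kind of copy-and-paste snippet Brian describes might look roughly like the following. This is purely illustrative; the function name, element ids, and widget URL are invented for the example and are not the code the tool actually generates.

```typescript
// Illustration only: the kind of copy-and-paste embed snippet Brian describes.
// The function name, element ids, and widget URL are invented for this example.
function openAiAssistant(agentId: string): void {
  // Create (or reuse) a container for the chat widget.
  let panel = document.getElementById("ai-assistant-panel");
  if (!panel) {
    panel = document.createElement("div");
    panel.id = "ai-assistant-panel";
    document.body.appendChild(panel);
  }
  // Load the agent's chat UI into an iframe inside the container.
  const frame = document.createElement("iframe");
  frame.src = `https://example.com/agents/${agentId}/chat`; // placeholder endpoint
  frame.width = "400";
  frame.height = "600";
  panel.replaceChildren(frame);
}

// Wire it to an existing button's click handler, or call it from the page's onload.
document.getElementById("help-button")?.addEventListener("click", () => {
  openAiAssistant("hr-assistant");
});
```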
Peg: Wow. I know that you guys are in beta with your product right now. Have you had customers approach you and say hey, I’d like to participate in this and be a part of it?
Brian: Yes. We opened up the beta just a few weeks ago. We already have a few customers that are in the beta program. I’m talking to new customers every day, so that beta program is growing and we’re still looking for beta customers. We’re trying to get a really good group of beta customers that cover a lot of industries and a lot of shop sizes, but also that are willing to really partner with us.
Peg: Sure.
Brian: Our beta isn’t like oh, I’m going to do the Microsoft Windows beta and they just give you a beta copy and you go do things with it. In our beta, it’s actually a partnership. So when you sign up and join our beta, I’m going to have regular meetings with all of our beta customers to talk about what they’re doing. You know, how are things going? How can we improve things? Have you thought of this? Even if they’re short conversations—you know in the beginning I’m meeting with these customers every couple of weeks individually, which is taking up a lot of time, but that’s okay.
Peg: Yeah.
Brian: But really talking to them about AI and how they’re using it, helping them come up with ideas for their initial POCs, helping them navigate selecting the appropriate model, helping them figure out the privacy concerns and which ones are going to meet the needs of their industry, or just their company policies. So it really is a partnership. And then the idea is that by having that close knit partnership through the beta process, when we are ready for the general release, we fully understand as much as we can about all kinds of customers to make sure that the solution is meeting everyone’s needs, not just what we think they are.
Peg: Yeah, that’s very exciting. Oh my gosh, I can only imagine some of the business use cases and the conversations around the customers coming to you, and just from that, all of the new ideas, the “hey, let’s try this, let’s try that” that come from that brainstorming—you know, just talking about ideas. I think this is such an amazing area of growth in our industry and business alone. When I asked you about the workforce and job skills, I think about programmers that are trying to write bits of code or have an issue with a piece of code—not that they would share their existing production code, but maybe they can share a segment of code that’s not in production, or something that they’re working on, to help solve the problem that they’re running into. Or maybe they’re not sure how to write a piece of code and they turn to an AI assistant and it helps write that piece of code. I just think, oh my gosh, for some people that could be hours of time saved.
Brian: Absolutely. I mean even internally we use AI within our development team all the time for assistance in writing code to find solutions to problems, and even for test generation.
Peg: Oh, awesome.
Brian: We’re using AI in all aspects. Our first dip of a toe into AI was building a Slack bot that we named Alec, as in smart alec [laughs], and we intentionally gave it a little bit of attitude. But we gave it access to all of our customer-facing as well as our internal documentation about all of our products. That’s a Slack bot that we can converse with inside of Slack and ask questions about our products and how they work and how this feature interacts with this other feature. It actually does a fairly good job of pointing you in the right direction and giving you the reference documents to go and dig deeper. That was kind of our first experiment, and we’ve obviously progressed since then. That was early last year, I suppose, when we started with that, but now we have customers that are coming up with all kinds of great use cases. I had one customer—I’m trying to make sure I don’t violate any NDAs or anything—but I have a customer that’s in a highly regulated industry, and that industry’s regulations change based on state law. So they’re actually looking at having an AI agent that, when they have a customer’s information up on screen, will know what state they reside in and be able to answer questions about what that end user can and can’t do legally in that industry. Before, they would have to prep before they called that customer: oh, they’re in, I don’t know, Massachusetts or whatever, and they’d have to go and take a look at either the actual law or at least their summarized versions of what can and can’t be done. But AI can save a ton of time on little things like that, because they can actually ask questions of the AI while they’re on the phone with the customer, so they don’t have to go searching for those answers. They can simply ask, can I ask this customer this, and the AI can then come back and say, based on their state of residence, yes you can or no you can’t. It’s really powerful, and that’s actually an extremely simple agent to build, but it adds so much value to them, especially when you start talking about new employees that don’t know the ropes yet. It’s amazing the things that customers come up with.
Peg: I had a question for you about your internal chat bot and Slack bot Alec. Okay, so as you guys started working with it and using it—how did it get smarter? Do you know what I mean? I don’t know if I’m asking this the right way. How did it get smarter, or did you figure out along the way that you had to make changes so that it could get smarter?
Brian: Honestly, the main way that it got smarter was as the large language models got better. A lot of that was taking our documentation, our external and internal documentation which we keep in a wiki-style format, and making it available to the large language model. Not to get too technical, but that meant building vector indexing, etc., so that it can use context to look for information. Really and truly, once that part was done there wasn’t a whole lot of development that had to happen to it after that.
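For readers curious what that “vector indexing” step looks like in practice, here is a minimal retrieval sketch: each documentation page is embedded once, and at question time the closest pages are found by cosine similarity and handed to the model as context. It is a generic illustration using the OpenAI embeddings API, not a description of how Alec is actually built; the model name and data shapes are assumptions.

```typescript
// Generic retrieval sketch: embed documentation pages once, then find the pages
// closest to a question and hand them to the model as context. Model names and
// structure are assumptions; this is not Alec's actual implementation.
import OpenAI from "openai";

const client = new OpenAI();

type IndexedDoc = { title: string; text: string; embedding: number[] };

// Turn a piece of text into an embedding vector.
async function embed(text: string): Promise<number[]> {
  const res = await client.embeddings.create({
    model: "text-embedding-3-small",
    input: text,
  });
  return res.data[0].embedding;
}

// Cosine similarity between two vectors of the same length.
function cosine(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Embed every wiki page up front; this only has to be redone when pages change.
export async function buildIndex(
  pages: { title: string; text: string }[],
): Promise<IndexedDoc[]> {
  return Promise.all(pages.map(async (p) => ({ ...p, embedding: await embed(p.text) })));
}

// At question time, return the top-k most similar pages to use as model context.
export async function findContext(
  question: string,
  index: IndexedDoc[],
  topK = 3,
): Promise<IndexedDoc[]> {
  const q = await embed(question);
  return [...index]
    .sort((a, b) => cosine(b.embedding, q) - cosine(a.embedding, q))
    .slice(0, topK);
}
```

Once the index exists, the answering side is just a chat call with the retrieved pages prepended as context, which is why, as Brian notes, improvements in the underlying model translate directly into a smarter bot without much new development.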
Peg: Oh okay.
Brian: Most of the advancements after that came from the models themselves—we were using OpenAI for that, so it was advancements in the GPT model going from 3.0 to 3.5 to 4.0 to 4.0 turbo, etc. And that’s what’s great about this. You can build a solution that works great right now, and it can become amazing just because the model improves.
Peg: I love it. This is so fun to talk about. We’re going into trade show season and I’m wondering, will you be demoing or showing your new product at the upcoming conferences, talking about it?
Brian: Yes, yes. I will be there along with other members of our staff, obviously. I will be at COMMON in Fort Worth and we’ll be talking about this extensively. I’m also going to be at COMMON Europe—
Peg: Oh wonderful.
Brian: So I’ll be there for that as well, and I have been doing presentations at various local user groups over the last month. I’m a little bit of everywhere right now. Basically anyone who will let me talk about the things that we’re doing, I’m excited to talk about it.
Peg: I think we talked a little bit about the future, but really, what do you think about future trends in this area from a business applications perspective?
Brian: I think that these AI features, these agents and chat bots—whatever you want to call the generative AI features—are going to become an expected feature. Not an “oh, you have that?” It will become an “oh, you don’t have that?” That’s going to be a big shift, and that’s why we’re doing what we’re doing with this solution. We’re making sure that our IBM i customer base has the ability to keep up.
Peg: Yeah.
Brian: But that’s going to become the expected thing, and the way we see it right now is that the time will come, probably soon, when you need new features in your business application and you’re first going to ask, can AI do that for me, instead of, oh, we need to go out and plan a modification to the application. Before we make that investment: can AI do that? So we see a future where your business applications actually adapt to your needs instead of you adapting to your business applications.
Peg: Well, I feel like we’re almost there. We’re going to be there very soon.
Brian: It is.
Peg: All right well Brian, we have come to the end of our time together, but I wanted to just give you a quick opportunity. Is there anything that we didn’t talk about that you need to share or want to share quick before we wrap up?
Brian: Sure. So if you’re interested in talking to us about AI, I recommend that you go to the landing page for our beta program. There’s a form there you can fill out if you’re interested in talking to us about that, and we’ll gladly get in touch and have these kinds of conversations. But the big thing for me is that if you’re not talking about AI within your organization now, you’re already behind. These are conversations that you need to be having. Now, as for how you go about implementing it, we would love to help, and we feel like we have a great solution to help you get started quickly and safely, but if you choose to roll your own, that’s certainly possible. It will just require some investment and time, but the main thing is that you should be doing something.
Peg: Yeah.
Brian: We don’t want to see our IBM i customers falling behind the other platforms, and we don’t want our customers to feel like they’re limited by the platform they’re on or by the business applications that have served them so well for so many years. We don’t want them to feel like that’s their limitation, because it’s not. We feel like Profound AI is proof that you can bring this new cutting-edge world of AI to the applications you’ve been using for decades, and there’s no reason that they should hold you back.
Peg: Yeah, yeah. Absolutely. Absolutely. Well said. Thank you so much. Well, thanks everybody for tuning in, and as Brian mentioned, head over to Profound Logic to check out more information about the beta program and fill out the form. If you want to reach out to Brian, I know he would love to hear from you, right?
Brian: Of course.
Peg: Yeah, and then I know that there are some videos as well, Brian. It’s your YouTube channel, correct?
Brian: Yes. You can find videos on our YouTube channel. You can also find videos on both my LinkedIn as well as our CEO Alex Roytman’s LinkedIn. We’re both posting videos with sneak peeks, even some demos of the product as it stands right now, on our LinkedIn accounts.
Peg: Oh fantastic. Oh awesome. Well thank you so much, Brian May from Profound Logic, for joining me today on PowerTalk with Peg Tuttle. It was an absolute pleasure to have you here.
Brian: Thank you again.
Peg: Have a great day everybody.