
How AI Is Changing the IBM i World

This transcript is edited for clarity.

Charlie Guarino: Hi everybody. This is Charlie Guarino. Welcome to another edition of TechTalk SMB. In today’s meeting, I’m very happy to say, I’m with a friend, a business leader, and a thought leader in our IBM i community. It’s Mr. Alex Roytman, who is the CEO of Profound Logic. Alex, thanks so much for joining me here today.

Alex Roytman: Thank you, Charlie. It’s always a pleasure. It’s always nice to talk to you.

Charlie: Thank you, Alex. I have to tell you, I know your company has been around for 25 years, and I’ve known you not all of those years, but many of them. One thing that I’ve always found very interesting is how you are different from many other CEOs that I work with at some of my customers, and what I mean by that is really how technical you are. Many CEOs, many business owners, are not nearly as technical as you are. Even to this day, I think you’re very involved with technology. You’re really hands on, because many CEOs just work with strategy, but you’re much more hands on, highly technical. What do you say about that? Talk about that a bit. How do you see the future of technology, and what really is your role at Profound Logic?

Alex: Well that is very true, Charlie, and I’ll take it as a compliment. I do like to kind of dig in there and understand what we’re working with. Especially the type of company that we are where we try to take our customers into the future and look to what the future brings in terms of technology. I feel like I have to and I enjoy it. I enjoy getting in there and understanding the various different technologies that are coming, and really being hands on I think is necessary to do what we’re doing.

Charlie: We were talking before this meeting started, Alex, and one thing that you mentioned caught my attention, and that is that in our industry right now, the big, big buzzword is modernization. We talk about that all the time, modernization, and I often say that modernization, that term is so overused and it has kind of lost its meaning. I think the road map today, it should be focused on digital transformation, but you took it a step further. You threw a word at me that I had never heard before, but I think might even better encapsulate where we need to move, and it’s the word futurize.

Alex: Yes. Yes.

Charlie: So what does that word actually mean? I have some idea obviously, but what does futurize mean? How do you futurize an application versus modernize an application?

Alex: That’s a great question. So a little while ago we took a step back and we tried to determine what our brand is and kind of like what we do, and we realized that we are overusing this word modernization, which brings on the connotation that you’re going like from the past to the present. We realized that what we do is a little different in that we actively try to think about the future and the future technologies that are coming. So modernization kind of doesn’t do us justice is what we figured, so we coined this term, futurization, and we started using the word futurization because that better describes what we try to do. You know like all of the things that are happening with AI, we thought it was important—it was our identity so to speak to kind of jump in there and understand it and do so before everyone else does so that we can bring that knowledge to our customers and the people that we work with. So hopefully that gives you a sense of why we like the word futurize versus modernize. Even though I think futurizing encompasses modernization to some extent, futurizing is a bigger umbrella than just modernizing your applications.

Charlie: So you mentioned AI, and that’s I think a big reason why you’re here today, why I invited you here today, because AI has really captured our imagination in so many different ways. I know for me, and I say this so many times to people: I’m as excited today about technology as I was when I first got into it, you know, however many years ago, because it really has completely revolutionized—which is a funny word I suppose in IT—but revolutionized the industry once again. It’s changed the direction.

Alex: Yeah, yeah. I feel the same way as you do. I am super excited with everything, and sometimes I’m kind of shocked to see that others around me—you know, some people are excited, but you also see people that are like, ah, this is nothing interesting. No, don’t you see what this can do? I’ve been a big proponent of this stuff ever since I got exposed to GitHub Copilot. I don’t know if you’ve used the tool, but I got it and immediately I realized that hey, this is different. This is not like your typical auto-complete. This has some kind of intelligence in it that is way beyond that, and so I got super excited. I had to tell my entire team. I had to say, we’ve got to get involved, we’ve got to learn more about this. In fact, we started an internal AI meetup at Profound where we get together and talk about AI and how AI is going to change things, whether that’s how we do things or whether that’s for our customers and the type of solutions that we bring to them.

Charlie: Well Alex, that raises an interesting point. I think we are obviously in agreement about how excited we are about this new technology, but there’s a concern in that there’s a lot of hype about AI as well and what it can do, what it cannot do today, and the perception of what it can do. I think that can be disconcerting to a lot of people. They’re afraid that this is an all-knowing, all-powerful tool. Maybe it is; I don’t know—

Alex: Yup.

Charlie: But how do we separate the hype, you know as far as what’s being used today, how is it practical in today’s applications versus the hype that’s associated with it?

Alex: Yeah, I think it is very important to understand what the limitations are, and this is where using the technology and playing with it [will give you] a sense for it. It’s hard to describe in a sentence what all the limitations are, but as you start to use it, you’ll understand where it’s strong and where it’s weak. I have seen that a lot of people will exaggerate the capabilities of what AI can do. It can do a lot of interesting things, but it doesn’t automatically replace all the humans or anything like that. At least at this point in time it’s not like an AGI where it’s sentient—and I guess I can’t even argue that; it depends on the definition—but I don’t see it, let’s say, destroying the world. That’s not where this thing is. So some of it has been exaggerated, so it’s important to understand the limitations, but it’s also important to understand what the real concerns can be. It’s a tool, and if it’s used by the wrong people, if it’s in the wrong hands, hackers for example, they now have that much more power if they can use AI. In fact, it’s so easy, for example, to impersonate someone using AI. This is something that you could perhaps do before AI, but AI just makes that type of stuff easier. By the way, I don’t know if you’ve seen this, but ChatGPT recently released this voice capability. Have you tried that?

Charlie: No, but I have read about it. Again, it’s fascinating and it makes your mind think about all the different possibilities of what it can do.

Alex: Exactly, yeah. The interesting thing about that: I’d already gotten used to the language capabilities of the AI and how intelligent it sounds. It sounds just like a human, but what they did with the voice, the voice itself sounds like a human, to where it will sometimes do a little pause, an um, and take a breath before it speaks to you, so you really feel like you’re talking to somebody human and it’s hard to tell that this is just an AI. So yeah, when I got that capability—and I think you need the Plus subscription and the ChatGPT app on your phone—I was on this thing for hours just talking to it, because it was really cool. I was excited to see it, but anyhow, I’m sidetracking here.

Charlie: I can tell by your excitement how interested you are in this whole topic, how it’s also again captured your imagination as well. So let’s talk about IBM i or even just business in general. How do you think or how do you see customers using this technology as it stands today? Where do you see customers using this in practical applications?

Alex: Yeah, so that’s one of the things that we thought to investigate and dive deeper into: you’ve got this intelligent large language model, if you want to call it that, a tool that you’re talking to. It can give you responses, but how do you integrate that with your data? So the thing to realize is that the tool has APIs, and here I’m talking about ChatGPT, but you can say that this applies to other things. You know, there are other models out there. There’s a company called Anthropic, I don’t know if you’ve heard of that one, but they’re comparable to ChatGPT in their capabilities, and then there are also open-source models. But at the end of the day you can interface with these models through APIs, programmatically, and that’s the rabbit hole you’ve got to go down. So one very simple example, and it came out of one of those AI meetups that we were doing internally within the company: through the API you can build this little business intelligence tool that you can communicate with through natural language, and that was very simple to implement. You’re basically saying, hey, here’s a query from the user; given this database, translate that query to a SQL statement; and then we go ahead and run that SQL statement and present the data to the user. So we built a little tool, and I can see that being applicable for customers, where you can talk to your applications and ask questions about your orders, about your inventory, or whatever, just using language, not having to know SQL, not having to know field names or column names, table names, all of that.
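The flow Alex describes here, question in, SQL out, can be sketched roughly like this. The schema, the prompt wording, and the injected model call are illustrative assumptions, not Profound Logic’s actual tool:

```python
# Minimal sketch of a natural-language-to-SQL BI tool. The table
# definition and prompt text are made up for illustration; `model` is
# any callable that takes chat messages and returns text, e.g. a thin
# wrapper around the ChatGPT API.
SCHEMA = (
    "CREATE TABLE orders ("
    "order_id INT, customer VARCHAR(40), total DECIMAL(9,2), ship_date DATE)"
)

def build_messages(question):
    """Pack the table definitions and the user's question into a chat prompt."""
    return [
        {"role": "system",
         "content": "Translate the user's question into one SQL SELECT "
                    "statement for this schema. Return only SQL.\n" + SCHEMA},
        {"role": "user", "content": question},
    ]

def question_to_sql(model, question):
    """Ask the model for SQL; the caller then runs it and shows the rows."""
    return model(build_messages(question)).strip()
```

Whatever statement comes back should still be validated, for instance confirmed to be a read-only query against known tables, before it is run against the database.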

Charlie: So while you were talking just now I was making some notes, and you threw out some terms that I think are worth just pointing and just expanding upon. The first one you said was the large language model—that’s a term I hear all the time. I also hear it as LLMs; I hear that abbreviated sometimes as well, but what exactly is a large language model in this context?

Alex: Yeah, so back prior to these large language models becoming popular, everyone thought that, oh, to use AI you have to train it with your own data and get into the neural networks. But what’s interesting is that these huge models have been created, these neural networks that encompass a whole bunch of knowledge from the internet. I’m sure they scraped a whole bunch of data to put this together, and inherent in it is, you know, a form of intelligence. It’s something that can be creative; it can deduce things. So it’s not just looking things up in a database, which is how we think of computing in a traditional sense; there’s kind of an intelligence to it. These large models have been built on huge servers with many, many parameters—building something like this for me, or for an individual, would be impossible—but using something that was already pre-built, that has a whole bunch of knowledge in it, really opens up the doors for all kinds of applications. It has an innate intelligence, is one way to describe it, rather than a very specific set of data that was encoded into it.

Charlie: I know through my own reading, and through what our customers are doing, that APIs are everywhere. We are in what we call the API economy, and you mentioned it also. You mentioned APIs right in here as well. If you’re going to be working with this technology at all, you really need to be using them—APIs are the way to connect. That’s foundational at this point.

Alex: Yeah, and the previous example that I mentioned, that was an example of consuming the APIs. The same thing that you would do with a large language model, something like ChatGPT, through the browser interface, you can do through an API; you can do it programmatically. But another way to approach this is that you can create your own API that allows access to, perhaps, your IBM i data. You’ve got to be security conscious when you do all that, and control what you want to expose, and then you instruct the large language model to use your API to answer various questions. So that’s kind of the reverse approach to doing the same thing, but it’s the concept that you can either consume an API or you can provide an API, and providing an API to a large language model, an API that provides information about your business, can be very powerful. So again, this allows you to create an application that lets you have a conversation with your applications, and the large language model will on its own figure out how to call the appropriate API that you’ve exposed to it.
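The "provide an API" direction Alex mentions is what OpenAI-style tool calling does: you describe your endpoint to the model, and when the model asks for data you run the call locally and send the result back. A minimal sketch, with a hypothetical inventory lookup standing in for real IBM i data access:

```python
# Hedged sketch of exposing your own API to a model via tool/function
# calling. The tool-schema shape follows OpenAI-style function calling;
# get_inventory is a made-up stand-in for a real DB2 for i query.
import json

TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_inventory",
        "description": "Look up on-hand quantity for an item number.",
        "parameters": {
            "type": "object",
            "properties": {"item": {"type": "string"}},
            "required": ["item"],
        },
    },
}]

def get_inventory(item):
    # Stand-in for a real query against your inventory file.
    fake_db = {"WIDGET-7": 42}
    return {"item": item, "on_hand": fake_db.get(item, 0)}

def dispatch(tool_name, arguments_json):
    """Route a tool call requested by the model to local code; the JSON
    result goes back to the model so it can compose its answer."""
    args = json.loads(arguments_json)
    if tool_name == "get_inventory":
        return json.dumps(get_inventory(**args))
    raise ValueError("unknown tool: " + tool_name)
```

In a full loop, `TOOLS` is sent with each chat request, and whenever the model responds with a tool call, `dispatch` runs it and the result is appended to the conversation. Only the endpoints you choose to describe are reachable, which is where the security control Alex mentions comes in.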

Charlie: Another thing we talked about, Alex, was the training of the platform itself. You said using your data perhaps, or using public data, but what if someone wants to get involved in this? What kind of pre-training or requirements might they be looking at to get this off the ground?

Alex: A lot of people coming into this think, well, if we want it to be knowledgeable about our unique business data, our proprietary business data, then we’ve got to train the AI to know about it, and the traditional notion of training a model kind of implies, at least in AI circles, that you’re training something from the ground up. That can be applicable sometimes, but what I find is that in most cases you don’t need to do that. You can start with an existing large language model and then use it, and there are two different ways that you can use it. Let me give you an example of how context could work. Let’s say you wanted to ask a very technical question, say about RPG programming. As context, you could provide the RPG manual to the large language model and say, here’s a manual; now answer this question that I have about RPG programming. So that context can just be inserted into the conversation—here I have a manual; answer some questions from it. Now currently—the context window is what they call it—how much context you can provide can be limited. I think the largest that I’ve seen is around 100,000 tokens, which give or take is 100,000 words, and that is still a lot. That can still be a book of information. So providing context is one way, and there are also some clever approaches if you have more than that to provide as context: you can divide it up. For example, I’m asking a question about RPG, but I’m specifically asking about record level access. Instead of providing the whole manual for that question, you can say, I have this whole manual, but first search and subset the information to the stuff that relates to record level access. Then, when I ask the question, only that context is provided from the manual.
So this whole idea of using context, you can really get far with it, and I like that a lot more than starting to think about training your own model from scratch. But if you do run into limitations with providing context, you can either train a model—which, again, is not really within reach for most people—or you can do what they call fine tuning a model. That means you’re starting with an existing model, but then you’re providing a whole bunch of example questions and answers. You do need a lot of data for it, so this is going to be a more difficult effort, I believe, compared to that context concept. But again, you can fine tune an existing model much more easily than training something from the ground up. Through fine tuning you’re actually modifying the weights of the model, and the changes are then permanently stored as a new model. So to summarize, I don’t know if training in the traditional sense is the best way for most people, but through fine tuning, or even better in most cases, through that context window, you can get your application to function the way that you want.
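The "search and subset first" idea Alex describes reduces to a few lines: score the manual’s chunks against the question and keep only the best ones for the prompt. Production systems typically use embeddings for the scoring, but this keyword sketch, with made-up manual snippets, shows the shape:

```python
# Toy retrieval step for the context-window approach: rank chunks of a
# manual by keyword overlap with the question, keep only the top few,
# and send those (not the whole manual) as context to the model.
def select_context(chunks, question, max_chunks=2):
    q_words = set(question.lower().split())
    def score(chunk):
        return len(q_words & set(chunk.lower().split()))
    ranked = sorted(chunks, key=score, reverse=True)
    return ranked[:max_chunks]

# Illustrative stand-ins for sections of an RPG manual.
manual = [
    "Record level access reads and writes rows with READ, WRITE and CHAIN.",
    "The %TRIM built-in function removes leading and trailing blanks.",
    "Embedded SQL lets RPG programs run SELECT statements directly.",
]
best = select_context(manual, "How does record level access work with CHAIN?")
# `best` now holds the record-level-access material, which goes into the
# prompt ahead of the question.
```

Swapping the keyword overlap for embedding similarity changes only the `score` function; the subset-then-ask structure stays the same.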

Charlie: Do you think fine tuning a model helps with reducing some of the bias that you’re getting back from the platform itself?

Alex: Yeah, that’s exactly what the point of the fine tuning is, but there are some gotchas, and the biggest one is that you do need quite a bit of data, quite a few examples, and you don’t always have that. It can be labor intensive to fine tune a model.

Charlie: Right, and because the data biases that you’re getting can give you misleading information on the way back, so that’s important to know as well.

Alex: Yup, yup. I know we’re going off on a tangent, but you mentioned information that can be misleading, and they use the word hallucinations. I don’t know if you’ve heard that term: the model is hallucinating. It’s giving you something that isn’t true, and this is where we found the concept of an AI agent useful: it keeps the conversation going, and when you’re trying to integrate with your business data, it can do so in an automated way. I’ll give you an example of one thing that we’ve done. We’ve actually created an AI agent that can create COBOL and RPG programs—and you know we’ve traditionally been in the business of modernizing RPG applications, but more recently we’ve seen more demand for COBOL, so we’re doing more with COBOL these days. So we’ve created a COBOL developer AI agent, and it’s a little different than going into ChatGPT and saying, here, I have this requirement; write a COBOL program for it. That would only be the first step—the program could produce the wrong results. But our agent—and this whole concept of an agent is not our concept; it’s a very common concept, but we were able to implement it with COBOL on IBM i—will then compile the program, and if the compile fails, it communicates that back to the model and keeps the conversation going in a bit of a loop until the compile is right, and then it proceeds to the next step of running the code. It ensures that the code runs as expected. So this concept of an agent, and having some way of validating the output that comes back from a model, is a good way to battle those hallucinations and the wrong information that might come from it.
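The agent loop described here, generate, compile, feed the errors back, can be sketched with the model and the compiler injected as plain callables. A real agent would call an LLM API and an IBM i compile command in their place; this just shows the control flow:

```python
# Skeleton of a generate/compile/retry agent. `generate` and
# `compile_check` are injected callables so the loop is testable;
# in a real agent, `generate` wraps an LLM call and `compile_check`
# wraps something like CRTBNDCBL and captures the compiler messages.
def agent_loop(generate, compile_check, spec, max_attempts=5):
    """Keep feeding compile errors back to the model until the code builds,
    or give up and escalate to a human developer."""
    feedback = ""
    for attempt in range(1, max_attempts + 1):
        source = generate(spec, feedback)
        ok, message = compile_check(source)
        if ok:
            return source, attempt    # next step would be to run/test it
        feedback = message            # the error text becomes the next prompt
    return None, max_attempts         # escalate to a human
```

The same loop extends naturally past compilation: after a clean compile, the program is run and its output checked, with failures fed back the same way.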

Charlie: I guess that would also apply to things like improving how SQL performs, improving the format of the SQL, improving the nature of the queries themselves, things like that, to get better results or faster results perhaps.

Alex: Yeah. Actually, for one of our projects, this is exactly what we’ve done. We had a bunch of existing SQL that was complicated and wasn’t formatted well, and by running it en masse, for the entire application, through a large language model, we were able to fix up the formatting and optimize it, but at the same time validate that it’s giving us the right stuff.

Charlie: It’s really amazing what this tool can do. Again, thinking of ChatGPT for example, another thing that comes to my mind is creating test cases or use cases for your code. We talk about creating code and then creating test cases to actually verify that the code itself is good.
Where have you seen that being used or how have you seen that being used?

Alex: Yeah, so that entire use case that I described with creating an agent, this is when we realized how important something like having an automated testing system in place would be. We’ve been involved in transformation projects, and as part of that we were going deeper and deeper into implementing automated testing, and that’s good overall. That’s good whether you’re using AI or not. We’ve used various tools for it; on the Node.js side, for example, we’ve used Mocha a lot. But if you start with test-driven development and develop your tests first, that gives you the opportunity to ask AI to fix code or write code, and the great thing is that if it messes up, that’s okay. You can throw the bad results out, or you can provide more feedback to it to keep working until it gets it right. So the idea of automated testing has been key in getting the AI to work. Going back to the COBOL example, when we first started out with this concept of asking the AI to write COBOL programs, I think in something like 90% of cases the initial program that it would create was not the right one, or it would fail. But it’s through the conversation, through that refinement by the automated testing system—and you can call it an automated testing system because it will automatically compile the code, run the code, and verify that the output is what we would expect—[that] we were able to get the AI to go from a 90% failure rate to a 90% success rate. And the nice thing is that for the 10% where it couldn’t finish the job, we just escalated that to a human to look into. It wasn’t that it built bad code that was committed into our repos.
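A test-first gate like the one Alex describes can be as simple as running the candidate code against pre-written cases and treating the failure list as feedback, or as grounds to escalate to a human. A minimal sketch, with a trivial stand-in for AI-generated code:

```python
# Test-driven gate for AI-generated code: the tests exist before the
# implementation, and a candidate is accepted only when every case passes.
def run_cases(fn, cases):
    """cases: list of (args, expected). Returns a list of failure messages;
    an empty list means the candidate passed and can be accepted."""
    failures = []
    for args, expected in cases:
        try:
            got = fn(*args)
        except Exception as exc:
            failures.append(f"{args}: raised {exc!r}")
            continue
        if got != expected:
            failures.append(f"{args}: expected {expected!r}, got {got!r}")
    return failures

# Tests written first, before asking the AI for an implementation.
cases = [((2, 3), 5), ((0, 0), 0), ((-1, 1), 0)]
candidate = lambda a, b: a + b      # stand-in for AI-generated code
assert run_cases(candidate, cases) == []
```

A non-empty failure list would either go back to the model as feedback, exactly like the compile errors in the agent loop, or trigger escalation to a human after too many attempts.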

Charlie: After hearing all these different things—and we haven’t even talked about from the pure business perspective, we’re talking from a developer perspective—but how do you see or how do you explain to other CEOs how AI can give them a real competitive advantage in the marketplace? How do you see that or how do businesses capture and harness this and use it to their advantage from a business perspective, not just from a developers’ perspective?

Alex: Well, I think at a high level, the statement that I can make is that if you don’t embrace AI, then your competitors will. It may take a little time and it may take a little bit of investment—especially at this stage, there’s a lot being figured out—but I think you have to jump on the bandwagon. You can’t let this pass you by, because your competitors are looking into this. There are many business cases—you know, the BI example that I gave is one of them. We created a really interesting—we’ll call it a proof of concept—which is a copilot that sits on top of 5250 applications. As an end user using an application in a 5250 environment, if you’re stuck and you don’t know how to use something—which is common with older green screen applications because there might be cryptic function keys or little abbreviated codes or whatnot—you can ask the copilot, how do I do this within this application? And it can guide you through it, and it can also do some of the actions for you. So you have to take capabilities like that and distill them down to the areas of the business where this could be helpful, or something like the business intelligence example that I brought up earlier.

Charlie: What kind of advice could you give somebody who’s just dipping their toe in the water in this whole process? They know they want to do this, they know they need to do this, but they don’t even know how to start. What would be a very good first step for somebody? What’s your first step and maybe second step? How do you embrace this?

Alex: Good question. So the first step, I would say, is to start using AI in your day to day. I think you’d be surprised how much it can help. If you’re a developer and you’re working on something, get into the habit—be it with ChatGPT or Anthropic or one of the other tools—of having AI be your companion and asking it questions. Oftentimes it can generate snippets of code for you. So just using it is the first step. If you’re using ChatGPT, go ahead and get the Plus subscription. It’s 20 bucks a month, but I think it’s worth it because you’re getting the latest and greatest tool. The models are smarter, and specifically what I find, for RPG programming for example, is that the latest and greatest model just does a little better with RPG than the earlier one does. So that would be the first step: just make sure that you’re using it on a regular basis. A lot of folks feel odd about it, and you have to get through that hurdle. It is odd in the beginning; you’re asking it things and you’re not always going to get the right answers, so you have to play with it and get a sense for how to ask the question. So I think the first step is just to start using it, and then if you want to go deeper, I would say look into the ChatGPT API. The API is actually quite simple, because at the end of the day what’s important is the prompt, and the API is simply sending over the prompt. There’s a little more nuance to it, but at the end of the day it’s just sending over some messages, some free-form messages in natural language. It’s that simple. So start playing with it and see what you can do with it.
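To make the point concrete, calling the API really is just posting messages over HTTP. This sketch uses only the Python standard library; the endpoint and payload shape follow OpenAI’s chat completions API, while the model name and key handling are placeholders:

```python
# Bare-bones chat completion call with no SDK, to show that the API is
# "just sending over some messages." The model id and key handling are
# placeholders; check the provider's docs for current values.
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(messages, model="gpt-4", api_key="sk-placeholder"):
    """Assemble the HTTP request: a JSON body with the model and messages."""
    payload = {"model": model, "messages": messages}
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer " + api_key},
    )

def chat(messages, model="gpt-4"):
    """Send the messages and return the assistant's reply text."""
    req = build_request(messages, model, os.environ["OPENAI_API_KEY"])
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Usage is a single list of role/content dictionaries, for example `chat([{"role": "user", "content": "Explain RPG's CHAIN opcode"}])`; everything else is plumbing.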

Charlie: And it’s funny you mentioned the prompt. In my learning about AI as well, the art of the prompt has been very fascinating to me: how you can use almost the same set of words, just rearranging them or tweaking their nuance, and get a very different response back from the engine. It’s really fascinating.

Alex: For sure. I learned a lot about the different ways to prompt, and maybe this is a good opportunity for me to tell you about something we built that we call Alex. It’s an internal chatbot that answers questions about Profound’s technologies—not just customer-facing information, but even some of the stuff that’s documented internally. This is something that I was involved in developing, and the biggest effort was to figure out the prompt. The coding, the actual programming, was less of an effort than figuring out what they call the system prompt, the initial prompt that tells the model what it does and how it behaves. So we were messing with it, iterating and trying it, until it would answer questions in the way that we wanted. Writing that free-form text, that natural language text, was the biggest thing. They call that prompt engineering, and you’ve just got to play with it and get a feel for how the model works and how it’s going to generate answers back to you based on your prompts.
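A system prompt of the kind described here might look like the following; the wording is purely illustrative, not the actual prompt behind the Profound chatbot:

```python
# Illustrative system prompt for an internal support bot. The system
# message sets identity and behavior; retrieved documentation and the
# user's question ride along in the same conversation.
SYSTEM_PROMPT = (
    "You are an internal assistant for Profound Logic. Answer questions "
    "about Profound products using only the documentation provided below. "
    "If the documentation does not cover the question, say so rather than "
    "guessing. Keep answers under 150 words."
)

def build_chat(question, docs):
    """Combine the system prompt, the retrieved docs, and the question."""
    return [
        {"role": "system",
         "content": SYSTEM_PROMPT + "\n\nDocumentation:\n" + docs},
        {"role": "user", "content": question},
    ]
```

Iterating on `SYSTEM_PROMPT`, trying a wording, checking the answers, adjusting, is exactly the prompt-engineering loop Alex describes as the biggest part of the effort.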

Charlie: Yeah. One of the things that when I was playing with it, we were asking the same question, but then we would add to the prompt, respond as this person.

Alex: Oh yeah. That’s a good example.

Charlie: And it was so fascinating to see the different responses, the same identical question in different personalities. Again this goes back to how we started the conversation why it’s so exciting to me, because it’s just such a game-changer at its core.

Alex: Yeah, that’s actually a good trick. Sometimes you want to say you know, answer this as if I’m in fifth grade or answer this as if I’m a PhD, and you will get different insights and different explanations depending on what level of explanation that you want from the model.
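That persona trick is literally just a prompt wrapper; a one-liner makes the point (the phrasing here is illustrative):

```python
# Same question, different audience: the persona is prepended to the
# prompt, and the model adjusts the level of its explanation.
def with_persona(question, audience):
    return f"Answer this as if I'm {audience}: {question}"

q = "What is a large language model?"
fifth_grade = with_persona(q, "in fifth grade")
phd = with_persona(q, "a PhD")
```

Each wrapped string would be sent as the user message; only the framing changes, yet the responses differ markedly in vocabulary and depth.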

Charlie: Exactly. All right, so now I’m going to put you on the hot seat before we wrap this up, and the hot seat is this: this has been out for—we’ll call it a year. AI is not new of course, but ChatGPT landed on the scene about a year ago, give or take, and it’s again revolutionized our technology overall, in my opinion. Is it even possible to come up with a prediction with any accuracy at all, short of asking ChatGPT what the future is, for the next five or ten years, or even the next three years? Can we even predict that far out anymore, or is it so exponential that we can’t even begin to imagine it?

Alex: I think that’s a very difficult question and it is difficult to predict, but one thing that I can say for certain is that it is something that’s going to be a part of our lives. It’s going to make an impact. We just don’t know exactly how fast it’s going to grow. We don’t know the exact impact it’s going to have. We don’t know which jobs are going to be affected and how, but we know there are going to be big shifts. I can say that with certainty.

Charlie: And it’s literally in every aspect of our lives, from household appliances to cars to our smartphones and anything in between. It’s already there and all these things are going to use AI. If they’re not using it today, they will be using it very soon in the future.

Alex: Yeah, yeah, absolutely. You mentioned cars and my eyes lit up because I’m a big fan of self-driving cars. I’ve been playing with that forever and I love how that’s progressed. My car now gets me from place to place. No matter what anyone says, it’s a big convenience feature for me, although I do scare a lot of people that are passengers. They’re like no, you’ve got to pay attention [laughs].

Charlie: How soon before you’re ready to sit in the backseat by yourself and have the car bring you somewhere?

Alex: I think that’s a big hump, because even though the car today can get me from place to place most of the time without me interjecting, the edge cases are going to take a long time. So I don’t think that’s coming super soon. You have guys like Elon Musk making promises, but I don’t think it’s coming really soon. I think it’s going to be a while before you’re going to trust it that much, and of course all those safety [regulators] would have to approve that, and I don’t see that moving too fast.

Charlie: We’re not quite there yet. Hey Alex, this has been such a great discussion. Thank you so much. This topic is just so fascinating. I just love talking about it. It’s wonderful. So thank you very much for your time today. It’s been a lot of fun, a real treat and a lot of fun to sit here with you and chat—no pun intended—chat about AI.

Alex: Well thank you, Charlie. It’s also a pleasure for me. This is a topic that excites me very much, so I’m very glad that you brought me on and you had this conversation with me.

Charlie: Terrific. Thanks so much, Alex. For everybody else listening, by all means please visit the TechChannel website. They have so much great content on there. You’ll be far better off I promise you. And also, as Alex did suggest, definitely consider opening up an account with ChatGPT or one of the other platforms. You will not be sorry that you did. This is a great technology to get involved with. Until next month everybody, take care. We’ll speak to you soon. Bye now.