
AI on IBM i: A POWERUp Roundtable

Highlights from a recent AI roundtable featuring IBM i experts Charlie Guarino, Jesse Gorzinski, Alex Roytman and Thomas DeCorte

From the raised floor of IT to the top of the corporate ladder, your organization likely consists of many individuals who have much to learn about AI and its potential business value. But those who run their operations on the IBM i platform can embark on this journey knowing that they already have one important thing going for them: their data.

This is one of the points raised during a roundtable session at the 2024 COMMON POWERUp conference, held in Fort Worth, Texas, in May.

“We have customers… sometimes [with] over 30 years of good data that they’re sitting on,” IBM’s Jesse Gorzinski, business architect for AI on IBM i, noted. “[AI models] are very data-hungry. For an AI model to be good, it needs more data. That’s when the models get refined; that’s when they get more efficient. With so much data sitting in Db2, that’s really an amazing edge when we start venturing into AI.”

Unsurprisingly, the subject of AI was prominent at COMMON, with each of the roundtable panelists also conducting an individual session on the topic. In addition to Gorzinski, the participants were: moderator Charlie Guarino, president of Central Park Data Systems (and TechChannel contributor); Alex Roytman of Profound Software, the IBM i modernization and tools provider; and Thomas DeCorte, a researcher and data scientist based at Belgium’s University of Antwerp.

Building AI Models on IBM i

DeCorte echoed Gorzinski’s comment about IBM i data, noting that the level of integration between Db2 and the operating system provides advantages in building AI models.

“You have…possibilities to build interesting things that actually perform very well,” he said. “And maybe as a side note, due to the vast amount of data, even somebody that’s not very good at building a model can produce such a high accuracy or precise model because [Db2 is] just feeding in so much information.”

The panelists agreed that IBM i clients exploring AI should start small. DeCorte suggested focusing on narrow-purpose models, such as customer churn prediction or sales forecasting. Roytman cited another practical application: using a commercial or open-source large language model (LLM) to build a chatbot that draws on data from structured documents.
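
A narrow-purpose model of the kind DeCorte describes can be surprisingly small. The sketch below trains a tiny logistic-regression churn classifier in plain Python; the features (months since last order, orders per year) and every data point are invented for illustration, not drawn from any real Db2 table, and real work would use a proper ML library and far more data.

```python
import math

# Hypothetical toy data: each row is (months_since_last_order, orders_per_year),
# label 1 = churned. Values are invented for demonstration only.
data = [
    ((1.0, 12.0), 0), ((2.0, 10.0), 0), ((1.5, 8.0), 0),
    ((9.0, 1.0), 1), ((12.0, 0.0), 1), ((8.0, 2.0), 1),
]

def sigmoid(z):
    z = max(-60.0, min(60.0, z))  # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-z))

# Train a two-feature logistic-regression churn model with plain SGD.
w = [0.0, 0.0]
b = 0.0
lr = 0.1
for _ in range(2000):
    for (x1, x2), y in data:
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = p - y
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

def churn_probability(months_inactive, orders_per_year):
    return sigmoid(w[0] * months_inactive + w[1] * orders_per_year + b)

# A long-inactive, low-frequency customer should score higher than an active one.
print(churn_probability(11.0, 1.0) > churn_probability(1.0, 11.0))
```

The point is less the algorithm than the workflow: with decades of clean transactional data already in Db2, assembling labeled examples like these is the easy part.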

“We work with a lot of IBM i clients and what I’ve seen [is] there’s always a backlog, things like reports to create…there’s never enough time to get to everything,” he said. “For me AI changes the game because you can create applications very quickly, number one. Number two, you can enable your end users to ask natural language questions and AI can basically create the reports on the fly. So those capabilities are there because we have a lot of database-driven applications. And beyond just training on your IBM i data, you can do integration of LLMs to interface with that data and answer natural language questions, which eliminates a lot of potential backlog.”
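
The pattern Roytman describes, an LLM translating a natural-language question into SQL that runs against existing data, can be sketched roughly as below. Here `ask_llm` is a placeholder that returns a canned answer (a real implementation would call whatever model API you use), SQLite stands in for Db2, and the `orders` table is an invented example schema; the guard that only vetted, read-only statements reach the database is the part worth copying.

```python
import sqlite3

ALLOWED_TABLES = {"orders"}

def ask_llm(question, schema):
    # Placeholder for a real LLM call that would receive the schema and
    # question; here we return a canned answer for demonstration.
    return "SELECT customer, SUM(total) AS revenue FROM orders GROUP BY customer"

def is_safe(sql):
    # Accept only a single read-only SELECT over allow-listed tables.
    s = sql.strip().rstrip(";").lower()
    return s.startswith("select") and ";" not in s and any(
        f"from {t}" in s for t in ALLOWED_TABLES
    )

conn = sqlite3.connect(":memory:")  # stand-in for the Db2 connection
conn.execute("CREATE TABLE orders (customer TEXT, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("Acme", 100.0), ("Acme", 50.0), ("Globex", 75.0)])

sql = ask_llm("Which customers bring in the most revenue?",
              schema="orders(customer, total)")
rows = conn.execute(sql).fetchall() if is_safe(sql) else []
print(rows)
```

Gating generated SQL this way is what makes “reports on the fly” tolerable in production: the model proposes, but only statements that pass validation ever touch the database.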

The panelists also highlighted other key topics surrounding AI and IBM i.

The Limitations of AI

The panelists expounded on AI’s limitations and the challenges that will have to be addressed going forward. Gorzinski mentioned an IBM i client that conducted model training on its inventory information. The good news is that the project came together rather quickly. However, he went on to add a couple of caveats—the first being that everything comes down to data.

“If a human doesn’t know anything about the data, the AI won’t either. That’s actually a key thing to remember. It’s very rare that you’ll have data that makes no sense to all the humans involved, but the AI will somehow magically figure it out,” Gorzinski said.

It’s true: AI isn’t magic. Nor is it the solution to everything.

“We have a lot of industries trying to train models. Especially in the financial and insurance sectors we have people doing risk analysis, doing all kinds of things that were previously done through classical algorithms, and now they’re saying, well, what if we throw AI at it? And…at least for some of the use cases, I suspect we’re going to discover that AI is actually not better,” Gorzinski added. “In some cases, the classic algorithm might still be the most effective route, especially when you factor [in] costs.”

Security and Large Language Models

The discussion later turned to LLMs, with DeCorte explaining why he wouldn’t use ChatGPT, the chatbot/virtual assistant developed by OpenAI, for business purposes.

“There have been several studies where if you try even some simple things, ChatGPT tends to fail,” he said. “Like if you ask for 10 synonyms of the same word, it tends to give you the same one five times. Also…the things you put in can be put back out, meaning that if you have a user or programmer who copied part of his code and the password was in there, then it can be put back out to a different user somewhere else entirely. So, you really need to watch out with it…I think most IBM i customers want a really secure environment where they can do their operations…so that’s why I would opt not to use that.”
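
One practical response to the leak scenario DeCorte raises is to scrub obvious credentials from text before it ever reaches an external model. The minimal sketch below uses two illustrative regular expressions; the patterns are far from exhaustive, and real deployments would rely on a dedicated secret scanner rather than this simple approach.

```python
import re

# Illustrative patterns only: catch "password=..." and "api_key=..." style
# assignments. A real secret scanner covers many more credential formats.
SECRET_PATTERNS = [
    re.compile(r"(password\s*[:=]\s*)[^\s)\"']+", re.IGNORECASE),
    re.compile(r"(api[_-]?key\s*[:=]\s*)[^\s)\"']+", re.IGNORECASE),
]

def redact(text):
    # Replace each matched secret value, keeping the key name for context.
    for pat in SECRET_PATTERNS:
        text = pat.sub(r"\g<1>[REDACTED]", text)
    return text

snippet = 'db_connect(host="prod", password=hunter2)  # API_KEY = abc123'
print(redact(snippet))
```

Filtering outbound prompts does not make a public chatbot safe for sensitive workloads, but it narrows the window in which a pasted password can end up in someone else’s completion.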

DeCorte added that IBM has its own family of LLMs, Granite, designed with security and auditability in mind. A set of Granite code models was recently released to the open-source community.

AI Is Manageable

Following further discussion of ethics, legislation, energy consumption and other challenges surrounding AI, Guarino offered this closing statement: “I’ve spoken to some people even at this very conference and they said, ‘I’m afraid of AI’ or ‘I don’t want AI.’ And I think in those conversations that I’ve had, it’s just this broad—it’s too abstract to them. They grapple with this, and they can’t figure it out. AI is just this big scary monster. I think if you spend some time educating yourself, learning about what it actually can do and what it can’t do, things like that, it becomes much more manageable in your head and in reality. Hopefully, that’s a fair statement.”