Video: Supercharge Your IBM i Applications With Generative AI
This transcript has been edited lightly for clarity.
Artificial intelligence has been getting a lot of attention these days. Elon Musk believes that it’s the most disruptive force in history and that it will eventually take over the workforce. Now, whether or not that’s true, there’s no doubt that artificial intelligence is going to have a big impact on a wide range of industries. Gartner predicts that by 2025, the use of AI will be a top concern of 70% of enterprises. And this is especially true in knowledge work. Already in 2023, in technology, media and telecom, 14% are regularly using generative AI at work and only 12% have no exposure or don’t know. Artificial intelligence has been around for a long time. In 1950, Alan Turing introduced the Turing test, and the term artificial intelligence was coined in 1956 at the Dartmouth Conference. It wasn’t until much later, however, that the computing power artificial intelligence required became available.
Early chatbots in the 1990s were programmed to identify specific keywords and respond with scripted responses. They were very good at one specific task, like Deep Blue, which was able to play chess and beat the world champion. In the early 2000s, digital assistants were able to recognize and interpret speech and images. They could classify things and find information that was already known. This is like Watson, which was able to win the Jeopardy championship. AI really got the public’s attention and went mainstream, however, in 2022, when OpenAI introduced ChatGPT. And since then, all of the other big tech companies have jumped on board with their own artificial intelligence. Google has Bard, Meta has Llama 2, Microsoft has its own artificial intelligence, there’s generative AI on AWS, and Elon Musk has introduced Grok, his own generative AI program. And of course Watson, the AI that won the Jeopardy championship, is still around and available from IBM. We’re going to be using OpenAI and ChatGPT, but the concepts are going to be the same for all of these others. Now, generative AI is different because it doesn’t just look up things that it already knows. It’s able to create new content on the fly. It’s generative. And this is what we’re going to look at today.
Getting Started With ChatGPT
To get started with ChatGPT, we’re here on openai.com/chatgpt. We’re just going to click the “Try ChatGPT” button right here. It does require you to create an account if you don’t already have one, so you’ll need to go through the signup process with your email. And if you have an account already, you just click Log in, enter your email address and your password, and log in. And this is ChatGPT. So I’m using the 3.5 version.
The other option is GPT-4. That requires an upgrade to Plus, and that’s a monthly membership. So I’m just using the ChatGPT 3.5 version. And over here is where we can have a conversation with ChatGPT. So we type in a message and ChatGPT is going to respond back to us. And then over on the left here is going to be a history of all of the chats that we have. So we can open up a new chat and start over. And one of the things to keep in mind is that ChatGPT 3.5 does not have access to the internet and cannot provide real-time information, but it can look up historical information as of its last update in January 2022. So that’s something to keep in mind. Now, some of the other models do have access to the internet and can get you real-time information.
But ChatGPT 3.5 does not. Okay, we’re going to start a new chat, and what we’re going to ask about is the AS/400. So let’s ask when the AS/400 was created. So it was created in 1988. It was popular for its reliability, security, and workload handling. And let’s ask what programming languages were available. So, RPG, of course, COBOL, CL and ILE, which means you could use RPG, COBOL, C and other languages in an application, and Java; as the AS/400 evolved, Java became available. Now, one thing about a conversation within a particular chat is that you can refer to prior questions. So here what I will say is simply, was BASIC available as well, and it will understand the context of the conversation. It knows that I’m asking about programming languages that were available on the AS/400. And we’re going to start a new chat here and ask, what are the three top benefits of the RPG programming language?
This is where we’re going to start asking ChatGPT to be creative and actually generate some information. Now we’re going to ask ChatGPT to write a haiku about the benefits. This is where we’re getting into the area where you can’t just go into Google and search for things. This is where these AI products are going to be generating responses. So this is where it really gets fun, because now we can start getting creative with things. And now what I want is to create an error message that lets the user know they need to fill out all the required fields. Well, “Please complete all required fields before proceeding.” That is definitely an appropriate response, but let’s see if we can’t get a little bit better. So I wanted the message as a kindergarten teacher: “Whoopsie, looks like we forgot something.” So this is where it’s using its generative capabilities, and it understands how a kindergarten teacher would speak.
So now we want it as an evil dictator: “Failure to complete all required fields shall result in severe consequences.” So that’s definitely like an evil dictator. And now we want to make it sarcastic: “Oh, what a surprise. Seems like someone decided to skip the required part.”
So that’s just kind of an introduction to ChatGPT and the individual chats. Now, we don’t want to have to log into ChatGPT and have a conversation through the browser, so what we’re going to look at next is how to use the API and programmatically interact with ChatGPT.
Setting Up API Keys With ChatGPT
So now we want to take a look behind the scenes and see how all of this stuff works. So we’re going to open up and read the documentation. The documentation starts off with an introduction to some key concepts, and one of those is tokens, but we’re going to come back to that in just a second, because what I want to talk about first is the different models that are available. So, what is an AI model? I think one of the best explanations is from IBM, and they explain that it’s a program that’s been trained on a set of data and applies different algorithms to your input in order to get the desired output. What does that mean? Well, we’ve been using ChatGPT 3.5; GPT-4 is available if you have the Plus subscription. And these are models that can understand and generate natural language, so you can have a text conversation.
DALL-E is a model that can generate images based on a description that you type in as text. So you can type something like, I want a picture of a dog on the moon with a surfboard, and it will generate an image for you. TTS is a model that converts text into spoken output, so text to speech, and Whisper is the opposite of that: it takes audio and converts it into text. So, these are the different models that are available, and we’re going to be dealing with the ChatGPT 3.5 model. Now it’s time to go back and talk about tokens, because tokens are going to be very important. Tokens are important because you pay for the service based on the number of tokens that you use. And each model that you use has a limit on the number of tokens that it can deal with. So, what exactly is a token?
Well, the easiest way to understand it is to visualize it, so go into the tokenizer tool. What happens is all of the input that you send in and all of the output that GPT sends back is broken into little chunks. And each of these little chunks is a token. And again, this is important because you pay by the token and the models are limited in the number of tokens that they can deal with.
So here in the GPT 3.5 model, you can see that it’s limited to 4,096 tokens in a context window. That context window would be one of those individual chats where it remembers the conversation that you’re having. So you’re limited to 4,096 tokens in a particular chat. As for the pricing, if we look at the pricing documentation and scroll down here, you can see that you’re going to end up paying by the token, but it’s very, very cheap.
It’s a tenth of a penny for 1,000 tokens of input and two tenths of a penny for 1,000 tokens of output. Now, there are rate limits that also come into play, and we want to stay in the free tier. The free tier is limited by the number of requests per minute and per day and the tokens per minute and per day. So to stay in the free tier for ChatGPT 3.5 turbo, we’re limited to three requests per minute, 200 requests per day, or 40,000 tokens per minute, whichever one comes first. And we need to stay under a hundred dollars of usage based on the number of tokens. So it is very easy to stay in the free tier, and you won’t have any problems just playing around with ChatGPT and setting up some APIs. And the first place we’re going to go in order to create our API is the API reference.
Now here, the first thing we’re going to want to take care of is the authentication, because we’re going to need to get an API key and send that in with each of our API requests. So you go to the API keys page, and it’s easy: you simply create a new secret key and give it a name, “new key for video.” Now, it’s very important that when this key gets created and they show it to you, you copy it and put it someplace safe, because they’ll never show you this key again. So you want to put it in your password manager or something like that. You don’t want to put it in a public repository up in GitHub or anything like that. You need to be very careful with your key, because anybody with your key can start making API calls, and it’ll be charged against your account.
Making API Calls
So once you have your API key set up, then we’re ready to actually start making API calls. So what does that look like? Well, we’re going to look at these endpoints; we’re going to look at the chat endpoint, and we want to create a chat completion. So what we need to do is make a POST request to this URL and send in the messages, that’s the input or the chat conversation, and the model that we want to talk with. So I’m going to grab this URL and paste it into Postman. I’m going to make a new HTTP request to that URL, and it is a POST method request, right? So POST to that URL. I have to add my authentication, so I have auth and I have a bearer token. And my bearer token is, whoops, not that, it is the token that they gave me. Let me copy that and paste that over here. And in the request body, we’re going to send in JSON, and that JSON is going to look kind of like this. So let me copy that and paste that over here.
What we’re saying here is we want to communicate with the ChatGPT 3.5 turbo model, and we have a conversation, a chat, a context that we’re going to send in the form of these messages. This first one has a role of system. This is just our way of identifying or telling the system, giving it some background information. This is some of that prompt engineering, and we’re telling ChatGPT that it’s a helpful assistant. And then these messages with a role of user, these are the messages that we send as the input into ChatGPT, and we’re saying hello, and let’s see what we get back.
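For reference, the request body we paste into Postman looks something like this. The structure follows the chat completions format from the documentation; the message wording is from the demo:

```json
{
  "model": "gpt-3.5-turbo",
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "Hello!" }
  ]
}
```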
So what we’re going to get back is a JSON object. It’s going to have a whole bunch of information, but really what we’re looking at is these choices here, and it’s an array. You can opt to get multiple responses back; we just want one, though. So there’s this first choice, the message here, the role is from the assistant, and the content is, hello, how can I assist you today? Okay. And then it also tells you why the response finished, whether it reached the end and was complete or whether it reached the end of the tokens that you had allotted, that sort of a thing. So here we said to ChatGPT, hello, and it said, hello, how can I assist you today? So now let’s see what would happen if, instead of a helpful assistant, we were talking with an angry toddler. What would the response be in that case? So, we’ll send that. We say hello, and now: grr, what do you want? I’m not in the mood for talking. So you can see where these system messages give you a lot of control over what you’re going to get back.
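Trimmed down to the parts we care about, the response for the helpful-assistant version looks roughly like this (other fields such as the id, model, and token usage counts are omitted here):

```json
{
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I assist you today?"
      },
      "finish_reason": "stop"
    }
  ]
}
```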
And we’ll come back over into the documentation. There are another couple of ways that we can control what ChatGPT gives back to us, and these are going to be parameters into the API. The first one is the frequency penalty. What that does is apply a penalty to tokens that are being returned, to prevent ChatGPT from reusing the same tokens or values again and again. If GPT is becoming repetitive, you can apply a frequency penalty to make each of your responses more unique. The other ones: here’s the max tokens, and that will cause your request to stop once that number of tokens has been reached. N is how many chat completion choices to return. Remember, we said you can get more than one response back at a time. So if you wanted two or three to allow the user to choose, you would use this n parameter into the API.
The other ones are temperature and top P, and they generally recommend using one or the other. This temperature, that’s the one that we’re going to play around with. It’s a value between zero and two, and higher values will make the output more random. So if you want something that’s more deterministic, you want to use a lower temperature. And if you want something that’s a little bit more random and creative, a little bit wackier, you can set your temperature a little bit higher. Let’s take a look and see what the temperature does to our API requests.
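For reference, here is where the temperature parameter belongs in the request body: at the top level alongside the model, not inside the messages array. This is a sketch of the angry-toddler request from the demo with a temperature of zero:

```json
{
  "model": "gpt-3.5-turbo",
  "temperature": 0,
  "messages": [
    { "role": "system", "content": "You are an angry toddler." },
    { "role": "user", "content": "Hello!" }
  ]
}
```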
Now we’re going to come back over into Postman. What we’re going to do is add a temperature, with an E on there, and we’re going to make this a temperature of minus two. And we’ll go ahead and send that. Oops, this is inside of my array, which is the wrong spot, so let me cut that and paste it down there. And now we’re going to send this request and, ah, minus two is less than the minimum of zero, so it’s a zero to two range. So we’ll set this to zero and send that request. And now this angry toddler says, “I don’t want to talk to you. Go away.” And if we send that again, we’re going to kind of get the same response every time. So now if we set this temperature to two, we should get a more creative, random response every time.
So now, whoa. Wow. So here is our choices zero message content, and it says hello, little entrance to show 48. 48. This is just gibberish. So we have clearly set the temperature way too high for this. So let’s cut this down to maybe a 1.5 on the temperature scale and see if that helps us out a little bit. Ah, “I’m angry, roar.” So that’s a much better response that we’re getting. So you can use this temperature to dial in the responses that you want. We’ll try this one more time. We should get something very different. So that’s what the temperature is going to control. So between these system messages and your temperature, you have quite a bit of control over the type of responses that ChatGPT is going to be sending back to you. And we’re going to come back to ChatGPT in just a minute.
But for right now, I want to just quickly pivot and talk about SQL. The first thing we’re going to look at in SQL is the function called HTTP_POST. What this is going to allow us to do is make an HTTP request to the ChatGPT API. For the format of the request, you send in the URL, the message and the headers, and this is the same information we were sending with Postman. Here’s our URL, here’s the message that we’re sending, and the headers include the authorization, where you send in your token, and the content type, which says that it’s application/json. And so we fill those in. We are going to make a call to the ChatGPT completions URL, we’re going to send in our JSON request along with the headers, here’s the token and the content type. And when we run this SQL, what we’re going to get back is the JSON response from ChatGPT, right? Again, it’s the same response that we were getting over here in Postman.
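Here is a minimal sketch of that call. I’m assuming the QSYS2.HTTP_POST function available on recent IBM i releases (older systems may use SYSTOOLS.HTTPPOSTCLOB instead, which takes its headers in a different format), and you would substitute your own API key:

```sql
-- Sketch: POST the chat request to OpenAI and get the raw JSON response back.
-- Assumes QSYS2.HTTP_POST; replace sk-your-key-here with your own API key.
VALUES QSYS2.HTTP_POST(
  'https://api.openai.com/v1/chat/completions',
  '{"model": "gpt-3.5-turbo",
    "messages": [
      {"role": "system", "content": "You are an angry toddler."},
      {"role": "user",   "content": "Hello!"}
    ]}',
  '{"headers": {
      "Authorization": "Bearer sk-your-key-here",
      "Content-Type":  "application/json"}}');
```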
And now we’re going to look at the JSON_TABLE function from SQL. What this is going to allow us to do is return a result table, rows and columns, based on a JSON object that we send in. The basic syntax looks like this: you select all from JSON_TABLE. The first parameter is going to be the JSON that you want to parse. The second is a path to the JSON data that you want included in your response. And then you define the column names. So we’ll need to give a column name, a data type, and then a path to the individual column data elements that we want to include. So for example, if our return message from ChatGPT was just a single object with the message in there, our path to the data we want included would just be a dollar sign.
Now, the dollar sign represents the outermost JSON object, so it’s the whole response. Then we can declare a column called RESP, defined as a VARCHAR(1028). And now we specify the path to the data for this column relative to the path of the data being included. Here we’re including the entire JSON object, so the path relative to that, for this particular response column, is going to be the message attribute. So if we run that, we can see that we get back a single column called RESP that has the value of the message attribute of the main JSON object. Now, this column name could be anything you want, so we can make it say RESPONSE instead. And now we have a column being returned that’s called RESPONSE. Now, if the message contains another object, and that’s where the data is that we want, we want this message content, we have a couple of ways to go about doing this.
The first way is to take our column path and specify that we want the message content from the main JSON object. So this will look at the main object, it’ll look at the message attribute, and then find the content attribute of that. We can run this and we get our response there. The other way to do it would be to change the JSON object data being returned as a response. So here we’re saying we only want the message object this time, and now the path to the column data is going to be just the content attribute of the message object. We run that and we get a response of “go away” again. Now, applying this to the response that we got back from ChatGPT, we had a very large JSON object. What we want is choices zero: we want the first element in the choices array. And that was way over here. Here’s our choices array. We want the first object, we want the message from there, and we want the content. So what we’re saying is we only care about the choices array, and we only care about the first element of that array. Then we’re going to define a column called RESPONSE, and the data for that column, relative to the choices element, is the message content. So if we run this SQL, we get our response parsed out of that JSON object that we received back from ChatGPT.
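Putting that into a concrete statement, here is a sketch that parses a cut-down, hardcoded copy of a ChatGPT-style response. The row path keeps only the first element of the choices array, and the column path pulls out the message content:

```sql
-- Sketch: parse choices[0].message.content out of a hardcoded sample response
SELECT response
  FROM JSON_TABLE(
         '{"choices":[{"message":{"role":"assistant",
            "content":"Hello! How can I assist you today?"}}]}',
         '$.choices[0]'
         COLUMNS (
           response VARCHAR(4096) PATH '$.message.content'
         )
       ) AS t;
```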
Now what we want to do is combine our JSON_TABLE with our HTTP_POST. Remember, we used the HTTP_POST to get the response from ChatGPT. So what we’re going to do now is use JSON_TABLE, and instead of having a hardcoded JSON in here, what we want instead is our HTTP_POST request to ChatGPT. So this is going to make that HTTP request and get back the JSON response, and then that will be sent through the JSON_TABLE to give us back our parsed response from ChatGPT. Let me run this a couple of times, and you can see that it’s making these requests. I’m probably going to hit my limit here pretty soon for requests per minute, but that’s how easy it is to use SQL to make an HTTP request, then parse the JSON response and get back just the information that you want. Well, this SQL is working very well.
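Putting the two pieces together looks something like this, again a sketch assuming QSYS2.HTTP_POST and your own API key:

```sql
-- Sketch: make the API call and parse the reply in one statement
SELECT response
  FROM JSON_TABLE(
         QSYS2.HTTP_POST(
           'https://api.openai.com/v1/chat/completions',
           '{"model":"gpt-3.5-turbo",
             "messages":[{"role":"user","content":"Hello!"}]}',
           '{"headers":{"Authorization":"Bearer sk-your-key-here",
                        "Content-Type":"application/json"}}'),
         '$.choices[0]'
         COLUMNS (
           response VARCHAR(4096) PATH '$.message.content'
         )
       ) AS t;
```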
But it’s a lot of typing that we don’t want to have to do every time we want to make a request. So we’re going to look at one more SQL item before we swing back around to ChatGPT, and that SQL item is going to be creating a function. This is going to make it really easy to call ChatGPT. Our function name is going to be ASK_CHATGPT, and it accepts up to four parameters. The first one is the prompt, the user content message that you want to send into ChatGPT. This is required, and it’s going to be a VARCHAR(1024). Next we have the system message, the temperature and the max tokens, which are all optional. This function is going to return a VARCHAR(4096); that’s the response we get back from ChatGPT.
Here we’re going to declare a number of variables. We have the API URL, that’s our chat completions URL, the API key, and the model that we want to call. We’re going to specify the headers and the messages. The messages is the array; it contains the system role message and the user role message. And finally, we build the request. So the headers include the authorization, which is the API key, and the content type of application/json. If you have sent in a system message, we’re going to go ahead and create the object for that, the system role. This is where we told ChatGPT that it was a kindergarten teacher or an angry toddler, that sort of a thing. There is none by default, but you can specify one if you want to. The user message is going to be required, and it includes the prompt that you want to ask ChatGPT.
We build the object for that, and the messages is an array which contains the system role and the user role messages. Finally, we build the request, which is our JSON. It includes the model, temperature, max tokens and the messages array. Then we use the HTTP_POST and JSON_TABLE to call the API, get back the response, parse it, and return the API response from this particular function. So if we save this and run the SQL to create the function, we’ll go ahead and do that. Okay, that was successful. So we should now be able to ask ChatGPT any kind of question we want fairly easily, just by calling this SQL function. So we’re asking about the IBM i, and it’s a midrange computer that excels at RPG, SQL, even open-source languages and web development. Okay, we can control the temperature, right? Remember, the default temperature is going to be one, and we can send in a temperature of zero. The lower the temperature, the more deterministic the response will be.
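As a reference point before we look at those results, here is a simplified sketch of what a function along these lines might look like. It is not the exact code from the video: the parameter names, defaults, and string handling are mine, it does no quote escaping or error handling, and it assumes QSYS2.HTTP_POST and support for parameter defaults on your release:

```sql
-- Simplified, illustrative sketch of an ASK_CHATGPT scalar function
CREATE OR REPLACE FUNCTION ASK_CHATGPT (
         PROMPT      VARCHAR(1024),
         SYSTEM_MSG  VARCHAR(1024)  DEFAULT NULL,
         TEMPERATURE DECIMAL(3, 2)  DEFAULT 1.00,
         MAX_TOKENS  INTEGER        DEFAULT 500 )
  RETURNS VARCHAR(4096)
  LANGUAGE SQL
  NOT DETERMINISTIC
BEGIN
  DECLARE API_URL  VARCHAR(100) DEFAULT 'https://api.openai.com/v1/chat/completions';
  DECLARE API_KEY  VARCHAR(100) DEFAULT 'sk-your-key-here';
  DECLARE MESSAGES VARCHAR(3000);
  DECLARE REQUEST  VARCHAR(4000);
  DECLARE RESPONSE VARCHAR(4096);

  -- Optional system message, then the required user prompt
  SET MESSAGES = '';
  IF SYSTEM_MSG IS NOT NULL THEN
    SET MESSAGES = '{"role":"system","content":"' || SYSTEM_MSG || '"},';
  END IF;
  SET MESSAGES = MESSAGES || '{"role":"user","content":"' || PROMPT || '"}';

  -- Full request body: model, temperature, max_tokens, and the messages array
  SET REQUEST = '{"model":"gpt-3.5-turbo",'
             || '"temperature":' || TRIM(CHAR(TEMPERATURE)) || ','
             || '"max_tokens":'  || TRIM(CHAR(MAX_TOKENS))  || ','
             || '"messages":['   || MESSAGES || ']}';

  -- Call the API and pull choices[0].message.content out of the reply
  SELECT C.CONTENT INTO RESPONSE
    FROM JSON_TABLE(
           QSYS2.HTTP_POST(API_URL, REQUEST,
             '{"headers":{"Authorization":"Bearer ' || API_KEY
             || '","Content-Type":"application/json"}}'),
           '$.choices[0]'
           COLUMNS (CONTENT VARCHAR(4096) PATH '$.message.content')
         ) AS C;

  RETURN RESPONSE;
END
```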
And here is the result: known for reliability, security, scalability. If we want a less deterministic, more creative response, we can up the temperature to 1.5. I wouldn’t recommend going too far above 1.5 for these sorts of things. We can also specify in the system message that ChatGPT is a CIO writing a memo to the IT department. We specify the max tokens, and we want ChatGPT to explain in one short paragraph and five bullet points why the company is switching from Windows servers to the IBM i. So let’s go ahead and run that and see what we get.
And let’s copy this and put it over here and see what ChatGPT came up with for us. So it very much looks like a memo to the IT department from the CIO about switching from Windows to IBM i: an important decision to transition servers to the IBM platform, intended to enhance the overall IT infrastructure, and it provides an explanation: increased reliability and stability, enhanced system security, streamlined system management, improved scalability and performance, and cost efficiency in the long run. All right, so you can see that by creating this SQL function, we’ve made it very easy to ask ChatGPT to give us back some information, right? So we created an entire memo with one statement here.
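For reference, the calls in this part of the demo look roughly like this. The prompt wording is approximated from the video, and the temperature and max token values here are illustrative:

```sql
-- Simple question, default temperature
VALUES ASK_CHATGPT('What are the top benefits of the IBM i?');

-- CIO memo: system message, a lower temperature, and a max token limit
VALUES ASK_CHATGPT(
  'Explain in one short paragraph and five bullet points why the company ' ||
  'is switching from Windows servers to the IBM i',
  'You are a CIO writing a memo to the IT department',
  0.5,
  500);
```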
ChatGPT and RPG Applications
So now that we have our function created, I just want to show how easy it would be to incorporate something like this into an existing RPG application. So I created an example item master file where you have the item number or code and a description, which is going to be entered in English.
What we want is to get the Spanish, French and Italian translations and have those translations updated into the database for us. So I created a display file where we can enter the item information; it has just the item number and the description, which would be in English, and an RPG program to display the display file and write to the file for us. So we have our display file and our item master file. Now, there isn’t any kind of error trapping or anything like that in this example program; it’s just to demonstrate the use of the function. So what we’re going to do is display the screen to the user. If they press Enter, we are going to call translate to get the Spanish, French and Italian translations and then write them into the database. As for this translate procedure, what it’s going to do is you send in the English and tell it what language you want it translated into.
It will create the prompt to send to ChatGPT, and it’ll send back the response that ChatGPT gives to us. So let’s just go ahead and debug this. We’ll go to the bottom here, and I’m just going to put a breakpoint in there. We’ll call this, and now we need to create an item. I just went on Amazon and found this particular item in the deals for today, some sort of an air quality monitor thing. So I’m going to copy this text and go back to the green screen over here. We’ll call this “air monitor” and I’m going to paste that in. You do need to be careful of invalid things such as smart quotes and that sort of stuff, so go ahead and change those, and then we’re going to press Enter. And now if we look at the prompt variable, you can see that we’re telling GPT to translate this into Spanish.
And what we get back is going to be the translated text. Then it’ll do the same thing for the Italian and the French. At this point, we should have data in our file, so let’s take a look at the item master. Maybe I didn’t have a commit in there. So let’s, there we go. So there’s our air monitor, our English description, and now here comes the real test. This is going to be our Spanish, so let’s go over here and put some text in here, Spanish to English: know your air, the smart air quality monitor knows what’s in your indoor environment. So that sounds pretty good. Here’s the French; let’s do the French: indoor air quality. And the Italian, that one more time. There we go, Italian detected: know your air, the Amazon smart air quality monitor makes it easy to understand what’s in your indoor air. So just like that, we now have an RPG program that not only stores the English, but it’ll go and get those translations for you and keep track of those as well.
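Under those assumptions, the heart of that translate procedure can be a single embedded SQL statement calling the function we created earlier. This is a sketch, not the exact code from the video; the prompt wording and host variable names are mine:

```sql
-- Inside the RPG translate procedure (sketch): build the prompt and let the
-- SQL function do the work. :language and :english are host variables;
-- :translated receives the response from ChatGPT.
EXEC SQL
  SET :translated =
      ASK_CHATGPT('Translate the following text into ' || :language ||
                  ': ' || :english);
```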
Other ChatGPT Use Cases
And now we’re going to go back to ChatGPT and see what some other possibilities are, some things that ChatGPT can do pretty well for us. I’m back on openai.com/examples, and these are the prompt examples for all of the things that ChatGPT can do pretty well. Now, one of them here is grammar correction, where you can send in your badly formatted English and ChatGPT can correct it for you. That would be really good for memos and that sort of thing. It can summarize something for a second grader, do emoji translations, and it can do sentiment analysis: if you send in a bunch of reviews, it can tell you what your users liked or didn’t like about their experience. Keywords, product name generator, so if you had a description and some attributes for an item, you could maybe get a good name for it from ChatGPT. Tweet classifier, mood to colors, those sorts of things.
Now, you can also do code stuff. Python bug fixer: ChatGPT is pretty darn good with Python code. I don’t quite trust it with RPG, though you might use it to get started with your RPG code; the Python I would have a whole lot more faith in for ChatGPT to write some code for us. Memo writer, we already took a look at that. Translations, we’ve done that. Meeting notes summarizer, so you can send in a large document and it can summarize it into bullet points for you. So those are kind of cool. There are lots of things that ChatGPT can do, and it’s just an SQL function that we’ve created that allows us to communicate with ChatGPT. So it’d be easy enough to do; obviously it doesn’t have to be RPG, it can be any of your languages, but it doesn’t preclude RPG and SQL from being used.
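As one example, sentiment analysis on review text could reuse the same function. The review text and prompt wording here are made up for illustration:

```sql
-- Sketch: classify a product review with the ASK_CHATGPT function
VALUES ASK_CHATGPT(
  'Classify the sentiment of this review as positive, negative, or neutral: ' ||
  'The air monitor arrived quickly and the setup was painless.',
  'You are a sentiment analysis assistant that answers with a single word');
```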
Ethical Considerations for Generative AI
Now, just because you can do something doesn’t mean that you necessarily want to do it. If you’re going to use AI, especially in the business, you need to be very careful. IBM has a blog post exploring the risks and alternatives of ChatGPT. It’s more of a commercial for watsonx, but it does bring up some very good concerns, and you could obviously Google this yourself as well. So one of the issues is that ChatGPT’s training data was never fact-checked, and the model, ChatGPT, can generate responses even if no factual information exists. It can create false news stories. It can give you answers based on stories that don’t exist. This is a phenomenon known as hallucination, right? The data isn’t curated, so it may contain biases. And hackers are doing all sorts of stuff, so you can have prompt injection.
So you need to be secure in the way that you deal with these things. Personal data, including payment information and chat history, can be leaked or hacked or shared with third parties, and it may even become part of the data model; part of your confidential information could become part of ChatGPT’s answers to other people’s questions. Okay? So never send confidential information to ChatGPT, and don’t send other people’s data to ChatGPT either. The other one is intellectual property rights. If the AI generates code for you, do you actually own that code? What if the AI uses someone else’s code from GitHub to generate code for you? Are you violating any licenses, that sort of a thing? So there are some concerns about AI apart from Skynet taking over the world. There are legitimate concerns for using AI in the business. That’s just something that you want to keep in mind; make sure that you do your due diligence before you start unleashing ChatGPT in the enterprise. And then there’s one last thing we need to do: I need to create a social media post to advertise this new video so that hopefully you’ll find it intriguing, watch the video, and share it with your friends. So that’s all that I have. Thank you very much for your time, and good luck with your ChatGPT and AI adventures.