
The Business of AI: Unlocking AI for Legacy Systems, With Michael Curry of Rocket Software

Brian Silverman interviews his friend, Michael Curry, about ways of unlocking the value of AI for the mainframe and IBM i

TechChannel AI

This is the seventh article in the series, “The Business of AI.” In case you missed them, the first six articles are linked at the bottom of this page.

This transcript has been edited for clarity.

Brian Silverman:

Michael, thank you for taking the time for this interview.

You and I have been friends and colleagues since our time at IBM, and as I mentioned, I am writing a series of articles called “The Business of AI” for TechChannel. The focus is on helping organizations develop AI strategies that drive real business value.

Given Rocket’s long history of providing solutions for IBM mainframe and IBM i systems, along with its expertise in data, application modernization and AI enablement, I wanted to tap into your insights for my next article, focused on “Unlocking the Value of AI for Evolving Legacy Systems.”

About Michael Curry

Brian Silverman: Michael, please introduce yourself and your responsibilities at Rocket Software.

Michael Curry: I’ve been at Rocket Software for about a year and a half. I run the data modernization business unit within Rocket, which is really a set of software focused on helping companies modernize their data estates. Our particular focus is helping customers with data that tends to be difficult to integrate into initiatives such as analytics and AI.

(See Curry’s bio in the addendum below this Q&A)

Leveraging Data Beyond Low-Hanging Fruit

Silverman: We see so much hype on the different co-pilots, the different AI models and assistants out there.

What are you hearing from Rocket customers with regard to their legacy systems, and how do they approach AI from a strategy point of view?

Curry: It’s interesting. I think there are a few things happening in parallel. One of the big things most companies are recognizing is that AI is only as good as the data you bring to it, and that holds for both generative and non-generative AI initiatives.

People have exhausted the low-hanging fruit: the data that’s relatively easy to move from where it lives today into the newer, modern data stacks and generative AI environments companies are working with.

Now they’re looking at all the data they are not using. In fact, if you think about the number of transactions that happen all around the world, something like 70% of all transactions touch a mainframe at some point. Many of the largest companies in the world still run mainframes at the core of their business. And it’s not just mainframes; there’s also a large number of IBM i (AS/400) boxes that run transactions for businesses.

These core systems are the framework that runs these businesses, and they hold a huge amount of data. This data tends not to be the low-hanging fruit and therefore didn’t make it into those analytics initiatives. So that’s one big thing: there’s this big blind spot.

We actually did a survey last year and found that fewer than 30% of customers with mainframes thought they were fully leveraging mainframe data in AI and analytics initiatives. That’s a clear indication that there’s a huge opportunity. This is the most important business data, the data that runs the core of your business and holds those transactions. Yet it’s not getting into those initiatives because of these technology gaps.

The other big blind spot we are seeing is on the content side: unstructured data. I am always fascinated with unstructured data. The commonly cited statistic is that 80 to 90% of all enterprise data is unstructured. If you flip that around, we’ve only codified about 10 to 20% of it, a tiny fraction.

What is that data? It’s contracts. It’s your communications with your supply chain: purchase orders, invoices, bills of lading and all of those types of documents. It’s your policies. It’s regulatory documents, customer onboarding and interactions. Unstructured data represents the vast majority of what matters to the business, yet it has been almost impossible to analyze. People have tried and tried: they put it into closets and protect it, put big firewalls around it, and then search against it. But the search was not that great. With generative AI, the opportunity to really understand that data, leverage it and democratize it so people can use it in new ways is the biggest change that’s happening.

So the two main things we are seeing are the gap between the transactional data that runs these businesses and what you can do with it in the AI world, and the even bigger gap between all this fantastic unstructured data and these generative AI opportunities.

Silverman: I think it’s interesting that it takes both sets of data, especially with the future of agentic AI, which needs to interact with both types of data to really realize value.

Curry: Yes!

Barriers to Utilizing Data

Silverman: Are there any unique challenges your customers are facing that are different and possibly surprising?

Curry: I wouldn’t say it’s necessarily surprising; it’s the same problem. There are two big problems that block people from being able to do those things. The first is that there aren’t good tools and technologies in the modern tech stack for dealing with this kind of data.

That is true for both core transactional systems and unstructured data. There are not any good tools for dealing with the data and bringing it into this AI world in a safe way. And the problems are not just technology problems. There are definitely technology gaps, but there are also security challenges. How do I, as an example, take a customer contract and put it into a large language model without accidentally leaking information or training the model to do things I’m not allowed to do? Those types of questions remain.

The other side of it is a trust piece. People don’t trust the models because there have been too many examples, such as with ChatGPT, where it’s very easy to confuse the model. The more data you give it, especially if you have provided conflicting data, the more confused it gets. Overcoming those trust challenges is the second problem.

The third problem, one that hasn’t really emerged very much yet, is that AI is going to cost a lot of money. If I were to take all of the unstructured data in my business and run it through one of these large language models, that would be very costly.

The idea of AI makes a lot of sense, but I want to be judicious about how I employ models to do work. I want to understand the cost impacts of actually using those models before I open them up. If I were to let every customer ask questions about their banking statements, that sounds like a great idea. But that could be hundreds or thousands of requests per second against back-end large language models, suddenly overrunning my GPU capacity, and that gets expensive.

I think all three of those considerations matter, though the third one hasn’t hit as hard yet, because most people aren’t at the point of scale in these kinds of implementations.

Silverman: Yes, we talk about AI imitating humans, and forget that humans are fallible. So without careful planning, why would AI be any different?

In our last article, focused on cybersecurity and AI, my colleague Paul Robinson made your key point: focus on the data first to build the right foundation, and implement the right cyber policies to power AI, starting with the core information of the business.

Curry: That’s right.

Approaches to Hybrid Cloud

Silverman: How and why is Rocket approaching a hybrid cloud strategy to give clients a reasonable breadth of capabilities and to support scale?

Curry: There’s a key element there: a lot of data is going to continue to live where it lives. In the initial forays into cloud that most people went through over the last 10 years, the focus was on moving everything to the cloud. We’re going to dump it all into a data lake, or dump it all into S3 buckets, and then we’ll deal with it there.

That didn’t really work. You didn’t gain what you thought you were going to gain, it’s still expensive, and there are new security concerns. Rocket is really focused on letting the data live where it lives, in a controlled environment, and feeding it on a curated basis, in a secure way, to the places where you’re doing your analytics and AI.

We provide that bridge. Analytics and AI are mostly being done in the cloud, because often you need horizontal scalability, and you want it to be close to where the data is being accessed by customers or distributed around the world. So doing that work in the cloud makes a lot of sense. That is also where a lot of the modern tools are. But you still want to control how data gets fed, to make sure you’re not putting sensitive data there unless it’s absolutely necessary.

You also are being judicious, again, about when you use a model. You are not just throwing everything at models and hoping you get a good result. You’re controlling it and only using it when needed.

Silverman: In the article we did on foundation and large language models, we added an addendum on cost to explain tokens and their pricing, and to note that cost is difficult to predict because you are paying for tokens in the prompt context window, for the inference processing of the prompt and for the results.

Curry: Especially with the techniques being used today, you’re calling the models a number of times. You take the question and send it to the model, get it cleaned up, send it into the model again, get an answer back, and then test the answers against a different model. You may have five or six or seven calls to different models as you go through that process. How do I even know what that’s going to cost? On an individual basis, at small scale, it’s nothing. It’s pennies. But when you open it up to all of your customers or all of your employees, it can start to escalate quickly.
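The compounding Curry describes can be sketched with simple arithmetic. The per-token prices and token counts below are hypothetical placeholders, not any vendor’s actual rates; the point is only how a few model calls per request add up at customer scale.

```python
# Rough cost model for a multi-call generative AI pipeline.
# All prices and token counts are illustrative assumptions.

def call_cost(prompt_tokens, completion_tokens,
              price_in_per_1k=0.003, price_out_per_1k=0.006):
    """Dollar cost of a single model call at assumed per-1K-token rates."""
    return (prompt_tokens / 1000) * price_in_per_1k \
         + (completion_tokens / 1000) * price_out_per_1k

def request_cost(calls):
    """Total cost of one user request that fans out into several model calls."""
    return sum(call_cost(p, c) for p, c in calls)

# One question -> query rewrite, retrieval-augmented answer, verification pass
one_request = [(300, 80), (2500, 400), (1200, 50)]

print(f"per request: ${request_cost(one_request):.4f}")         # pennies
print(f"1M requests: ${request_cost(one_request) * 1e6:,.0f}")  # not pennies
```

On an individual basis the total is pennies, exactly as described above; multiplied by every customer, it becomes a budget line.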

Finding the Right AI for Your Use Case

Silverman: That leads to the next question as we focus on data.

When you consider the Rocket customer base and your solutions for managing data, do you have recommendations considering their legacy systems and how they might start their strategy around data for AI?

Curry: Yes, this goes back to the point you made at the beginning about doing AI for AI’s sake. I’ve never seen a project be successful that way. It can be great for experimentation and learning.

But if you’re really trying to provide business value, you have to map AI initiatives to the outcome you’re looking for and that’s how we approach customers.

It’s not the technology. The technology is what it is, and there’s always a new state of the art, as we all saw with DeepSeek. That’s a great example of the state of the art shifting under everybody’s feet overnight. So the technology really doesn’t matter. It’s exciting and constantly evolving, but it’s not the real thing.

The real thing is the use cases. What are you trying to achieve? Where do you find the business outcome and value of AI?

As an example, we work with companies in the manufacturing space. What could they use generative AI for? One of the key use cases we’ve seen is with manufacturers that can have tens or hundreds of thousands of SKUs. These SKUs are complex technical products that they support through field technical people, who work with customers to answer questions and determine whether a product fits a specific application. That’s an expensive process, and it’s their scale limiter: as a business, they can only move as fast as they can fill the demand for understanding whether those technical products will work for specific customer needs. They asked whether they could apply generative AI to that problem. With thousands of SKUs, each with a technical set of documents, manuals and other associated data, how could they use generative AI to answer some of those questions without a technical person having to be directly involved in every one? Then they can scale up, not by replacing the technical specialists, but by freeing them from having to read seventeen manuals. They get back the snippets from each of the seventeen manuals that answer the customer’s question and check the work of the AI model, but the model is providing them with the answer. This has been a huge savings and a scaling benefit to those kinds of customers.
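The workflow above is essentially retrieval: pull back only the passages relevant to the question, so the specialist checks snippets instead of reading seventeen manuals. A minimal sketch, with made-up manual names and crude word overlap standing in for the embedding-based search a real system would use:

```python
# Toy retrieval over manual snippets. Data and scoring are illustrative
# assumptions only; production systems rank with vector embeddings.

def score(question, snippet):
    """Crude relevance score: count of words shared between question and snippet."""
    return len(set(question.lower().split()) & set(snippet.lower().split()))

def top_snippets(question, manuals, k=3):
    """Return the k most relevant (manual, snippet) pairs across all manuals."""
    chunks = [(name, text) for name, pages in manuals.items() for text in pages]
    return sorted(chunks, key=lambda c: score(question, c[1]), reverse=True)[:k]

manuals = {
    "valve-a100-manual": ["rated pressure 150 psi at 40C", "install torque specs"],
    "valve-b200-manual": ["rated pressure 300 psi at 40C", "maintenance schedule"],
}

for name, text in top_snippets("what is the rated pressure", manuals, k=2):
    print(name, "->", text)
```

The specialist (or a model composing the final answer) then verifies only these two snippets rather than both full manuals.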

That’s just one example, but that’s where the value is: applying AI in a specific way. Compare that to saying, oh, let’s buy some OpenAI capacity, some LLM, some GPU capacity, and run these things. They may never have arrived at a solution to a real problem they had.

That’s what you have to get to. You have to be able to define the use case and what outcome you are trying to achieve.

Working With Unstructured Data

Silverman: That’s a good transition to the next question, around Rocket’s Mobius offering, which is focused on unstructured data such as manuals and support information. Mobius is also a solution into which Rocket has incorporated generative AI capabilities focused on unstructured data and legacy systems, to bring all of that together. Is that right?

Curry: Yes, absolutely, that’s a big part of our offerings. We essentially focus on two areas. On the structured side, we have Rocket DataEdge, which we just announced a couple of weeks ago.

On the unstructured side, we have Mobius. We’ve been in the unstructured data business for a long time, helping people manage and protect documents and build their processes around them. That’s been a long-term business of ours. What we’re noticing is that all of that content has really been in a closet. It’s a closet you put things into; maybe you use it to process the initial sets of approvals on a document, but then it gets stored, and people only access it when they need it, through a relatively low-fidelity interface, which is search.

But there’s so much value in those documents. They’re the contracts and regulatory filings and all this wonderful stuff, and the business would love to be able to use them for other things but doesn’t know how. So we’re bridging that gap by enabling companies to create curated data sets that can now be brought to generative AI.

I’ll give you an example of a bank. I like this example because most people can identify with it; it applies to banking customers at, I’d say, 99% of the banks in the world.

Suppose you wanted to know how many payments you made to a specific company last year. How would you answer that question? Usually you’d have to go into the bank interface and pull up your statements. You’d take the December statement, open it, read it and write down how much you paid that vendor. Then you’d go to the November statement, read it, find all the payments and write them down. You’d do that for every month of the year, manually add it all up, and there you’d have your answer.

With generative AI, I can change that paradigm. Those statements are stored in a content management system, so you could simply ask: how many payments did I make to that vendor last year? And boom! Here’s your answer. That usability shift is enormous, and that’s just a simple example. We could improve all of those experiences that people have become accustomed to but still don’t like. Those are the customer-facing ones.

Think about internal ones, such as answering questions about contracts. What is my expiration date on these seventeen contracts? What’s my maximum liability across 47 contracts with this supplier? Those types of questions are so hard to answer. It takes hours and hours, and not just hours of an individual’s time; it tends to be very specialized people who cost the most, including lawyers, doctors and very high-end technical people who know how to parse those documents. Sometimes the documents are in different languages, and that can be overcome too. That’s what we’re doing with Mobius: helping to bridge to that great enterprise content while protecting it along the way. We’re not just sending it into random large language models. We’re applying capabilities we have around redaction of sensitive data. We’re only calling the model when needed. We’re doing a lot to keep the hallucination potential as low as possible, and we’re finding very good results working with our customers.

Silverman: In the early days of my career, in sales at IBM, when negotiating a contract with a customer you would have terms agreed to, but later you might not remember the why. With generative AI and content management, you can review the progression of the contract and not only understand the final terms but also extract the meaning and the why, which is so important.

Curry: Yes, the possibilities are endless.

Silverman: The intersection between the transactional data you started with and the unstructured content, and merging that value, is something really unique.

Curry: It’s interesting. We hear about the promise of these lakehouses, and before that, data lakes. One of the promises was the ability to handle both structured and unstructured data.

But when you look at it in practice, there’s not a lot of that happening. You’re lucky if you get some sentiment analysis, or maybe extract some classification of data from unstructured information. The wholesale use of that information to answer questions, to change customer interactions, to improve access to information for employee productivity: that hasn’t happened. This is a huge opportunity, and we can be at the center of it.

Silverman: You start with the big picture: everyone goes at it as a whole enterprise, saying we’re going to solve this big problem, instead of starting where you did earlier, with the key use case. Focus on the content that’s going to support that use case, and then you’ll be able to get the expected results. If you focus on the whole enterprise, you’re going to spend so much time that by the time you’re done there will be something new to figure out; you’ll be distracted and won’t achieve the expected value.

Curry: That’s exactly right.

What About IBM i Customers?

Silverman: Michael, I started my career with IBM around the AS/400 announcement in Jacksonville. It was a great time, and the platform has always been near and dear to my heart as it has progressed to IBM i on Power today.

Is there anything unique about IBM i customers and how they should be planning to leverage AI?

Curry: Yes, interestingly, the AS/400, or IBM i, is still very prominent in industries such as manufacturing and CPG, companies with physical and process manufacturing businesses. They rely on very document-heavy processes, and there are tons of details stored in those documents that can be managed and leveraged through this type of approach. Also, all of the data associated with your supply chain, your manufacturing processes and even your order-to-cash processes tends to be stored and managed in these IBM i environments, and it’s hard to get to.

There are not a lot of technologies, if you think about the tools and software used by cloud service providers as an example, that enable easy access. They can get to your Oracle database on premises, or your Postgres database in the cloud, or whatever it is. But they don’t have ways of getting to that IBM i. We can help bridge those gaps, and again, do it with the proper security and governance around that data. It’s not just about providing point-to-point data access; we take a metadata-driven approach. We make sure we understand what lives in those environments, first by scanning everything that’s going on, so we understand exactly where the data comes from and all of its lineage. Then we link that to what is happening in the cloud.

So you have a clear understanding that, yes, this is customer data, and it links perfectly to the customer data we’re using in our Snowflake instance, as an example, so you know you’re using the right data even if you don’t know how to access or use an IBM i. You may be a cloud data person who is now able to bridge those gaps and make that data easily accessible on an ongoing, real-time basis, synchronized bidirectionally. That’s something only we can do. Where we really help is bridging those worlds, taking on what hasn’t been done because it was hard to do and making it easy.

Nobody wants to get off of their IBM i, because you don’t have to touch them; they’re great. They just run forever, with a low cost of ownership. However, companies do need to be able to get their data off in order to take advantage of it in these AI initiatives.

Integrating APIs

Silverman: One of the articles in “The Business of AI” series focused on data as the fuel for AI, as well as on the capabilities of applications that deliver and generate data. The next article focused on APIs as the wires that connect data, AI applications, agents and programs to realize the real business value of AI we have been discussing.

Can you share insight on how Rocket Software is helping to integrate, or API-enable, these systems with solutions such as Rocket API?

Curry: We see three patterns around APIs.

One is very closely related to the data story I talked about earlier. We take existing data that lives in those environments and create an API interface. That API could be a SQL interface to plug into any BI tool or application I want to use on the other side, or it could be a RESTful API you can call. That’s the first pattern.

The second pattern is to provide an application-level API that can be called to execute a program remotely. We do that through Rocket API. As an example, say I’m building a mobile application that needs to run an inventory lookup or a calculation that lives in IBM i programming logic; I want to call out to that program and get the answer back.

The third pattern is around modernization. There, we might want to put an entirely new interface on an older system that has green-screen access today: a mobile application interface, or a nice new progressive web application interface, to make the system easier for business users and spare them the green-screen aspects.

These are different patterns where APIs are used across customer environments. We facilitate those through multiple products within our portfolio, Rocket API being one of them.
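The second pattern above can be sketched with nothing but the Python standard library: wrap a remote program call in a RESTful endpoint. The endpoint path and the `run_inventory_lookup` stub (standing in for the call into IBM i program logic) are hypothetical; this shows the shape of the bridge, not Rocket API itself.

```python
# Minimal REST wrapper around a legacy program call (illustrative sketch).
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

def run_inventory_lookup(item_id):
    """Stub for the back-end program call; returns what the program would."""
    stock = {"A100": 42, "B200": 17}  # hypothetical inventory data
    return {"item": item_id, "on_hand": stock.get(item_id, 0)}

class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /inventory/A100 -> {"item": "A100", "on_hand": 42}
        item_id = self.path.rsplit("/", 1)[-1]
        body = json.dumps(run_inventory_lookup(item_id)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To expose the logic as an API (blocks, so left as a comment here):
# HTTPServer(("0.0.0.0", 8080), InventoryHandler).serve_forever()
```

A mobile app then calls the HTTP endpoint and never needs to know where or how the underlying program runs.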

Cybersecurity

Silverman: How is Rocket addressing the cybersecurity challenges in AI?

Curry: We are very focused on the idea of being able to provide APIs and secure APIs back into the protected core systems that run your business.

A couple of weeks ago we announced Rocket Secure Host Access, which enables us to create consistent single sign-on with multi-factor authentication across all environments, whether you’re using green-screen applications, new applications you’re building or have, or third-party access through those systems.

There are regulations, such as a new New York state requirement, that mandate consistent multi-factor authentication across those different access channels. And just having consistency makes it much easier to manage your end-to-end security posture and vulnerabilities. That’s a key element: being able to centralize one way of providing multi-factor authentication for anything accessing that host. That is at the core of building out the type of security and cyber protection you’re talking about.

That’s just part of what we do, and we’re constantly helping companies understand security risk. Mainframes are very secure by default; they are protected environments. But there are a lot of things people can do to make them not secure. We provide solutions that help secure them, from scanning those environments to making sure a program hasn’t accidentally opened up a vulnerability, all the way through to protecting the doors with the secure host access capabilities.

There is also a concern with documents, in the scenarios I was talking about earlier. In the statement example I gave you, the way we handle secured access is that we inherit the customer’s specific access control. If they want to access their statements, we apply that access control and only bring back the set of data they have access to. We go through the whole process of deciding what we’re going to send to the model and everything else we do to answer the question, but the answer never gets outside the scope of what they can actually use.
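The inherit-the-access-control idea can be sketched in a few lines: filter documents down to the requesting customer’s entitlements, then redact sensitive fields before anything is placed in a model prompt. The field names and the single redaction rule below are hypothetical; production redaction would be far more thorough.

```python
# Illustrative pre-prompt filtering: entitlement check, then redaction.
import re

def visible_to(customer_id, documents):
    """Keep only the documents the requesting customer is entitled to see."""
    return [d for d in documents if d["owner"] == customer_id]

def redact(text):
    """Mask anything that looks like an account number before model use."""
    return re.sub(r"\b\d{8,}\b", "[REDACTED]", text)

docs = [
    {"owner": "cust-1", "body": "Statement for account 12345678: paid $120"},
    {"owner": "cust-2", "body": "Statement for account 99990000: paid $75"},
]

# Only cust-1's statement survives, with the account number masked.
prompt_context = [redact(d["body"]) for d in visible_to("cust-1", docs)]
print(prompt_context)
```

Whatever the model answers, it can only draw on data already inside the caller’s entitlement scope.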

Where This Is All Going

Silverman: I think the IBM mainframe and IBM i systems are a great foundation for building an advantage in the AI world. I’m wondering if you’re seeing anything evolving in that space as your customers look at AI and those systems, or strategies you see coming forward?

Curry: There are two different scenarios. One is what we’ve been talking about: helping to bring some of the data that’s been locked away in those systems into this modern place where generative AI, all types of AI and analytics are happening. That’s a big piece of the value proposition. We also see companies that are developing AI models in the cloud, because you generally do your training work in the cloud, so they need to move data to do the training.

But then they need really fast execution of those models. Usually it’s not generative AI; it’s more predictive models, statistical models, and they are putting them back on the mainframe, where IBM has native chipsets that accelerate the running of those types of AI. It is important for them to be able to support AI locally, at scale, with very fast, sub-microsecond-type execution against transactions coming through at high volumes.

That’s something we do see as a trend with our customers. I don’t know how far it will go, but certainly fraud, and anything where you have to make a split-second decision without slowing down end-to-end transactions, will be a prominent use case.

Silverman: I understand the latest AI chipsets are the Telum II and the recently announced IBM Spyre AI Accelerator, IBM’s own AIU (Artificial Intelligence Unit) chips for mainframes and for Power servers as well, which will be beneficial to IBM i companies.

Curry: Telum was the previous one; the latest are Telum II and Spyre, designed especially for AI workloads. I’m definitely not the best person to talk about the hardware side, but I know they’re trying to create consistency, so you have a common chipset that can be applied across many different things and scales without having to go out and buy a bunch of GPUs from Nvidia. It’s a slightly different technique, but similar in its ability to scale for those types of AI transactions and processing needs.

I expect that all vendors that sell hardware, whether that’s virtualized through the cloud, or selling actual hardware devices, are thinking about ways to incorporate some of that into their own offerings just to reduce the reliance on Nvidia GPUs.

Silverman: Going back to security, the safest place to keep AI is local.

Curry: That’s right, that’s right. Do it locally. It’s the safest and fastest. I don’t want to necessarily have to jump somewhere else to do the processing.

Generative AI

Silverman: Rocket is using generative AI in its solutions such as Mobius, and you just came out with a content smart chat solution. Do you see that continuing, where you infuse generative AI into your solutions?

Curry: Yes, there are three basic areas. One is using generative AI as a way of unlocking the unstructured data we’ve been talking about. That’s the Mobius and Rocket Content Smart Chat offerings, which have been big winners for us in terms of getting customers to use the software and get more value from it.

The second area: we live in a world where a lot of these systems were written a long time ago, in languages for which there aren’t many people in the market with the skills. The same goes for the core environments themselves, mainframes and IBM i; the number of people who know them is smaller than the number who know the cloud environments these days.

Now, that’s changing a little because there’s really good education focused on that challenge. But there’s still a gap for many companies in understanding both the environments and the actual systems, which is valuable to most companies: if you bring a new developer in, it can take a long time for them to figure out what’s happening in those systems. So we are applying a lot of AI explainability to some of those core languages and environments.

We’ve got thousands of people, true mainframe and IBM i experts, who work on these systems every day; we have built many of those systems and applications. We’re able to use generative AI and other AI techniques to parse through not just the code itself but also what is going on in the environment, including some of the telemetry data we collect in our own systems, and figure out what’s happening. So if something’s going wrong, we can give advice on the root cause and what you might want to do to solve the problem.

A great example: we work with IBM on a product called DB2 Automation Expert. It constantly looks at the environment to determine whether something is slowing down, whether memory is spiking, or whether other things are happening that suggest you might want to do a reorg, and what the right timing for that reorg is based on the peaks and valleys of system usage. That’s just a really simple example, but there are lots of those types of opportunities. A lot of times it’s just explaining what’s happening and how an application works, and not necessarily because you want to rebuild it or convert it to Java or something.
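The kind of telemetry-driven advice described here can be sketched as a toy heuristic: watch a usage signal and, when a health metric crosses a threshold, suggest doing the reorg in the quietest hour. The threshold and sample data are invented for illustration; the product’s actual logic is far more sophisticated.

```python
# Toy telemetry heuristic: when is a reorg warranted, and when to run it?
# Threshold and load figures are illustrative assumptions only.

def suggest_reorg_hour(hourly_load, fragmentation, frag_threshold=0.3):
    """If fragmentation is high, return the lowest-load hour for the reorg."""
    if fragmentation < frag_threshold:
        return None  # no reorg needed yet
    return min(range(len(hourly_load)), key=lambda h: hourly_load[h])

# Requests/sec observed in each of eight sample hours
load = [80, 75, 40, 12, 9, 15, 55, 90]

print(suggest_reorg_hour(load, fragmentation=0.45))  # quietest hour: index 4
print(suggest_reorg_hour(load, fragmentation=0.10))  # None: healthy enough
```

The same shape, observe a signal, compare against a policy, recommend a timed action, generalizes to memory spikes, slowdowns and the other conditions mentioned above.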

I might just want to understand it so I can maintain it more easily, or get people up to speed on it so that if something goes wrong they know how to deal with it. That is a big part of what we’re using AI for as well.

The third area is about improving our products and their user interfaces. This is agentic AI. If I need to reorg a database, instead of having to know some arcane command on a command line, I can just say, reorg that database, and it does it correctly. There are lots of things like that where I don’t need deep knowledge of, say, CICS or system internals. With our expertise we can make it simpler by creating human-language interfaces that enact the specific types of commands you want to get done. That’s another place we’re using AI.

So those are three good ones. I’m sure there are more, and more will evolve.

How to Measure Success

Silverman: Any advice you have for companies about measuring their success as they are headed down their AI journey?

Curry: Again, it goes back to what we said earlier, know what your outcomes need to be.

I’ll just go back to the example I gave you earlier about the technical documents. In that case, they knew the average time to answer a question: it took many hours to answer the types of questions coming in from customers. Their tech specialists would literally spend hours and hours trying to find answers across the different manuals and tech specs. Their goal was to reduce that substantially, and they got it down to minutes, literally less than five minutes. That kind of reduction makes it easy to show the value of AI.

Focus on the metric or the set of metrics underlying the business problem you’re trying to solve.

Quantify where you’re at today and set a goal for where you want to be, one you can reach using AI.

The 2 Main Problems

Silverman: Sounds great. Do you have any final words of advice?

Curry: You brought it up at the beginning, and it’s so true. AI is a business tool first and foremost, and the more you think about AI as [only] a technology, the less likely you are to get real value out of it. You can see that in the adoption patterns, where most people are adopting AI through the tools they’re already using. They’re getting business value out of those assistants, such as the co-pilots and ChatGPTs out there.

Start with that foundation. Where people struggle the most with AI, from what we’ve seen, is in protecting the data they’d like to use with AI, and then making sure that what comes out of the model is trustworthy. Those are the two areas that inhibit people the most. So work with a vendor that helps you with those things.

You’re generally going to be drawn to the vendors that show you magic, where a wonderful result comes out. That may be great to look at, but at the end of the day, if they’re not addressing those two problems, you’re never going to get it through your organization.

Silverman: Michael, thank you for your insight and information on how Rocket Software and your team are helping companies in their AI-led journeys.


More About Michael Curry

Michael joined Rocket Software in 2023, bringing with him extensive knowledge of building business software products, defining and executing software product strategy and implementing large-scale systems within Fortune 500 companies. As president, he oversees worldwide strategy, development and product management for the Data Modernization business unit.

Prior to Rocket Software, he served as an executive in residence for Great Hill Partners, a growth private equity firm in Boston, where he led a detailed market assessment of the data and AI markets. His background also includes leadership positions in multiple software companies, including 17 years at IBM leading strategy and execution in business lines across integration, artificial intelligence, data governance and analytics, and SaaS business applications. He has served as a member of the IBM Technology Team, the IBM AI Ethics Board and the IBM Distinguished Industry Leader Board. This deep experience underscores his track record across all facets of building software businesses.

Past Articles

1. Focus on ‘The Business of AI’ to Move From Hype to ROI  

2. A Look at the Different Types of AI and Their Value

3. From Foundation Models to Large Language Models

4. Current Applications and Data Fuel AI Innovation and Solutions

5. The API Advantage—Getting to Work with Generative AI

6. Cybersecurity and Trust in the Age of AI, With Paul Robinson of Tempus Network LLC

