Using Power10’s Superfast AI With Event Streaming

IBM’s latest generation of Power processors is no incremental update. It comes with built-in acceleration for artificial intelligence (AI), achieved by integrating matrix math assist (MMA) units right into the chip. Trust me, this is a game-changer for IBM i!

I was first involved with IBM i and AI back in 2017. We were architecting all the ways to wire IBM i systems to IBM Watson and other AI technologies. If you attended COMMON that year, you may have even tried out our Watson-powered Harry Potter sorting hat. You’d place the adornment on your head, tell it a bit about yourself, and it would use the power of Watson to assign you to a Harry Potter house. I was a Slytherin. Scott Forstie, my fellow business architect, was a Hufflepuff.

Since then, IBM has provided a number of different technologies and approaches to AI. For starters, the IBM i open-source team ported data science and machine learning libraries to the platform. There’s also IBM Watson Machine Learning, “a full-service IBM Cloud offering that makes it easy for developers and data scientists to work together to integrate predictive capabilities with their applications.” There are IBM Watson APIs that provide value directly. We’ve had customers (like this one) embrace H2O Driverless AI. The list goes on, but the ability to exploit Power10 and MMA is the most powerful opportunity yet!

If you haven’t already, now is the time to explore AI. IBM i is a natural fit for this purpose. In order to be effective, AI algorithms need “ground truth” data. Simply put, the more “ground truth” data that’s available, the better the intelligence. And guess what houses tons of data: Db2 for i! Some companies have decades of historical data, and some process mind-numbing amounts of transactions every minute. It’s a treasure trove for cognitive computing.

There are a number of ways for an IBM i company to exploit this revolutionary processor technology. Today’s blog entry will focus on an approach that uses real-time event streaming from Db2.

Event Streaming—Why and How?

In some scenarios, you may wish to analyze transactional data in real time. A logical approach for this is to decouple the transaction processing from the analytics. An event streaming model can be used, whereby transactions are published in real time over some medium for another component to examine as needed.

In the example application we will discuss today, I chose to leverage Apache Kafka as a data streaming platform. Kafka is a form of message-oriented middleware that excels at the streaming of large amounts of data in real time. The data flowing through Kafka can be an important building block for any number of downstream consumers, analytics and AI included.

As Kafka’s documentation so aptly puts it: “Event streaming is the digital equivalent of the human body’s central nervous system. It is the technological foundation for the ‘always-on’ world where businesses are increasingly software-defined and automated, and where the user of software is more software.”
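
In concrete terms, “publishing” a transaction just means sending a small message to a Kafka topic. Here’s a minimal Python sketch of what that looks like using the kafka-python client; the broker address, topic name and event fields are placeholders I made up for illustration, and kafka-python is just one of several client options:

    # Minimal sketch: publish one stock-trade event to a Kafka topic.
    # "my-ibmi-host" and "stock-trades" are placeholder names.
    import json
    from kafka import KafkaProducer

    producer = KafkaProducer(
        bootstrap_servers="my-ibmi-host:9092",  # wherever the broker runs
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    # A sample event resembling the details a stock trade might produce
    event = {"symbol": "IBM", "price": 141.52, "quantity": 100}
    producer.send("stock-trades", event)
    producer.flush()  # block until the broker acknowledges the send

Everything downstream of that send is decoupled: any number of consumers can read the same stream at their own pace, which is exactly the separation of transaction processing from analytics described above.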

Because of its reliability, security and scalability, Kafka is used by many key industries, including some familiar industries that IBM i calls home (see Figure 1):

Figure 1: Analysis of the top 10 companies in key industries using Kafka, according to Apache Kafka’s web site: manufacturing, 10 out of 10; banks, 7 out of 10; insurance, 10 out of 10; telecom, 8 out of 10

Leveraging Event Streaming for AI

Without further ado, let’s talk about a sample application that takes this approach to machine learning. I was recently involved in a project where we applied neural networks to analyze Db2 transaction data in real time. More specifically, the sample application is DayTrader, an application that runs a (fictional) stock exchange. We configured it to use Db2 for i as the underlying database and applied the event streaming approach.

The end result? We leveraged AI to learn stock price behaviors and predict future stock prices.

That may sound pretty sophisticated, but the base concept is quite simple. Every time a stock order executes, that transactional data is streamed to Apache Kafka. From there, AI software can read this transactional data, learn and make predictions “on the fly.” And if that AI software is leveraging MMA on Power10, the performance will be blazing fast!

So how, exactly, does one wire all of this together? In an earlier blog post, I discussed Apache Camel and how it can be used to integrate different technologies. Naturally, when I wanted to stream Db2 transactions to Kafka, I turned to Camel. Here’s the flow every time a database transaction happens (a sketch of the consuming end follows the list):

  • A Db2 trigger transforms the transaction details to JSON and places it on a data queue
  • An Apache Camel route reads the message from the data queue and sends the message to a Kafka broker (server) that is, in this case, running on IBM i
  • Some application reads the message from the Kafka stream. Since Kafka provides many clients, there’s complete flexibility regarding which programming language to use and where to deploy it. In this case, we use a Python application that leverages Power10’s new MMA capabilities.
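
That last step is where the AI comes in. Here’s a minimal Python sketch of a consumer, again using kafka-python; the topic, group and broker names are placeholders, and the real integration code lives in the GitHub repo mentioned below:

    # Minimal sketch: read trade events from the stream as they arrive.
    # Topic, group and broker names are placeholders.
    import json
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "stock-trades",                        # topic the Camel route publishes to
        bootstrap_servers="my-ibmi-host:9092",
        group_id="price-predictor",            # consumer group for the AI component
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )

    latest = {}                                # most recent price seen per symbol
    for message in consumer:
        trade = message.value                  # e.g. {"symbol": "IBM", "price": 141.52, ...}
        latest[trade["symbol"]] = trade["price"]
        # ...this is where the AI component would update its model and predict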

In essence, the IBM i side of the solution just consists of a database trigger, a data queue and a simple Camel route (see Figure 2).

Figure 2: A database trigger and Camel route for streaming to Kafka

That’s pretty simple! If you care to take a peek, we have code on GitHub for the IBM i piece (and we’ll be publishing the AI piece soon). Better yet, the existing application doesn’t even need to know about the AI piece. All of that is kicked off by the trigger. A more detailed architecture diagram that includes the AI component might look something like Figure 3:

Figure 3: Architectural diagram showing integration pieces (credit: Sophia Huang of IBM)

We built an AI component to leverage Power10’s MMA capabilities by using the Python-based PyTorch framework and OpenBLAS libraries on Linux. We applied the N-BEATS deep learning model, which stands for “Neural Basis Expansion Analysis for interpretable Time Series forecasting.”
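
To give a feel for what such a model looks like, here is a heavily simplified, generic N-BEATS-style sketch in PyTorch. The lookback window, forecast horizon, hidden size and block count below are arbitrary placeholders, not the values from our actual model. The important point is that the work is dominated by matrix multiplies, which is exactly what a PyTorch build backed by OpenBLAS can hand off to Power10’s MMA units:

    # A generic N-BEATS-style model, heavily simplified for illustration.
    # Sizes (30-step lookback, 5-step forecast, 128 hidden units, 3 blocks)
    # are placeholders, not the values used in our project.
    import torch
    import torch.nn as nn

    class NBeatsBlock(nn.Module):
        """One fully connected block emitting a backcast and a forecast."""
        def __init__(self, backcast_len=30, forecast_len=5, hidden=128):
            super().__init__()
            self.stack = nn.Sequential(
                nn.Linear(backcast_len, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
            )
            self.backcast_head = nn.Linear(hidden, backcast_len)
            self.forecast_head = nn.Linear(hidden, forecast_len)

        def forward(self, x):
            h = self.stack(x)
            return self.backcast_head(h), self.forecast_head(h)

    class NBeats(nn.Module):
        """Doubly residual stack: each block's backcast is subtracted from
        its input, and the partial forecasts are summed across blocks."""
        def __init__(self, n_blocks=3, **kw):
            super().__init__()
            self.blocks = nn.ModuleList([NBeatsBlock(**kw) for _ in range(n_blocks)])

        def forward(self, x):
            forecast = torch.zeros(x.shape[0], self.blocks[0].forecast_head.out_features)
            for block in self.blocks:
                backcast, partial = block(x)
                x = x - backcast
                forecast = forecast + partial
            return forecast

    # Predict the next 5 prices from a window of the last 30 (random data here).
    model = NBeats()
    window = torch.randn(1, 30)
    print(model(window).shape)  # torch.Size([1, 5])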

Once these pieces are linked together, we have a real-time, AI-driven prediction engine. We also built a real-time visualization interface, which shows both the historical and (predicted) future stock prices using this methodology (see Figure 4):

Figure 4: Real-time stock price prediction

What’s more, the Power10 processor didn’t disappoint. We compared the performance to POWER9 and observed an estimated 4.5x speedup in this particular stock prediction case!

Why Use This Event-Based Approach?

This approach, like others, can leverage real-time information to provide the best insights. It’s unique in that it takes an offline, or out-of-band, approach: the analysis happens alongside the transaction rather than within it. This could be a suitable end goal for a variety of AI applications. In this case, the event-streaming model offers complete flexibility. Ultimately, there are three main components to this approach: a data source, a streaming platform and the AI (see Figure 5).

Figure 5: Three main components of the event-based AI approach: transaction, Kafka, AI

IBM i database transactions are typically the data source. The other components can be deployed wherever works best for your needs. For instance, the Kafka broker can be run on IBM i (and we do support that), or it could be run on a different platform. There are also several cloud-centric approaches, such as Red Hat AMQ Streams on OpenShift. Similarly, the AI component could be run just about anywhere. Because of this flexibility, the approach can be used regardless of network infrastructure or budget constraints. It also provides a future-proof design: If a component needs to be changed, upgraded or relocated, it won’t require a new solution to be architected. In addition, one can start this journey regardless of what hardware is in use. It could make perfect sense to start building your solution on POWER9, for instance, and be ready to “hit the ground running” with blazing-fast performance when you get to Power10!

I also spoke with Peter Hofstee, Distinguished RSM, Power Systems at IBM Research, who points out that this could be a seamless step of an incremental journey. As he puts it, “offline analysis using AI can be a precursor to tighter, inline, integration of AI function with the enterprise application. Because Power10 provides the AI functionality within the processor core rather than in a separate system or accelerator, the Power10 hardware allows non-disruptive integration of AI functionality once developed and validated. Thus, for example, once a fraud detection model has been developed and validated using the approach in this example, a user may then deploy the fraud detection AI as an inline step and prevent a fraudulent transaction from committing in the first place.”

The Time for AI Is Now!

Like I said earlier, IBM i and its precious data stores are a great place to start leveraging AI. Regardless of how you tackle it, the technology is ready, and the best time to deploy is ASAP! Feel free to contact IBM Lab Services and/or engage the many community resources. If you’re not sure, start with the IBM i open source folks. Later this month, I will also be presenting this topic at IBM TechU (2021 virtual edition). In any event, I hope you take advantage of the blazing-fast Power10 hardware!

