Announcing Manzan: The Future of IBM i Event Monitoring

In today’s landscape, it’s more important than ever to keep a holistic watch over your infrastructure’s day-to-day operations. With each new technology comes new considerations for performance, security, and reliability. In short, that means a growing number of metrics and events for your administrators to “keep an eye on.” Over time, this can become quite an involved task! To help address this, IBM is officially announcing the technology preview of a new solution for infrastructure monitoring and event management on IBM i: Manzan.
Manzan is an open source project designed to simplify the process of publishing IBM i system events to a wide variety of endpoints such as user applications, external resources, and/or open-source technologies. Whether you need to monitor system messages, trigger alerts, or consolidate logs for compliance and reporting, Manzan makes it easy to integrate IBM i with the rest of your IT environment.
Why Manzan?
For years, clients have been writing their own solutions or relying on commercial third-party software for monitoring the IBM i system. After all, IBM has provided a number of mechanisms, such as APIs and SQL services, to help interrogate various aspects of the system. Still, some facilities have gone underutilized, like the system “watch” facility (managed by the STRWCH/ENDWCH CL commands). On top of that, IBM i events might end up in a variety of places. Some things will end up in a job log, other things might be in a message queue somewhere, and yet others could be hidden in a stream file. Wouldn’t it be nice if a single tool provided visibility into all these events?
One could imagine that if such a tool existed, it would satisfy some very reasonable desires. What about a text message when the main accounting program crashes? Maybe an email based on history log events? Perhaps it’d be handy to post a message to a Slack channel when a Node.js program writes a warning to its log file. What about deploying a dashboard in tools like Sentry or Grafana Loki? That’s where Manzan comes in. With this tool, any of these tasks could be configured in mere minutes!
At its core, Manzan performs two functions. First, it consolidates a multitude of event types behind a “single pane of glass.” You no longer need different tools (or different hand-written programs!) for each of these types. Second, it allows these events to be integrated with a large number of technologies, including open-source monitoring tools and artificial intelligence platforms.
Now, let’s take a closer look at how Manzan works and how you can get started in just a few minutes.
Understanding the Architecture

The architecture of Manzan is best understood by first exploring two core components: Inputs and Destinations.
- Inputs are the sources of your data. In its simplest form, an input could be a stream file that you would like to monitor. However, an input can also involve the system watch facility, a powerful yet underutilized tool built into IBM i. This facility is driven by the STRWCH command, which starts the watch-for-event function and calls a specified program when a specified event occurs. This means that your inputs can also be a message on a message queue, a Licensed Internal Code (LIC) log entry, or a Product Activity Log (PAL) entry.
- Destinations are the locations you would like to send your data to. One option is to retrieve the data using your own custom ILE code, but in most cases you will want to use one of the many supported destinations based on your use case:
- HTTP/HTTPS endpoints (REST, etc.)
- Email (SMTP/SMTPS)
- SMS (via Twilio)
- Slack
- FluentD
- Kafka
- Sentry
- Grafana Loki
- Google Pub/Sub
- ActiveMQ
Manzan is the gateway that bridges your inputs and destinations. Manzan itself consists of two core components: the handler and the distributor.
- The handler receives and processes your inputs when they come from a system watch or exit point event. It does this by first starting all the system watches; then, when an event is detected, it transforms the data into a usable format and places it on a table or data queue. So how does it know what your inputs are? You define this information in a configuration file (data.ini).
- The distributor sends your data to its ultimate destination using the power of Apache Camel, the Swiss Army knife of integration. It retrieves data from the table or data queue and feeds it to your chosen destinations. Similar to the handler, the distributor is driven by a configuration file (dests.ini) that outlines what these destinations are.
Configuring Inputs and Destinations
As mentioned earlier, configuring Manzan is easy thanks to a few configuration files located in /QOpenSys/etc/manzan/. Each of these files follows the INI format and is structured to make it easy to add and remove both inputs and destinations in minutes. Rather than having to learn a different protocol for each new tool you would like to integrate, you simply update these standard files.
- app.ini: This file specifies the library containing an ILE component that Manzan needs in order to function. In most cases, this file can remain untouched.
- data.ini: This file outlines your inputs in a simple format. Each input has a unique ID and a type assigned based on whether you are watching a stream file, message queue, etc. This is also where you map your inputs to your destinations.
- dests.ini: This file outlines your destinations, each with its own unique ID. Depending on where you would like to send your data, you will need to specify some required properties, such as credentials.
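To make the relationship between these two files concrete, here is a rough sketch of how an input in data.ini is tied to a destination in dests.ini through a shared ID. The section names, file path, and property values below are illustrative placeholders, not taken from a real system:

```ini
; data.ini -- a hypothetical input; its "destinations" value must
; match the ID of a section defined in dests.ini
[my_input]
type=file
file=/tmp/example.log
destinations=my_dest

; dests.ini -- a hypothetical destination whose section ID matches
; the "destinations" value above
[my_dest]
type=slack
channel=my-channel
webhook=https://hooks.slack.com/services/...
```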
The best way to understand how easy it is to set up these configuration files is by diving into two practical examples.
Sending Messages to Slack
Slack is an awesome collaboration tool for keeping teams up to date. With Manzan, this can include keeping them updated on the status of an application running on IBM i. In this first example, let’s take a look at how you can get Slack updates on an IBM i application as it dumps error logs into a stream file.
To be able to send messages into a Slack channel, you will need to first create a Slack app using the following steps:
- Navigate to the Slack Apps website.
- Click Create an App and select From Scratch.
- Give your app a name, select a workspace and click Create App.

Next, you will need to create an incoming webhook for your newly created Slack App to post messages into a channel:
- Navigate to the Incoming Webhooks section under the Features heading.
- Toggle the feature on to activate incoming webhooks and click Add New Webhook to Workspace.
- Select the Slack channel you would like to receive updates in and click Allow.
- Copy the generated webhook URL and keep it handy. It will look something like https://hooks.slack.com/services/….

In my case, I created a Slack App named Manzan, gave it access to my IBM i Testing workspace, and gave it access to a channel named my-app-status.
Now that you have a working Slack app, you just need to update the Manzan configuration files for it to start posting messages. Let’s start by adding a new input to the data.ini file:
[file_my_app]
type=file
file=/tmp/my-app-log.txt
destinations=slack_out
filter=ERROR:
format=$FILE_DATA$
So, what exactly did you do here? Well, let’s pause and review what each of these lines is doing:
- [file_my_app]: Assigns file_my_app as the unique ID for this input.
- type: Specifies this input to be of type file since you want to monitor a stream file.
- file: Specifies the path to the file you want to watch. In this case, it is the application’s log file.
- destinations: Specifies slack_out as the destination to send this data to. This same ID will be defined in dests.ini.
- filter: Since you do not want Slack updates for every log message, you can define a filter to only listen for lines that include ERROR:.
- format: Defines the format in which the data should be sent. If not specified, all information will be sent. In this case, you are only interested in the log message itself, so you can use the special key $FILE_DATA$ to get back just the file content. All possible keys are listed here.
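As a variation on the input above, omitting the format key causes all available information to be sent rather than just the file contents. The file path and filter value in this sketch are illustrative:

```ini
; hypothetical input with no "format" key: per the behavior described
; above, all available information is sent when format is omitted
[file_other_app]
type=file
file=/tmp/other-app-log.txt
destinations=slack_out
filter=WARN:
```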
Now the last step is to add a new destination to the dests.ini file:
[slack_out]
type=slack
channel=my-app-status
webhook=https://hooks.slack.com/services/...
Again, let’s review each of these lines:
- [slack_out]: Assigns slack_out as the unique ID for this destination. This is the same ID you used previously in the destinations field of data.ini.
- type: Specifies this destination type as slack, which is the predefined type for Slack. All possible types are listed here.
- channel: Specifies the channel which you assigned your Slack app to post in.
- webhook: Specifies the webhook URL which you generated earlier.
Now you can go ahead and start up Manzan using Service Commander. With Manzan running, you can observe that whenever a matching message is added to the application’s log file, it is also posted to your Slack channel.

In my case, the Manzan Slack App posted a message in the my-app-status channel when my application logged an error due to failure to start its REST server.
Ingesting Logs Into Grafana Loki
Now that we have gone over a basic example, let’s take a look at a more complicated one involving Grafana Loki. Keep in mind that while this example is more involved in terms of setting up Grafana Loki, the steps to set up Manzan itself, using the same configuration files, are exactly the same.
For those who have not heard of it, Grafana Loki is a log aggregation system used for ingesting logs and processing queries. Once your application or IT infrastructure logs are ingested, Grafana gives you the ability to query and display them using custom-designed dashboards. Let’s have a look at how you can watch for messages being added to the QHST history log’s message queue on IBM i and ingest this data into Grafana Loki.
The steps below assume you have already created a Grafana Loki instance and have its credentials (URL, username, and password) on hand.
Just like the previous example, the first step is to add a new input to your data.ini file:
[watch_hstlog]
type=watch
id=sanjula
destinations=loki_out
strwch=WCHMSG((*ALL)) WCHMSGQ((*HSTLOG))
format=$MESSAGE_ID$ (severity $SEVERITY$): $MESSAGE$
Let’s review again what happened here:
- [watch_hstlog]: Assigns watch_hstlog as the unique ID for this input.
- type: Specifies this input to be of type watch since you want to monitor a message queue.
- id: Assigns sanjula as the session identifier (SSNID) for this watch. This identifier must be unique across all active watches on the IBM i.
- destinations: Specifies loki_out as the destination to send this data to. This same ID will be defined in dests.ini.
- strwch: Specifies additional parameters for the STRWCH CL command to describe how the watch should be started when Manzan starts up. In this case, you are listening for all messages added to the history log’s message queue.
- format: Defines the format in which the data should be sent. Unlike the previous example, several keys are used here to include more information.
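Because the strwch property passes parameters straight to the STRWCH CL command, a watch can also be narrowed to specific message IDs. The sketch below is illustrative rather than taken from a real configuration: it watches only for message CPF0907 (which reports a serious storage condition) instead of all messages.

```ini
; hypothetical variant: watch only message CPF0907 on the history
; log's message queue, rather than every message (*ALL)
[watch_storage]
type=watch
id=storagewatch
destinations=loki_out
strwch=WCHMSG((CPF0907)) WCHMSGQ((*HSTLOG))
format=$MESSAGE_ID$ (severity $SEVERITY$): $MESSAGE$
```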
Now that you have an input defined, you then need to add a new destination to the dests.ini file:
[loki_out]
type=loki
url=<loki_url>
username=<loki_username>
password=<loki_password>
Yet again, let’s review each of these lines:
- [loki_out]: Assigns loki_out as the unique ID for this destination. This is the same ID you used previously in the destinations field of data.ini.
- type: Specifies this destination type as loki, which is the predefined type for Grafana Loki.
- url/username/password: Specify the credentials for your Grafana Loki instance.
Now you can start up Manzan just like before, but this time observe that whenever a message is added to the history log’s message queue, it is also ingested into Grafana Loki.

Now that you have all these aggregated logs, you can design and build your very own dashboard to visualize this information. The example dashboard below can be used by system admins to monitor message frequency and quickly isolate critical logs.

Applications in AI
Naturally, having access to this level of information provides an unprecedented opportunity to apply artificial intelligence. Since Manzan can pump all of these system events into an Apache Kafka topic, for example, a program can ingest and analyze them in real time. And because Kafka is such a prominent industry standard, the data can be ingested into almost any AI stack. As an example, in the scenario below, I’m ingesting real-time history log data into watsonx.
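Since Kafka is among the supported destination types, feeding events into an AI pipeline could start with a destination entry along these lines. Note that the property names shown (broker, topic) are assumptions for illustration only; consult the Manzan documentation for the exact keys the kafka type requires:

```ini
; hypothetical Kafka destination; the property names here are
; illustrative, not confirmed against the Manzan documentation
[kafka_out]
type=kafka
broker=mybroker.example.com:9092
topic=ibmi-events
```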

The possibilities here seem to be endless. We at IBM will continue to explore AI applications using this data, but don’t let that hold you back. There’s plenty for you to explore and invent here! We’ve gotten some really exciting ideas from the community. To name a few possible use cases for AI and Manzan:
- Intrusion detection
- Security configuration analysis
- Performance analysis
- Anomaly detection
- Proactive application failure prediction
Better yet, these ideas are no longer science fiction. With the constantly evolving technology around deep learning, machine learning, and large language models (LLMs), there are many possibilities within our grasp.
How To Get Started
As of March 2025, Manzan is available as a Technology Preview. To get started, check out the documentation page which covers installation and more examples.
With this being a technology preview, we have a lot more on the roadmap, including support for more event destinations and more security features, like audit journal and network exit point support. Plus, we’ll be exploring synergy with the IBM i exporter for Prometheus. Since this is an open-source project, if you have a suggestion for a new destination to support, or other ideas, please share it with us by opening a GitHub Issue, or even feel free to contribute!