
Analyzing Joblogs with IBM’s Db2 for i Services

One of the many great services available under the banner of “Db2 for i Services” is QSYS2.JOBLOG_INFO, which deals with jobs and job logs.



JOBLOG_INFO is a UDTF, or User Defined Table Function. There’s a little more to it than querying a simple table or view, but it’s not tricky at all. When you query a view or table you use:
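For instance (the library and table names here are made up for illustration):

```sql
SELECT * FROM mylib.mytable;
```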


But when you use a UDTF you have to say FROM TABLE and pass some parameters, as in:
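For example, to read the current job’s log (the correlation name x is required after a table function reference):

```sql
SELECT * FROM TABLE(qsys2.joblog_info('*')) x;
```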


The parameter '*' says to retrieve the information from the current job. If you wanted to look at a different job, you would change the parameter to the fully qualified job name, such as:
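For instance, with a made-up fully qualified job name in number/user/name form:

```sql
SELECT * FROM TABLE(qsys2.joblog_info('123456/QUSER/QZDASOINIT')) x;
```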


If you wanted to see the job log in reverse order, you can do this with an ORDER BY clause:

SELECT * FROM TABLE(qsys2.joblog_info('*')) x
ORDER BY ordinal_position DESC;

Let’s say you performed an operation in RPG. You have an array of conditions you can monitor for in RPG, but you decide it would be really sweet to retrieve the last message in the job log for further analysis. So you run:

SELECT message_id, message_text, message_second_level_text
FROM TABLE(qsys2.joblog_info('*')) x
ORDER BY ordinal_position DESC
FETCH FIRST ROW ONLY;

And you may see descriptive message text to the effect of RNQ1022, ‘Referential constraint error on file.’ And if your DBA was really good, your constraints may be nicely named, so you can retrieve a meaningful constraint name like ‘Item_master_to_item_class’ from a related message in the job log.

Basically, this boils down to these services being much easier to implement than calling APIs or injecting CL to retrieve messages. Suppose now you were tasked to find if any jobs on your system had a particular message in their joblog. And yes, you had to check joblogs as this message would not appear in QHST (history log) or any other such locations. You can do this by combining a couple of services.  

The first service is the JOB_INFO UDTF. This is comparable to a combination of WRKSBMJOB, WRKUSRJOB and WRKSBSJOB. The details of JOB_INFO may be found in the IBM i documentation. The optional parameters to JOB_INFO include:

JOB_STATUS_FILTER
JOB_TYPE_FILTER
JOB_SUBSYSTEM_FILTER
JOB_USER_FILTER
JOB_SUBMITTER_FILTER

Most of these default to *ALL, except for JOB_USER_FILTER, which defaults to the current user.

A sample search might be looking for all users who accessed any directory share, where you need to know the user and the IP address they came from.

 select a.job_name, b.message_id, b.message_text
  from table(qsys2.job_info(job_user_filter => '*ALL')) a,
  lateral (select * from table(qsys2.joblog_info(a.job_name)) x) b
  where a.job_name like '%QZLSFILET%'
      and b.message_id in ('CPIAD12');

LATERAL is a very useful clause. I like to think of it with an American football reference as basically a pass to the side. It runs the first search (job_info), uses the WHERE clause to toss the ball to joblog_info, which then presents its results off the side on the same row.


At this point, some of you may be wondering, “What about jobs that have already finished? Won’t their job logs have been spooled off already, making this invalid?” Well, several releases ago IBM came out with a couple of new features. One appears on CHGJOBD and CRTJOBD. It’s “Job log output,” or LOGOUTPUT. The useful new value is *PND. IBM has changed many of their job descriptions to this value. Basically, what this says is that when a job completes, instead of creating a spool file of the job log, it stays as a pending job log. The system value related to this is “Job log output,” or QLOGOUTPUT. It too now supports *PND. All 15 of my LPARs of IBM i are set to this, and have been that way for a few years.
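If you want to try this from SQL, here is a sketch using QSYS2.QCMDEXC, which runs a CL command string (the job description name is an assumption for illustration):

```sql
-- Keep completed jobs' logs in the pending state instead of spooling them.
CALL qsys2.qcmdexc('CHGJOBD JOBD(MYLIB/MYJOBD) LOGOUTPUT(*PND)');
-- And the matching system value:
CALL qsys2.qcmdexc('CHGSYSVAL SYSVAL(QLOGOUTPUT) VALUE(''*PND'')');
```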

Cleaning up Joblogs 

This is where I normally start hearing numerous reasons about why some shop cannot do this. Frankly, no shop CANNOT do this; they just choose not to.

How do I clean these up?
Basically, the same way you clean up spooled file joblogs. You use GO CLEANUP and it will use the value in “Job logs and other system output.”

Now, I use WRKOUTQ QEZJOBLOG to find job logs; how would I find these kinds of job logs?
I prefer the SQL approach; however, the command you are looking for is WRKJOBLOG. Prompt it and view the parameters and their defaults. WRKJOBLOG can be used for both spooled job logs and “pending” job logs.

What if I need a spool file version to send to IBM for analysis?
Simple: just run DSPJOBLOG JOB(…) OUTPUT(*PRINT). Or you could really blow their minds, use SQL to export it to Excel, and send them that.

My auditors require me to keep joblogs as spool files.
No, they really don’t—most auditors don’t even know what a spool file is. Do they require you to keep job logs on your Windows machines as spool joblogs? Do they require you to keep job logs on your Linux machines as spool joblogs? Or, if you get anything at all, will a log.txt file or some such thing suffice on those other platforms?

 I really need to store these joblogs and save them to backup. Therefore, I have to keep the spool files.
This is a strong opportunity for improvement. Wouldn’t it be easier to search joblogs, save them, and easily purge them off if you could store them all into the same table?  For example:
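A sketch of such a table; the name, library, and column definitions are assumptions, loosely mirroring columns returned by JOBLOG_INFO:

```sql
-- Hypothetical history table; column lengths are illustrative.
CREATE TABLE mylib.history_table (
  job_name                  VARCHAR(28)  NOT NULL,
  ordinal_position          INTEGER      NOT NULL,
  message_id                CHAR(7),
  message_type              VARCHAR(13),
  message_timestamp         TIMESTAMP,
  message_text              VARCHAR(200),
  message_second_level_text VARCHAR(4000),
  PRIMARY KEY (job_name, ordinal_position)
);
```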

And then populate that with a program which processes the job_info, and the joblog_info, table functions to insert rows into history_table which weren’t already there. I am hoping to have a sample program available shortly.
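Until then, here is a rough sketch of that population step in straight SQL, assuming a history_table keyed by job name and ordinal position (the table name, columns, and filter are illustrative assumptions):

```sql
-- Insert only the job log rows not already captured.
INSERT INTO mylib.history_table
  (job_name, ordinal_position, message_id, message_type,
   message_timestamp, message_text, message_second_level_text)
SELECT a.job_name, b.ordinal_position, b.message_id, b.message_type,
       b.message_timestamp, b.message_text, b.message_second_level_text
  FROM TABLE(qsys2.job_info(job_user_filter => '*ALL')) a,
       LATERAL (SELECT * FROM TABLE(qsys2.joblog_info(a.job_name)) x) b
 WHERE NOT EXISTS (
         SELECT 1 FROM mylib.history_table h
          WHERE h.job_name = a.job_name
            AND h.ordinal_position = b.ordinal_position);
```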

Now, to really think outside of the box, ponder this. Let’s say IBM initially required you to print your data to a spool file before you could query it with SQL, QUERY/400, or any other tool. Wouldn’t that be an absolute chore, parsing that data out of a spool file? Now let’s say IBM came out with a new release, and this new release allowed you to query the tables directly. Would you be throwing out strong arguments as to why there was no way you could do this? Would you be saying, “Our auditors require that we query from the spool files so as to have a ‘point-in-time’ capture of the data we are querying”? I truly pray this wouldn’t be the case.
