Tackling Unique Audit Requirements with RPG
IBM i experts Jon Paris and Susan Gantner share tips to tackle Data Structures and audit requirements.
Instead of discussing new RPG features, we thought we'd focus on describing an approach to a problem that many of you will have encountered in one form or another. As is often the case, this piece was "inspired" by a question on an IBM i internet forum - in this case the one at code400.com.
The questioner noted that they had two Data Structures (DS), both of which were based on the same external description. One contained the original data as read from the table; the second held the (potentially) updated content coming from the screen. Determining if any changes had been made was straightforward: simply compare one DS with the other. If they didn't match, then changes had been made.
Now came the tricky part. The questioner had been tasked with writing audit records to identify the specific field(s) that had been modified. Obviously one approach would be to write code to compare each of the fields in turn and report appropriately. But apart from being mind-bogglingly boring to code, it would need to be updated any time a new field was added to the mix, and it would be just plain ugly. In addition, code would have to be written specifically for each and every table that had this audit requirement. We all know that although we are told it will be "just this one table," it never stops at one, so a more generic approach is needed. There has to be a better way.
Indeed there is, and it makes identifying the field in question a relatively simple matter. The basic requirement is to have a means to determine the start position and length of each field within the DS. That permits you to compare the relevant portions of the DS using a substring operation. The fact that some of the fields will be numeric makes no difference at this stage - we are first interested only in comparing the bytes used to store the values.
There are at least three methods we can think of for obtaining the necessary information about such externally described DSs. One method is to use SQL to access the SYSCOLUMNS or SYSCOLUMNS2 views, which contain all of the required data. Alternatively we could have used the QUSLFLD API or, last but not least, the good old DSPFFD command with an outfile. Since DSPFFD is likely to be familiar to the majority of our readers, we chose that method for our example.
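For example, the outfile can be built with a command along these lines (the library and file names here are placeholders; substitute your own):

```
DSPFFD FILE(MYLIB/MYTABLE) OUTPUT(*OUTFILE) OUTFILE(QTEMP/FFDOUT)
```

Using QTEMP keeps the outfile private to the job, which suits a describe-at-startup approach like this one.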
The DSPFFD outfile contains one row for each field and contains a wealth of information, most of which we don't need. We devised a DS array to contain the information we needed for this example. This is what it looks like:
```
dcl-ds fieldDetail Dim(99) Qualified;
   startPosn Like(WHIBO);   // Offset within the record
   length    Like(WHFLDB);  // Length in bytes
   name      Like(WHFLDI);
   dataType  Like(WHFLDT);
   decimals  Like(WHFLDP);  // Number of decimal places
End-Ds;
```
This array was then populated by reading through the DSPFFD outfile and copying the required values into their respective positions in the array.
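That population loop might look something like the sketch below. It assumes the outfile was created as FFDOUT in QTEMP; the WH* field names come from the DSPFFD outfile's record format (the outfile is based on the model file QADSPFFD), and `count` is a hypothetical counter tracking how many entries are in use.

```
dcl-f FFDOUT usage(*input) extfile('QTEMP/FFDOUT');

dcl-s count int(10) inz(0);

read FFDOUT;
Dow not %Eof(FFDOUT);
   count += 1;
   fieldDetail(count).startPosn = WHIBO;   // Buffer position
   fieldDetail(count).length    = WHFLDB;  // Length in bytes
   fieldDetail(count).name      = WHFLDI;  // Internal field name
   fieldDetail(count).dataType  = WHFLDT;  // Data type code
   fieldDetail(count).decimals  = WHFLDP;  // Decimal positions
   read FFDOUT;
EndDo;
```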
This is all the preparatory work that is needed, and it can be done dynamically at the beginning of the program's run.
Identifying the Changed Field(s)
To simplify the subsequent logic, the first thing that we do is make a copy of the bytes for the field we are processing. This is done by taking a substring of the DS that begins at the field's start position (fieldDetail(i).startPosn) and runs for the length of the field (fieldDetail(i).length). The substrings of the before and after data are stored in the variable length fields fieldBefore and fieldAfter. Using a variable length field here simplifies the subsequent logic. At 256 bytes in length, it's much bigger than we need for this example. Your mileage may vary!
Once we have the temporary copies of the field's before and after state we can compare them to see if they have changed. If they have changed, and if that is all we need to know, then the job is done. But in the questioner's case, he needed to know the new values of the changed fields. So next we'll look at how that is done.
```
dcl-s fieldBefore varchar(256);   // Length assumes no fields
dcl-s fieldAfter  varchar(256);   //   exceed 256 bytes in length
...
For i = 1 to count;
   // Copy relevant portions of before/after images to a
   //   temporary field to simplify subsequent logic
   fieldAfter  = %subst( after: fieldDetail(i).startPosn:
                                fieldDetail(i).length );
   fieldBefore = %subst( before: fieldDetail(i).startPosn:
                                 fieldDetail(i).length );
   If fieldAfter <> fieldBefore;   // Field has changed so report it
      // Add logic here to report the before and after values
      Dsply ('Field ' + fieldDetail(i).name + ' has changed');
```
Handling the Numeric Values
To help us demonstrate the mechanics of this task, we are going to show how to process the two most common types of numeric fields - zoned and packed. To perform this task, we used a simple DS that allows us to handle both types.
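The full definition is in the downloadable sample; a sketch of how such a DS might look is shown below. The sizes are our assumptions, chosen so that both numeric views occupy the same 15 bytes as the character field they overlay (a 15-digit zoned field takes 15 bytes, as does a 29-digit packed field).

```
dcl-ds *n;
   mapNumeric  char(15);              // Raw bytes of the numeric value
   zonedValue  zoned(15)  Pos(1);     // 15-digit zoned view (15 bytes)
   packedValue packed(29) Pos(1);     // 29-digit packed view (15 bytes)
End-Ds;
```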
In essence, what we need to do is copy the bytes containing the numeric value into the rightmost positions of the mapNumeric field. We can then access the numeric equivalent via the zonedValue or packedValue fields as appropriate.
But there's a problem with this. Assume that we have a zoned field of 3 digits representing the value 123. The three bytes representing this will contain the hex value 'F1F2F3', but in order to be valid in the 15-digit zonedValue field they would have to be preceded by a series of x'F0's. Similarly, a packed value of 123 would be represented in two bytes by the hex value '123F', but the value required to precede a packed numeric would instead consist of all x'00's.
To deal with this, when dealing with a zoned numeric we set zonedValue to zeros, and when dealing with packed numerics we do the same thing with packedValue. We then proceed to overwrite the rightmost character positions of mapNumeric with the bytes containing our numeric value. Even if the original value was only 3 digits long, we have now ensured that the leading positions of the number are correctly initialized.
Once this has been done all that remains is to scale the value to match the decimal places in the original. Let's study this by looking at the code that handles the packed values. The zoned values follow the same pattern and you can study that in the downloadable code sample.
```
      // Determine where numeric field data starts in the
      //   mapNumeric DS
(A)   start = %Len(mapNumeric) - fieldDetail(i).length + 1;

      If fieldDetail(i).dataType = PACKED;
         packedValue = 0;   // Initialize packed work field
(B)      %subst(mapNumeric: start) = fieldAfter;

         // Scale the numeric value to account for decimals
         If fieldDetail(i).decimals > 0;
(C)         decimalValue = packedValue / (10 ** fieldDetail(i).decimals);
         Else;
(D)         decimalValue = packedValue;
         EndIf;

(E)      Dsply ('New value ' + %Char(decimalValue));
```
At (A) we calculate the starting position in the mapNumeric DS so that we can copy the bytes to the rightmost positions (B). In our example mapNumeric is 15 bytes long, so if we assume that we are copying 3 bytes (i.e. a packed number of 5 digits) the start position will be 15 - 3 + 1 = 13. So the data will be copied into bytes 13, 14 and 15, which is what we want.
At (C) we scale the value that is now in packedValue by using the decimal places information for the field. For example, a 5,2 packed field containing 123.45 arrives in packedValue as 12345; dividing by 10 ** 2 restores 123.45. If there are no decimal places then packedValue is simply copied to the result (D).
Finally at (E) we display the results of our labours. Of course, in a real program we would also be capturing the "before" value as well as the "after" update value. We would then write an appropriate audit record to a report or disk table or ...
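As noted earlier, the zoned logic in the downloadable sample follows the same pattern as the packed code above. A sketch of what that branch might look like - ZONED is assumed here to be a named constant for the DSPFFD data type code, just as PACKED is in the sample:

```
If fieldDetail(i).dataType = ZONED;
   zonedValue = 0;                    // Initialize zoned work field
   %subst(mapNumeric: start) = fieldAfter;
   If fieldDetail(i).decimals > 0;
      decimalValue = zonedValue / (10 ** fieldDetail(i).decimals);
   Else;
      decimalValue = zonedValue;
   EndIf;
   Dsply ('New value ' + %Char(decimalValue));
EndIf;
```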
Since this auditing function will potentially be used by many programs we would normally house it in a Service Program. But to make it simple for you to download and experiment with, we have it all in a single program.
This technique is presented as an example of what can be done. Clearly, this is not a comprehensive solution. For example, if you look at the complete program you will see that the field decimalValue is defined as having five decimal places. The result of this is that any changed field that had fewer than five decimal places will be displayed with additional trailing zeros. If you're just producing an audit report this probably doesn't matter. Just set the decimal places in decimalValue to the maximum number that any of your fields use and you're good to go.
You will also have to decide how to handle character fields. These require no additional processing, with the possible exception of variable length fields, but you will need to decide how to report them and what the maximum size of field that your program can handle should be.
In addition, if your tables contain any graphic (i.e. double-byte) columns, UCS-2 data, BLOBs, CLOBs, etc. ... then you may need to develop additional techniques for handling those.