Dusty Rivers on IMS, CICS, Db2 and IBM Z
Reg Harbeck: Hi, this is Reg Harbeck and today I'm here with Dusty Rivers, an experienced IBM mainframe expert, an IMS person who has been working all across the mainframe ecosystem, especially in areas that have gotten enough attention that he has been able to be an IBM Champion for the past 10 years, as well as being heavily involved with SHARE. Well, Dusty, rather than spending a whole bunch of time telling you about that, why don't you tell us: how did you end up on the mainframe?
Dusty Rivers: Actually I started out of college on the mainframe. I started as a COBOL programmer and worked my way—my specialty in college was databases and database management systems. I actually worked for a company for a year in COBOL and then I moved into IMS as a DBA, so for the last 42 years I have pretty much been working on the Z system, on the mainframe, with IMS, CICS, and Db2.
Reg: Cool. Now given that, how did you find yourself working with IMS?
Dusty: I had studied database management systems, and so when I came in—it was a Bell system then. When I came into the Bell system, they were an IMS user. Like I said, I started as a DBA and then I moved into the system side. IMS was the database management system of choice for that company, so I focused mainly on that, and from that I started going to SHARE back in, you know, the late—I guess the early 80s, going to IMS sessions, going to DBA sessions, going to system sessions. So IMS was the first thing I actually worked on on the mainframe.
Reg: Okay. Now by that time, of course, IMS was about 15 years old, after having been invented for the—I guess for NASA and for the space initiative, but I'm going to guess it had a lot more features than just those focused on getting us on the moon by the time you were working with it. What were some things that really stood out to you as you got to know IMS?
Dusty: I guess then—like I said, I was working for a Bell system, and what stood out was the scalability, the reliability and the sheer number of transactions it managed. They were using it for customer service systems. If you know anything about telco, they had systems that managed the lines, the trunks, the actual wires in the ground and wires in the air, and it managed the customer systems. Just the sheer speed, reliability and security that it provided. At that time, you know, IMS was used heavily in the Bell systems, so we actually had an inner—I guess you would say an inner Bell system user group where all of the Bell systems got together and talked about IMS and what they did, so it was kind of a Bell system version of SHARE for IMS.
Reg: Hmm, interesting. Now I assume when IMS first came out the DB and DC were probably a single product and then they split up at some point but that's just a guess on my part. How did that all work and where did CICS come into all of this?
Dusty: Well, actually, one of the things is they did a history session—IMS came out, as you mentioned, for the Apollo system, and it was initially designed to keep track of the parts of the rockets. So yeah, DB and DC were together, and they kind of split it out: there was a transactional system, which was IMS DC and has now become IMS TM, and the database was a hierarchical system that did the parts management. You can think of the rocket being the top of the hierarchy, going down into all of the parts in the assemblies. Then they needed something at the time, they said, a little more lightweight for public utilities, so they started with CICS. If you think about IMS, there is queuing and there is some latency—a transaction comes into the system, goes into the queue, gets processed—where with CICS it came in, was processed and went right back out.
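To make that parts hierarchy a little more concrete, here is a minimal sketch, assuming a rocket-style parts list. The segment names (ROCKET, ASSEMBLY, PART) and the specific parts are hypothetical, not taken from an actual IMS database definition; the point is simply that the root sits at the top and every access walks down through parent and child segments, much the way DL/I calls navigate an IMS hierarchy.

```python
# Illustrative only: a nested Python structure standing in for an IMS hierarchy.
# Segment names (ROCKET, ASSEMBLY, PART) and the parts themselves are hypothetical;
# a real IMS database defines its segments in a DBD, with one root segment at the top.
rocket = {
    "ROCKET": "SATURN V",
    "ASSEMBLIES": [
        {
            "ASSEMBLY": "FIRST STAGE",
            "PARTS": [
                {"PART": "F-1 ENGINE", "QTY": 5},
                {"PART": "FUEL TANK", "QTY": 1},
            ],
        },
    ],
}

# Walk root -> assembly -> part, the way a hierarchical lookup descends
# from parent segment to child segment.
print("ROCKET:", rocket["ROCKET"])
for assembly in rocket["ASSEMBLIES"]:
    print("  ASSEMBLY:", assembly["ASSEMBLY"])
    for part in assembly["PARTS"]:
        print("    PART:", part["PART"], "x", part["QTY"])
```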
Reg: Okay. Now just thinking about that, you were mentioning to me that you kind of got right in the middle of the transition of moving to CICS and also the transition to bringing Db2 on board, so I'm just kind of curious about how that all plays out. What advantages did CICS and Db2 bring, as distinct from IMS, in this transaction environment, and what about IMS and its environment continued to be so attractive that you stayed with it?
Dusty: Well, one of the things is that the Bell system stayed with IMS, but once relational database management systems started being talked about and IBM announced—I think the code name was System R—they actually implemented Db2, and a lot of companies started looking at Db2. It's relational; it's easier, or they thought it was easier, to understand tables instead of the hierarchies in IMS, so they started implementing that and started looking at putting larger databases on there. Then, you know, some of the other companies in the Bell system used CICS—you can say “C-I-C-S” or “kicks,” however you want to pronounce it—and at the time they could have VSAM underneath the transaction system. So you could have CICS running VSAM, or IMS running with the DL/I databases, the IMS databases, but then came the transition where both CICS and IMS could use an underlying structure of Db2, so it allowed all of the scenarios: you could have IMS with Db2 databases, IMS with IMS databases, and CICS with Db2 databases.
Reg: Cool. Now in all of that journey, you know, you seem to have sort of reached out and learned these other things, but then held fast to the original core that you'd worked with and continued to build on that. One of the areas where you mentioned you did that was getting involved with SHARE. How did that all take place?
Dusty: Initially I started in SHARE, like I said, back in the early 80s for training. I mean, SHARE was and still is the predominant place where you could get technical training. I was a DBA and I wanted to—you know, if you wanted to get training, you could go to IBM training courses, or there were some other training courses, but at SHARE you could go and take a full week's worth of sessions that you handpicked. You could go to sessions on IMS, on—at the time it was MVS—and so I started out with that. Then, as my job started moving into relational systems and Db2, there was not really a relational project at that time, so I actually worked with a couple of people—we had Db2, and at that time there was an SQL product under VM called SQL/DS—and they formed a relational project. Someone long ago told me that if you want to get something out of SHARE, get involved, so I did. I started volunteering on the relational side, the relational projects, so I've always been working with SHARE as a volunteer, and then for a while there I worked with other companies that weren't as committed to going to SHARE. So for the last 15 or 16 years I've been back working on the IMS project as a volunteer, just as a project member, and I've been presenting at SHARE in that time. Then in the last couple of years I actually moved from IMS project member and volunteer to IMGT, which is the information management program manager role. Underneath that there are three projects: there is IMS; there is database, which covers all of the databases; and then there is SAN/Disk/Tape, which is a little different from both of those, but all of those sessions are under IMGT.
Reg: Okay. So now when did you become the program manager for that?
Dusty: It's been about a year and a half now.
Reg: Okay.
Dusty: The existing program manager I've known for a while, and he mentioned that he was actually moving over to take a board position, and he asked me if I would be interested. Of course, you know, my initial question was, well, what does that involve? What do I have to do? He has actually worked with me, and he still mentors me on a couple of things, because as program manager I'm still probably one of the newbies on the program council.
Reg: Hmm, interesting. Now you may be a newbie in that area, and that's one of the beauties, of course, of both the mainframe and SHARE: you can always find something new to be learning and doing. But at the same time you're pretty established, so much so that, as I mentioned at the beginning of this, you've been an IBM Champion for 10 years. How did that happen, and how do you stay in that role? Because as I understand it, it does not automatically renew.
Dusty: No, it actually—you have to stay active. You have to be an advocate. Initially I was nominated when I was working on IMS, working on integration for IMS, and you have to stay active. One of the things about the IBM Champion program is that it's not once a Champion, a Champion forever; you have to continue being active—so, active in Z. I'm a Champion for Z and Analytics, but the idea is, you know, I work with the SHARE group. I work with IBM TechU as a panel speaker. I do articles. I do blogs. I do IMS. I do user groups, virtual user groups or real user groups. So it's being an advocate for the platform, being an advocate for keeping—you know, I hate to use the phrase, but keeping the mainframe relevant, keeping IMS relevant, working to basically integrate those new systems with the mainframe systems.
Reg: Now of course when you talk about keeping the mainframe relevant, and really not just relevant but in many ways leading edge, one of the ways that you've personally done this is by getting involved with APIs as you've moved forward with your career. How did that all work out?
Dusty: So I was working for the Bell system years ago and we actually did a joint venture with a company to produce, back in the old SOA days, what was called an ORB, an object request broker—a way for distributed systems to talk to the mainframe through APIs. Then I moved from that into where we are now; for I guess the last 15 years we've been talking about APIs against IMS, against CICS. They initially started out as SOAP, you know, SOAP XML, and now they're RESTful APIs—the ability to allow mainframe resources, mainframe transactions like IMS or CICS, or we actually have other customers doing things like IDMS/DC, anything on the mainframe, to just look like another API provider to the distributed system. I work with customers all over the world that are looking to offer up APIs to those systems, sometimes very complex APIs, sometimes simple APIs, where the developers and consumers of those systems don't realize that what's happening on the back end is actually causing an IMS transaction to run, or three IMS transactions, or CICS transactions. Then we're seeing the flip side of that: they have mainframe systems that are talking to distributed systems. We have customers talking to Google APIs. We have customers talking to distributed apps, and the mainframe application thinks it's just talking to another mainframe application when it's actually invoking an API on another platform. Also, as we're moving into the new world of containers—Docker containers, OpenShift, Cloud Paks—all of that is still morphing, but at the same time what we're doing is offering APIs for consumption without the user having to have any idea that they're talking to a mainframe system.
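As a rough sketch of what that looks like from the consumer side (the host, path, and JSON fields here are hypothetical, not any specific product's interface), the distributed application just makes an ordinary REST call; whether an IMS or CICS transaction runs behind it is completely invisible to the caller.

```python
# Hypothetical consumer of a REST API that, on the back end, drives an IMS
# or CICS transaction. The URL and response fields are made up for
# illustration; nothing here is a specific API gateway's interface.
import json
import urllib.request


def get_checking_account(customer_id: str) -> dict:
    """Fetch a customer's checking account over plain HTTPS/JSON."""
    url = f"https://api.example.com/customers/{customer_id}/checking-account"
    req = urllib.request.Request(url, headers={"Accept": "application/json"})
    with urllib.request.urlopen(req) as resp:
        # The caller sees only JSON; it has no idea an IMS transaction
        # (or three of them) just ran on a mainframe to produce it.
        return json.load(resp)


# Example usage against a real endpoint:
# account = get_checking_account("1234567")
# print(account["balance"])
```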
Reg: Now there are two things that come to my mind when you say that. One of them is JDBC and the other is code pages, so let me start with the first one. How similar are these APIs, or how often are they using something like JDBC?
Dusty: Well, if you think about it, with JDBC you're going after a data source, like a database or something like that, so you really have to know the structure of the database. You may have to know the table name; you have to know column names. At the API level you may just be saying, “Hey, get me the customer information,” or “Get me my checking account,” so the API is more of a higher-level abstraction. Yeah, you can still do some of it with JDBC, and in some cases maybe what you really want is to go after a data source, and underneath the covers you may be using JDBC, like if you're going against a relational store, but the API you want to expose is a RESTful API.
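JDBC itself is a Java interface, but the contrast Dusty is drawing shows up in any language: data-source access means knowing the schema, while the API hides it. Here is a minimal Python sketch of that contrast, using sqlite3 purely as a stand-in for any SQL/JDBC-style connection; the table, columns, and values are hypothetical.

```python
# Data-source-level access: the caller has to know table and column names.
# sqlite3 stands in here for any JDBC/SQL-style connection; CHECKING_ACCT
# and its columns are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE CHECKING_ACCT (CUST_ID TEXT, BALANCE REAL)")
conn.execute("INSERT INTO CHECKING_ACCT VALUES ('1234567', 250.00)")

row = conn.execute(
    "SELECT BALANCE FROM CHECKING_ACCT WHERE CUST_ID = ?", ("1234567",)
).fetchone()
print("SQL-level access, schema exposed to the caller:", row[0])

# API-level access: the caller just asks for "my checking account" and the
# schema, the database, even the platform stay hidden behind the RESTful API
# (see the get_checking_account() sketch earlier).
```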
Reg: Okay, got it. Now as far as code pages, I mean, one of our great legacies on the mainframe is EBCDIC, you know, and so much of our data is in EBCDIC, and yet of course the distributed world has no idea what to make of EBCDIC, and translating between EBCDIC and ANSI or ASCII or whatever is non-trivial unless you've got somebody taking care of it 100%. How does that map to where you're working?
Dusty: Well, there are two sides to that. One is that with the company I work for, obviously the ASCII to EBCDIC translation is all done automatically, so you don't have to worry about it. The second part of that is when you have code pages like French code pages or German code pages or double-byte character sets, things like that, where you may have to have that concern, but we're trying to hide as much of that as possible from the consumer, so they don't have to worry about, OK, am I starting out in English and I really want to be in French, or vice versa?
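For a concrete sense of what that translation involves, here is a small sketch using Python's built-in codecs for US EBCDIC (cp037) and German EBCDIC (cp273). It is only an illustration of the byte-level difference; the point of the tooling Dusty describes is that the consumer never has to write this kind of conversion at all.

```python
# The same text encoded in ASCII and in two EBCDIC code pages.
text = "Hello [IMS]"

ascii_bytes = text.encode("ascii")
ebcdic_us = text.encode("cp037")   # US/Canada EBCDIC
ebcdic_de = text.encode("cp273")   # Germany/Austria EBCDIC

print(ascii_bytes.hex())  # 48656c6c6f205b494d535d
print(ebcdic_us.hex())    # same characters, entirely different byte values
print(ebcdic_de.hex())    # national-use characters such as brackets typically
                          # sit on different code points than in cp037

# Round-trip: decode the EBCDIC bytes back into a Python string.
assert ebcdic_us.decode("cp037") == text
assert ebcdic_de.decode("cp273") == text
```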
Reg: Well thank you so much Dusty. This has been really interesting and educational. Did you have any closing thoughts you wanted to share with us?
Dusty: Thank you, Reg. No, actually, it's been nice to reflect back on where I started, where I am now and how things—you know, there's a saying: the more things change, the more they stay the same. The mainframe has been around for my career and I think it will outlive me. Working here at GT Software I still get to see new applications come to life every day that are using the mainframe in the background in an API environment.
Reg: Cool. Well I really appreciate this and thanks for taking the time.
Dusty: Oh, you're welcome.