Db2 for i, Under the Covers
IBM software engineer Ryan Moeller on SQE, AQP, query optimization and the rewards of interacting with the IBM i community
This transcript is edited for clarity.
Charlie Guarino: Hi everybody, this is Charlie Guarino. Welcome to another edition of TechTalk SMB. We are trying something new this month. We’re actually going to share the video of our discussion. I am thrilled to have Ryan Moeller with us, sharing his expertise today. Ryan is a software engineer for Db2 for IBM i working at IBM Rochester. Ryan, thank you so much for being part of this conversation today. It’s so great to see you.
Ryan Moeller: Thanks Charlie, it’s great to be here. Thanks for having me on.
Charlie: Absolutely. I just want to share that I first met Ryan just over a year ago at one of the large POWERUp events—in Denver, I believe it was—and you were a new speaker. You were not new to IBM, but you were new to the speaking circuit. How long have you been with IBM, Ryan?
Ryan: About 3 1/2 years.
Charlie: 3 1/2 years, but you were a newbie back then at COMMON POWERUp in Denver.
Ryan: Correct, yup. That was my first ever event. They sort of threw me off into the deep end and said swim, and I thought I did pretty well—clearly enough if I made a decent impression with you, Charlie. It’s been [laughs] a wild year since then.
Charlie: And swim you did, and look at you now. Now you’re ready for the Olympics, so to speak.
Ryan: 100-meter dash, here we go.
Charlie: 100-meter dash, here it comes.
Ryan: Dash, French freestyle? I don’t know what this is.
Charlie: I don’t know. That’s a whole separate topic, I suppose, but I’m thrilled that you’re really here. Thank you so much for joining me. Ryan, you are one of the newer faces in the community, but you bring a really renewed spirit and interest I think to Db2 programming, especially around SQL. I know you and I have had some conversations, and the one thing that comes to my mind when it comes to SQL for a lot of developers—and maybe some developers it should come to their mind—is the SQL query engine. I think that’s such an important part, or maybe it should be an important part of your programming. I have a lot of questions, but even from a high level, what is the SQL query engine—or we’ll call it now SQE—and why do I care or why should I care about SQE?
Ryan: Sure. So the database is composed of lots of different parts. SQE is—well, as the name suggests, it’s the SQL query engine, and it’s responsible for the optimization of an SQL query. It’s responsible for the execution, and it’s also responsible for what we call statistics. So we have a bunch of database statistics that we collect which help power the optimizer, so those are the three big components underlying SQE. The end goal is we take your rough representation of the query itself, and we execute it and return the rows. Of course, in between, that’s where all the magic is, right? So it’s lots and lots of complicated statistics and database science and technology and that sort of thing.
Charlie: So all that real heavy math is happening independent of me as a developer. I don’t necessarily need to care about what’s happening; not that I don’t care but IBM is doing most of the heavy lifting for me in that regard.
Ryan: Correct, yup—and you bring up a good point. Some people say well, why should I care? It’s handled by IBM. This thing just works—and it does just work for a lot of folks, lot of queries, that sort of thing. But you need to understand a little bit about how it works so that you can maybe make your queries perform the best or make them in a form where the optimizer can best optimize them. These are absolutely very complicated topics. People have built entire careers off of this sort of thing. So maybe today we’ll get into sort of the first steps into how you might look into these big, complicated processes and things you should keep in mind.
Charlie: You know what I find, Ryan? I find when you’re dealing with these types of topics, it’s very important to give a bit of historical context, because it helps you build a timeline up to where we are today. And in the context of SQE—or SQL—SQE is new, or newer. It’s not the original engine. We had the classic query engine. I’m not sure it even had a name; it was just the original engine, and then it became dubbed the classic once SQE came out. But tell us about CQE, the classic query engine, and why do I need to know about that even today? Is anything even using it anymore, or is everything now using the newer engine?
Ryan: Yeah, so CQE was the original engine that essentially powered the database during the pre-SQL times, and we sort of added SQL functionality to it. So it was a very old architecture, and many, many years ago we decided to replace it. You should still know about CQE—the classic query engine—because it still exists on your system. There are very, very few things that will run in CQE. The main one that I know about is read triggers, so if you have read triggers on a database table, the queries issued by those read triggers will all run through CQE. That’s a very uncommon case, and there are other, more uncommon cases, but I would say 99% or more of folks’ queries will be running via SQE. We started working on SQE—well, I say we. I can say we as part of the team now, but I was not even born yet when SQE was first being created. So that’s kind of fun, seeing something like the original file creation dates. I’m like wow, I think that’s pretty cool. But we created it because CQE was getting pretty old, pretty stale, and it was very difficult to maintain from an engineering perspective. Alongside that, with SQL booming in popularity, we saw the ability to increase performance of the optimizer by moving it from XPF down into SLIC. So for those unfamiliar, XPF is kind of your operating system layer where applications live, that sort of thing, and SLIC is the licensed internal code—it’s the kernel. So there are a lot of performance benefits to be had by running the optimizer in the kernel. Note that doesn’t necessarily result in performance improvements in the query running, but rather in the optimization process. But I’m sure most of my listeners here have probably run into a situation where they issue a query and it spins and spins and spins. That might be the runtime, but we have seen cases where queries could optimize for tens of minutes or hours and even more than that.
Of course those are ugly situations, big nasty queries, but the performance if we did that in CQE would be so much worse. So those two reasons—maintaining our team’s ability to extend the optimizer and engine, and the performance gains—were pretty darn compelling reasons to move everything over.
Charlie: But yet you mentioned a couple of outliers. You did say, for example, read triggers, so should that be an impetus for me to maybe change my read triggers into something else—you know, to move away from CQE and get out of there? Will that help my application as a whole perform better? Is that enough reason to make me want to not use read triggers, for example?
Ryan: I’ll give you our favorite answer, which is it depends. So CQE isn’t necessarily a bad sign, right? It’s just older technology. In fact, we get some folks who are upgrading from very, very early releases and they’ll go to something like—ah, within the last 5-ish years, upgrade their OS—and things will actually perform worse sometimes. It’s very, very rare but CQE has its own benefits and because it’s very, very simple, sometimes it nails that simplicity, right? Sometimes we get a little bit too complicated. I would not necessarily look at the usage of CQE as a sort of red flag. Certainly it means you might be sort of using your application or your application is written in a more legacy sort of format or using legacy tools. That is a whole separate discussion on you should upgrade because you want new development talent to come in and work on that stuff. But from an SQL performance perspective it’s not necessarily super concerning, but I would suggest trying to move stuff over to SQE if possible. But once again, that’s such a small amount of queries, such a small amount of your overall database footprint that unless you’re seeing significant CQE usage—which you can investigate using some of our tools—it’s nothing that I would really concern myself about too much.
Charlie: So, if I’m a pedestrian—I hate to use the words run of the mill—but a normal developer, this is not something I should spend a lot of time on unless I have significant performance issues, and then it’s worth doing a deeper dive, it sounds like.
Ryan: Yup, and if you take what’s called a plan cache snapshot—and getting super into that is probably not in today’s discussion—you can see the number of queries that have run in your plan cache with CQE vs. SQE, which is actually very, very nice because you can tell, you know, maybe 10% of your queries are running through CQE, which is quite high. And if you’re having performance issues with those specific queries, yeah, it might be worth looking at.
Charlie: So suffice it to say there are indeed tools to help us identify the CQE queries and to help us move out. Are there any ways that we can move out of CQE deliberately? You mentioned read triggers—if I change a trigger, for example—but if I’m doing a standard query via SQL, I suppose I wouldn’t be—unless there are certain functions in SQL that still go to the CQE.
Ryan: There are some old interfaces that exclusively use CQE, but to be honest I’m not very familiar with them. I wish I had a list that I could sort of spill out here, but the vast majority—even open query file, which used to go to CQE—the vast majority of open query file things will actually go to SQE now. But that is one interface that will use CQE more often compared to a SQL environment. So I guess if you are moving towards more modern tooling in terms of using SQL—maybe it’s JDBC or ODBC connectors, those types of things—away from older interfaces like Query/400 or open query file [OPNQRYF], you are just sort of innately moving over to SQE. I wouldn’t be super concerned about the granular level. Maybe look at it more from a big picture and move your technology over to more modern, more powerful, newer things.
Charlie: But I guess the ultimate goal is better optimization. I mean that’s I think what we strive to do. We hope under the covers it’s happening. But for those who don’t really know in great detail—I mean I know what the word optimize means, but how is it in this context? What is a better optimized query? What does that actually mean from a developer’s perspective, or maybe even from a user’s perspective?
Ryan: Sure. So I guess from a user’s perspective they don’t know if it’s completely optimal, but every user has some sort of expectation for performance, right? If you are logging into your bank’s website and it takes a minute to load your page, that’s not very exciting; it’s a pretty terrible user experience. So that’s probably not optimal, or it’s not been optimized very well. And from a business perspective, maybe you’re creating batch reports and you fire one off at midnight so that employees can look at it in the morning, and if it doesn’t get created by, let’s say, 9 a.m., that’s also not optimized well enough, right? So it’s kind of about what the user experience is like and what kind of thresholds you need to meet so that those users can have a good experience. From an actual engineering perspective, what the optimization process or goal looks like is that we want to minimize the amount of time we spend actually running these queries. When you issue a query, the more complex it is, the more possible varieties or flavors of implementation we could use—so for example, let’s say we have a query with two tables and we’re going to join them. We could join Table A to Table B or Table B to Table A, right? There are just two possible orders there, but if you start adding a third or a fourth or a fifth table, things get much more complicated. Things get even more complicated when you take indexing into consideration along with ordering and grouping and all sorts of stuff. So all of a sudden this relatively compact, nice, neat, straightforward query becomes very large, and all of these different possible implementations need to be weighed out. The optimizer cannot try all of them, but it has a bunch of smart logic inside to attempt to get the best performing plan that we can find.
And by best performing, we are looking at clock time—so elapsed time for someone clicking a button, clicking a stopwatch and waiting and seeing how long it takes to run. So that is the metric we’re sort of optimizing for, and we try to minimize that.
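[To make the join-ordering explosion Ryan describes concrete, here is a small illustrative sketch in Python. This is not Db2 code; it only counts the candidate join orders, which is just one dimension of the real search space.]

```python
import itertools
import math

# The optimizer's search space grows factorially with the number of tables:
# every permutation of the join order is a candidate plan, before even
# considering indexes, join methods, ordering, or grouping.
def join_orders(tables):
    """Enumerate every possible join order for the given tables."""
    return list(itertools.permutations(tables))

two = join_orders(["A", "B"])                  # the Table A / Table B example
five = join_orders(["A", "B", "C", "D", "E"])  # add a third, fourth, fifth table

print(len(two))    # 2
print(len(five))   # 120, i.e. 5!
assert len(five) == math.factorial(5)
```

With ten tables the count is already 3,628,800, which is why the optimizer relies on smart pruning rather than trying every plan.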
Charlie: That’s a lot of magic happening under the covers. That’s mind-boggling to me actually.
Ryan: Yeah, it’s a ton of magic.
Charlie: And even the way you structure your query, that can have from what I’m hearing a direct impact on how it runs, obviously. I imagine Visual Explain would be helpful in helping me identify one query vs. another query and seeing how they’re going to perform?
Ryan: Yup, Visual Explain is a great tool. It basically will take a picture of your query—and a query is comprised of a bunch of different pieces we call nodes. So one example of a node is a table scan, which is where you look at a table and you start looking at the data from row 1 and go all the way to the end. And there are lots of other different flavors of these nodes. There are temporary data structures like lists, sorted lists, hash tables, other sorts of things. And all of these nodes are built together to make this picture of what the query will look like. Oftentimes this Visual Explain information is going to be estimated, so it’s what the optimizer thinks the query will return in terms of row numbers and the amount of time it will take, but it’s not always perfectly accurate. We actually don’t mind that too much because it’s all about relative comparison. If we are off by 10% in terms of the row counts—like we think you’re going to get back 100 rows, but really you get back 110—well, if we’re always off by 10%, that’s actually great, because it’s all about Plan A vs. Plan B vs. Plan C. Now obviously if we can deliver the most accurate estimates we can and publish that via Visual Explain to a user, that would be great. But it’s all about these relative comparisons between plans and which one is faster compared to the other. So that’s some inside baseball on some of the optimizer tricks.
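[The idea of plan nodes can be demonstrated with any SQL engine. The sketch below uses SQLite’s EXPLAIN QUERY PLAN purely as a text-based stand-in for what Visual Explain draws graphically; it is not Db2 for i tooling, and the exact plan text varies by SQLite version.]

```python
import sqlite3

# Illustration only: SQLite stands in for Db2 for i here. The concept is
# the same -- the engine picks plan nodes (table scan vs. index search),
# and an explain facility lets you see which node it chose.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, state TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "MN"), (2, "NY"), (3, "MN")])

# With no index, the only option is a full table scan
# (read from row 1 all the way to the end).
before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE state = ?", ("MN",)
).fetchone()[-1]

# Give the engine an index, and the chosen node changes to an index search.
conn.execute("CREATE INDEX orders_state ON orders (state)")
after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE state = ?", ("MN",)
).fetchone()[-1]

print(before)  # a table-scan node, e.g. "SCAN orders"
print(after)   # an index node mentioning orders_state
```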
Charlie: Wow. So there are developers who can go through their entire career literally and not know what’s going on really in the bowels of the system—
Ryan: Yeah, yeah.
Charlie: And again, do they really need to know to the level to which you just described? Does the everyday developer need to know what you just described in any great detail? Might it make them a better developer if they know more about some of this stuff?
Ryan: This very conversation proves that the answer is no—funny enough. If you go and ask your average IBM i shop, do you have a database engineer? The answer is almost certainly going to be no, because we handle so much of that hard work for people, right? One of the big value propositions of the IBM i is that it is self-maintaining and it doesn’t cost a whole lot to actually maintain. You don’t need to have a huge staff. If you look at other database platforms, the maintenance, especially in terms of database, is massive. You might have three or four database engineers or admins only focused on performance and making sure things run well. So the average person doesn’t really need to know a whole lot about this. I would argue though that if you do want to learn about query optimization, how to write better queries in terms of performance, readability, maintainability, that sort of stuff? That is what really separates good engineers from great engineers. So if you are an RPG developer and you’re writing SQL embedded in your RPG and it just works for you but you want to dig in, that’s a really great tool to have. You can make your queries run faster, make your applications more responsive, make your users happier. That’s kind of the end goal of this, right? We want to deliver high performance, reliable software to the people that use them and expect something out of them. We’re just one sort of small piece of that.
Charlie: Ryan I know one thing we spoke about before we got into this conversation was the adaptive query processor—or adaptive query processing, I should say: AQP adaptive query processing. And it was something that you seemed very excited about, I guess, or one of your favorite topics to discuss. What can you tell me about AQP and why it’s so important in this SQE conversation?
Ryan: Sure—and AQP is a very interesting thing that is secondary to the main optimization process. So when a query runs, the first step is we must optimize it and come up with some sort of plan to execute. When we execute that plan, we are, you know, crunching the numbers and building these rows to return to a user—and this is a very expensive operation, right? These queries could be anywhere from milliseconds to hours depending on what you’re doing. And some of these larger queries, we don’t get right the first time. Like I said, we are all estimate-based. So as we go through, we try to figure out the most optimal way to get the data. Sometimes we’re wrong. AQP was originally developed to try to correct some wrong judgments or bad estimations from the optimizer. AQP is actually a secondary thread that runs alongside your query, and it kicks in after 2 seconds. So it boots up and it starts looking at the query as it is running. It actually does quite a few different things, but the main thing it does is look at the row counts and the time spent on each node in that query tree. It takes a picture of what we are expecting and what we’re actually seeing, and if those look similar, that means our estimates were good and we can probably trust that our optimization process worked well. It’s worked correctly, and our current plan is performing pretty well. However, if those two pictures are different—maybe the rows are off by a factor of 10 or something like that—we can start to question whether or not our optimization process worked well. And one of these big things is what we call join ordering. I talked earlier about joining Table A to Table B and Table B to Table A—that’s called join ordering. So it’s a very important topic in databases, especially database optimization, because the order in which you join these tables really matters. Of course you as a user can’t really do much to influence that, but that’s the optimizer’s job.
Sometimes we get this wrong, and sometimes it’s not even our problem. If we are given a really weird join clause—for example, we’ll see a join on a substring of a VARCHAR column, which is really nasty. That statistics component that I talked about earlier? It will collect information about all the different columns and the values contained in each column. For example, if you have a states column, it will see roughly how many values are assigned to Minnesota, or to Illinois or to New York, and the exact algorithm there is outside the scope of this conversation. But it can’t look at all the values. It’s just not possible because there are so many. I should also note that it looks at the whole value; it doesn’t look at pieces and parts of the value. So if you’re taking a substring of a character column and then using that to join, we have very, very little success in using that to join those two tables together. So AQP might find that if your join predicate is very strange, we might step in and say ah, we actually messed this up. Can we switch those two orders? It will go in and change them and restart the query. So it actually kills the query entirely. You might be 90% finished, but we are going to kill it and restart it with this better plan. You might think, well, why? We’re almost done, let’s keep going. Two things: One, we actually don’t know how far along we are. Because we’re all estimate-based, we don’t know how good our estimates are. It’s just not really possible. So we don’t know if we’re 90% done or 10% done. All we know is that we saw something bad, and we should bail out. The second thing is reuse. When you optimize a query or when you run your query, it’s optimized and stored in what we call the plan cache, and that keeps these plan objects around for future use, so that if you run that same query over and over again, we only have to optimize once and we can reuse those plans we’ve built to run the query.
It would be a real shame if we didn’t end that poorly performing query, because we would just continue to reuse it over and over. So AQP will actually replace the plan in the plan cache as well, so all your future executions will be even faster than the one before.
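[A toy sketch of the optimize-once, reuse-many-times idea, plus the AQP-style plan replacement Ryan describes. The class, the plan tuples, and the counter are all invented for illustration; the real Db2 for i plan cache is nothing this simple.]

```python
# Hypothetical plan-cache sketch: optimize only on a miss, reuse the stored
# plan on repeat runs, and let an AQP-style correction swap in a better plan
# so future executions pick it up.
class PlanCache:
    def __init__(self):
        self._plans = {}

    def get_plan(self, sql, optimize):
        # Optimize only if we have never seen this statement before.
        if sql not in self._plans:
            self._plans[sql] = optimize(sql)
        return self._plans[sql]

    def replace_plan(self, sql, better_plan):
        # What AQP does after restarting a poorly performing query:
        # the corrected plan replaces the old one in the cache.
        self._plans[sql] = better_plan

calls = []
def optimize(sql):
    calls.append(sql)              # track how often we pay optimization cost
    return ("join A->B", sql)      # pretend plan: a chosen join order

cache = PlanCache()
sql = "SELECT * FROM a JOIN b ON a.id = b.id"
cache.get_plan(sql, optimize)
cache.get_plan(sql, optimize)      # second run reuses the cached plan
assert len(calls) == 1             # we only optimized once

cache.replace_plan(sql, ("join B->A", sql))   # AQP-style correction
assert cache.get_plan(sql, optimize)[0] == "join B->A"
```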
Charlie: Hence the adaptive.
Ryan: Exactly, yes. And it will do this multiple times for each query, and we’re constantly adding new things to the optimizer, including adaptive query processing. One of the things we did recently was symmetric multiprocessing, or SMP. We use AQP now to help with that. People found it difficult to manage SMP usage on their system, and now AQP will automatically look at queries and, based on their runtime, determine if they will benefit from SMP or not. So once again, your question about whether people need to know these things? They don’t need to for success, because via AQP we now handle SMP usage in your queries automatically. We take these things away and do them for you because it makes your job so much easier, but if you understand the guts of all this I think you can be a really successful developer. But that was just a small bit about AQP.
Charlie: But a significant bit, nonetheless.
Ryan: Absolutely.
Charlie: That was amazing. So with AQP you mentioned, for example, Minnesota and then New York or Illinois, or any state, it doesn’t make a difference. Is it data-based, meaning that if I had a different set of states, or more of one state than another, would that affect how the query itself would run or how AQP would come up with a different plan perhaps? Is it data-dependent, or is it just the query design that really influences how it works?
Ryan: That’s a great question, and there’s a lot to the answer. I won’t get into all of it. It’s based on the data that we’re seeing come back. The tricky thing about AQP and the optimizer in general is that when you issue a query, we will strip out the literals that you put in. So if you have like a =5 or =Wisconsin or =Minnesota or something like that in your query, the optimizer typically doesn’t see those exact values. So it won’t see the =MN. It will see =? We call those host variables. One of the common things we see is that you might have a plan that works really well for one set of host variables. So for example, maybe we’re looking at the number of residents that live in each county in New York. Now if we change that state filter to Minnesota, it’s a completely different set of data, right? It might be in that same table, and the query is roughly the same, but the underlying data might be extremely different, and that might cause that plan to actually be suboptimal.
Charlie: Sure.
Ryan: We’ve had checks in place over the years that allow you to do checking of these host variables when you rerun the queries, but they can be slightly disruptive. So many people have them disabled by default, which is totally fine, but AQP exists also to serve that case, right? Maybe we ran initially with one set of host variables, but then as we continue to process this query, we are running with different host variables that might not be running in an optimal way. So AQP can still step in and say hey, let’s change this around. It’s also, like you mentioned, based on the query structure too, so the data and the structure are very intertwined. They both need to be in a coherent state for us to be happy with the performance.
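[The literal stripping Ryan describes corresponds to parameter markers in SQL interfaces. The sketch below uses SQLite via Python’s sqlite3 module as a stand-in for Db2 for i; the table and the resident counts are made up for illustration.]

```python
import sqlite3

# Parameter markers in action: the engine sees "state = ?" rather than
# "state = 'MN'", so a single cached plan can serve many literal values --
# even though the data qualifying for each value can differ wildly.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counties (state TEXT, name TEXT, residents INTEGER)")
conn.executemany("INSERT INTO counties VALUES (?, ?, ?)", [
    ("NY", "Kings", 2736074),      # invented sample rows
    ("NY", "Queens", 2405464),
    ("MN", "Hennepin", 1281565),
])

# Same statement text, different host-variable values.
query = "SELECT COUNT(*) FROM counties WHERE state = ?"
ny = conn.execute(query, ("NY",)).fetchone()[0]
mn = conn.execute(query, ("MN",)).fetchone()[0]

print(ny, mn)  # 2 1 -- same plan, very different qualifying data
```

A plan tuned while running with `("NY",)` may be a poor fit once the same statement runs with `("MN",)`, which is the situation AQP can step in and correct.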
Charlie: You know, another question I hear—or, not a question. I mean it’s a statement of fact that what makes Db2 for IBM i so special, I think, is that it really is optimized and fine-tuned for the hardware it is running on, and that’s an important thing. I know you mentioned to me, even before this conversation, that for example the types of disks that are in play at any given time have a direct influence on the query planning, and things like that. So expand on that. Tell me more about why this is so special and why we say it’s optimized for the hardware or for the OS, or maybe both. Maybe it’s optimized better. So fill in some of the blanks on that point.
Ryan: Sure, and I’ll start sort of from like an OS perspective, and then we can talk about disks and hardware and that sort of thing. Db2 for i is built into the operating system as I think all listeners would know. That’s pretty unique among database platforms. You look at other database platforms besides Db2 for i and Db2 for z/OS, and there’s not too many integrated database solutions. That gives a lot of advantages to be integrated. Like I mentioned earlier, we moved our optimization process into the kernel. I don’t think that happens for too many other databases, and that’s a really great performance benefit.
Charlie: Agreed.
Ryan: It also allows us to have some really tight coupling with other parts of the operating system. So for example, when we store data in memory, we are actually able to tell the component that manages memory usage that this is a very important, high-priority process that’s running: do not take this data and put it out to disk, page it out of memory and, you know, put it in your swap file. Don’t do that, because we need this data right now. Other platforms have some controls over that, but they’re not nearly as fine-grained and as powerful as ours are. Pretty nice. On that same page with hardware, when we do all of this optimization we can actually look at, like you mentioned, disks. So we can actually see if your disks are SSDs or not. Of course SSDs perform far better than spinning disks, but it’s not quite so black and white. Spinning disks do really well—I shouldn’t say really well. They do okay with sequential reads. So if you’re just going to read from Address 1 to Address 10,000, it’s much quicker than if you said go to Address 1, then 10,000, then 200—
Charlie: Sure.
Ryan: So when you’re jumping around, SSDs are great at both. So that can allow us to understand the I/O characteristics of your query, and we can plan on those and understand what might be better, or what solution or query tree structure might be best for your specific query. We can also look at disk response times, which is very interesting. So we can look at how many milliseconds it has taken to get an answer or to get data from the disk itself. If we see that a disk is really, really slow, we can factor that into our calculations. There’s also hardware coupling, where we run on a very limited set of hardware. Other database platforms can run on a whole swath of hardware, right? x86 is a pretty, pretty large product in terms of the number of processors and memory architectures you can run on, those sorts of things. We have one or two processors you can run on, so we can make optimizations for that. Like the width of a cache line in your CPU: we know that, and we can optimize the data structures within the optimizer to take advantage of it, and there’s lots of other stuff. But we have that really fine-grained control over our hardware and over our software, and it provides some pretty awesome advantages for performance, both in the optimization process and the runtime.
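[To show how sequential vs. random I/O and measured response times could feed a cost model, here is a deliberately simplified sketch. Every number in it is invented for illustration; the real Db2 for i cost model is far more sophisticated.]

```python
# Hypothetical I/O cost sketch: weight page reads differently depending on
# device type and access pattern, and prefer a measured per-page response
# time when one is available (as Ryan describes Db2 doing with disk stats).
def io_cost_ms(pages, sequential, ssd, measured_ms_per_page=None):
    if measured_ms_per_page is not None:
        per_page = measured_ms_per_page      # trust observed response times
    elif ssd:
        per_page = 0.1                       # SSDs: cheap either way
    elif sequential:
        per_page = 0.2                       # spinning disk, sequential: okay
    else:
        per_page = 8.0                       # spinning disk, random: painful
    return pages * per_page

# Random access is far cheaper on SSD than on a spinning disk...
assert io_cost_ms(1000, sequential=False, ssd=True) < \
       io_cost_ms(1000, sequential=False, ssd=False)

# ...while sequential reads narrow the gap considerably.
print(io_cost_ms(1000, sequential=True, ssd=False))   # 200.0
print(io_cost_ms(1000, sequential=False, ssd=False))  # 8000.0
```

With estimates like these, a plan that jumps around via an index might win on SSD but lose to a full sequential scan on spinning disks, which is exactly the kind of trade-off a hardware-aware optimizer can weigh.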
Charlie: That was a great response. Very succinct and very informative. I mean, this is amazing stuff. It gives a lot of backing to that statement—oh, it’s integrated. That’s almost a sales line, but to speak to it at the level you just did, that’s pretty impressive, I think. That was really great. That was fun to hear, actually. Well, we spoke before we started recording about how quickly 30 minutes goes, and holy cow, we’re almost at the end already, which is okay, but I do have a couple more questions. One of them is that this is a starting point, and this was never intended to be a complete lesson in SQE. It’s impossible. I mean, I suspect we could do an 8-hour discussion on this and still not cover the entire breadth of this topic. But as a launching-off point, where might people find more information if they were inclined to really take a great interest in this? What’s out there, either through IBM or other resources, that they could use to do a deeper dive if they really have the interest, and the time quite frankly?
Ryan: Sure. The first thing I would say is try to attend your local user group or maybe some larger events across the country, right? Some of this education that I put on, and that other members of the Db2 for i team put on, is pretty unique, pretty special. You don’t get those opportunities everywhere, and I think this conversation, among many others, highlights the value of that. And it’s not just talking with IBMers, but I think having that really low-level understanding of the platform and of the database can answer some questions you might not have thought had answers, at least online. So that would be #1. #2, we do have a PDF online and it’s all about optimization. I actually just looked; it’s 1,346 pages long, so just some light weekend reading for you folks interested in SQL. It’s called the Database Performance and Query Optimization guide. It doesn’t have an actual noun in the title; it just says Database Performance and Query Optimization, but it is a guide on what the optimization process looks like, how the optimizer functions at a high level, and various pieces and parts of Visual Explain and our tooling. It’s got lots of things, and it can be a really great in-depth resource. There are some Redbooks, for example, that were published many, many years ago on indexing strategy, primarily published by the Lab Services folks here, now known as Expert Labs. Those offer some slightly antiquated but still valuable lessons in database indexing strategy, among other things. Once again, maybe not my first go-to, but they can still be very useful. Other than that, diving into our documentation online and also just playing around with the tools can be pretty enlightening. There’s a lot out there, but if you dive in and try to put a good effort into understanding this pretty complex world, I think people can get farther than they maybe expect.
Charlie: I think that’s a great summary. Hey, let’s kind of wrap this up, but this is a question I think some people might be interested in hearing. So you’ve been speaking on the circuit now for just over a year. You have a couple of POWERUps under your belt and some of the local user groups. What’s been your experience? Has anything really jumped out at you, or what stands out about what you’re seeing so far and what you’ve learned? Because you’d been with IBM for 2 1/2 years prior to even coming out as a speaker, working internally before you actually came out into the speaking community. What do you think about speaking at events? Do you have any favorite parts, or what’s been a really good experience for you as a speaker now?
Ryan: Sure, yeah. Great question. In a weird way, the speaking is the side activity for this whole thing. When I go to these user groups and I speak, it’s awesome and it opens doors, but the really rewarding parts are when someone comes up to me after and says, that was super cool, but I have this question about, you know, here’s this performance thing I am facing, here’s some other question that wasn’t answered by the presentation, can you tell me more? And the answer is always yes, I’d love to. So I love the speaking thing because you can see light-bulb moments, you can see people asking questions, and you know, I had some folks laugh and clap at some stuff, some tooling we put out. Really, they were super shocked and excited about it, and that’s rewarding, but it really is the conversation with community members that makes it all worthwhile. I mean, these are long trips away from home and it’s a lot of talking, but conversations with the people who use our stuff make it all worth it. I really would not be as excited to work on IBM i if it wasn’t for that community interaction. Other products aren’t quite as lucky as we are, and having that connection to the people that use our things is very rewarding, very cool. And it also helps us. It’s a symbiotic relationship, right? We get awesome ideas from our community all the time, and being able to hear these ideas firsthand, or hear these problems or complaints and concerns, helps us guide the direction of where we need to go. It’s just a really great experience, I think, for us speakers and all the attendees as well. It’s a really, really fantastic experience.
Charlie: Yeah, you mentioned the symbiotic relationship, but what I hear from many, many attendees, and I think you really embody this, is the access that people simply have to people who are in the labs. I don’t know many other platforms that provide such access to people like you, quite frankly, with the knowledge that you have, and Joe Attendee or Jane Attendee can go to a conference and just ask these questions, as you mentioned. That’s a huge, huge value that IBM provides for us, so I’m grateful for that too. It’s a real great treat for us to have access to people in the labs.
Ryan: Yes absolutely, and for me it’s a great treat too. Like I said without that interaction, without that relationship, this job wouldn’t be nearly as fun. I also don’t think IBM i would be as good of a product. You know, this relationship we’ve built for 36 years is it now, something like that?
Charlie: Right.
Ryan: It wouldn’t have lasted so long and wouldn’t still be so strong without people like yourself or other community members who are here to make this as good as it is. So I guess thank you, and I guess thanks to listeners as well for being a part of that.
Charlie: Absolutely. Hey Ryan, I’m so grateful for you joining me today. This has been a great conversation. I learned so much just chatting to you in this conversation. On behalf of everybody who is listening to this, thank you again for your time and for all you do. I know that you’re going to be out there for many years to come and sharing your knowledge and more importantly, your passion. If nothing else is evident to me today, it’s just your passion for what you’re doing. That’s important and it really makes the entire learning experience for everybody that much better, so thank you for that too.
Ryan: Of course. Thanks, Charlie.
Charlie: Yeah. Thank you for joining me, and for everybody who has been listening, thank you for joining us today. It’s been a real delight speaking with you and having you listen to our podcast with TechChannel. Until next time everybody, we’ll see you again. Thanks. Bye bye.