
The State of AI on IBM Power, With Dr. Sebastian Lehrig

Charlie Guarino interviews a worldwide thought leader on AI for IBM Power, Dr. Sebastian Lehrig


The following transcript has been edited for clarity:

Charlie Guarino: Hi everybody. Welcome to another episode of Tech Talk SMB. Today I’m thrilled to have a very special guest, Dr. Sebastian Lehrig. Sebastian works with IBM as the worldwide thought leader for AI on IBM Power. When I read his bio online, it says he is driving the design, development and market success of AI offerings and capabilities on the IBM Power platform, and that he also contributes to and leverages open source on the Power platform. So we’ll talk about that. But first, Sebastian, thank you so much for joining me here today. What a thrill to have you here.

Dr. Sebastian Lehrig: Actually, it’s my pleasure. We know each other, it’s always fun talking to you, so I’m happy to be here and to have this interview with you.

Guarino: Yeah, it’s really great. I am so happy that you’re here. So I read your bio just now, and it says that you are the thought leader for IBM worldwide AI. That’s a big title, obviously, and it carries a lot of responsibilities. The first thing that came to my mind in preparation for our conversation was the surge, or maybe even the resurgence, of AI, because AI is not a new topic. You and I both know it goes back 70 years or so, but it seems more recently to be on everybody’s mind, especially in the last three years with the advent of ChatGPT. What do you attribute that to? And more importantly, is it just hype, or do you see a legitimate ROI? And in particular, how does that work on the Power platform?

Lehrig: I mean, you’re exactly right with ChatGPT. It triggered a lot of hype because everybody suddenly understood the art of the possible with AI. ChatGPT showed that everybody can suddenly use AI really easily, or at least experiment with it. You just go on a webpage, you log into a chatbot, and suddenly this bot can interact with you like a human. And this inspired people to test it on their own. Suddenly everybody was talking about it, it was in the news. My friends started talking about it. They asked me, “Hey, you’re in that space, what’s your take on it?” So there was a huge momentum around it, and naturally enterprises also jumped on the topic and started experimenting with it.

Now obviously it got hyped, everybody knows that. But if you look into recent studies, like a recent study by MIT, for instance, we hear more and more that enterprises are struggling to move from pilot to production and to actually see ROI, return on investment, from their investments into AI. The MIT study suggests that only 5% of enterprises have actually managed to get ROI out of it. And if you follow those Gartner hype cycles, after a huge hype often comes a fall, if you will, and suddenly you become more realistic in terms of managing expectations and how to go into production and see return on investment. That’s currently going on a lot in this market: how can we guarantee return on investment, or at least improve the situation so that it can be guaranteed? Because other studies show that you can actually achieve ROI, but how is sometimes the trickier question.

Guarino: You mentioned 5%. Does that number surprise you at all? It seems a little low to me, but then again, I still think we’re in our infancy of adopting AI in the workplace. Does that 5% surprise you at all? Would you expect it to be higher, or is that where you think it should be right now?

Lehrig: It wasn’t a complete surprise. I’ve seen other numbers; some suggest something like 20% managed to go into production. The MIT study focuses more on generative AI, which might be more in its infancy compared to more established, classical machine learning kinds of AI algorithms. So it’s the right ballpark, also in my experience. Depending on the study it varies, but the ballpark is right. Going from pilot to production to return on investment, that’s still something enterprises struggle with, and I believe they also need help with it.

Guarino: I know, Sebastian, you’re clearly in the Power space. What do you think is the unique role that Power plays in the ecosystem that makes it perhaps profoundly unique for handling AI in an enterprise?

Lehrig: Yeah, I mean, we’re at the core of the enterprise. We’re not a general-purpose public cloud provider that offers general-purpose use cases as services. We are rather in the back office, not the front office, but we keep enterprises alive because core workloads run on our platform. Core databases like Db2, SAP landscapes, the Oracles of this world: they entrust us their workloads and their data because we’re so reliable, can nicely consolidate these kinds of workloads on our platform, and give strong guarantees on everything that is mission critical, security in particular. That’s our core, and we are unique in that sense. And the key, when we shift that thought to AI, is that we’re so close to the core of the business that we are predestined to create more value around AI and to actually create ROI. Other studies have shown that the biggest opportunity for ROI from AI is actually in the core of the business; it’s the back office where there’s big ROI to be had. We are really close to that, and that makes us unique. It’s different from other players in the market that come from a different context, are maybe more consumer-hardware facing, and are now trying to get a foot in the door with enterprises. That’s where we already are, where we come from, and we have a long history there.

Guarino: We talk about Power, but the term Power covers such a broad platform, and there really are different silos, if you will, or different camps. We talk about IBM i, for example, or AIX workloads, things like that. Is there anything that’s different across those workloads, or does the Power brand in general just have this innate ability to work with AI?

Lehrig: No, there are clear differences, right? If you talk about IBM i, for instance, we see a lot of small and medium-sized companies with IBM i workloads. We see RPG being used on our platform. We see lots of custom or third-party ERP systems, as opposed to the larger ERP systems like the SAPs and Oracles of this world. That’s special for IBM i, for instance, whereas on AIX we see more Oracle workloads. We also see SAP workloads with unique characteristics, with all the motions going on currently in the market: RISE with SAP is a big thing, and if you talk about AI, Joule is a big thing in that context. So there are different segments, if you will, in the Power ecosystem, and each segment is a bit different. I’m overseeing all of those segments from an AI angle, with an eye towards their unique characteristics.

Guarino: Even with the unique characteristics of each of those platforms you mentioned just now, there must be some recurring theme or recurring challenge that everybody seems to face. What have you seen in that regard, as far as recurring themes or challenges, no matter what you’re running?

Lehrig: Yeah, I mean, let’s talk about AI, right? How do I get started? That’s a simple question, but there’s so much to it. Where should I get started? Even though ChatGPT managed to make it appear quite easy to do AI, enterprises, independent of the segment, are still wondering: what’s the low-hanging fruit? Where can I really see ROI real quick? And for some who start doing data science, it appears complex, it appears hard, and it doesn’t help if you provide a gazillion options for how to do it or which AI use cases are out there. To the contrary, I think it would help to be laser focused on just a handful of topics to start with.

Guarino: These enterprise systems can be quite complex and intricate, of course; they’re running large corporations. When I think of IBM and AI, what comes to my mind is watsonx and even the Granite models. How do the Granite models and watsonx, as a larger thing, intersect? Can I use them together on the Power platform? How do you marry those two technologies?

Lehrig: I mean, you are talking about technologies, right? If you mention watsonx, it’s a whole portfolio of technologies. It’s not a single technology; there’s not one watsonx product, if you will. It’s a whole ecosystem, which includes the IBM Granite models, for instance. So I would think of it as different layers and aspects that play together. When we are on the model layer and talk about IBM, we have our own models that we can tune for the individual needs of our enterprise customers. We can pre-fine-tune them, if you will, for the needs of our enterprise customers, from a quality perspective in terms of how accurately the model responds, but also from an enterprise-readiness perspective. Indemnification, compliance, these are big topics, and we can factor them into our own models and keep control over that. And this control is important for us, because if you think about it, starting down at the silicon, we have control over our servers with Power, we have a storage brand, and we can provide the infrastructure, including Spyre, which we just launched.

We also have acceleration cards, which we can fully tune towards the pain points our customers need fixed. So if customers want superior RAS properties, we can optimize our acceleration card for it, which is something we couldn’t do with NVIDIA cards, at least not that easily. We can control the firmware, we can control how it’s ingested into the kernel, because guess what, we even own AIX, IBM i and RHEL. We can control and optimize down to this layer to fix everything our customers need fixed, to help them overcome obstacles. The perception that AI is hard is partly because of the inherent infrastructure complexity. We can integrate everything in one consistent layer to make it easy to consume, and this moves up the stack. OpenShift: we can easily integrate with OpenShift because we are actively collaborating with Red Hat, and we can move further up on top of the Red Hat portfolio.

In Red Hat’s AI portfolio, Red Hat AI Inference Server is one component that can run natively on a RHEL LPAR, or if you like containers, you can go with OpenShift AI from Red Hat on top of containers. And then the core watsonx portfolio sits on top of that, where we also have some control over what our customers need. We offer an enterprise-ready data and AI platform, and off-the-shelf use cases with assistants that are easy to install. Power nicely factors into that system, and our strategy for Power also aligns with this layered view of things and with how we drive to fix or address any obstacles our customers see when adopting AI.

Guarino: Wow. While you were speaking just now, I was writing down some of the keywords; I want to address them in more detail. The first word you said that jumped out at me, of course, is Spyre. That’s a big one, I think. I’ve been reading a lot about Spyre online recently; it’s been making a lot of news. What impact does Spyre have, for example, on performance? And as we talk about Power, what role does Spyre play on that platform?

Lehrig: So in one or two words it’s turnkey AI.

Guarino: Turnkey.

Lehrig: Turnkey AI. That’s what Spyre is, in just two words. So the idea of Spyre is: take all the complexities when onboarding to AI, when struggling to get started. How do I go about configuring my network? How do I attach external accelerators? How do I integrate with my business workflows? How do I manage data? How do I cope with skill gaps in my team? Because it’s an interdisciplinary effort to onboard to AI: you need data science skills, development skills, you need security experts on your team, you might need an MLOps expert. That is complexity. With Spyre we reduce that complexity to the bare minimum and essentially launch a whole catalog of pre-built AI services that you can install with a single click on Power, and everything underneath, all the layers I was referring to, is pre-integrated with that and just works. So literally, what we now launch with Spyre is a single-click AI experience, where your digital assistant can come up with a single click of a button, and that is powerful.

And the performance angle, because you asked me about performance: yes. In one layer of the full-stack solution we GA here with Spyre, there is an acceleration card, the IBM Spyre Accelerator for Power, which empowers the stack with the required performance for those AI use cases. So when we talk about a digital assistant, we bring it up to the level of: let’s install the digital assistant on Power. And the performance angle to that is, well, you get a good user experience, with a time to first token, speaking technically here, of at most three or four seconds. So it responds timely; it streams the answer faster than you can read. It’s solid performance. That’s what we optimize for. The rest is speeds and feeds and details our customers don’t need to care about anymore, because we’ve simplified it to that level.

Guarino: You mentioned OpenShift and containers and things like that, and that made me think: are there certain metrics, or a process or algorithm perhaps, that you go through to decide when you would typically bring AI to the Power platform versus moving the data to a container in the cloud or a hybrid environment? Are there certain metrics people can look at to say, that’s why I would choose this technique versus a different one?

Lehrig: Yeah, so first of all, with what we now establish with the whole portfolio around Spyre, the easy button for doing AI when you have data on Power and want to ingest it into AI is just that: the easy Spyre button. Do it there because it’s easier. Generally, the decision on where you run your AI and where you move your data often boils down, frankly, to the data. If your data gravity is in a cloud environment, I would not recommend moving the whole dataset back to an on-prem environment to do AI there. Instead, unlock the AI in the cloud environment using the AI services that are often available in cloud computing environments. If you’re in PowerVS, we offer what we call a satellite connector, with which you can connect from PowerVS environments to watsonx services in IBM Cloud, for instance. That makes it easy to consume AI there, so you don’t have to move your data around. Moving data around can be quite expensive.

If you’re on-prem, what you can do, if you’re in a dev/test, experimentation kind of mindset, is feel free to connect to external cloud services, because you might be able to use sample data. It’s not your production system, so that’s safe. But as soon as you move to production, it’s often easier to also move the AI to where the data lives. And again, with Spyre, we provide the whole Spyre stack with industry-standard APIs. If you test today with, let’s say, OpenAI services, you can just move to the Spyre stack because we provide the same API here.
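To make that portability concrete, here is a minimal sketch of what "same API" means in practice: an application builds the same `/v1/chat/completions` request whether it targets a cloud service or an on-prem endpoint, and only the base URL and model name change. The on-prem URL and model names below are hypothetical placeholders, not real endpoints.

```python
# Sketch: an OpenAI-compatible chat request. Because the endpoint shape is
# the same, retargeting from a cloud service to an on-prem stack is only a
# configuration change. URLs and model names here are illustrative.
import json
import urllib.request

def chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a POST to /v1/chat/completions against any compatible endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Dev/test against a cloud service, production against an on-prem endpoint:
dev = chat_request("https://api.openai.com", "gpt-4o-mini", "Hello")
prod = chat_request("https://llm.example.internal", "granite-3-8b", "Hello")
```

The application logic that sends the request and parses the streamed answer stays identical; only the configuration differs between environments.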

Guarino: The word that we keep using over and over in this discussion is data. Clearly, data is the lifeblood of AI. It truly is, and that’s been my experience with it so far. And IBM i, AIX, platforms like that are well known for storing decades and decades of data. Tons of it.

Lehrig: So I think data is the main entry point into your business processes. Even if you have the best AI service in the world up and running, it’s useless if you don’t manage to connect your enterprise data to that service, make something out of the data, and put it back into your business process. The key is really to think of data not as a static thing but as something that flows through your enterprise processes. It’s highly dynamic, and for that reason I think it’s key to have appropriate data fabrics in place. And there’s a spectrum of what a data fabric can mean. In the simplest form, it might be just a simple database connector with which you can get your data into an AI service. Really lightweight, and that’s good.

But as you mentioned, with the time that passes on those platforms, data grows. Maybe there is not only one data source but multiple. So if you have a landscape that is really distributed, where data is all over the place and you even want to ingest external data, then you need to go to the other end of the spectrum and have a dedicated solution for data management: data lakehouses, data warehouses, these kinds of things exist for a reason. If you are facing lots of use cases where data management becomes your bottleneck, that’s where you need to do more about it and go to more enterprise-ready solutions for managing data. In the watsonx portfolio that would be watsonx.data, which is also something we now support on Power; that’s something you could use in that realm. If you are in an IBM i world and need to move your data around with a lightweight footprint, there are lots of open-source tools. Most of you folks probably know Jesse Gorzinski; he talks a lot about how you can massage your data so it can flow from A to B, right?
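The lightweight end of that spectrum can be sketched in a few lines: a plain database connector that pulls rows and reshapes them into JSON records an AI service could ingest. This example uses sqlite3 as a stand-in database; on IBM i you would use a Db2 connector instead, and the table and column names are made up for illustration.

```python
# Minimal "data fabric": query a database and emit JSON-serializable records
# ready to hand to an AI service. sqlite3 is a stdlib stand-in for Db2.
import json
import sqlite3

def rows_to_records(conn: sqlite3.Connection, query: str) -> list:
    """Run a query and return one dict per row."""
    conn.row_factory = sqlite3.Row
    return [dict(row) for row in conn.execute(query)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, note TEXT)")
conn.execute("INSERT INTO orders VALUES (1, 'late delivery complaint')")

records = rows_to_records(conn, "SELECT * FROM orders")
payload = json.dumps(records)  # this string could now flow to an AI service
```

The point is the shape of the flow, not the plumbing: data leaves its system of record, gets a neutral representation, and can then be summarized, classified, or extracted by whatever AI service sits downstream.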

Guarino: With all that data, and voluminous is the word we use, it is so much data, how do you help businesses determine which components of the data are actually AI-worthy? Is all data considered fair play, or are there segments? How does somebody begin the journey of identifying the proper data?

Lehrig: I mean, you can start holistically and make data itself the problem to solve, and then start creating ways of systematically managing your data. Or, if you are in a concrete business unit, for instance, and that’s something I would recommend, be laser focused on the AI project at hand and view data through the lens of the business: where, based on your data and data insights, do you maximize the business value for a given AI use case? So scope it down to the bare minimum, and if you need to run a pilot or a test, start with the minimal amount of data you can showcase value from. It will take more time anyhow to ingest more data later, so just start with the most valuable data. It depends on the use case, so it’s hard to give a generic answer about where the most value is. Today’s hype would indicate that unstructured data provides lots of value, because it’s untapped data that hadn’t really been leveraged until recently. So if you have lots of natural-language text or images to analyze, that’s probably something that would at least fit the recent hype around generative AI.

Guarino: It goes beyond just identifying the data. Once you have identified the data, you also need to clean it, structure it and label it properly. Are there any capabilities in Power or IBM i to help with that process? That could be a difficult process, I think, to get started with.

Lehrig: Yeah, historically it’s been a really tough problem to massage the data so it can go into your AI algorithms. Some good news here is that generative AI at least has the capability to do some of that filtering and massaging for you. You can today even let a model decide how to handle some data. And this has potentially, in part, led to the hype around generative AI, because it’s a bit more robust against unstructured data that hasn’t been completely sorted out. So to me it still boils down to: what’s your time to value? If you can speed up the time to value by, for instance, using generative AI, without wrapping your head too much around data transformation, to prove some value first, that’s probably what you should do. More data massaging and more data integration can come as a second step. It’s really key to prove value end to end, not to solve a singular aspect.

Guarino: Sebastian, I have seen you present topics on AI at various conferences around the world. Going back to our initial conversation about hype: I read stories about CIOs and the corporate C-level saying, we want to do AI. What do you want to do? We want to do AI. It’s this hype. But do you think people are surprised by the amount of effort that’s required? Even with efforts to make it turnkey, there’s still some process you need to run to make your data AI-ready. Are people surprised by the effort involved or required before they can start getting real ROI and value back?

Lehrig: Yeah, I think there’s a history to that. Historically, we’ve seen lots of AI engagements put into pilots, MVPs or prototypes, however you want to call them. And really it’s classical software engineering, or project management: you need to scope it down so you can easily manage it. Often those pilots got out of hand and took a year or so. Initially, when the decision was made, let’s do AI, blind investments were probably made without setting a clear focus and goal for where those investments would lead. And that naturally led to long-running pilots that didn’t result in ROI. So stepping back here, and picking up the term turnkey again: if you’re laser focused on a singular solution that is already turnkey, your time to value will speed up drastically. Even at C-level, you should ask: okay, you want to do AI, but for what? What do you want to optimize? Do you want to grow your sales? Do you want to increase your revenue? Do you want to increase your internal productivity? Do you need to improve your internal fraud-detection algorithms because fraud is such a big pain for your business? This needs to be super crisp so you become laser focused. Then you can select the right approach, actually manage it, and prove your ROI real quick. And then it can be done in less than a year, less than a month, even less than a week, if you’re that focused.

Guarino: And I would suppose that it is in these instances you’ve just mentioned, the enterprises or shops with laser focus and upfront preparation, where we see that range of five to 20%. I would imagine shops with this mindset of preparation would trend towards the higher side of that range.

Lehrig: Yes. That’s perfect. Fine. Nothing to add.

Guarino: Nothing to add. Okay, good. You mentioned Jesse Gorzinski earlier when we were talking about open source, and I’ve even heard a quote in one of your presentations about how open source really is the foundation of all things enterprise AI at IBM and on IBM Power. I’m just curious: what’s driving that in the open source world?

Lehrig: Innovation, in one word. It’s the top reason clients go for open source: to innovate. And the reason is that, especially in the AI space, we see so much innovation happening in open source, and enterprises quickly jump on this innovation and start experimenting with it. There’s a whole momentum around that, a whole motion. If you completely ignore it, you will constantly be out of sync with innovation and you will lose track of where the market is heading. That’s at least one reason why we mean it seriously with open source: because we can rapidly ingest new capabilities into our platform. Speed matters here. The AI momentum is right now, and clients are investing right now in AI, so we have to be on that train of innovation, following open source. That’s the market reason for it: we want to follow the market trend on AI to have a solid story for AI here.

And then there is also the internal side, our development cost, for instance. If we base everything on 90% open source, we essentially only need to develop 10% on top on our own. And this 10% covers how we make enterprise open source a reality: how we move from raw open source to the enterprise needs our customers have. How can we harness it? How can we get guarantees on open source software? I mean, the whole of Red Hat is centered around those ideas. How do we handle common vulnerabilities and exposures? How can we lower our customers’ risk in consuming it? How can we guarantee that everything is compliant with open source licensing, so as not to expose our customers? This is the missing 10%, plus some plumbing around it to make it consistent.

Guarino: But there are still some misconceptions out there about open source, and there are some customers who may be skittish, perhaps, or concerned, I should say, about using open source, for a variety of reasons, security among them. How do you address those big misconceptions, especially when you’re trying to put these into mission-critical applications?

Lehrig: The trick here, put in simple words, is that we don’t just download open source binaries from the web. Everything open source we do in our AI stack, we as IBM build, and we call out enterprise support for it. So when a customer has an issue, or let’s say there’s a security vulnerability out there, they can formally file a ticket at ibm.com/support and ask us to fix it, because our customers are actually entitled to get fixes in a timely manner. We can also do down-patching, so even in an old version of a piece of software we can apply security patches. That gives you this security, or removes the risks of open source, to the degree that is needed and enterprise-ready. And technically speaking, the trick is that we build our open source from scratch, end to end, and then call out support for it.

Guarino: Is there anything that makes Power, the Power platform, really unique when it comes to integrating open source into these environments?

Lehrig: So open source is often built for x86. In the x86 world, lots of builds are out there in the wild and not controlled, which potentially leads to the security threats you mentioned. What is unique is that we created a whole ecosystem based on open source on our own and optimized for it. We provide the same features, if you will, because we’re rebuilding the very same open source libraries, packages and tools, but we harden them for security and optimize them for the underlying infrastructure as well. With Power10, we introduced on-chip acceleration with MMA units and high memory bandwidth, and we’re continuing that with Power11 with better performance. That is all already integrated into our open source libraries. So if you’re a data scientist or a developer using Python, you can just continue using Python as is, while getting all the nice performance optimizations out of the box on our platform. That makes it kind of unique: it’s pre-integrated, if you will, and enterprise-supportable.

Guarino: That’s a perfect answer, thank you for that. A couple more questions. So the big topic, at least right now, this moment in time, everybody that I speak to in AI, the term that everybody seems to use very freely is just agentic AI, AI agents and agentic workflows and things like that. So how is IBM helping customers enable agentic workflows, for example, into their enterprises on Power, IBM i, AIX? What’s IBM’s role in that?

Lehrig: I mean, what are agents? Let’s start there; maybe not everybody is as deep into the topic as I am. I think agents are, really, autonomously deciding large language models that take over steps within business workflows. So I would anchor on the concept of a business workflow. There are typically steps you do in a business workflow, tasks, and now the agent can call those tasks autonomously. That allows you to automate processes. The tasks can be different things. Say you get an email, you need to read through it and put some information from it into a form; that could be a simple task. Now the agent can call tools that automatically read through the email and fill out the form as part of that business workflow. And that’s cool, because sometimes reading through emails is tedious, and entering forms is error-prone and annoying if you do it over and over again for a gazillion emails. The important concept is that there is a workflow that can now be controlled by agents, and there are different tools the agent can call. Now, what makes Power really interesting here is the tools an agent can call. We are making them part of our DNA. With this turnkey AI idea of Spyre, I mentioned the catalog of pre-built AI services. Those services can now be called by agents

and become part of the agentic toolkit. An agent can, for instance, now translate on the fly, because we provide an out-of-the-box translation service as part of the Power platform. An agent can literally extract those emails and put them into an ERP system automatically, which we did with an IBM i customer in Germany, for instance; they improved their process throughput by five times. That’s also a reference we published recently, so I can freely talk about it; it’s something we have showcased. There are tools that can be used within business workflows: do something with an image, audio-to-text, translation, or maybe summarize a customer call you had and put it into a database. That can now come out of the box and be controlled by agentic AI. From a Power perspective, I find it important to first establish this tool set that agents can actually call and make something meaningful out of. And that’s where we are currently. We’re also establishing those tools for IT administration, like unlocking agentic tools for capacity planning, or projecting how a migration from Power10 to Power11 could go. We expose these as tools so agents can call them. It’s kind of our interface to the agent world, contributing an intelligent infrastructure, as maybe you could call it now

Guarino: And non-deterministically.

Lehrig: And non-deterministically, right. And then the brain on top, the agent that calls and orchestrates all of this: I could now speak about products. watsonx Orchestrate, for instance, is an agentic AI tool from IBM that can actually call those tools and make use of them. Or you implement your own agentic workflows on top of this Power architecture, because it provides you all those nice and neat tools. So this is the vision of where I see this going and where we’re driving to. And I think the natural progression we’re seeing in the market is this: we started, with ChatGPT, with digital assistants. Digital assistants first of all offer a Q&A chat interaction, but several assistants have other capabilities, like ingesting knowledge from knowledge bases; some assistants can summarize things, they can translate. Today we already call those tools manually through those digital assistants. The next evolution of this idea is: well, let’s automate this manual calling.
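The workflow-plus-tools pattern described above can be sketched in a few lines. This is a deliberately simplified illustration, not how watsonx Orchestrate or any specific product is implemented: a registry of pre-built services (the tool names and outputs here are stand-ins) that an agent's decision, here stubbed out as a plain dictionary, dispatches to.

```python
# Sketch of the agentic tool-calling pattern: pre-built services are
# registered as named tools, and a workflow step (which a real agent's
# LLM would choose non-deterministically) is dispatched to one of them.
from typing import Callable

TOOLS: dict = {}

def tool(name: str):
    """Decorator that registers a function in the agent's toolkit."""
    def register(fn: Callable) -> Callable:
        TOOLS[name] = fn
        return fn
    return register

@tool("translate")
def translate(text: str) -> str:
    return f"[translated] {text}"          # stand-in for a translation service

@tool("extract_order")
def extract_order(email: str) -> str:
    return f"[order fields from] {email}"  # stand-in for email-to-ERP extraction

def run_step(step: dict) -> str:
    """Dispatch one workflow step of the form {'tool': ..., 'input': ...}."""
    return TOOLS[step["tool"]](step["input"])

# In a real agent, an LLM would emit this step; here it is hard-coded:
result = run_step({"tool": "extract_order", "input": "Bestellung #42 ..."})
```

The design point is the separation: the tools are deterministic, testable services that the platform provides, while the non-deterministic part, deciding which tool to call and with what input, is isolated in the agent layer on top.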

Guarino: This all goes back to that same word you started the conversation with: turnkey. I love that expression, because it takes something that’s potentially so abstract and simplifies it as much as it possibly can be simplified, to make AI egalitarian, for the masses. Which is a good thing, because its time has certainly come. In my heart, that’s going to address the concerns people have, it’s going to increase the ROI, and it’s going to increase the adoption of AI. I think it’s a very good direction that—

Lehrig: I’m happy to hear it because that’s our big bet right now, obviously. With the whole portfolio, everything we do with AI on Power, our number one priority is: let’s simplify it so everybody can use it easily. That’s the main paradigm with which we are driving the full-stack portfolio. And I mentioned it before: we control the full stack, so we can optimize for that goal and make everything pre-integrated, pre-optimized, pre-built, reusable, or, in another word, turnkey.

Guarino: So let’s start wrapping this up, but there’s one question that I’ve been waiting to ask until the end because it’s a question I’ve been very curious about—

Lehrig: The hardest one for the end.

Guarino: Well that’s a good question though.

Lehrig: OK

Guarino: I see your journey and I see where you’re going, and I see your role as a worldwide thought leader, and that’s very impressive to me. But there are people who are still just starting out, or haven’t even taken the first step yet. Based on your own experience, or what you’ve seen from other people in similar roles along your journey, what one piece of advice would you give somebody who is just starting or considering this, particularly in a space that’s so rapidly evolving? How do you stay current when there’s the potential for missteps? What one piece of advice can you give them to remain relevant even three or five years down the road, which in IT is obviously more like a hundred years? What’s that piece of advice you can give somebody today?

Lehrig: Yeah, be curious and have fun with it. For me, AI is a lot of fun. There are so many cool stories we see; there’s so much you can do. You should also be curious because it’s not only good that you can do with AI. There’s a downside of AI, too. There are always two sides of a coin: it can be used for the very good and for the opposite.

And I think in today’s world, AI matters a lot. We see it everywhere, impacting all industries, being quite disruptive. So I think there is a huge motivation to be curious and, potentially, to also have fun. I once asked the same question of a colleague, now retired, back when he was a Distinguished Engineer at IBM, and he also told me: be curious, stay on top of recent trends, watch videos on the web that talk about GenAI, for instance (that’s something we literally talked about), because the world is moving fast. You don’t have to know every detail; you don’t have to learn how to calculate a neural network on paper. I mean, I learned this stuff, but it’s not really what helps you stay current. It’s more helpful to approach it in a way that you like and have fun with, because then you can motivate yourself to stay on top of those things. You started with somebody who wants to get started. I think that somebody already has some kind of motivation to do AI, and here it’s really important to have fun with AI, as opposed to, say, spending several years doing matrix math calculations and working through each algorithm (some people might like that, so nobody should feel offended). If that is your thing, cool. I also dive deep; depending on where my interest went, I went into full detail and had fun with it.

Guarino: I think that’s the secret. I mean, I think you’ve hit it right on the head: stay curious, and with your curiosity and the fact that you’re having fun, that will allow you to keep adding value back into the ecosystem as well. You’ve distilled it perfectly there, so thank you for that. That was a great answer, and I think it’s a great answer I would’ve gotten from AI too. That was wonderful. Alright. Hey Sebastian, I just want to thank you so much for being a guest here today. I’m so grateful for your time, and I’m really happy you answered some of these questions. I really do admire your role at IBM and how you’re keeping the world focused on IBM technologies and AI in general. So thank you for that as well, and I’m just happy to also call you a friend. Thank you very much for that too.

Lehrig: Well, thank you again, Charlie, for having me. And you’re an IBM Champion for Power as well, with all the work you do with your great community here. I think that’s also having fun, being curious and asking the good questions. So thank you for having me.

Guarino: Great. Alright everybody, thank you very much for joining both Sebastian and me. We look forward to seeing you at a future podcast. Until then, everybody, see you down the road. Bye now.
