IT Social Hour: The Business of AI With Brian Silverman
Brian Silverman, author of the TechChannel series, "The Business of AI," joins host Andy Wig to outline the factors that companies must consider as they leverage the technology
The following transcript has been edited for clarity:
Andy Wig:
Hey everyone, welcome to IT Social Hour. We are back, and we are taking a little bit of a different direction for this one. In the past, if you’ve tuned into IT Social Hour, you’ve noticed it’s been kind of a roundtable discussion: we get a few folks in to talk about various topics. This time we’re trying to go one on one, and we are bringing in Brian Silverman. You might recognize him from TechChannel. He was the author of the “Business of AI” series for the first half of this year, where he got really in depth on various aspects of implementing AI, delving into where the rubber meets the road for AI and how to actually make things happen with it. He is the chief solutions officer at ABS AI Consulting, and he is here to let me pick his brain a little bit and help us learn what AI can do for business. So welcome, Brian.
Brian Silverman:
Thank you, Andrew. Thank you for having me. And it’s good to be working with TechChannel.
Wig:
Yeah, it has been great working with you for the last, yeah, since January basically. And I know I’ve learned a lot and I think you’ve learned a lot over the last few years just diving into AI for business and trying to help people make it work for business. It seems like it’s one of those things that’s not going away. So something I think everybody’s pretty much is realizing they need to pay attention to it. But to start off, I’m just interested, how did you get into this specialty and what do you do as the chief solutions officer at ABS?
Silverman:
So Andrew, I came to AI in sort of a roundabout way. I started with IBM in college a number of years ago. At the time I was an MIS major, and I actually did a seminar study on AI, so I wrote some papers and had some information, but in those days AI was certainly a technology being thought about, yet it was also something for the future. Then as my career at IBM evolved, I became an internet specialist. IBM came up with the term e-business, which was about how you bridge the technology to deliver real business value. And then about three years ago, around the time ChatGPT was announced, I got started again. I had always dabbled in the AI space, but when ChatGPT came out, it started to interest me more, not because AI was new, but because ChatGPT was making AI accessible to everyone, and it could do some really creative things.
And so I started to wonder how that was going to change things, both from an implementation and technology standpoint, and also how it changes the way we access and use technology in general. So I started to take classes, understanding what it could actually do, learning things like prompt engineering, understanding the critical issues around data, quality and technology, and then started to wonder how this would all evolve. What I’ve been doing is taking that evolution, working with some colleagues, and also doing what we did with the AI series earlier in the year, which is asking: how do you actually take the technology and move it from something that is exciting to something that is real? There are a lot of studies out there. There’s a RAND study showing that, I think, up to 80% of AI projects fail. So how do you address those challenges and make it real? I’ve been working on that idea. We ended the series with an AI roadmap, and I’m sure we’ll talk more about that, but the reality is: how do you take something that is becoming generally accessible to everyone and then create value for a business or an organization?
Wig:
Yeah, I think it’s real easy to let your imagination run wild with AI. I feel like I certainly do, especially since the GPT stuff and the GenAI stuff started happening about three or four years ago. But what you’re doing is really bringing it back down to earth and helping people figure out how they can actually use it, because if there’s not a business use for it, it’s probably not going to stick around for very long. That’s just the way the world works. So, going beyond all the fascinating stuff (you can gaze at your navel all day about AI), let’s get down to brass tacks. At a high level, what can AI do for business?
Silverman:
Well, I think the first thing to understand is that AI is a general term, just as science would be, and there are different types of AI depending upon what you’re trying to achieve. Traditional AI, or predictive AI, has been around for quite a while; that is taking data and information, learning from it and implementing AI where you’re trying to predict an outcome and a result. For example, the Mayo Clinic is using AI to study X-rays and do a better job at diagnosis and at talking and interacting with patients. In that case, for lack of a better term, you’re trying to get the actual answer. But then you turn around to something like GPT and generative AI, and what’s unique about it is that it’s not just trying to predict or answer something; it’s actually generating new content, new capability.
And if you take that Mayo Clinic example another step further, the ACA is using generative AI to do a better job of explaining what those diagnoses are, helping nurses and doctors better implement or chart the results, and then communicate with patients as well. So you have this capability on the predictive side, where you’re trying to get to an accurate answer, and then you have this ability with generative AI to generate new content and explain it. You can bring different types of AI together into something that becomes actually real. And that, I think, is the most exciting part: how do you take the right type of AI and combine it into something that can be truly valuable?
Wig:
The next question really is how do you do that? I know you’ve got this AI roadmap that really lays it out in a practical way, so maybe you could tell us a little bit about either that roadmap or what principles go into that roadmap, that approach.
Silverman:
Well, I think one interesting thing is to look backwards a little bit at the internet era. In the beginning, the internet was a network connecting government and research organizations, and it really was the advent, or the enablement, of the web browser that made us all aware of what the internet could do. But it was the back end of the internet, where the standards took off and businesses started to connect and figure out how to use that technology, that moved it from something interesting to browse, or maybe look at an ad, to something that really generates business. And I think the foundation, and really the key to the puzzle, is what I keep thinking of as the 98%. There are a number of organizations at the high end, whether you’re an Nvidia or a Meta or another large company, that are trying to use AI to reinvent what they’re doing.
They’re obviously going to take what you might consider a moonshot approach, if you will, to doing something really revolutionary. But most companies are going to be just like they were in the internet era. They need to understand what the core goals and objectives are for their business: we want to add a region, we want to add a product, we’re going to try to innovate how we do customer support. Then they can ask how AI enables that capability and how it works in that fashion. One of the reasons is that AI, depending on how big the effort is, can take a long time and a huge investment, and if it can’t return value to the business, even if it sounds really good in the beginning, it’s not going to materialize, because the organization is either going to lose patience or the ROI is just not going to be there.
So the first thing I consider, and where the roadmap really starts, is the organization’s readiness for AI: how aware are they, and how can they align their business goals and objectives to that capability? Say you start there: we want to increase our customer sat (satisfaction) scores, and we’re going to do something creative. There’s a great use case of Bosch using SAP AI to analyze customer support calls as they come in, using AI to help route the calls and shorten the time it takes to support their customers. That’s a great example of having a business use case, finding the right AI element (in this case, something already available from SAP) and delivering real business value. That kind of alignment makes the investment easier. So take that use case: you have a customer support use case, and you have a need as an organization either to reduce the time and improve the customer experience or maybe to reduce your costs. Then you have to ask, did I pick a use case that’s feasible for the organization?
And I think of feasibility in two terms. Number one, do we have the skills and the technical ability to implement? But the other question is, will it meet the objective in the time that we’ve set forward? A lot of times you think, we’re going to improve customer sat, as in the case of Bosch. Maybe because they’re using SAP, they have the right documentation and the history of support calls to train that AI model to be effective. But if you don’t have all that organized, even though the objective is correct, you’ve got a lot of work to do before you can get to the AI piece. And one of the things that I highlight in the roadmap and in my upcoming class is that sometimes it’s not an AI solution at all. You’ve got a process, you’ve got a business issue to address. Maybe you have a supply chain issue with a supplier.
There are things that you might want AI to fix because you think it’s an easy approach, but it really turns out to be a non-AI issue. And then, is there going to be a return on the investment? There’s also the issue of time alongside feasibility and ROI: if you’ve got a two-year project for AI, the technology is going to change so rapidly in those two years, how do you actually maintain focus? So if you start out with, I have a business goal and objective, I’m going to use AI, I’m going to stay current in understanding the technology and deliver a real solution that’ll deliver value, then that, I think, is the right answer. And the last thing: one of the consultants out there had a really good idea, and that is, as you’re building organizational readiness and awareness, go find a quick win or two. Say marketing is going to improve content generation and use AI to do that.
Or maybe the customer support move is not to automate a chatbot that’s going to do customer support, but to use something like an assistant, maybe something available from Microsoft or others, that ingests all the manuals and just makes it easier for the human in the middle to do a better job of supporting the customer. Then brag about it to everyone and promote it. The other thing that really strikes me is that a lot of times leadership in a company will go promote the value of AI, but they forget to enable the employees along the way. And truthfully, there’s a KPMG study out there that shows that up to 57% of employees are using AI; they just don’t tell their management that they’re using it. So a lot of times the organization thinks they’re not ready or they’re not using it, and yet it’s already fueling change.
So understanding the organization, aligning your strategy to your business objectives and what you’re good at, and making sure it’s feasible and returns on the investment, I think, are the keys. And one last mention, keeping the roadmap in mind: constantly reevaluate where you are in the process. Take the Bosch example. Maybe they didn’t start with SAP in mind and thought they needed to develop their own customer support app, and then they discover SAP comes out with a new feature in its own support solution, and all of a sudden they can accelerate their way to a result without having to do internal development to get there. Keeping an eye on it, staying aware and having an objective evaluation process for what you’re doing on your AI journey, I think, is important.
Wig:
Yeah, so you’re really talking about the pace at which this technology is progressing: maybe you’re a year into the project, and keeping abreast of how the technology has changed so you can pivot as needed is what I’m hearing. And that kind of leads to my next question. I think there’s a lot of FOMO going on right now, people worried about missing the boat, and that has a lot to do with the pace at which GenAI and these GPTs are moving; I think ChatGPT 5 is about to come out. It feels like a lot of people are rushing to get on and make sure they don’t miss that boat. But you said at the beginning that you have to make a plan; there’s a lot to think about here. How do you get on the AI boat before you miss it, if that’s even a thing, while also having the roadmap in place and having it be relevant to where the technology is at?
Silverman:
Well, the first thing is to go back to that RAND study showing 80% of projects fail. To me, that’s because those projects are either not aligned to the business or the technical feasibility isn’t there. There’s always going to be something new tomorrow, and you can stall trying to adjust to what that is and what it’s going to be. But you have to really understand why you’re doing it. To go back to my earlier comment, I sometimes joke that I’m going to change my company title to “the 98%,” because it’s really about AI for businesses that are not AI-centered businesses, if you will. It really is about finding the reason that you need this technology to be of value to your business. And sometimes you don’t: I have a barber, and I think his is probably the most AI-proof job I know of, because even though we like the Jetsons idea, I’m not ready for a robot to cut the few hairs I have left.
But the idea is that you don’t have to have a fear of missing out if you know that what you’re doing is going to achieve something valuable. Most organizations don’t start out saying, what I really want, Andrew, is for people to acknowledge tomorrow that mine is the best AI implementation. They really want to be the best at manufacturing a car or creating pizzas or whatever they do on a day-to-day basis. One example that was interesting to me was restaurants, because I think you and I have talked about this before. There’s a fast food chicken chain based in Arkansas that is growing very rapidly using AI. They’re using it to improve drive-thru ordering and how they market to their customers, and they’re growing very rapidly.
And you can see how end-to-end AI is going to fuel and help them grow, because of the business they’re in. But turn that around and say you’re in New York at a name restaurant where the chef is a James Beard Award winner. If you have seen the TV show “The Bear,” you can imagine where AI would fit, right? You’re really going to the restaurant because of the chef. You get to be a Michelin star because humans are doing their job. The front end of that restaurant is probably not AI-enabled, at least today. I mean, there are certainly cruise ships where robots are pouring drinks, but that’s more entertainment to me than reality.
But look at the back end of that name restaurant, supply ordering, for instance. Going back to “The Bear,” they don’t seem to have great awareness of their financial back end; they have to wait for their benefactors to tell them how badly or how well they’re doing. AI on that side of the business would be very valuable. So you have to look at the type of business and the model that you’re in and whether AI will really make a difference. I can’t imagine it at “The Bear” restaurant. Sam Altman of OpenAI says that in the next few years we’re going to see robots walking down the street with us. Imagine you go into the “Bear” restaurant, it’s season 12, and now there are no more actors or actresses; it’s all robots. I don’t think we’re ready for that. So understand how you use the technology, what makes your people wake up in the morning excited about their business or what they’re doing, and how AI accelerates that. And if you do that, then if ChatGPT 5 ends up with, say, language translation that becomes dynamic and even better than it is today, and you’re wanting to expand into France, that’s great.
But if it’s not going to accelerate your business, figure out what you do need and what value it’ll bring. But don’t let the technology be the lead.
Wig:
Yeah, the thing you said about restaurants and “The Bear” really reminds me that there’s always going to be a desire for that human connection. I was just looking at a list of jobs and skills that will be important, considering AI, say five years from now, and soft skills are at the top. It’s the human stuff. So I guess that’s another thing to keep in mind as you’re trying to leverage AI for business.
Silverman:
It may not be popular, and maybe I shouldn’t admit it on this call, but when friends ask me what their kids should be doing in college, I think about it: if they’re not going to be the $250 million a year Meta hire for AI, the right answer is what you’re talking about, soft skills and language. Everything about generative AI, about prompting, about understanding your business well enough to train AI, comes down to language and knowledge and awareness. And truthfully, ChatGPT does a great job of writing code, and IBM has a number of AI products that do code assistance and understand code. So with coding, if you’re at the high end of the food chain, just like anything else, you’re going to do well, but the basics and the structure underneath that are going to change drastically. Understanding language and how humans communicate and interact with one another matters. I don’t know if you’ve done this, but I’ve entered a prompt into ChatGPT and realized I just didn’t explain what I really wanted clearly enough, and I’m getting something back that has nothing to do with what I thought I asked for. Your inclination is to think that ChatGPT is broken, but it’s not. It’s behaving exactly the way it was trained to behave. It’s me who didn’t do my job, if you will, in the prompt itself. So understanding language and soft skills, and I think you were headed in this direction, the human qualities that are unique and not AI: those things are always going to be needed and important.
Wig:
As long as we’re around, as long as we’re kind of the point of it all, yeah. I’m going off on a tangent here, but I don’t want to get into the AGI stuff and whether humans are going to need to exist.
Silverman:
Well, we’ve talked about it before, and I’ve written about it as well. We want human intelligence to be defined and structured in the AI world. We want to say that if AI can do the best job taking these tests, then it has become human. And certainly when it comes to things like calculating and understanding and processing language, AI is going to do as good a job as or better than humans, just because of what it is. But there’s not one definition of human intelligence. A brilliant sculptor, I would suggest, is as brilliant as a surgeon, but you certainly couldn’t compare their intelligence to one another. So humans have unique capabilities that AI will find hard to emulate. And truthfully, hopefully, the more AI evolves, the more human intelligence will evolve along with it. So I think there’s a lot of opportunity, but it really does come back to centering on what is unique to human capabilities and, over time, what we do with that alongside AI itself.
Wig:
Yeah, you wrote about that. I think you published the article on LinkedIn, but it really kind of caught my attention. It is titled “The Mark of Success Isn’t Human, It’s Practical.” And it got me thinking: what is intelligence anyway? Should we even be calling it intelligence? People talk about how smart AI is going to get, and they talk about IQ, like, what happens when we get a thousand-IQ AI? That’s a question I’ve heard, or something people imagine. Can we measure AI by IQ? This might be getting down a little bit of a rabbit hole, but …
Silverman:
Well, let’s go down two paths real quick. Number one, humans are fallible. We make mistakes every day. Do we really want AI, especially in a business use case like the ones we’ve talked about, to emulate that? I don’t think so. We really want AI to be better, to generate content or answer questions or do the things that we need it to do. And then, I know this is going to sound really weird, but on Netflix now there’s a documentary about dog intelligence, and they’re just discovering what makes a dog intelligent. So how do we know? Are we going to say AI is smarter than a dog when we don’t really know how to define what the dog’s intelligence is?
I think the key is to get the word human out of AI. We should talk about AI in terms of capabilities and knowledge and what it can do, and if it does make sense at times to compare it to human results, that’s fine. But the human part of it doesn’t make sense, because first of all, you and I make decisions every day based on very limited information, and we get it right because we have experience, we have feelings. If you’re walking into a store, sometimes you just get a feeling that something’s not right and you leave, or you make some kind of decision. I don’t know when AI gets to that, if it ever does, but the idea that AI should mimic a human, I just don’t think that’s realistic. I mean, you can sit with ChatGPT, and all of a sudden I’m saying please and thank you and I’m having an interaction, but that doesn’t necessarily make it human, nor should it, in my opinion.
Wig:
Yeah, you’re kind of talking about intuition here, when you get the feeling that something’s not right. Can AI have that? Maybe when we have intuition, it’s kind of fuzzy, right? Something’s not right, or something is right; we don’t know exactly what it is. Maybe AI doesn’t have intuition because it does know exactly what is or isn’t right? I don’t know. Yeah.
Silverman:
Well, it’s funny, because when I was in college, the example of the challenge AI faced in the human realm was a broken coffee cup on the floor. At the time, AI didn’t know what it was, but a human instinctively knows it’s a coffee cup that was broken. So you start to think about what the key human qualities are, and I think you were headed down this path a few minutes ago. There are unique human qualities, and as a species we’re not standing still either. If you think about where we are from an intellectual standpoint and compare it to a hundred or two hundred years ago, we’ve also evolved. So hopefully we’ll keep evolving. Truthfully, there are ethical concerns and things to consider, and we can talk about that later; there are things to be concerned about with AI. But as far as asking it to emulate a human, I don’t know that that’s the right answer. I think it needs to be the best AI that it can be, and we should be the best humans that we can be. And then hopefully we keep bringing value to the world, and AI does as well.
Wig:
You mentioned ethical considerations. I’d love to hear about that. What do you mean, what’s on your mind in that realm?
Silverman:
Well, there’s a great UBS finance example where they are using avatars, and the avatars are being trained on the financial analysts so that they can automate communicating back to customers. Now, that’s an interesting business decision. You’ve got an avatar that’s been trained to be like you, Andrew, trained to analyze financial market data and respond back to customers, whether you’re in the loop or not. That’s interesting, but what happens when you’re no longer there and that avatar is still branded and named as you? Do you own that avatar, or does UBS own it? And the scary part is, what if UBS decides a year or two from now that your avatar is so good they don’t need you anymore, but it’s still branded as you? Do you own it or do they? How do you handle that?
So that’s one issue. And there was a hearing in Congress, I think it was in the Senate, about this as well, where an author was talking about how Meta has studied all of his books, and now it can write a book almost as well as or better than he can; it can generate his 31st book. Now, his issue primarily was not getting paid for the training, and I don’t think you can solve that problem, personally. But the idea that ChatGPT or Meta is going to generate the 31st book in his style and he doesn’t get any credit or compensation for it: those are issues that we have to consider as far as intellectual ownership.
And obviously the other issue, going back to programmers: there are a lot of kids who went to programming school just 10 years ago who are going to find that their jobs are either not as financially beneficial or are going to face challenges. What do we do as employment changes occur? Microsoft has made comments, I think Chase has made comments; there are a number of businesses out there reducing headcount in particular areas because they’re using AI, and what’s the long-term impact of that?
So there are a number of those issues that are going to come up. And if you have a robot walking down the street, how do you handle all that? I think the AI security space is interesting, because you may have AI agents with greater authority than the humans they’re interacting with, or the reverse may be true. And that goes back to instinct, and then I’ll let it go: you instinctively know when you’ve gotten information that you shouldn’t have seen. Humans are pretty good at that.
You might even enjoy it. Let’s say somehow your manager’s payroll statement got emailed to you. You know that you shouldn’t have seen it, right? You probably read it and entertain it and think about it, and then you decide you deserve a raise, right? But you shouldn’t have seen it, and you know it. An agent is not going to have that instinct. It’s going to say, it’s been sent to me; I must have the authority to know this. And maybe it goes and shares it with other agents or other humans that it interacts with, because its job is to share when payroll is out of whack, and in this case it’s outside the standard for the organization. So you start to get to this point where the human and AI interactions need to be considered, and how do you handle that as well? Those are all the kinds of things that come up in the puzzle. At one point a lot of the AI companies were asking governments and sovereignties to actually get into the regulation business. I think they’ve sort of taken a step back from that, and Europe is still pretty much leading in that area. Hopefully, as they did in the internet era, they’ll continue to do that, and hopefully it’ll spread beyond Europe as well.
Wig:
All right. Well, before we run out of time, I want to make sure we get to this: you brought up some IBM stuff earlier, and you worked for IBM, and it is a big year for IBM. So we should probably talk about the hardware releases, where they’ve made AI very much a focal point. You’ve got the z17 and Power11 both coming out this spring and summer. We’ll start with the actual AI capabilities of these machines. How big a deal is this?
Silverman:
I think it’s huge, and part of the reason is that IBM has been in the silicon and chip business from the beginning. A lot of times we forget about that; you think about the Nvidias of the world and the other companies, and AMD, and even the fabs. IBM at one time owned fabs and did a lot of manufacturing, and silicon development itself has continued to be an innovative part of IBM. We published together an interview with one of the research leaders in the chip space, and I thought that was quite interesting. What’s interesting to me is that the z17 announcement is great because they’ve included AI capabilities within the Telum II processor, improving inference. Imagine, going back to the 98% or whatever the number is: you have a business that’s been running in the mainframe space for a number of years, and they have all this data and transactional information being stored. Now they’ve got access, within that processor and within the IBM realm, to capabilities around AI. They’ve been using it in previous generations for fraud detection and a number of things, but now that capability is expanding. And the same thing is happening with Power11, where they are enabling AI within the Power processors.
And that’s across their different operating systems, whether it’s AIX or Linux or IBM i. But then IBM has actually thrown something else into the equation, called Spyre. Spyre is a PCIe card with 32 cores and 128 gigabytes of memory, and it’s designed to handle the next level up, if you will, where you need to scale an AI operation, maybe generative AI or capabilities in that area. So now, not only is that data available in real time in these processors, but you can generate new applications and new capabilities. And I’ve seen, I don’t know whether it’s accurate, that in the z17 world you’re going to buy these cards in sets of eight, I think. I’m not sure what the Power11 configuration is going to look like, but imagine adding that capacity to two systems that are incredibly reliable. I think the targeted availability of Power11 is 99.9999%, and I know the mainframe is as high or higher. All of a sudden you’re delivering real-time AI capabilities. Nvidia is going to lead in the training space, but the actual business application, the implementation, is there.
And I don’t want to leave software out of the discussion, because IBM has done a great job of continuing to evolve things like watsonx and its own models; it has the Granite models. It is also talking about how these systems and these cards run watsonx code capabilities even better. So if you’re looking at watsonx, you’re looking at code generation, you’re looking at using the Granite models. Going back to the medical explanation, maybe you use an application running on the Spyre card to do a better job of explaining medical diagnoses to someone like myself. All of a sudden you’ve got that access. And I think what it opens up, and you and I talked about this before, is the IBM i world. I started my career at IBM with the AS/400. Imagine 30-odd years of information that you can now actually mine and leverage.
You can see the evolution of a manufacturing process, for example, and build on that. What I really think about IBM in this space right now is that, maybe for the first time in a while, it’s firing on all cylinders. IBM tends to do some things really well while other things don’t quite keep up, but right now I think it’s hitting on everything, and obviously the stock market is seeing it as well. So it’s going to be interesting to watch, and certainly both the Power11 and the new mainframes are coming up, or I think are available now, if my memory’s any good. It’ll be interesting to see how customers take on that technology.
Wig:
Yeah, I mean, the fact that IBM is emphasizing it so heavily, and they’re not a fly-by-night operation trying to make a quick buck, right? So I don’t know, should that be our sign that, yeah, this is real?
Silverman:
I think it’s a sign that it’s real, but it also gets back to where we started in this discussion, and that is putting the foundation on the business and the information: how do you leverage AI, as opposed to creating something separate that you maybe think is going to revolutionize your business? This is really about the foundation, being able to leverage the data, being able to leverage a trusted environment and actually do real business with AI. I think that’s where IBM is putting its investment, and it’s very similar to what they did in the internet era. So it will be interesting to see. One of the interesting things with Power11 to me was that all the platforms are coming out at the same time, including the cloud. So even if you’re not ready to run AI, if you’ve got a Power10 system or something, you can reach out into the cloud and take advantage of the new capabilities.
Those kinds of things, to me, are showing some agility from IBM that maybe hasn’t been there in the recent past the way we would like it to be. And one more thing, and then I’ll stop; you can tell I’ve been around IBM for a little while. IBM is also using AI to infuse its traditional software capabilities. It acquired webMethods, I think about a year ago, and it has already released new capabilities in webMethods that are being fueled by AI. So all of this technology is permeating not just IBM servers and physical technology, but the software side as well.
Wig:
Full stack, as they say. Well, we’re getting close to our time here. My last question, and this may be a fool’s errand, it’s just for fun and we won’t necessarily hold you to it: five years from now, where do you see this going? What’s the AI world going to look like? Or maybe that just means, what’s the world going to look like in five years? We’ll keep it in the realm of business, though. Where do you see AI going in five years?
Silverman:
I think we’re going to have to think about it the way we think about smartphones and tablets. You look at a kid that’s two or three years old today, and you put a tablet or a phone in their hand and they’re just ready to go. They don’t have any barriers, if you will, to using it. They expect it, and if they don’t have it, they probably scream and cry. I think we’re going to see two facets of AI. One is going to be sort of in the realm of where IBM’s going, where it’s implemented somewhere and it’s generating content or applications or technology. But it’s also going to start to permeate everywhere we live and how we operate. It’s going to go to hospitals, which are already using AI today, and it’s going to become more and more prevalent. If you listen to Sam Altman, and I think Musk and company are talking about this as well, you could imagine five years from now having robots right alongside you, doing things with you and taking on chores.
I think we’re going to see that kind of evolution of technology, and I don’t know whether humans are going to decide that we’re going too fast and too far, or whether we’re going to accept it, just as we did with cars and other things that have revolutionized how we live. And the last thing I didn’t mention on the IBM side: there’s one small detail that seems to get overlooked, and that is that quantum computing is going to arrive. IBM is saying, I think, that by the end of the decade quantum computing is going to be real and in production. If you think about quantum delivering capability, capacity and performance characteristics that we can’t even imagine today, then all of a sudden AI can take off. When you think about generative AI being able to hallucinate answers, although as you and I have discussed, it’s not really doing that, quantum has a similar issue where it makes errors and needs to be checked. Once that’s solved, it can accelerate deployment. I think we’re going to see impacts we can’t imagine, both good and bad, probably.
Wig:
Right? It’s kind of hard to imagine the impacts when the underlying technology is so alien to us, so vastly different from classical computing. It’s a whole different paradigm. I’ve heard people say not even the smartest people in the world really understand it. So to ask you to predict exactly what combining AI with quantum is going to look like would probably be unfair.
Silverman:
But one more thought on the optimistic side. Imagine you go to the hospital and you have something going on, and because AI is there, maybe with quantum in the background, they run a blood test and, within minutes, instead of having to wait two weeks for a lab result, they know that you have a virus or something, and they know what the best treatment is. Maybe, in the robot sense, they’re then mixing whatever the unique combination of medicine is, and you walk out the door either already healed or on your way to being well, because you moved quickly. Those kinds of things are going to be real, and that’s the kind of benefit we can all see coming.
But as for the impact on work and day-to-day life, I don’t know where we’re going to end up. Hopefully we’ll see more value and improvement in our quality of life. I think the CEO of J.P. Morgan has said that AI could take your five-day work week down to three and a half days. And between you and me, I imagine we’ll still be working five-day work weeks; there just won’t be as many of us.
So there’s going to be some impact. And the last point I always think about, especially with generative AI, is that the initial prompt is gold. You’ve got to ask it something, right, in the world we’re living in today. And on the back end, you have to know whether the answer is any good, whether it hallucinated. Those two things are human. I don’t know what happens when that kind of governance, if you will, of AI performance is no longer a human quality; all bets are off as to how much that can be automated and improved. And we’ll continue to do that. It’s interesting, if you look at the training of these large language models, there’s still the core training. Last I looked, I think ChatGPT’s cutoff date was 2024, though they now obviously do research in real time. But continuing the training and evolution of these models is going to be challenging. And then there was a great presentation, I’d have to look it up, about the bias that’s already in the models and how we correct it. I think the comment was: ask ChatGPT for a picture of a construction worker, and it’s a male. So figuring out how to make sure that what we’re doing is at least fair and balanced is going to be a challenge as well.
Wig:
Where that bias is originating isn’t always obvious, right? Because there’s that black box aspect to it. So I imagine that’s part of the challenge as well.
Silverman:
If you look at it, I remember years ago IBM had Watson World, and we’re talking maybe a decade ago at this point, maybe a little less. You would walk up and ask, what is Watson doing when it’s not busy? And they would say, it’s just surfing the internet. So if you think about it, whether it’s OpenAI or Meta or whoever, they’re going out and finding all this content that’s out there on the internet and Wikipedia and all these other sources. But the primary generators of the original information tend to be, I’d say, white males. The preponderance of the information these models got trained on is what’s been produced over the last number of years, depending on how far back they go. So you have to have some level of reality check that asks whether what we’re training them on, and what we’re getting back, is where society wants it to be. You don’t want a female who wants to be a construction worker to go online and not see anyone like herself. So you want to figure out how to strike that balance, and I think that’ll be a challenge going forward.
Wig:
Yeah, there are going to be lots of challenges going forward, it sounds like. I’m excited that we’re alive to see this play out. Hopefully we’re living in a utopia five years from now. We’ll see. Probably not. But Brian, I really appreciate you coming on IT Social Hour. It’s been fascinating to hear you get deep into all this stuff. And for our audience, you can find Brian’s work at TechChannel, and I think you also post some blogs on LinkedIn.
Silverman:
I do, and I’m getting ready to launch a website, and I’m working on a class that’s going to come out around the roadmap, so keep an eye out for that as well.
Wig:
All right. Yeah, I know you’ll keep us posted. Thanks, everyone, for joining IT Social Hour. I hope you enjoyed this conversation. If you listened all the way through, you’ve probably got some things to think about for the rest of the day, the rest of the week, the rest of the year. I know I do. So Brian, thanks again.
Silverman:
Thank you. Thank you everybody.
Wig:
Yeah, yeah. And just so you know, if you’re not already subscribed to our newsletters, you can find those at techchannel.com/subscribe. You can find Brian’s work there, as well as all sorts of content about the IBM systems we were talking about, about AI concepts, about data concepts; it runs the gamut. So check it out at techchannel.com/subscribe, and I’ll leave it at that. Thanks so much. Bye.