A Matter of Trust: Why Your AI Strategy Needs a Librarian
Amid an AI "trust crisis," Nathan Smith, who leads mainframe data replication strategy for customers at Kyndryl, writes about the ways the mainframe can help generative AI produce more reliable results
When I flip a light switch, I expect the light to turn on. When I pull up my browser to search for something, I expect the search engine to show me results within a couple of seconds. When I pay for something online, I expect my bank to deduct the correct amount from my account and the merchant to ship the correct product.
That level of trust has been built up over decades of technological advancement, but the latest transformational technology, generative AI, has yet to earn the complete confidence of its users. We have entered a trust crisis with this first wave of true AI—a groundbreaking technology that will transform humanity but is fundamentally untrustworthy. However, the right approach to system architecture can shrink that trust gap, backed by a technology that has been building trust for over six decades: the mainframe.
If AI is the storyteller prone to flights of fancy, the mainframe is the librarian faithfully serving it records from deep in the stacks. What’s needed now is a way to make sure the storyteller stays true to the information it receives from the librarian.
Trust: The Foundation of Society
Our society is built on the innate trust we place in each other, through both direct and indirect actions. We generally trust that people will obey the law and observe societal norms. We trust each other to do what is expected in school, in business and in life—and we’ve built systems that reduce friction across many life processes. At first, there was significant mistrust of computers handling transactions, but the benefits far outweighed the risks of these early mainframe computing systems. Trust was earned over decades, and these business applications hummed along.
Then came the Internet Age, with e-commerce entering the lexicon as the new buzzword. New players entered the (cyber)space, generating excitement as disruptors of traditional brick-and-mortar incumbents. Amazon started as an online bookshop challenging both local bookstores and established brands like Waldenbooks, Borders and Barnes & Noble, but quickly found that its site could also sell other merchandise. Webvan envisioned itself as a grocery disruptor—a real timesaver offering home delivery. The dot-com bubble grew as optimistic fervor expanded, driven by the promise of making real money on a new arrival in the home: the personal computer.
Strangers on the Internet
In those early days of online shopping, the excitement was real, but people struggled to trust e-commerce sites. A customer might be willing to place an order or make a reservation without a credit card, but who in their right mind would trust a stranger on the internet with banking information? Then came the dot-com bust—many promising startups and darlings of the era went bankrupt or simply shut down. A few, like Amazon and eBay, survived; however, Amazon famously did not turn a profit for years after its mid-’90s launch. Companies were learning lessons—both old and new. Dot-com companies that could deliver survived, while traditional companies adapted to the emergence of the internet.
People began to trust companies they had real-world experience with, like banks, airlines and traditional brick-and-mortar stores (which quickly earned the moniker “click-and-mortar”). Interestingly, many of these companies enabled their reliable, trusted systems to interface with new technologies. A customer might use a clean, cutting-edge banking website to set up bill pay, but the processing is still handled by the same banking mainframe system that has been in use for decades.
The mid-2000s introduced us to the concept of cloud computing. Amazon, through the monetization and commoditization of its excess server and data center capacity, created Amazon Web Services. Don’t want to spend money buying physical servers? Now you can augment your capacity through a model that rhymes perfectly with the past: timesharing. Just as early computing relied on sharing centralized mainframe resources, the cloud allowed companies to lease capacity that wasn’t their own. Amazon, Alphabet (Google), Microsoft, IBM and Oracle have built immense trust over the past few decades by providing reliable, low-friction infrastructure for integration into a company’s existing IT architecture.
AI: Untrustworthy by Nature
The commoditization of server and data center capacity has been a boon for tech startups—and has helped fuel the research and development of AI. Artificial general intelligence (AGI) is the holy grail of computer science, sought after for the last 70 years. While we’ve seen the development, release and broad adoption of generative AI, we’ve also seen its faults emerge alongside it. Generative AI tools can hallucinate and fabricate their own “facts.” Sometimes they own up to their mistakes after being confronted, but not always.
There is no easy fix. AI systems are probabilistic: They use the data they’ve seen to provide a “best guess” of what will happen next. Computing, for the most part, has been deterministic, where 1 + 1 = 2 always and forever. Much like the light switch, I trust that the calculation will yield consistent results. The probabilistic nature of AI undermines that trust. How can I assume whatever the AI is telling me is true if I’m not a subject matter expert? The answer is … you don’t. Much as a high school algebra teacher demands, AI has to show its work, separating the hard facts it uses from the more colorful embellishments.
Anchoring AI to Reality
Consider a librarian and a storyteller. A librarian knows where all the books are, keeps records of which shelves they are on and can retrieve them. The process is deterministic—you give the librarian the call number, and they know exactly where the book is. A storyteller, however, can use the facts retrieved by the librarian and present them to the consumer with added flair. You may want information about the Battle of Chickamauga, but a storyteller may add unverifiable embellishments to give the story spice.
The solution isn’t to break the storyteller’s pen, but rather to anchor it. Systems must be architected to allow the storyteller to provide narratives only for the records the librarian hands them. This is where the mainframe ecosystem shines in the enterprise. The mainframe has been the reliable, deterministic librarian—an immutable system of record. By piping this deterministic data into our probabilistic AI models—a process known as “grounding” or “retrieval-augmented generation” (RAG)—we shrink the trust gap. We get the speed, fluency and flair of AI backed by the unshakeable integrity of the mainframe. The interface may change, but the foundation of trust remains the same. History may not repeat, but it certainly rhymes.
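For readers who want to see the shape of the idea in code, here is a minimal RAG sketch. It is an illustration, not any vendor's actual implementation: Python's built-in sqlite3 stands in for the mainframe system of record, and the retrieve_records, build_grounded_prompt and ask_model names are hypothetical, with ask_model a placeholder for whatever LLM client a real stack would use.

```python
import sqlite3

# --- The "librarian": a deterministic system of record. ---
# sqlite3 is a stand-in here; in practice these records would live in a
# mainframe data store and be queried or replicated from there.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id TEXT PRIMARY KEY, balance REAL, status TEXT)")
conn.executemany(
    "INSERT INTO accounts VALUES (?, ?, ?)",
    [("ACCT-001", 2500.00, "active"), ("ACCT-002", 0.00, "frozen")],
)

def retrieve_records(account_id: str) -> list[tuple]:
    """Deterministic retrieval: the same call number always finds the same shelf."""
    cur = conn.execute(
        "SELECT id, balance, status FROM accounts WHERE id = ?", (account_id,)
    )
    return cur.fetchall()

def build_grounded_prompt(question: str, records: list[tuple]) -> str:
    """Anchor the storyteller: the model may narrate only the records it is handed."""
    facts = "\n".join(
        f"- account={r[0]}, balance={r[1]:.2f}, status={r[2]}" for r in records
    )
    return (
        "Answer using ONLY the records below. If the records do not contain "
        "the answer, say so; do not guess.\n\n"
        f"Records:\n{facts}\n\nQuestion: {question}"
    )

def ask_model(prompt: str) -> str:
    # Hypothetical placeholder for a chat-completion call; swap in a real
    # LLM client here. The key point is that it only ever sees the prompt.
    return f"[model response grounded in a prompt of {len(prompt)} chars]"

records = retrieve_records("ACCT-001")
print(ask_model(build_grounded_prompt("What is the balance on ACCT-001?", records)))
```

The design choice worth noticing is that the model never sees anything the retrieval step didn't hand it; production systems typically go further, adding citation checks or post-hoc verification so the storyteller's answer can be traced back to the librarian's records.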
As we stand close to the “fire” of this new AI revolution, we don’t have to get burned. We just have to make sure that before the storyteller speaks, it checks its work with the librarian. After all, it is always a matter of trust.