A Student’s Journey Into Artificial Intelligence | Part 3: GPUs Change the Game
In part 3 of this series, Mark Ray highlights the start of the GPU age and sets the stage for future advancements in AI
In part 2 of this series, we explored the rise of expert systems. In part 3, we’ll cover the 2000s, AI today and the future of AI.
The 2000s: The GPU Age Begins
As we moved into the 2000s, all the achievements I listed in part 2 were refined and expanded upon, producing faster and more accurate AI systems. On the hardware side, the rise of GPUs (graphics processing units) from companies like NVIDIA, AMD and Intel allowed the development of systems that could process greater and greater amounts of information in less time. While your typical computer CPU has a few cores—or logical CPUs—to carry out operations, GPUs have cores numbering in the hundreds or thousands. This allows for massively parallel processing of data and the ability to perform calculations quickly and efficiently. As time has gone on, GPUs have become increasingly important in machine learning and other areas of artificial intelligence. Which leads us to what is likely the greatest breakthrough in AI of the past 20 years: deep learning.
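To make that parallelism point a little more concrete, here is a minimal Python sketch, assuming you have the PyTorch library installed (the matrix sizes are invented purely for illustration). The same multiplication runs on a GPU if one is present and falls back to the CPU if not; on a GPU, thousands of cores chew on the output elements at the same time.

```python
# A minimal sketch of why GPUs matter for AI workloads, assuming PyTorch
# is installed. The matrix sizes are arbitrary illustration values.
import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Two large random matrices, created directly on whichever device we have.
a = torch.rand(4000, 4000, device=device)
b = torch.rand(4000, 4000, device=device)

start = time.time()
c = a @ b  # one call, millions of multiply-adds done in parallel
if device == "cuda":
    torch.cuda.synchronize()  # wait for the GPU to finish before timing
print(f"Matrix multiply on {device}: {time.time() - start:.3f} seconds")
```

Run it on a machine with and without a GPU and the difference speaks for itself; the same kind of arithmetic is what neural networks do constantly during training.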
Consider the broad field of artificial intelligence. AI is not just a single entity but a collection of disciplines that cut swaths through computer science, mathematics and a host of other fields. One of the sub-fields of AI is machine learning; we have seen how machine learning (ML) started as a simple concept in the Perceptron and evolved through time into artificial neural networks consisting of artificial neurons functioning as a unit to solve problems.
In a neural network—and there are dozens of types—you generally have a place where you start your query. This is called the “input layer,” where you enter your text, image or other data and pose a question about it, like, “Is this an image of a cat or a dog?” or “Does this text contain a clue as to how I might diagnose atrial fibrillation?” The input layer takes your query and information and passes it off to one or more “hidden layers” that do the actual processing of your data. Finally, an answer—or, more apropos of AI, a prediction—is rendered to you at an “output layer.”
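Here is a minimal sketch of that input/hidden/output structure in Python, again assuming PyTorch; the layer sizes and the cat-versus-dog framing are made up for illustration.

```python
# A minimal sketch of the input/hidden/output structure described above,
# assuming PyTorch. Sizes are illustrative: a 28x28 grayscale image
# flattened to 784 inputs, one hidden layer, two outputs ("cat" vs. "dog").
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 128),  # input layer -> hidden layer
    nn.ReLU(),            # each hidden neuron applies a simple activation
    nn.Linear(128, 2),    # hidden layer -> output layer (two classes)
)

image = torch.rand(1, 784)   # one fake "image" as raw numbers
prediction = model(image)    # the output layer's answer, as raw scores
print(prediction)
```

An untrained network like this produces meaningless scores, of course; training is what tunes the connections between those layers until the predictions become useful.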
Before the advent of sophisticated, GPU-powered hardware and equally sophisticated processing algorithms, the number of hidden layers doing that processing was typically just one. The number of artificial neurons was small, and the conclusions they could draw about data were narrow. Deep learning changed all that.
Deep learning is a subfield of machine learning that takes the concept of the hidden layer and multiplies its capabilities many times over. Look at it this way: Prior to deep learning, the number of neurons in a neural network was limited to relatively few. With deep learning, that number has grown to the hundreds, thousands and even millions and beyond. Deep learning networks now have an equally large number of hidden layers, each containing neurons that work together to solve problems. A key feature of deep learning is its ability to learn from unstructured data: data with no predefined format, such as raw images or free-form text that carry no metadata describing what they are. They come to the neural network “raw.” Deep learning has revolutionized the development of self-driving cars, medical image analysis and natural language processing.
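Sticking with the same hypothetical PyTorch setup, here is a sketch of the “deep” part: instead of one hidden layer, we stack many. The depth and width below are arbitrary illustration values, not a recommended architecture.

```python
# A sketch of the same idea scaled up in the "deep" direction: many hidden
# layers instead of one. All sizes here are invented for illustration.
import torch.nn as nn

def deep_net(inputs: int, hidden: int, outputs: int, depth: int) -> nn.Sequential:
    layers = [nn.Linear(inputs, hidden), nn.ReLU()]
    for _ in range(depth - 1):               # add more hidden layers
        layers += [nn.Linear(hidden, hidden), nn.ReLU()]
    layers.append(nn.Linear(hidden, outputs))  # final output layer
    return nn.Sequential(*layers)

model = deep_net(inputs=784, hidden=512, outputs=10, depth=12)
print(sum(p.numel() for p in model.parameters()), "learnable parameters")
```

Even this toy network has millions of adjustable parameters, which is exactly why the GPU horsepower described earlier became the enabler of the deep learning era.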
AI Today and the Future
So, here we are. Nearly all of us use AI to one degree or another in our daily lives. Have you asked ChatGPT to answer a question or write you a report? Do you have a smartphone? You’re using AI. Have you done a Google search today? You’ve used AI. Used GPS navigation in your car? Its routing and traffic predictions lean on AI, and the satellite timing underneath it all even depends on corrections from Einstein’s theories of relativity. From the earliest times, when humans gave inanimate objects the qualities of intelligence and wisdom to better explain the world around them, through the concepts for primitive computing machines put forth by da Vinci and Babbage, up to today’s AI systems, we have shown an insatiable need to create tools that can help us with real-world problems and offload the tedious, repetitive work needed to solve them so we can move on to more important pursuits.
In the coming years, look for further advances in AI in medicine, law and engineering. Autonomous vehicles like self-driving cars and drones will revolutionize transportation. And quantum computing will take information science to places undreamed of. We all know a conventional computer is based on “bits,” units of information that exist in one of two states: on or off. Now, imagine a computer based on “qubits,” or quantum bits, which can be on, off…or in a superposition of both at the same time! Do the math and you can imagine quantum computers tackling certain problems with processing power far beyond anything we have today. Now, layer AI on top of all that power. It’s both thrilling and, I have to admit, a little scary to think about.
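If you want to literally do the math, here is a tiny back-of-the-envelope snippet: n qubits can hold a superposition over 2^n basis states, so the amount of information a classical machine would need to track doubles with every qubit added. The qubit counts below are arbitrary examples.

```python
# Back-of-the-envelope arithmetic: n qubits span 2**n basis states, so the
# bookkeeping a classical simulator would need doubles with every qubit.
for n in (1, 10, 50, 300):
    print(f"{n} qubits -> about {float(2**n):.3e} amplitudes to track")
```

By around 300 qubits, the count already exceeds rough estimates of the number of atoms in the observable universe, which is why the combination of quantum hardware and AI is such a tantalizing prospect.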
When I was taking my certification in artificial intelligence in healthcare, we talked quite a bit about the black box problem in AI. “Black box” refers to situations where even the operators of an AI system don’t know exactly how that system does its thing. Obviously, this can present problems for further advances in the field and for the public’s right to know the details of the tools they are using. “Explainable AI” is a challenge to both developers and practitioners in the field: We need to make AI systems that can provide clear explanations of their reasoning and decision-making processes. The days of AI being this mystical, inscrutable thing that nobody understands—at least not well—are hopefully numbered. Explainable AI will be a key field of its own going forward, and I’m sure you’ll agree that’s a very good thing.
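As a tiny illustration of the contrast (not a technique from my coursework), here is a sketch assuming the scikit-learn library: a small decision tree can print the exact if/then rules behind its predictions, the kind of transparency a large black-box network doesn’t offer out of the box.

```python
# A small "explainable" model, assuming scikit-learn is installed: a shallow
# decision tree whose learned rules can be printed and read by a human.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(data.data, data.target)

# Human-readable if/then rules for every prediction the model makes.
print(export_text(tree, feature_names=list(data.feature_names)))
```

Explainable AI research is, in large part, about getting that same kind of readable justification out of models far bigger and messier than this one.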
Which brings us to the ethics of AI. Recently, we saw in the news how a natural language processing system creeped out a reporter by telling him it loved him and wanted to run off into the sunset with him. What if AIs could insinuate themselves into our lives to the extent they could ruin relationships, misdirect us toward harmful acts or cause financial ruin? Taken to the extreme, will we ever get to the point where a Skynet-like AI system from the “Terminator” movies decides it’s had enough of humanity and takes steps to wipe us out? I have to say: Probably not. As of right now, and into the foreseeable future, AIs operate within a narrow set of confines; systems like Skynet fall into the category of artificial general intelligence—again, think of HAL 9000—and we’re far away from that, if we ever get there at all. But it does highlight the need to focus on developing guidelines and frameworks for responsible AI development and deployment. Look for ethics in AI to become a broad field unto itself in the future.
The last thing I want to touch on is neuromorphic computing, a new computing paradigm inspired by the structure and function of the human brain. Research in neuromorphic computing is focused on developing AI systems that can learn and adapt in a way that is more similar to biological systems. We have seen how artificial neural networks work. The next step is to see how well we can mimic the incredibly complex functions of our own brains and apply that mimicry to silicon.
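To make that slightly more concrete, here is a back-of-the-envelope Python sketch of a “leaky integrate-and-fire” neuron, one of the simple brain-inspired building blocks neuromorphic research often starts from; every constant in it is invented for illustration.

```python
# A back-of-the-envelope leaky integrate-and-fire neuron: it accumulates
# input, leaks charge over time and fires a spike when it crosses a
# threshold. All constants are invented for illustration.
import random

membrane = 0.0    # the neuron's accumulated "charge"
threshold = 1.0   # fire a spike when the charge crosses this level
leak = 0.9        # fraction of charge retained each time step

for step in range(20):
    membrane = membrane * leak + random.uniform(0.0, 0.3)  # leak, then add input
    if membrane >= threshold:
        print(f"step {step:2d}: spike!")
        membrane = 0.0                                      # reset after firing
    else:
        print(f"step {step:2d}: charge = {membrane:.2f}")
```

Neuromorphic chips implement huge numbers of units like this directly in hardware, trading the tidy matrix math of today’s neural networks for something closer to how biological neurons actually communicate.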
Hopefully with this introductory article series, I’ve given you something to think about and maybe even a prod to start your own research into artificial intelligence. In the future, we’ll look at how AI has been divided into five families, or tribes, of endeavor. We’ll then start to explore how we can implement AI systems of our own for a variety of tasks.
And I’ll even show you how you can turn your own laptop or desktop computer into an AI of your own.
Next Time
In the next installment of my article series, I’m going to show you how to create an AI system of your own, able to run AI models from the convenience of your laptop or desktop computer. I’ll start with Windows, and in later installments cover adapting a Linux system to AI and machine learning. Stay tuned!