
AI on Your Laptop: Part 1

AI expert Mark Ray kicks off a new series in which he’ll explain how to set up an AI on your personal computer

As I write this in March 2024, artificial intelligence is ubiquitous. You can’t turn on your television without being inundated with reports about AI. Your smartphone likely has a backlog of messages and texts on the subject, and what’s left of print media is top heavy with self-proclaimed pundits opining on this or that AI topic. And forget about social media: the platform of your choice will be choked with posts, articles and what used to pass for Tweets about one of two subjects: the glories of AI or the evils of AI.

It is very difficult these days to separate the wheat from the chaff when it comes to artificial intelligence: Many articles and posts sound convincing, so you may find yourself reading a 10,000-word article—or even a whole book—about AI and come away nodding your head, thinking you have a better understanding, only to find that article or book completely debunked in the very next item you read. Over the past four years, my own research has led me to this exact conundrum; it’s only since I became formally trained in AI that I’ve been able to discern the good stuff from the complete crap.

In my experience, there is roughly a 75/25 split between the lousy material out there about AI and the really good, thoughtful pieces that have actually increased my understanding. The bad stuff tends to be really bad: Pieces that fan the flames of paranoia and fear about AI do real harm to the world at large by sowing the seeds of the biggest enemy of any technology, which is mistrust—and mistrust holds back innovation in any endeavor. The good stuff increases our knowledge and understanding of AI and helps those engaged in the field invent new models and methods that can benefit everyone.

But how do you tell the difference between the good and the bad?

The Rationale

I wrote for TechChannel and its earlier incarnation, IBM Systems Magazine, for years on the topic of IBM’s AIX operating system. In those articles, I tried to cut through the misinformation—which was considerable—to deliver clear, concise and useful tips on how to get the most out of the operating system in your own environment. The articles were well-received, and I continue to get supportive emails about them to this day. I want to do the same thing in this article series about artificial intelligence. I want to show you how to cut through all of that chaff to get to the kernels (pun fully intended) that will enhance your understanding of the subject and, hopefully, spur you to start creating something useful on your own. Now, how am I going to do that?

We learn best by doing. In my AIX articles, I practiced that philosophy by telling my readers how to diagnose situations that had led to disruption in the performance or availability of their systems. I then showed them, step-by-step, how to fix those issues. And in cases where readers’ systems were performing fine, I showed them how to artificially induce errors on test equipment so they could practice fixing those problems, should they ever occur. I’m going to follow the same philosophy in these articles. I’ll show you how to turn your own Windows and Linux laptops or desktops into full-fledged AI systems so you can start exploring all the different facets of AI. My goal is to get you to the point where you can discern the difference between good, useful information on AI and that which is useless—or, even worse, harmful.

Getting Started With AI

Let’s get started. A fundamental misperception about AI is that you need extremely powerful, high-priced hardware to instantiate and run it. While this is generally true if you’re diagnosing cancer, plotting weather patterns for the next 100 years or developing new wonder drugs, the typical laptop or desktop you could pull off the shelf at any Staples or Best Buy will do just fine to build your own AI system and run smaller models that will greatly aid your understanding of AI. I’m going to break down how to set up your own AI system into two major parts. First, I’m going to show you how to build your own AI system and run it on the Microsoft Windows operating system; I chose to do this first because most of us have at least one system running Windows, and it will be the system most of us power up for work or play.

In the next part, I’ll tell you how to set up an AI on Linux. Most of us involved in the IT field these days have at least a passing familiarity with Linux, and if we use Linux extensively in our work, we most likely also have Linux systems sitting in our home offices. The thing about Linux is that it lets you fiddle with the nuts and bolts of an AI system at a finer grain than Windows does, but Windows, with its GUI features, is much easier to get started with. My own opinion is that neither operating system is appreciably better than the other when it comes to learning AI; both let you get into the guts of a system deeply enough to start understanding all its features.

Also, when you set up your AI system, and as you progress in these articles, you’ll find that AI looks pretty much the same in both OSes if you exclude the GUI front end the Windows version gives you. Your learning will be portable, meaning your setup and usage will be nearly identical in both. You’ll be able to directly apply what you learn on your own Windows laptop or desktop to a Linux system. And your hardware requirements will be much the same regardless of your OS. With that in mind, what sort of hardware do you need when creating your own AI?

Hardware and AI

I’m writing this article series on a Lenovo IdeaPad laptop. It runs an Intel Core i7 processor with 16GB of RAM and a 1TB hard drive. The drive is an SSD, which will help performance when you start loading and checkpointing your own models, but an SSD is by no means required. My operating system is Windows 11 Pro, but you needn’t get fancy with the OS; a Windows Home edition going all the way back to version 7 will do quite nicely. About the only modification you may need to make to Windows is increasing the size of your paging file. Both loading a dataset into an AI model and checkpointing that model will page heavily under some circumstances, as will using particular Python-based software packages. What I would do is increase your paging file to at least twice the size of your installed RAM; for example, I have a 32GB paging file on my laptop with its 16GB of RAM.
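The sizing rule above is simple arithmetic, and a tiny helper makes it explicit. This is only a sketch of the rule of thumb; the function name is mine, and actually changing the paging file is done through the Windows System Properties dialog (or PowerShell), not Python.

```python
# Rule of thumb from the text: make the Windows paging file at least
# twice the size of installed RAM. This helper only does the math;
# the actual change is made in Windows itself, not from Python.

def recommended_paging_file_gb(installed_ram_gb, multiplier=2):
    """Return a suggested paging-file size in gigabytes."""
    return installed_ram_gb * multiplier

# The laptop described above: 16GB of RAM -> a 32GB paging file.
print(recommended_paging_file_gb(16))  # -> 32
```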

The only caveat I can think of when doing your own AI work is to reserve time for it when you’re not doing other things; you’re going to be disappointed in the quality of your playback if you try to watch a movie while you’re loading a dataset or training a model. I also need to tell you to make sure whatever virus protection you have is up to date. You’re going to find as you progress that nearly all of the software packages you’ll need to set up an AI adequately are open source; there are very few packages you need to pay for, as the vast majority of AI development software is community driven. Personally, I’ve never heard of an AI package containing a virus, and I’ve been using them for four years. But you never know, so a common-sense approach to virus protection when you’re running AI on Windows is as valid a proposition as any other use to which you would put such a system.

AI Terminology

What next? Chances are, you are unfamiliar with the basic terminology required to set up and use an AI system. What’s worse, I just went out to the internet and asked several search engines to define some AI terms, and what I found was a wide discrepancy in those definitions. So here’s what you do:

Ever hear of ChatGPT? ChatGPT is built on a large language model (or “LLM” for short) developed by OpenAI. It is a massive AI system that has been trained on everything from cookie recipes to quantum mechanics; it can feel like a search engine taken to an exponential degree, and for the purposes of this series I’ll treat it as a handy authority on most things AI-related. It has an API you can access for free here.
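To make the idea of “an API” concrete, here is a hedged sketch of what a call to OpenAI’s chat endpoint looks like from Python. The endpoint URL and model name reflect OpenAI’s public documentation as of early 2024 and may change; actually sending the request requires an API key from your OpenAI account, so this example only builds and prints the request payload.

```python
import json

# Endpoint and model name per OpenAI's public docs as of early 2024;
# both are subject to change, so check the current documentation.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_request(question, model="gpt-3.5-turbo"):
    """Build the JSON payload for a single-question chat request."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": question}],
    }

payload = build_chat_request("In one sentence, what is a neural network?")
print(json.dumps(payload, indent=2))

# With a real key, you would POST this payload to API_URL using a
# library such as urllib.request, with headers:
#   Content-Type: application/json
#   Authorization: Bearer YOUR_API_KEY
```

The payload is just JSON: a model name and a list of messages. Everything else about talking to the API is ordinary HTTP.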

There are two types of accounts you can get to access ChatGPT. The first is a free account that you can use to get going with your own AI learning, and it is fine for most uses. There is also a fee-based account that gets you the latest version of the GPT model, which provides better depth and completeness in its answers. The fee-based account also gives your activities priority over free accounts, meaning your questions will be answered more quickly and in more detail. This paid tier currently costs $20 a month; in my own work, I’ve found it invaluable and don’t see how I ever lived without it—it’s that good for learning this material. You might also be curious about what the “GPT” in ChatGPT means. Let’s let ChatGPT define itself, in its own words; in this definition, ChatGPT refers to itself as a “smart robot”:

“‘Generative Pre-Trained Transformer’ (GPT) is a fancy name for a type of smart robot—or more accurately, a computer program—that has been trained this way (on huge corpuses of material of all kinds). ‘Generative’ means it can create or generate responses on its own. ‘Pre-Trained’ means it has already been taught a lot before you even start talking to it. ‘Transformer’ is a special technique it uses to understand and generate language really well.”

“Generative” models give the appearance of thinking on their own, coming up with wholly original answers to questions. One thing I want to get out of the way right off is that AIs do not “think” for themselves; that is the purview of a completely different type of AI called “Artificial General Intelligence” (AGI). An AGI would be something like the HAL 9000 computer in the film 2001: A Space Odyssey; many companies are working on AGI, but its actual implementation is years away, if it ever arrives. The AIs we can build on our own equipment do not “think” like we humans do—they process information through very complex mathematical procedures called “algorithms.” These algorithms extrapolate and formulate original-sounding answers that make it appear the machine can think, but this impression is false. AI is, basically, the solving of problems through the use of extremely complex algorithms arranged in a series of steps—or “nodes”—in which each algorithm comes up with a partial answer to a question and then passes that partial answer on to the next node, which adds to that answer, and so on, until a complete answer is formed.
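That flow—each node refining a partial answer and handing it to the next—can be mimicked with ordinary functions. The “nodes” below are toy string operations I made up for illustration; real AI systems use learned mathematical transformations, but the shape of the data flow is the same.

```python
# Toy illustration of the "nodes" idea: each step takes a partial
# answer, refines it, and passes it on to the next step, the way
# layers in a model hand intermediate results forward.

def strip_node(text):
    return text.strip()

def lowercase_node(text):
    return text.lower()

def tokenize_node(text):
    return text.split()

def run_pipeline(question, nodes):
    """Pass the input through each node in turn."""
    answer = question
    for node in nodes:
        answer = node(answer)
    return answer

print(run_pipeline("  What IS a Neural Network?  ",
                   [strip_node, lowercase_node, tokenize_node]))
# -> ['what', 'is', 'a', 'neural', 'network?']
```

Each function only solves part of the problem; the complete answer emerges from the chain, which is the essence of the node-by-node picture described above.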

At this point, even the few technical terms like “model” and “generative” I’ve introduced in this article are probably unfamiliar to you. AI study is terminology-heavy, expressing concepts far outside the usual scope of everyday IT activities. So with every article, I’m going to give you definitions of the terms I use; these definitions will not be my own, but will come from ChatGPT itself, which can describe AI concepts in plainer language than most humans manage. With that, here are some terms you’ll need in your beginning AI study:

  • Model: “Think of an AI ‘model’ as a recipe that a computer follows to make decisions or predictions, just like how you might follow a recipe to bake a cake. This recipe isn’t for cooking, though—it’s for processing information. The AI model learns this recipe from a lot of data, like texts, images or sounds, depending on what it’s designed to do. For example, it might learn from reading lots of books or looking at pictures.”
  • Algorithm: “An ‘algorithm’ in AI and machine learning is like a set of instructions or a recipe that tells the computer exactly how to solve a problem or perform a task. Imagine you’re teaching someone how to solve a puzzle. You’d give them step-by-step instructions to follow, like ‘start with the corner piece’ or ‘look for pieces with the same color.’ In the world of AI and machine learning, an algorithm does something similar for the computer—it guides the computer through the steps it needs to take to make predictions, understand language, recognize images or learn from data.”
  • Neural Network: “Imagine if you could make a mini brain out of a bunch of tiny lights (let’s call them ‘neurons’) that turn on and off to solve problems or learn new things. Each of these lights can send signals to others, telling them to light up or stay dark. By connecting lots of these lights in different patterns, you create a network that can learn from experiences, just like how your brain learns to recognize faces or understand words. In AI, a ‘neural network’ works a bit like this imaginary network of lights. It’s a computer program designed to mimic how human brains learn and make decisions. The ‘neurons’ in this case aren’t actual lights, of course—they’re pieces of code that can process information and pass it on to other pieces of code in the network.”
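The “tiny lights” in that neural network definition map directly onto a few lines of code. Below is a single artificial neuron sketched in plain Python: it weights its inputs, sums them, and squashes the result into the 0–1 range with a sigmoid function. The weights and bias here are made-up numbers for illustration; in a real network, they are learned from data.

```python
import math

def sigmoid(x):
    """Squash any number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    """One 'light' in the network: weighted sum plus bias,
    passed through an activation function."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

# Two strong input signals with positive weights "light up"
# the neuron: the output is close to 1.
print(round(neuron([1.0, 1.0], [2.0, 2.0], -1.0), 3))  # -> 0.953
```

Wire many of these together—the output of one feeding the inputs of others—and you have the network of “lights” ChatGPT describes.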

Throughout this article series, I will refer to models, algorithms and neural networks a great deal, so you’ll need to know these terms before we proceed.

I think that’s enough for now. Next time, in Part 2, we’ll get into the actual setup of an AI system on Microsoft Windows, starting with downloading and installing a platform on which a great deal of AI tooling is built: Anaconda. Think of Anaconda as something like a software-based virtual machine you will create on your own equipment; it will be the environment in which you’ll carry out all of your AI activities.