
When Small Language Models Fit Just Right

In this edition of TechTalk SMB, host Charlie Guarino speaks with data scientist Arthur Van Schendel about the virtues of small language models


Charlie Guarino: Hi everybody, this is Charlie. Welcome to TechTalk SMB. I am thrilled today to be sitting with Arthur Van Schendel. Arthur is a data scientist and AI engineer working in the areas of data science and generative AI. Arthur holds a bachelor’s degree and master’s degree in applied mathematics and computer science. His mission is to apply the latest data science and artificial intelligence technologies in new areas of research and industry. I have to tell you that I recently had the great pleasure of having a nice dinner meeting with Arthur in Geneva, and I was so fascinated by our conversation. I really wanted to share his knowledge, specifically on small language models, or SLMs. So first of all, Arthur, welcome to our podcast today.

Arthur Van Schendel: Thank you very much, Charlie. It’s a really great pleasure to be here. Thank you for inviting me, and yeah, thank you very much.

Charlie: Thank you for joining me. So I briefly mentioned SLMs, or small language models, and I have to share with you that this was a new topic of discussion for me. I don’t think SLMs are particularly new, but I think that maybe the terminology itself might be new, or newer, to the general population. What are your thoughts on that? First of all, let’s just start from the very beginning. Everybody seems to know what LLMs, large language models, are. That name has become synonymous with ChatGPT and similar technology. SLM was new to me. So let’s talk about that for a little while. Help me out on that. What are SLMs?

Arthur: Yeah, so small language models are basically the same principle as large language models. It’s just that the size is way smaller, so the model is actually way more efficient. That’s the idea: it’s smaller in size while keeping good quality in terms of output.

Charlie: Who decides, or are there known parameters or definitions? What makes a language model small or large? Are there hard thresholds, or is it just something we say: because it has fewer parameters than a more traditional one, it falls into the small category?

Arthur: Yeah, that would be it. It’s kind of arbitrary. It depends on who you’re talking to, and probably on people’s budget. But I would say a good limit would be that below 7 billion parameters is considered small, because I would define small language models as the ones you can actually use on consumer hardware, as it’s called. So it’s something that’s not reserved to a company or a lab. It’s actually made for consumers, I would say.
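As a rough illustration of why that 7-billion-parameter threshold lines up with consumer hardware (a back-of-envelope sketch, not from the conversation; the precisions shown are common but illustrative), the memory a model’s weights need is roughly the parameter count times the bytes stored per parameter:

```python
def model_memory_gb(n_params: float, bytes_per_param: float) -> float:
    """Approximate memory footprint of a model's weights, in gigabytes."""
    return n_params * bytes_per_param / 1e9

# A 7-billion-parameter model at a few common precisions:
for label, nbytes in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    print(f"7B weights at {label}: ~{model_memory_gb(7e9, nbytes):.1f} GB")
```

At 4-bit quantization, a 7B model’s weights fit in roughly 3.5 GB, which is why such a model can plausibly run on a laptop or a recent phone, while a trillion-parameter model cannot.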

Charlie: So it’s made for consumers. That’s an interesting concept right there, because I typically associate ChatGPT, a much larger model, with industry. I mean, of course I do use it for personal use, certainly, but I also know that anything I’m using on ChatGPT is potentially being used for their training, and there’s a difference. What are the differences there? What’s so special about an SLM that’s different from a large language model?

Arthur: Well, I think the main motivation, the first one, is actually being able to own the model. Owning the model means you can actually run it on either your phone or your computer, and it doesn’t have to be a super huge computer with loads of GPUs. Right now, you can use a standard computer and actually run really powerful small language models. So I would say that’s, for me, the main motivation: to actually be able to run it. That means not giving away your data. It’s edge computing: the data stays where the computing is. On the other hand, if you’re using third-party platforms, as you said, you’re sending out your data. I mean, some of them are free platforms, but you actually pay with your data, obviously, because there’s no free lunch. And if you have small language models, at least you can actually own everything, and you can collect your own data and fine-tune the model afterwards.

There are so many possibilities in actually owning your AI that I think it’s really the main motivation behind it. The other one is obviously efficiency, because we don’t have this impression, but when you use third-party platforms with large language models, there’s so much budget, so much energy being consumed by running all those platforms. And we know that when you arrive on the platform, you have to have the answer super quickly. That is very, very resource intensive, whereas a small language model is actually very efficient: the model is compressed, smaller, and it uses way less electricity and energy. So it’s also better for the environment, I think.

Charlie: I have to say that you said something that really caught my attention. You mentioned that I can run an SLM on my phone. That to me is amazing. How much processing power does a phone actually have to support an SLM? Or, and I’m making this up, is there a term like micro SLM, or are we not there yet? Is it still an SLM? Can a phone literally power an SLM?

Arthur: Yeah, I think the latest phones, I know the iPhones with their latest chip, the A15, the A16 I think, I’m not sure, but basically they have integrated NPUs, so neural processing units. Because Apple has this Apple Intelligence, they have the hardware locally on the phone to be able to run those services. But you can actually use them also for small language models. There are some projects out in open source as well that you can download, where you can choose which SLM you’re using and actually chat with it. And the benefit is also that you don’t even need an internet connection. If you’re on a plane, for example, or in the middle of the desert, you go into airplane mode and you can still use the power of generative AI. The only problem would be the battery, because it would probably consume a bit of battery. But that’s the only drawback.

Charlie: You mentioned a number, I think 7 billion parameters. That was the number you put out there, and that might be some metric of an SLM versus a large language model perhaps.

Arthur: Yeah.

Charlie: How does that compare to the human brain? How many parameters does human brain have just to give us some scale?

Arthur: Yeah, that’s a good question, to have an order of comparison. I think on average, a human brain has 700 trillion parameters,

Charlie: 700 trillion,

Arthur: 700 trillion. So we are still so far away. Even the biggest large language models are maybe one or two trillion parameters, and that’s the biggest, I think. But then we’re not sure; I think that’s the order of magnitude, though.

Charlie: So even when a company is advertising that they have a trillion parameters, and that’s not a small number to be sure, that’s huge, it’s nothing but a mere fraction of what the human brain is capable of,

Arthur: Of course. And to continue the comparison, I think for me the biggest difference with the human brain is that the human brain is dynamic. The number of parameters always changes, because every day we can either gain or lose neurons. Depending on the task you’re doing every day, or the training you are receiving, you can actually reinforce certain parts of your brain, of your neurons. So it is very modular, very dynamic, and that enables the human brain to adapt to so many things, compared to the AI models we have right now, which are basically frozen. There’s a big training and fine-tuning phase, but then you stop that phase, the model is frozen, and you just use it at inference time. There’s a difference between training and inference: inference is when you actually use the model to make predictions, to

Charlie: Generate

Arthur: Perform actions. Yeah, generate. So that’s the biggest difference between the human brain and

Charlie: AI. That’s pretty impressive, actually. I never stopped to think about the comparison, but that’s pretty impressive. And that brings up another question. We hear conversations all the time about how impressive large language models are, but what would you even call the human brain? It’s more than a large language model. A super large language model? Perhaps I don’t have a word for it.

Arthur: No, no, no. I think it’s so unique. That’s another topic, more in neuroscience, because the basics of AI are inspired by the brain: a neural network is basically trying to copy how the neurons are connected in our brain. But yeah, I think the human brain is so unique. We don’t even know a fraction of how it works. There are so many things that we don’t know about our brain. So what we are doing in AI is just a fraction of a fraction of what human intelligence is.

Charlie: As much as we know, as much as we know it’s

Arthur: Nothing, it’s like a fraction of a fraction. So that just shows: we find AI so impressive, but just imagine that human intelligence is levels and levels beyond. It’s very different. I don’t know how to describe it, but yes.

Charlie: Well, it’s a very interesting topic. Let’s go back to SLMs, though, because you mentioned that since they can run on a smaller computing platform, including my phone or maybe our personal computers, there have to be some savings in cost perhaps, and maybe even in how it performs. If it’s a more narrow or smaller language model, does that mean it just by definition runs better, or is more efficient on a particular topic?

Arthur: Well, yeah, efficiency is a good word. I think that’s the most important thing. It means that it’s probably slightly less good than the biggest large language models right now, but it’s way more interpretable. Because there are far fewer neurons, you can actually train it better; you can align it more with what you actually need. Personally, when I use ChatGPT or any other platform, I use it for a very precise task, and I don’t think that task should use a trillion parameters. Basically, I think most of the tasks people use generative AI for can be done by small language models; they just have to be well trained, well fine-tuned, and well aligned with the user. The way I see it in the near future, each person would have one or several small language models specialized on their needs. Each of our lives is different, and we have different tasks, but we would have small language models as extensions of our intelligence, basically.

Charlie: So I could literally have multiple SLMs running concurrently, and maybe integrating somehow with an LLM as well. Is that a practical configuration?

Arthur: Yeah, for sure. It’s very interesting that you say that. The way I see it is to have a multi-agent system: multiple small language models, each specialized in a certain task or certain domain or language, and they would be communicating with each other in order to solve a bigger task, a bigger problem. I think that would be one of the powers of those small language models, the community power, if you can say that. It’s also how our brain works: when we do a certain task, we are not using a hundred percent of our brain. We are using certain parts of it: one part for movement, so moving your body, one for emotions, et cetera. And I would see the same with small language models. One of them would be good at translating text, one of them would generate summaries, one of them would do sentiment analysis, whatever, and all of them would cooperate together to solve something, a complex problem.
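That division of labour can be sketched in a few lines. Here plain Python functions stand in for the specialized SLMs (the agent names and the rules inside them are invented for illustration; real agents would call actual models), and a tiny orchestrator dispatches each sub-task to its specialist:

```python
# Stub "agents": each stands in for a small language model that is
# specialized in one task.
def summarizer(text: str) -> str:
    return text.split(".")[0] + "."  # keep only the first sentence

def sentiment(text: str) -> str:
    return "positive" if "great" in text.lower() else "neutral"

AGENTS = {"summarize": summarizer, "sentiment": sentiment}

def orchestrate(task: str, text: str) -> str:
    """Dispatch a sub-task to the agent specialized in it."""
    return AGENTS[task](text)

review = "This phone is great. Battery life could be better."
print(orchestrate("summarize", review))  # -> This phone is great.
print(orchestrate("sentiment", review))  # -> positive
```

The point of the sketch is the routing table: each agent stays small and cheap because it only has to be good at its own row in `AGENTS`.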

Charlie: Do you think that’s really a direction the technology will be going: instead of having one large language model that does everything, to have multiple smaller SLMs working together? Is that a more practical use, or maybe even a better or more efficient use? And this way you can almost pull in, perhaps even a la carte, the ones that you need.

Arthur: Yeah, for sure. I think that enables the overall system to be very versatile, because every person, every company has different business needs. And I think in the end, they’ll each have their own set of small language models working together, and that will give them way more adaptability, if I can say that. Basically very specialized, and very, very efficient as well, because they’ll only need a certain number of small language models, instead of one huge language model where maybe 50% of the model you don’t even use, because it’s trained on tasks that you don’t even know exist or don’t even use. So that’s where efficiency plays a big role.

Charlie: And because you’re employing multiple SLMs, maybe it’s more cost efficient as well. It’s less expensive to do that versus using one large language model.

Arthur: Obviously. Yeah, for sure. That makes it way more efficient, way less resource intensive, and easier to run on the hardware, on the computing side, and for the ethical part, the environment, because that’s very important. So a small language model would be very practical for this compared to large language models.

Charlie: Have we reached a point where we can view LLMs or SLMs as just black boxes? Have they become that much of a utility to us that we can just pull them in as we need them? Have we reached that point of efficiency yet?

Arthur: I don’t understand the,

Charlie: Well, maybe I’m saying it the wrong way. Maybe we just use them as we need them. We don’t necessarily need to understand what they do. We send the data, we get a response back, and then we’re done. We don’t need to know the intricacies of what’s going on inside the engines themselves.

Arthur: Yeah, for sure. I think as long as you’re able to understand and verify the outputs, that’s the minimum, I would say, because it’s dangerous when you start to just copy-paste what it outputs. There’s a big word for it: hallucination. It’s treated as part of the new field of generative AI, but it’s something that has existed since the beginning of AI, because AI is based on probabilities. Hallucination is just the language model trying to output the most probable answer, but it has no notion of truth. It doesn’t care about the truth; it just outputs the most probable answer. That is the cause of hallucination, and I think the human really has to make sure to verify the output and to understand it.
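That point, that a model outputs the most probable continuation with no notion of truth, can be shown with a toy next-token distribution (the prompt and the probabilities below are invented for illustration):

```python
# Invented next-token probabilities for the prompt "The capital of France is":
next_token_probs = {"Paris": 0.60, "Lyon": 0.25, "Mars": 0.15}

# Greedy decoding: the model simply picks the most probable continuation.
# Nothing in this step checks whether the answer is true.
prediction = max(next_token_probs, key=next_token_probs.get)
print(prediction)  # -> Paris
```

If the training data had skewed the distribution the other way, the same line of code would output "Mars" just as confidently; that is all a hallucination is.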

Charlie: But there are some tools already in place to help mitigate some of those hallucinations, things like RAG, for example. I don’t know a lot about RAG; do you want to talk about how that works? Is that exclusively made for LLMs, or can you use RAG with SLMs as well?

Arthur: Yeah, you can use RAG with SLMs. RAG stands for retrieval-augmented generation. What it means, basically, I like to use an analogy with cars: in a car you have an engine. Either the large language model or the small language model is the engine, and RAG is the parts around it that make the overall system able to drive. Basically, retrieval-augmented generation gives the generative model, the SLM, access to a search engine. RAG was really the big story of the last year across industries, because what it enabled is to have the power of generative AI in every business. Every business has internal data, internal documentation, and some of it is not well known, not in the training data of the bigger language models. So they use RAG in order to add the knowledge contained in their data to the power of generative AI. It enables businesses to have internal chatbots, basically, that their employees can use, and it generates answers based on their internal data without actually training the model, which is very expensive to do.

Charlie: That’s an interesting point, because if I’m using my own internal data, does that mean I’m less likely to have hallucinations then?

Arthur: Yeah, yes. Thank you for mentioning that; I forgot the first part of the question. Basically, what it does at inference time is: you ask a question, then it searches through the search engine and retrieves pages or paragraphs from your sources, from your data, and that makes hallucination way less probable, because it actually retrieves your data and gives it in the prompt of the model, and then the model generates an answer based on that data, your data. You can also get back the sources it retrieved, so you can actually understand why the RAG system answered this, and check the sources it used. So it adds an explainability factor, which is very helpful.
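A minimal sketch of that retrieve-then-prompt flow (word overlap stands in for a real embedding-based search engine, and the documents and query are invented for illustration):

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercase word set, stripped of punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query; a real RAG system
    would use embeddings or a search engine instead."""
    return sorted(docs, key=lambda d: len(tokens(query) & tokens(d)),
                  reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Put the retrieved passages into the prompt, so the model answers
    from your data and the sources can be shown to the user."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"

docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Shipping takes five business days.",
]
print(build_prompt("What is the refund policy?", docs))
```

Because the answer is generated over the retrieved passage rather than the model’s general training data, a wrong answer is easy to trace back to the source that produced it.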

Charlie: And transparency.

Arthur: Transparency, yeah, of course. Yeah,

Charlie: That’s always a big topic I hear about in AI all the time: the transparency of it.

Arthur: Yeah, very important. Yeah,

Charlie: So I’ve used GPT-4o, the Omni model, and now of course there’s GPT-4o mini. Would you consider 4o mini an SLM? Does saying it’s mini by definition mean it’s an SLM?

Arthur: I mean, I would assume so, but they haven’t released any research paper on it, so we don’t actually know its size or its data, et cetera. But I would assume, yeah, it’s like a compressed version, a more efficient version of their big GPT-4o. So yeah, I would assume it’s a small language model,

Charlie: But what would compel any company or any platform to come out with an SLM? Do you think they see a bigger trend, people gravitating more towards SLMs? Is that why they’re offering these newer platforms?

Arthur: Yeah, I think so, because I’m following the open source community and open science in AI very closely, and a very big part of the community always tries to check out the latest models that come out, and what the benchmarks or evaluations on them are. They actually want to test them on their computer, test them in their application, because so many of them have ideas they want to try, and they get feedback. It’s like a whole community: when a new toy comes out, people want to grab it and test it, and then say, oh yeah, I found that this is not good, this is good. And small language models are very, very accessible, so I would think they’d reach a broader audience. That’s a very good side of it, I would say.

Charlie: Do you think we reach a point where, I always say it’s a game of leapfrog, and you alluded to it, somebody comes out with something called A, and then another platform comes out with B that’s better and more efficient, maybe with more parameters or whatever the case is. At what point do they keep leapfrogging over each other until they’re no longer SLMs, and they’re back to where they started? Or do you think they’re getting better and still staying under the definition of SLMs?

Arthur: Yeah, it’s a good question. Well, I think it’s going to take a similar path to computers, because, I mean, I wasn’t around then, so I can’t really say, but at the beginning, computers were huge. They took up a whole room. They were reserved for research and military use. And now our computers are in our phones, and we talk about microtechnology and nanotechnology. We’ve seen quality go up, but also the size go down, with more and more efficiency. I would say that language models, and even models in general, will follow the same trend. In the past years there’s been huge scaling; the scaling law, I’d say, is what’s behind the success of ChatGPT. They were the first to actually show that by scaling, making the model huge with a huge amount of data, you’d get very impressive results. Then the trend was: okay, everybody, let’s build it bigger and bigger. More data, more data. And I think now we’re reaching a plateau, where, I mean, they’ve trained on all of the internet and

Charlie: All of human knowledge maybe.

Arthur: Yeah, I mean there’s no more data you can actually feed to the AI. So scaling is finished now, and I think the next part is actually making the models smaller and smaller, more efficient: making clever changes to the architecture and the training methods so that you conserve the same quality, or even surpass it, while reducing the size of the model.

Charlie: Yeah. You said earlier you mentioned chatbots, and that to me is something everybody can relate to. Everybody at some point in their life has worked with a chatbot. Is that considered a good example of an SLM? Or maybe in the future a chatbot could integrate with an SLM, where it’s using my company data and knows how to properly answer a customer online. Is that a good use case for an SLM?

Arthur: Yeah, it’s a very good use case, I would say. You could do it even with large language models, but small language models, compared to large language models, are very easy to train and fine-tune, and to deploy, et cetera. So I think small language models are very, very interesting for businesses. A lot of businesses need chatbots, and they need their company data to be integrated in those models. For a large language model, that’s very hard, for two reasons. The first is that you have to actually have the compute power to host it and to train it, and we’re talking about millions, or even billions for the biggest language models, so you can’t even think about it. The second is that you need experts in research, experts in training and fine-tuning the model, which has to be done very carefully, because there are so many parameters. Sometimes it’s too many. You have too

Charlie: Many, perhaps

Arthur: Too many. So it’s very, very hard to control the learning and to actually check it: has it learned or not? Because it has consumed so much. Imagine you have all that, and then you integrate a small portion that is your data. How do you check that it actually learned it? It’s very hard, because your data gets lost in the huge number of neurons. If you compare that to small language models, it’s quite easy to fine-tune one on consumer-grade hardware. It’s easier to train because it has fewer parameters, so it takes less time, and it’s easier to check whether it actually learned or not. So there are lots of advantages, I would say, for businesses.

Charlie: So this is clearly a technology that’s here to stay. It sounds like it’s not going away, and it’s only going to keep getting better as time moves on.

Arthur: Yeah, that would be my vision. The trend right now in open source AI is that the models are getting better and better, and smaller and smaller. And I think this trend is here to stay.

Charlie: So we just started having this conversation a few weeks ago when we had that dinner together, but this has clearly captured your imagination. I think that’s my takeaway. I’m just curious what got you interested in this technology?

Arthur: Yeah, for sure, I’m really passionate about this. Basically, I always wanted AI on my computer. At the beginning, maybe a bit less than two years ago, at the very beginning of this wave of natural language processing and generative AI, it was very hard to have a good model that would actually run on your computer. I’ve just always had this, I don’t know what to call it, but I always wanted to download the model and actually use it on my own platform and see: what can I do with my computer? I’ve always had this DIY approach to things. I always like to do my stuff on my own, grab this and grab that and craft something together. I really love doing that, and I think that’s why it wasn’t even a question; I always wanted to do that. Obviously I also use third-party platforms sometimes, for tedious tasks and such, but my favorite models would be the open source ones and the small language models, I’d say.

Charlie: And surely, if companies are going to embrace and employ more AI in their own enterprises, this is an area they will naturally go down, because it just seems to be a much better use of a computing platform, and of course potentially more secure. That’s always a huge concern; in almost any area of IT, security is always the conversation.

Arthur: Oh yeah. I think when businesses realize that you can have almost the same quality of answer with way less compute budget, and way more security in the system, because it runs not on the cloud or on someone else’s server but on your own computing platform, with your own data, then I think people are going to be impressed. Because I was really impressed lately using some of the latest small language models, compared to when I first started. I remember one of the first experiences I had with a RAG, retrieval-augmented generation, system. It was a small language model, 3 billion parameters, but it was like a year ago, so the quality difference is huge. And I remember it answering with just one word. I gave it a financial transcript and I asked it, okay, what do you think, what is the view about gold, for example, on the market? And it replied “toppish.” And I remember thinking, wait, is this model garbage or what? But actually, toppish means it goes like this and

Charlie: Oh, it peaked. It probably peaked, yeah.

Arthur: But it just shows, now you can have similar 3-billion-parameter models, and they really perform so many tasks. They can translate, they can summarize, they can work through complex mathematical problems. It’s really, really impressive. The amount of progress in such a short amount of time, I think, is the most impressive thing.

Charlie: And I think the fact that they can work in concert, that’s a big one. They can work with each other. And I think that might be the big selling point here. That’s what I’m taking away, because you’re not limited, you’re not limited to just one. You can have many working together and still be more cost effective than using one large model.

Arthur: Exactly, yeah. That’s the main motivation. The biggest advantage for me is when you make them cooperate; it’s called a multi-agent system. Each small language model can be considered an agent with a specific task and a specific set of tools, which would be functions, and they each cooperate in order to solve a bigger problem. There would be one orchestrator that splits the input query into several queries and then dispatches them to the agents, and one agent is going to do its part and then say, okay, I need to call this one. I think it brings so many possibilities. I think agents are going to be the next trend in this field, and small language models will enable those agents to build, like I said, multi-agent systems, with quite complex and very, very efficient solutions to problems.

Charlie: It’s curious to me, the paradigm you’re suggesting is not uncommon to me, because we talk about modernization all the time, and we talk about modularization: having little modules to do different tasks, and you incorporate many of these as you need them. It sounds like a very similar model.

Arthur: Exactly,

Charlie: Yeah. It’s another prime example of the need to isolate business logic into different modules. And that’s what you’re describing it sounds like to me.

Arthur: Exactly. Yeah, totally. Yeah.

Charlie: Yeah. So the paradigm still works. It’s still a very sound model. Yeah,

Arthur: I would say so, yeah.

Charlie: In fact, just to go back to modernization, there’s no reason why, it seems, I cannot have my own set of proprietary program modules and then also incorporate an SLM as just another module. It just becomes another thing to plug into my application.

Arthur: Yeah, because agents would have access to tools, and tools can actually be functions, the actual functions that you write in your code. You just have to give it the name of the function with the parameters, and then the small language model is aware of this function. If you train it well, whenever it encounters a decision where a call to the function could be useful, it’s going to actually call this function, so it can also operate with your code. So you could have, I don’t know, a scientist small language model that basically develops the ideas and tries to get the main ideas and concepts around some research. Then it’s going to call, I don’t know, a search function to find the latest papers on the internet about this topic and retrieve them, then it’s probably going to call a parsing function that extracts all the text, and then it’s going to call another model. There are so many possibilities with this, with code and other agents, and there are probably way more than I can think of. But yeah, I think it’s a fascinating field.
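The function-calling loop described above can be shown with a stub in place of the model (the tool name, the routing rule, and the returned text are all invented for illustration; a real SLM would emit the call decision as structured output):

```python
# A registry of tools the model is told about: name, parameters, behaviour.
def search_papers(topic: str) -> str:
    """Stand-in for a real web-search tool."""
    return f"Found 3 recent papers on {topic}"

TOOLS = {"search_papers": search_papers}

def fake_slm(query: str) -> dict:
    """Stand-in for the SLM deciding whether a tool call would be useful."""
    if "papers" in query.lower():
        return {"tool": "search_papers",
                "args": {"topic": "small language models"}}
    return {"tool": None, "answer": "No tool needed."}

def run(query: str) -> str:
    """Dispatch the model's decision: call the named tool, or answer directly."""
    decision = fake_slm(query)
    if decision["tool"]:
        return TOOLS[decision["tool"]](**decision["args"])
    return decision["answer"]

print(run("Find the latest papers on SLMs"))
# -> Found 3 recent papers on small language models
```

The model never executes code itself; it only names a function and its arguments, and your own code decides whether and how to run it, which is what lets an SLM "operate with your code" safely.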

Charlie: It certainly is. I think we will start wrapping this up, but I’m just curious about your final thoughts on how quickly this has moved forward. We talk about technology and how it gets better each year, but I think we are on a nearly straight-up trajectory right now. It’s amazing. And we’re talking now at the beginning of 2025. I can’t imagine, if we were to have this conversation again one year from now, where we’re going to be. It’s hard for me to get my head around that.

Arthur: For sure. Yeah, even for me, who tries to follow the field of AI as closely as possible, it’s not possible. There are so many different papers that come out, different methods, different models. It’s just so hard to keep up, because there’s so much hype around it, but also so many possibilities, so many promises, that everybody, even people who weren’t into computer science or AI, is starting to get into it, to actually try those models and see how they work. And this whole community, the open source community, and the whole research around it makes it go even faster and faster. In one year, I can’t even imagine what’s going to be around. But yeah, I’m very excited to live in this period.

Charlie: Absolutely.

Arthur: It’ll be great

Charlie: To ask anybody to predict five or 10 years out. That’s just pure, at this point, pure science fiction. It sounds like

Arthur: Even one year, I think, is pure science fiction.

Charlie: No, but still, that’s okay. I mean, there was a time not that long ago when we were able to predict three years out. Now it seems like we can’t do that anymore, because it’s mind boggling. Yes.

Arthur: Crazy. Crazy. Yeah.

Charlie: It’s all good. Arthur, what can I say? This has been a fascinating conversation for me. This is a topic that really has captured my imagination, it really has, and I’m with you. I think agents and SLMs are really going to be one of the next big things in the field of AI altogether.

Arthur: Yeah, it was great to talk about all of that with you. There’s so much to talk about; we’re going to maybe stop here, but it’s very fascinating. Thank you very much for having me. It was a really, really good experience, and I’m very, very happy to share that.

Charlie: Thank you. And on behalf of everybody listening to the podcast, thank you for sharing your knowledge. It’s so fascinating. Again, it wasn’t that long ago that I had not even heard of SLMs, and here we are now having an engaging discussion about them. I think it won’t be long before SLMs are part of the normal lexicon, on everybody’s lips, because this is clearly an area that’s growing very quickly.

Arthur: For sure. Yeah. Great.

Charlie: Arthur, it’s been a real delight. Thank you so much for your help, for your time, and for your enthusiasm on this topic. It shines through, it really does. So thank you very much.

Arthur: Great pleasure. Thank you to you too. Thank you for having me. I really appreciate it. Thank you.

Charlie: Great. Alright, we’ll wrap it up and leave it there. Thank you everybody for joining us today. Enjoy the podcast, and we look forward to chatting with you down the road. Take care everybody. Bye now.
