
AI Security, Risk and Compliance, With Trustero CISO George Totev

George Totev, chief information security officer at Trustero, joins Charlie Guarino on TechTalk SMB to share his observations on the state of AI and the new security considerations it introduces.

Click here to listen to the audio-only version

The following transcript has been edited for clarity:

Charlie Guarino:

Hi everybody. Welcome to another edition of TechTalk SMB. In today's meeting, I am so happy to introduce a new associate of mine, somebody who I recently met at an AI convention, Mr. George Totev. George was sitting on a panel as a security expert, and I was just enthralled with the discussion he was having about risk, compliance and all things AI. So George, thank you very much for joining me, and why don't you tell us a little bit about yourself.

George Totev:

Yeah, thank you for having me, Charlie. Yes, I have about 25 years of experience in security, risk and compliance. I've been working for large organizations like Goldman Sachs, the World Bank and Visa. I built the risk and compliance function for Atlassian, I spent some time at Snowflake, and last year I decided to join Trustero, where I'm currently the chief information security officer. It's a relatively small startup based in Palo Alto, California. As the name Trustero AI suggests, we are in the AI space. We use AI to help with governance, risk and compliance. As you know, that's a field with a lot of mundane, boring, repeatable tasks that have to be done, so we are trying to use AI to help with that: things like continuous control monitoring, gap assessments, self-assessments, maturity assessments.

Guarino:

What's been your experience so far, George, with companies? You talk about large companies, which tend to be more conservative because of their scale, their size, their data sets, things like that, and meanwhile you're introducing the notion of AI. Are you getting pushback, or a deep embrace, or is it somewhere in the middle, or all across the spectrum?

Totev:

It is somewhere in between, and I can understand it, because I think we should move forward with AI, and there is a strong case, especially in the security space, for why we should use it. For example, I'm looking at our developers. They're using copilots, all the tools for developing code and pushing out code, and frankly, my team cannot keep up with them. Think about the flow: they develop the code, then you have to do a security review, then you push it into production, and then you monitor it. If they're scaling their productivity 10 or 20 times, I cannot keep up with my current team. So I also have to scale, and one of the ways to scale is for me to use AI. AI is something we need to embrace, but we have to be careful how we do it and balance the risks and the benefits. That's what I'm seeing with the larger companies.

All of them want to do it; they see the benefits of it. On one side, we talk a lot about the increased productivity and efficiency that comes with AI, but on the other hand, and we see it in our space, it also allows us to increase the quality of the work we are doing, because it's not just covering more but covering it more effectively. So yes, there is a balance between the risks and the benefits. We were talking about that before. I don't think we are ready for AI to take over critical processes or critical infrastructure or the crown jewels of a company, but certainly it can help with some of the more mundane things, assisting in a lot of things. That will allow us to understand it, gain more trust, fix whatever needs to be fixed, so that at some point we can give the reins to the AI.

Guarino:

It's an interesting point, where we are right now. As much as AI is truly doing, there's so much misunderstanding and misinformation about it, and that is scaring so many companies, of course. I'd be curious to know: what is your primary role? While security is a big part of it, I would think that education plays a big part of it as well.

Totev:

Yes, thank you for bringing that up. Actually, I have a series of podcasts where I bring in security leaders to talk exactly about that, how we use AI in security, because it is kind of interesting. It's either "oh, this is going to take over everything, the sky is falling, the Terminator is coming, Skynet is activated," where in reality, it's like any other tool that we use. We can use it for our benefit, but of course there is also potential for abuse, and we have to be careful about that. The one thing we cannot do is just sit on the sidelines; we have to try to understand it and figure out where we can use it.

Guarino:

That's an interesting point, because there are people, laymen I suppose, who have never had any exposure at all to AI, and yet today they're implementing it and using it as a tool. That brings us to the topic of shadow AI, which is a big one, where non-technical people are beginning to use AI. So why don't you talk about shadow AI overall, and maybe some concerns you might have about it?

Totev:

Yeah, shadow AI is probably one of the big emerging concerns we have in the security space. If you remember when the cloud first came along, we started talking about shadow IT, because you can get a SaaS product, put it on a credit card, start using it, and nobody knows that you're doing that. You can share sensitive information with it, and I don't know how safe that SaaS tool is; maybe it'll plant some malware back into the environment, all of those things. I'm not saying that we solved that problem, but we gained a pretty good understanding, and we have tools nowadays to monitor what our SaaS usage is and whether we use the proper SaaS platforms. Zero trust is one of the things we employed, and I'll get back to that.

With shadow AI, it's the same thing but on steroids, because everybody's becoming a developer. Before, everybody was a user; now everybody is also a developer, because people can create those agents, give them tasks to do, and the agents can talk among themselves and utilize the identity of whoever created them. Now, how reliable are they? Once we are talking about AI, it's a non-deterministic system, so how reliable are they, and can we really predict what they're going to do? Are they properly protected? For example, one of the trends I'm hearing about nowadays: you know what phishing is, sending an email trying to trick a person into doing something for you. Now you can phish AI, because there are agents that are enabled to read your email and act according to your instructions, for example booking meetings, answering emails, those kinds of things.

Those agents can be tricked. With a properly crafted email, they can be tricked into doing things on your behalf, with your identity, essentially having the same access you have. So shadow AI is becoming a much bigger problem than the shadow IT we used to have, and that's where zero trust concepts come into play. The same way we didn't trust machines connecting to our environment, where we said to treat every machine as if it's sitting in a Starbucks café, something similar has to happen with agents. How do we verify their identity, and how do we make sure they behave the way they're expected to behave?

Guarino:

That was one of the biggest takeaways for me from the AI conference we both just attended: agentic AI. There's always a buzzword or two that I leave with, and agentic AI was clearly the main focus of every breakout session I attended. In fact, even at the vendor expo, that's what everybody was talking about, how they're producing agents.

And of course the latest acronym now is A2A, agent to agent, which is even more interesting to me: agents working together, multi-agent systems. But from a pure security perspective, as you said, you have this concern about zero trust, so let's talk about that. A big concern you have is identity and authentication, and how do we do that? With so much potential for spoofing and things like that, if I were an IT director or VP for a large company, you'd have to make a very compelling case to me to implement this in my shop, given all these big concerns. How do you do that, and what are some of the barriers that you see?

Totev:

Yeah, so let me scare you first. I think we are heading for an identity crisis.

Guarino:

Identity crisis.

Totev:

Yes. Even Sam Altman was talking about that yesterday at the Fed conference. A lot of the tools that we have, in security and even outside of security, rely on the fact that we know who you are, right? I see you, Charlie, I know your voice, so I'm assuming that's you. On the other hand, with deepfakes, I don't know if it's you, or an avatar you created that's interviewing me right now, or somebody else who contacted me who looks like Charlie, sounds like Charlie, writes emails like Charlie, and I am part of some big phishing operation going on right now. How do I know? On the other hand, you're creating agents that can behave like you and act on your behalf. So how do I know who you are, and whether it's you, or somebody you delegated to, or somebody who is trying to spoof your identity?

So that is becoming a problem, and we have to find a way to better authenticate people. It's kind of weird, because in our normal human interactions, when I see somebody's face and hear their voice, that's my innate authentication of the person, and we are used to that. I don't want to live in a society where I'm questioning every person I'm talking to. Of course, I can ask you for some kind of password to verify it's you, or ask you to show me your thumbprint, those kinds of things. But I don't want to live in a society like that, where we authenticate every single human interaction. So we have to find a way, on one hand, to combat the deepfakes that can spoof your identity, and on the other hand, not go to the extreme of authenticating every single thing.

Guarino:

Well, we talked about this earlier. In the 1930s the government came out with Social Security numbers, so that everybody had a unique number. I know even back then there was pushback: I don't want to be just known as a number. But that's what happened. And that number's significance for society and identification has grown well beyond the original intent; it was just to track your account, and that was the end of it. So what are some better ways of identifying people? We talk about zero trust, where you identify somebody at every step of the way. Does that solve the problem?

Totev:

That can probably work for agents identifying agents. I'm not sure it's a good solution for people, because again, you don't want to identify and authenticate every person every time you communicate with them. But zero trust is a really powerful concept, and I've implemented it in several places. Granted, I haven't heard of many companies that actually took it all the way, as Google intended when they published their paper on zero trust. Let's be honest, it's really hard to get there, and it presents a lot of friction if you take it all the way. But just the journey toward zero trust is super powerful, because you are securing your environment and removing a lot of the issues you can see there. I think something similar needs to happen around AI: we should start talking about how we authenticate agents, how we give identity to agents.

Because there is another thing: what is our frame of reference when we talk about AI? Be mindful that zero trust was really created for tools, and if we just think about AI as a tool, we are not using its full potential. It is much better for us to think of these agents as junior members of the team. When you have a tool and you try to use it for something and it doesn't work, you throw it away. Whereas when you have a junior member of your team and you give them a task and they don't perform to your expectations, you work with them. You try to educate them, you show them their mistakes, you help them improve, and that is how you work with AI. You try it, it doesn't give you exactly the result you're asking for, and you start improving it. You start having this conversation, this feedback, with an agent. In that perspective, AI agents are much closer to humans than they are to tools. In fact, you can make an argument that some of the tools we use in HR could, from a security perspective, be applied to agents. So it should be some combination of what we do for human identity and what we do for system identity.

Guarino:

We even just talked about how states are now all implementing Real ID, and that seems to be required now. In fact, without a Real ID…

Totev:

You cannot fly.

Guarino:

Yeah, air travel, and crossing borders in some cases. That's becoming an interesting thing, and those who don't have it are going to be stuck, although of course in the case of air travel, a passport will suffice as well. But the point is that Real ID is now being used for more than just air travel. I see companies in the private sector starting to use that identification as well.

Totev:

I would love for us to get rid of the Social Security number as an identification factor. To be honest, I wouldn't rely on it. Actually, I would say that as an identification factor it's very weak, because Social Security numbers leak all the time, everywhere. So it's time for us to get out of the 1930s. We've had these conversations about passwords; passwords are a really, really bad authentication factor. I would like for us to get into the 21st century and use something stronger, something that is unique, well supported and standardized, that we can all use.

Guarino:

But don't you think that might be a challenge? Anytime you're trying to introduce something on that scale, and you're talking about an enormous scale, larger than any large company, you're talking about the entire population. Isn't there a cultural element here that gets a lot of pushback?

Totev:

Oh, certainly, certainly. That's why, like with everything else, we should talk about what the drawbacks are and what the benefits are, and I hope the benefits outweigh the drawbacks. Now, in terms of how it could be implemented, it could be on the commercial side. There are a lot of well-established identity providers nowadays that we can utilize; think about Okta, for example. We could probably leverage something like that. Or it could be done on the government side, and that's going to take forever, but it is possible. We need to get rid of Social Security numbers as authentication factors. We even need to get rid of passwords. Passwords are really bad.

Guarino:

I wonder, George, if in our lifetime that will actually become a reality. What do you think?

Totev:

I hope, I hope. It's kind of interesting: don't let a crisis go to waste. I think this implementation of AI and the coming of agentic AI are going to force us to rethink, generally, how we think about identity and authentication. So it may be a good thing.

Guarino:

But just to go back to our original discussion, we talked about how AI today is not being used in mission-critical applications. Don't you consider changing Social Security numbers to something else to be mission critical?

Totev:

Probably. It really depends on how it is used. If I'm using my Social Security number to authenticate to my bank and see my bank account, I would argue that's not mission critical, so that's a place for us to start experimenting with new identities and new authentication. Now, there are other areas. Let's say in a hospital, when they try to authenticate whether it's really me they're going to operate on, yeah, I'd consider that mission critical. For that one, we have to be a little more careful. But on the other hand, if we're making the argument that the Social Security number is not a good authentication factor, that's another impetus: maybe we should not use those things for mission-critical purposes anyway.

Guarino:

But aren't we already on our way? For example, I go into some applications, some websites, and I have authentication apps on my phone. Aren't we already on our way, or are we already there?

Totev:

Yes, we are. I'm using Google Authenticator, I'm using the Microsoft one, all of those things. Those are all the right directions to be heading. Now, the SMS thing, where you log in and they send you an SMS code, I don't know why banks are still using that, since it's been compromised so many times. But Google Authenticator, Microsoft Authenticator, any kind of authenticator app, those are actually good things. The faster they replace the traditional user ID and password, the better.

Guarino:

So the pro tip is that anybody who has the option to do that should switch to it?

Totev:

Absolutely. A hundred percent. A hundred percent.

Guarino:

That's interesting. You used the word "friction" before, and I know there's always a balance you have to weigh: how many obstacles do you want to put in front of a user, or an agent now, before it can do its job? Do you want to keep stopping it under the umbrella of zero trust? How do you do that? There's always that balance to take into consideration. I want to make sure you are who you say you are, but I also want to make sure that you're able to do your job.

Totev:

Exactly, exactly. It's an interesting balance, because yes, I can interrogate you endlessly to make sure that you are who you are. But if I spend 15 minutes interrogating you so that you can take one minute to actually do your job, that doesn't make much sense, right? On the other hand, it depends on what you're doing. If your job is to launch a nuclear missile, then yeah, I'd better be sure that you are who you say you are. Actually, speaking of which, the EU AI Act: there are a lot of things that need to be fixed there, but at its core, they came up with a really good framework for how to think about the risks from AI, in terms of the potential impact.

Guarino:

This is in the EU?

Totev:

The EU AI Act, yes, in the EU AI Act. I strongly recommend looking at the risk framework they put together, because they're asking things like: does it make life-and-death decisions, or does it just drive your Roomba to do the cleaning at home? That's a big difference, and it's a big difference in how we should think about it, how we should secure it, all those kinds of things.

Guarino:

The EU AI Act took many, many years to design and implement, and now it's finally being rolled out. But I'd be curious whether you think it's going to be used as a template by other governments.

Totev:

I wouldn't be surprised, because they did something similar with privacy with GDPR, and GDPR has its own issues and all that. It's kind of interesting: in the EU, they have some good ideas about how to approach these things, but once they start implementing those ideas, that's where things start going sideways. For example, with GDPR, great idea for how to protect privacy, but at the same time, every time I go to a website I have to either agree or disagree to them using cookies on me. Maybe that's a little bit too much. I think something similar is going to happen with AI. They're putting a stake in the ground, and a lot of AI regulation is going to come based on the EU AI Act.

Guarino:

I think one of the biggest challenges they have is that there's not really a one-size-fits-all model.

Totev:

That's the problem with every regulation. And I know we talk about regulation being slow, but that's by design, because when they put out a law, they have to make sure it works in a hundred percent of cases, which is really, really difficult. That is a bit of the challenge we have with AI, because it's a very dynamic field at the moment. We still don't understand where it's going or what is happening, let alone can some regulator issue a law governing AI. Yes, they have to start somewhere, but it has to be open and flexible, and they have to update it quite quickly. So this is probably one of the cases where somebody can argue for more self-regulation: agree on some high-level principles that we need to follow, and we self-regulate until things are a little more static and we figure out what is going on. And then we have a law.

Guarino:

You can't deny there are challenges with self-policing, with self-regulation.

Totev:

Oh, yeah, absolutely, absolutely. I understand that some of my statements are controversial. The thing is that there is no black and white in this; there are always nuances.

Guarino:

And all these laws can be misinterpreted and challenged and reinterpreted, things like that.

Totev:

Exactly, exactly. The law of unintended consequences.

Guarino:

Right. Exactly right. I think we are both in agreement that AI is now part of critical business processes. And as we said earlier, I love the expression: the genie's out of the bottle. We're not going back. It's here to stay.

Totev:

Yes, we have to figure it out, like anything else. When the first automobiles came out and people had to switch from horse-drawn buggies to automobiles, they had to figure it out. We have to figure it out with AI as well. It can be a great helper, a great assistant, if you use it well. But on the other hand, of course, there are dark sides of AI as well.

Guarino:

It's interesting to me as well. But you don't think it's going to be a big bust, that it was overhyped? I don't think we're there. This somehow feels different to me from other new technologies. We talk about Industrial Revolution 4.0; that's another new milestone in humanity, which is where we are right now. This one feels big. But then again, I can make the argument that with each of those incarnations of the revolution there were naysayers: no, no, no, this is too much, we shouldn't be doing this. It's an interesting discussion point, I think.

Totev:

Right? No, no, I agree with you. Now, there is a lot of fraud. I'll be the first to say it; I'm working for an AI company, and I can tell you there is a lot of fraud in AI at the moment, up to the point that I know of startups putting "AI" in their name just to get a better valuation. So there is that going on. In fact, I was talking to a VC and he pointed out something interesting to me: it takes about five years for things to shape up. If you remember the great dot-com bust in the 2000s, it took about five years after the internet came up for us to figure it out and for things to settle. Remember blockchain?

Guarino:

Of course.

Totev:

2005, 2006. It took about five years for things to settle before we kind of used it. Something similar is going to happen with AI. It started, when was that, 2022, 2023? That's when ChatGPT was first introduced.

So we have another two or three years, probably, for things to settle. The VC's point was that once we stop talking about AI as the leading thing and think of it as something normal that we use in our everyday interactions with systems, then we know that things have settled and the fraud is pretty much gone. There is fraud, I can give you that. Now, on the other hand, we've been talking about artificial intelligence for a very, very long time. In fact, the mathematical models, the structures, were developed in the 1950s and 1960s, right?

Guarino:

Turing.

Totev:

Exactly, exactly. And at the time they had different names; I think they called it cybernetics or something like that.

I remember I actually had capstone projects on AI in my college days, dating myself. The thing was that we didn't have the hardware to actually do it. Now we have the hardware, and it's a self-feeding thing: we have the hardware, and at the same time that allows us to develop even better theory and better mathematics behind it, which improves the hardware. So for the first time, we can actually do artificial intelligence, and it's a big deal. I would agree that it probably is on the level of the industrial revolution. And if you look at the industrial revolution, it brought us pollution, so we have to be careful about AI. I think there are many, many more benefits than drawbacks, but it's really up to us to be careful about the drawbacks.

Guarino:

Before I ask my very last question: do you think we have AGI right now? Are we there?

Totev:

We are pretty close. I think we're pretty close. Give it another year maybe, and we are going to see AGI.

Guarino:

That’s a pretty bold statement to make, one year out.

Totev:

Maybe I'm in the bubble, the Silicon Valley bubble, but there is a lot of research, a lot of work happening here. Now, to turn it back to you: what is AGI? There are many definitions; it's a little bit fluid at the moment. But from a practical perspective, I'll give it another year, maybe two, and we'll be there.

Guarino:

So just to wrap up: I have to tell you, this conversation is so fascinating to me. This whole topic is really very, very cool, I think, and important for everybody to know. What's your final takeaway for anybody in this space, at any corporate level? Is it trust but verify, perhaps, or be cautiously optimistic? What's the final takeaway that you typically profess to people, that you think they need to know?

Totev:

The core principles of risk management remain. Those are solid principles: what is the risk, what is the benefit, what is the likelihood, what is the impact? Those core principles haven't changed; the math hasn't changed. So when you look at AI and how to implement and use it, just think from a risk perspective. Does it solve a problem? Or do you want to do AI just because everybody else is doing AI? If you have a business problem that fits into the realm of AI, absolutely, a hundred percent, go for it. Evaluate the risks, evaluate the benefits, and go with it.

On the personal side, I would say start using it. Figure out, at a personal level, how you can use it, because you have to become comfortable having a really smart, non-human being around you. When somebody replies to you in a coherent way, knows a lot and behaves well, we're used to thinking of them as human. But that is not a human; it just mimics a human. Don't forget, it's just statistical models. That's all AI is, statistical models. So don't expect them to think like us, and I don't want to go into the topics of soul and all of those things. But yeah, it's a great assistant that we should understand. Once we understand it, we'll understand what the risks are, and we can address those risks.
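Totev's point that "the math hasn't changed" refers to the classic likelihood-times-impact calculation at the heart of risk management. The 1-to-5 scales and rating thresholds below are invented for illustration; real programs calibrate them to their own risk appetite:

```python
def risk_score(likelihood: int, impact: int) -> tuple[int, str]:
    """Classic risk arithmetic: score = likelihood x impact, bucketed into a rating.

    Both inputs use an illustrative 1 (low) to 5 (high) scale.
    """
    score = likelihood * impact
    if score >= 15:
        rating = "high"
    elif score >= 6:
        rating = "medium"
    else:
        rating = "low"
    return score, rating

# Example: a fairly likely (4) but low-impact (2) chatbot outage
print(risk_score(4, 2))  # → (8, 'medium')
```

The same arithmetic applies whether the asset is a SaaS tool, an AI agent, or a legacy system; only the likelihood and impact estimates change.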

Guarino:

Excellent. That’s a great way to wrap up our conversation. Thank you so much, George, for joining me here today. I really enjoyed our conversation. I thank you for your time and for sharing your knowledge on this very vital topic.

Totev:

Thank you, Charlie. I really enjoyed it. And yeah, I look forward to seeing you again.

Guarino:

Excellent. Everybody, join us for future podcasts and we’ll see you soon. Take care now. Bye now.

