The Business of AI: Cybersecurity and Trust in the Age of AI

Cybersecurity expert Paul Robinson, founder and managing director of Tempus Network LLC, discusses how AI is impacting his field.

AI is changing the way organizations conduct business, but it’s also changing the way cybercriminals attempt to disrupt that business. With this dynamic in play, cybersecurity teams have been forced to adjust their approach as their foes flex new capabilities on these digital battlefields. 

For this installment of “The Business of AI,” I got together with my colleague and good friend, Paul Robinson, who has been in the cyber industry for more than 15 years, to discuss his cybersecurity observations and guidance for organizations on their own AI journey. Robinson is the founder and managing director of Tempus Network LLC; his bio is in the addendum below.

The following Q&A has been edited for clarity.

About Paul

Brian Silverman: Paul, thank you for the opportunity to talk about the changing landscape of cyber as organizations become AI powered. Could you introduce yourself, your cyber experience, and explain how you help guide organizations in their cyber strategy and chosen cyber solutions?

Paul Robinson: I started in the cybersecurity industry in October of 2009, in the software and hardware manufacturing segment of the industry. The second part of my career was spent in governance, risk and compliance, and in April of 2023 I started my own independent cybersecurity consulting practice.

The reason I started my own practice is that in the cyber industry, information can be limited as far as people’s understanding of the options they have to build up their cybersecurity programs. There might be some solutions or tools they need that they don’t know about, and there might be some problems that can be better solved internally. It’s a real gap in our industry, which led me to start my own practice helping organizations figure out the best approach to achieve the most success with their cyber strategy within limited budgets and time.

AI in the Hands of Criminals

Silverman: I would like the focus of our discussion to be on the unique requirements for AI-powered organizations and how they should approach cybersecurity. However, we would be remiss if we didn’t start the discussion with the associated risk of AI from a cyberattack perspective.

What are you hearing from both the solution providers and partners, and your customers?

Robinson: It’s interesting when you talk about AI and cybersecurity. People are talking about it being this new terrain, this new frontier that’s out there.

I really don’t like to focus on this being brand spanking new, as we did with cloud—cloud was not new when we started talking about it eight or 10 years ago. It’s the same thing with AI: it’s fundamentally been around. It’s just being enhanced and made more mainstream these days.

The power of AI in the hands of a cybercriminal is the ability to move as fast as you possibly can, and that’s not new to the cybersecurity industry. We have polymorphic malware. We have automated scripts being written. Those types of threats and attacks have been prevalent for ages.

The power of AI in criminals’ hands is the ability to take petabytes of information and condense and consolidate it into consumable formats to launch attacks: using AI for open-source intelligence (OSINT) models, using information that is publicly available, organizing it into a consumable data format and launching attacks. That’s really where I see AI kicking in.

Clients’ AI Concerns

Silverman: Is the worry that an AI-driven attack is coming keeping your clients up at night? Cyber is such a massive undertaking; is it just one element of what they’re talking about? Are they integrating the cyber issues of AI into their overall strategy?

Robinson: It’s a great question. Along the lines of nothing new under the sun, information is power. The ones it’s keeping up at night, or who are really concerned about AI threats, are the ones that don’t have correct information or haven’t studied the industry the way they need to, including tying in how AI can be impactful.

The problem that we have in cyber is that we chase these butterflies throughout the industry and stop doing the basic things, such as protecting against attacks like DDoS attacks or SQL injections that are still happening today. They were around before I started in 2009. Large organizations are going down because of attacks that have been out forever.

The ones that are really freaking out about AI, and fearful of it, are not educated or informed as to how it can impact their organization.

New Requirements Due to AI

Silverman: Are your customers and the partners you work with looking at traditional cyber disciplines and expanding them based on the requirements that AI is adding?

Robinson: Not as much as they should. When clients or people come to talk to me about AI, the question is where they should start. One of the first questions I ask is, “Tell me about your data security program. Tell me about data classification and asset inventory. Tell me about that, and then we can discuss AI.”

Shockingly, a majority of them cannot answer that question. I tell them that is where they need to start. The main concern is their data, assets and applications. We go back to the Covid model, where we were concerned about people, applications, data and infrastructure. It’s a very basic model that can yield a wellspring of results, and it is the foundation. Then you can start chasing AI.

If they don’t, they are absolutely going to miss gaps, because AI can touch so many different things inside and outside an organization.

Those organizations going down the path of AI and security, whether it’s external threats, internal threats, how they use it or how they secure their own AI applications, need to start by getting back to the basics. How are we protecting our data? How are we protecting our applications? How are we protecting our infrastructure, whether it’s cloud, on-premises or wherever? And don’t forget the people: how are we currently protecting them? Then focus on: How can we take that foundation and build on top of it?

On the solution provider side, many existing security tools are putting AI on top of platforms that are already established, and in most cases those tools are failing because they were not built with AI in mind. The security tools with AI that are really good and functional are the ones building it in from the ground up.

The same goes for an organization’s cybersecurity program with AI: build it from the ground up. How do we build that alongside the core competencies we’ve had in place for years?
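To make Robinson’s starting point concrete, here is a minimal sketch of what a data classification and asset inventory could look like in code. The classification levels, field names and assets are illustrative assumptions, not a prescribed format; most organizations would track this in a GRC platform or CMDB rather than a script.

```python
# Minimal sketch of a data classification and asset inventory.
# All names, levels and fields here are illustrative assumptions.

from dataclasses import dataclass
from enum import Enum

class Classification(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    RESTRICTED = 4  # e.g., regulated data such as PHI or PII

@dataclass
class Asset:
    name: str
    owner: str                       # accountable business owner
    location: str                    # "cloud", "on-prem", "saas", ...
    classification: Classification
    reachable_by_ai: bool            # can an AI assistant or agent touch it?

inventory = [
    Asset("customer-db", "claims-ops", "cloud", Classification.RESTRICTED, True),
    Asset("marketing-site", "marketing", "saas", Classification.PUBLIC, True),
    Asset("hr-records", "hr", "on-prem", Classification.CONFIDENTIAL, False),
]

# A first AI-readiness question: which sensitive assets can AI reach?
exposed = [a.name for a in inventory
           if a.reachable_by_ai
           and a.classification.value >= Classification.CONFIDENTIAL.value]
print(exposed)  # ['customer-db']
```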

Silverman: In “The Business of AI” series, the first two articles focused on the different types of AI. What struck me listening to you is that you have to start with the data, as well as the use case and what you’re deploying, because assistants such as Copilot, ChatGPT or Google Gemini will create different security issues compared with deploying your own AI solutions that may automate business processes such as customer support.

I also appreciate your comment that cyber solution vendors need to think about AI from the ground up, not just as an added feature on a checklist or brochure.

Robinson: Go to any vendor right now: if they don’t have AI in their marketing or sales literature, they are not going to find many opportunities. But again, the focus needs to be on how it functions.

Clients come to me and say we have to have AI in every tool that we buy. Okay, that’s great, fantastic. But what does that mean to your organization? Has the vendor proven its capabilities for you with a proof of value or a proof of concept? There are a large number of companies buying tools just because they say “AI,” without seeing it demonstrated. They pay a pretty hefty bill to get the tool in there, and it’s not doing what they thought it was going to do, and that leads to a lot of problems.

Effect on Innovation

Silverman: Does it make you optimistic that there are vendors, using AI, that will not only solve AI security issues but also address some of the broader issues that have been challenging the cybersecurity world for a long time?

Robinson: Yes, I really do, because it’s a different mindset now. It’s not just someone sitting in a basement, or pick your nation-state, planning a cyberattack. It’s now truly automated. As I said earlier, we have polymorphic malware, but now it’s on steroids because of the capabilities of AI.

People don’t really understand that the cybercriminals are amazing at communicating with each other. They’re amazing at innovation and progressive thinking. 

Cybersecurity has relied on archaic methods to protect organizations, protect employees and protect intellectual property.

If these new AI-powered protective tools are architected in the AI realm and do really well at one domain in cybersecurity, they are going to increase the productivity of the tool, of the updates to it and of the changes made to it, and that becomes a competitive advantage. They can say, “We’re keeping up with the criminals,” which has eluded cybersecurity, which for so long has been behind the criminals. Now AI is forcing people to stay in lockstep with the criminals as far as innovation and progressive thinking.

Impacts of Agentic AI

Silverman: Let’s shift for a moment and focus on AI-powered organizations, or those on their AI journey.

As we move to agentic AI, where agents such as digital workers are independently able to do work and to work with other agents and humans, how does that change traditional cyber and IT disciplines such as identity and access management and data security?

Robinson: When you look conceptually at cyber disciplines, especially identity and access management, there still has to be a process in place, a human process.

I was talking to somebody the other day about a data leakage situation: they found out that employees could do broad searches on the organization’s database through Copilot. They did not have the correct setup, segmentation or identity rules and rights to control access to the data, so Copilot could access everything.

As an example, if it’s a healthcare firm and the data is not properly set up, I could ask Copilot to show me all the member numbers of an insurance company, which is a massive regulatory and compliance violation.

The architecting of identity and access management can’t be automated right out of the gate. Who knows? Maybe someone sitting in their basement right now can do it with the right prompts to ensure everything is locked down and encrypted, but they are losing the human element, because there are so many variables involved. Let’s say your executive assistant is sloppy with sharing data. That is a hard prompt to put in and to consider when you’re doing identity and access management.

A lot of things can be automated, including in cybersecurity, but there is still room and need for humans, where day-to-day knowledge of a company is going to be vital, including in AI and human interaction.
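To illustrate the access-control gap Robinson describes, here is a minimal sketch of one way to keep an assistant inside a user’s permissions. Every name in it is hypothetical; the point is simply that the assistant’s retrieval path enforces the same rules a human would face.

```python
# Minimal sketch: an AI assistant inherits the requesting user's
# permissions instead of querying the corpus directly. All names
# here are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    classification: str              # "public", "internal", "restricted"
    allowed_roles: set = field(default_factory=set)

@dataclass
class User:
    user_id: str
    roles: set

def retrieve_for_assistant(user: User, corpus: list) -> list:
    """Return only documents this user is entitled to see.

    The assistant never searches the raw corpus; everything flows
    through this filter, so a broad Copilot-style query can't
    surface data the user couldn't open directly."""
    return [d for d in corpus
            if d.classification == "public" or (user.roles & d.allowed_roles)]

corpus = [
    Document("employee-handbook", "public"),
    Document("member-numbers", "restricted", {"claims-processing"}),
]
sales_rep = User("jdoe", {"sales"})
print([d.doc_id for d in retrieve_for_assistant(sales_rep, corpus)])
# ['employee-handbook'] -- the restricted record stays invisible
```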

Silverman: I read somewhere, and it was Copilot related, that Microsoft has Copilot assume the permissions of the user. That obviously was not enough to secure the data for your customer.

In the case of a digital worker or agent, it may independently have greater authority than the employee or customer it is working with, making managing those permissions more complicated than for an AI assistant such as Copilot.

To your point, the human advantage here is going to be the ability to understand the differences and adjust, but a digital worker is going to move forward with what has been set up for it, including, hopefully, the right guardrails to protect trusted information.

Robinson: Those are two things you brought up. One is the assumptions, and assumptions are great, but they could lead to massive data and privacy issues. I don’t want to leave that to assumptions. The other is the human element. There are so many prompts that you could put into these tools that don’t account for human nature, such as an executive assistant who may be having a bad day.

It goes back to this: if the right policies, procedures and standards have been established, then you have a better chance of not having these assumptions made, and you are just accounting for the regular policies you have in place to protect the organization and its employees.

It’s very complicated. We’re all still learning at this point, and that’s not to say AI is bad or not useful. Companies are going to save their businesses by utilizing AI.

It’s just holistically thinking about it and not being overly consumed with AI. It’s getting to the core of what it is you’re looking to accomplish, whether you’re implementing it or whether you’re looking to protect against the threats from AI.

Monitoring Tools

Silverman: That is a good bridge to the next question, focused on the people developing AI agents and AI applications, with AI use cases and workflows they are planning to automate. An AI application developer may not prioritize security or consider the cyber risk from the beginning.

Do you think cyber monitoring and tools are going to have to change? Or do you think they just need to better embrace and monitor this new AI-agent-automated workflow world that we’re in?

Robinson: Yeah, I think that’s directed by policy. I really do. The tools are there; web application security tools are out there, and I’m sure you can configure them in a way to monitor AI. And to your point, if you’re a developer, going live is everything. You’re not worrying about holes in applications or security flaws.

It comes down to utilizing the tools that are out there that do protect against flawed applications, and having policies that people have to adhere to, such as requiring applications to go through a security check before going live. Developers have to follow policies and procedures to make sure that sensitive data isn’t available to everybody who doesn’t need access to it.

Those are the important things that set the standard for how you’re developing AI solutions.
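As a concrete example of the kind of policy Robinson describes, here is a minimal sketch of a pre-go-live security gate that a build pipeline could run. The specific scanners (bandit and pip-audit are real, commonly used Python tools) and the command list are assumptions; substitute whatever checks your own policy mandates.

```python
# Minimal sketch of a pre-go-live security gate, run from CI: the
# release is blocked unless every configured check passes. The
# scanner choices below are illustrative assumptions.

import subprocess
import sys

CHECKS = [
    ("static security scan", ["bandit", "-r", "src/"]),
    ("dependency vulnerability audit", ["pip-audit"]),
]

def run_gate() -> int:
    for name, cmd in CHECKS:
        print(f"Running {name}: {' '.join(cmd)}")
        if subprocess.run(cmd).returncode != 0:
            print(f"BLOCKED: {name} failed; fix the findings before go-live.")
            return 1
    print("All security checks passed; release may proceed.")
    return 0

if __name__ == "__main__":
    sys.exit(run_gate())
```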

Silverman: To your point a few minutes ago: the good news is that practices that may not have been well disciplined before may be improved because of the emergence of AI.

Robinson: Yeah, I’ve been in conversations between SecOps and DevSecOps, and it’s a mess. It’s like the Hatfields and McCoys when they’re out there battling. But for organizations that are building these AI models and solutions correctly, it’s going to force their hand to consider security.

This is an old analogy that I use: If you’re building a house, you could build the most beautiful, elegant house. The pool is in the backyard, there’s an outdoor kitchen, the whole nine yards. But if I’m putting it in the middle of the most dangerous neighborhood, and I don’t have protections in place for that elegant house, you have problems.

Now we are building these beautiful, elegant AI solutions that reside in the most dangerous neighborhood in all the world, which is the internet. If you’re not giving credence to securing these applications that you’re building, you’re asking for trouble.

The Role of the CISO

Silverman: Does it change the CISO’s role? Or is this a moment of goodness for them, because now people have to pay more attention, and not just treat it as a check mark on a compliance checklist?

Robinson: It’s a real opportunity for CISOs to expand their roles and responsibilities in cyber. CISOs, in general, have been put in a really bad position, because organizations that don’t understand the role think that they control more than they do. It’s unfair pressure for the CISO. If something goes wrong and it’s an insider threat, that’s an HR issue and not a CISO issue. But because it has to do with cybersecurity, it falls into the CISO’s lap.

CISOs have a real opportunity to expand their role. Take a DevOps person who is refusing to follow policies and procedures and putting out bad code that is exposing the environment to threats, vulnerabilities and attacks. Again, that’s an HR issue that falls outside the role of the CISO. However, CISOs can take this as an opportunity to expand roles and alleviate some of the stress and pressure by getting the organization on board with holistically protecting the environment.

Regulations and Compliance

Silverman: The European Union has done a very good job of at least trying to get ahead and regulate AI. Do you see AI regulations and compliance being something that will expand more and be more important in the cyber world?

Robinson: Yes, it’s really important. The hesitation that I have, and the challenge that I see, is that securing AI technologies really can’t just be a checklist. There are a lot of variables in place, including the flexibility of AI and the usage of AI.

So yeah, I’m a GRC (governance, risk and compliance) nerd; I’m all about it. I’ve seen ISO/IEC 42001 (the standard for AI management systems). You’re going to see different regulations, such as the potential AI Bill of Rights. Great, excellent, fantastic to bring it to the forefront. But there has to be some messaging involved that these are the bare minimum, basic rules that you need to follow.

An example that I give is: say I own an office building in Midtown Manhattan. The fire marshal requires me to have fire extinguishers on every floor. I have a fire extinguisher. Great, excellent! So I’m compliant, and I’m in code with the fire department. But does the fire extinguisher work? Does it have the materials inside it to put out a fire? Do I have people on my floor who know how to use it? Is it easily accessible? Is everybody trained on what to do in case of a fire? If it’s a massive fire that requires the fire department, am I going to have an employee just standing there with a fire extinguisher who is going to get injured using it?

That is the second part of compliance that has to come in line with AI. Cyber and compliance go in the same direction, but they’re definitely on different tracks, and it is important to make sure those tracks can intersect. For example, compliance and regulations require data to be encrypted. If you’re using a large language model (LLM), OK, great, you need to show the data is encrypted. But you also need to know about the decryption capabilities that people might have.

And what do you have if people are just spinning up documents outside of the four walls of protection, and now there’s PII (personally identifiable information) leakage taking place that the organization doesn’t have visibility into? Yes, compliance is great, but we can’t look at it as the one way of being protected. That’s a false sense of security.
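To show what that visibility might look like at the simplest level, here is a minimal sketch that screens outbound LLM prompts for obvious PII patterns. The regular expressions are deliberately naive and purely illustrative; a production deployment would rely on a dedicated data loss prevention (DLP) service.

```python
# Minimal sketch: screen outbound LLM prompts for obvious PII before
# they leave the organization. The patterns below are illustrative
# assumptions, not a complete PII taxonomy.

import re

PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def redact_pii(prompt: str):
    """Replace detected PII with placeholders and report what was found."""
    findings = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt, findings

clean, found = redact_pii("Summarize member 123-45-6789, contact jo@example.com")
print(found)  # ['ssn', 'email']
print(clean)  # placeholders replace the PII before the prompt is sent
```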

Training Employees

Silverman: One key topic you often post about on LinkedIn is the challenge of training employees. Is it still a checklist, a checkpoint in a process, or is AI making it real? Do you see employee training adapting, or needing to adapt, to make it more relevant to what employees do day-to-day?

Robinson: Yes, and employee training needs to go back to the core competencies of an organization. If you’re just running phishing campaigns with a phishing training tool every quarter, and that’s your employee training program, you’re going to fail. What we need to do is start implementing the “why” in our security awareness training, because you’re asking workers to understand the world that you live in, and they can’t.

It’s the same thing as if I were on the security team and someone asked me to sell; that’s not my core competency. You have to give me the reason why I need to be careful with spinning up LLMs, because everybody’s into instant gratification: I could spin up this LLM and become more productive, which will make me look better to my bosses, which will lead to more money.

The best security awareness training programs that I know of are the ones that get organization-wide buy-in, and the only way you’re going to get organization-wide buy-in is if you meet people where they’re at.

AI can be a Trojan horse for making the whole program better, even if you don’t already have a good security training program. You can get more attention for security training by saying “AI,” as it will grab people’s attention.

When you do the training, make sure to say why. One example I like to give is when I hear clients say they can’t get their sales team to do security awareness training. I recommend they go to them and say, “Hey, what if someone stole your client list and your prospect list, and they were your biggest competitors? That would be a bad day for the organization, right? That’s why you need to protect your documents. That’s why you can’t do your sales forecasting on public Wi-Fi at a Starbucks. You don’t know if the person selling against you is sitting next to you with the capability to take your data and weaponize it against you.”

Making it about the “why” and making it personal to employees helps make a security awareness training program successful.

Impact of an Organization’s Size

Silverman: Does that advice change, when adapting a cyber strategy for AI, if the organization has a hundred thousand employees or a thousand?

Robinson: The size of the environment doesn’t matter, because it all comes down to productivity; that’s what it all comes down to.

Silverman: And from a cyber point of view, it goes back to where we were at the beginning, right? The starting point is data and your planned AI use cases, whether you’re a smaller company that is maybe going to use ChatGPT or Copilot, or a larger company developing your own internal solution. Then consider the implications, cyber-wise, right?

Robinson: Yes. Some people reading this interview might say this is just basic. If it were just this basic and everybody were doing it, we wouldn’t have the problems that we’re having. Making sure the basics are covered, and the foundation is built, before building out AI strategies and solutions is so important.

It may not seem to make any sense why this is earth-shattering. It is earth-shattering, because people are going in so many different directions.

The only way it’s going to work is if you pragmatically approach this new frontier with facts and data specific to your organization, and are able to articulate and show the business the value of utilizing security tools to get this going.

I had a really good conversation with a CISO the other day, and he said people don’t buy security tools for security. I asked, “What are you talking about?” And he said they all do the same thing. They’re all in business for a reason. Maybe somebody’s doing it slightly better because they have an innovative way of doing it, but in the end they’re all doing the same things. People buy security tools because they make them more productive and make them function more as a team. Using AI properly can help teams do that.

Final Words of Wisdom

Silverman: Are there any final words of wisdom you’d like to share with the TechChannel audience?

Robinson: My final words of wisdom are to use this as an opportunity to centralize your organization around the risks and threats that are out there, including AI. It’s a lot to know and it could be complicated. It could be cumbersome. But use this opportunity to take a brief pause and to say, “Hey, we have some work to do here,” and then build a solid cyber plan from there.

