Westley McDuffie on Zero Trust and Data Security
By Charlie Guarino / February 1, 2023
IBM security evangelist Westley McDuffie shouts out the Jedi Master of zero trust and issues a warning about anyone claiming their product or service is a be-all security solution
Westley McDuffie: I’m happy to be here. I really am.
Charlie: Thanks so much for coming. Westley, I want to start our conversation today with something I've seen you say at least twice that I can recall in recent memory, and it stuck with me during your keynote. I'm going to paraphrase, but it goes: in your business, if your system gets hacked, there's a data breach. In my area of security, if a system gets hacked, people can die. And boy, when I first heard that, it really got my attention. So what do you mean by that? What's the significance of that statement?
Westley: In the commercial world, Charles, the talk actually started about what's your worst day. What's your worst day at work, and when you go to work and you make a mistake, what's going to happen? You make a policy change, you do a rule change, you do something in your course of everyday work, and maybe it's wrong. We're human. We're fallible. We are going to make mistakes. In a business context, if we make a mistake, it's going to cost what? Some overtime. You're going to lose a little money here. Your shareholders may be upset. You could drop the value of your stock. You could pull an Enron and have to go and testify, and you may blame the IT guy just like they did. In my world and with the people that I deal with, if I make a mistake, people literally can lose their lives. Because I support the man on the battlefield, I support the woman on the battlefield, I support US intelligence efforts. So me making a mistake can literally and honestly cost somebody their life.
Charlie: No joke. This is a serious topic.
Westley: It really is, and there are people who are no longer living on this planet because of cyber incidents. Yes, it has literally done that.
Charlie: It doesn't surprise me then, based on what you just said, Westley. Year after year, you consistently read all these reports about the top concerns of IT professionals, CIOs, CTOs, the whole C-suite, and the #1 concern is security. In any form and every form, security just seems to be front of mind every single year.
Westley: A few years ago I did a presentation. I was on a panel, and I was the only non-attorney, the only non-CIO. And when I say I'm the only non—and I don't know if I can say their name, so we'll say a credit card company from here on out—there was an attorney type from Congress who wrote bills and articles about cyber, and then there were a couple of other CIOs and CTOs out there. What I noticed was they made comments about security being boardroom talk, and I called them out on it, because even though we say it's an everyday conversation, it is in the security profession. It is around your CTO. But is it really, truly a conversation you have in your boardrooms with your C-level suite? No. I'm going to be honest—it's not. It may happen, but it's not a daily conversation. I mean, do you get up every day talking about security? No, but our bad guys do. Our adversaries do, because that's their job. Their job is to get your stuff. Their job is to create those attacks. The security industry might be an $8-10 billion a year industry—and in some cases we'll say it's even more than that, because I don't go and check those numbers every day—but I can tell you what the bad guy industry is. It's also in the billions of dollars. The education that I have to do what I do, the bad guys now have that same education. The bad guys know product sets. They know what you're going to do, and they know human nature.
Charlie: Which brings up an interesting point, and that is, I think, that there are some companies out there who feel they are secure, and maybe the term they use is security through obscurity.
Westley: I was about to say the exact same thing. Security via obscurity.
Charlie: We think we’re secure and there’s this potential false sense of security. So what do you say to these people when they think that and how do you get around that to really suggest to them that they’re not as secure as they might be?
Westley: Years ago we used to offer something—it was a pen test challenge. If we could find some leak, some vulnerability that we could exploit, we wouldn't charge you for it as a company. But we would come in later, and then you would have to get the full-blown service. You know, security through obscurity works in small doses. You don't want to go off and advertise that you have money in your wallet. You don't walk around flashing a big sign saying hey, come rob me because I have X, Y, and Z. We don't do that, so you do keep your mouth shut, because it's nobody's business what you have except for your data owners and your data sets. That's who should be involved. But security through obscurity—trying to just stay hidden—years ago we talked about the reason we don't allow something like Ping to come through the firewall. It's a great troubleshooting tool. Let's not get these advanced tools in here where you don't need them, but we turned Ping off from the outside because we've always been told, well, we don't want the bad guys knowing where we are. True, but they already know where you are, so let's play the game for what it really is: I can get information by pinging your firewall to find out what kind of firewall it is. I can get information from your routers and your switches if I just ping them, because they're going to send me a reply back. Different machines have different types of replies. I can tell you what kind of machine you have by the type of reply.
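Westley's point about replies revealing the machine type can be illustrated with a common heuristic: different network stacks ship with different default initial TTL values, so the TTL observed in a ping reply hints at what sent it. The mapping below is a rough, illustrative sketch (the thresholds and names are common rules of thumb, not anything stated in the interview):

```python
# Rough sketch: guessing a device family from the TTL seen in a ping reply.
# Common default initial TTLs (heuristics, not guarantees): many Linux/Unix
# stacks start at 64, Windows at 128, and network gear such as routers often
# at 255. Each hop decrements the TTL, so rounding the observed value up to
# the nearest default suggests what kind of stack sent the reply.

DEFAULT_TTLS = {
    64: "Linux/Unix-like",
    128: "Windows",
    255: "network device (e.g. router)",
}

def guess_stack(observed_ttl: int) -> str:
    """Round the observed TTL up to the nearest common initial TTL."""
    for initial in sorted(DEFAULT_TTLS):
        if observed_ttl <= initial:
            return DEFAULT_TTLS[initial]
    return "unknown"

if __name__ == "__main__":
    # A reply arriving with TTL 54 most likely started at 64 (10 hops away).
    print(guess_stack(54))   # Linux/Unix-like
    print(guess_stack(117))  # Windows
```

Real fingerprinting tools combine many more signals (TCP options, window sizes, ICMP quirks), but the TTL heuristic alone shows why even "harmless" ping replies leak information.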
Charlie: So with that in mind, is there also a possibility of a false sense of security? The term I hear is security theater.
Westley: Oh yes. I'll give you an example of security theater that's not technical, so all of our listeners can understand what it is. When you go to the airport, all those controls you go through are security theater.
Charlie: You are not the first person to tell me that by the way, and as somebody who travels a lot, that’s a concern I have. But the idea is that once you go through security, you’re in a more sterile area.
Westley: And you’re really not because how many things do they miss a day?
Westley: Many. In fact, a couple of weeks ago I went through with three full bottles of water they didn't catch, but my belt set the detector off—and it's only in one airport that my belt sets off the detector. I bought a "TSA approved" metal buckle for my belt so it wouldn't do that. It doesn't do it in any other airport but one, and that same airport missed three bottles of water when I went through. I literally took them out of my bag and went, "oh whoops, I don't need these." They looked at me kind of odd. I went, "you missed it, not me. I'm good." And I walked on through.
Charlie: I started our conversation, Westley, by saying people walk out of your keynotes concerned, and I think part of that concern is because they hear that everybody—or a high percentage; in fact, one number I read is as high as 83%—of all companies are vulnerable, and in fact it's not a matter of if but when they're going to be hacked. Am I being Chicken Little, or is it a real concern—or should it be a bigger concern?
Westley: It should be a real concern, because all the bad guys have got to do is get it right once. You've got to get it right every time. We're not going to. It's just not going to happen. Human error is going to creep in some place or another. In fact, human error is the only attack vector that has been increasing at a steady rate over the last five years. Other attack vectors will come up, other attack vectors will go down, but the human element is the one constant that has always been there. Is it fair to say Chicken Little? No. If I were a betting man and went to Vegas and put odds on who is going to get breached, I would put it on every single company, because sooner or later it's going to happen. It may not be advertent—it may be inadvertent—but it will happen.
Charlie: Hmm. I guess our ultimate goal is to protect the data, protect the assets of the company. That’s I imagine the complete goal here, and that’s really the impetus for having security. But you know we talk about data breaches and there are real life, real numbers being associated with these hacks. I mean do you have any examples you could share with us?
Westley: You know I do. We talk about 83% that could be or should be or will be breached. I know from the stats that the majority of companies that do get breached are notified by an outside entity—either the breacher themselves, or a company you brought in to do some kind of analysis, some kind of external work. They'll see it too. Over 60% of known breaches are reported by outside parties.
Westley: That to me is even scarier than the number of companies that could be breached—upwards of 83%—because now we know a number.
Charlie: So let's get into this a little bit deeper. We talk about security, and you know, I think if I were to ask ten different people, I would get ten different responses on what they consider security to be. I think that would be a true statement. But the traditional model that comes to my mind—and I've done some research on this—is what they call the perimeter security model: password, user ID, and then you're inside the building and you have complete access. You had a different term for that. Talk about that, please.
Westley: Well I called it moat and castle, and you brought up a point about allowing somebody in. That is the old moat and castle routine. Once you’re in, we trust you. So before I transition into that zero trust piece about that user identity because this is where this is going to go whether we want it to or not. This is where it’s headed. This is not new. People go, moat and castle is dead. It is not dead, it has not been dead, it will not be dead and you can quote me on that. And if it gets poor reviews so be it, but here’s the deal: We just moved the model. You still have your data center protected, whether they’re on prem or in cloud, you still put up security around that. You put up encryption around your data, so we still haven’t removed it. If we want to keep the same amount of reasoning with moat and castle, we have our remote users. Those are your knights on horseback. They’re armored up. They have their protection, it didn’t change. It’s the same concept. We just put new words on the old technology, and if I can say this without getting in trouble, it’s marketing mumbo jumbo. I mean I’ve changed the name and the marketing on one of my products four times in the last six years. I’m still using the old marketing terms because I’m not keeping up with it because it’s marketing, just like advanced persistent threat is a marketing—APT stands for another PR term.
Charlie: So we talk about moat and castle that it’s not dead, but is it still a valid way to do it? You mentioned zero trust which I guess is where we are heading, but is moat and castle in and of itself still a valid model or a sufficient model to keep somebody secure?
Westley: To keep it secure, no. Is it a model? Yes and what we’ve had to do is the moat and castle is still your foundation, right? So let’s take myself as the knight. I’m going to go off with my laptop and I’m going to work remotely somewhere. I still have to get back to my data center, so that is not going to change. Now you just made it secure for me to talk to my data center, and if we bring it to the next phase of zero trust, we use multifactor authentication. We’re going to use a couple of other pieces of technology to verify who I am, but I’m going to be constantly verified every so many minutes, or if I go to another domain or another enclave within that network, I’m going to be re-verified.
Charlie: Right, so now we’re going into the area of zero trust where—
Westley: Yeah, but—and I'm trying to keep it under the moat and castle theme—that whole piece of it is not new. That whole point about zero trust, this piece of it, is not new. I can go back and look at a document from March 8, 2017. It talks about minding your network security gaps, what your current monitoring solutions don't tell you. I've got another one from 2016 that talks about user identity and about enforcing the policy and enforcing those rules. That's where the breakdown occurred. It's not new, and it still works. That moat and castle still works—we just can't trust anybody inside the castle.
Charlie: And not only can you not trust somebody, but you have to keep re-credentialing them every time.
Westley: Yes, and if we want to talk about that piece of zero trust, you see it all the time around you—you just don’t realize it. And that’s how it is supposed to work. If you go to your favorite airline website or your bank and you login and you don’t touch the keyboard for a few minutes, it’s going to log you out.
Charlie: That’s zero trust?
Westley: That is part of zero trust.
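The airline or bank logout Westley describes is usually an idle-timeout check on every request: the session is re-verified each time, and inactivity beyond a window forces re-authentication. A minimal, illustrative sketch (the `Session` class and the `IDLE_TIMEOUT` value are hypothetical, not any real product's API):

```python
import time

# Seconds of allowed inactivity before forcing re-authentication.
# The value is illustrative; real sites tune this per risk level.
IDLE_TIMEOUT = 300

class Session:
    def __init__(self, user, now=None):
        self.user = user
        self.last_seen = now if now is not None else time.monotonic()

    def touch(self, now=None):
        """Called on every request. Returns True and refreshes the session
        if it is still within the idle window; returns False (logged out,
        must re-authenticate) if the window has elapsed."""
        now = now if now is not None else time.monotonic()
        if now - self.last_seen > IDLE_TIMEOUT:
            return False  # do not trust a stale session
        self.last_seen = now
        return True
```

The zero-trust idea is in the `touch` call happening on every request: nothing is trusted indefinitely just because the login succeeded once.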
Charlie: Okay. The research that I have done says that literally nobody can be trusted.
Westley: That is correct.
Charlie: That’s the premise.
Westley: That’s in the title: zero trust.
Charlie: Exactly. So this brings up an interesting discussion, and it's true in this conversation and in other areas as well—for example, in source code, programming, and refactoring. The point that I'm making is that there's a balance, and the balance is: how much friction do you want to put in front of your users to keep the bad guys out, while also giving the good guys a good experience without making them jump through hoops every single time? Where's that sweet spot? How do you do that?
Westley: Well, it depends on the user, because if I'm having issues with a user, I want lots of friction so they understand they messed up—you know, I say that tongue in cheek. When security becomes problematic for your users, it prohibits your ability to do what your company is set up to do. So if you are in the financial industry and you're not doing finances, not creating money, because security is in the way, now we have a problem. The defense industry is a lot different from the commercial or financial industry, but either way, when security gets in the way of productivity, we're doing one of the two wrong, and that's where that part has got to change. With user identity, you know we want multifactor authentication—something you know, something you have, something you are—but you want to make it where it could be a fingerprint. So at IBM we use an IBM product, and when I have to re-authenticate every 15, 20, 30 minutes or I move to a different enclave, I have to look at my phone. I honestly have to pick it up and use Face ID to log me in, or I can do my fingerprint, or I can type in a code. But I've got it set up for my ease of use, because I get tired of typing in codes when I could just look at my face. So now I have to keep my phone with me to use my laptop. That's an acceptable tradeoff. I have a client that has to log in 7-12 times a day, and they don't have a choice on how they do it. I'm like, well, that's odd. They should at least look at less-intrusive mechanisms. You know, I'm all for fingerprints; I'm all for iris scans on your computer to make sure it's your eyes. I love those things, but we have to be able to embrace that technology and say hey, this is where we need to go. And let me tell you: these technologies currently are not cheap. That's why you still see a lot of the old-fashioned username and password and re-authenticate. That's why you still see some of those.
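The "type in a code" factor Westley mentions is typically a TOTP one-time code. A minimal standard-library sketch of the standard HOTP/TOTP algorithms (RFC 4226 and RFC 6238) might look like this; note this is the generic algorithm, not the IBM product he refers to:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, for_time=None, step: int = 30) -> str:
    """RFC 6238 TOTP: HOTP keyed to the current 30-second time window."""
    t = for_time if for_time is not None else time.time()
    return hotp(secret, int(t // step))
```

Using the RFC 6238 test secret `b"12345678901234567890"`, `totp(secret, for_time=59)` yields "287082", matching the published test vector (truncated to six digits). The security property is that the code is useless seconds later, which is the same "constantly re-verify" posture as the rest of zero trust.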
Charlie: While that may be true, in many cases I think the cost of implementing a zero trust or even just a basic security model, that’s good insurance and that’s still cheaper than the cost of a data breach, which is more than just money. It’s reputation as well.
Westley: It is, so I can take us back. There was an entertainment company—movies, they make them, plus a network gaming platform. And I won't say who it is, but they created a movie, and the movie got released—or leaked—a little bit early, and it made another country mad, and that country hacked this entertainment company. $225,000 a year would have paid an employee to do the security work. I don't know what the cost of the software would have been, but that breach cost this company dearly—over $70 million worth of fallout was what this company suffered—and let's say $300,000 a year would have solved that problem. But they saw it as overhead they didn't want to pay. Well, they paid it in the end.
Charlie: Right, in spades.
Westley: Oh yes—and the movie was a bomb.
Charlie: Besides that [laughs]. So how does somebody get started if they really want to go down the zero-trust path? Actually, before I even ask that, my question is: is zero trust a model that all companies eventually should adopt?
Westley: So you ask should they adopt it, and that is a valid question. Part of that answer is they’ve already adopted it and did not know it, whether they wanted to or not.
Westley: Because with zero trust, there are four primary myths, and I'm not going to cover all of them. You can find everybody on YouTube talking about them; I don't need to cover them all here. But one of the biggest myths is that we try to establish trust in zero trust. There is no trust in zero trust, and that's also one of the biggest problems we have with zero trust: trying to establish trust. Stop establishing trust. Trust is your vulnerability in this entire concept.
Charlie: Regardless of what a person’s title is or what their responsibilities—
Westley: Especially what a person's title is. Case in point: when Ginni Rometty was our CEO—our supreme poohbah, grand leader—she showed up at one of our data centers for a tour. She didn't have her badge. Well, everybody let her in, because everybody knew she's Ginni Rometty, and she said, you can't let me in. I don't have my badge. But you're Ginni, we know you. You're the boss. We have to let you in. No, you don't. She didn't have her badge. You shouldn't have let her in. I get it—it's Ginni Rometty, you let her in. So in the whole concept of zero trust, the first myth is that there's trust. There's no trust. We don't trust our users. That's why we give them badges and ID cards. That's why you have all of that, and we've already implemented that across the board with two-factor authentication. We've already done that with VPN access, so we've already incorporated some pieces of zero trust. Like I said earlier, there's not a product for zero trust, but there are products that help us achieve zero trust. We toddle around user identity, and that's good, because again, the number one problem we have is with the identity piece.
Charlie: So you just mentioned something I want to grab onto. You mentioned that there are products out there. Are you talking about things like security penetration testing?
Westley: Yes—any type of red team testing, blue team testing, any type of vulnerability assessment. Those help you achieve your zero trust model. They help you find your holes. They help you identify the gaps in what you're not seeing, because honestly, when you're a security administrator, you're an analyst. When you're looking at the same thing over and over again, you're going to develop a bias. Like, Charles, if you came to my office: let him in, I know him. Well, you're not vetted. I just broke all the protocols of zero trust, because I trusted you because I know you.
Westley: We can’t be doing that.
Charlie: Right. So talking about that penetration test, is that a one and done, or is that something you need to do on some regular cadence?
Westley: So penetration testing and vulnerability assessments are twin stepbrothers. A vulnerability assessment is nothing more than a pen test where the "bad guys"—your guys, or whoever you hire to do your VA—know all your passwords. They know all your loopholes. They know everything, and they're going to help you go through what we call a crawl, walk, run method. So we're going to do a vulnerability assessment first. We're going to look at all your assets. We're going to see what their patch levels are. We're going to go through and check your firewall rules. We're going to do your basic stuff. We're going to do some basic testing, but we're going to have the keys to the kingdom so we don't get stopped in what we're doing. Then we find those loopholes or those gaps, and we help you fix those gaps—and we do this again, and we do this again. We'll do this two, three, four times over the course of a year or two. Then, when you think you've got it down, you call an independent company—or the same company with different teammates—and you have them do a pen test. The first time you do the pen test, you're expecting not-great results, or you don't have high expectations, is what I will say. So with a pen test, let's say we set it up for the first week of December of this year. All you do is tell your team that it's going to be during this time period: be looking for it. You give them a little bit of help if they're not ready for it, or you may say we're going to do a pen test on this day, and you give them a block of hours to start looking for those activities in your analytics tools, in your firewalls, in your SIEMs, at your source. You build those playbooks and you do this, and then the next time you may open that window up to the week, or you may open it up to the month. Then after that you do it on a recurring basis. But every time you do a pen test, if the pen testers are not giving you the information that you need to secure those gaps, there's no point in the pen test. That pen test is to help you get to a level of security in your maturity model that gets you into compliancy while you're maintaining your security.
Charlie: I’m glad you mentioned that word, because that’s another quote that I read that you actually had said and it really resonated with me. You just mentioned compliancy, so the fact that I may achieve industry compliance does not necessarily mean that I am still as secure as I need to be.
Charlie: All right, so give me an example of that.
Westley: In your policy, it might say that you need to have a password for every user—
Westley: Right? Your password needs to be at least seven characters long. That might be all it says.
Westley: ABCD1234 is a password. It can be easily guessed. You’re not secure if that’s what you’re using. You are compliant, but you’re not secure.
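Westley's "compliant but not secure" point can be made concrete: a check that enforces only the written policy passes a password that any stricter, security-first check would reject. Both checks below are illustrative sketches (the thresholds and the common-password list are made up for the example):

```python
import re

def is_compliant(pw: str) -> bool:
    """The (weak) written policy from the example: at least seven characters."""
    return len(pw) >= 7

# A small blocklist of trivially guessable passwords (illustrative only;
# real systems check against large breached-password corpora).
COMMON = {"password", "abcd1234", "qwerty123"}

def is_secure(pw: str) -> bool:
    """A stricter check a security-first shop might layer on top:
    longer minimum, not a known-common password, mixed character classes."""
    return (len(pw) >= 12
            and pw.lower() not in COMMON
            and re.search(r"[a-z]", pw) is not None
            and re.search(r"[A-Z]", pw) is not None
            and re.search(r"\d", pw) is not None)

print(is_compliant("ABCD1234"))  # True  -> passes the written policy
print(is_secure("ABCD1234"))     # False -> still trivially guessable
```

The gap between the two functions is exactly the gap between compliance and security: the first encodes what the policy document says, the second encodes what actually resists an attacker.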
Charlie: And under that umbrella there are many examples of things you can point out?
Westley: There are thousands of examples I could point out.
Westley: Thousands of examples. You know, in one of my earlier presentations—and I talk about it all the time—I said there is not a single right way to do security, and anybody who tells you there is, is absolutely full of garbage. There is not a single right way, but there are tens of thousands of wrong ways to do it.
Charlie: So no one size fits all template then.
Westley: No, not at all. I mean, there could be, but a policy may say that I've got to use a firewall and create rules. All right, I've got a firewall. If I don't configure the firewall correctly, I'm not secure, but I'm compliant.
Westley: If I’m with the federal government and I use a firewall that is not permitted to be on my network, or I use a device that’s not permitted to be on my network, I’m breaking all kinds of compliancy and security even though the software itself may do what I need to do. I’m now not in either one.
Westley: So that whole piece about compliancy—if you run compliance first, you're going to fail. That is a no-no. You will fail; that's what happens. If you do security first, then make your security objectives fit your compliance objectives, now you're doing it correctly, because you're going to get both security and compliance.
Charlie: Right. Right.
Westley: It still doesn’t mean you’re not going to do that wrong or make a mistake, because we’re still human.
Charlie: We’ve just gone full circle right there. We just ended this whole conversation at how we started it.
Charlie: Same way you did. So let’s kind of start wrapping this up because—I mean, Westley, I could sit here for three more hours and just keep talking because this topic, there’s so many different avenues we can go down and it’s fascinating. Again, I go back to what I said earlier: There’s no surprise that this topic is always front of mind because it is a real concern and there are—
Westley: It is.
Charlie: There are real hard assets at stake here. So just to give people something to think about as we start saying goodbye: what do you yourself read? What industry magazines, if you can name any, or just some topics that you follow? What should somebody be looking at to stay current on this topic, because it is so important?
Westley: So if we want to look just at zero trust—if you just google zero trust, you're going to get millions of documents. If I'm going to start talking about zero trust and the way it is supposed to be, the first thing I'm going to do is listen to a gentleman by the name of John Kindervag. He's been a mentor of mine, and he's actually the creator of zero trust. If you want to know what it is in its entirety, listen to what that man says. Then I would read or listen to anything he recommends as good reading or good listening, too. There are tons out there. That's the number one resource on zero trust, bar none. Disclaimer: I do not follow the IBM zero trust model. I do not promote it. Now, when I'm in my IBM uniform, yes I do, because I'm paid to do so, but everything I talk about is based on John Kindervag's vision of zero trust. Why? He owns it. Literally, he is the creator of zero trust. That's first.
Charlie: He’s the genesis, but are there others who have taken that basic infrastructure and have expanded on it? Maybe you yourself?
Westley: Oh no, no, no, no. I am just a mere Padawan compared to that Jedi Master, because he can talk about it in four minutes and you totally understand it. He talks about the four myths and the five other pillars. You know, the next thing I do is read a lot of company blogs. I read IBM's. I read Dell's. I read SAIC's, because that's the wheelhouse I need to be in with my federal customers. I read the executive orders on zero trust and what it's supposed to be, and I go okay, this is right, this is wrong, this is right, this is wrong, based on what John Kindervag has taught us. You know, for me—and this is what I will wrap this up with, and it's not just about zero trust—if you're ever reading a book, a document, a guide, I don't care what it is or who wrote it, and it says to achieve zero trust you need to do X, Y, and Z, and that doesn't match John Kindervag's view—and like I said, there can be differences in what we do and how we do it, because there's not a single right way but there are 10,000 wrong ways—then be skeptical. If you're reading any document, any guide, any paper that says you can achieve zero trust only by doing what we do, they're wrong, so I'd discredit that. If you're reading one that says you can do everything you need to do, but to achieve true security or true zero trust or whatever it may be, everything else is all the same until you get to what this product does—you're reading an infomercial. You're not reading true security. Those are the ones I tell you to watch out for. I'll never tell you that one person is better than another, except for John Kindervag, because he literally created this—
Westley: But I’ve already proven that it’s not new. All he did was put a title on it, but if you’re listening to somebody that says that you can only get this with my product or the way I do it, there’s your wrong answer. Move to the next one that doesn’t say that, and that’s the fairest way I can put it.
Charlie: That’s very succinct and I think that’s a great way to end our conversation. Westley, what else can I tell you? This has been—like I said, we can talk for hours.
Westley: We can. In fact, here’s what I’ll do for you. Are you ready? If your listeners or anybody else says hey, this was really good. You want to do another one? I’ll do more with you and we can talk about any nuance of security that you want to talk about.
Charlie: I’m going to hold you to that. So we’ll consider this Part 1, then. How does that sound?
Westley: You got it. It’s Part 1.
Charlie: Terrific, thank you. All right. Thank you everybody for listening to our podcast. Do take Westley’s advice and things that he said to heart, because he is bar none one of the true experts in this field, in this area and it’s no surprise why he’s doing as many keynotes as I’ve seen him present. So that’s good advice from me to you. Anyway, we’ll wrap it up here. Westley, thank you.
Westley: You are too kind, sir. Thank you very much.
Charlie: Thank you very much, Westley. Please remember everybody to check out TechChannel’s website. I say this every time: It’s chock full of other great information—podcasts, blogs, things like that—so it’s also worth a visit for you. Thank you very much. Until next month, everybody, see you again. Bye now.
About the author
Charlie Guarino // President, Central Park Data Systems