I recently hosted a conversation with Paul Vixie, Deputy CISO at AWS and one of the true OGs in Internet infrastructure. Paul’s been around since the early days of the Internet and is an Internet Hall of Fame inductee, renowned for his contributions to BIND and co-founding the Internet Software Consortium. Decades of hacking on DNS and anti-spam tooling have given him a unique lens into how to build secure, resilient systems at scale.
We covered a lot of ground, including how AWS weaves security into their development culture: treating incidents not as failures but as opportunities to improve threat models and harden their infra. Paul had some great war stories about the early Internet and how those lessons inform modern cloud security. We talked about the asymmetry between attackers and defenders: how attackers only need to be right once, while defenders have to be right all the time. He also hit on the real cost of complexity in distributed systems... more moving parts means more ways to break, more paths to exploit.
We even got into how AI is showing up in security workflows, from anomaly detection in logs to code-assisted red teaming. Paul’s take: AI is a useful tool, but don’t be fooled. AI is not a silver bullet, and it can introduce new attack surfaces if you’re not careful.
My personal favorite moment? Paul’s candid thoughts on the security industry’s biggest self-deception: that compliance equals security. It doesn’t. If you’re an engineer or security pro, you’ll want to check out the full video. It’s a rare peek inside how a true infrastructure veteran thinks about modern threats.
Transcript
This transcript has been edited for clarity.
Nancy: Thanks for joining today, Paul. Keep me honest as I read through your impressive bio of things that you've done, including a lot of awards. Paul, you're the founder of the Internet Software Consortium. You're known for your contributions to BIND, bonus points for folks out here who know what BIND is. Also you're an inductee into the Internet Hall of Fame. Now as a distinguished engineer and Deputy CISO at AWS, you're driving the direction of cloud security at a very large scale. So let's dive right in.
Paul: Okay.
What security as a top priority means at AWS
Nancy: At AWS there’s a saying that security is job zero. What does that mean and how does it shape decision-making across different Amazon teams?
Paul: Well, first, some insider information: it's no longer called job zero. The marketing department decided it was better off as "our top priority." So that's the new language.
Nancy: I'm a little outdated.
Paul: It's been weeks. What that means is principally about operating conditions and culture. If something is about to be released, and somebody says there's a way it could be misused that could put customer data at risk, it doesn't get released until that is fixed.
There's no workaround for that. That would be an example of culture; a better one is the way we process mistakes. We've got 1.6 million employees. Many are warehouse workers and truck drivers, but the size of the permanent technical workforce is still immense.
When you get that many people, they're going to make mistakes. The law of large numbers guarantees it. Interviewing at AWS is hard, but it doesn't matter how hard you make it; there will be a residual error factor. So the question is not how do we prevent errors, it's how can we limit them so none are dumb errors. And how do we learn from each one that sneaks through?
The best way to understand that is that the CEO of AWS, Matt Garman, gives us one hour a week. If you think about the CEO of a subsidiary that is a business of this size, that hour is a jewel beyond price, and we spend the whole week making sure that we are using his time wisely. What starts then is looking at the so-called COE documents (correction of error). When a mistake is made, it gets written up. If it's security-relevant, that writeup goes to the security team. If the security team thinks Matt Garman should know about it and should meet personally with the team, that means we want the company to learn as much as we can from an error of this type.
The people on my team spend the whole week making sure the write-up is ready. We are bringing about three issues a week to Matt and his top lieutenants, and we have very frank conversations, using a tool called Chime. It's like Zoom but worse.
You know, there's almost always an action item or two for the service team that went through this. But again, that's not why we're there. The meeting exists so that the company can squeeze every possible bit of education out of every mistake we make. That's a huge investment, but it's an investment justified by security being our top priority.
Innovating for customers
Nancy: That was really well said. And so on the note of culture, I'm actually super curious to hear your thoughts. As a former AWS builder of data protection and data security services, how does that culture get reflected in the security services? Now I'm talking about ESS, the external security services like GuardDuty and Security Hub. How does that get reflected in services that we build for customers?
Paul: Amazon is an innovation company. It's very interested in pushing the leading edge further. That means our best day is when the thing we are developing to put in front of customers is something they would not have known to ask for, but which will surprise and delight them, and they will recognize immediately when we put it in front of them, like the iPhone.
Among the early cloud services, I think S3 is an example of something nobody was asking for, but as soon as they saw it, they said, "That is how I want to do storage." And now the S3 model has been thoroughly copied; storage is rarely done in any style other than the one S3 offered. That's who we want to be.
In the security segment, for ESS, we do the same thing. We're not talking to customers and saying, "What do you need?" We're talking to customers and potential customers and saying, "What are your pain points?" Because if you're a true innovator and you hear the pain points, that is when the wheels begin to turn. You begin to think: if I built this, it would solve for that pain point, and it would be recognizable by the customers. So it wouldn't be a hard sell. And that's where GuardDuty came from.
Building secure systems in the age of AI
Nancy: So you've been working on DNS and anti-spam systems, and that's one of the contributors to you being inducted into the Internet Hall of Fame. How has your approach to building secure, scalable access systems evolved? Can you talk about the challenges from those early days and how they show up in cloud security today?
Paul: At the time that I came into DNS, there was only one implementation, written in C. It was called BIND. It had come from the University of California at Berkeley as part of the BSD effort. I was working at a minicomputer company, now defunct, and they had an operating system based on BSD. A lot of us gravitated there. At that time, the C language did not have prototypes. If you wanted to make sure you were calling some function with the right arguments (the right number of arguments, of the right types, in the right order), you either went through a major hell to get something called lint to work, or you did without.
But when I inherited that code base, there was a function that took 13 arguments and was being called in several places with only nine. You could do that in C, and there'd be no warnings at that time. And that was the big problem: we didn't have enough understanding, and the machines were not yet beginning to help us understand the complexities of what we were creating.
I worked on the BIND code base for three years and had my own fork of it before I ever knew what an RFC was, because the problems were in the code, not in the correct implementation of a standardized protocol. Later, I created some additional RFCs for new features. But since you're asking about the early days, that's what it was like. The way that shows up now is structurally similar: in every security engagement between an attacker and a defender, there are some asymmetries.
For one thing, the attacker, if they win, knows how much they won, whereas a defender, if they don't lose, doesn't know how much they would've lost. That makes it very difficult to do forward planning and budget justification for any CISO anywhere because you don't know what would've happened if you didn't make the investments you've already made.
And so, how do you prove that you need to keep doing that or to do more things like it? That's an asymmetry. Another problem is complexity. If we just strip it down to chalkboard essentials, we can measure a system's complexity by how many state variables it has and how many transition functions it has.
One transition function would be to read a packet from the network. That affects a state variable, namely the buffer the query goes into, and so forth. If you just take a rough order of magnitude of state variables and transition functions, that'll tell you in general how hard it would be to understand that thing.
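Paul's chalkboard measure can be made concrete with a hypothetical toy (nothing here is from BIND or any AWS system): list the state variables a system holds, list the transition functions that mutate them, and treat the rough counts as a proxy for how much there is to understand.

```c
#include <string.h>

/* Toy server state: three state variables. */
struct server_state {
    char buf[512];      /* state variable 1: the query buffer       */
    int  bytes_read;    /* state variable 2: how much of buf is used */
    long queries_seen;  /* state variable 3: running query counter   */
};

/* Transition 1: a packet arrives, mutating buf and bytes_read. */
void read_packet(struct server_state *s, const char *pkt, int len) {
    if (len > (int)sizeof(s->buf)) len = (int)sizeof(s->buf);
    memcpy(s->buf, pkt, (size_t)len);
    s->bytes_read = len;
}

/* Transition 2: the query is consumed, mutating the counter
 * and resetting the buffer length. */
void consume_query(struct server_state *s) {
    s->queries_seen++;
    s->bytes_read = 0;
}

/* Rough complexity estimate: each transition can in principle
 * touch each state variable, so the surface you must understand
 * grows multiplicatively as either count rises. */
enum { STATE_VARS = 3, TRANSITIONS = 2,
       COMPLEXITY = STATE_VARS * TRANSITIONS };
```

In a real resolver or firewall, both counts are orders of magnitude larger, which is exactly why unsupported, unexamined components raise the fraction of complexity nobody on the defending side understands.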
At the moment, the bad guys are studying our systems. Because that's how they make money, they have a continuous incentive to go to school every day and find vulnerabilities or buy zero days or whatever they do. We don't have that continuous incentive; most of our customers are not in the business of defending themselves. They are in business to create value for somebody. And defense is a cost center, not a profit center, as it would be for the attacker.
If your attacker understands your systems better than you do, they're going to win more often than they lose. That means that our ability to understand is a key predictor of future outcomes.
I have told somebody: you've got two firewalls that are out of support. The company that made them has not issued a patch in several years. I know you want to spend money somewhere, but first, turn those off, because getting rid of things that nobody understands (nobody except the bad guys) is going to improve your security posture, which will improve your outcomes.
It predicts a better outcome because it raises the fraction of your complexity that you understand. Those are the primary investments I tell anyone who's trying to defend something to make. They've got to work on that.
That is like not having lint in 1985 and seeing a program call some function with the wrong number of arguments. I am not going to be a naysayer about AI, and I understand we've got to talk about it; this is my contribution to making sure we mention AI. To the extent that AI is being sold to anyone on the basis that you won't have to understand what you're doing anymore, because this new, cool tool will do it for you, that is increasing complexity and thus reducing the fraction of what we actually understand. I would like to guide AI tool makers, solution makers, buyers, and users: please focus some of your AI investment on understanding what you have, on studying the telemetry you're already getting, cross-correlating it, summarizing it, and looking for patterns.
Do a bunch of that, which on my team we sometimes call telling the customer to eat their security vegetables. Everybody likes the steak. Not everybody likes the broccoli. It turns out the broccoli's important too. AI is going to bring about a lot of changes to society. But it could also enrich and enable bad guys more than good guys unless we invest properly.
The biggest lie the security industry tells itself
Nancy: Finally, let's wrap up on a spicy question, Paul, because you did ask me last night, and I deliver. Paul, you've been in the trenches of internet security for decades. What's the biggest lie the cybersecurity industry tells itself?
Paul: So in my fifth and what I hope will be my final startup, we had the requirement to show up at RSA and Black Hat, and, you know, if you have a booth on a trade show floor, you're walking the floor, going up and down trying to find your booth. You meet a lot of people.
Some of this is just normal business culture seeping in, like: can I scan your badge, please? I guess, but why are you standing in the aisle instead of in your booth? Because that particular employee gets compensated based on how many badge scans they have. And they figured out a way to game the system.
That seems bad to me, but it also seems inevitable. We built a system out of humans. Humans have human nature. But the biggest lie is to go into one of those booths. Get somebody to walk you through the demo and explain their value proposition and so forth. The biggest lie is that if you write me a check, I will keep you safe.
That's bullshit.
Nancy: Thank you, Paul.