Peter: That ends up being the path of least resistance, which is natural for humans. Yeah, you know what? I've got other work to do. I look at the pile that's sitting behind me of things that need to get done, and if you're telling me that this is okay, then let's start there.
Caroline: From Cobalt at home, this is "Humans of InfoSec," a show about real people, their work, and its impact on the information security industry. I'm here with Pete Chestna. He is the CISO of North America at Checkmarx, a company that's on a mission to provide the technology, expertise, and intelligence to enable developers and enterprises to secure the world's applications. What a great mission.
Pete believes in building, leading, and developing high-velocity, agile, and DevOps teams with security as a first-class citizen. In addition to being an engineering and security leader, Peter regularly presents at industry conferences, goes on podcasts like this one, writes articles, and speaks to the press about DevSecOps, AppSec, and adjacent topics. Pete has also been granted, not one, not two, but three patents. He likes whiskey, tourism, astronomy, model rocketry, and listening to Rush in his spare time. Pete, welcome to "Humans of InfoSec."
Peter: Thank you, Caroline. It's an absolute pleasure to be here.
Caroline: Pete, you have told me that you consider yourself to be a lifelong developer. And as we were prepping for this podcast recording, you were showing me some of the blinking lights behind you, which you programmed yourself. I would love to hear about your passion for coding. How did you get into it, what kind of coding did you start off doing, and what do you love about it?
Peter: So, it started in high school. We actually had a mainframe in my high school, if you can believe it. We had a Fortran class, an F77 class. I took that class because I was curious, and I was hooked: that idea of breaking down problems into smaller and smaller pieces, writing functional programs. We went off to a programming competition at Worcester State College with a couple of friends from the class and absolutely loved it. It was just such an incredible experience that it really decided my career. It was a no-brainer after that.
Caroline: Incredible. And so you've taken that foundation as a developer and then you grew into AppSec. Can you tell me a little bit about your self-described accidental fall into the application security area?
Peter: Yeah. When I would change companies or be looking for a new job, the recruiters would always ask, "Well, what language do you wanna code in? And what problem do you wanna solve?" And I'm like, "You know, I really don't care. I like to solve problems, and I like to get paid for it, so give me anything." And at that time... And I still have a couple of startups left in me, but I like that idea of starting something from nothing, of building something of value from scratch.
So, Veracode was a new startup in 2006. I was number 21. And, you know, we all fit in a small little conference room. I'm like, "Wow, this is awesome. This is exactly what I love." That was my second startup at the time, and I was super excited to be there. It happened to be in application security, so lucky me. And I've loved it ever since. It is something that I have a lot of passion about. I think that that space has a lot to learn and a lot of growth potential still.
Caroline: You have so much variety in your career progression. I can totally relate to the fun of being at a small startup. I'm actually still at the small startup that I joined six years ago. And then after Veracode, you went over to Bank of Montreal and transitioned from development leader to AppSec leader. Now you're at Checkmarx. And so, naturally, throughout your career over the last several years, you have developed quite the perspective on application security. And I'd really like to know: what do you think about this term, vulnerability "management"?
Peter: Yeah. It's so funny to think about how we talk about it as security professionals. We should be about reducing risk, but we talk about it as managing risk. And really, what I've seen in all of the AppSec...so over 17 years, I've seen hundreds of AppSec programs. Bank of Montreal was just another example of it, where you have your policies and your controls to say you have to do this on this timeframe, and then whatever you find, here are the things that need to be fixed. And if that was the end of the conversation, we would be in a much better place today as an industry.
But what happens is you have the easy button, or the off-ramp, or the "hey, I can't" or "I won't," for whatever reason, some of them good, some of them not, where the business needs to take accountability and assume that risk, where you say, we're not going to do this. And if that isn't in balance, you have a problem. There are a number of factors to this. You can overscan, and, you know, all of this tooling, in all of cyber, not just AppSec, can bury you under more work than you could do in your lifetime.
So, if you stand that up in front of a developer and say, "Hey, I know you've never hiked before, but, you know, let's do Everest," they're gonna look at that, turn around, and walk away. If you, instead, say, "Hey, we're gonna build our maturity together, let's start with the really critical stuff," then you really focus yourselves on the things that you have to do, and you don't open up the gauntlet of, here are all the things I could find, so let's go look at fixing them all. You know, you think about percentage targets, and while that's interesting, I care more about focusing on the important things.
What is the most important thing that I could do today? When vendors would come in and talk to me at the bank, I would say, "All right, I have three questions for you. Once I put your tooling in and get it into my production environment, if I had time for a phone call: one, who would I call? Two, what would I ask them to do? And, three, why am I asking them to do it?" It's a way to frame up the fact that we're not prioritizing things properly. We're not doing enough fixing, and we're doing more of the "hey, I'm gonna write out this exception," which will take me a couple of hours, and then the AppSec people will have to go look at it, and they're gonna review it, and we're gonna do that every year. This back and forth of still not fixing it. And then you look at it and say, "Okay, I guess we're still not fixing it." And you go to the business leader, and they say, "Yep, I'm okay with that, because I got other stuff to do."
Caroline: I mean, imagine that. What a novel idea, asking someone to do work and telling them why it's important and helping them to prioritize it. I mean, it's totally brilliant, and I think you and I are relatively unusual in the scheme of all the humans on the planet in that we've each seen a ton of AppSec programs, and that's just not how it goes some of the time. Pete, you and I were chatting about something pretty specific, which is the way that compliance frameworks and auditors look at vulnerability management. I wonder if you'd share some of that with our listeners here as well.
Peter: Yeah. So, when you go for an audit, whether that's second line, or your corporate audit, or your regulator, they ask you for your policies, your procedures, your controls, and then they ask for evidence. And, unfortunately, the way we write them... And some of this is practical, right? It's practical that we can't do everything. I get that. And the question is, well, are we holding ourselves to a high enough bar to say, from a quality perspective, from a security perspective, are we doing enough prior to release? And when you look at the way these vulnerability management programs work, you say, "Hey, you've gotta scan here. You've gotta fix these. Oh, yeah, but if you can't, you can just do this thing."
That ends up being the path of least resistance, which is natural for humans. Yeah, you know what? I've got other work to do. I look at the pile that's sitting behind me of things that need to get done, and if you're telling me that this is okay, then let's start there. And instead of starting there, I wish that we would take a different perspective: reduce the amount of things that we're finding to the things that we're capable of fixing, and then mature that over time. You know, they say in DevOps, if you stink at something, do it more.
So, by fixing these things, we learn from those mistakes. If we do them in close proximity to when they were written, then we get better at writing them in the first place, and we don't make those mistakes. You know, I always talked about things like SQL injection, where it's no more expensive, from a development-effort perspective, to write it correctly and securely than it is to write it insecurely. And it's not malicious. I didn't know any better. I didn't know there was a fork in the road there. If I'd taken that other path, I wouldn't have this rework. DevOps is all about eliminating rework. Rework is evil. Make all the work visible.
So, as we look back on the rework that we have to do, or the filings we have to do for our vulnerability management program, well, can't we be better at that? Show them the path that doesn't cost any more. I'm not telling you that you have to take twice as long to program this; I'm just saying, if you did it this way, you wouldn't have to do it twice.
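To make the SQL injection point concrete, here is a minimal sketch in Java (table and variable names are hypothetical), showing that the parameterized version takes essentially the same development effort as the injectable one:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class UserLookup {

    // Vulnerable: user input is concatenated straight into the SQL string,
    // so input like  ' OR '1'='1  changes the meaning of the query.
    static ResultSet findUserInsecure(Connection conn, String username) throws SQLException {
        Statement stmt = conn.createStatement();
        return stmt.executeQuery("SELECT * FROM users WHERE name = '" + username + "'");
    }

    // Secure: a parameterized query. Same number of lines, same effort,
    // but the driver treats the input strictly as data, never as SQL.
    static ResultSet findUserSecure(Connection conn, String username) throws SQLException {
        PreparedStatement stmt = conn.prepareStatement("SELECT * FROM users WHERE name = ?");
        stmt.setString(1, username);
        return stmt.executeQuery();
    }
}
```

The two methods are nearly identical in length, which is exactly the fork in the road Pete describes: taking the secure path costs nothing extra up front.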
Caroline: I love that. Pete, you know, I wonder if you have thoughts on, broadly speaking, the tools that security practitioners can use. And I don't actually mean tools like software, I mean tools like an approach whereby they can prioritize the pile such that items that actually can get fixed do get fixed. For example, do you recommend that folks look at what types of incidents were happening over the past year, or do you recommend that folks try and do some sort of a top-10 list? If somebody's listening and they're thinking, "Wow, I really wanna do that. I really wanna help prioritize this seemingly never-ending list of actions for developers to take," what high-level advice do you have for us?
Peter: Some of this is about not wasting people's time. So, in this space, specifically in open source, if you look at your software composition analysis, it's gonna spit out, you know, this bill of materials and all of these CVEs. And what a lot of the vendor community tells you to do is go patch this, and then go patch that, and go patch this other thing. But what they're doing is having you do it at the transitive dependency level. So, instead of fixing something in Struts by replacing Struts, which might have a half a dozen or a dozen vulnerabilities in it, one fix, one change cycle, one release cycle, they say, "Well, go do a dozen of those things," which is a dozen changes, a dozen release cycles, a dozen test cycles. And it might not even work, because those things weren't built together.
So, how do you know that's a non-breaking change? Have you put in the work to, you know, get those tests in there that prove that it's going to work the way you expect it to? You might go from a security incident to an outage, which is not better. So, part of it is finding high-leverage targets. Is there a better place to fix it that allows you to fix multiple things at once, to say, "I fixed it here and I fixed a dozen things in the same cycle"? Instead of doing one thing 12 times, do it once and get that bang.
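As a rough illustration of that high-leverage fix, here is a hypothetical Maven snippet. The Struts artifact name is real, but the version numbers are chosen purely for the example. Bumping the one direct dependency pulls in the fixed transitive dependencies it was built and tested with, in a single change, test, and release cycle, instead of pinning a dozen transitive libraries one at a time:

```xml
<!-- Before: an old direct dependency dragging in a dozen vulnerable
     transitive dependencies (hypothetical versions). -->
<dependency>
  <groupId>org.apache.struts</groupId>
  <artifactId>struts2-core</artifactId>
  <version>2.3.5</version>
</dependency>

<!-- After: one upgrade at the direct-dependency level. The fixed
     transitive versions it was built and tested against come along
     with it: one change, one test cycle, one release. -->
<dependency>
  <groupId>org.apache.struts</groupId>
  <artifactId>struts2-core</artifactId>
  <version>2.5.33</version>
</dependency>
```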
Caroline: Yeah. That's amazing. Peter, maybe someone listening to the podcast is on a security team and maybe they've got an application security function, but maybe it's not particularly mature or, you know, whatever phase of company they're in, whatever industry they're in, you know, it doesn't actually require them to have highly technical AppSec experts on staff. In a case like that, you know, presumably, they're working with some third parties, maybe to do some pentesting, maybe to do some scanning. Who is a person who can make that call? You know, is that sort of a technical architect on the company side? Is that a pentester? You know, who should we be asking to help us figure that kind of thing out?
Peter: So, I might meld a few people into more of a team than an individual. If you don't have an AppSec professional, then use smart people and ask them questions: What are we afraid of? How could we fix this better? How could we, you know, make more of our effort? Engineers and security professionals are problem-solvers. Give them the problem and they will come up with unique solutions. Whatever I've come up with, I'm sure there are better ideas out there. And for each individual company, there might be a better place to fix things.
I mean, you think about how a company writes code. They've got their first-party, their second-party code, they've got their open source code. How were those things put together? What are the vulnerabilities that you're seeing? Do you see commonality in these things? Is there something that we could do? And sometimes they're gonna come upon the same solutions that we would provide them, but because it came out of their heads, they're more glued to it. They're like, "Yes, this is the right way. I'm super excited about this. I solved a problem today." I love that. I mean, my job as a leader now is to solve people problems, but it still, for me, ticks that box of I'm solving problems.
Caroline: Yeah. That's awesome. So many of the folks in this field, whether they be on the security side or on the development side, get so much satisfaction out of solving problems. You know, why not allow them that pleasure? Why not give them that joy? Pete, speaking of pleasure and joy, I can think of something that's not particularly pleasurable or joyful, which is you've come up with what I consider to be a very fitting analogy for vulnerability management and work queues. I wonder if you would tell us about that analogy. And I know this is a talk you're giving, so I wonder if you might give a little preview of that talk to our listeners.
Peter: Absolutely. Thank you. So, it's called "Vulnerability Management: Work Queue or Landfill." And again, it goes back to those controls, to my policies that allow for the papering-over of the vulnerabilities. So, I'm going to "manage them into existence." And now I'm better, because I've managed it. You know, I'm technically compliant with my policies, but I haven't reduced risk. In that talk, I talk about, you know, the fact that your tooling can bury you, and you shouldn't allow it to. You should be able to focus your efforts based upon the maturity of your developers, the maturity of your security team, and your ability to fix things and learn from mistakes.
So, you know, you even think about... You go down that road where they say, "Well, we're just gonna train them." Well, the question is, what do you train them on? I don't wanna give them, you know, two days of secure Java coding when half the things are not gonna be relevant to them. I wanna use the output from the tooling to say, "Here are the things you struggle with, and here's how we're gonna fix 'em." It is a truism that it is harder to read code than to write code. Oftentimes, the people that made those mistakes are long gone, moved up, or whatever. So, you need to think about helping them get over that hurdle of understanding what's going wrong and what we're capable of doing, so that we actually make progress.
If you put too much work in front of someone, they're gonna walk away. They're gonna look at that and say, "Well, I can't possibly..." Here's 10,000 vulnerabilities, go fix 'em. There's no way I'm doing 10,000, so I might as well not do any. It's different if I put 100 of the most important things in front of them and work with them. So this idea, and this appears in every one of my talks, is mutual accountability. One of the problems that we have as an industry is we pay developers to go fast. Take the governor off. We don't need brakes, just go.
And because we don't pay them to "do security," they don't. And you can't blame them. My bonus is based upon how fast I ship the code. And if you start to say things like, "Well, you know what, 20%, 30% of your bonus is now tied to how secure the code is," well, I'm not giving up 30% of my bonus. I'm gonna go to those security people and say, "Hey, can you help me? I don't really understand what this thing is." I don't wanna go through the exception process because this is gonna count against me. I want to be better.
So, that idea that we are in this together... Oftentimes, at my first startup, we'd get into these heated discussions, and my boss would say, "Okay, everybody stop. Take out a business card." A virtual business card, of course; he'd hold up a pretend one in his hand. "Look at the top left corner. Look, it says the same company name. We're all on the same team." The more we get to the place where security is everyone's job, and I care so much about it that that's what I'm going to pay you to do... And that's gotta go all the way up, right? I talk about, you know, going after the pointy end of the pyramid.
I don't wanna be down at the base trying to fight the battle with every single developer. If, instead, I go to the CISO and the CIO and say, "By mutual agreement, we are gonna report on this and we are gonna be held accountable for the results," those conversations are gonna flow downhill. The first thing you're gonna do when you show 'em a dashboard that doesn't look so nice is say, "Hey, let me get a breakdown by teams here. Who do I need to go help? Who do I need to speak to? Who do I need to call? What am I asking them to do?" And that will flow all the way downstream. So, think about the fact that we want to be held accountable for this.
There was a discussion I had with a CISO of a medical company where...I was in a meeting with him, and he said, "I am gonna enforce breaking the build." And I stopped him, I said, "You know, enforce is... I understand what you want to do, but could we change enforce to embrace? Can we get to a point where the developers come to you and say, I want you to break the build? We are no longer happy with where we are, and we wanna hold ourselves to a higher level." Because when they do that, they're signed on. If you're enforcing and cramming it down their throat, you're going to get compliance. And compliance sucks.
Caroline: Said like a true technical security professional. Pete, there's another topic that I really wanna get your opinion on, which is open source code, and specifically, when it comes to recent security events, topics like Log4j, Log4Shell, Log4-fill-in-the-blank. What do you think about this stuff? How do people think about it, and how should we maybe think about it?
Peter: So, it is true that open source is free. Unfortunately, people treat it like a free lunch, where really, it's more like a free puppy. There is inherent responsibility when you bring open source in. As a developer myself, and this comes up in my talks as well, we integrate and abandon. Meaning, I've got it implemented, it works, I don't need to think about it anymore. This needs to get to the place where we think about it like a patch cycle, where in IT, we queue up change, we queue up a bunch of it, and we say, in this period, we're gonna test this and we're gonna roll it out to the enterprise, and we're gonna now be more current. We're gonna be closer to what is reality today, which means it fixes a whole bunch of stuff. But sometimes there are no fixes at all; it's just moving forward.
Open source moves like any software. And the longer you let it sit, the more it rots, because it ages, but not in a good way. And you get to the point where the bill comes due when a Log4j comes up. And Log4j was a super simple example of what could be. It is a top-level library. If you ask any developer, they know whether they use it or not. Full stop. If I don't do Java, I know I don't use it. If I use Java, I'm probably using it, and I use it every day. But think about a classic example from, I don't know, 10, 15 years ago: Apache Commons Collections. If I asked a developer if they used that, no idea. Blank stare. What is that?
And then if you really need to nail 'em down, "Oh, the CVE is for version 3.2.1, how about that?" "Well, I don't know. I'll have to get back to you." But that is baked into literally tens of thousands of other libraries as a transitive dependency. In the place where I was working at the time, the estimate for the amount of work to upgrade Struts, because it appeared in Struts, was two person-years. As a security professional, what do you do with that? Two person-years. Because we integrated it, we abandoned it, it still works, who cares?
It's not a security problem until it is. And by the time it is, it's too late. We need to build the right muscle to say: I build tests around the libraries I use, so I know that if I drop a new version in, it's a non-breaking change, and I get used to that frequent re-release. I repave the roads, I put out the new version of the library, and I just stay current, because I capture all the goodness of the new versions. And if a CVE comes, if an incident comes, I already know how to do this. It's part of the work that I normally do, and it's not some, you know, weird three-month process where we're trying to get people to re-release software, something they should already be capable of doing. Again, in that DevOps mentality: if you suck at it, do it more, because you'll find a way to do it better.
So, that's kind of what I talk about: you need to have those good hygiene habits. You brush your teeth twice a day so you're not having your teeth ripped out. I think a Log4j is the rotten tooth. If you're brushing your teeth twice a day and just upgrading the libraries, then when that came out, even though that one was not something that... And that existed in the code for 10 years. I'm not jumping four major versions, I'm just saying, "Oh, yeah. Piece of cake. Ten minutes, I'm done. Let's move on with life."
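A minimal sketch of the "tests around the libraries I use" idea, assuming JUnit 5 and Apache Commons Text purely as examples: a small contract test that pins down the behavior your own code depends on, so a green build after a version bump is good evidence the upgrade is non-breaking.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.apache.commons.text.StringEscapeUtils;
import org.junit.jupiter.api.Test;

// A "contract" test around a third-party library. It encodes the behavior
// our code relies on, so when we bump the library version, a passing suite
// tells us the upgrade is a non-breaking change for the way we use it.
class CommonsTextContractTest {

    @Test
    void escapesHtmlTheWayOurTemplatesExpect() {
        assertEquals("&lt;script&gt;", StringEscapeUtils.escapeHtml4("<script>"));
    }

    @Test
    void unescapeIsTheInverseForOurInputs() {
        String original = "a < b && c > d";
        assertEquals(original,
                StringEscapeUtils.unescapeHtml4(StringEscapeUtils.escapeHtml4(original)));
    }
}
```

With a suite like this in place, repaving the road becomes routine: bump the version, run the tests, ship.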
Caroline: Yeah. Well, Pete, I am newly inspired to brush my teeth and the teeth of my children, but in all seriousness, I wanna thank you so much for taking the time to chat with me and share your experiences and thoughts with our "Humans of InfoSec" audience. We really appreciate it.
Peter: Thank you so much, Caroline.
Caroline: "Humans of InfoSec" is brought to you by Cobalt, a Pentest as a Service company. You can find us on Twitter @humansofinfosec.