In this episode, Jeremy welcomes Danny Allan, Chief Technology Officer at Snyk, for a wide-ranging conversation on how AI is transforming software development and cybersecurity. From productivity boosts to the growing sophistication of AI-powered attacks, Danny shares his perspective on the evolving responsibilities of developers, the implications of "vibe coding," and why security guardrails are more critical than ever. They also dig into how AI is being used to filter noise in vulnerability data, what realistic productivity gains might look like, and how Snyk is helping developers build securely by default.
About Danny Allan
Danny Allan is the Chief Technology Officer at Snyk, where he leads the strategic direction of the company’s developer-first security platform. Before joining Snyk, Danny served as CTO at Veeam and Desktone (acquired by VMware), and as Director of Security Research at IBM. With deep experience across security, infrastructure, and cloud, Danny is a champion for making security accessible and scalable for developers. A proud Canadian, he’s also an avid scuba diver, cyclist, and hockey enthusiast.
Alright. Welcome back to another episode of Modern Cyber. We've got a great conversation teed up for today, and I think we're gonna get into some really interesting areas because there is stuff literally happening every week as we record this episode. Recent events, recent developments, all kinds of stuff going on. I am delighted to be joined today by Danny Allan, CTO of Snyk.
As the CTO over at Snyk, Danny leads end to end ownership of Snyk's current core offerings and road map as well as the company's near term platform vision. Before joining Snyk, he was the CTO at Veeam and Desktone, which was acquired by VMware, and a director of security research at IBM. In his free time, he loves scuba diving, cycling, and playing hockey like any true Canadian does. Danny, thank you so much for taking the time to join us on Modern Cyber. Well, thank you, Jeremy.
Super exciting to be on with you. Awesome. Awesome. Well, let's dive in. And, you know, I mentioned kind of in the preamble that there is stuff going on literally every week. And unsurprisingly, as we record, it's AI.
And AI is really having impact every week. Every week we see something new that is now possible, but every week we also see things that are maybe not going as well as planned. So it's front of mind for everybody. And I'm curious from your perspective, you're someone who's been in the software development industry for some time now. How do you see AI changing the software development landscape?
Well, that literally, Jeremy, is changing week to week or day to day, as you say. When AI first came in, I saw a lot of developers across our customer base doing things like using auto complete. So they'd start typing code and it would auto complete, and it helped them be a little bit more efficient. More recently that has changed. What I'd say is that we're beginning to see, there's a term for this called vibe coding.
Now they're essentially prompting the editors to generate code for them, whole blocks of code rather than just completions. But ultimately what it's enabling them to do is be more productive. It helps them with documentation, it helps them with the creation of code. It helps them with all kinds of tasks that historically had been very tedious and time consuming for them to do. But along with that, do you see that they're getting, you know, massive productivity gains?
Let's say, like, I don't know if you could quantify it at all. Is this like a 10% or 20%? Or as we transition from the kind of the auto complete level of productivity boost to more the vibe coding, are we going from like 10% to like 50%? Any way to quantify it? It's hard to quantify. I think it really depends on the segment of the customers.
Okay. If you look at the large customers, I have one customer, for example, a large financial services bank. They told me they got a 10.6% productivity gain. I was like, how did you get so specific? Yeah, that's super specific.
What they were actually doing was calculating the delta between the code that was generated by their coding assistant and what was submitted, and apparently, you know, 10.6% of it came out of a coding assistant. I do think it's all over the map, but what I would say is it seems to average somewhere between 5 and 10% for the larger organizations. The smaller organizations, they definitely seem to indicate that they're getting greater gains. And I don't know whether that's just because they have less rigor and workflow and other things that they're doing. Yeah.
But I would say five to 10% is probably a reasonable number. Does that sound very impactful to you? Because that's not as high a number as I think a lot of people would assume it to be. Well, I think a lot of people assume that developers are coding from morning till night. They're doing eight hours of coding.
And as you and I know, that is not the case. Especially at large organizations, it's, what, maybe 20% of their time. And so it depends on how you look at the job. They're still doing a lot of interaction with the teams around them, they're still doing a lot of requirements gathering, they're still doing a lot of research. So five to 10% actually seems pretty substantial to me.
I think ultimately it will probably end up, if you look in the long term, that maybe we'll reach 20% productivity. I would be shocked if we ever get to 50% productivity gains. To me, that just seems unlikely given what developers are doing day to day. And when you say ever, do you mean, like, ever, ever, ever? Or do you mean, like, let's say, like, foreseeable next five to ten years?
Well, the way I see it is the role is changing. There was a statement, actually, the CEO of Anthropic said, I think, a few weeks ago, he said this in, I guess it was March. He said within six months, 90% of code will be generated by AI, and within twelve months 100% of code will be generated by AI. And I thought that that's just unrealistic. The timelines, like, I work with customers who aren't using any AI.
You're gonna tell me in six months that Right. That 90% of their code will be generated. But what I think is likely to happen, I do think eventually all developers will use AI as a tool, the same way that they use an IDE as a tool. But I think the role changes. It may be that they become far more prompt engineers.
Yeah. And guardians of the agents that are creating the code. They're still there. Yeah. But the role, like, is very different. Instead of typing code, they're prompting code or they're prompting agents to create components for them.
Yeah. That actually brings up another area that I think gets talked about a little bit less on the developer side or in terms of developer productivity enhancements enabled by AI. But you said something earlier that really brought it to mind, which is that, like, okay. So if a developer actually spends about 20% of their time writing code, a huge chunk of the rest of that is actually, like, understanding other parts of the code base. Right?
Because no modern piece of software is one repository or one block of code. Right? You've got, you know, the microservice that I'm working on, talking to the one that you're working on, talking to some other data stream, talking to some cloud service, or whatever the case may be. And actually just stitching all of that together, understanding all the connections and the data flows and whatnot, that actually seems like an area ripe for huge productivity gains. A hundred percent.
And I would argue actually in the short term that that is where the greater productivity is gonna come from. If you step into code that you haven't touched before, it's really difficult to figure it out. Even with the best documented code in the world, understanding what it's doing and how it's communicating is hard. You could actually go into your editor now and just say, can you explain what this piece of code does? And it will give you a concise overview of, you know, this is how it's factored, this is what it's touching, this is the database. And that does help you get into the flow or into the context much faster than you would otherwise.
Yeah. Yeah. Absolutely. So I've been giving a talk recently about API security and AI and API security. And I'll give you a little bit of the background, and we're not gonna go into the full kind of, you know, talk track around it.
But one of the things in in the API security space is that there's both pros and cons to what AI is bringing. Right? And so on the positive side, a lot of the stuff around API security actually some of the important kind of use cases are around analyzing the design of the API, usually represented by an API spec, or another common use case is analyzing log files from API access logs. Right? And both of these are things that are, you know, pretty structured documents, and there's enough of them out there.
And you can do a good job of providing tagged examples of, like, hey, here's an example of an API access log that indicates a cross-site scripting attack, or that indicates whatever. Right? And so then, you know, ingesting log streams in the hundreds of millions of records per month or whatever becomes less daunting. Right?
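As a rough illustration of the tagged-examples approach Jeremy describes (a minimal sketch; `llm_complete` is a hypothetical stand-in for whatever chat-completion client you actually use, not a real API), labeled access-log lines can be turned into a few-shot classification prompt:

```python
# Sketch: classify API access-log lines with a few-shot prompt built from
# tagged examples. Replace `llm_complete` with your actual model client;
# it is a placeholder here, not a real library call.

LABELED_EXAMPLES = [
    ("GET /search?q=<script>alert(1)</script>", "cross-site scripting probe"),
    ("GET /users?id=1 UNION SELECT password FROM users--", "SQL injection probe"),
    ("GET /api/v1/orders/42", "benign"),
]

def build_prompt(log_line: str) -> str:
    # Labeled examples first, then the line we want classified.
    shots = "\n".join(f"Log: {line}\nLabel: {label}"
                      for line, label in LABELED_EXAMPLES)
    return ("Classify each API access-log line as benign or as an attack type.\n\n"
            f"{shots}\n\nLog: {log_line}\nLabel:")

def llm_complete(prompt: str) -> str:
    raise NotImplementedError("wire this to your chat-completion client")

def classify(log_line: str) -> str:
    return llm_complete(build_prompt(log_line))

print(build_prompt("GET /admin/../../etc/passwd"))  # inspect the assembled prompt
```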
And AI is very, very good at repeating this process. So awesome. We get some good productivity gains out of it. But the flip side of it is in some of our own testing, we put APIs online. They start getting traffic within, like, three minutes, and that traffic looks increasingly intelligent.
Right? It starts with kind of a reconnaissance ping looking for just the presence of an endpoint of some type, and then it's like, oh, well, let me check. Are you running, you know, I hate to use this example, but, like, you know, are you running Log4j? Are you running nginx? Are you running certain things where, you know, you get these responses back?
And so, like, there's clearly also danger brought in by AI in the API security space. How do you see the downside on the software development side of, you know, these AI productivity boosts? Well, there are a number of threats that come from it. First of all, AI can hallucinate. Let me start with kind of the easy ones, without external threat actors.
I saw a study recently on arXiv, the research repository hosted by Cornell. It said that with commercial models, 5% of the time when they were generating code they completely hallucinated a package that they pulled in, and if you were using an open source model, it was over 20% of the time that it hallucinated and brought in a package that didn't even exist. So hallucinations clearly are one. Yeah.
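One cheap guardrail against exactly this failure mode, sketched here against the public PyPI JSON API (an illustration, not Snyk's approach), is to verify that every package an assistant proposes actually exists before installing it:

```python
# Sketch: reject hallucinated dependencies by checking that each package an
# assistant suggested actually exists on PyPI before it gets installed.
import urllib.request
import urllib.error

def package_exists(name: str) -> bool:
    """Return True if `name` is a real package on PyPI."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except urllib.error.URLError:     # covers 404s and network errors
        return False

suggested = ["requests", "flask-security-utils-pro"]  # second one is made up
for pkg in suggested:
    status = "ok" if package_exists(pkg) else "NOT FOUND, possible hallucination"
    print(f"{pkg}: {status}")
```

Existence alone isn't proof of safety, since attackers have reportedly begun registering commonly hallucinated names, but it does catch the pure fabrications.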
Secondly, they introduce vulnerabilities in the code that they're creating. There's a study, it was a different study but on the same site, from February of this year: 27 percent of the code generated by Copilot introduced a vulnerability. That's a pretty high percentage, especially if you're pumping out that much code. Okay. I mean, that's more than a quarter, like, that is a high percentage. It's a high percentage.
Think about that. Like, a quarter of the code that you're putting out has a vulnerability. And I guess I am not surprised, Jeremy, in some ways, simply because it is being trained on all this permissive open source code, and of course there are lots of vulnerabilities out there in permissive open source code, right? So the accuracy and the security of code being generated is something that we need to be very concerned about, in my opinion. In fact, I think something simple that we could start with: rather than training those AI generating agents on all code, maybe we just train them on the code of senior developers who have been coding for a longer time and have better practices, rather than kind of all the code.
And there are probably ways to make that improve over time. Yeah. You know, as you were giving that answer, I just had a visceral reaction, and it's maybe the wrong reaction. But I just put this out there because I think some people who hear that might be like, hey. If I had a developer who is introducing vulnerabilities 27% of the time, like, okay.
At the very least, they're going on a performance improvement plan, and they're going to something like security awareness training, and we're sending them to Secure Code Warrior to do some online training courses and things like that. But I mean, I might also just be like, you know, this isn't maybe working out so well. Right? Yeah. The thing to realize though, we tend to jump, or at least I tend to jump, to a vulnerability that is, for example, they forgot to do input validation against SQL injection.
Right. The reality is a lot of these vulnerabilities are more subtle than that. So for example, not rate limiting input so that it causes a denial of service. Yep. And that's not necessarily an explicit vulnerability.
Right. It's an implicit architectural vulnerability, but still, nonetheless, a vulnerability that needs to be considered. Yeah. And to your point, I mean, something like that might also be mitigated at another layer of the stack using something like an API gateway or a WAF or who knows what. Right?
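To make the rate-limiting example concrete, here is a minimal in-process token bucket. It assumes a single server; as Jeremy notes, a real deployment would often enforce this at a gateway, a WAF, or a shared store instead:

```python
# Sketch: a token-bucket rate limiter, the kind of control whose absence
# is an "implicit architectural vulnerability" rather than a code bug.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: int):
        self.rate = rate              # tokens refilled per second
        self.capacity = capacity      # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                  # caller should return HTTP 429

bucket = TokenBucket(rate=5, capacity=10)   # ~5 req/sec, bursts of 10
print(all(bucket.allow() for _ in range(10)), bucket.allow())  # True False
```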
So I get what you're saying there. But something like, you know, lack of input validation, lack of sanitization, things like that are a little bit more concerning, especially as we know from the work that we've done over here at FireTail. Like, improper handling of inputs and outputs is at least a contributing factor in more than a third of API-based data breaches. And, you know, also things like overly permissive queries to the database. Right?
When what I really need is Danny Allan, but what I've pulled out of the database is Danny Allan plus phone number, plus email address, plus home address, plus social security number, and so on and so on. Right? And then I just rely on the front end to clean that up or not display the data. I'm curious, like, if you think about developers going down this path, do you feel like there is guidance and training as far as, like, hey, you get this AI generated block of code, here are the things you need to look for in terms of checking it for security or vulnerabilities?
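The over-fetching pattern Jeremy describes might look like the following sketch; the `users` schema is illustrative, not from any real system. Note that relying on the front end to hide the extra columns is not a control, since the full record still crosses the wire in the API response:

```python
# Sketch: over-fetching vs. selecting only what the endpoint needs.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER, display_name TEXT, ssn TEXT)")
db.execute("INSERT INTO users VALUES (1, 'Danny Allan', '000-00-0000')")
user_id = 1

# Overly permissive: pulls every column, SSN included, into the response.
row = db.execute("SELECT * FROM users WHERE id = ?", (user_id,)).fetchone()

# Least privilege: the endpoint only needed the display name.
# (Parameterized queries also handle the input-validation concern above.)
name = db.execute("SELECT display_name FROM users WHERE id = ?",
                  (user_id,)).fetchone()
print(row)    # (1, 'Danny Allan', '000-00-0000')
print(name)   # ('Danny Allan',)
```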
Well, one of the things that we've seen, and I feel like this is a bit of a softball, maybe it's because I work at Snyk. It's actually been a tailwind for us, because one of the things that we've seen at, for example, one of the largest technology companies in the US that has adopted Snyk is they said, you can use these AI generative coding assistants if you implement security controls, in their case it was Snyk, but if you implement security controls within your pipelines. And so, I think, yes, we should adopt these things. I think vulnerabilities are not a reason to not adopt AI. What I say is you should adopt it, just make sure that you have the security controls in place so that you're picking up on these things and correcting them, and educating the developers for that matter.
Yeah. As part of that process because you don't want to just correct it without them knowing. You actually want them to learn about it over time. Yeah. Yeah.
So it's an important kind of distinction there. Right. Hey, go adopt. But then let's have both kind of a design process that passes it through another layer of review and then some tooling that can automate that process and make it, you know, super efficient and fast, so that we're getting, like, the productivity gain but not slowing it down with a security review that's gonna take a lot of time.
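As one sketch of what "security controls within your pipelines" can look like, a CI step can fail the build when high-severity findings appear. The scanner command and JSON shape below are illustrative and from memory, not the exact Snyk CLI schema, so adapt them to your tool's actual output:

```python
# Sketch: a CI gate that scans the code and blocks the merge on
# high-severity findings. Command and JSON fields are illustrative.
import json
import subprocess
import sys

result = subprocess.run(
    ["snyk", "test", "--json"],       # or whatever scanner your pipeline uses
    capture_output=True, text=True,
)
findings = json.loads(result.stdout or "{}")
high = [v for v in findings.get("vulnerabilities", [])
        if v.get("severity") in ("high", "critical")]

if high:
    print(f"Blocking merge: {len(high)} high/critical findings")
    sys.exit(1)                       # non-zero exit fails the CI job
print("No high-severity findings; proceeding")
```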
Yes. A hundred percent. And one other thing, so you asked about AI and the threats that come with it. Yeah. There's the coding side.
I do think you highlighted something that's super important and people don't think about, which is attackers are using AI now too. Yes. It used to be that they were just, you know, scanning the internet looking for open IPs and, you know, doing fingerprinting and then doing penetration testing using Nessus or Metasploit or whatever. Yeah. Now we are actually seeing very sophisticated attacks that I believe on the back end are being powered by AI.
I used to spend time, well, I used to be a pen tester and do forensics after the fact. And it was interesting, I always used to tell people, the people on the other side of this, it's ROI for them. Yeah. They're trying to make money as well, and so they're gonna do what is easiest, and you don't have to be secure, you just need to be more secure than anyone else. Because there was time and effort to exploit a site.
That is changing with AI. Yeah. If the malicious actors have the power of AI, all of a sudden ROI on complex exploits comes way way down, because they don't have to put thought into a multi layered approach. They can point AI at it and say, okay, I found this type of entry point. Yep.
You know, go ahead. Yeah. And it's very very low effort to get to a full exploitation. So I think one of the things that AI brings on the negative side of the equation is more power for the attacker. Yeah.
This is such a good point. And I've kind of been telling, you know, saying a version of the same story for a little while now. And it's funny because, you know, I spoke at a conference called HostingCon in 2006, and we're going back a little ways, and I'm dating myself a little bit. Right? But it was the web hosting industry in the days pre-cloud.
And the web hosting industry primarily targeted at SMBs and kind of mid market companies. Right? And, you know, really, the industry at that time was providing a mix of, like, $10 a month, you know, personal website hosting, to $30 a month VPS, virtual private server, all the way up to, like, dedicated hosts. Right? And one of the problems that was discussed very much at that conference was, how do we clamp down on early waves of phishing sites?
And, you know, phishing was kind of just becoming a thing around that time period. Right? And it was the scourge of the industry at that point in time. And, you know, we said, oh, okay. So, actually, hackers have access to this infrastructure.
And this was at the time when, like, web hosting was kind of first becoming this, like, automated deployment process. You no longer had to go, you know, put your own server in a data center. You could go online and with a credit card, have a virtual private server in, like, five minutes. Right? We say, okay.
So hackers have credit cards, hackers have access to hosting infrastructure. By the way, credit cards might be stolen, but they have them nonetheless. Right? And so, you know, what's happened though is that's evolved. So it went from hackers have access to servers, and then it became hackers have access to the cloud.
And I know this from the time I spent at AWS. There were all kinds of anti fraud controls around looking for hackers setting up phishing farms and so on. And then, you know, we said, okay. Hackers have cloud. And then automation became the thing.
And to your point, you know, Metasploit and Cobalt Strike and these other toolkits that became, like, you know, great for pen testers. Yeah. But also great for hackers. Right? And with a little bit of automation and the tooling around that, most of which is open source and very easy to find.
Right? Like, great. But now we're at this point where hackers have AI as well. And hackers can do prompt engineering, and hackers can, you know, take any combination of those technologies that we just talked about and kind of put their attacks on steroids. Right?
Yeah. I think it changes the entire model. I always used to say it was the three P's. Right? Power, profit, prestige.
And it used to be that you worried probably most about the profit attacker. But it has completely changed the paradigm, because the cost for them is very, very low to do very, very damaging attacks by using AI. And it'll be an interesting world for the time to come, because the defenses against this are not up to par, because as you know the weakest link is always the people. Take something as simple as phishing. Phishing still exists today, but the quality of phishing, if you're using AI, you can personalize it specifically for you, Jeremy, knowing everything about you and your background and where you've been and the conferences that you attend. And it's way harder to defend against a phishing attack that is that personalized than one that is just generic and misspelled.
Yeah. Yeah. And I was just double checking something. I wanted to look for a stat that I had seen recently, and I just found it. The you know, as anybody who's worked in IT for a long time, you'd be familiar with kind of the MTTD and the MTTR acronyms.
Right? Mean time to detection, mean time to remediation. By the way, just a little sidebar for a second pet peeve of mine. It bugs me to no end that the mean time to remediation on vulnerabilities in production, that number has been like a hundred eighty days for twenty years now. Right?
And I just still struggle to understand organizations that can't move quicker on that. But the stat that I was looking for, that I found, is the mean time to availability of an exploit for newly disclosed vulnerabilities. That used to be in the kind of 44-day range, then it came down to 32, and it is now down as low as five days. And I do think quite a lot of that is AI powered and AI assisted coding to build exploits, you know, where you can do some prompt engineering and you say, like, hey, I'm trying to send, I don't know, a payload to this host.
Please make the payload do this particular thing, and you can describe the exploit in some level of detail there. And, you know, it's not too hard to create that. Yeah. I do think AI is behind that. Patch Tuesday has become Exploit Wednesday.
And in fact, yeah, someone recently proved that they could take a patch that has come out and reverse engineer it into a compromise or exploit using nothing but AI. And so it just highlights: AI is very powerful in the hands of organizations that use it for good, but it's also very powerful in the hands of threat actors as well. So over at Snyk, you guys primarily work with developers.
Right? And, you know, the Snyk that I know, and you can correct me if I'm wrong because I'm sure the company has evolved quite a lot over time. But I think of it as being primarily dedicated to, like, helping developers produce more secure code. Right? That's kind of the core of how I think about the company.
As you think about this, do you think that it fundamentally shifts the level of support that you have to give to developers as they build, so that they start to think about, like, new potential exploits and new types of exploits that hackers can find? Yeah. I think about it in two ways for our customer base. Obviously, we're using AI within our platform as well, but take that out of the equation. They are developing code, a whole lot more of it, a whole lot faster, using these coding assistants.
And so part of what we are doing is just using the current tool set to do the analysis within the pipeline as that velocity of code comes through; that's what I mentioned earlier, that, you know, we become the guardrails and test for it. Yep. Yep. But it's also true that as they are writing new applications, and we can talk about this, some of the new AI applications actually get me very excited. I think software development as a practice is going to change.
But let's talk about an application that has an LLM behind it. So it's using AI and it's you know, inferencing and it's giving you personalized data. Maybe it's just as simple as a chatbot. It increases the attack surface by definition. Some of what it would increase the attack surface on in that case is the fact that there is a large language model or a diffusion model behind it.
Right. You can jailbreak those, and in fact I heard someone say that using fuzzy AI they could jailbreak over 90% of the time to get outside of what they were allowed to do and extract data. Right? Right. So increased attack surface is one, just by virtue of the fact that there is an LLM there.
It's a new type of infrastructure. Another one is as we build applications that are agentic in nature, they are going to be talking to other agents. And one of the things that I worry about an awful lot in an AI world is authentication and authorization, and what that agent should have access to and what persona they take. Because if I trained the back end LLM on all the company data, but I'm not, as an individual, supposed to have access to it, the agent has to be smart enough to know who's coming in, what they have access to, what they're allowed to do. And if you look at things like model context protocol, there are not strong authentication and authorization mechanisms in communicating between agents or within the AI environment.
Yeah. Yeah. By the way, great plug for authentication and authorization. For those who have been following FireTail for a little while, you will know that we've been banging the drum about those being the number one and number two breach vectors for APIs for the last three years now; that has not changed, by the way, over that time period. In particular, the combination of no server-side authorization plus sequential integer numbering is like your number one toxic combination for APIs.
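That toxic combination is worth spelling out in code. A minimal sketch, with a hypothetical in-memory store standing in for a real data layer, of the vulnerable pattern and the fix: server-side ownership checks plus non-guessable identifiers:

```python
# Sketch: the IDOR "toxic combination" and its fix. The store and
# models here are hypothetical stand-ins for a real data layer.
import uuid
from dataclasses import dataclass

@dataclass
class Invoice:
    id: uuid.UUID
    owner_id: int
    total: float

INVOICES: dict[uuid.UUID, Invoice] = {}

# Vulnerable pattern (conceptually): sequential integer IDs and no
# ownership check, so any caller can walk /invoices/1, /invoices/2, ...
# and read every record in the table.

# Fixed pattern: random UUIDs are unguessable, and the server verifies
# that the authenticated caller actually owns the record.
def get_invoice(invoice_id: uuid.UUID, current_user_id: int) -> Invoice:
    invoice = INVOICES.get(invoice_id)
    if invoice is None or invoice.owner_id != current_user_id:
        raise LookupError("not found")  # same error either way: don't leak existence
    return invoice

inv = Invoice(id=uuid.uuid4(), owner_id=7, total=99.0)
INVOICES[inv.id] = inv
print(get_invoice(inv.id, current_user_id=7).total)  # 99.0 for the owner only
```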
So just a little sidebar there, but to follow on to something that you said, there is something that I've kind of likened this agentic chain to that I worry about a little bit, along the same lines as you. Which is that, you know, when we think about kind of workflow driven applications, they're very much, you know, step one, process this; based on the output, you know, pass the output to step two. Based on that, step two will function in some particular way. And we had to design cases around all of those things previously with our workflows or with our software or whatever the case may be. And so what we're trying to do is we're trying to facilitate that process and accelerate that process, right, and not have to design every single case statement outcome for each of those steps. But at the same time, what concerns me about it is a little bit back to what you mentioned around the five percent hallucination scenario, where five percent of this is just, like, purely hallucinated.
To me, it sounds a little bit like having the risk of the game of telephone that we used to play when we were kids. Right? I whisper a secret in your ear, and you whisper it to the person next to you, and so on, so on, so on down the line. And by the time, you know, I said my original phrase, which is something like, you know, Watson, please come quickly, it gets to the end, and it's like, damn the torpedoes down the hatches. Right?
It's like, you know, complete drift away from my original statement. That level of kind of agency, and I use that word very deliberately, that the agents are meant to have, that is a real concern to me. And so, like, we've been experimenting with a lot of stuff on our own platform. We've built some stuff in. But one of the things that we found is that, actually, your prompts need to be pretty explicit.
And the better the outcome you want, you know, the more you end up writing paragraphs of prompt text, with constraints, with descriptors, with all the stuff around it, as opposed to hundreds of lines of code. And, you know, it's easier on the one hand because maybe I can text to speech and just kinda describe what I'm going for and vibe code, whatever. Right? But it is kind of the case that, like, I'm seeing the best results where you go very explicitly down this process. A bit like if you've ever worked with a software development outsourcer.
Right? If you have all your PRDs very, very well written, you know, you can expect better results. How do you kind of see that balance playing out? Or how do you see the results working for companies in the real world? Well, I go back to what I originally said: I see people's roles evolving.
And so why will a developer be a good developer in the future? Because they're very good at prompt engineering, knowing what they want and validating that what comes back is the thing that they actually need. Prompt engineering is a massive thing. The other thing though that I am hopeful on: a lot of the early models were not reasoning models, and so they didn't explain as they went through. And if you look at some of the newer models that are coming out now, they do have reasoning, and that sequential telephone problem is getting better when they explain each step along the way.
So I do think things will get better, but I do think that the best developers and best engineers are going to be those that are explicit in what they ask for and very, very detailed. And so what it means is a developer ends up being more of a product person than a coding person. Like, it's the difference between a programmer and a software developer. One just writes code; the other actually has a thought process into what it's supposed to be creating. Right.
It's, you know, it's almost the difference between having the vision for what this thing should do as opposed to the mechanical engineering of how the thing should do that. Yes. Yeah. Yeah. Yeah.
Yeah. Interesting. Interesting. There's a couple other things that are on my mind, and then we mentioned vibe coding a couple of times. And I guess we've now seen kind of the first real world example of a SaaS business that, you know, it's hard to validate and verify, I will say.
I did have a look at the business. For those that are interested, you could just quickly Google, like, Reddit thread SaaS Cursor. I think it was Cursor, was the IDE or the AI generated prompting thing. And, you know, it was basically like, hey, I built a SaaS product that is generating real world revenue.
And then as soon as he kind of bragged about vibe coding it, everybody went and looked at it and found, you know, hard coded credentials and API keys and no authentication between the front end and the back end and all these types of things. And, you know, and then the next post was like, oh my gosh, I'm under attack. You know, a little bit of a cautionary tale there. Right?
Yes. A hundred percent. I actually just tweeted something the other day about someone who had vibe coded an application, and they had leaked credentials because of it. We're in way too early days to depend on vibe coding for high quality, high security, high performance, high scalability, high anything. That's why my own thought is that there's going to be a human in the loop for a long time to come.
In fact, I saw an article just posted on LinkedIn today that I very much agreed with, which is we should not be looking at vibe coding and coding assistants as an opportunity to reduce the developers. Right. Because I think that's just a very short sighted view. We're still gonna want them in the loop. We should be looking at AI and coding assistants as a way to augment and help make them more productive in what they're doing.
But assuming that you're going to eliminate a developer, you're not understanding all of the functions that a developer does in the course of their day. Yeah. Actually, by the way, I literally just had dinner with a group of CISOs here locally in the DC area last night, and we had a conversation around exactly this. And in their case, it's security operations that is the area that is very much getting the same kind of push from above to, like, hey, how can we boost productivity?
And there's another stat I'll cite that is kind of well known in the security operations community, which is that, like, pretty much every SOC analyst spends more than 90% of their time chasing down false positives. And by the way, that's also one of those things that's been that way for, like, ten years. Right? As long as SIEMs and, you know, things like Splunk have been a thing where we're aggregating logs. And I guess, actually, at your former employer IBM, I don't know if you worked on the QRadar side of the house, but, you know, that was kinda one of the first ones that was really a big kind of log aggregator and then created the idea of having detections and detection engineering on top of these log streams. But they're notoriously noisy, and, you know, low signal-to-noise ratio and whatnot.
And so the talk was very much, like, how do we make our SOC analysts more productive and so on. But exactly what you said, nobody was saying, oh, we're going to eliminate the SOC analysts. In the case of the conversation that was had, a lot of the emphasis was, we're gonna make them more productive. We might be able to avoid hiring more in the future, or hiring at the same rate that our data sources grow. And do you see a similar thing maybe playing out on the software development side?
I do. I see it maybe plateauing is the better word. Like, for the last decade, the number of software engineers needed has been exploding exponentially. I see it somewhat flattening out. I do think that's probably going to happen, but I don't think it's going to decrease.
I always use this analogy about pilots. Flight took place for the first time in 1905, 1906, I'm not sure of the exact year; it was the early 1900s. Sure. Autopilot was invented in 1912, over a hundred years ago. We still have two pilots in the cockpit. Autopilot has not replaced the pilots, but it's become super helpful.
In fact, for most of the flight, the pilots are there, you know, making sure that autopilot's working and monitoring everything and keeping systems running. But we still have pilots in the cockpit. And I see development as very similar to that. There's a lot of activities that are taking place. Yes, we now have things that will help them code, help them document, help them create tests, but you're still gonna want to have developers there prompting them toward what good looks like, making sure that what comes back is, you know, secure, that it's doing all the things that it's supposed to do.
Yeah. This is such a great analogy, because, actually, if you think about aviation as an industry, you know, there are a couple things that you can point to where you would say, like, hey, actually, over the last, I don't know exactly what time frame, but let's say over the last fifty years at least, aviation has just gotten safer and safer and safer. Right? And, you know, the number of air miles flown is just dramatically increasing.
And yet the pilot supply doesn't have to increase at the same rate, but there are a little bit more, like, physical constraints on piloting, because to your point, you need two pilots, or one pilot and one first officer, in at least every plane. Right? So it's such a perfect analogy for the situation that we're talking about. We've got a couple of minutes left. I wanna get your thoughts on a couple of other things just to kind of wrap up here.
So if you think about AI and security in the coding process, is there one thing that concerns you the most? Is it these 27% vulnerabilities? Is it, let's say, getting developers trained on understanding how to do the prompt engineering, how to make that transition from being a coder to being a product person? Is there, like, one above all that has you the most concerned? I think the thing that concerns me the most is if organizations start to use coding assistants and the velocity dramatically increases, but they have no guardrails for it.
Okay. Because, and I guess that is going back to the 27%, because we know people make mistakes, and, you know, they're going to continue to make mistakes, but if you continue to make mistakes at a faster velocity, you end up in a worse place. And so for me, you want to trust AI. Yeah. But you want to do it in a way that you have confidence that the outcome is going to be sufficient, and that requires the governance to be there. Yeah.
Yeah. Makes total sense. On the productivity side, I wanna throw something at you just to get a reaction on it, because I had a conversation recently with somebody around AI, and this person is extremely bullish on AI. Like, extremely, extremely bullish. Like, maybe one of the people that I've sat down with who was the most, like, we have to do this. We have to do this now.
Everybody has to do it. You know, really pushing his organization down that direction. And something that he said to me was that, you know, we should actually be striving for, like, a three x productivity gain. Like, 300% productivity gain for the company. And he goes, maybe we won't get there.
Like, that's entirely possible. It's even probable. But if we don't strive for it, we won't get the 10% even, or we won't get past the 10%. I'm not sure exactly. But his point was, if you don't aim super high, you don't get, like, meaningful change to the organization.
How do you react to that? And have you seen organizations that are pushing that hard in the AI direction? I do see, pardon me, organizations that are pushing very hard in the AI direction. I think it is amazing to be aspirational, whether it be three x or a hundred x, and at some point I say it doesn't really matter what that aspirational goal is. My recommendation for everyone would be not only to embrace AI, but embrace it with everything you have, because it is going to be industry changing in ways that we don't even begin to understand.
And those that understand AI the most try to use it in everything they do, and I do, like, on a daily basis. I would say rarely do ten minutes go by that I'm not using some type of AI in some way. Wow. I think if you don't have that cultural approach within your organization, you're just gonna be left behind by the organizations that do take that approach, because their productivity gains are going to so outpace the traditional organization that they're just gonna be left behind. Yeah.
So I think it's great to have that aspirational goal. Now, if you're asking me, do I think that we're gonna get three x productivity gains, I'm more skeptical of that. Maybe in the long run, I think it's very reasonable that we'll end up getting, you know, 50% productivity gains. I'd be shocked at the three x, but let's have big goals. Yeah.
Yeah. Yeah. I like it. For those who don't know about Snyk, what are some of the things that you guys are working on that you're most excited about that you can share with the audience? Sure.
So, well, we're known for developer security. We're known for taking all these security controls and helping developers be far more productive. In fact, we talk about develop fast, but stay secure. That's kind of our motto. And our whole mantra is push it as early as possible, shift left in the cycle, and do all of the security analysis.
If you ask me what I get most excited about, though, it's that we're using AI internally for things that we never have in the past. So for five years we've been using AI to detect vulnerabilities. We're not new to this game. We've been actually doing this since 2020, when we acquired a company called DeepCode. Yeah.
Yeah. But more recently, we've been using AI to generate the fix, make it easier for the developer. We've been using it to go and reduce noise on all the false positives. You know, one of the issues that has existed, Jeremy, is that we have these CVEs for all these open source components. Yeah. But the CVE might be in a function within that open source component that you're not actually calling. Yeah.
So you don't actually have the risk. Yeah. And the growth of CVEs has been over the top over the last few years. And we couldn't keep up with going and finding which specific function within that open source component was vulnerable. And so we started using LLMs ourselves to determine whether a vulnerable function was actually reachable.
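Reachability analysis of this kind can be approximated statically. Here is a toy sketch, nothing like Snyk's actual implementation, that uses Python's `ast` module to check whether application code ever calls a known-vulnerable function:

```python
# Toy sketch: flag a CVE only if the vulnerable function is actually
# referenced by the application code. Real reachability analysis builds
# full call graphs; this only scans call sites in a single file.
import ast

VULNERABLE = {("yaml", "load")}   # e.g. PyYAML's unsafe load

def called_vulnerable(source: str) -> set:
    hits = set()
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and isinstance(node.func.value, ast.Name)):
            key = (node.func.value.id, node.func.attr)
            if key in VULNERABLE:
                hits.add(key)
    return hits

print(called_vulnerable("import yaml\ndata = yaml.load(open('c.yml'))"))
# -> {('yaml', 'load')}: the vulnerable function is reachable, so it matters
```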
And it's not the fact that we're using AI to do that that is impressive. I think the thing that excites me the most is we're helping filter out the noise and helping people prioritize and understand what really matters by using AI. And if we can deliver that type of productivity, I think that's amazing for everyone in the industry. Those are the types of things that get me excited. That's awesome.
Awesome. And for anybody who doesn't know, Snyk is S-N-Y-K, and it's snyk.io, if I remember right. Correct? Yes. It's for "so now you know": S-N-Y-K.
So now you know. I never knew that. Yeah. So now you know. So now you know.
Awesome. Well, I think that's a great note to wrap today's show on. Danny, thank you so much for taking the time to join us today. I've really enjoyed this conversation. Tons of stuff happening on AI.
We'll have to have you on and revisit this conversation in, like, three months and see how much has changed just in that time period. I would love that, Jeremy. It was a great conversation. Awesome. For our audience, remember to join us on the next episode of Modern Cyber.
By the way, our breach series is ongoing. We are looking for more people to come share real world stories from the trenches. Have you been breached in an environment that you can talk about? Hopefully, not your current employer, maybe a past one. We'd love to have you on.
Share what you learned. We find that sharing is caring, and everybody can learn from real world experiences. We will talk to you on the next episode of Modern Cyber. Bye bye.