How to Strengthen Cybersecurity in the Age of AI

AI is granting software developers new abilities that were previously unimaginable. Generative AI can deliver complex, fully functional apps, debug code, or add in-line comments from simple natural-language prompts, and it's primed to advance low-code automation exponentially. But at the same time, this new generation of artificial intelligence can empower bad actors, meaning DevSecOps must evolve in step to further application security.
There are many potential cybersecurity concerns in the age of AI. For one, new large language models could be tasked with impersonating users and bringing more human-like interaction to social engineering efforts. AI technology could be used to cast a wider net of network attacks while perpetrators keep a low profile. At the same time, generative AI is anticipated to bring more sophistication to targeted hacks, like accelerating fingerprint hashing tactics.
I chatted with Dana Lawson, senior vice president of engineering at Netlify, to better understand the cybersecurity nuances of this new AI-driven paradigm. Below, we'll consider the most pressing vulnerabilities generative AI creates, how these tools could aid attackers' efforts and how network security should respond to win the AI race against malicious actors.

Possible Risks Caused by Generative AI

First, what are some of the top risks generative AI might pose to cybersecurity defenders? According to Lawson, the most obvious malicious use for LLMs is mimicking human patterns to mask bot attacks. For example, generative AI could generate cryptographic hashes, lending bots a convincing air of humanity. All of this will only intensify the bot traffic problem that network security teams have been fighting for years. “It’s the same narrative, but now it’s been exacerbated,” said Lawson. “This one is a rocket ship in uptake.”
A second area is using generative AI to amplify social engineering tactics. Identity theft is already a common concern for end users, but next-generation AI could enhance impersonation abilities and lead to more successful phishing campaigns. It also means that attacks at the edge using forms and email-based schemes could be harder to spot. There’s been a lot of good progress around protecting identity, said Lawson; however, there must be more education around the impacts of AI, and systems must evolve to look for new patterns.
There are also plenty of areas for identity to be faked within the CI/CD pipeline. In fact, OWASP lists inadequate identity and access management as the second-highest risk for CI/CD pipeline security. AI could heighten access control issues by scanning for authorization gaps in exposed endpoints.
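To make that concrete, below is a minimal sketch of the defensive flip side: probing your own protected endpoints for authorization gaps before an attacker's AI finds them. The endpoint list is hypothetical, and the sketch assumes the third-party requests library is available.

    import requests  # third-party; pip install requests

    # Hypothetical endpoints that should reject unauthenticated requests.
    PROTECTED_ENDPOINTS = [
        "https://api.example.com/v1/admin/users",
        "https://api.example.com/v1/deploy/keys",
    ]

    def find_auth_gaps(endpoints):
        """Flag endpoints that answer an unauthenticated request with 2xx/3xx."""
        gaps = []
        for url in endpoints:
            resp = requests.get(url, timeout=5)  # deliberately no credentials
            if resp.status_code < 400:  # a protected route should return 401/403
                gaps.append((url, resp.status_code))
        return gaps

    if __name__ == "__main__":
        for url, status in find_auth_gaps(PROTECTED_ENDPOINTS):
            print(f"Possible authorization gap: {url} returned {status}")

Running a check like this inside the CI/CD pipeline turns the same gap-scanning idea attackers could automate into a routine regression test.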

Distinguishing Between AI Bots and Legitimate Traffic

So, as malicious traffic becomes more discreet, how can software providers distinguish between AI traffic and real usage? Systems will surely require more input validation and spam detection around forms, email and direct messages, noted Lawson. Verifying the identity behind actions will also be crucial in areas like e-commerce purchases. “It has to be generative on both sides to capture what is real,” Lawson said.
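As a rough illustration of that kind of form-level screening, here is a minimal sketch combining a hidden honeypot field, a minimum time-to-submit and a crude link-density check. The field names and thresholds are assumptions for illustration, not a standard.

    import re
    import time

    LINK_PATTERN = re.compile(r"https?://", re.IGNORECASE)

    def looks_like_bot(form: dict, rendered_at: float) -> bool:
        """Heuristic spam screen for a submitted form."""
        if form.get("website"):            # honeypot: hidden from humans, filled by bots
            return True
        if time.time() - rendered_at < 2:  # submitted faster than a human could type
            return True
        if len(LINK_PATTERN.findall(form.get("message", ""))) > 3:
            return True                    # link-stuffed messages are classic spam
        return False

Heuristics like these won't stop a determined LLM-driven bot on their own, but they raise the cost of each attempt and produce labeled examples for training better detectors.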
For old-school denial-of-service attacks, repeatedly hitting the same URL is a telltale signature, and DDoS campaigns aren’t going away. Thankfully, we have a mature set of tools to monitor network traffic, said Lawson, and the big cloud providers already have machine-learning mechanisms in place to detect standard bot traffic.
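Here is a minimal sketch of that telltale signature, assuming access-log events carrying a client IP, URL and timestamp; the 60-second window and 100-request threshold are illustrative, not recommended values.

    from collections import defaultdict, deque

    WINDOW_SECONDS = 60   # sliding window length (illustrative)
    THRESHOLD = 100       # max hits per (ip, url) in the window (illustrative)

    _hits: dict = defaultdict(deque)  # (ip, url) -> recent request timestamps

    def record_request(ip: str, url: str, ts: float) -> bool:
        """Return True when one client floods one URL within the window."""
        window = _hits[(ip, url)]
        window.append(ts)
        while window and ts - window[0] > WINDOW_SECONDS:
            window.popleft()  # drop timestamps outside the sliding window
        return len(window) > THRESHOLD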
However, it’s too early to tell what this new wave of AI-driven bot traffic will look like. We’ll first need to learn how to train machines to detect when these new attack types are occurring and how the traffic is distributed, said Lawson. It will take time to develop models that distinguish between the two, but once that detection is accomplished, network security systems will evolve rapidly. “It’s the next wave of what is coming and how we approach digital products,” said Lawson.
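To sketch what such a distinguishing model might look like, the toy example below trains a classifier on per-session traffic features. The features, the synthetic data and the human/bot labels are all assumptions for illustration; in practice the labels would come from prior investigations. It assumes NumPy and scikit-learn are installed.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Per-session features: requests/min, distinct URLs, mean gap between
    # requests (s), error-response rate. Synthetic data stands in for logs.
    rng = np.random.default_rng(0)
    humans = np.column_stack([rng.normal(4, 2, 500), rng.normal(8, 3, 500),
                              rng.normal(12, 4, 500), rng.normal(0.02, 0.01, 500)])
    bots = np.column_stack([rng.normal(60, 20, 500), rng.normal(2, 1, 500),
                            rng.normal(0.5, 0.2, 500), rng.normal(0.3, 0.1, 500)])
    X = np.vstack([humans, bots])
    y = np.array([0] * 500 + [1] * 500)  # 0 = human session, 1 = bot session

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)
    print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")

The hard part Lawson points to is exactly what this toy glosses over: obtaining trustworthy labels once generative AI makes bot sessions look statistically human.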

How to Win the AI Race Against Attackers

So, what steps should security developers take to win the AI race against attackers? Well, Lawson encouraged transparency around common threats. She shared a handful of ways to prepare for the coming onslaught of AI-fueled cyberattacks:
  • Acknowledge it. Accept that a wave of generative AI-backed hacking efforts is coming. Software providers must evolve DevSecOps and bake AIOps into the tooling process. The future is always coming, and adapting to it takes a growth mindset.
  • Share with the community. “Open source is the bread and butter of everything we do,” noted Lawson. When every piece is proprietary, it gives actors more opportunities to infiltrate systems. As such, the developer community should continue to unite around open transparency and share knowledge of common threats and vulnerabilities.
  • Train bespoke models. For cybersecurity defenses to be effective, they will likely require unique models trained on the environment at hand rather than generic off-the-shelf AI (a rough sketch follows this list). However, common models are needed to set a baseline, and transparency around model creation can help the community standardize defensive strategies.
  • Be intentional. What problem are you solving for people? Lawson recommended seeing the bigger picture and considering developer experience so developers feel comfortable implementing new features. Lastly, it’s worth weighing the environmental impact of training large models to make sure the energy cost is justified.
  • Convince leadership to direct positive change. Make sure technical leadership is well aware of the problem, and help them see the value proposition of encouraging innovation in this area and how it could affect the bottom line.
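As a rough sketch of the bespoke-model idea above, the example below fits an anomaly detector only on traffic already observed in one’s own environment, so “normal” is defined locally rather than by a generic off-the-shelf model. The features and synthetic baseline are assumptions for illustration; it assumes NumPy and scikit-learn.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Per-minute traffic features: requests/min, mean payload bytes,
    # distinct endpoints hit. Synthetic data stands in for your own logs.
    rng = np.random.default_rng(1)
    baseline = rng.normal(loc=[5.0, 2000.0, 6.0],
                          scale=[2.0, 500.0, 2.0], size=(2000, 3))

    detector = IsolationForest(contamination=0.01, random_state=1)
    detector.fit(baseline)  # learn this environment's notion of "normal"

    # A burst of tiny requests hammering a single endpoint looks nothing
    # like the local baseline, so it should be flagged (-1 = anomaly).
    suspicious = np.array([[90.0, 150.0, 1.0]])
    print(detector.predict(suspicious))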

Remain One Step Ahead

To protect end users, organizations must take security as seriously as they take software quality. And to get ahead of adversaries in this new AI age, cybersecurity research into generative AI must outpace that of the attackers. “We have to be one step ahead,” said Lawson.
“Technology will come before compliance does,” said Lawson. Thus, organizations should reevaluate their security posture to be compliance-minded before regulations take hold. And although there may be some AI-washing in the market right now, on the whole, it’s a smart area to focus on and invest in, said Lawson. “It’s the future.”
