As artificial intelligence and machine learning models become more firmly integrated into the enterprise IT fabric and the cyber attack landscape, security teams will need to expand their skills to deal with a whole new generation of AI-based cyber risks.
Forward-thinking CISOs are already being asked to think about emerging risks, such as generative AI phishing attacks that will be more targeted than ever, or adversarial AI attacks that poison machine learning models to distort their output. And these are just a couple of examples among a host of other new risks that will emerge in what appears to be the AI-dominated era of the future.
It’s time to prepare for AI-powered attacks
There is still time to prepare for many of these risks. Only a small amount of demonstrable data shows that attackers are starting to use large language model (LLM) tools like ChatGPT to enhance their attacks, and most examples of adversarial AI are still largely theoretical. However, these risks will only remain theoretical for so long, and it is time to start building a bank of AI-related risk expertise.
The growing reliance on AI and machine learning models across all technology domains is expected to rapidly change the complexity of the threat landscape. In the meantime, organically training security personnel, onboarding AI experts who can be trained to assist in security activities, and evangelizing the hardening of AI systems will all go a long way.
Experts share what security leaders will need to shape their skill base and prepare to address both sides of the growing AI risk: risks to AI systems and risks from AI-based attacks.
There is some degree of crossover between the two domains. For example, machine learning and data science skills will become increasingly relevant to both. In both cases, existing security skills in penetration testing, threat modeling, threat hunting, security engineering, and security awareness training will be as important as ever, only now in the context of new threats. However, the techniques required to defend against AI-based attacks and to protect AI systems themselves each have their own unique nuances, which in turn will influence the composition of the teams called upon to execute these strategies.
The current AI threat landscape
A Darktrace study found a 135% increase in novel social engineering attacks from January to February 2023, offering some evidence that attackers may already be using generative AI to increase the volume and sophistication of their social engineering attacks.
“While it’s too early to tell in terms of data, and understanding that correlation doesn’t mean causation, we have some data points that point in that direction,” Darktrace Product Director Max Heinemeyer told CSO. “And qualitatively speaking, it would be silly to assume that they’re not using generative AI, because it has massive ROI benefits: it can scale your attacks, speed up your attacks, run more attacks in parallel. The genie is out of the bottle.”
Experts expect an increase in attackers using generative AI to create new text-based spearphishing emails at speed and scale, and possibly branching out into audio-based generative AI to impersonate others over the phone. Similarly, they could use neural networks to examine social media profiles and speed up their research on high-value phishing targets. Suffice it to say, the real risk comes down to a concern that CISOs should already be quite familiar with: more effective automation on the attacker’s side, Heinemeyer says.
“Whether you call it AI or machine learning or whatever, it all means they have better tools at hand to automate more of the attacks they’re doing. And that means attackers can run more customized attacks that are harder to detect and harder to stop.”
Skills to defend against AI attacks
So what does this mean from a skills perspective in the security operations center (SOC) and beyond? Automation of attacks is nothing new, but AI is likely to accelerate and exacerbate the problem. In some ways, this will simply be an exercise in getting more serious about recruiting and developing rockstar analysts and threat hunters who are skilled at finding and using tools that help them filter detections to discover and quickly mitigate emerging attacks.
This is probably going to start out as another classic spy-versus-spy cybersecurity situation. As bad guys increase their use of AI and ML-based tools, security teams will need their own set of AI automations to look for patterns associated with these types of attacks. That means, at a minimum, the entire security team needs at least a “light understanding” of AI/ML and data science in order to ask the right questions of vendors and understand how their systems work under the hood, says Heinemeyer.
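As a concrete illustration of what such pattern-hunting automation might look like, the sketch below trains an unsupervised outlier model on baseline email-sending metadata and flags a burst that looks machine-generated. The features, thresholds, and data are illustrative assumptions, not any vendor’s actual detector.

```python
# Sketch: flagging anomalous email metadata with an unsupervised model.
# Features and thresholds are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline traffic: [links per email, recipient count, send hour].
normal = np.column_stack([
    rng.poisson(1, 500),       # typically 0-3 links per message
    rng.poisson(2, 500),       # small recipient lists
    rng.normal(11, 2, 500),    # business-hours sends
])

# A burst of machine-generated spearphishing: many links, wide fan-out,
# odd-hour sends.
suspicious = np.array([[9, 40, 3.0], [8, 55, 2.5]])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))  # predict() returns -1 for outliers
```

The same approach generalizes to login events, DNS queries, or any telemetry where attacker automation shifts the statistical baseline.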
For larger, more mature organizations, security leaders may be advised to begin developing stronger internal data science and ML expertise. There are many global SOCs that have already started investing in hiring data scientists to do personalized machine learning, many that started long before ChatGPT hit the scene, according to Forcepoint CTO Petko Stoyanov. He believes this trend may accelerate as SOCs try to land threat hunters who will be able to navigate a threat landscape supercharged by malicious AI tools. But security leaders are likely to face a talent shortage on that front. “Honestly, try to find someone who does cyber and data science — if you want to talk about a needle in a haystack, this is it,” Stoyanov tells CSO.
This will require some creative staffing and team building to pull off. Based on his experience in the field, Stoyanov suggests teams of three experts for rapid hunting: a threat hunter with strong security experience, a data scientist with analytics and machine learning experience, and a developer to help them productionize and scale their findings.
“What usually happens is you need a developer between those first two people. You have the big brains who can do the math, the person who can go find the bad guys, and then someone who can implement their work into the security infrastructure,” explains Stoyanov.
Taking a single threat hunter and giving them data science and development resources at the same time could significantly increase their productivity in finding adversaries on the network. It also avoids the disappointment of searching for those unicorns that have all three sets of specialized skills.
In addition to generating social engineering attacks, another risk from generative AI in the hands of threat actors is the automated creation of malicious code to exploit a wider range of known vulnerabilities.
“People suggest that because it’s easier to write code, it will make it easier to create exploits,” Andy Patel, a researcher at WithSecure, tells CSO. Patel’s team recently produced a comprehensive report for the Finnish Transport and Communications Agency detailing the potential for AI-enabled cyberattacks. One example it flags as a potential cybersecurity risk is a new ChatGPT-enabled tool that makes it easy to enumerate security issues across a set of open source repositories. “These models will also make it easier for people to start doing it. And that could open up a lot more vulnerabilities, or it could mean a lot more vulnerabilities are fixed,” he reflects. “We don’t know which way we’re going to go.”
So in terms of vulnerability management, this could also become an AI arms race, as security teams strive to use AI to fix flaws faster than AI-equipped attackers can craft exploits for them. “Organizations could get people to start looking at these tools themselves to cover their own vulnerabilities, especially if they’re writing their own software,” says Patel. In the vendor world, he expects to see “a lot of startups using LLMs to do vulnerability discovery.”
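As a rough sketch of what automated vulnerability discovery looks like at its simplest, the toy triage function below flags risky Python constructs by pattern matching. Real LLM-based tools would reason far beyond regular expressions; the patterns and messages here are illustrative assumptions only.

```python
# Toy sketch of automated vulnerability triage via pattern matching.
# Illustrative only: real tools use far richer analysis.
import re

RISKY_PATTERNS = {
    r"\beval\s*\(": "arbitrary code execution via eval()",
    r"\bpickle\.loads\s*\(": "unsafe deserialization",
    r"subprocess\.\w+\(.*shell\s*=\s*True": "shell injection risk",
}

def triage(source: str) -> list[str]:
    """Return a list of findings for one source file."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, issue in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append(f"line {lineno}: {issue}")
    return findings

sample = "import pickle\ndata = pickle.loads(blob)\nresult = eval(user_input)\n"
for finding in triage(sample):
    print(finding)
```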
In addition to new tools, this dynamic could also open up space for new security roles, says Bart Schouw, chief evangelist at Software AG. “Companies may need to bolster their teams with new roles such as prompt engineers,” he says. Prompt engineering is an emerging discipline of crafting LLM instructions to produce high-quality generated results. This could be very beneficial in areas such as the enumeration and classification of vulnerabilities in an enterprise.
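A minimal sketch of what a prompt engineer iterates on, assuming a hypothetical vulnerability triage workflow: the template below structures the instruction, constrains the output format, and guards against invented findings, which is the essence of the discipline. All names and fields are made up for illustration.

```python
# Hypothetical prompt template for LLM-assisted vulnerability triage.
# Component names and output fields are illustrative assumptions.
def build_triage_prompt(component: str, findings: list[str]) -> str:
    bullet_list = "\n".join(f"- {f}" for f in findings)
    return (
        "You are a security analyst. Classify each finding below for the "
        f"component '{component}'.\n"
        "For each finding, output JSON with keys: finding, cwe_guess, "
        "severity (low/medium/high), rationale.\n"
        "Do not invent findings that are not listed.\n\n"
        f"Findings:\n{bullet_list}"
    )

prompt = build_triage_prompt(
    "payments-api",
    ["SQL built by string concatenation", "JWT signature not verified"],
)
print(prompt)
```

Iterating on wording like this, measuring output quality, and pinning down the response schema is what the emerging role actually spends its time on.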
Capabilities to secure enterprise AI
While all these threats from AI attackers are beginning to proliferate, there is another major risk within the enterprise. That is, the potential exposure of vulnerable AI systems (and their associated training data) to attacks and other confidentiality, integrity, or availability failures.
“What is clear is that while the last 5-10 years have been characterized by the need for security professionals to internalize the idea that they need to incorporate more AI into their processes, the next 5-10 years will likely be characterized by the need for AI/ML professionals to internalize the idea that security concerns should be treated as first-class concerns in their processes,” says Sohrob Kazerounian, distinguished researcher at Vectra AI.
There is already movement among security thought leaders to create AI red teams and AI threat modeling in the development and deployment of future AI systems. Organizations looking to develop this capability will need to bolster their red teams with an infusion of AI and data science talent.
“The red team will need to gain expertise in how to break AI and ML systems,” explains Diana Kelley, CSO and co-founder of Cybrize. “Leaders will be called upon to attract data science people interested in the security side and vice versa. It will be about recruiting data science people into security in the same way that some of our best application red teamers started out as app developers.”
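To give a flavor of what “breaking AI and ML systems” can mean in practice, the self-contained toy below trains a tiny logistic regression and then crafts an adversarial input with a gradient-sign step in the spirit of the fast gradient sign method (FGSM). It is a sketch under simplified assumptions, not a red-team tool.

```python
# Toy red-team sketch: an FGSM-style adversarial perturbation against a
# hand-rolled logistic regression. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated clusters: class 0 around (-2, -2), class 1 around (2, 2).
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

# Train by plain gradient descent on the logistic loss.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * np.mean(p - y)

def predict(x):
    return int(x @ w + b > 0)

x = np.array([1.5, 1.5])   # a clean input the model classifies as class 1
# FGSM step: move the input in the direction that increases the loss.
# For a class-1 input, the loss gradient w.r.t. x points along -w.
eps = 2.0
x_adv = x - eps * np.sign(w)
print(predict(x), predict(x_adv))
```

A small, targeted perturbation flips the model’s decision, which is exactly the class of weakness an AI red team probes for.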
This will also be a security-by-design exercise, in which the people responsible for building and deploying AI and ML models in the enterprise are trained to understand the risks and collaborate with security teams to test for them along the way. This will be key to future-proofing these systems.
“You need to retain the ML/AI experts who designed and built the system. You need to connect them with your technical hackers and then with your security operations team,” says Steve Benton, vice president of Anomali threat research, explaining that together they should be creating potential risk scenarios, testing them, and re-engineering accordingly. “The answer here is the purple team, not just the red team. Remembering that some of these tests might involve ‘poisoning’, you need a reference model setup to do this, with the ability to restore and retry scenario by scenario.”
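Benton’s restore-and-retry idea can be sketched in miniature: fit a reference model on clean data, flip a batch of training labels to simulate poisoning, measure the damage, then restore and rerun. The dataset and flip rate below are illustrative assumptions.

```python
# Purple-team sketch: measure the impact of label-flipping poisoning,
# then restore the clean labels and retry. Data and flip rate are
# illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)    # clean ground truth

def accuracy(labels):
    """Fit on the given labels, but always score against clean ground truth."""
    model = LogisticRegression(max_iter=1000).fit(X, labels)
    return model.score(X, y)

baseline = accuracy(y)                 # reference model on clean data

poisoned = y.copy()                    # work on a copy so we can restore
flip = rng.choice(np.arange(200, 400), size=150, replace=False)
poisoned[flip] = 0                     # mislabel most class-1 samples
degraded = accuracy(poisoned)

restored = accuracy(y)                 # restore and retry the scenario
print(round(baseline, 2), round(degraded, 2), round(restored, 2))
```

Keeping the clean reference set aside is what makes the before/after comparison, and the scenario-by-scenario retry, possible.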
Copyright © 2023 IDG Communications, Inc.
As the AI-dominated era fast approaches, secure a competitive edge for your company and staff with Ikaroa’s cutting-edge security solutions. With cyber security threats becoming increasingly sophisticated and pervasive, it’s more important than ever to make sure that your IT resources, data and systems remain safe and secure.
Ikaroa understands the complexity of the security landscape and offers innovative solutions for businesses of any size. We take a three-pronged approach to skilling up your security team: assessment, training and review.
Firstly, we assess the current skill level of your security team, mapping out their strengths and weaknesses. This enables us to determine the areas of your security that require the most attention.
Once the assessment is completed, we will provide team members with personalized training courses, tailored to their skill set and abilities. Our training courses will equip them with the relevant skills needed to remain alert to new security threats in the ever-changing digital environment.
Lastly, the training course is followed up with a formal review process, to ensure that newly acquired security skills have been implemented, and to identify any potential security vulnerabilities. This review process allows for continuous improvement in your company’s security protocols.
By skilling up your security team with Ikaroa, you can rest assured that your data and systems are secure during the AI-dominated era. Our team of experts offers comprehensive security training and tailored solutions to meet your company’s specific needs. Take charge of your IT security today with Ikaroa.