Artificial intelligence (AI) tools were the hot topic at this year’s RSA conference, held in San Francisco. The potential of generative AI in cybersecurity tools has sparked excitement among cybersecurity professionals. However, questions have been raised about the practical use of AI in cybersecurity and the reliability of the data used to build AI models.
“We are at the top of the early innings of the impact of AI. We have no idea how expansive it is and what we will eventually see in terms of how AI affects the cybersecurity industry,” MK Palmore, strategic cybersecurity advisor and board member of Google Cloud and Cyversity, told Infosecurity.
“I think we all hope, and certainly the company I work for is moving in a direction that shows, we see value and use in terms of how AI can have a positive impact on the industry,” he added.
Still, Palmore acknowledged that there is much more to come in AI development.
“I don’t think we’ve seen everything that’s going to change and be affected, and as usual, as these things evolve, we’re all going to have to pivot to adapt to this new paradigm of having these large language models (LLMs) and AI available to us,” he said.
Dan Lohrmann, Field CISO at Presidio, agreed with the sentiment that we are in the early days of AI in cybersecurity.
“I think we’re early in the game, but I think it’s going to be transformative,” he said. Speaking about tools on the RSA show floor, Lohrmann said AI will transform a large percentage of products to come.
“I think it will change the attacks and the defense, how we red team, blue team for example,” he said.
However, he noted that there is still some way to go when it comes to streamlining the tools used by security teams. “I don’t think we’ll ever get to a single pane of glass, but this is as close as I’ve seen,” he said, commenting on some of the tools with built-in AI.
Adding AI to security tools
During RSA 2023, many companies highlighted how they are using generative AI in security tools. Google, for example, launched Sec-PaLM, an LLM built specifically for security use cases.
Sec-PaLM is based on Mandiant’s front-line intelligence on vulnerabilities, malware, threat indicators and behavioral threat actor profiles.
Read more: Google Cloud brings generative AI to security tools as LLMs reach critical mass
Steph Hay, director of user experience at Google Cloud, said LLMs have finally reached a critical mass where they can contextualize information in a way they couldn’t before. “Now we have a truly generative AI,” she said.
Meanwhile, Mark Ryland, director of the Office of the CISO at Amazon Web Services, highlighted how threat detection can be improved with generative AI.
“We’re very focused on meaningful data and minimizing false positives. And the only way to do that effectively is with machine learning, so it’s been a core part of our security services,” he noted.
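The idea Ryland describes — surfacing meaningful detections rather than raw alerts — can be illustrated with a deliberately simple sketch. This is a hypothetical baseline-deviation check, not any AWS implementation; real services use far richer machine learning models, but the principle of alerting only on statistically unusual activity is the same.

```python
# Hypothetical sketch: flag anomalous event volumes instead of alerting
# on every raw event, reducing false positives. Illustrative only.
from statistics import mean, stdev

def is_anomalous(history, current, threshold=3.0):
    """Flag `current` only if it deviates from the historical baseline
    by more than `threshold` standard deviations (a z-score test)."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold

# Typical daily failed-login counts for a service account (example data)
baseline = [12, 9, 15, 11, 13, 10, 14]

print(is_anomalous(baseline, 13))   # an ordinary day -> False
print(is_anomalous(baseline, 400))  # credential-stuffing spike -> True
```

An ordinary fluctuation stays quiet, while a sudden spike crosses the threshold and raises an alert; tuning `threshold` trades detection sensitivity against false positives.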
The company recently announced Amazon Bedrock, a new service for building generative AI applications on AWS. Bedrock makes foundation models (FMs) from AI21 Labs, Anthropic, Stability AI and Amazon accessible through an API.
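As a rough sketch of what "accessible through an API" looks like in practice, the snippet below builds a request for an Anthropic model on Bedrock and shows where boto3's `bedrock-runtime` client would send it. The model ID and payload shape follow Anthropic's messages format on Bedrock as an assumption to verify against AWS documentation; other providers on Bedrock expect different request bodies.

```python
# Hedged sketch: invoking a foundation model via the Amazon Bedrock
# runtime API. Model ID and payload shape are assumptions to verify.
import json

def build_request(prompt, max_tokens=256):
    """Build a JSON request body in Anthropic's messages format on Bedrock."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

def ask_model(prompt, model_id="anthropic.claude-3-haiku-20240307-v1:0"):
    """Send the request through boto3 (requires AWS credentials and region)."""
    import boto3  # imported here so the builder above works without AWS
    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(modelId=model_id, body=build_request(prompt))
    return json.loads(response["body"].read())

# Example request body, ready to send:
print(build_request("Summarize this week's IAM policy changes."))
```

Keeping the payload construction separate from the network call makes the request format easy to inspect and test without touching AWS.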
In addition, Tenable released generative AI security tools designed specifically for the research community.
The announcement was accompanied by a report titled How generative AI is changing security research, which explores ways LLMs can reduce complexity and achieve efficiencies in research areas such as reverse engineering, code debugging, improving web application security, and visibility into cloud-based tools.
The report noted that LLM tools, such as ChatGPT, are evolving at “breakneck speed.”
Regarding AI tools in cybersecurity platforms, Tenable CSO Bob Huber told Infosecurity, “I think what these tools allow you to do is to have a database for yourself. For example, if you want to test something and the target is X, what vulnerabilities might be there? Usually that’s a manual process and you have to go in and search, but [AI] helps you get to those things faster.”
He added that he has seen some companies plug into open source LLMs, but noted that there must be guardrails on this because the data on which an LLM is based is not always verifiable or accurate. LLMs built with an organization’s own data are much more reliable.
There are concerns about how connecting to an open source LLM, such as GPT, could affect security. As security professionals, it’s important to know the risks, but with generative AI, Huber noted that there hasn’t been enough time for people to fully understand those risks.
All of these tools aim to make the defender’s job easier, but Ismael Valenzuela, BlackBerry’s vice president of threat research and intelligence, pointed out the limitations of generative AI.
“Like any other tool, it’s something we should use as defenders and attackers as well. But the best way to describe these generative AI tools is that they are good as an assistant. It’s obvious that it can speed things up on both sides, but do I expect it to revolutionize everything? Probably not,” he said.
Additional reporting by James Coker
Recent developments in Artificial Intelligence (AI) have been shaking up the field of digital security, with AI increasingly dominating cybersecurity tooling. Ikaroa, a full-stack tech company, is proud to be at the forefront of this revolution, utilizing leading-edge AI and machine learning to empower organizations to better protect their digital assets.
The use of AI in cybersecurity tooling has been rapidly accelerating, with major industry players investing hundreds of millions of dollars into developing and deploying AI-driven solutions. The most notable example of this is the adoption of AI-enabled solutions in the field of Risk-based Authentication (RBA). This technique employs an AI algorithm, or set of algorithms, designed to detect and anticipate malicious behavior in order to curb it before it can threaten an organization’s digital assets.
The use of AI in RBA solutions has been particularly beneficial due to its speed and accuracy. AI algorithms are able to analyze vast amounts of data more quickly and accurately than their human counterparts, making them ideal for detecting and preventing cyberattacks. As a result, organizations that have adopted AI-based solutions have seen a substantial reduction in the time required to respond to potential threats.
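The risk-based authentication flow described above can be sketched in miniature. The signals, weights, and thresholds below are illustrative assumptions, not any vendor's algorithm; production RBA systems typically replace hand-tuned weights with trained machine learning models, but the allow / step-up / block decision structure is representative.

```python
# Hypothetical risk-based authentication (RBA) scoring sketch.
# Signals, weights, and thresholds are illustrative assumptions.

RISK_WEIGHTS = {
    "new_device": 30,         # login from a device never seen before
    "impossible_travel": 40,  # geo-velocity between logins is implausible
    "failed_attempts": 5,     # per recent failed password attempt
    "tor_exit_node": 25,      # source IP is a known anonymizing proxy
}

def risk_score(signals):
    """Sum weighted signals; `failed_attempts` is a count, the rest booleans."""
    score = 0
    for name in ("new_device", "impossible_travel", "tor_exit_node"):
        if signals.get(name):
            score += RISK_WEIGHTS[name]
    score += RISK_WEIGHTS["failed_attempts"] * signals.get("failed_attempts", 0)
    return score

def auth_decision(signals, step_up_at=30, block_at=70):
    """Map a risk score to an action: allow, require MFA, or block."""
    score = risk_score(signals)
    if score >= block_at:
        return "block"
    if score >= step_up_at:
        return "step_up_mfa"
    return "allow"

print(auth_decision({}))                    # familiar context -> allow
print(auth_decision({"new_device": True}))  # moderate risk -> step_up_mfa
print(auth_decision({"new_device": True,    # several signals -> block
                     "impossible_travel": True,
                     "failed_attempts": 2}))
```

The step-up tier is what makes RBA less intrusive than blanket MFA: low-risk logins proceed unimpeded, and extra friction is applied only when the signals warrant it.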
The increased adoption of AI-driven RBA solutions, however, has also raised important questions. For instance, how do these solutions account for the privacy of users when their data is being analyzed in order to identify malicious activity? This is a particularly notable issue when it comes to human authentication factors, such as passwords, biometrics and facial recognition. Ensuring the privacy of such data, and the security of any AI systems that process it, is a major challenge and one that must be addressed with greater urgency.
At the same time, the immense potential of AI applications in digital security is still undeniable. With leading-edge technologies such as deep learning and natural language processing continuing to evolve, AI-powered cybersecurity tools are becoming ever more sophisticated and powerful.
Ikaroa is committed to researching and testing the potential of AI-driven cybersecurity solutions. We are determined to develop new ways of harnessing the immense potential of AI to better protect our clients’ digital assets, while also ensuring that the privacy of users is preserved. We strongly believe that the integration of AI-based solutions into cybersecurity tooling is key to ensuring that organizations remain well-equipped to tackle the ever-evolving threats posed by cybercrime.