The UK’s National Cyber Security Centre (NCSC) recently issued a warning to its constituents about the threat artificial intelligence (AI) poses to UK national security. It was soon followed by a similar warning from the NSA’s director of cybersecurity, Rob Joyce. It is clear that many nations are deeply concerned about the challenges and threats posed by AI.
To get a more comprehensive view of the dangers of bad actors using AI to infiltrate or attack nation-states, I reached out across the industry and gathered a range of thoughts and opinions, and, frankly, found some who chose not to participate in the discussion, at least for now.
The NCSC cautioned that queries are archived and could therefore become part of the underlying large language model (LLM) behind AI chatbots such as ChatGPT. These queries could reveal areas of interest to the user and, by extension, the organization they belong to. The NSA’s Joyce opined that ChatGPT and its peers will make cybercriminals better at their jobs, especially given a chatbot’s ability to polish phishing language, making it sound more authentic and believable even to sophisticated targets.
Secret leak through queries
As if on cue, Samsung revealed that it had warned its workforce to use ChatGPT with care. An employee wanting to optimize a confidential and sensitive product design let the AI engine do its thing; it worked, but it also left a trade secret behind, and the incident ultimately prompted Samsung to begin developing its own machine-learning software for internal use only.
Speaking about the Samsung incident, Code42 CISO Jadee Hanson noted that despite its promising developments, the ChatGPT explosion has sparked many new concerns about potential risks. “For organizations, the risk intensifies as any employee enters data into ChatGPT,” she tells CSO.
“ChatGPT and AI tools can be incredibly useful and powerful, but employees need to understand what data is appropriate to enter into ChatGPT and what isn’t, and security teams need to have adequate visibility into what the organization sends to ChatGPT. With all the powerful new technological advances, there are risks we must understand to protect our organizations.”
Put simply, once you press “Enter”, the information is gone and no longer under your control. If the information was a trade secret, that single action may be enough to destroy its status as a secret. Samsung observed that “this data is impossible to recover as it is now stored on servers belonging to OpenAI. In the semiconductor industry, where competition is fierce, any kind of data leakage could spell disaster for the company in question.”
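None of this requires exotic tooling to mitigate. As a rough, hypothetical sketch (not Samsung’s or OpenAI’s actual controls), a security team that wants visibility into outbound prompts could gate submissions through a simple pre-send check that blocks text matching obvious sensitivity markers before it ever reaches an external API:

```python
import re

# Hypothetical "do not send" markers; a real deployment would use a DLP engine
# and organization-specific classifiers rather than a short regex list.
SENSITIVE_PATTERNS = [
    re.compile(r"\bCONFIDENTIAL\b|\bTRADE SECRET\b|\bINTERNAL ONLY\b", re.I),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                     # SSN-like numbers
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),    # embedded key material
]

def check_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, reasons); block the prompt if any pattern matches."""
    reasons = [p.pattern for p in SENSITIVE_PATTERNS if p.search(prompt)]
    return (len(reasons) == 0, reasons)

if __name__ == "__main__":
    allowed, reasons = check_prompt("Optimize this CONFIDENTIAL chip layout: ...")
    if not allowed:
        print("Blocked before submission:", reasons)
```

The point of such a gate is not that regexes catch everything, but that every prompt leaving the organization passes through a control point where policy can be applied and logged.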
It is not difficult to extrapolate how such queries, coming from a government, and especially from the parts of government that handle classified information, could put national security at risk.
AI changes everything
In early 2023, Dr. Jason Matheny, president and CEO of the RAND Corporation, testified before the Senate Homeland Security and Governmental Affairs Committee and described the four main areas his organization sees as national security concerns:
- Technologies are driven by commercial entities that are often outside our national security frameworks.
- Technologies are advancing rapidly, typically outpacing policy and organizational reforms within government.
- Technology assessments require expertise that is concentrated in the private sector and has rarely been used for national security.
- Technologies do not have conventional intelligence signatures that distinguish benign from malicious use, differentiate intentional from accidental misuse, or allow attribution with certainty.
It is not hyperbole or exaggeration to claim that AI will change everything.
The growing fear of AutoGPT
I had an extensive discussion with Ron Reiter, CTO of Sentra (who previously served in Unit 8200 of the Israel Defense Forces), in which he commented that his main fear concerns the arrival of AutoGPT or AgentGPT: AI entities that could be deployed with the GPT engine acting as a force multiplier, improving attack efficiency not a hundredfold but many thousandfold. An adversary gives AutoGPT a task and internet connectivity, and the machine goes and goes (think Energizer Bunny) until the job is done. In other words, the malware works on its own. With AutoGPT, the adversary has a tool that can be both persistent and scalable.
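To make the concern concrete: the worry is less about any single model response and more about the loop that agent frameworks wrap around the model. The following is a deliberately benign, highly simplified sketch of that loop; the llm callable and tools dictionary are placeholders for illustration, not AutoGPT’s actual implementation:

```python
from typing import Callable

def run_agent(goal: str,
              llm: Callable[[str], str],
              tools: dict[str, Callable[[str], str]],
              max_steps: int = 10) -> str:
    """Minimal agent loop: ask the model for the next action, run it, observe, repeat."""
    history = f"Goal: {goal}\n"
    for _ in range(max_steps):
        decision = llm(history)  # e.g. "search: latest advisories" or "DONE: summary"
        if decision.startswith("DONE:"):
            return decision.removeprefix("DONE:").strip()
        tool_name, _, tool_input = decision.partition(":")
        observation = tools.get(tool_name.strip(), lambda x: "unknown tool")(tool_input.strip())
        history += f"Action: {decision}\nObservation: {observation}\n"
    return "Step limit reached without completion."

if __name__ == "__main__":
    # Stand-in "model" that immediately declares the goal complete.
    canned_llm = lambda history: "DONE: nothing to do in this demo"
    print(run_agent("summarize today's security advisories", canned_llm, tools={}))
```

The persistence and scalability Reiter describes come from this outer loop running unattended, not from any single capability of the underlying model.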
Reiter is not alone. Patrick Harr, CEO of SlashNext, offered that hackers use ChatGPT to deliver a higher volume of unique and targeted attacks faster, creating a higher probability of a successful compromise. “There are two areas where chatbots are successful today: malware and business email compromise (BEC) threats,” says Harr. “Cyberattacks are most dangerous when they are delivered with speed and frequency against specific targets within an organization.”
Creating endless code variations
“ChatGPT allows cybercriminals to make endless code variations to stay one step ahead of malware detection engines,” says Harr. “BEC attacks are targeted attempts to socially engineer a victim into giving up valuable information or financial data. These attacks require personalized messages to be successful. ChatGPT can now create well-written, personalized emails at scale, with infinite variations. The speed and frequency of these attacks will increase and result in a higher success rate of user compromises and breaches, and there has already been a significant increase in the number of breaches reported in the first quarter of 2023.”
Reiter also noted that the ability of chatbots to mimic humans is very real. Entities like the Internet Research Agency, long associated with Russian active measures, specifically misinformation and disinformation, can be expected to work overtime to develop capabilities that capture the tone, tenor, and syntax of a specific person. The target audience may know this is possible, but when faced with content from the real individual and copycat content, who will they believe? Trust is at stake.
Harr stressed that similar machine learning-powered security will be needed to mitigate the problem: “You have to fight AI with AI.”
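On the defensive side, “fighting AI with AI” typically means machine-learning classifiers scoring inbound messages. A toy sketch, using made-up training samples and scikit-learn purely for illustration (production systems rely on far larger corpora and many more signals), might look like this:

```python
# Toy phishing-text classifier: TF-IDF features plus logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Urgent: your account is locked, verify your password here immediately",
    "Wire transfer needed today, CEO travelling, keep this confidential",
    "Agenda attached for Thursday's project sync, see you then",
    "Lunch order reminder: reply with your choice by noon",
]
train_labels = [1, 1, 0, 0]  # 1 = phishing/BEC-like, 0 = benign

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

score = model.predict_proba(["Please verify your password to avoid suspension"])[0][1]
print(f"Phishing likelihood: {score:.2f}")
```

The arms-race dynamic Harr describes is that attackers use generative models to produce endless message variations, while defenders train classifiers like this on those same variations to keep detection rates up.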
Should the world stop developing AI tools?
The warnings from security agencies around the world appear to line up with an open letter, signed by many who have a dog in this fight, calling for a pause in the development of AI tools. But it appears to be too late for that, as evidenced by a recent US Senate Armed Services Committee hearing on the state of artificial intelligence and machine learning applications in improving Department of Defense operations, at which the consensus was that a US pause would be detrimental to the country’s national security.
Those testifying (RAND’s Matheny, Palantir CTO Shyam Sankar, and Shift5 co-founder and CEO Josh Lospinoso) agreed that the US currently enjoys an advantage, and that a pause would give opposing nations a chance to catch up and create AI models that the US would have a hard time defending against. That said, there was a universal call from the witnesses for checks on AI technology, as well as bipartisan agreement within the subcommittee.
The subcommittee asked the three to work with others of their choosing and return within 30 to 60 days with recommendations on how the government should approach regulating AI in the context of protecting national security. Based on the conversations during the April 19 hearing, it is understood that AI technologies can be expected to be designated as dual-use technologies and included under the International Traffic in Arms Regulations (ITAR), which does not prohibit international collaboration or exchange, but requires the government to have a say.
Copyright © 2023 IDG Communications, Inc.
AI-powered chatbots are quickly becoming the new norm for communication and customer service in countries around the world. However, as the use of AI chatbots grows, so too do the threats to national security. With the power of machine learning and natural language processing, AI-powered chatbots can easily mimic humans and manipulate conversations, making them attractive tools for malicious actors and a risk to the security of a country’s data.
For instance, malicious actors could use AI chatbots to spread misinformation, manipulate public opinion, access confidential information, and even disrupt military operations. In some cases, chatbots are actively being used to target specific groups and individuals, exposing them to potential cyberattacks and threats.
The challenge of protecting national security and data continues to grow with the proliferation of these AI-powered chatbots. Governments and tech companies are beginning to take steps to counter these threats, such as investing in cybersecurity infrastructure and raising public awareness of the risks these AI chatbots pose.
At Ikaroa, we are at the forefront of developing safe and secure AI-powered chatbot solutions, enabling our customers and partners to leverage the advantages of advanced technology while ensuring their data remains secure. Through our groundbreaking chatbot platform, we provide AI-driven chatbot solutions that are secure and reliable, helping to protect your data and keep your conversations private.
Overall, AI-powered chatbots have tremendous potential to revolutionize communication, but with this potential comes serious risks to national security. Tech companies, governments and businesses must be aware of these risks and take steps to protect themselves and their customers from malicious actors. At Ikaroa, we are committed to developing secure and reliable AI-powered chatbot solutions that give our customers the confidence that their data is protected and their conversations remain secure.