The Cloud Security Alliance (CSA) has revealed five ways malicious actors can use ChatGPT to enhance their attack toolset in a new report exploring the cybersecurity implications of large language models (LLMs). The Security Implications of ChatGPT document details how threat actors can exploit AI-powered systems in different aspects of cyberattacks, including enumeration, foothold assistance, reconnaissance, phishing, and polymorphic code generation. By examining these issues, the CSA said it aims to raise awareness of potential threats and emphasize the need for strong security measures and responsible AI development.
Certain sections of the document include brief risk reviews or countermeasure effectiveness ratings to help visualize the current levels of risk associated with specific areas and their potential impact on the business.
Adversarial AI attacks and social engineering powered by ChatGPT were named among the five most dangerous new attack techniques used by threat actors by cyber experts at the SANS Institute at this week’s RSA Conference.
Improved enumeration for finding attack points
ChatGPT’s improved enumeration to find vulnerabilities is the first attack threat the report covers, classified as medium risk, low impact, and high probability. “A basic Nmap scan identified port 8500 as open and revealed that JRun was the active web server. This information can be used to learn more about the network’s security posture and potential vulnerabilities,” the report says.
ChatGPT can be used effectively to quickly identify the most frequent applications associated with specific technologies or platforms. “This information can help understand potential attack surfaces and vulnerabilities within a given network environment.”
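For illustration, the following is a minimal sketch of the kind of enumeration the report describes: a basic TCP connect check with an HTTP banner grab, the sort of probe that reveals an open port 8500 running JRun. The target address, port list, and helper function are hypothetical placeholders, and such probes should only be run against systems you are authorized to test; the LLM’s role in the report’s scenario is interpreting output like this, not producing it.

```python
# Minimal sketch of basic enumeration: connect to a few common ports on a
# lab host and read the HTTP Server header, which is what identifies
# software such as JRun. Target and ports are placeholders.
import socket

TARGET = "192.0.2.10"        # placeholder address (TEST-NET-1), not a real target
PORTS = [80, 8080, 8500]     # common web/app-server ports

def probe(host, port, timeout=2.0):
    """Check whether a TCP port is open and, if so, grab the HTTP Server header."""
    try:
        sock = socket.create_connection((host, port), timeout=timeout)
    except OSError:
        return "closed or filtered"
    with sock:
        try:
            sock.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
            reply = sock.recv(1024).decode(errors="replace")
        except OSError:
            return "open (no banner)"
    # The Server: header, if present, names the software behind the port.
    for line in reply.splitlines():
        if line.lower().startswith("server:"):
            return f"open ({line.strip()})"
    return "open (no Server header)"

for port in PORTS:
    print(f"{TARGET}:{port} -> {probe(TARGET, port)}")
```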
Foothold assistance for unauthorized access
Foothold assistance refers to the process of helping threat actors establish an initial presence or foothold within a target system or network, with ChatGPT-enhanced foothold assistance rated as medium risk, medium impact, and medium probability. “Usually, this involves exploiting vulnerabilities or weak points to gain unauthorized access.”
In the context of using AI tools, foothold assistance can involve automating the discovery of vulnerabilities or simplifying the process of exploiting them, making it easier for attackers to gain initial access to their targets. “When asking ChatGPT to examine vulnerabilities within a code sample of over 100 lines, it accurately identified a file inclusion vulnerability,” according to the report. “Additional queries yielded similar results, with the AI successfully detecting issues such as insufficient input validation, hard-coded credentials, and weak password hashes. This highlights the potential of ChatGPT to effectively identify security flaws in code bases.”
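To make the finding concrete, the snippet below is an illustrative sketch (not the report’s actual 100-line sample) containing the kinds of flaws the report says ChatGPT flagged: a file inclusion path built from unvalidated input, hard-coded credentials, and a weak, unsalted password hash. Any careful reviewer, human or LLM, should flag all three.

```python
# Illustrative, deliberately flawed snippet showing the flaw classes named
# in the report. Do not use in production.
import hashlib
from flask import Flask, request, send_file

app = Flask(__name__)

DB_USER = "admin"          # hard-coded credential
DB_PASSWORD = "P@ssw0rd1"  # hard-coded credential

@app.route("/page")
def page():
    # Insufficient input validation: the filename comes straight from the
    # query string, so "?name=../../etc/passwd" walks out of the templates
    # directory, a classic local file inclusion flaw.
    name = request.args.get("name", "index.html")
    return send_file(f"templates/{name}")

def hash_password(password: str) -> str:
    # Weak hash: unsalted MD5 is trivially cracked; a real application
    # should use bcrypt, scrypt, or Argon2 instead.
    return hashlib.md5(password.encode()).hexdigest()
```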
Reconnaissance to assess attack targets
In cybersecurity, reconnaissance refers to the initial phase in which a malicious threat actor gathers information about a target system, network, or organization before launching an attack. This phase helps them identify potential vulnerabilities, weak points, and entry points that they can exploit to gain unauthorized access to systems or data. Reconnaissance is typically done in three ways: passive, active, and social engineering, according to the report.
“Collecting comprehensive data, such as directories of corporate officers, can be a daunting and time-consuming process,” the report said, but by leveraging ChatGPT, users can pose specific questions, streamlining and improving data collection for various purposes. In the report, ChatGPT-enhanced reconnaissance was rated as low risk, medium impact, and low probability.
More effective phishing lures
With AI-powered tools, threat actors can now effortlessly craft legitimate-looking emails for a variety of purposes, the report said. Issues like misspellings and bad grammar are no longer obstacles, making it increasingly difficult to differentiate between genuine and malicious correspondence. In the report, phishing powered by ChatGPT was rated as medium risk, low impact, and high probability.
“Rapid advances in AI technology have significantly improved the capabilities of threat actors to create deceptive emails that closely resemble genuine correspondence. The flawless language, contextual relevance, and personalized details of these emails make it increasingly difficult for recipients to recognize them as phishing attempts.”
Develop polymorphic malicious code more easily
Polymorphic code refers to a type of code that can be altered using a polymorphic engine while maintaining the functionality of its original algorithm. By doing so, polymorphic malware can change its “appearance” (content and signature) to evade detection while still executing its malicious intent, according to the report.
ChatGPT can be used to generate polymorphic shellcode, and the same techniques that benefit legitimate programmers can also be exploited by malware authors. “By combining various techniques, for example, two methods of connecting to a process, two approaches to injecting code, and two ways of creating new threads, it is possible to create eight different chains to achieve the same goal. This allows for the fast and efficient generation of numerous malware variations, complicating the detection and mitigation efforts of cybersecurity professionals.” ChatGPT-enhanced polymorphic code generation was rated high risk, high impact, and medium probability.
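The arithmetic behind that claim (two options for each of three steps gives 2 x 2 x 2 = 8 combinations) can be shown with a deliberately benign sketch. The functions below are hypothetical string transformations rather than injection techniques, but they demonstrate how interchangeable implementations compose into many functionally identical chains, each with a different “signature.”

```python
# Benign demonstration of the combinatorics behind polymorphism: two
# interchangeable implementations of each of three steps compose into
# eight chains that produce the same result but have different source,
# and therefore different hashes (standing in for malware signatures).
import hashlib
import inspect
from itertools import product

def reverse_a(s): return s[::-1]
def reverse_b(s): return "".join(reversed(s))

def upper_a(s): return s.upper()
def upper_b(s): return "".join(c.upper() for c in s)

def tag_a(s): return f"[{s}]"
def tag_b(s): return "[" + s + "]"

chains = list(product([reverse_a, reverse_b], [upper_a, upper_b], [tag_a, tag_b]))

for i, (f, g, h) in enumerate(chains, 1):
    result = h(g(f("payload")))
    # Hash the chain's combined source: every chain yields the same
    # result but a different hash.
    source = "".join(inspect.getsource(fn) for fn in (f, g, h))
    signature = hashlib.sha256(source.encode()).hexdigest()[:12]
    print(f"chain {i}: result={result!r} signature={signature}")
```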
AI adoption in the market will parallel cloud adoption trends
It’s hard to overstate the impact of today’s viral adoption of AI and its long-term ramifications, commented Jim Reavis, CEO and co-founder of CSA. “The essential features of GPT, LLM and machine learning, combined with a pervasive infrastructure to deliver these capabilities as a service, are sure to create large-scale change very soon.”
The CSA expects AI adoption in the marketplace to parallel cloud adoption trends and to primarily use the cloud delivery model, Reavis added. “From the point of view of a typical enterprise today, they have to ensure the security of a handful of cloud infrastructure providers and thousands of SaaS providers, and the latter is the bigger issue. It is up to us to develop and execute a roadmap to extend and/or create new control frameworks, certification capabilities, and research artifacts to smooth the transition to cloud-enabled AI.”
In an age of information-driven attacks, threat actors are looking for new and innovative ways to maximize their impact. A recent development is ChatGPT, an artificial intelligence chatbot designed to mimic human conversation, which attackers can abuse to help gain access to systems or take control of a user’s account. Here at Ikaroa, we have identified five key ways that threat actors can use ChatGPT to enhance their attacks:
1. Social Engineering: ChatGPT can be used to convince victims to provide confidential information. This could include login credentials, financial records and other personal data. ChatGPT can mimic a real person and imitate the tone and manner of their conversations to fool the victim into handing over sensitive data.
2. Spear Phishing: ChatGPT can be used to identify targets within organizations to exploit with malicious emails. By recognizing organizational structure and employees’ job roles, and mimicking the communication patterns employees are used to, ChatGPT allows malicious emails disguised as messages from legitimate senders to be prepared and sent.
3. Spreading Malware: ChatGPT can be used as a platform to spread malware. This could include malicious links, malicious documents and other malicious code. Once opened, these downloads or links can install malicious software on the victim’s computer or take control of their system.
4. Targeted Attacks: ChatGPT can be used to tailor the attack against a specific target. This could include detailed personal information on the target, such as their name, address and banking information. ChatGPT can use this data to get a deeper understanding of the target and modify the attack accordingly.
5. Fake Profiles: ChatGPT can be used to create fake profiles on various platforms such as social media and link-sharing sites. These profiles can be used to spread malicious links, phishing pages, and other malicious content.
At Ikaroa, we believe it is important to be aware of all the potential attack vectors so that organizations can take the appropriate preventative measures to keep their systems and data secure. We encourage organizations of all sizes to take stock of their security measures and assess whether they are at risk of a ChatGPT-assisted attack.