ChatGPT (Generative Pre-trained Transformer) is an AI-powered chatbot developed by OpenAI, and probably one of the hottest topics in the tech world today. ChatGPT has been trained on a massive amount of text data and can generate human-like text, answer questions, and translate between languages. Although the technology holds great promise across all industries, it should also come with a warning label. Few technologies better illustrate how a single tool can embody both the good and the bad in cybersecurity.
For cybersecurity professionals, the technology offers a powerful way to better understand and research cyber events. In a world that depends on swift action, ChatGPT could allow security analysts to respond and react to a security incident more quickly, no longer having to slog through countless event logs, reams of data, or mountains of code, ultimately streamlining research and incident response.
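As a minimal sketch of what that streamlining might look like, the function below packs a bounded slice of raw event-log lines into a single triage prompt for a language model. The function name and prompt wording are illustrative assumptions, not a specific product's API; the commented-out client call stands in for whatever SDK your provider offers.

```python
# Hypothetical sketch: condensing raw event logs into one LLM triage prompt.
# Nothing here is a real vendor API; substitute your provider's SDK.

def build_triage_prompt(log_lines, max_lines=50):
    """Pack a bounded sample of raw log lines into an analyst-style prompt."""
    sample = log_lines[:max_lines]  # cap the excerpt to stay within context limits
    body = "\n".join(sample)
    return (
        "You are assisting a security analyst. Summarize any suspicious "
        "activity in the following event log excerpt, and list the lines "
        f"worth manual review:\n\n{body}"
    )

# Usage (illustrative; the client object and model name are assumptions):
# response = client.chat.completions.create(
#     model="gpt-4",
#     messages=[{"role": "user", "content": build_triage_prompt(logs)}],
# )
```

The `max_lines` cap matters in practice: models have finite context windows, so long log files need to be sampled or chunked before they can be summarized.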
Imagine a scenario: a security system detects unusual activity, alerting on specific lines of code within an application. Normally, the security analyst would proceed with analyzing the threat, potentially working through some version of the Cyber Kill Chain. Historically, this process has been labor-intensive and time-consuming, both the enemy of effective incident response.
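To make the kill-chain step concrete, here is a deliberately naive keyword heuristic that maps alert text onto Lockheed Martin's Cyber Kill Chain phases. It is illustrative only; the keyword lists are invented for this example, and real phase mapping requires analyst judgment (or a far richer model) rather than substring matching.

```python
# Illustrative only: a naive keyword heuristic for Cyber Kill Chain phases.
# The hint lists below are made-up examples, not a vetted taxonomy.

KILL_CHAIN_HINTS = {
    "Reconnaissance": ["port scan", "enumeration"],
    "Delivery": ["phishing", "attachment"],
    "Exploitation": ["exploit", "injection"],
    "Installation": ["dropper", "persistence"],
    "Command and Control": ["beacon", "c2"],
    "Actions on Objectives": ["exfiltration", "data staging"],
}

def guess_phases(alert_text):
    """Return the kill-chain phases whose hint keywords appear in the alert."""
    text = alert_text.lower()
    return [
        phase
        for phase, hints in KILL_CHAIN_HINTS.items()
        if any(hint in text for hint in hints)
    ]
```

An alert like "Suspicious beacon traffic to known c2 host" would map to Command and Control; the point is simply that even a rough first-pass classification can prioritize an analyst's attention.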
Here is the first warning label. ChatGPT is not perfect, and it is crucial to be vigilant when using any language model for cybersecurity (or any purpose, for that matter). Use only trusted models from reputable sources, and ensure that the data they are trained on is free of malicious content or intent. For example, ChatGPT does appear to understand the inner workings of the Cyber Kill Chain, but does it understand the nuances of your environment? Has the AI been properly "trained" in what security analysts have refined over years of practice? All of this should be considered when using the technology to assist in incident response.
The technology could also be used to better understand how a particular piece of malware functions. The promise: security analysts no longer need a sandbox in which to analyze malicious code. Simply ask the AI what the malware is intended to accomplish.
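A hedged sketch of that workflow: wrap the suspicious snippet in a prompt that explicitly asks for static analysis only, never execution. The function and prompt wording are assumptions for illustration; in practice this would complement, not replace, sandboxed analysis, given the caveats in the warning labels above.

```python
# Hypothetical sketch: asking a model to describe a suspicious snippet's intent.
# The prompt text is an assumption; this supplements sandboxing, not replaces it.

def build_malware_analysis_prompt(code_snippet, language="python"):
    """Frame a suspicious snippet for static, non-executing LLM review."""
    return (
        "Static analysis only - do not execute. Describe, step by step, what "
        f"the following {language} code appears intended to do, and flag any "
        "behavior consistent with malware (persistence, exfiltration, "
        "obfuscation):\n\n"
        f"```{language}\n{code_snippet}\n```"
    )
```

Note the instruction never to execute the code: the snippet is treated purely as text, so the analyst gets a plain-language hypothesis about intent before deciding whether deeper, sandboxed analysis is warranted.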
Warning label #2: cybercriminals can use ChatGPT to create more sophisticated and harder-to-detect attacks. Using the model's ability to generate human-like text, they can craft malicious emails designed to trick recipients into downloading malware or visiting a phishing website. The AI can even be asked for the best way to circumvent security controls within a website, allowing cybercriminals to steal sensitive information such as financial data, intellectual property, or personal data.
Efforts to counter these risks are still in their early days. NIST recently released the Playbook companion to its AI Risk Management Framework (NIST AI RMF). Organizations considering the use of ChatGPT or similar technology should review NIST's recommendations.
The impact of ChatGPT on cybersecurity is significant, and it is essential to be aware of the potential risks it may pose. As the model becomes more sophisticated and powerful, it will become increasingly difficult to distinguish between human-generated and machine-generated text. In turn, organizations must be proactive in protecting themselves against potential risks. By staying informed, using trusted technology, and implementing best practices, organizations can turn ChatGPT into a tool to improve their cybersecurity rather than a weapon against it.