Consider the following scenario: you receive an email from your CEO asking you to send him sensitive information. The message uses his usual vocabulary and even includes a joke about his dog. It is accurate, authentic, and very convincing. The problem? It was generated by an AI, using information a cybercriminal scraped from social networks.
The emergence of ChatGPT has placed AI at the center of attention and, at the same time, raised concerns about its implications for cyber defense. A few weeks after its launch, researchers were able to demonstrate that ChatGPT is capable of writing phishing emails, creating malware code, and explaining how to incorporate malware into documents.
To make matters worse, ChatGPT is neither the first chatbot to hit the market nor the last. Just last week, Google and Baidu launched their own versions. As the technology giants compete to build the best generative AI, what will the consequences be for the future of cyber defense?
The skills deficit remains an obstacle for cybercriminals
One of the first debates ChatGPT sparked concerns cybersecurity: could cybercriminals use ChatGPT or other generative AI tools to improve their attack campaigns? Could these tools lower the bar for aspiring hackers?
ChatGPT is a powerful tool, and its many use cases can help its users become more effective, aggregate knowledge, and automate simple tasks.
That said, generative AI is not yet a miracle solution; it has its limits. For starters, it only knows what it has been trained on and requires constant retraining. The data on which the AI has been trained also raises questions: universities and the media are already concerned about the risk of plagiarism and AI-assisted disinformation. Humans therefore often need to verify ChatGPT's output, yet it is sometimes difficult to tell whether the content was invented or based on reliable information.
The same goes for generative AI applied to cyber threats. If criminals wanted to write malware, they would still have to guide ChatGPT through creating it and then verify that the resulting software actually works. A cybercriminal would still need some prior knowledge of attack campaigns to use the tool effectively, which means the skills required to develop attack techniques remain a barrier for beginners. It is, however, already possible to use AI for simpler tasks, such as crafting credible phishing emails.
AI-based attacks prioritize quality over quantity
Is the fear that ChatGPT will cause an increase in cyber attacks on companies justified? As is often the case, reality differs somewhat from the headlines.
While the number of email attacks has hardly changed since the launch of ChatGPT, the share of phishing emails that try to trick the victim into clicking a fraudulent link has actually fallen from 22% to 14%. At the same time, the average linguistic complexity of phishing emails has risen by 17%.
Of course, correlation is not causation, but it is quite possible that ChatGPT is allowing cybercriminals to redirect their efforts. Rather than multiplying email attacks that carry fraudulent links or malware, criminals may hope for a better return on investment by developing sophisticated scams that exploit their victims' trust and prompt them to take specific actions, such as asking HR to change the CEO's payment details to a bank account controlled by the attacker.
Let's go back to the scenario from the introduction: it would take a cybercriminal only a few minutes to gather information about a potential victim from their social networks and ask ChatGPT to draft an email. Within seconds, the criminal would have a credible, well-written, and contextualized phishing email ready to send.
A future where machines fight against machines
The race for generative AI will push the tech giants to commercialize the most accurate, fastest, and most convincing AI possible, and it is inevitable that cybercriminals will exploit this innovation for their own benefit. Artificial intelligence, which can also be used to generate falsified audio and video, will make it easier for criminals to launch tailored attacks that are faster and more effective.
For security teams tasked with protecting their employees, infrastructure, and intellectual property, it will be essential to rely on AI-based cyber defense. Self-learning AI can identify and contain subtle attacks thanks to a deep understanding of the users and devices within the organizations it protects. By learning these patterns of life, it develops a holistic understanding of what is normal for each user in the real context of day-to-day data exchange. In other words, the best way to stop hyper-personalized AI-powered attacks is an AI that knows more about your business than any external generative AI ever could.
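To make the "patterns of life" idea concrete, here is a deliberately simplified sketch. It is purely illustrative and not any vendor's actual method: it learns a per-user baseline from one behavioral feature (the hour of day a user normally sends email) and flags messages that deviate strongly from that baseline using a z-score test. Real self-learning defenses model many more signals, but the principle is the same: learn what is normal, then flag deviations.

```python
# Illustrative sketch of "patterns of life" anomaly detection.
# One feature per email -- the hour it was sent -- with a simple
# z-score test against the user's historical baseline.
from statistics import mean, stdev

def build_baseline(send_hours):
    """Learn a per-user baseline (mean and spread) from historical send times."""
    return mean(send_hours), stdev(send_hours)

def is_anomalous(hour, baseline, threshold=3.0):
    """Flag an email whose send hour lies more than `threshold`
    standard deviations from the user's norm."""
    mu, sigma = baseline
    if sigma == 0:
        return hour != mu
    return abs(hour - mu) / sigma > threshold

# Usage: a user who normally emails during office hours.
history = [9, 10, 10, 11, 9, 14, 15, 10, 11, 9, 16, 10]
baseline = build_baseline(history)
print(is_anomalous(10, baseline))  # typical office hour -> False
print(is_anomalous(3, baseline))   # 3 a.m. message -> True
```

In practice a real system would combine dozens of such features (recipients, devices, link targets, writing style), but even this toy version shows why context-aware defense can catch a fluent, AI-written email that content filters would miss: the anomaly lies in the behavior, not the prose.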
Clearly, the widespread adoption of generative AI will ultimately lead us into a war of algorithms, pitting machine against machine. The time has come for security teams to add AI to their arsenal of cybersecurity tools.