The Race For The Best Generative AI Is Underway: What Are The Consequences For Cybersecurity?

Let’s take the following scenario: you receive an email from your CEO asking you to send him sensitive information. The message uses his usual vocabulary and even jokes about his dog. It’s precise, true to his style, and very convincing. The problem? It was crafted by a generative AI using information a cybercriminal found on social networks.

The emergence of ChatGPT has placed AI at the center of attention and, at the same time, raised concerns about its implications for cyber defense. A few weeks after its launch, researchers demonstrated that ChatGPT can write phishing emails, create malware code, and explain how to incorporate malware into documents.

To make matters worse, ChatGPT is not the first chatbot to hit the market, nor will it be the last. Just last week, Google and Baidu launched their own versions. But as the technology giants compete to build the best generative AI, what will be the consequences for the future of cyber defense?

The skills deficit remains an obstacle for cybercriminals

One of the first debates ChatGPT has raised concerns cybersecurity: could cybercriminals use ChatGPT or other generative AI to improve their attack campaigns? Could these tools make it easier for aspiring hackers to act?

ChatGPT is a powerful tool, and its many use cases can help its users become more effective, aggregate knowledge, and automate simple tasks.

That said, generative AI is not yet a miracle solution; it has limits. For starters, it only knows what it has been trained on, and it requires constant retraining. The data it has been trained on also raises questions: universities and the media are already concerned about the risks of plagiarism and AI-assisted disinformation. Humans therefore often need to verify the output ChatGPT produces, yet it can be difficult to tell whether that content was fabricated or is based on reliable information.

The same goes for generative AI applied to cyber threats. If a criminal wanted to write malware, they would still have to guide ChatGPT through creating it and then verify that the software actually works. A cybercriminal would still need some prior knowledge of attack campaigns to use the tool effectively, which means the skills required to develop attack techniques remain a barrier for beginners. Using AI for simpler tasks, however, such as crafting credible phishing emails, is already possible.

AI-based attacks prioritize quality over quantity

Is the fear that ChatGPT will cause an increase in cyber attacks on companies justified? As is often the case, reality differs somewhat from the headlines.

While the overall number of email attacks has hardly changed since the launch of ChatGPT, the share of phishing emails that try to induce the victim to click on a fraudulent link has actually decreased from 22% to 14%. At the same time, the average linguistic complexity of phishing emails has increased by 17%.
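To make the notion of "linguistic complexity" concrete: such scores are typically derived from standard readability formulas. The short sketch below, assuming Python's third-party textstat package and using invented sample messages, shows how a security team might compute one such metric, the Flesch-Kincaid grade level, on incoming mail. It illustrates the general idea, not the exact methodology behind the figures above.

```python
# Minimal sketch: scoring the "linguistic complexity" of emails with a
# standard readability formula. Assumes the third-party `textstat` package
# (pip install textstat); the sample emails are invented for illustration.
import textstat

emails = {
    "classic phish": "Your acount is blocked!! Click here now to unlock it.",
    "AI-polished phish": (
        "Following this quarter's vendor consolidation, our finance team is "
        "updating remittance details. Could you confirm the attached banking "
        "information before Friday's payment run?"
    ),
}

for label, body in emails.items():
    # Flesch-Kincaid grade: higher values indicate more complex prose.
    grade = textstat.flesch_kincaid_grade(body)
    print(f"{label}: Flesch-Kincaid grade {grade:.1f}")
```

Run on the two invented samples, the polished message scores several grade levels higher, which is the kind of shift the 17% figure points to.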

Of course, correlation is not causation, but it is quite possible that ChatGPT is allowing cybercriminals to redirect their efforts. Rather than multiplying email attacks laden with fraudulent links or malware, criminals may hope for a better return on investment from sophisticated scams that exploit their victims’ trust and ask them to take specific actions: for example, persuading HR to change the CEO’s payment details to a bank account controlled by the attacker.

Let’s go back to the scenario posed in the introduction: it would take only a few minutes for a cybercriminal to scrape information about a potential victim from their social networks and ask ChatGPT to draft an email. In a matter of seconds, the criminal would have a credible, well-written, and contextualized phishing email ready to send.

A future where machines fight against machines

The race for generative AI will push tech giants to commercialize the most accurate, fastest, and most credible AI, and cybercriminals will inevitably exploit that innovation for their own benefit. Artificial intelligence, which can also generate falsified audio and video, will make it easier for criminals to launch tailored attacks that are faster and more efficient.

For security teams tasked with protecting their employees, infrastructure, and intellectual property, relying on AI-based cyber defense will be essential. Self-learning AI can identify and contain subtle attacks thanks to a deep understanding of the users and devices within the organizations it protects. By learning these patterns of life, it builds a global picture of what is normal for each user in the real context of daily data exchange. In other words, the best way to stop hyper-personalized, AI-powered attacks is an AI that knows more about your business than any generative model could.
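As a simplified illustration of this "learn normal first, then flag deviations" approach (not any vendor's actual product), the sketch below trains scikit-learn's IsolationForest on simulated features of one user's routine email activity and flags behavior that falls outside that baseline. The feature choices and all the data here are assumptions made for the example.

```python
# Minimal sketch of behavior-based anomaly detection: learn what "normal"
# looks like for a user, then flag deviations. Uses scikit-learn's
# IsolationForest as an illustrative stand-in for a self-learning defense
# platform; the features and data are invented for the example.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Features per email: [hour sent, recipient count, attachment size in KB].
# Simulated baseline: business-hours mail, few recipients, small attachments.
normal_activity = np.column_stack([
    rng.normal(11, 2, 500),     # sent around late morning
    rng.poisson(2, 500) + 1,    # one to a few recipients
    rng.exponential(200, 500),  # mostly small attachments
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_activity)

# A 3 a.m. message to 40 recipients with a large attachment.
suspicious = np.array([[3, 40, 5000]])
print(model.predict(suspicious))  # -1 means "anomalous for this user"
```

A production defense platform would learn far more signals per user and update continuously, but the principle is the same: model normal behavior first, then treat deviations from it as the alert.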

Ultimately, the widespread adoption of generative AI will lead us toward a war of algorithms, pitting machines against machines. The time has come for security teams to add AI to their arsenal of cybersecurity tools.
