
What Could ChatGPT and AI Mean for Cyber Security?

Significant technological advancements often incite debates about whether the upsides outweigh the potential negatives. From AlphaGo's landmark victory over a human champion at the board game Go to AlphaZero reaching superhuman chess skill in under 24 hours, artificial intelligence is an area of technology that continues to impress and improve at a startling rate.

Recently, the emergence of the powerful AI-powered ChatGPT chatbot stirred up a mixture of awe and concern about how it will be used across a wide range of industries. In this article, we take a look at ChatGPT and AI with a focus on their implications for cyber security and data privacy.

A Brief Overview of ChatGPT

ChatGPT is a chatbot powered by GPT-3.5, a deep-learning language model in OpenAI's Generative Pre-trained Transformer (GPT) series. The release of ChatGPT took the world by storm in November 2022 due to its impressive ability to write in a human-like way. From blog posts to poetry to even coding, ChatGPT's powerful capabilities catapulted it into the spotlight of media commentary.

The OpenAI research laboratory unveiled the chatbot after training its underlying model on around 45 terabytes of text data from the internet. The bot's training data ends in 2021, so there is a clear limitation on its ability to answer queries related to more recent events.

The interface is conversational: simply type a prompt into the chat box and the bot crafts a thorough and often compelling answer. The dialogue format makes it easy to ask follow-up questions and engage in intricate conversations.

ChatGPT's ease of use makes such a powerful technology incredibly accessible. That widespread access is prompting leaders across disparate industries to consider what ChatGPT might have in store for them, both good and bad.

ChatGPT and AI in Cyber Security

Here are five key areas of cyber security that ChatGPT and AI could impact, for better and worse. 

1 - High-Volume Spear Phishing

Spear phishing emails are malicious emails targeted at a specific person or organisation. Threat actors often deploy this social engineering technique to gain access to an account or initiate fraudulent transactions. A large variety of online sources, from company websites to social networking platforms, arm attackers with useful information about people and companies that can help them craft more convincing spear phishing emails.

The targeted nature of these emails normally makes them hard to scale to the level of ordinary email spam. Part of the reason is the research required, but it's also that increased cyber security awareness makes people more likely to spot the obvious signs of mass email phishing, such as spelling errors or clunky language. And because many attackers are not native English speakers, such mistakes often appear in the mass phishing emails they write.

However, advanced chatbots like ChatGPT could change the game and enable high-volume, targeted, and effective spear phishing campaigns. Research carried out separately by two security companies in December 2022 found that ChatGPT could write plausible, well-written phishing emails impersonating a web hosting company and a CEO.

Asking ChatGPT to write a phishing email now gets flagged as unethical activity, which suggests OpenAI paid attention to the concerns of security researchers. But similarly advanced AI text-based tools will likely emerge, and not all of them will flag requests or queries as unethical. Scaling these difficult-to-detect spear phishing emails might become far more feasible for hackers in the not-too-distant future. 

2 - Malware-as-a-Service

ChatGPT’s programming prowess sets a worrying precedent in lowering the barriers to creating malware. It’s trivial, for example, to get ChatGPT to write VBA code that downloads a resource from a specified URL any time an unsuspecting user opens an Excel workbook containing that code. Such a request would make it very easy to weaponise a phishing email with a malicious Excel attachment without requiring in-depth skills or knowledge.

The resource downloaded onto an end user's computer could be a keylogger or a remote access trojan that gives attackers access to systems, networks, and sensitive assets. Some security researchers were even able to get the bot to write malicious PowerShell scripts that delivered post-exploitation payloads such as ransomware.

While ChatGPT’s coding skills are a concern, it still requires at least some degree of cyber security knowledge to manipulate queries in a way that produces working malicious code. A perhaps more pressing issue is that generating malware from text commands alone opens up more opportunities for malware-as-a-service. Cybercriminals with real hacking skills could easily use ChatGPT to automate the creation of working malware and sell the end product as a scalable service. 

3 - Propagating Fake News

ChatGPT's impressive writing ability gives it considerable abuse potential in the context of spreading fake news. Eloquent yet false stories can be generated from a single sentence prompt. Media outlets such as Sky have already experimented with letting ChatGPT write articles.

Fake news stories about personal data breaches or security vulnerabilities could be written by malicious insiders or published by hackers who infiltrate journalists' or users' accounts at high-profile publications and organisations. While unlikely, an Orwellian outcome in which readers can no longer decipher fact from fiction would lead to chaos and a loss of trust. At worst, it could completely undermine consumer confidence in the digital economy.

4 - Enhanced Vulnerability Detection

Turning to a more positive perspective, ChatGPT (and AI in general) shows great promise in improving vulnerability detection. Try the following experiment: copy a snippet of code from this GitHub page of vulnerable code snippets and ask ChatGPT to examine the code for security vulnerabilities. You'll notice that the tool quickly flags whatever happens to be wrong with the code and even suggests how to fix the security weaknesses.
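For instance, a snippet along the lines of the hypothetical Python example below (illustrative, not taken from any specific repository) is exactly the kind of flaw ChatGPT reliably spots, typically calling out the string-built SQL query and recommending a parameterised one:

```python
import sqlite3

def get_user(username: str):
    """Look up a user record. Vulnerable: user input is interpolated
    directly into the SQL string, enabling SQL injection
    (e.g. username = "' OR '1'='1")."""
    conn = sqlite3.connect("app.db")
    query = "SELECT * FROM users WHERE username = '%s'" % username
    return conn.execute(query).fetchall()

def get_user_safe(username: str):
    """The fix typically suggested: a parameterised query keeps
    untrusted data separate from the SQL statement."""
    conn = sqlite3.connect("app.db")
    return conn.execute(
        "SELECT * FROM users WHERE username = ?", (username,)
    ).fetchall()
```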

Turning to the broader field of AI rather than just ChatGPT, the powerful machine learning models that underpin these technologies can also enhance vulnerability detection. As a network and the number of endpoints on it grow, detecting anomalies and weaknesses becomes more challenging. AI-powered tools are far more effective at unearthing vulnerabilities because they can use enormous sets of training data to establish what's normal on a network while reducing the time to find what is abnormal.
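As a rough illustration of the idea, the sketch below trains an unsupervised anomaly detector on simulated "normal" connection records and flags an outlier. The features and figures are entirely illustrative assumptions; a real deployment would use far richer telemetry and careful feature engineering:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative features per connection: bytes sent, bytes received,
# duration in seconds, destination port.
rng = np.random.default_rng(seed=0)
normal_traffic = rng.normal(
    loc=[5_000, 20_000, 2.0, 443],    # typical HTTPS-like traffic
    scale=[1_000, 4_000, 0.5, 1.0],
    size=(1_000, 4),
)

# Learn what "normal" looks like from the baseline traffic.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# A connection uploading a huge volume of data to an unusual port.
suspicious = np.array([[900_000, 500, 120.0, 4444]])
print(model.predict(suspicious))  # -1 marks an anomaly, 1 an inlier
```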

5 - Automating Security Team Tasks 

Cyber security skills gaps continue to place an excessive burden on security teams. The UK government's 2022 cyber security skills report found that 51 per cent of businesses have a basic skills gap in tasks like configuring firewalls and detecting and removing malware. This shortfall weighs on existing teams to the point where alert fatigue and burnout are common issues.

Automation has a critical role to play in easing the impact of cyber skills shortages and helping security teams defend their organisations in today’s threat landscape. ChatGPT excels at rapidly writing programs and code that could prove beneficial for automating a range of security tasks. As an example, it takes a few seconds to produce a simple Python program that will scan for open ports on a given hostname.
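A minimal sketch of what such a program might look like (the hostname and port range are placeholders; only scan hosts you own or have explicit permission to test):

```python
import socket

def scan_ports(hostname: str, ports, timeout: float = 0.5):
    """Return the ports in `ports` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds
            if sock.connect_ex((hostname, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    print(scan_ports("localhost", range(1, 1025)))
```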

You're aware by now that ChatGPT can be manipulated into writing malicious code, but the flip side is its usefulness in analysing malicious code to figure out what it does. From explaining how malware abuses various Windows registry keys to describing what large chunks of malicious code are attempting to do on a system, ChatGPT can speed up and strengthen the tricky work of malware analysis, which is invaluable for many organisations.
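As a small, defensive illustration of the registry point, an analyst triaging a suspected infection might review the Windows Run keys, a common persistence location, with a script like the sketch below (standard-library `winreg`, Windows only):

```python
import winreg  # standard library, available on Windows only

# Autostart location frequently abused by malware for persistence.
RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

def list_run_entries(hive=winreg.HKEY_CURRENT_USER):
    """List autostart entries an analyst might review for persistence."""
    entries = []
    with winreg.OpenKey(hive, RUN_KEY) as key:
        index = 0
        while True:
            try:
                name, command, _type = winreg.EnumValue(key, index)
                entries.append((name, command))
                index += 1
            except OSError:  # raised when no more values remain
                break
    return entries

if __name__ == "__main__":
    for name, command in list_run_entries():
        print(f"{name}: {command}")
```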

Getting prepared 

While it’s still early days in understanding the full implications of ChatGPT and AI in cyber security, the ideas here offer a snapshot of what’s possible. Getting prepared for both the good and the bad of AI requires a smart cyber security strategy that accounts for these technologies’ increasing influence.

At tmc3, our cyber security services help you design an effective strategy that evolves to meet new cyber threats and get the most from emerging technologies like ChatGPT.

Get in touch with our expert team and learn how tmc3 can make a difference to your business.

I love to help organisations solve data protection challenges. To do this, I transform security and data privacy from being necessary overheads to becoming business enablers. I have enjoyed many leadership roles throughout my career in data privacy, information security, and risk management. I take pride in creating positive outcomes, with over 15 years' experience of exceeding expectations in high pressure environments, both domestically and internationally.
