Artificial intelligence (AI) has been surging in popularity over the last couple of years, especially after the launch of ChatGPT in late 2022. People have been using it for all manner of writing tasks, from drafting essays and poetry to writing code and even pondering the meaning of life! The rise of ChatGPT has proven the massive potential of AI and shown how it can completely revamp the way we work. AI is here to stay, with nearly every industry expected to adopt some form of this technology in the coming years. Cybersecurity is no exception, and AI will be a game-changer for the field, in ways both beneficial and harmful.
How AI is helping cybersecurity
First, the good news. AI and machine learning have given a massive boost to cybersecurity controls and services. Initially, people viewed the term “powered by AI” with skepticism as just the latest buzzword. However, CISOs have woken up to the fact that there is simply too much data floating around in their networks, and AI-based automation is the only way to make sense of it all.
There are many applications of AI in cybersecurity, with just a few listed below:
- Threat detection and analysis: Machine learning-powered security tools like Amazon GuardDuty can analyze terabytes of data and build a baseline of what is expected within a network and what is not. This allows them to sift through millions of events and identify suspicious activity in real time. Such anomalies would likely not be caught by manual security review or basic security alerting.
- Automation and response: AI-based products are not just passive. They can also be trained to take action against cyber threats. Taking this a step further, cybersecurity teams can have AI automatically identify, respond to, and remediate security threats within seconds, significantly narrowing the window between incident detection and remediation.
- Network and endpoint protection: AI-based protection can also wholly take over lower-level cybersecurity functions such as analyzing phishing emails, endpoint threats, suspicious websites, and malware. Most security vendors have already adopted some form of AI within their products and the trend is only expected to continue over time.
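To make the "baseline" idea from the first bullet concrete, here is a deliberately simplified sketch. Real products like GuardDuty use far more sophisticated models; the z-score approach, the event counts, and the threshold below are all invented for illustration:

```python
import statistics

def build_baseline(counts):
    """Learn a simple baseline (mean and standard deviation)
    from historical per-hour event counts."""
    return statistics.mean(counts), statistics.stdev(counts)

def is_anomalous(count, baseline, threshold=3.0):
    """Flag a new observation whose z-score exceeds the threshold."""
    mean, stdev = baseline
    if stdev == 0:
        return count != mean
    return abs(count - mean) / stdev > threshold

# Hypothetical history of login attempts per hour (normal activity)
history = [98, 102, 97, 105, 101, 99, 103, 100, 96, 104]
baseline = build_baseline(history)

print(is_anomalous(101, baseline))  # a typical hour -> False
print(is_anomalous(900, baseline))  # sudden burst of logins -> True
```

The point is the shape of the technique, not the statistics: the system learns what "normal" looks like from the environment itself, then flags deviations, rather than matching against a vendor-supplied signature list.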
While some may scoff and say this is no different from current cybersecurity products, which have become more efficient over time, they do not understand that an AI engine is effectively "learning" the environment in real time. This is not a product receiving regular updates from a vendor, but an intelligent system that becomes increasingly aware of the environment in which it operates. CISOs need to consider such systems not as products but as an extension of their cybersecurity teams. This also brings us to the next question that people ask.
Will AI end up replacing cybersecurity?
With the launch of ChatGPT, the conversation quickly turned from "how cool this new technology is" to "how many jobs will be obsolete once this becomes more mainstream." Concerns arose among programmers and copywriters that their careers had an expiration date, as AI would soon take over tasks like coding and writing. People have also started asking the same questions of cybersecurity. Will AI take over cybersecurity jobs? Will CISOs be replaced by AI chatbots that can answer every security query that management might have?
The answer is much more nuanced than a simple yes or no. Let us consider what can (and probably will) get replaced. Repetitive tasks within cybersecurity will undoubtedly get moved over to AI-based automation. This includes job roles that involve simply responding to email alerts with a templated answer, manually updating blacklists on a firewall, or downloading vulnerability reports to forward onward. But even more technical jobs might be at risk: ChatGPT has proven itself capable of writing Python code and basic security alert logic for SIEM solutions. It can even generate templates for security policies and standards if asked correctly.
That does not mean cybersecurity teams should start panicking, though. AI-based products will introduce a new set of skill requirements. Penetration testing teams will realize that ChatGPT can be used to automate and create excellent social engineering tests and Red Team use cases. ChatGPT will likely become a security tool in the future as cybersecurity teams realize its potential to make their lives easier.
Furthermore, AI-powered business applications will introduce new vulnerabilities. Consider this: a few decades ago, nobody had heard of "application security" until cybercriminals started to compromise company after company with attacks like SQL injection and cross-site scripting (XSS). Suddenly, a whole new discipline of cybersecurity was born, one that has since matured into an established field.
Expect AI-focused security to have the same effect. AI-specific attacks like model evasion and inference attacks can bypass machine learning models and result in data leakage and even a complete compromise of the machine learning model itself. This can be a security nightmare for companies using models in critical industries like healthcare, government, and finance. These attacks are quickly gaining popularity among cybercriminals who realize they can tamper with AI-based decision-making for their benefit. As a result, there will surely be new products in the near future designed to protect against these attacks, like firewalls for AI applications.
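To see why model evasion is a real class of attack rather than a buzzword, here is a toy sketch in plain NumPy. Everything here is invented for illustration (a two-feature logistic-regression "classifier" trained on synthetic data); real evasion attacks target far larger models, but the core idea is the same: nudge an input along the model's gradient until the classification flips.

```python
import numpy as np

rng = np.random.default_rng(0)

# Train a toy logistic-regression classifier on synthetic
# two-dimensional feature vectors (class 1 = "malicious").
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

w, b = np.zeros(2), 0.0
for _ in range(500):                    # plain gradient descent
    p = 1 / (1 + np.exp(-(X @ w + b)))  # predicted probabilities
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * np.mean(p - y)

def predict(x):
    return int((x @ w + b) > 0)         # 1 = flagged as malicious

# Evasion: step against the gradient of the decision score.
# For a linear model that gradient is simply the weight vector w.
sample = np.array([2.0, 2.0])           # clearly in the malicious cluster
step = -0.5 * w / np.linalg.norm(w)
evading = sample.copy()
while predict(evading) == 1:            # perturb until it evades
    evading = evading + step

print(predict(sample))   # 1: the original sample is detected
print(predict(evading))  # 0: the perturbed sample slips past the model
```

A small, targeted perturbation is enough to cross the decision boundary while the input remains superficially similar, which is exactly the blind spot that defenses against AI-specific attacks will need to cover.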
Additionally, cybersecurity teams will need to learn new techniques for identifying such weaknesses in machine learning models and testing them. Cybersecurity assessments or threat models of an AI-based application currently only cover risks related to applications or the underlying infrastructure, not AI. This presents a blind spot within cybersecurity teams that needs to be fixed to prevent a new wave of attacks targeting AI-based applications.
AI-powered cybercrime is a reality
Another piece of bad news is that cybercriminals have already started to weaponize AI, and this will only increase over time. By offloading lower-level attacks onto AI-based bots, cybercriminals can free up their time to further refine and improve their capabilities. Imagine an AI continually updating ransomware and DDoS attacks by analyzing newly released security updates, removing any manual effort on the attackers' part. The learning aspect of machine learning can be abused to explore a target's environment and launch increasingly sophisticated attacks until a vulnerability is found. Beyond these kinds of attacks, machine learning can also create malware that operates autonomously, without any human agent to drive it.
And this is just the technical side of cyberattacks. AI can empower scams and digital fraud as well. We have already seen a rise in deepfake scams, which take social engineering attacks to a new level. By using this technology to impersonate a trusted person over remote meetings, cybercriminals can gain access to trusted networks and databases. This new scam is dangerous enough for the FBI to release a special advisory on the issue. Cybersecurity teams will need to update their awareness training to make sure staff are trained on how to identify potential deepfake scams, which are expected to get harder to detect over time.
Other social engineering attacks (like phishing) used to be detectable thanks to the numerous typos and mistakes in their emails. With ChatGPT, however, cybercriminals now have a tool that can write convincing emails in seconds. This can easily be automated to form a pipeline of phishing attacks virtually indistinguishable from legitimate corporate emails.
What is the future of AI in cybersecurity?
Like it or not, AI will bring massive changes to the world of cybersecurity, in both helpful and dangerous ways. It is going to change how cybersecurity works by removing repetitive tasks and offloading them onto AI bots. At the same time, it will introduce new types of vulnerabilities and scams which were previously not possible.
CISOs also need to realize that AI will empower but not replace essential human skills. Leadership skills like empathy and emotional intelligence will always be in demand, and no AI can motivate a cybersecurity team or take them out for lunch when morale is low (yet).
As we look ahead to 2023 and beyond, it is clear we are entering a new and uncharted era of technological innovation. AI is here to stay, and cybercriminals are already using it, so cybersecurity teams must learn about this technology and how to use it as well.