AI and Cybersecurity: Balancing Innovation with Caution

By Aaron Bugal, Field CTO APJ, Sophos

The ascent of artificial intelligence, undoubtedly one of the most influential technologies of recent decades, has produced a mixture of reactions from individuals, organisations, and countries.

Eyes widen as we explore its potential, concerns grow as it threatens jobs, and conversations take place at a global level on how it should be regulated.

However, for cybersecurity professionals, artificial intelligence presents a double-edged sword.

Although AI has shown the ability to enhance cybersecurity solutions with its pattern recognition, summarisation, and assistance capabilities, it also opens the door for threat actors to harness the technology in much more sinister ways.

So, in a world where we are in a constant race to out-innovate cybercriminals, what impact will AI have, especially as the technology itself continues to evolve?

New technologies mean new threats

Cybercriminals have proven they shouldn’t be underestimated. They are continually updating their tactics, strategies, and tools to breach businesses, and AI only strengthens their arsenal.

AI has commonly been used to help threat actors better imitate real people – altering voices, pictures, and messages to carry out convincing phishing attacks.

Beyond mimicking human behaviour, cybercriminals have begun to experiment with AI at a more technical level. Malicious GPTs have been advertised on cybercriminal marketplaces, with functions such as automated penetration testing and malware development.

However, much like legitimate industries and businesses, cybercriminals remain somewhat hesitant to embed the technology in their operations; for now, threat actors are mainly exploring generative AI through experimentation and proof-of-concepts.

This does not mean organisations should see this as a sign to slow down, as artificial intelligence will inevitably become a regular feature of cyberattacks. Instead, businesses should be evaluating whether they are using the technology in a secure and optimal way within their cybersecurity setup.

AI adoption is not about being first, but being smart

Businesses of all sizes are examining how AI can be used, with Sophos finding 98 per cent of organisations are using it within their cybersecurity infrastructure in at least some capacity.

Further to this, 65 per cent of organisations use cybersecurity solutions that include generative AI capabilities, and 73 per cent use solutions that include deep learning models.

While AI adoption in cybersecurity can bring many advantages, it also introduces a number of risks if approached incorrectly.

Poorly implemented AI models can inadvertently introduce considerable cybersecurity risks of their own – if a model isn’t provided with the right inputs, it cannot deliver adequate outcomes.

Organisations are alert to this risk: the vast majority (89 per cent) of cybersecurity professionals say they are concerned about how potential flaws in the generative AI capabilities of cybersecurity tools could harm their organisation, and 43 per cent say they are extremely concerned.

This alertness must also extend to AI implemented in tools outside of cybersecurity, as emerging technologies pose threats while still in their infancy.

Agentic AI, for example, has become highly topical recently, but will a technology that learns from humans be able to adequately defend itself from cyber threats? At its current level, AI should be approached with the expectation that it serves a single purpose; expecting an individual system or ‘AI agent’ to do everything with minimal human interference invites risk.

Therefore, an organisation’s artificial intelligence advances – both within its cybersecurity infrastructure and across its entire technology stack – must be made with guardrails up and thorough oversight.

Fighting fire with fire without getting burnt

In the ongoing race against cybercriminals, artificial intelligence will act as a multiplier of the innovation taking place on both sides. For businesses, avoiding the risks of AI within cybersecurity systems is possible when implementation is approached with care. This can be achieved through:

  • Inquiring about vendors’ AI capabilities: AI requires transparency, and asking cybersecurity vendors how their AI models are trained, what AI expertise their professionals have, and what their roll-out process for deploying AI capabilities looks like will help paint a clearer picture of AI development best practices.
  • Providing strict outlines for AI investment: AI investment cannot be rushed, so it is important to assess whether AI provides the best solution for current cybersecurity challenges, prioritise specific AI investments, and measure the impact of AI once it is implemented into cybersecurity infrastructure.
  • Remaining human-first in AI adoption: Organisations should never take a set-and-forget approach to cybersecurity, and this is even more the case when AI is involved. Ultimately, cybersecurity is a human responsibility, and AI should be used as an accelerant to support cybersecurity professionals, not a replacement for them.

Artificial intelligence will be a mainstay within organisations for many years to come. This is no different for cybersecurity; however, with such high stakes it is vital that AI is used correctly, or it will only work against its intended purpose – giving cybercriminals the leg up over organisations in this ongoing battle. It is not about implementing a range of AI capabilities to expand your cybersecurity infrastructure, but about implementing the right capabilities to address your cybersecurity needs.

 
