Despite the rapid acceleration of artificial intelligence (AI) adoption, only 37% of IT decision-makers are prioritising securing AI from cyber threats when implementing the technology, new research by digital transformation company ANS reveals. This exposes a worrying gap in readiness that could be putting businesses at increased risk of cyber attacks.
The survey of over 2,000 IT decision-makers, conducted in partnership with Censuswide*, found that 94% of IT decision-makers now see AI as central to their organisation’s corporate strategy. Improved decision making (39%), enhanced customer experience (37%), and product and service innovation (36%) were among the most widely cited factors driving adoption, highlighting AI’s elevation from a narrow operational tool to a key part of delivering strategic, business-wide objectives.
However, as organisations embrace AI, their attack surfaces expand, giving cyber criminals fresh opportunities to exploit users and systems. Concerningly, the research suggests security for AI is being overlooked: only around a third (37%) of respondents said that cybersecurity is one of their organisation’s top three challenges when adopting AI, and less than a third (29%) stated that they see it as a strategic enabler within their AI strategy.
Notably, when asked about the role of security in their AI strategy, 37% described it as a compliance requirement, unnecessary cost, or non-essential. This suggests many organisations are prioritising short-term AI gains and viewing additional investment in cybersecurity as a barrier rather than an enabler. That mindset leaves organisations vulnerable to breaches, compliance failures, and potential setbacks in revenue growth, time-to-market, and competitive advantage.
Kyle Hill, CTO at ANS, said: “It’s clear that organisations understand the strategic value of AI, but recognition of security’s role in realising this value is lagging. IT decision-makers have an important part to play in shifting perceptions of security for AI within their organisations, ensuring it’s seen as the foundation for responsible acceleration rather than a speedbump that holds back innovation.”
Encouragingly, 42% of IT decision-makers claim to take a proactive approach to AI security, embedding it into development and strategy. This should include staff training, ensuring employees understand how to use AI safely and act as a strong line of defence against cyber attacks. But this also means more than half are still treating it as an afterthought, limiting resources and diminishing its potential as a strategic enabler.
Kyle continued: “Cyber attacks are rising and causing more disruption across the board, and as AI becomes central to operations, the stakes rise. Without the right safeguards in place, organisations could be opening themselves up to unnecessary risk, with the vulnerabilities outweighing the advantages.
“Organisations need to make sure they’re not only seeing the strategic value of AI, but the strategic value of responsible AI. If they can shift to this outlook, they will be better equipped to fend off threats targeting their AI systems and create a platform for innovation that drives real, tangible value.”
You can read the full AI Readiness Secured report here.