There is significant potential for AI within the security and compliance industry, yet a lack of regulation is holding progress back. There is a consensus that if governments take the lead on regulation, they will stifle innovation, while if individuals take the lead, they will neglect protection. The solution, then, is a more collaborative approach to both innovation and safety. Matt Cooper, Director, GRC at Vanta, explores how collaboration can light the way for AI regulation.
AI is constantly proving its ability to accelerate cumbersome tasks and free individuals up to dedicate their time to high-value activities. This is no different for those working in the security and compliance industry. According to data from Vanta’s State of Trust 2024 report, time spent on compliance tasks for UK organisations has reached over 12 working weeks a year. Given the obvious opportunity for AI to revolutionise this time-heavy task, it’s perhaps no surprise that two-thirds (66%) of UK businesses plan to invest more in security around the use of AI within their organisation in the next year.
AI can be truly transformative in how businesses streamline workflows and demonstrate continuous compliance. However, a lack of regulation – or at least a lack of pace in developing it – is holding this progress back. According to our research, over half of UK leaders (56%) say they are more likely to invest in AI if it is regulated. Without action to correct this, there is a very real risk that UK businesses will continue to be sidetracked by manual work and, in the long run, fall behind international counterparts who are harnessing AI’s potential.
Creating the optimum environment for regulation
An urgent question, then, is where we should look to drive this push for regulation. There is a consensus that if governments take the lead, they may stifle innovation, while if individuals take the lead, they could neglect protection. But in reality, the key to progress isn’t siding with one over the other – it’s finding the right balance between them.
This requires governments, academics and business leaders to work together to create the optimum environment for businesses to shape what regulation looks like. This means sharing AI-based research, insights and concerns and finding common ground for action.
Businesses can help by volunteering to participate in regulatory sandboxes that offer a low-risk environment for experimentation and innovation. This supports risk mitigation while also giving regulators far better insights into emerging technologies – insights they can use to create more effective, real-world-ready regulations.
At the same time, governments should focus on championing ethical AI. This could take the shape of clear governance frameworks and/or the promotion of accountability. Governments also need to set the right conditions for ethical AI to thrive – using the aforementioned regulatory sandboxes to speed up the pace of regulation so that responsible innovation is encouraged – and to foster public-private partnerships that make knowledge sharing the norm. Further, at a more macro level, education and training should be made available so that individuals are aware of both the risks and the opportunities – and why collaboration is the way forward.
This might sound like utopia, but while the approaches of each party may differ, the desired end product is the same: an innovative and safe future with AI. By recognising our duty to work together and by pooling our insights and experiences to benefit regulation, we can build a future where AI is used to its potential.
Promising signs for regulation, with room for improvement
It’s important to note that leadership isn’t standing still in its efforts to regulate AI. Europe’s AI Act is the first piece of legislation of its kind, aiming to pave the way for ‘trustworthy’ AI, with the first of its compliance deadlines arriving in early 2025.
In the UK, the Cyber Security and Resilience Bill was recently announced as a new piece of legislation intended to tackle increased cyberthreats. There is much discourse around what this bill needs to achieve in order to be considered successful – not least the inclusion of AI regulation. As the UK government develops it further, it must recognise its responsibility to ensure that collaboration is at the heart of promoting a safer digital environment for all.
However, it’s important to remember that the goal here isn’t regulation for its own sake. Over a third (37%) of UK leaders rank ‘keeping up with evolving regulation’ as their top security concern. For businesses, then, the juice has to be worth the squeeze: regulation must be genuinely helpful, not just a complicating necessity.
Paving the way for collaborative compliance
The future of AI in the UK promises much, with a number of exciting investments from the likes of Amazon and Blackstone. While we cannot afford to take our foot off the innovation pedal, we must also not lose sight of the role of regulation in this revolution – and the responsibility of governments to develop and implement such guidelines effectively.
As has always been true, evolving technologies bring evolving compliance, and to manage this, collaboration must become a core part of regulation. Governments, academics, innovators and leaders have, for the most part, already recognised the potential of AI. Now, they need to recognise the necessity of regulation – and of building a future that safeguards rather than stifles.