Author: Simon Rycroft
Building trust in AI
Amid the excitement surrounding AI, there’s just as much uncertainty. Taking a risk-based approach to AI assurance can mitigate that uncertainty while allowing businesses to unlock the potential AI has to offer…
There’s plenty of excitement surrounding AI, and it’s no wonder – the potential benefits are far-reaching and substantial. But that excitement is tempered by no small amount of fear.
Between vendor hype, scare stories, political posturing and regulatory murmurings, organisations are left scratching their heads over how to leverage AI securely, ethically and responsibly – without running the risk of unintended consequences.
To do this, they need meaningful AI assurance.
Before we talk more about AI assurance and how businesses can use it to realise the benefits of AI, it’s important to understand some of the key issues that AI presents.
The problem with AI
Our research has shown that organisations are struggling to realise the potential benefits of AI. Too often, AI is designed as a lab experiment, with its developers struggling to predict how it will behave once deployed in the real world.
The bottom line is that deploying unpredictable AI, or AI that is vulnerable to threats such as adversarial AI or data poisoning, carries substantial business risk.
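To make “adversarial AI” concrete, the toy sketch below shows how a small, deliberately chosen perturbation can flip a simple linear classifier’s decision. This is purely illustrative – not CRMG or Advai tooling – and all weights, inputs and numbers are made up:

```python
# Illustrative only: a fast-gradient-style perturbation flipping the
# decision of a tiny hand-built linear classifier. All values hypothetical.

def predict(weights, x):
    """Linear classifier: positive weighted score -> class 1, else class 0."""
    score = sum(w * xi for w, xi in zip(weights, x))
    return 1 if score > 0 else 0

def adversarial_example(weights, x, epsilon):
    """Nudge each feature by epsilon against the sign of its weight --
    the direction that most reduces the classifier's score."""
    return [xi - epsilon * (1 if w > 0 else -1) for w, xi in zip(weights, x)]

weights = [0.9, -0.4, 0.2]
x = [0.5, 0.3, 0.1]                  # classified as class 1 (score 0.35)
x_adv = adversarial_example(weights, x, epsilon=0.4)

print(predict(weights, x))           # 1
print(predict(weights, x_adv))       # 0 -- the perturbation flips the decision
```

Real attacks target far larger models, but the principle is the same: inputs that look almost unchanged to a human can produce a completely different output, which is exactly the kind of behaviour assurance testing needs to surface.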
Businesses are aware of these risks, and this awareness is what fuels the uncertainty. Many are asking themselves questions such as:
How can we explain what AI is really doing?
How can we trust what AI does or produces?
What about the protection of personal data?
How can we ensure it behaves ethically?
Can we safely remove the human from the loop?
How can we gain the benefits of AI safely, securely and predictably?
These questions and more can be answered by taking a technically backed, risk-based approach to AI assurance. And that’s why CRMG has teamed up with Advai.
This enables us to build on Advai’s advanced AI stress testing techniques within a risk-based approach to assurance that many business leaders and risk managers will recognise and understand.
Why take a risk-based approach to AI assurance
There are many benefits to taking a risk-based approach to AI assurance. The Board can govern the use of AI in line with its wider appetite for business risk, and organisations can leverage existing business risk management principles and architecture, underpinned by CRMG’s risk experience and by the advanced AI stress testing techniques Advai pioneered in the defence sector. All of this is supported by a control framework that meets the requirements of most emerging AI regulations.
How risk-based AI assurance actually works
A risk-based approach to AI assurance has three key components.
AI risk architecture: this means taking advantage of existing risk management and reporting arrangements and enhancing them to reflect AI-specific threats and controls.
AI assurance framework: this ensures that AI is managed in a consistent way, in line with risk profiling and emerging regulation, such as the EU Artificial Intelligence Act, to provide confidence that AI is being deployed responsibly.
Advanced technical stress testing: techniques pioneered by Advai enable specific instances of AI to be measured against intended outcomes and integrity-related issues to be identified early.
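As a loose illustration of what measuring an AI instance against intended outcomes might involve, the sketch below checks how stable a model’s prediction stays under small random input perturbations. This is a hypothetical, simplified example – not Advai’s actual techniques – and the model and thresholds are invented for illustration:

```python
import random

def prediction_stability(model, x, epsilon, trials=200, seed=0):
    """Fraction of random perturbations (within +/-epsilon per feature)
    that leave the model's prediction unchanged -- a crude robustness score."""
    rng = random.Random(seed)
    baseline = model(x)
    stable = sum(
        1 for _ in range(trials)
        if model([xi + rng.uniform(-epsilon, epsilon) for xi in x]) == baseline
    )
    return stable / trials

# Hypothetical model: flags an input if its feature sum exceeds 1.0.
model = lambda x: 1 if sum(x) > 1.0 else 0

robust = prediction_stability(model, [0.4, 0.4, 0.4], epsilon=0.05)     # far from the decision boundary
fragile = prediction_stability(model, [0.34, 0.34, 0.34], epsilon=0.05)  # near the decision boundary
print(robust, fragile)
```

A score near 1.0 suggests the decision is stable in that region of input space; a lower score flags inputs where tiny changes alter the outcome – exactly the kind of integrity issue worth identifying before deployment.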
Take the first step towards AI assurance
If your organisation wants to realise the benefits of AI by implementing a risk-based approach, get in touch with the team here.
We can offer an AI risk orientation workshop for senior management, or simply chat about your business and how it might be impacted by AI.