Government needs AI incident reporting system, says think tank

A think tank has emphasised the need for an incident reporting system to log artificial intelligence (AI) misuse or malfunctions. 

The Centre for Long-Term Resilience (CLTR) said the Department for Science, Innovation and Technology (DSIT) may miss crucial insights into AI failures if it does not put such a system in place.

The CLTR has warned that without a robust incident reporting framework, DSIT will miss incidents involving highly capable foundation models, such as bias, discrimination, or misaligned agents, that could cause harm.

The think tank also said DSIT will lack visibility into incidents arising from the government's use of AI in public services, which could directly harm the public, for example through benefits being improperly revoked.

According to the CLTR, incident reporting is a "proven safety mechanism" that can support the government's context-based approach to AI regulation.

It said that, at present, the UK government lacks "an effective incident reporting framework." 

The think tank recommended the UK government take urgent steps to address the issue. 

These include creating a system for the government to report incidents in its own use of AI in public services, which it said can help the government "responsibly improve" public services.

Additionally, it suggested commissioning UK regulators and consulting experts to confirm where the most concerning gaps lie. 

The CLTR said its mission is to "transform global resilience to extreme risks" by working with governments and institutions to improve governance, processes, and decision making.