Ensuring reliability and promoting confidence in AI-based research

Jane Frost CBE, chief executive of MRS, explains how AI can be used appropriately in research

Artificial intelligence (AI) has been on everyone’s lips recently – its potential, its limitations and its future – and the UK has taken to the technology with particular enthusiasm. We are one of the top five chatbot-using countries worldwide (AI Multiple), almost half of our healthcare organisations work with it (Microsoft) and it has the potential on its own to expand the economy by 10.3 per cent by 2030 (PwC).
The research sector has contributed to its rapid growth and has largely embraced the potential of AI. It has major benefits when used correctly, making us more efficient and promoting even greater accuracy within our work. It opens up new forms of cutting-edge research, helping us to explore individuals’ habits in their natural environments, such as in the workplace or at home, like never before.
Most of all, it gives us more time to develop actionable insights for decision-makers, our sector’s primary responsibility.
This is the principle behind customer insights agency Streetbees. Powered by a major global database, Streetbees uses a WhatsApp-style app to collect data daily and in real time from its 3.5 million participants worldwide. The AI technology – the recipient of an MRS Award in 2022 – then compiles, patterns and groups the vast range of responses to produce insight about everything from emotions in the workplace to the motivations behind grocery purchases during the cost-of-living crisis. As such, instead of spending time sifting through the data, the researchers can focus on producing recommendations to support commercial goals, social enterprise or policymaking.

However, despite its vast potential, AI is not without its faults. At the moment, public trust in the overall concept is low. KPMG reports that only a quarter of people believe AI provides accurate information – a view that is, to an extent, understandable considering how often it reaches wrong or misleading conclusions. Nor should we expect perfection soon. Visual recognition systems, a technology that has been around for far longer, are still not without error.
Confusion surrounding how it operates is stirring fears further, with Google boss Sundar Pichai saying in a recent interview that he did not fully understand his own chatbot, Bard. In fact, many believe it will soon become a tool for disinformation, and their number includes Geoffrey Hinton, the ‘Godfather of AI’, who quit Google earlier this year because of fears about its future power.
As such, leaders and policy makers should be wary when relying on AI-produced insight and check its provenance, just as researchers should be rigorous in ensuring that it’s delivered to as high a standard as possible. Dialogue is key – the research sector is producing high-quality AI-led work – and we need to make sure methodology is transparent and robust, breeding faith in this rapidly advancing technology.

The Census – underpinning AI’s future
For our sector AI is an analytical tool, converting data into insights, from which we can guide discussions and develop actions. The data is pivotal, and this should be collected according to the rigorous high standards set out by MRS.
For example, using the internet as the sole basis for research simply won’t do. Results will be biased, tilting towards more digitally literate demographics such as Gen Z, and excluding groups that are generally under-researched because they are too costly to reach, from non-English speakers to those without access to the internet.
A good example of how this bias can be avoided comes from market research agency Kantar’s LinkAI tool. It predicts how a typical sample of consumers would view an advert, and is based on a demographically representative database of 230,000 video tests and 35 million human interactions. Within fifteen minutes, the AI will produce a rating of an advertisement or public announcement’s impact, after which the researchers can analyse the data, suggest improvements and provide recommendations. A rise in creative-quality score from ‘average’ to ‘best’ typically leads to a 30 per cent increase in return on investment – numbers that helped the tool win a 2022 MRS Award.
But it’s not just the quantity of the supporting data that matters. It also needs to be rooted in nationally representative sources – that’s where the Census comes in.

The decennial survey produces the most robust findings in the country and is one of the most informative and trusted surveys in the world, with 97 per cent of households responding to the latest English and Welsh version in 2021. It is the bedrock of good research, informing representative sampling – for instance, we know that an accurate national snapshot will require 51 per cent of participants to be women and 19 per cent to be aged 65 or over. AI-based research certainly relies on it.
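To illustrate how Census figures feed into representative sampling, here is a minimal sketch of quota checking and post-stratification weighting. The 51 per cent and 19 per cent targets come from the article; the sample counts, function name and group labels are hypothetical, invented purely for illustration.

```python
# Hypothetical sketch: checking a sample against Census-derived quota targets
# and computing simple post-stratification weights. Target shares (women 51%,
# aged 65+ 19%) are from the article; the sample counts below are invented.

census_targets = {"women": 0.51, "age_65_plus": 0.19}

# Invented sample: respondents in each group, out of 1,000 in total.
sample_counts = {"women": 430, "age_65_plus": 150}
sample_size = 1000

def post_stratification_weights(targets, counts, n):
    """Weight each group so its weighted share matches the Census target.

    A weight above 1 means the group is under-represented in the sample
    and its responses should count for more; below 1, the reverse.
    """
    weights = {}
    for group, target in targets.items():
        observed = counts[group] / n
        weights[group] = target / observed
    return weights

weights = post_stratification_weights(census_targets, sample_counts, sample_size)
for group, w in weights.items():
    print(f"{group}: weight {w:.3f}")
```

In this invented sample both groups fall short of their Census targets, so both weights come out above 1 – a simplified version of the kind of adjustment that keeps AI-based analysis nationally representative.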
The Census’ reliability comes from its national scale and its rigour. The most recent version was the first to be digital-first – a success, as the high response rate shows – but the use of face-to-face interviewers for those not digitally connected and other hard-to-reach demographics was key. It ensured that whole groups weren’t missed through corner-cutting and that the Census remains a robust benchmark.
It underpins a vast amount of UK research, including AI-based work – without it, our findings wouldn’t be representative, and our actionable insights would be ineffective.

The parameters for sustainable growth – transparency and responsibility
As the UK’s professional body for research, insight and analytics, MRS sets high standards through our Code of Conduct that we require all our members to meet with every research project. It’s part of how we are championing innovation, fostering trust in AI research tools and ensuring decision-makers can treat results with confidence. It is a major motivation behind our recent work with the Government, which has included consulting closely with the All Party Parliamentary Group on Data Analytics.
Responsibility is key. As we advised in our submission to the Government’s artificial intelligence white paper (‘A pro-innovation approach to AI regulation’), where AI technologies are used across a project journey, as they often are in research, liability must be clear. If the results are misleading or inaccurate, the part of the supply and use chain that is responsible – be that developers, users or service providers – should be identifiable via legislation.
Doing this will breed ownership over the quality of the AI tool and encourage the liable party to promote transparency in its operation – opening it up to the public or private sector client and showing why it should be trusted.
However, this doesn’t mean quality control should be the responsibility of one group alone. The entire supply chain needs to be involved, as do the clients receiving the insights. Clients must challenge suppliers if the methodology isn’t immediately obvious, and suppliers should be open and honest about how their AI systems work.
A practical step would be for clients to require research suppliers to clarify or explain the provenance of any AI tool when responding to a brief – ensuring that its credentials are known from the beginning of any relationship. But while this would be a legitimate demand of research agencies, we must keep in mind that many of them are small or medium-sized. For these businesses, being able to determine and report AI usage easily and cost-effectively will be essential.
Research needs to set itself apart from the wider AI movement. Fundamentally, our sector is evidence-based, and that is mirrored in our work with AI. We want decision-makers to embrace the impressive findings AI can deliver and to treat the recommendations that come from them with confidence.
While UK legislation is being developed, we all have an immediate role to play to ensure high standards are maintained – and confirming that the most recent Census data is being used to achieve national representation is a core part of that. Additionally, we must counter scepticism about wider AI by being ethical and transparent with our work and clearly showing how the research system operates. That way, all society can feel the undoubted benefits it has to offer.