David Davis MP writes on how proper laws on Artificial Intelligence could prevent more algorithm fiascos


As published in the Times:

After the futures of thousands of students were thrown into turmoil in August because of the unfair and unaccountable decision-making over their A-level grades, the government indicated that an independent review of the statistical model would be undertaken.

It looks as though the government has handed this job to the Centre for Data Ethics and Innovation (CDEI), which is shortly due to publish a review into bias in algorithmic decision-making. This body is chaired by none other than the chairman of Ofqual, Roger Taylor, who was at the helm when the exams regulator blithely sailed into the well-forecast storm over A-level grading.

If Mr Taylor’s conflict of interest weren’t problem enough, it is impossible to see how the CDEI could act impartially anyway. A commitment was made to parliament that the centre would be a fearless, independent statutory body. In fact, it has been established as an office of the Department for Digital, Culture, Media and Sport. It follows that its advice will be signed off by the very ministers who are its intended audience.

This is symptomatic of a wider problem, reflected in the most fundamental questions concerning the regulation of AI, an area in which Britain should be a leader. We have a strong history and international reputation in all areas critical to getting AI regulation right: law, governance, ethics, and technological innovation. The UK presidency of the G7 next year could be an ideal platform from which to lead.

We are seeing growing reliance on big data-driven decision-making systems by government and companies. Until the CDEI resolves its conflicts of interest and is able to steer a clear, independent course it will not be capable of leading the AI regulation debate.

I am leading a cross-party group of MPs who propose a new way forward. We want to see the idea of a new Accountability for Algorithms Act, proposed by the Institute for the Future of Work, become reality. We need an overarching, principles-driven approach to put people at the heart of developing and taking responsibility for AI, ensuring it is designed and used in the public interest, as the institute argues in a report published today.

If these principles were put on a statutory footing, we would be far better protected as a society against the kinds of failures that devastated the lives of many thousands of young people this summer. The rewards of getting AI regulation right, for society and the economy, will be great.