The police are making up the rules on live facial recognition as they go along, writes Sir David Davis
As published by LBC
The Government appears determined to introduce live facial recognition nationwide.
This comes after a number of police forces started using it without any proper parliamentary approval whatsoever, a practice that has already attracted criticism from the courts.
Labour seems intent on doing this without the intense scrutiny this sweeping change demands.
When the state proposes a major shift in how it monitors its citizens, the House of Commons must scrutinise it exhaustively, and must put the laws in place to ensure that it is used properly. The rest of Europe has done exactly that, whilst our Home Office has left the police forces to make up the rules as they go along.
This move fits a broader pattern.
The march towards a mandatory national digital identity. Attempts to abolish jury trials.
Now comes the move into mass biometric surveillance.
To be clear, there is a legitimate place for facial recognition technology: to locate violent criminals, terrorists or runaways, but always under the control of rigorous rules set by Parliament.
But that requires a clear legal framework, and we need answers. For how long can images be stored?
What protections exist when someone is wrongly matched by the software? Are the police allowed to keep images of people who have done nothing wrong?
What about images of children? What recourse does an innocent person have when their face becomes part of a police operation without their knowledge? Will the police be required to seek judicial authority to hold and use images? None of these questions has been answered.
As it stands, the police are making up the rules as they go along.
The familiar argument wheeled out, that only the guilty have anything to fear, collapses the moment the system misfires, as it has done on a number of occasions, particularly with members of ethnic minorities.
Errors ruin lives. It is easy to shrug off concerns until the red light lands on you or your family because of a technical glitch, a flawed watchlist, or a momentary resemblance to someone entirely different.
Once that happens, the damage is done.
As for the frankly odd questions in the consultation about trying to read emotions and predict crime from people’s behaviour in the streets, they sound like the script of Minority Report. Plainly, whoever drafted the consultation has a ridiculously inflated idea of the accuracy of current technology, and little appreciation of the risks of false predictions in these “pre-crime” scenarios.
This debate is not a side issue or a matter for quiet departmental guidance.
It goes to the core of how a free country polices itself.
Before any national rollout takes place, Parliament must set the boundaries, define the safeguards and decide the limits of surveillance in a democratic society.
No government should be allowed to expand its powers to watch and monitor its citizens without the explicit and informed consent of Parliament and of the citizens such a system proposes to watch.
