Algorithmic policing risks intensifying systemic racism, harming privacy and Charter rights: report

A new report on algorithmic policing technologies is calling for a moratorium on their use until the government carries out a comprehensive examination of their human rights implications and of the legal reforms they would require.

The Citizen Lab at the University of Toronto’s Munk School and the International Human Rights Program at the University of Toronto’s Faculty of Law released the report, “To Surveil and Predict: A Human Rights Analysis of Algorithmic Policing in Canada,” on Tuesday. The report states that two police services – Vancouver and Saskatoon – have confirmed they are currently using or developing predictive algorithmic technologies, and other police forces have acquired technologies that provide that capability.

The report warns that these technologies risk reinforcing systemic bias against Black and Indigenous people and threaten the privacy and Charter rights of everyone. It states that the Canadian legal system currently lacks sufficient safeguards to ensure algorithmic policing is used constitutionally and subject to proper regulatory, judicial and legislative oversight.

“The top line finding is that there are algorithmic policing technologies being used and under development and consideration in Canada,” says Cynthia Khoo, a research fellow at the Citizen Lab and a technology and human rights lawyer.

“There’s enough evidence to show that there’s a tremendous risk of human rights violations if we’re not careful about the implementation of these technologies, and in deciding whether you even use them at all.”

Algorithmic policing technologies are a variety of tools that draw inferences from mass data processing to predict potential unlawful activity or to analyse data through automated surveillance. The technology complements traditional investigative methods and allows police to allocate resources more effectively. Facial recognition, automated licence plate readers and social media surveillance algorithms are forms of this technology. In general, its use is more widespread in the U.S. and the U.K. than in Canada, the report said.

The report warns that, because these systems are trained on historical police data, historically marginalized groups may find themselves caught in a “negative feedback loop”: the algorithm reads past systemic bias as a reason to label those groups a heightened risk, which multiplies that bias going forward.
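The dynamic the report describes can be illustrated with a deliberately simplified sketch, not drawn from the report itself: assume two areas with identical underlying offence rates, where one area starts with more recorded incidents because of past over-policing, patrols are directed to the predicted “hotspot,” and more patrols mean more incidents get recorded. All figures and the allocation rule below are hypothetical.

# Toy illustration of the "negative feedback loop" (hypothetical numbers).
# Two areas have the same true offence rate, but Area A starts with more
# recorded incidents because of past over-policing. Each period, the area
# with the most recorded incidents is treated as the "hotspot" and receives
# the larger share of patrols; more patrols mean more offences are recorded,
# so the gap in the records widens even though the underlying rates are equal.

TRUE_RATE = 100              # identical underlying offences per period in each area
DETECTION_PER_PATROL = 0.02  # fraction of offences recorded per patrol unit
TOTAL_PATROLS = 50

# Historical records: Area A is over-represented due to past bias.
recorded = {"Area A": 120.0, "Area B": 80.0}

for period in range(1, 6):
    hotspot = max(recorded, key=recorded.get)
    for area in recorded:
        # The predicted "hotspot" gets two-thirds of the patrols.
        patrols = TOTAL_PATROLS * (2 / 3 if area == hotspot else 1 / 3)
        # More patrols -> more of the (equal) true offences get recorded.
        recorded[area] += TRUE_RATE * min(1.0, DETECTION_PER_PATROL * patrols)
    share_a = recorded["Area A"] / sum(recorded.values())
    print(f"Period {period}: Area A holds {share_a:.1%} of recorded incidents")

Run over several periods, Area A's share of recorded incidents climbs steadily even though, by construction, both areas offend at exactly the same rate.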

“There are critical questions to be answered regarding whether algorithmic surveillance technologies should be used at all given known problems regarding inaccuracy, ineffectiveness, and biases in the technologies,” says Kate Robertson, report co-author, Citizen Lab research fellow and criminal defence lawyer at Markson Law in Toronto.