AI experts say research into algorithms that claim to predict crime must end

A coalition of AI researchers, data scientists, and sociologists is calling on the academic world to stop publishing studies that claim to predict an individual’s criminality using algorithms trained on data like facial scans and crime statistics.

Such work is not only scientifically illiterate, says the Coalition for Critical Technology, but perpetuates a cycle of prejudice against Black people and people of color. Numerous studies show that the justice system treats these groups more harshly than white people, so any software trained on this data only amplifies and entrenches societal bias and racism.

“Let’s be clear: there is no way to develop a system that can predict or identify ‘criminality’ that is not racially biased — because the category of ‘criminality’ itself is racially biased,” the group writes. “Research of this nature — and its accompanying claims to accuracy — rest on the assumption that data regarding criminal arrest and conviction can serve as reliable, neutral indicators of underlying criminal activity. Yet these records are far from neutral.”

An open letter written by the coalition was drafted in response to news that Springer, the world’s largest publisher of academic books, planned to publish just such a study. The letter, which has now been signed by 1,700 experts, calls on Springer to rescind the paper and on other academic publishers to refrain from publishing similar work in the future.

“At a time when the legitimacy of the carceral state, and policing in particular, is being challenged on fundamental grounds in the United States, there is high demand in law enforcement for research of this nature,” the group writes. “The circulation of this work by a major publisher like Springer would represent a significant step toward the legitimation and application of repeatedly debunked, socially harmful research in the real world.”

In the study in question, titled “A Deep Neural Network Model to Predict Criminality Using Image Processing,” the researchers claimed to have created a facial recognition system that was “able to predict whether someone is likely going to be a criminal … with 80% accuracy and no racial bias,” according to a now-deleted press release. The paper’s authors included Ph.D. student and former NYPD police officer Jonathan W. Korn.

In response to the open letter, Springer said it would not publish the paper, according to MIT Technology Review. “The paper you are referring to was submitted to a forthcoming conference for which Springer had planned to publish the proceedings,” the company said. “After a thorough peer review process the paper was rejected.”

However, as the Coalition for Critical Technology makes clear, this incident is only one example of a broader trend within data science and machine learning, where researchers use socially contingent data to try to predict or classify complex human behavior.

In one notable example from 2016, researchers from Shanghai Jiao Tong University claimed to have created an algorithm that could likewise predict criminality from facial features. The study was criticized and refuted, with researchers from Google and Princeton publishing a lengthy rebuttal warning that AI researchers were revisiting the pseudoscience of physiognomy, a discipline founded in the 19th century by Cesare Lombroso, who claimed he could identify “born criminals” by measuring the dimensions of their faces.

“When put into practice, the pseudoscience of physiognomy becomes the pseudoscience of scientific racism,” wrote the researchers. “Rapid developments in artificial intelligence and machine learning have enabled scientific racism to enter a new era, in which machine-learned models embed biases present in the human behavior used for model development.”

Images used in the 2016 paper that claimed to predict criminality from facial features. The top row shows “criminal” faces, while the bottom row shows “non-criminals.”
Photo: Xiaolin Wu and Xi Zhang, “Automated Inference on Criminality Using Face Images”

The 2016 paper also demonstrated just how easy it is for AI practitioners to fool themselves into thinking they have found an objective system for measuring criminality. The researchers from Google and Princeton noted that, based on the data shared in the paper, all the “non-criminals” appear to be smiling and wearing collared shirts and suits, while none of the (frowning) “criminals” do. It is possible that this simple and misleading visual tell was guiding the algorithm’s supposedly sophisticated analysis.
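The failure mode the Google and Princeton researchers describe is a classic spurious correlation, and it is easy to reproduce. The sketch below is a hypothetical illustration on synthetic data (not the paper’s actual dataset or model): a classifier reports impressive accuracy by keying on a confound like smiling rather than on anything about the people themselves.

```python
# Hypothetical sketch on synthetic data: a classifier can score high
# "accuracy" by latching onto a confound (here, "smiling"), not onto
# anything meaningful about the label it claims to predict.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Labels: 0 = "non-criminal" photo, 1 = "criminal" photo (in reality these
# labels come from arrest records, which the letter argues are themselves biased).
y = rng.integers(0, 2, size=n)

# Spurious feature: in this toy dataset "smiling" tracks the label ~97% of the
# time purely because of how the photos were sourced, not any real connection.
smiling = np.where(rng.random(n) < 0.95, 1 - y, rng.integers(0, 2, size=n))

# Fifty genuinely uninformative "facial measurement" features.
noise = rng.normal(size=(n, 50))
X = np.column_stack([smiling, noise])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Typically reports well over 90% test accuracy, driven entirely by the confound.
print("test accuracy:", clf.score(X_test, y_test))
print("|weight| on 'smiling':", abs(clf.coef_[0][0]),
      "mean |weight| on noise features:", np.abs(clf.coef_[0][1:]).mean())
```

The point of the toy example is that headline numbers like “80% accuracy” say nothing about what the model actually learned; a dataset-collection artifact is enough to produce them.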

The Coalition for Critical Technology’s letter comes at a time when movements around the world are highlighting issues of racial justice, triggered by the killing of George Floyd by law enforcement. These protests have also seen major tech companies pull back on their use of facial recognition systems, which research by Black academics has shown are racially biased.

The letter’s authors and signatories call on the AI community to rethink how it evaluates the “goodness” of its work, thinking not just about metrics like accuracy and precision but about the social impact such technology can have on the world. “If machine learning is to bring about the “social good” touted in grant proposals and press releases, researchers in this space must actively reflect on the power structures (and attendant oppressions) that make their work possible,” write the authors.
