
Mathematicians Call For End Of Police’s Predictive Crime AI

Law enforcement’s use of AI to determine where crime will take place and by whom is incredibly dangerous to a free society, and amounts to little more than the use of a crystal ball or tarot cards. Thousands of mathematicians are calling for an end to it. ⁃ TN Editor

After a flurry of police brutality cases this year and protests sweeping U.S. streets, thousands of mathematicians have joined scientists and engineers in calling for a boycott of artificial intelligence work for law enforcement.

Over 2,000 mathematicians have signed a letter pledging to boycott all collaboration with police and urging their colleagues to do the same; the letter is set to appear in a future publication of the American Mathematical Society, Shadowproof reported.

The call to action for the mathematicians came after the police killings of George Floyd, Tony McDade, Breonna Taylor, and many more just this year.

“At some point, we all reach a breaking point, where what is right in front of our eyes becomes more obvious,” says Jayadev Athreya, a participant in the boycott and Associate Professor of Mathematics at the University of Washington. “Fundamentally, it’s a matter of justice.”

The mathematicians wrote an open letter, collecting thousands of signatures for a widespread boycott of police use of algorithms in policing. Every mathematician within the group’s network pledges to refuse any and all collaboration with law enforcement.

The group is organizing a wide base of mathematicians in the hopes of cutting off police from using such technologies. The letter’s authors cite “deep concerns over the use of machine learning, AI, and facial recognition technologies to justify and perpetuate oppression.”

Predictive policing is one key area where some mathematicians and scientists have enabled racist algorithms, which tell cops to treat specific areas as “hotspots” of potential crime. Activists and organizations have long criticized the bias in these practices: algorithms trained on data produced by racist policing will reproduce that prejudice, “predicting” where crime will be committed and who is potentially a criminal.

“The data does not speak for itself; it’s not neutral,” explains Brendan McQuade, author of Pacifying the Homeland: Intelligence Fusion and Mass Supervision. Police data is “dirty data” because it does not represent crime, but policing and arrests.

“So what are its predictions going to find? That police should deploy their resources in the same place police have traditionally deployed their resources.”
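
McQuade’s point is easy to demonstrate. Below is a minimal, hypothetical simulation of the feedback loop he describes: a toy “hotspot” model that allocates patrols in proportion to past arrest counts. The district names, starting numbers, and allocation rule are all invented for illustration; this is a sketch of the mechanism, not any actual vendor’s system.

```python
# Toy sketch of the predictive-policing feedback loop described above.
# All districts, counts, and rates are invented for illustration only.
import random

random.seed(42)

districts = ["A", "B", "C", "D"]
true_crime_rate = {d: 10 for d in districts}    # identical underlying crime everywhere
arrests = {"A": 30, "B": 10, "C": 10, "D": 10}  # district A was historically over-policed

for year in range(5):
    total = sum(arrests.values())
    # "Predictive" step: allocate 100 patrols proportionally to past arrests.
    patrols = {d: round(100 * arrests[d] / total) for d in districts}
    # Observation step: arrests scale with patrol presence, not with crime,
    # because every district has the same underlying rate.
    for d in districts:
        observed = sum(random.random() < 0.1
                       for _ in range(patrols[d] * true_crime_rate[d]))
        arrests[d] += observed
    print(f"year {year}: patrols={patrols}")
```

Even though every district has the same underlying crime rate, district A’s historical over-policing never washes out: the extra patrols generate extra arrests, which the model reads as confirmation that A is a “hotspot,” exactly as McQuade describes.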

Many, if not all, U.S. states and major cities are thought to use some type of predictive policing or pre-crime software; known users include Chicago, Atlanta, Tacoma, New York, and Los Angeles, though not without public protest. As Activist Post previously reported, many of these jurisdictions have been exposed for using Palantir software in their predictive crime algorithms, including Florida, where police terrorized and monitored residents of Pasco County.

Police organizations across the U.S. have been using what are known as “heat lists,” or pre-crime databases, for years. What is a “heat list,” you may ask?

Well, “heat lists” are basically algorithm-compiled databases of people whom police suspect may commit a crime. Yes, you read that right: a person who might commit a crime. How these lists are generated, and what factors determine that an individual “may commit a crime,” is unknown.

Read full story here…
