Resources

US Prisons Mull AI To Analyze Inmate Phone Calls

A key House of Representatives panel has pressed for a report on the use of artificial intelligence to analyze prisoners’ phone calls. “But prisoners’ advocates and inmates’ families say relying on AI to interpret communications opens up the system to mistakes, misunderstandings and racial bias,” reports Reuters. From the report:
The call for the Department of Justice (DOJ) to further explore the technology, to help prevent violent crime and suicide, accompanies an $81 billion-plus spending bill to fund the DOJ and other federal agencies in 2022 that the Appropriations Committee passed last month. The technology can automatically transcribe inmates’ phone calls, analyzing their patterns of communication and flagging certain words or phrases, including slang, that officials pre-program into the system. A House Democratic aide said in an emailed statement they were encouraging the DOJ “to engage with stakeholders in the course of examining the feasibility of utilizing such a system.”

Several state and local facilities across the country have already started using the tech, including in Alabama, Georgia and New York. The House panel wants the DOJ to look into potentially leveraging the technology for federal use and to identify gaps or shortcomings in the information it produces. Privacy groups say the technology could amplify racial bias in the justice system and unfairly subject prisoners to unaccountable artificial intelligence. Proponents dispute such criticisms, saying the tech is a vital time-saving tool for law enforcement and does not target specific groups.
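
The flagging step Reuters describes is easy to picture in code. Below is a minimal, hypothetical sketch of matching pre-programmed phrases against a call transcript; the phrase list, function name, and matching rule are all illustrative assumptions, not details of any deployed product.

    import re

    # Hypothetical phrase list: real systems reportedly let officials
    # pre-program words, phrases, and slang; these entries are invented.
    FLAGGED_PHRASES = ["burner phone", "move the package", "code word"]

    def flag_transcript(transcript: str) -> list[dict]:
        """Scan an auto-generated call transcript for pre-programmed phrases."""
        hits = []
        for phrase in FLAGGED_PHRASES:
            # Case-insensitive, word-boundary match so "Burner Phone" is caught.
            for m in re.finditer(rf"\b{re.escape(phrase)}\b", transcript, re.IGNORECASE):
                hits.append({"phrase": phrase, "offset": m.start()})
        return hits

    if __name__ == "__main__":
        text = "He said he would get a burner phone before the visit."
        print(flag_transcript(text))  # [{'phrase': 'burner phone', 'offset': 23}]

Even this toy version hints at the advocates’ objection: a literal phrase match has no sense of context, so slang or ordinary speech can trip the same flags that genuine planning would.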


Companies Are Using AI-Generated People To Appear More “Diverse”

AI startups are selling images of computer-generated faces that look like the real thing, offering companies a chance to create imaginary models and “increase diversity” in their ads without needing human beings. One firm is offering to sell diverse photos for marketing brochures and has already signed up clients, including a dating app that intends to use the images in a chatbot. Another company says it’s moving past AI-generated headshots and into the generation of full, fake human bodies as early as this month. The AI software used to create such faces is freely available and improving rapidly, allowing small startups to easily create fakes so convincing they can fool the human eye. The systems train on massive databases of actual faces, then attempt to replicate their features in new designs.

But AI experts worry that the fakes will empower a new generation of scammers, bots and spies, who could use the photos to build imaginary online personas, mask bias in hiring and damage efforts to bring diversity to industries. The fact that such software now has a business model could also fuel a greater erosion of trust across an Internet already under assault by disinformation campaigns, “deepfake” videos and other deceptive techniques.
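
The “train on real faces, then generate new ones” process described above is the generative adversarial network (GAN) idea: a generator invents images while a discriminator learns to tell them from real photos, and each improves against the other. Below is a toy sketch of that loop, assuming PyTorch and using random vectors as stand-ins for face images; every size and number here is illustrative.

    import torch
    import torch.nn as nn

    LATENT, DATA = 16, 64   # noise-vector size; stand-in for flattened pixels

    # Generator maps random noise to a fake "image"; discriminator scores realness.
    G = nn.Sequential(nn.Linear(LATENT, 128), nn.ReLU(), nn.Linear(128, DATA))
    D = nn.Sequential(nn.Linear(DATA, 128), nn.ReLU(), nn.Linear(128, 1))

    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    def real_batch(n=32):
        # Stand-in for a batch of real face photos from a training database.
        return torch.randn(n, DATA) + 2.0

    for step in range(200):
        # 1) Discriminator: label real data 1, generated fakes 0.
        real = real_batch()
        fake = G(torch.randn(32, LATENT)).detach()
        d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # 2) Generator: produce fakes the discriminator scores as real.
        fake = G(torch.randn(32, LATENT))
        g_loss = bce(D(fake), torch.ones(32, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

Swap the random vectors for millions of real photographs and scale up the networks, and that same adversarial pressure is what pushes the fakes toward fooling the human eye.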


AI is Sending People To Jail–and Getting it Wrong

The US imprisons more people than any other country in the world. At the end of 2016, nearly 2.2 million adults were being held in prisons or jails, and an additional 4.5 million were under other forms of correctional supervision, such as probation or parole. Put another way, 1 in 38 adult Americans was under some form of correctional supervision. The nightmarishness of this situation is one of the few issues that unite politicians on both sides of the aisle.
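
The “1 in 38” figure is just those two counts set against the adult population. Assuming a 2016 US adult population of roughly 250 million (an assumed figure, since the article does not give one):

    \[
    2.2\,\mathrm{M} + 4.5\,\mathrm{M} = 6.7\,\mathrm{M},
    \qquad
    \frac{250\,\mathrm{M}}{6.7\,\mathrm{M}} \approx 37,
    \]

that is, roughly one adult in every 38 once the slight overlap between the two counts is accounted for.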

Under immense pressure to reduce prison numbers without risking a rise in crime, courtrooms across the US have turned to automated tools in attempts to shuffle defendants through the legal system as efficiently and safely as possible. This is where the AI part of our story begins. Police departments use predictive algorithms to strategize about where to send their ranks. Law enforcement agencies use face recognition systems to help identify suspects. These practices have garnered well-deserved scrutiny for whether they in fact improve safety or simply perpetuate existing inequities.

Researchers and civil rights advocates, for example, have repeatedly demonstrated that face recognition systems can fail spectacularly, particularly for dark-skinned individuals — even mistaking members of Congress for convicted criminals. But the most controversial tool by far comes after police have made an arrest. Say hello to criminal risk assessment algorithms.
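
To make “criminal risk assessment algorithm” concrete: these tools typically reduce facts about a defendant to a single recidivism score shown to a judge at bail or sentencing. The sketch below is a hypothetical logistic-regression-style scorer; the features, weights, and threshold are invented for illustration and do not reflect any vendor’s actual model.

    import math

    # Hypothetical feature weights for a toy recidivism-risk model.
    # Real tools (e.g., COMPAS) use proprietary inputs and training data.
    WEIGHTS = {
        "prior_arrests": 0.30,
        "age_under_25": 0.80,
        "failed_to_appear": 0.60,
    }
    BIAS = -2.0

    def risk_score(defendant: dict) -> float:
        """Logistic-regression-style score in [0, 1]."""
        z = BIAS + sum(WEIGHTS[k] * float(defendant.get(k, 0)) for k in WEIGHTS)
        return 1.0 / (1.0 + math.exp(-z))

    defendant = {"prior_arrests": 3, "age_under_25": 1, "failed_to_appear": 0}
    score = risk_score(defendant)  # ~0.43 for this made-up defendant
    print("HIGH RISK" if score > 0.5 else "low risk", round(score, 2))

The structure also shows where the bias worry enters: inputs like prior arrests encode past policing patterns, so a model trained on them can reproduce those patterns while appearing neutral.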


It’s Getting Harder to Spot a Deep Fake Video

Fake videos and audio keep getting better, faster, and easier to make, increasing the technology’s potential for harm in the wrong hands. Bloomberg QuickTake explains how good deep fakes have gotten in the last few months, and what’s being done to counter them.
