Resources

Secrecy undermines trust in Google antitrust trial

Before a single witness could utter a word of testimony in the Google antitrust case on Tuesday, the public and the press were temporarily barred from the courtroom. It’s just the latest in a long list of anti-transparency measures stymying access to the case: documents and testimony have been repeatedly sealed; exhibits used in open court have been removed from the internet; and only those who can actually make it to the courtroom are permitted to listen to the testimony (when they’re allowed in at all, that is).

Despite these restrictions, reporters and courtwatchers have been doing their best to inform their audiences about the trial. But if the federal judge presiding over the case, Amit Mehta, doesn’t act soon to stem this tsunami of secrecy, people may be left mostly in the dark about the biggest antitrust lawsuit of the 21st century.

Behind this anti-transparency push are Google and other big tech companies, which argue that letting people observe the case fully could reveal trade secrets or otherwise embarrass them by generating “clickbait.” There is some precedent for closing parts of trials or redacting court documents to avoid disclosing trade secrets, but not for saving corporations from embarrassment.


A Bot Was Scheduled To Argue In Court, Then Came the Jail Threats

A British man who planned to have a “robot lawyer” help a defendant fight a traffic ticket has dropped the effort after receiving threats of possible prosecution and jail time. […] The first-ever AI-powered legal defense was set to take place in California on Feb. 22, but not anymore. As word got out, an uneasy buzz began to swirl among various state bar officials, according to Browder. He says angry letters began to pour in. “Multiple state bar associations have threatened us,” Browder said. “One even said a referral to the district attorney’s office and prosecution and prison time would be possible.” In particular, Browder said one state bar official noted that the unauthorized practice of law is a misdemeanor in some states, punishable by up to six months in county jail.

“Even if it wouldn’t happen, the threat of criminal charges was enough to give it up,” [said Joshua Browder, the CEO of the New York-based startup DoNotPay]. “The letters have become so frequent that we thought it was just a distraction and that we should move on.” State bar associations license and regulate attorneys as a way to ensure people hire lawyers who understand the law. Browder declined to say which state bar associations sent the letters, or which official made the threat of possible prosecution, saying his startup, DoNotPay, is under investigation by multiple state bar associations, including California’s.


Google’s Eric Schmidt Helped Write AI Laws Without Disclosing Investments In AI Startups

About four years ago, former Google CEO Eric Schmidt was appointed to the National Security Commission on Artificial Intelligence by the chairman of the House Armed Services Committee. It was a powerful perch. Congress tasked the new group with a broad mandate: to advise the U.S. government on how to advance the development of artificial intelligence, machine learning, and other technologies to enhance the national security of the United States. That included advising on how to strengthen American competitiveness in AI against its adversaries, build the AI workforce of the future, and develop data and ethical procedures.

In short, the commission, which Schmidt soon took charge of as chairman, was tasked with coming up with recommendations for almost every aspect of a vital and emerging industry. The panel did far more under his leadership. It wrote proposed legislation that later became law and steered billions of dollars of taxpayer funds to an industry he helped build — and that he was actively investing in while running the group. “If you’re going to be leading a commission that is steering the direction of government AI and making recommendations for how we should promote this sector and scientific exploration in this area, you really shouldn’t also be dipping your hand in the pot and helping yourself to AI investments,” as one ethics expert put it. His credentials, however, were impeccable given his deep experience in Silicon Valley, his experience advising the Defense Department, and a vast personal fortune estimated at about $20 billion.

Five months after his appointment, Schmidt made a little-noticed private investment in an initial seed round of financing for a startup company called Beacon, which uses AI in its supply chain products for shippers who manage freight logistics, according to CNBC’s review of investment information in the Crunchbase database. There is no indication that Schmidt broke any ethics rules or did anything unlawful while chairing the commission. The commission was, by design, an outside advisory group of industry participants, and its other members included well-known tech executives including Oracle CEO Safra Catz, Amazon Web Services CEO Andy Jassy and Microsoft Chief Scientific Officer Dr. Eric Horvitz, among others. Schmidt’s investment was just the first of a handful of direct investments he would make in AI startup companies during his tenure as chairman of the AI commission.

“Venture capital firms financed, in part, by Schmidt and his private family foundation also made dozens of additional investments in AI companies during Schmidt’s tenure, giving Schmidt an economic stake in the industry even as he developed new regulations and encouraged taxpayer financing for it,” adds CNBC. “Altogether, Schmidt and entities connected to him made more than 50 investments in AI companies while he was chairman of the federal commission on AI. Information on his investments isn’t publicly available.”

“All that activity meant that, at the same time Schmidt was wielding enormous influence over the future of federal AI policy, he was also potentially positioning himself to profit personally from the most promising young AI companies.” Citing people close to Schmidt, the report says his investments were disclosed in a private filing to the U.S. government at the time, and that the public and news media had no access to that document.

A spokesperson for Schmidt told CNBC that he followed all rules and procedures in his tenure on the commission. “Eric has given full compliance on everything,” the spokesperson said.


Maine Passes Facial Recognition Law

The new law prohibits government use of facial recognition except in specifically outlined situations, with the broadest exception being if police have probable cause that an unidentified person in an image committed a serious crime, or for proactive fraud prevention. Because Maine police will not have direct access to facial recognition themselves, they will instead have to ask the FBI and Maine Bureau of Motor Vehicles (BMV) to run these searches.

Crucially, the law plugs loopholes that police have used in the past to gain access to the technology, like informally asking other agencies or third parties to run backchannel searches for them. Logs of all facial recognition searches by the BMV must be created and are designated as public records. The only other statewide facial recognition law was enacted by Washington in 2020, but many privacy advocates were dissatisfied with its specifics. Maine’s new law also gives citizens the ability to sue the state if they’ve been unlawfully targeted by facial recognition, a right notably absent from Washington’s regulation. If facial recognition searches are performed illegally, they must be deleted and cannot be used as evidence.


Why Don’t We Just Ban Targeted Advertising?

Google and Facebook, including their subsidiaries like Instagram and YouTube, make about 83 percent and 99 percent of their respective revenue from one thing: selling ads. It’s the same story with Twitter and other free sites and apps. More to the point, these companies are in the business of what’s called behavioral advertising, which lets advertisers aim their marketing based on everything from users’ sexual orientations to their moods and menstrual cycles, as revealed by everything they do on their devices and every place they take them. It follows that most of the unsavory things the platforms do—boost inflammatory content, track our whereabouts, enable election manipulation, crush the news industry—stem from the goal of boosting ad revenues. Instead of trying to clean up all these messes one by one, the logic goes, why not just remove the underlying financial incentive? Targeting ads based on individual user data didn’t even really exist until the past decade. (Indeed, Google still makes many billions of dollars from ads tied to search terms, which aren’t user-specific.) What if companies simply weren’t allowed to do it anymore?

Let’s pretend it really happened. Imagine Congress passed a law tomorrow morning that banned companies from doing any ad microtargeting whatsoever. Close your eyes and picture what life would be like if the leading business model of the internet were banished from existence. How would things be different?

Many of the changes would be subtle. You could buy a pair of shoes on Amazon without Reebok ads following you for months. Perhaps you’d see some listings that you didn’t see before, for jobs or real estate. That’s especially likely if you’re African-American, or a woman, or a member of another disadvantaged group. You might come to understand that microtargeting had supercharged advertisers’ ability to discriminate, even when they weren’t trying to.


NYPD Kept an Illegal Database of Juvenile Fingerprints For Years

For years, the New York Police Department illegally maintained a database containing the fingerprints of thousands of children charged as juvenile delinquents, in direct violation of state law mandating that police destroy these records after turning them over to the state’s Division of Criminal Justice Services. When lawyers representing some of those youths discovered the violation, the police department dragged its feet, at first denying but eventually admitting that it was retaining prints it was supposed to have destroyed.

Since 2015, attorneys with the Legal Aid Society, which represents the majority of youths charged in New York City family courts, had been locked in a battle with the police department over retention of the fingerprint records of children under the age of 16. The NYPD did not answer questions from The Intercept about its handling of the records, but according to Legal Aid, the police department confirmed to the organization last week that the database had been destroyed. To date, the department has made no public admission of wrongdoing, nor has it notified the thousands of people it impacted, although it has changed its fingerprint retention practices following Legal Aid’s probing. “The NYPD can confirm that the department destroys juvenile delinquent fingerprints after the prints have been transmitted to DCJS,” a police spokesperson wrote in a statement to The Intercept.

Still, the way the department handled the process, resisting transparency and stalling even after being threatened with legal action, raises concerns about how police handle a growing number of databases of personal information, including DNA and data obtained through facial recognition technology. As The Intercept has reported extensively, the NYPD also maintains a secretive and controversial “gang database,” which labels thousands of unsuspecting New Yorkers, almost all black or Latino youth, as “gang members” based on a set of broad and arbitrary criteria. The fact that police were able to violate the law around juvenile fingerprints for years without consequence underscores the need for greater transparency and accountability, which critics say can only come from independent oversight of the department.

It’s unclear how long the NYPD was illegally retaining these fingerprints, but the report says the state has been using the Automated Fingerprint Identification System since 1989, “and laws protecting juvenile delinquent records have been in place since at least 1977.” Legal Aid lawyers estimate that tens of thousands of juveniles could have had their fingerprints illegally retained by police.


The UK Invited a Robot To ‘Give Evidence’ In Parliament For Attention

“The UK Parliament caused a bit of a stir this week with the news that it would play host to its first non-human witness,” reports The Verge. “A press release from one of Parliament’s select committees (groups of MPs who investigate an issue and report back to their peers) said it had invited Pepper the robot to ‘answer questions’ on the impact of AI on the labor market.” From the report:

“Pepper is part of an international research project developing the world’s first culturally aware robots aimed at assisting with care for older people,” said the release from the Education Committee. “The Committee will hear about her work [and] what role increased automation and robotics might play in the workplace and classroom of the future.” It is, of course, a stunt.

As a number of AI and robotics researchers pointed out on Twitter, Pepper the robot is incapable of giving such evidence. It can certainly deliver a speech the same way Alexa can read out the news, but it can’t formulate ideas itself. As one researcher told MIT Technology Review, “Modern robots are not intelligent and so can’t testify in any meaningful way.” Parliament knows this. In an email to The Verge, a media officer for the Education Committee confirmed that Pepper would be providing preprogrammed answers written by robotics researchers from Middlesex University, who are also testifying on the same panel. “It will be clear on the day that Pepper’s responses are not spontaneous,” said the spokesperson. “Having Pepper appear before the Committee and the chance to question the witnesses will provide an opportunity for members to explore both the potential and limitations of such technology and the capabilities of robots.”

MP Robert Halfon, the committee’s chair, told education news site TES that inviting Pepper was “not about someone bringing an electronic toy robot and doing a demonstration” but showing the “potential of robotics and artificial intelligence.” He added: “If we’ve got the march of the robots, we perhaps need the march of the robots to our select committee to give evidence.”
