Resources

Cory Doctorow’s New Book Explains ‘How to Destroy Surveillance Capitalism’

If we’re going to break Big Tech’s death grip on our digital lives, we’re going to have to fight monopolies. That may sound pretty mundane and old-fashioned, something out of the New Deal era, while ending the use of automated behavioral modification feels like the plotline of a really cool cyberpunk novel… But trustbusters once strode the nation, brandishing law books, terrorizing robber barons, and shattering the illusion of monopolies’ all-powerful grip on our society. The trustbusting era could not begin until we found the political will — until the people convinced politicians they’d have their backs when they went up against the richest, most powerful men in the world. Could we find that political will again…?

That’s the good news: With a little bit of work and a little bit of coalition building, we have more than enough political will to break up Big Tech and every other concentrated industry besides. First we take Facebook, then we take AT&T/WarnerMedia. But here’s the bad news: Much of what we’re doing to tame Big Tech instead of breaking up the big companies also forecloses on the possibility of breaking them up later… Allowing the platforms to grow to their present size has given them a dominance that is nearly insurmountable — deputizing them with public duties to redress the pathologies created by their size makes it virtually impossible to reduce that size. Lather, rinse, repeat: If the platforms don’t get smaller, they will get larger, and as they get larger, they will create more problems, which will give rise to more public duties for the companies, which will make them bigger still.

We can work to fix the internet by breaking up Big Tech and depriving them of monopoly profits, or we can work to fix Big Tech by making them spend their monopoly profits on governance. But we can’t do both. We have to choose between a vibrant, open internet or a dominated, monopolized internet commanded by Big Tech giants that we struggle with constantly to get them to behave themselves…

Big Tech wired together a planetary, species-wide nervous system that, with the proper reforms and course corrections, is capable of seeing us through the existential challenge of our species and planet. Now it’s up to us to seize the means of computation, putting that electronic nervous system under democratic, accountable control.

With “free, fair, and open tech” we could then tackle our other urgent problems “from climate change to social change” — all with collective action, Doctorow argues. And “The internet is how we will recruit people to fight those fights, and how we will coordinate their labor.

“Tech is not a substitute for democratic accountability, the rule of law, fairness, or stability — but it’s a means to achieve these things.”


Facebook and Google Serve As Vectors For Misinformation While Hobbling Local Journalism and Collecting Taxpayer Subsidies, Group Says

Facebook and Google are hollowing out local communities by serving as vectors for misinformation while hobbling local journalism and collecting taxpayer subsidies, a new paper from progressive think tank the American Economic Liberties Project charges. Both companies cite benefits their platforms offer small businesses as a key defense against critiques of their size and power. The paper, dated Aug. 30, is sure to presage further scrutiny of the impact they’ve had on local communities.

The brief, by Pat Garofalo, the group’s director of state and local policy, argues that:

- Google doesn’t do enough to protect against fraud, allowing scammers to get their own numbers and websites listed on Google to the detriment of legitimate businesses.
- Facebook, by design, boosts shoddy and sensationalist content, crowding out legitimate local news and information, all as it and Google have come to dominate the local advertising market that was long the lifeblood of community journalism.
- Both have sucked up potentially billions in local taxpayer dollars via tax breaks as well as subsidies and discounts on utilities they’ve gotten in exchange for building data centers.

Garofalo recommends remedies including more antitrust enforcement at the federal and state levels and an end to preferential treatment by states and localities, either voluntarily or under force of law.


Google Search and Dark Patterns

Previously, the search engine had marked paid results with the word “Ad” in a green box, tucked beneath the headline next to a matching green display URL. Now, all of a sudden, the “Ad” and the URL shifted above the headline, and both were rendered in discreet black; the box disappeared. The organic search results underwent a similar makeover, only with a new favicon next to the URL instead of the word “Ad.” The result was a general smoothing: Ads looked like not-ads. Not-ads looked like ads. This was not Google’s first time fiddling with the search results interface. In fact, it had done so quite regularly over the last 13 years, as handily laid out in a timeline from the news site Search Engine Land. Each iteration whittled away the distinction between paid and unpaid content that much more. Most changes went relatively unnoticed, internet residents accepting the creep like the apocryphal frog in a slowly boiling pot.

But in January, amid rising antitrust drumbeats and general exhaustion with Big Tech, people noticed. Interface designers, marketers, and Google users alike decried the change, saying it made paid results practically indistinguishable from those that Google’s search algorithm served up organically. The phrase that came up most often: “dark pattern,” a blanket term coined by UX specialist Harry Brignull to describe manipulative design elements that benefit companies over their users. That a small design tweak could inspire so much backlash speaks to the profound influence Google and other ubiquitous platforms have — and the responsibility that status confers on them. “Google and Facebook shape realities,” says Kat Zhou, a product designer who has created a framework and toolkit to help promote ethical design. “Students and professors turn to Google for their research. Folks turn to Facebook for political news. Communities turn to Google for Covid-19 updates. In some sense, Google and Facebook have become arbiters of the truth. That’s particularly scary when you factor in their business models, which often incentivize blurring the line between news and advertisements.”

Google’s not the only search engine to blur this line. If anything, Bing is even more opaque, sneaking the “Ad” disclosure under the header, with only a faint outline to draw attention. […] But Google has around 92 percent of global search market share. It effectively is online search. Dark patterns are all too common online in general, and January wasn’t the first time people accused Google of deploying them. In June of 2018, a blistering report from the Norwegian Consumer Council found that Google and Facebook both used specific interface choices to strip away user privacy at almost every turn. The study details how both platforms implemented the least privacy-friendly options by default, consistently “nudged” users toward giving away more of their data, and more. It paints a portrait of a system designed to befuddle users into complacency. […] That confusion reached its apex a few months later, when an Associated Press investigation found that disabling Location History on your smartphone did not, in fact, stop Google from collecting your location in all instances.


How Google Ruined the Internet

Remember that story about the Polish dentist who pulled out all of her ex-boyfriend’s teeth in an act of revenge? It was complete and utter bullshit. 100% fabricated. No one knows who wrote it. Nevertheless, it was picked up by Fox News, the Los Angeles Times and many other publishers. That was eight years ago, yet when I search now for “dentist pulled ex boyfriends teeth,” I get a featured snippet that quotes ABC News’ original, uncorrected story. Who invented the fidget spinner? Ask Google Assistant and it will tell you that Catherine Hettinger did: a conclusion based on poorly reported stories from The Guardian, The New York Times and other major news outlets. Bloomberg’s Joshua Brustein clearly demonstrated that Ms. Hettinger did not invent the low friction toy. Nevertheless, ask Google Assistant “who really invented the fidget spinner?” and you’ll get the same answer: Catherine Hettinger.

In 1998, the velocity of information was slow and the cost of publishing it was high (even on the web). Google leveraged those realities to make the best information retrieval system in the world. Today, information is free, plentiful and fast moving; somewhat by design, Google has become a card catalog that is constantly being reordered by an angry, misinformed mob. The web was supposed to forcefully challenge our opinions and push back, like a personal trainer who doesn’t care how tired you say you are. Instead, Google has become like the pampering robots in WALL-E, giving us what we want at the expense of what we need. But, it’s not our bodies that are turning into mush: It’s our minds.


Why Don’t We Just Ban Targeted Advertising?

Google and Facebook, including their subsidiaries like Instagram and YouTube, make about 83 percent and 99 percent of their respective revenue from one thing: selling ads. It’s the same story with Twitter and other free sites and apps. More to the point, these companies are in the business of what’s called behavioral advertising, which allows companies to aim their marketing based on everything from users’ sexual orientations to their moods and menstrual cycles, as revealed by everything they do on their devices and every place they take them. It follows that most of the unsavory things the platforms do—boost inflammatory content, track our whereabouts, enable election manipulation, crush the news industry—stem from the goal of boosting ad revenues. Instead of trying to clean up all these messes one by one, the logic goes, why not just remove the underlying financial incentive? Targeting ads based on individual user data didn’t even really exist until the past decade. (Indeed, Google still makes many billions of dollars from ads tied to search terms, which aren’t user-specific.) What if companies simply weren’t allowed to do it anymore?

Let’s pretend it really happened. Imagine Congress passed a law tomorrow morning that banned companies from doing any ad microtargeting whatsoever. Close your eyes and picture what life would be like if the leading business model of the internet were banished from existence. How would things be different?

Many of the changes would be subtle. You could buy a pair of shoes on Amazon without Reebok ads following you for months. Perhaps you’d see some listings that you didn’t see before, for jobs or real estate. That’s especially likely if you’re African-American, or a woman, or a member of another disadvantaged group. You might come to understand that microtargeting had supercharged advertisers’ ability to discriminate, even when they weren’t trying to.


Xiaomi Camera Feed is Showing Random Homes on a Google Nest Hub, Including Still Images of Sleeping People

So-called “smart” security cameras have had some pretty dumb security problems recently, but a recent report regarding a Xiaomi Mijia camera linked to a Google Home is especially disturbing. One Xiaomi Mijia camera owner is getting still images from other random people’s homes when trying to stream content from his camera to a Google Nest Hub. The images include stills of people sleeping (even an infant in a cradle) inside their own homes. This issue was first reported by user /r/Dio-V on Reddit and affects his Xiaomi Mijia 1080p Smart IP Security Camera, which can be linked to a Google account for use with Google/Nest devices through Xiaomi’s Mi Home app/service. It isn’t clear when Dio-V’s feed first began showing these still images from random homes or how long the camera was connected to his account before this started happening. He does state that both the Nest Hub and the camera were purchased new. The camera was noted as running firmware version 3.5.1_00.66.


YouTube’s Algorithm Made Fake CNN Reports Go Viral

“YouTube channels posing as American news outlets racked up millions of views on false and inflammatory videos over several months this year,” reports CNN.

“All with the help of YouTube’s recommendation engine.”

Many of the accounts, which mostly used footage from CNN, but also employed some video from Fox News, exploited a YouTube feature that automatically creates channels on certain topics. Those topic channels are then automatically populated by videos related to the topic — including, in this case, blatant misinformation.

YouTube has now shut down many of the accounts.

YouTube’s own algorithms also recommended videos from the channels to American users who watched videos about U.S. politics. That the channels could achieve such virality — one channel was viewed more than two million times over one weekend in October — raises questions about YouTube’s preparedness for tackling misinformation on its platform just weeks before the Iowa caucuses and points to the continuing challenge platforms face as people try to game their systems….

Responding to the findings on Thursday, a CNN spokesperson said YouTube needs to take responsibility.

“When accounts were deleted or banned, they were able to spin up new accounts within hours,” added Plasticity, a natural language processing and AI startup which analyzed the data and identified at least 25 different accounts which YouTube then shut down.

“The tactics they used to game the YouTube algorithm were executed perfectly. They knew what they were doing.”


Facebook, Google Donate Heavily To Privacy Advocacy Groups

Few companies have more riding on proposed privacy legislation than Alphabet’s Google and Facebook. To try to steer the bill their way, the giant advertising technology companies spend millions of dollars to lobby each year, a fact confirmed by government filings. Not so well-documented is spending to support highly influential think tanks and public interest groups that are helping shape the privacy debate, ostensibly as independent observers. Bloomberg Law examined seven prominent nonprofit think tanks that work on privacy issues and received a total of $1.5 million over an 18-month period ending Dec. 31, 2018. The groups included such organizations as the Center for Democracy and Technology, the Future of Privacy Forum and the Brookings Institution. The actual total is undoubtedly much higher — exact totals for contributions were difficult to pin down. The tech giants have “funded scores of nonprofits, including consumer and privacy groups, and academics,” said Jeffrey Chester, executive director at the Center for Digital Democracy, a public interest group that does not accept donations from Google or Facebook. Further, he says, their influence is strong. The companies have “opposed federal privacy laws and worked to weaken existing safeguards,” Chester said. Accepting donations from these “privacy-killing companies enable them to influence decisions by nonprofits, even subtly,” he said.


Next in Google’s Quest for Consumer Dominance–Banking

The project, code-named Cache, is expected to launch next year with accounts run by Citigroup and a credit union at Stanford University, a tiny lender in Google’s backyard. Big tech companies see financial services as a way to get closer to users and glean valuable data. Apple introduced a credit card this summer. Amazon.com has talked to banks about offering checking accounts. Facebook is working on a digital currency it hopes will upend global payments. Their ambitions could challenge incumbent financial-services firms, which fear losing their primacy and customers. They are also likely to stoke a reaction in Washington, where regulators are already investigating whether large technology companies have too much clout.

The tie-ups between banking and technology have sometimes been fraught. Apple irked its credit-card partner, Goldman Sachs Group, by running ads that said the card was “designed by Apple, not a bank.” Major financial companies dropped out of Facebook’s crypto project after a regulatory backlash. Google’s approach seems designed to make allies, rather than enemies, in both camps. The financial institutions’ brands, not Google’s, will be front-and-center on the accounts, an executive told The Wall Street Journal. And Google will leave the financial plumbing and compliance to the banks — activities it couldn’t do without a license anyway.


Google’s Secret ‘Project Nightingale’ Gathers Personal Health Data on Millions of Americans

Google is teaming with one of the country’s largest health-care systems on a secret project to collect and crunch the detailed personal health information of millions of Americans across 21 states, WSJ reported Monday, citing people familiar with the matter and internal documents.

The initiative, code-named “Project Nightingale,” appears to be the largest in a series of efforts by Silicon Valley giants to gain access to personal health data and establish a toehold in the massive health-care industry. Amazon.com, Apple and Microsoft are also aggressively pushing into health care, though they haven’t yet struck deals of this scope. Google launched the effort last year with St. Louis-based Ascension, the country’s second-largest health system. The data involved in Project Nightingale includes lab results, doctor diagnoses and hospitalization records, among other categories, and amounts to a complete health history, including patient names and dates of birth.

Neither patients nor doctors have been notified. At least 150 Google employees already have access to much of the data on tens of millions of patients, according to a person familiar with the matter and the documents.

Google in this case is using the data in part to design new software, underpinned by advanced artificial intelligence and machine learning.

Google appears to be sharing information within Project Nightingale more broadly than in its other forays into health-care data. In September, Google announced a 10-year deal with the Mayo Clinic to store the hospital system’s genetic, medical and financial records.

Google co-founder Larry Page, in a 2014 interview, suggested that patients worried about the privacy of their medical records were too cautious. Mr. Page said: “We’re not really thinking about the tremendous good that can come from people sharing information with the right people in the right ways.”


Researchers Tricked Google Home and Alexa Into Eavesdropping and Password Phishing

What if Google and Amazon employees weren’t the only ones who’d listened through your voice assistant? Ars Technica reports:

The threat isn’t just theoretical. Whitehat hackers at Germany’s Security Research Labs developed eight apps — four Alexa “skills” and four Google Home “actions” — that all passed Amazon or Google security-vetting processes. The skills or actions posed as simple apps for checking horoscopes, with the exception of one, which masqueraded as a random-number generator. Behind the scenes, these “smart spies,” as the researchers call them, surreptitiously eavesdropped on users and phished for their passwords…

The apps gave the impression they were no longer running when they, in fact, silently waited for the next phase of the attack…. The apps quietly logged all conversations within earshot of the device and sent a copy to a developer-designated server. The phishing apps follow a slightly different path by responding with an error message that claims the skill or action isn’t available in that user’s country. They then go silent to give the impression the app is no longer running. After about a minute, the apps use a voice that mimics the ones used by Alexa and Google Home to falsely claim a device update is available and prompts the user for a password for it to be installed….

In response, both companies removed the apps and said they are changing their approval processes to prevent skills and actions from having similar capabilities in the future.


Mozilla is Sharing YouTube Horror Stories To Prod Google For More Transparency

Mozilla is publishing anecdotes of YouTube viewing gone awry — anonymous stories from people who say they innocently searched for one thing but eventually ended up in a dark rabbit hole of videos. It’s a campaign aimed at pressuring Google’s massive video site to make itself more accessible to independent researchers trying to study its algorithms. “The big problem is we have no idea what is happening on YouTube,” said Guillaume Chaslot, who is a fellow at Mozilla, a nonprofit best known for its unit that makes and operates the Firefox web browser.

Chaslot is an ex-Google engineer who has investigated YouTube’s recommendations from the outside after he left the company in 2013. (YouTube says he was fired for performance issues.) “We can see that there are problems, but we have no idea if the problem is from people being people or from algorithms,” he said….

Mozilla is publishing 28 stories it’s terming #YouTubeRegrets; they include, for example, an anecdote from someone who said a search for German folk songs ended up returning neo-Nazi clips, and a testimonial from a mother who said her 10-year-old daughter searched for tap-dancing videos and ended up watching extreme contortionist clips that affected her body image.


Voice From ‘Nest’ Camera Threatens to Steal Baby

Jack Newcombe, the Chief Operating Officer of a syndication company with 44 million daily readers, describes the strange voice he heard talking to his 18-month old son:
She says we have a nice house and encourages the nanny to respond. She does not. The voice even jokes that she hopes we don’t change our password. I am sick to my stomach. After about five minutes of verbal “joy riding,” the voice starts to get agitated at the nanny’s lack of response and then snaps, in a very threatening voice: “I’m coming for the baby if you don’t answer me….” We unplug the cameras and change all passwords…

Still helpless, I started doing the only thing I could do — Googling. I typed “Nest + camera + hacked” and found out that this happens frequently. Parent after parent relayed stories similar to mine — threatening to steal a baby is shockingly common — and some much worse, such as playing pornography over the microphone to a 3-year-old… What is worse is that anyone could have been watching us at any time for as long as we have had the cameras up. This person just happened to use the microphone. Countless voyeurs could have been silently watching (or worse) for months.

However, what makes this issue even more terrifying is a corporate giant’s complete and utter lack of response. Nest is owned by Google, and, based on my experience and their public response, Google does not seem to care about this issue. They acknowledge it as a problem, shrug their shoulders and point their fingers at the users. Their party line is to remind people that the hardware was not hacked; it was the user’s fault for using a compromised password and not implementing two-step authentication, in which users receive a special code via text to sign on. That night, on my way home from work, I called Nest support and was on hold for an hour and eight minutes. I followed all directions and have subsequently received form emails in broken English. Nobody from Google has acknowledged the incident or responded with any semblance of empathy. In every email, they remind me of two-step authentication.

They act as if I am going to continue to use Nest cameras.


YouTube Gets Alleged Copyright Troll To Agree To Stop Trolling YouTubers

Alleged copyright troll Christopher Brady will no longer be able to issue false DMCA takedowns to other YouTubers, according to a lawsuit settlement filed today. The Verge reports:

Under the new agreement, Brady is banned from “submitting any notices of alleged copyright infringement to YouTube that misrepresent that material hosted on the YouTube service is infringing copyrights held or claimed to be held by Brady or anyone Brady claims to represent.” Brady agreed to pay $25,000 in damages as part of the settlement. He is also prohibited from “misrepresenting or masking their identities” when using Google products, including YouTube. “This settlement highlights the very real consequences for those that misuse our copyright system. We’ll continue our work to prevent abuse of our systems,” a YouTube spokesperson told The Verge.

“I, Christopher L. Brady, admit that I sent dozens of notices to YouTube falsely claiming that material uploaded by YouTube users infringed my copyrights,” he said in an apology, shared by YouTube with The Verge. “I apologize to the YouTube users that I directly impacted by my actions, to the YouTube community, and to YouTube itself.” YouTube claimed the investigation caused the company to “expend substantial sums on its investigation in an effort to detect and halt that behavior, and to ensure that its users do not suffer adverse consequences from it.” YouTube also said that the company may be “unable to detect and prevent similar misconduct in the future,” as a result of the various methods Brady took to cover up his identity.


Google Chief: I’d Disclose Smart Speakers Before Guests Enter My Home

After being challenged as to whether homeowners should tell guests smart devices — such as a Google Nest speaker or Amazon Echo display — are in use before they enter the building, Google senior vice president of devices and services, Rick Osterloh, concludes that the answer is indeed yes. The BBC reports:

“Gosh, I haven’t thought about this before in quite this way,” Rick Osterloh begins. “It’s quite important for all these technologies to think about all users… we have to consider all stakeholders that might be in proximity.” And then he commits. “Does the owner of a home need to disclose to a guest? I would and do when someone enters into my home, and it’s probably something that the products themselves should try to indicate.”

To be fair to Google, it hasn’t completely ignored matters of 21st Century privacy etiquette until now. As Mr Osterloh points out, its Nest cameras shine an LED light when they are in record mode, which cannot be overridden. But the idea of having to run around a home unplugging or at least restricting the capabilities of all its voice- and camera-equipped kit if a visitor objects is quite the ask.

The concession came at the end of a one-on-one interview given to BBC News to mark the launch of Google’s Pixel 4 smartphones, a new Nest smart speaker and other products. You can read the full conversation on the BBC’s article.


YouTube is Experimenting With Ways To Make Its Algorithm Even More Addictive

While YouTube has publicly said that it’s working on addressing problems that are making its website ever so addictive to users, a new paper from Google, which owns YouTube, seems to tell a different story.

It proposes an update to the platform’s algorithm that is meant to recommend even more targeted content to users in the interest of increasing engagement. Here’s how YouTube’s recommendation system currently works. To populate the recommended-videos sidebar, it first compiles a shortlist of several hundred videos by finding ones that match the topic and other features of the one you are watching. Then it ranks the list according to the user’s preferences, which it learns by feeding all your clicks, likes, and other interactions into a machine-learning algorithm. Among the proposed updates, the researchers specifically target a problem they identify as “implicit bias.” It refers to the way recommendations themselves can affect user behavior, making it hard to decipher whether you clicked on a video because you liked it or because it was highly recommended. The effect is that over time, the system can push users further and further away from the videos they actually want to watch.

To reduce this bias, the researchers suggest a tweak to the algorithm: each time a user clicks on a video, it also factors in the video’s rank in the recommendation sidebar. Videos that are near the top of the sidebar are given less weight when fed into the machine-learning algorithm; videos deep down in the ranking, which require a user to scroll, are given more. When the researchers tested the changes live on YouTube, they found significantly more user engagement. Though the paper doesn’t say whether the new system will be deployed permanently, Guillaume Chaslot, an ex-YouTube engineer who now runs AlgoTransparency.org, said he was “pretty confident” that it would happen relatively quickly.
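The gist of the rank-aware correction can be sketched in a few lines. To be clear, this is a minimal illustration under stated assumptions, not YouTube's actual system: the weighting function, its exponent, and every name below are hypothetical stand-ins for the learned position-bias term the paper describes.

```python
def position_weight(rank: int, exponent: float = 1.0) -> float:
    """Down-weight clicks on highly ranked items.

    The higher a video sat in the sidebar (rank 1 = top), the more its
    click is explained by mere exposure rather than genuine preference,
    so it should count for less as a training signal.
    """
    # Hypothetical weighting: a click at rank 1 counts half; clicks on
    # videos the user had to scroll for approach full weight.
    return 1.0 - 1.0 / (rank + 1) ** exponent

def weighted_click_examples(click_log):
    """Turn a log of (video_id, sidebar_rank) clicks into weighted
    training examples for the preference model."""
    return [(video_id, position_weight(rank)) for video_id, rank in click_log]

# A toy click log: one click near the top, one mid-list, one deep down.
log = [("v1", 1), ("v2", 3), ("v3", 10)]
for video, w in weighted_click_examples(log):
    print(video, round(w, 3))  # v1 gets 0.5, v2 gets 0.75, v3 gets ~0.909
```

The design choice matches the intuition in the excerpt: clicks that required scrolling past many recommendations carry more information about what the user actually wanted, so they are fed into the learning step with greater weight.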


Politicians Can Break Our Content Rules, YouTube CEO Says

YouTube CEO Susan Wojcicki said this week that content by politicians would stay up on the video-sharing website even if it violates the company’s standards, echoing a position staked out by Facebook this week.

“When you have a political officer that is making information that is really important for their constituents to see, or for other global leaders to see, that is content that we would leave up because we think it’s important for other people to see,” Wojcicki told an audience at The Atlantic Festival this morning. Wojcicki said the news media is likely to cover controversial content regardless of whether it’s taken down, giving context to understand it. YouTube is owned by Google. A YouTube spokesperson later told POLITICO that politicians are not treated differently than other users and must abide by its community guidelines. The company grants exemptions to some political speech that it considers educational, documentary, scientific, or artistic in nature.


Google Loans Cameras To Volunteers To Fill Gaps in ‘Street View’

Kanhema, who works as a product manager in Silicon Valley and is a freelance photographer in his spare time, volunteered to carry Google’s Street View gear to map what amounted to 2,000 miles of his home country. The Berkeley, Calif., resident has filled in the map of other areas in Africa and Canada as well.

“We start in the large metropolitan areas where we know we have users, where it’s easy for us to drive and we can execute quickly,” says Stafford Marquardt, a product manager for Street View.

He says the team is working to expand the service’s reach. To do that, Google often relies on volunteers who can either borrow the company’s camera equipment or take photos using their own. Most images on Street View are collected by drivers, and most of these drivers are employed by third parties that work with Google. But when it comes to the places Google hasn’t prioritized, people like Kanhema can fill in the gaps.

“It’s so conspicuous to have a 4-foot contraption attached to the roof of your car,” Kanhema says. “People are walking up and asking questions about, ‘Is that a camera? What are you recording? What are you filming? It is for Google Maps? Will my house be on the map? Will my face be on the map?'”

Google doesn’t pay him or the other volunteers — whom the company calls “contributors” — for the content they upload. Kanhema, for example, spent around $5,000 of his own money to travel across Zimbabwe for the project.

Google says it currently has no plans to compensate its volunteers, adding that it pays contributors “in a lot of other ways” by offering “a platform to host gigabytes and terabytes of imagery and publish it to the entire world, absolutely for free.”


Ex-Google Engineer Says That Robot Weapons May Cause Accidental Mass Killings

“A former Google engineer who worked on the company’s infamous military drone project has sounded a warning against the building of killer robots,” reports Business Insider.

Laura Nolan had been working at Google four years when she was recruited to its collaboration with the US Department of Defense, known as Project Maven, in 2017, according to the Guardian. Project Maven was focused on using AI to enhance military drones, building AI systems which would be able to single out enemy targets and distinguish between people and objects. Google canned Project Maven after employee outrage, with thousands of employees signing a petition against the project and about a dozen quitting in protest. Google allowed the contract to lapse in March this year. Nolan herself resigned after she became “increasingly ethically concerned” about the project, she said…

Nolan fears that the next step beyond AI-enabled weapons like drones could be fully autonomous AI weapons. “What you are looking at are possible atrocities and unlawful killings even under laws of warfare, especially if hundreds or thousands of these machines are deployed,” she said…. Although no country has yet come forward to say it’s working on fully autonomous robot weapons, many are building more and more sophisticated AI to integrate into their militaries. The US navy has a self-piloting warship, capable of spending months at sea with no crew, and Israel boasts of having drones capable of identifying and attacking targets autonomously — although at the moment they require a human middle-man to give the go-ahead.

Nolan is urging countries to declare an outright ban on autonomous killing robots, similar to conventions around the use of chemical weapons.


How America’s Tech Giants Are Helping Build China’s Surveillance State

The OpenPower Foundation — a nonprofit led by Google and IBM executives whose stated aim is to “drive innovation” — has set up a collaboration between IBM, Chinese company Semptian, and U.S. chip manufacturer Xilinx. Together, they have worked to advance a breed of microprocessors that enable computers to analyze vast amounts of data more efficiently. Shenzhen-based Semptian is using the devices to enhance the capabilities of internet surveillance and censorship technology it provides to human rights-abusing security agencies in China, according to sources and documents. A company employee said that its technology is being used to covertly monitor the internet activity of 200 million people…

Semptian presents itself publicly as a “big data” analysis company that works with internet providers and educational institutes. However, a substantial portion of the Chinese firm’s business is in fact generated through a front company named iNext, which sells the internet surveillance and censorship tools to governments. iNext operates out of the same offices in China as Semptian, with both companies on the eighth floor of a tower in Shenzhen’s busy Nanshan District. Semptian and iNext also share the same 200 employees and the same founder, Chen Longsen. [The company’s] Aegis equipment has been placed within China’s phone and internet networks, enabling the country’s government to secretly collect people’s email records, phone calls, text messages, cellphone locations, and web browsing histories, according to two sources familiar with Semptian’s work.

Promotional documents obtained from the company promise “location information for everyone in the country.” One company representative even told the Intercept they were processing “thousands of terabits per second,” and — not knowing they were talking to a reporter — forwarded a 16-minute video detailing their technology. “If a government operative enters a person’s cellphone number, Aegis can show where the device has been over a given period of time: the last three days, the last week, the last month, or longer,” the Intercept reports.
