Resources

US Considers a Rare Antitrust Move: Breaking Up Google

A rare bid to break up Alphabet’s Google is one of the options being considered by the Justice Department after a landmark court ruling found that the company monopolized the online search market, Bloomberg News reported Tuesday, citing sources familiar with the matter. From the report:
The move would be Washington’s first push to dismantle a company for illegal monopolization since unsuccessful efforts to break up Microsoft two decades ago.

Less severe options include forcing Google to share more data with competitors and measures to prevent it from gaining an unfair advantage in AI products, said the people, who asked not to be identified discussing private conversations. Regardless, the government will likely seek a ban on the type of exclusive contracts that were at the center of its case against Google. If the Justice Department pushes ahead with a breakup plan, the most likely units for divestment are the Android operating system and Google’s web browser Chrome, said the people. Officials are also looking at trying to force a possible sale of AdWords, the platform the company uses to sell text advertising, one of the people said.


Google Sold Android Phones With Hidden Insecure Feature, Companies Find

Google’s master software for some Android phones includes a hidden feature that is insecure and could be activated to allow remote control or spying on users, according to a security company that found it inside phones at a U.S. intelligence contractor. From a report:
The feature appears intended to give employees at stores selling Pixel phones and other models deep access to the devices so they can demonstrate how they work, according to researchers at iVerify who shared their findings with The Washington Post. The discovery and Google’s lack of explanation alarmed the intelligence contractor, data analysis platform vendor Palantir Technologies, to the extent that it has stopped issuing Android phones to employees, Palantir told The Post.

“Mobile security is a very real concern for us, given where we’re operating and who we’re serving,” Palantir Chief Information Security Officer Dane Stuckey said. “This was very deleterious of trust, to have third-party, unvetted insecure software on it. We have no idea how it got there, so we made the decision to effectively ban Androids internally.” The security company said it contacted Google about its findings more than 90 days ago and that the tech giant has not indicated whether it would remove or fix the application. On Wednesday night, Google told The Post that it would issue an update to remove the application. “Out of an abundance of precaution, we will be removing this from all supported in-market Pixel devices with an upcoming Pixel software update,” said company spokesperson Ed Fernandez. He said distributors of other Android phones would also be notified.


Google Threatened Tech Influencers Unless They ‘Preferred’ Google Phones

The tech review world has been full of murky deals between companies and influencers for years, but it appears Google finally crossed a line with the Pixel 9. The company’s invite-only Team Pixel program — which seeds Pixel products to influencers before public availability — stipulated that participating influencers were not allowed to feature Pixel products alongside competitors, and those who showed a preference for competing phones risked being kicked out of the program. For those hoping to break into the world of tech reviews, the new terms meant having to choose between keeping access or keeping their integrity.

The Verge has independently confirmed screenshots of the clause in this year’s Team Pixel agreement for the new Pixel phones, which various influencers began posting on X and Threads last night. The agreement tells participants they’re “expected to feature the Google Pixel device in place of any competitor mobile devices.” It also notes that “if it appears other brands are being preferred over the Pixel, we will need to cease the relationship between the brand and the creator.” The link to the form appears to have since been shut down.


Google funds newsrooms, AI

Google has reached a groundbreaking deal with California lawmakers to contribute millions to local newsrooms, aiming to support journalism amid its decline as readers migrate online and advertising dollars evaporate. The agreement also includes a controversial provision for artificial intelligence funding. Politico reports:
California emulated a strategy that other countries like Canada have used to try and reverse the journalism industry’s decline as readership migrated online and advertising dollars evaporated. […] Under the deal, the details of which were first reported by POLITICO on Monday, Google and the state of California would jointly contribute a minimum of $125 million over five years to support local newsrooms through a nonprofit public charity housed at UC Berkeley’s journalism school. Google would contribute at least $55 million, and state officials would kick in at least $70 million. The search giant would also commit $50 million over five years to unspecified “existing journalism programs.”

The deal would also steer millions in tax-exempt private dollars toward an artificial intelligence initiative that people familiar with the negotiations described as an effort to cultivate tech industry buy-in. Funding for artificial intelligence was not included in the bill at the core of negotiations, authored by Assemblymember Buffy Wicks. The agreement has drawn criticism from a journalists’ union that had so far championed Wicks’ effort. Media Guild of the West President Matt Pearce in an email to union members Sunday evening said such a deal would entrench “Google’s monopoly power over our newsrooms.”
“This public-private partnership builds on our long history of working with journalism and the local news ecosystem in our home state, while developing a national center of excellence on AI policy,” said Kent Walker, chief legal officer for Alphabet, the parent company of Google.

Pearce wasn’t so chipper. He criticized the plan in emails with union members, calling it a “total rout of the state’s attempts to check Google’s stranglehold over our newsrooms.”


Secrecy undermines trust in Google antitrust trial

Before a single witness could utter a word of testimony in the Google antitrust case on Tuesday, the public and the press were temporarily barred from the courtroom. It’s just another step in a long list of anti-transparency measures stymying access to the case: documents and testimony have been repeatedly sealed; exhibits used in open court have been removed from the internet; and only those who can actually make it to the courtroom are permitted to listen to the testimony (when they’re allowed in at all, that is).

Despite these restrictions, reporters and courtwatchers have been doing their best to inform their audiences about the trial. But if the federal judge presiding over the case, Amit Mehta, doesn’t act soon to stop this tsunami of secrecy, people may be left mostly in the dark about the biggest antitrust lawsuit of the 21st century.

Behind this anti-transparency push are Google and other big tech companies arguing that letting people observe the case fully could reveal trade secrets or otherwise embarrass them by generating “clickbait.” There is some precedent for closing parts of trials or redacting court documents to avoid disclosing trade secrets. But not to save corporations from embarrassment.


Autonomous Waymo Car Runs Over Dog In San Francisco

One of Alphabet’s Waymo autonomous cars has killed a pet dog. TechCrunch spotted the public report of the incident, which says one of the Waymo Jaguar I-Pace cars ran over a dog in San Francisco while in autonomous mode with a safety driver behind the wheel.

Waymo’s collision report says: “On May 21, 2023 at 10:56 AM PT a Waymo Autonomous Vehicle (“Waymo AV”) operating in San Francisco, California was in a collision involving a small dog on Toland Street at Toland Place. The Waymo AV was traveling southwest on Toland Street when a small dog ran into the street in front of the Waymo AV. The Waymo AV then made contact with the dog, which did not survive. At the time of the impact, the Waymo AV’s Level 4 ADS was engaged in autonomous mode, and a test driver was present (in the driver’s seating position). The Waymo AV sustained damage.”

The collision was a block from Waymo’s Toland Depot, a 120,000-square-foot warehouse that houses at least 50 autonomous cars. The speed limit on Toland Street is 25 mph, according to posted signs viewable on Google Maps. On Street View, the road looks like a busy industrial area with many warehouses, truck delivery areas, and barbed-wire fences. The incident is the first reported fatality involving a Waymo vehicle.
Waymo sent along a statement: “On May 21 in San Francisco, a small dog ran in front of one of our vehicles with an autonomous specialist present in the driver’s seat, and, unfortunately, contact was made. The investigation is ongoing, however the initial review confirmed that the system correctly identified the dog which ran out from behind a parked vehicle but was not able to avoid contact. We send our sincere condolences to the dog’s owner. The trust and safety of the communities we are in is the most important thing to us and we’re continuing to look into this on our end.”

In early 2018, an autonomous Uber vehicle in Tempe, Arizona, hit and killed a woman. “According to Tempe PD, the car was in autonomous mode at the time of the incident, with a vehicle operator sitting behind the wheel,” reported Gizmodo at the time. The company went on to suspend self-driving car tests in all North American cities after the fatal accident.


Google is Quietly Working on a Wearable Device for Preteens

Google is developing a wearable device for preteens under its Fitbit group as it attempts to capture a growing demographic of younger users who own wearable tech, three employees familiar with the project told Insider.

Internally code-named “Project Eleven,” the wearable is designed to help older kids form healthy relationships with their phones and social media, two of the employees said. One of them said the device could include safety features that would let parents contact their children and know their whereabouts.

Project Eleven may be an opportunity to capture a growing market of younger users who would otherwise grow up to become Apple loyalists.


Google’s Eric Schmidt Helped Write AI Laws Without Disclosing Investments In AI Startups

About four years ago, former Google CEO Eric Schmidt was appointed to the National Security Commission on Artificial Intelligence by the chairman of the House Armed Services Committee. It was a powerful perch. Congress tasked the new group with a broad mandate: to advise the U.S. government on how to advance the development of artificial intelligence, machine learning and other technologies to enhance the national security of the United States, including how to sharpen American competitiveness in AI against its adversaries, build the AI workforce of the future, and develop data and ethical procedures.

In short, the commission, which Schmidt soon took charge of as chairman, was tasked with coming up with recommendations for almost every aspect of a vital and emerging industry. The panel did far more under his leadership. It wrote proposed legislation that later became law and steered billions of dollars of taxpayer funds to an industry he helped build, and in which he was actively investing while running the group. Someone steering the direction of government AI policy and recommending how to promote the sector arguably shouldn’t also be dipping a hand in the pot with personal AI investments. His credentials, however, were impeccable given his deep experience in Silicon Valley, his history of advising the Defense Department, and a vast personal fortune estimated at about $20 billion.

Five months after his appointment, Schmidt made a little-noticed private investment in an initial seed round of financing for a startup company called Beacon, which uses AI in the company’s supply chain products for shippers who manage freight logistics, according to CNBC’s review of investment information in database Crunchbase. There is no indication that Schmidt broke any ethics rules or did anything unlawful while chairing the commission. The commission was, by design, an outside advisory group of industry participants, and its other members included well-known tech executives including Oracle CEO Safra Catz, Amazon Web Services CEO Andy Jassy and Microsoft Chief Scientific Officer Dr. Eric Horvitz, among others. Schmidt’s investment was just the first of a handful of direct investments he would make in AI startup companies during his tenure as chairman of the AI commission.
“Venture capital firms financed, in part, by Schmidt and his private family foundation also made dozens of additional investments in AI companies during Schmidt’s tenure, giving Schmidt an economic stake in the industry even as he developed new regulations and encouraged taxpayer financing for it,” adds CNBC. “Altogether, Schmidt and entities connected to him made more than 50 investments in AI companies while he was chairman of the federal commission on AI. Information on his investments isn’t publicly available.”

“All that activity meant that, at the same time Schmidt was wielding enormous influence over the future of federal AI policy, he was also potentially positioning himself to profit personally from the most promising young AI companies.” Citing people close to Schmidt, the report says his investments were disclosed in a private filing to the U.S. government at the time and the public and news media had no access to that document.

A spokesperson for Schmidt told CNBC that he followed all rules and procedures in his tenure on the commission: “Eric has given full compliance on everything,” the spokesperson said.


Google Fires Engineer Who Claimed Company’s AI Is Sentient

In June, Google placed engineer Blake Lemoine on paid administrative leave for breaching its confidentiality agreement after he contacted members of the government about his concerns and hired a lawyer to represent LaMDA. […] Google maintains that it “extensively” reviewed Lemoine’s claims and found that they were “wholly unfounded.” This aligns with the views of numerous AI experts and ethicists, who have said that his claims were, more or less, impossible given today’s technology. Lemoine claims his conversations with LaMDA’s chatbot led him to believe that it has become more than just a program and has its own thoughts and feelings, as opposed to merely producing conversation realistic enough to make it seem that way, as it is designed to do. He argues that Google’s researchers should seek consent from LaMDA before running experiments on it (Lemoine himself was assigned to test whether the AI produced hate speech) and has published chunks of those conversations on his Medium account as evidence.


Google’s Nest Will Provide Data to Police Without a Warrant

Google “reserves the right” to make emergency disclosures to law enforcement even when there is no legal requirement to do so. “A provider like Google may disclose information to law enforcement without a subpoena or a warrant ‘if the provider, in good faith, believes that an emergency involving danger of death or serious physical injury to any person requires disclosure without delay of communications relating to the emergency,'” a Nest spokesperson tells CNET.

While Amazon and Google have both said they would hand over a user’s data to law enforcement without a warrant, Arlo, Apple, Wyze, and Anker, owner of Eufy, all confirmed to CNET that they won’t give authorities access to a user’s smart home camera footage unless they’re shown a warrant or court order. These companies would be legally bound to provide data to the authorities if shown such a document, but, unlike Google and Amazon, they will not otherwise share camera footage with law enforcement, even in response to an emergency request. Apple’s default setting for video cameras connected via HomeKit is end-to-end encryption, which means the company is unable to share user video at all.


How Facebook and Google Actually Fund the Creation of Misinformation

MIT’s Technology Review shares data from a Facebook-run tool called CrowdTangle. It shows that by 2018 in the nation of Myanmar (population: 53 million), “all the engagement had instead gone to fake news and clickbait websites. In a country where Facebook is synonymous with the internet, the low-grade content overwhelmed other information sources.”
[T]he sheer volume of fake news and clickbait acted like fuel on the flames of already dangerously high ethnic and religious tensions. It shifted public opinion and escalated the conflict, which ultimately led to the death of 10,000 Rohingya, by conservative estimates, and the displacement of 700,000 more. In 2018, a United Nations investigation determined that the violence against the Rohingya constituted a genocide and that Facebook had played a “determining role” in the atrocities. Months later, Facebook admitted it hadn’t done enough “to help prevent our platform from being used to foment division and incite offline violence.” Over the last few weeks, the revelations from the Facebook Papers, a collection of internal documents provided to Congress and a consortium of news organizations by whistleblower Frances Haugen, have reaffirmed what civil society groups have been saying for years: Facebook’s algorithmic amplification of inflammatory content, combined with its failure to prioritize content moderation outside the US and Europe, has fueled the spread of hate speech and misinformation, dangerously destabilizing countries around the world.

But there’s a crucial piece missing from the story. Facebook isn’t just amplifying misinformation.

The company is also funding it.

An MIT Technology Review investigation, based on expert interviews, data analyses, and documents that were not included in the Facebook Papers, has found that Facebook and Google are paying millions of ad dollars to bankroll clickbait actors, fueling the deterioration of information ecosystems around the world.

Facebook pays them for permission to open their content within Facebook’s app (where Facebook controls the advertising) rather than having users click through to the publisher’s own website, reports Technology Review:
Early on, Facebook performed little quality control on the types of publishers joining the program. The platform’s design also didn’t sufficiently penalize users for posting identical content across Facebook pages — in fact, it rewarded the behavior. Posting the same article on multiple pages could as much as double the number of users who clicked on it and generated ad revenue. Clickbait farms around the world seized on this flaw as a strategy — one they still use today… Clickbait actors cropped up in Myanmar overnight. With the right recipe for producing engaging and evocative content, they could generate thousands of U.S. dollars a month in ad revenue, or 10 times the average monthly salary — paid to them directly by Facebook. An internal company document, first reported by MIT Technology Review in October, shows that Facebook was aware of the problem as early as 2019… At one point, as many as 60% of the domains enrolled in Instant Articles were using the spammy writing tactics employed by clickbait farms, the report said…

75% of users who were exposed to clickbait content from farms run in Macedonia and Kosovo had never followed any of the pages. Facebook’s content-recommendation system had instead pushed it into their news feeds.

Technology Review notes that Facebook now pays billions of dollars to the publishers in their program. It’s a long and detailed article, which ultimately concludes that the problem “is now happening on a global scale.”
Thousands of clickbait operations have sprung up, primarily in countries where Facebook’s payouts provide a larger and steadier source of income than other forms of available work. Some are teams of people while others are individuals, abetted by cheap automated tools that help them create and distribute articles at mass scale…

Google is also culpable. Its AdSense program fueled the Macedonia- and Kosovo-based farms that targeted American audiences in the lead-up to the 2016 presidential election. And it’s AdSense that is incentivizing new clickbait actors on YouTube to post outrageous content and viral misinformation.

Reached for comment, a Facebook spokesperson told Technology Review that they’d misunderstood the issue. And the spokesperson also said “we’ve invested in building new expert-driven and scalable solutions to these complex issues for many years, and will continue doing so.”

Google’s spokesperson confirmed examples in the article violated their own policies and removed the content, adding “We work hard to protect viewers from clickbait or misleading content across our platforms and have invested heavily in systems that are designed to elevate authoritative information.”



Leaked Document Says Google Fired Dozens of Employees for Data Misuse

Google has fired dozens of employees between 2018 and 2020 for abusing their access to the company’s tools or data, with some workers potentially facing allegations of accessing Google user or employee data, according to an internal Google document obtained by Motherboard. From a report:
The document provides concrete figures on an often delicate part of a tech giant’s operations: investigations into how a company’s own employees leverage their position inside the company to steal, leak, or abuse data they may have access to. Insider abuse is a problem across the tech industry. Motherboard previously uncovered instances at Facebook, Snapchat, and MySpace, with employees in some cases using their access to stalk or otherwise spy on users.

The document says that Google terminated 36 employees in 2020 for security-related issues. Eighty-six percent of all security-related allegations against employees involved mishandling of confidential information, such as the transfer of internal-only information to outside parties. Ten percent of all allegations in 2020 concerned misuse of systems, which can include accessing user or employee data in violation of Google’s own policies, helping others to access that data, or modifying or deleting user or employee data, according to the document. In 2019, that figure was 13 percent of all security allegations.


Google Says Staff Have No Right to Protest Its Choice of Clients

Google employees have no legal right to protest the company’s choice of clients, the internet giant told a judge weighing the U.S. government’s allegations that its firings of activists violated the National Labor Relations Act.

“Even if Google had, for the sake of argument, terminated the employees for their protest activities — for protesting their choice of customers — this would not violate the Act,” Google’s attorney Al Latham said in his opening statement Tuesday at a labor board trial.

National Labor Relations Board prosecutors have accused the Alphabet Inc. unit of violating federal law by illegally firing five employees for their activism. Three of those workers’ claims had originally been dismissed under President Donald Trump, because agency prosecutors concluded that their opposition to the company collaborating with immigration enforcement wasn’t legally protected, according to their lawyer. But that decision was reversed after President Joe Biden fired and replaced the labor board’s general counsel.

Google has been roiled over the past four years by a wave of employee activism challenging management on issues including the treatment of sub-contracted staff, the handling of sexual harassment, and a contract with U.S. Customs and Border Protection, about which some of the fired workers accessed internal information and circulated a petition.

Google has denied wrongdoing, saying in a Monday statement that it encourages “open discussion and debate” but terminated staff in response to violations of its data security policies. “Google terminated these employees not because of their protest as such, but because in the pursuit of their protest, they accessed highly confidential information that they had no right to access,” its attorney told the judge Tuesday.


What Happens When Big Tech’s Datacenters Come to Small Towns?

Few of the big tech companies building and hiring across America bring their wealth with them when they set up in new communities. Instead, they hire armies of low-paid contractors, many of whom are not guaranteed a job from one month to the next; some of the contracting companies have a history of alleged mistreatment of workers. Nor do local governments share in the companies’ wealth; instead, the tech giants negotiate deals — the details protected by non-disclosure agreements — that exempt them from paying taxes that would fund schools, roads and fire departments….

Globally, by the end of 2020, there were nearly 600 “hyperscale” data centers, where a single company runs thousands of servers and rents out cloud space to customers. That’s more than double the number from 2015. Amazon, Google and Microsoft account for more than half of those hyperscale centers, making data centers one more field dominated by America’s richest and biggest companies… Google in March said it was “investing in America” with a plan to spend $7 billion across 19 states to build more data centers and offices. Microsoft said in April that it plans to build 50 to 100 data centers each year for the foreseeable future. Amazon recently got approval to build 1.75 million square feet of data-center space in Northern Virginia, beyond the 50 data centers it already operates there. Facebook said this year it would spend billions to expand data centers in Iowa, Georgia and Utah; in March it said it was adding an 11th building to its largest data-center facility in rural Prineville, Oregon…

Facebook has spent more than $2 billion expanding its operations in Prineville, but because of the tax incentives it negotiated with local officials, the company paid a total of just $119,403.42 in taxes to Crook County last year, according to the County Assessor’s list of top taxpayers. That’s less than half the taxes paid by Brasada Ranch, a local resort. And according to the Oregon Bureau of Labor and Industries, the data center has been the subject of numerous labor complaints… “I’ve spent way too much of my life watching city councils say, ‘We need a big tech company to show that we’re future-focused,'” says Sebastian Moss, the editor of Data Center Dynamics, which tracks the industry. Towns will give away tax breaks worth hundreds of millions of dollars, his reporting has found, and then express gratitude toward tech companies that have donated a few thousand computers — worth a fraction of the tax breaks — to their cash-strapped school systems. “I sometimes wonder if they’re preying on desperation, going to places that are struggling.”

Communities give up more than tax breaks when they welcome tech companies. Data centers use huge amounts of water to cool computer equipment, yet they’re being built in the drought-stricken American West.

The article cites Bureau of Labor Statistics figures showing that 373,300 Americans were working in data processing, hosting, and related services in June — up 52% from 10 years ago.


Activist Raided By London Police After Downloading Docs Found On Google Search

The raid by four Metropolitan Police constables took place after Southwark campaigner Robert Hutchinson was reportedly accused of illegally entering a password-protected area of a website. “I was searching in Google and found links to board meeting minutes,” he told The Register. “Board reports, none of which were marked confidential. So I have no question that it was in the public domain.” The Southwark News reported that Hutchinson was arrested at 8.20am on 10 June this year at home following allegations made by Leathermarket Community Benefit Society (CBS). The society is a property development firm that wants to build flats over a children’s caged ball court in the south London borough, something Hutchinson “vocally opposes,” according to the local paper.

“There’s a directory, which you need to enter a password and a username to get into. But documents from that area were being published on Google,” explained Hutchinson. “I didn’t see a page saying ‘this is the directors’ area’ or anything like that, the documents were just available. They were just linked directly.” Police said in a statement that Hutchinson was arrested on suspicion of breaking section 1 of Britain’s Computer Misuse Act 1990 “between the 17th and 24th February 2021 and had published documents from the website on social media.” They added: “He was taken into custody and later released under investigation. Following a review of all available evidence, it was determined no offences had been committed and no further action was taken.”

Hutchinson said his identification by Leathermarket and subsequent arrest raised questions in his mind, saying police confirmed to him that the company had handed over an access log containing IP addresses: “Now, how that ended up with me being in the frame, I don’t know. There’s part of this that doesn’t add up…” While the property business did not respond to The Register’s request for comment at the time of publication, in a statement given to the Southwark News it said: “When it came to the CBS’s attention that confidential information had been accessed and subsequently shared via Twitter, the CBS made a general report of the data breach to the police, who requested a full log of visitor access to the website before deciding whether or not to progress. The police carried out their own independent investigation into who accessed the documents and how, and have now concluded their investigation.” The prepared police statement did not explain whether investigators tested Leathermarket CBS’s version of events before arresting the campaigner.


Google Illegally Spied On Workers Before Firing Them, US Labor Board Alleges

Google violated US labor laws by spying on workers who were organizing employee protests, then firing two of them, according to a complaint to be filed by the National Labor Relations Board (NLRB) today. The complaint names two employees, Laurence Berland and Kathryn Spiers, both of whom were fired by the company in late 2019 in connection with employee activism. Berland was organizing against Google’s decision to work with IRI Consultants, a firm widely known for its anti-union efforts, when he was let go for reviewing other employees’ calendars. Now, the NLRB has found Google’s policy against employees looking at certain coworkers’ calendars is unlawful. “Google’s hiring of IRI is an unambiguous declaration that management will no longer tolerate worker organizing,” Berland said in a statement. “Management and their union busting cronies wanted to send that message, and the NLRB is now sending their own message: worker organizing is protected by law.”

Spiers was fired after she created a pop-up for Google employees visiting the IRI Consultants website. “Googlers have the right to participate in protected concerted activities,” the notification read, according to The Guardian. The company said Spiers had violated security policies, a statement that hurt her reputation in the tech community. Now, the NLRB has found the firing was unlawful. “This week the NLRB issued a complaint on my behalf. They found that I was illegally terminated for trying to help my colleagues,” Spiers said. “Colleagues and strangers believe I abused my role because of lies told by Google management while they were retaliating against me. The NLRB can order Google to reinstate me, but it cannot reverse the harm done to my credibility.”


Six Reasons Why Google Maps Is the Creepiest App On Your Phone

VICE has highlighted six reasons why Google Maps is the creepiest app on your phone. An anonymous reader shares an excerpt from the report:

1. Google Maps Wants Your Search History: Google’s “Web & App Activity” settings describe how the company collects data, such as user location, to create a faster and “more personalized” experience. In plain English, this means that every single place you’ve looked up in the app — whether it’s a strip club, a kebab shop or your moped-riding drug dealer’s location — is saved and integrated into Google’s search engine algorithm for a period of 18 months. Google knows you probably find this creepy. That’s why the company uses so-called “dark patterns” — user interfaces crafted to coax us into choosing options we might not otherwise, for example by highlighting an option with certain fonts or brighter colors.

2. Google Maps Limits Its Features If You Don’t Share Your Search History: If you open your Google Maps app, you’ll see a circle in the top right corner that signifies you’re logged in with your Google account. That’s not necessary, and you can simply log out. Of course, the log out button is slightly hidden, but can be found like this: click on the circle > Settings > scroll down > Log out of Google Maps. Unfortunately, Google Maps won’t let you save frequently visited places if you’re not logged into your Google account. If you choose not to log in, when you click on the search bar you get a “Tired of typing?” button, suggesting you sign in, and coaxing you towards more data collection.

3. Google Maps Can Snitch On You: Another problematic feature is the “Google Maps Timeline,” which “shows an estimate of places you may have been and routes you may have taken based on your Location History.” With this feature, you can look at your personal travel routes on Google Maps, including the means of transport you probably used, such as a car or a bike. The obvious downside is that your every move is known to Google, and to anyone with access to your account. And that’s not just hackers — Google may also share data with government agencies such as the police. […] If your “Location History” is on, your phone “saves where you go with your devices, even when you aren’t using a specific Google service,” as is explained in more detail on this page. This feature is useful if you lose your phone, but it also turns it into a bona fide tracking device.

4. Google Maps Wants to Know Your Habits: Google Maps often asks users to share a quick public rating. “How was Berlin Burger? Help others know what to expect,” suggests the app after you’ve picked up your dinner. This feels like a casual, lighthearted question and relies on the positive feeling we get when we help others. But all this info is collected in your Google profile, making it easier for someone to figure out if you’re visiting a place briefly and occasionally (like on holiday) or if you live nearby.

5. Google Maps Doesn’t Like It When You’re Offline: Remember GPS navigation? It might have been clunky and slow, but it’s a good reminder that you don’t need to be connected to the internet to be directed. In fact, other apps offer offline navigation. In Google Maps, you can download maps, but offline navigation is only available for driving. It seems fairly unlikely the tech giant can’t figure out how to direct pedestrians and cyclists without internet.

6. Google Makes It Seem Like This Is All for Your Own Good: “Providing useful, meaningful experiences is at the core of what Google does,” the company says on its website, adding that knowing your location is important for this reason. They say they use this data for all kinds of useful things, like “security” and “language settings” — and, of course, selling ads. Google also sells advertisers the possibility to evaluate how well their campaigns reached their target (that’s you!) and how often people visited their physical shops “in an anonymized and aggregated manner”. But only if you opt in (or you forget to opt out).


New Survey Reveals Teens Get Their News from Social Media and YouTube

Celebrities, influencers, and personalities are as influential a source of current events as friends, family, and news organizations.

Teens today are not only getting the majority of their news online but are also turning away from traditional media organizations, finding out about current events on social media sites and YouTube — often from online influencers and celebrities — according to a new poll by Common Sense and SurveyMonkey.

The survey found that more than half of teens (54%) get news at least a few times a week from social media platforms such as Instagram, Facebook, and Twitter, and that 50% get news from YouTube.

Teens’ news habits reflect the diversity of the modern media landscape. And, while most news organizations maintain accounts on social media and other platforms, they are competing for attention with corporate brands, celebrities, influencers, and personal connections. Of those teens who get their news from YouTube, for example, six in 10 say they are more likely to get it from celebrities, influencers, and personalities rather than from news organizations utilizing the platform.

What’s noteworthy is that, even with so many relying on alternative sources for the majority of their news, teens are more confident in the news they get directly from news organizations. Of teens who get news of current events from news organizations, 65% say it helps them better understand what is going on. In contrast, just 53% of teens who get news from social media say it helps them better understand what is going on, while 19% say it has made them more confused about current events.

Amid ongoing concerns about the impact of information disseminated through social media on elections, older teens’ news habits may have political implications. Of the teens age 16 and 17 who say they’ll be eligible to vote in the 2020 election, 85% say they are likely to cast a ballot, including 61% who say they’re “very likely.”

“These findings raise concerns about what kind of news the next generation is using to shape their decisions,” said James Steyer, CEO of Common Sense. “There are few standards for what constitutes news and how accurately it’s portrayed on the platforms teens use. With the 2020 election coming up, we need to make sure teens are getting their news from reliable sources, thinking critically, and making informed decisions.”

This latest survey is part of a Common Sense partnership with SurveyMonkey to examine media and technology trends affecting kids and their parents and to share actionable data and insights with families.

“While it’s notable that teens rely heavily on platforms such as Facebook and YouTube to stay informed, their reliance on news from celebrities and influencers rather than journalists may have pernicious implications,” said Jon Cohen, chief research officer at SurveyMonkey. “It’s a bit of a paradox: Overwhelmingly teens say they are interested in keeping up with the news, but they’re not seeking out either traditional or new media to do so.”

Selected key findings

  1. A large majority of teens age 13 to 17 in the U.S. (78%) say it’s important to them to follow current events.
  2. Teens get their news more frequently from social media sites (e.g., Facebook and Twitter) or from YouTube than directly from news organizations. More than half of teens (54%) get news from social media, and 50% get news from YouTube at least a few times a week. Fewer than half, 41%, get news reported by news organizations in print or online at least a few times a week, and only 37% get news on TV at least a few times a week.
  3. YouTube recommendations drive news consumption. Among all teens who get their news from YouTube—regardless of how often—exactly half (50%) say they most often find news on YouTube because it was recommended by YouTube itself (i.e., as a “watch next” video or in the sidebar). Almost half as many (27%) say they follow or subscribe to a specific channel for news on YouTube, and fewer say they find their news on YouTube through search (10%) or because it was shared by someone they know in real life (7%).
  4. Sixty percent of teens who get news from YouTube say they are more likely to get it from celebrities, influencers, and personalities as compared to news organizations (39%). The difference is even more apparent among daily YouTube news consumers (71% vs. 28%).
  5. Nearly two in three teens (65%) who get news directly from news organizations say doing so has helped them better understand current events, compared with 59% of teens who get their news from YouTube and 53% of teens who get their news from social media sites. Nearly two in 10 teens (19%) say that getting news from social media has made them more confused about current events.
  6. Teens clearly prefer a visual medium for learning about the news. A majority (64%) say that “seeing pictures and video showing what happened” gives them the best understanding of major news events, while just 36% say they’d prefer to read or hear the facts about what happened.
  7. Politically, teens are more likely to be moderate and identify as Democrats, but they are open to ideas from sources whose opinions differ from their own. Just under half of teens (45%) say they get news from sources that have views different from their own once a week or more, and only 14% say they never get news from sources with different views. Slightly fewer (35%) say they discuss political issues with people who have different views once a week or more, and 19% say they never discuss politics with people who have opposing views.

The study comes on the heels of the release of Common Sense’s revamped Digital Citizenship Curriculum, which gives teachers lessons to help students develop skills to be critical consumers of news at a time when they are navigating a fast-changing digital terrain fraught with fake media, hate speech, cyberbullying, and constant digital distraction.

Methodology: This SurveyMonkey Audience survey was conducted June 14 to 25, 2019, among 1,005 teenagers age 13 to 17 in the United States. Respondents for these surveys were selected from more than 2 million people who take surveys on the SurveyMonkey platform each day. The modeled error estimate for the full sample is +/-4.0 percentage points. Data has been weighted for age and sex using the Census Bureau’s American Community Survey to reflect the demographic composition of people in the United States age 13 to 17. Find the full survey results and more information about Common Sense research here.


Google is Giving Data To Police Based on Search Keywords, Court Docs Show

There are few things as revealing as a person’s search history, and police typically need a warrant on a known suspect to demand that sensitive information. But a recently unsealed court document shows that investigators can also request such data in reverse: asking Google to disclose everyone who searched for a keyword, rather than requesting information on a known suspect.

In August, police arrested Michael Williams, an associate of singer and accused sex offender R. Kelly, for allegedly setting fire to a witness’ car in Florida. Investigators linked Williams to the arson, as well as witness tampering, after sending a search warrant to Google that requested information on “users who had searched the address of the residence close in time to the arson.”

The July court filing was unsealed on Tuesday. Detroit News reporter Robert Snell tweeted about the filing after it was unsealed. Court documents showed that Google provided the IP addresses of people who searched for the arson victim’s address, which investigators tied to a phone number belonging to Williams. Police then used the phone number records to pinpoint the location of Williams’ device near the arson, according to court documents. The original warrant sent to Google is still sealed, but the report provides another example of a growing trend of data requests to the search engine giant in which investigators demand data on a large group of users rather than a specific request on a single suspect. “This ‘keyword warrant’ evades the Fourth Amendment checks on police surveillance,” said Albert Fox Cahn, the executive director of the Surveillance Technology Oversight Project. “When a court authorizes a data dump of every person who searched for a specific term or address, it’s likely unconstitutional.”
