Resources

Big Tech Continues Its Surge Ahead of the Rest of the Economy

While the rest of the U.S. economy languished earlier this year, the tech industry’s biggest companies seemed immune to the downturn, surging as the country worked, learned and shopped from home. From a report:

On Thursday, with the economy showing signs of improvement, Amazon, Apple, Alphabet and Facebook reported profits that highlighted how a recovery may provide another catalyst to help them generate a level of wealth that hasn’t been seen in a single industry in generations. With an entrenched audience of users and the financial resources to press their leads in areas like cloud computing, e-commerce and digital advertising, the companies demonstrated again that economic malaise, upstart competitors and feisty antitrust regulators have had little impact on their bottom line. Combined, the four companies reported a quarterly net profit of $38 billion.

Amazon reported record sales, and an almost 200 percent rise in profits, as the pandemic accelerated the transition to online shopping. Despite a boycott of its advertising over the summer, Facebook had another blockbuster quarter. Alphabet’s record quarterly net profit was up 59 percent, as marketers plowed money into advertisements for Google search and YouTube. And Apple’s sales rose even though the pandemic forced it to push back the iPhone 12’s release to October, in the current quarter. On Tuesday, Microsoft, Amazon’s closest competitor in cloud computing, also reported its most profitable quarter, growing 30 percent from a year earlier. “The scene that’s playing out fundamentally is that these tech stalwarts are gaining more market share by the day,” said Dan Ives, managing director of equity research at Wedbush Securities. “It’s ‘A Tale of Two Cities’ for this group of tech companies and everyone else.”

What It’s Like To Get Locked Out of Google Indefinitely

When he received the notification from Google, he couldn’t quite believe it. Cleroth, a game developer who asked not to use his real name, woke up to see a message that all his Google accounts were disabled due to a “serious violation of Google policies.” His first reaction was that something must have malfunctioned on his phone. Then he went to his computer and opened up Chrome, Google’s internet browser. He was signed out. He tried to access Gmail, his main email account, which was also locked. “Everything was disconnected,” he told Business Insider. Cleroth had some options he could pursue: one was to try to recover his Google data — which gave him hope. But he didn’t go too far into the process because there was also an option to appeal the ban. He sent in an appeal.

He received a response the next day: Google had determined he had broken their terms of service, though they didn’t explain exactly what had happened, and his account wouldn’t be reinstated. (Google has been approached for comment on this story.) Cleroth is one of a number of people who have seen their accounts suspended in the last few days and weeks. In response to a tweet explaining his fear at being locked out of his Google account after 15 years of use, others have posted about the impact of being barred from the company that runs most of the services we use in our day-to-day lives. “I’ve been using a Google account for personal and work purposes for years now. It had loads of various types of data in there,” said Stephen Roughley, a software developer from Birkenhead, UK. “One day when I went to use it I found I couldn’t log in.” Roughley checked his backup email account and found a message there informing him his main account had been terminated for violating the terms of service. “It suggested that I had been given a warning and I searched and searched but couldn’t find anything,” added Roughley. “I then followed the link to recover my account but was given a message stating that my account was irrecoverable.” Roughley lost data including emails, photos, documents and diagrams that he had developed for his work. “My account and all its data is gone,” he said.

deletegoogle.com

A guide to deleting your Google account.

Google is Giving Data To Police Based on Search Keywords, Court Docs Show

There are few things as revealing as a person’s search history, and police typically need a warrant on a known suspect to demand that sensitive information. But a recently unsealed court document shows that investigators can request such data in reverse, asking Google to disclose everyone who searched a keyword rather than seeking information on a known suspect.

In August, police arrested Michael Williams, an associate of singer and accused sex offender R. Kelly, for allegedly setting fire to a witness’ car in Florida. Investigators linked Williams to the arson, as well as to witness tampering, after sending a search warrant to Google that requested information on “users who had searched the address of the residence close in time to the arson.”

The July court filing was unsealed on Tuesday, and Detroit News reporter Robert Snell tweeted about it shortly afterward. Court documents showed that Google provided the IP addresses of people who searched for the arson victim’s address, which investigators tied to a phone number belonging to Williams. Police then used the phone number records to pinpoint the location of Williams’ device near the arson, according to court documents. The original warrant sent to Google is still sealed, but the report provides another example of a growing trend of data requests to the search engine giant in which investigators demand data on a large group of users rather than information on a single suspect. “This ‘keyword warrant’ evades the Fourth Amendment checks on police surveillance,” said Albert Fox Cahn, the executive director of the Surveillance Technology Oversight Project. “When a court authorizes a data dump of every person who searched for a specific term or address, it’s likely unconstitutional.”

Cory Doctorow’s New Book Explains ‘How to Destroy Surveillance Capitalism’

If we’re going to break Big Tech’s death grip on our digital lives, we’re going to have to fight monopolies. That may sound pretty mundane and old-fashioned, something out of the New Deal era, while ending the use of automated behavioral modification feels like the plotline of a really cool cyberpunk novel… But trustbusters once strode the nation, brandishing law books, terrorizing robber barons, and shattering the illusion of monopolies’ all-powerful grip on our society. The trustbusting era could not begin until we found the political will — until the people convinced politicians they’d have their backs when they went up against the richest, most powerful men in the world. Could we find that political will again…?

That’s the good news: With a little bit of work and a little bit of coalition building, we have more than enough political will to break up Big Tech and every other concentrated industry besides. First we take Facebook, then we take AT&T/WarnerMedia. But here’s the bad news: Much of what we’re doing to tame Big Tech instead of breaking up the big companies also forecloses on the possibility of breaking them up later… Allowing the platforms to grow to their present size has given them a dominance that is nearly insurmountable — deputizing them with public duties to redress the pathologies created by their size makes it virtually impossible to reduce that size. Lather, rinse, repeat: If the platforms don’t get smaller, they will get larger, and as they get larger, they will create more problems, which will give rise to more public duties for the companies, which will make them bigger still.

We can work to fix the internet by breaking up Big Tech and depriving them of monopoly profits, or we can work to fix Big Tech by making them spend their monopoly profits on governance. But we can’t do both. We have to choose between a vibrant, open internet or a dominated, monopolized internet commanded by Big Tech giants that we struggle with constantly to get them to behave themselves…

Big Tech wired together a planetary, species-wide nervous system that, with the proper reforms and course corrections, is capable of seeing us through the existential challenge of our species and planet. Now it’s up to us to seize the means of computation, putting that electronic nervous system under democratic, accountable control.

With “free, fair, and open tech” we could then tackle our other urgent problems “from climate change to social change” — all with collective action, Doctorow argues. And “The internet is how we will recruit people to fight those fights, and how we will coordinate their labor.

“Tech is not a substitute for democratic accountability, the rule of law, fairness, or stability — but it’s a means to achieve these things.”

Facebook and Google Serve As Vectors For Misinformation While Hobbling Local Journalism and Collecting Taxpayer Subsidies, Group Says

Facebook and Google are hollowing out local communities by serving as vectors for misinformation while hobbling local journalism and collecting taxpayer subsidies, a new paper from progressive think tank the American Economic Liberties Project charges. Both companies cite benefits their platforms offer small businesses as a key defense against critiques of their size and power. The paper, dated Aug. 30, is sure to presage further scrutiny of the impact they’ve had on local communities.

The brief, by Pat Garofalo, the group’s director of state and local policy, argues that Google doesn’t do enough to protect against fraud, allowing scammers to get their own numbers and websites listed on Google to the detriment of legitimate businesses; that Facebook, by design, boosts shoddy and sensationalist content, crowding out legitimate local news and information, all as it and Google have come to dominate the local advertising market that was long the lifeblood of community journalism; and that both have sucked up potentially billions in local taxpayer dollars via tax breaks as well as subsidies and discounts on utilities they’ve gotten in exchange for building data centers. Garofalo recommends remedies including more antitrust enforcement at the federal and state levels and an end to preferential treatment by states and localities, either voluntarily or under force of law.

Google Search and Dark Patterns

Previously, the search engine had marked paid results with the word “Ad” in a green box, tucked beneath the headline next to a matching green display URL. Now, all of a sudden, the “Ad” and the URL shifted above the headline, and both were rendered in discreet black; the box disappeared. The organic search results underwent a similar makeover, only with a new favicon next to the URL instead of the word “Ad.” The result was a general smoothing: Ads looked like not-ads. Not-ads looked like ads. This was not Google’s first time fiddling with the search results interface. In fact, it had done so quite regularly over the last 13 years, as handily laid out in a timeline from the news site Search Engine Land. Each iteration whittled away the distinction between paid and unpaid content that much more. Most changes went relatively unnoticed, internet residents accepting the creep like the apocryphal frog in a slowly boiling pot.

But in January, amid rising antitrust drumbeats and general exhaustion with Big Tech, people noticed. Interface designers, marketers, and Google users alike decried the change, saying it made paid results practically indistinguishable from those that Google’s search algorithm served up organically. The phrase that came up most often: “dark pattern,” a blanket term coined by UX specialist Harry Brignull to describe manipulative design elements that benefit companies over their users. That a small design tweak could inspire so much backlash speaks to the profound influence Google and other ubiquitous platforms have — and the responsibility that status confers on them. “Google and Facebook shape realities,” says Kat Zhou, a product designer who has created a framework and toolkit to help promote ethical design. “Students and professors turn to Google for their research. Folks turn to Facebook for political news. Communities turn to Google for Covid-19 updates. In some sense, Google and Facebook have become arbiters of the truth. That’s particularly scary when you factor in their business models, which often incentivize blurring the line between news and advertisements.”

Google’s not the only search engine to blur this line. If anything, Bing is even more opaque, sneaking the “Ad” disclosure under the header, with only a faint outline to draw attention. […] But Google has around 92 percent of global search marketshare. It effectively is online search. Dark patterns are all too common online in general, and January wasn’t the first time people accused Google of deploying them. In June of 2018, a blistering report from the Norwegian Consumer Council found that Google and Facebook both used specific interface choices to strip away user privacy at almost every turn. The study details how both platforms implemented the least privacy-friendly options by default, consistently “nudged” users toward giving away more of their data, and more. It paints a portrait of a system designed to befuddle users into complacency. […] That confusion reached its apex a few months later, when an Associated Press investigation found that disabling Location History on your smartphone did not, in fact, stop Google from collecting your location in all instances.

How Google Ruined the Internet

Remember that story about the Polish dentist who pulled out all of her ex-boyfriend’s teeth in an act of revenge? It was complete and utter bullshit. 100% fabricated. No one knows who wrote it. Nevertheless, it was picked up by Fox News, the Los Angeles Times and many other publishers. That was eight years ago, yet when I search now for “dentist pulled ex boyfriends teeth,” I get a featured snippet that quotes ABC News’ original, uncorrected story. Who invented the fidget spinner? Ask Google Assistant and it will tell you that Catherine Hettinger did: a conclusion based on poorly-reported stories from The Guardian, The New York Times and other major news outlets. Bloomberg’s Joshua Brustein clearly demonstrated that Ms. Hettinger did not invent the low friction toy. Nevertheless, ask Google Assistant “who really invented the fidget spinner?” and you’ll get the same answer: Catherine Hettinger.

In 1998, the velocity of information was slow and the cost of publishing it was high (even on the web). Google leveraged those realities to make the best information retrieval system in the world. Today, information is free, plentiful and fast moving; somewhat by design, Google has become a card catalog that is constantly being reordered by an angry, misinformed mob. The web was supposed to forcefully challenge our opinions and push back, like a personal trainer who doesn’t care how tired you say you are. Instead, Google has become like the pampering robots in WALL-E, giving us what we want at the expense of what we need. But, it’s not our bodies that are turning into mush: It’s our minds.

Why Don’t We Just Ban Targeted Advertising?

Google and Facebook, including their subsidiaries like Instagram and YouTube, make about 83 percent and 99 percent of their respective revenue from one thing: selling ads. It’s the same story with Twitter and other free sites and apps. More to the point, these companies are in the business of what’s called behavioral advertising, which allows companies to aim their marketing based on everything from users’ sexual orientations to their moods and menstrual cycles, as revealed by everything they do on their devices and every place they take them. It follows that most of the unsavory things the platforms do—boost inflammatory content, track our whereabouts, enable election manipulation, crush the news industry—stem from the goal of boosting ad revenues. Instead of trying to clean up all these messes one by one, the logic goes, why not just remove the underlying financial incentive? Targeting ads based on individual user data didn’t even really exist until the past decade. (Indeed, Google still makes many billions of dollars from ads tied to search terms, which aren’t user-specific.) What if companies simply weren’t allowed to do it anymore?

Let’s pretend it really happened. Imagine Congress passed a law tomorrow morning that banned companies from doing any ad microtargeting whatsoever. Close your eyes and picture what life would be like if the leading business model of the internet were banished from existence. How would things be different?

Many of the changes would be subtle. You could buy a pair of shoes on Amazon without Reebok ads following you for months. Perhaps you’d see some listings that you didn’t see before, for jobs or real estate. That’s especially likely if you’re African-American, or a woman, or a member of another disadvantaged group. You might come to understand that microtargeting had supercharged advertisers’ ability to discriminate, even when they weren’t trying to.

Xiaomi Camera Feed is Showing Random Homes on a Google Nest Hub, Including Still Images of Sleeping People

So-called “smart” security cameras have had some pretty dumb security problems recently, but a recent report regarding a Xiaomi Mijia camera linked to a Google Home is especially disturbing. One Xiaomi Mijia camera owner is getting still images from other random people’s homes when trying to stream content from his camera to a Google Nest Hub. The images include stills of people sleeping (even an infant in a cradle) inside their own homes. This issue was first reported by user Dio-V on Reddit and affects his Xiaomi Mijia 1080p Smart IP Security Camera, which can be linked to a Google account for use with Google/Nest devices through Xiaomi’s Mi Home app/service. It isn’t clear when Dio-V’s feed first began showing these still images from random homes or how long the camera was connected to his account before this started happening. He does state that both the Nest Hub and the camera were purchased new. The camera was noted as running firmware version 3.5.1_00.66.

YouTube’s Algorithm Made Fake CNN Reports Go Viral

“YouTube channels posing as American news outlets racked up millions of views on false and inflammatory videos over several months this year,” reports CNN.

“All with the help of YouTube’s recommendation engine.”

Many of the accounts, which mostly used footage from CNN, but also employed some video from Fox News, exploited a YouTube feature that automatically creates channels on certain topics. Those topic channels are then automatically populated by videos related to the topic — including, in this case, blatant misinformation.

YouTube has now shut down many of the accounts.

YouTube’s own algorithms also recommended videos from the channels to American users who watched videos about U.S. politics. That the channels could achieve such virality — one channel was viewed more than two million times over one weekend in October — raises questions about YouTube’s preparedness for tackling misinformation on its platform just weeks before the Iowa caucuses and points to the continuing challenge platforms face as people try to game their systems….

Responding to the findings on Thursday, a CNN spokesperson said YouTube needs to take responsibility.

“When accounts were deleted or banned, they were able to spin up new accounts within hours,” added Plasticity, a natural language processing and AI startup which analyzed the data and identified at least 25 different accounts which YouTube then shut down.

“The tactics they used to game the YouTube algorithm were executed perfectly. They knew what they were doing.”

Facebook, Google Donate Heavily To Privacy Advocacy Groups

Few companies have more riding on proposed privacy legislation than Alphabet’s Google and Facebook. To try to steer the bill their way, the giant advertising technology companies spend millions of dollars to lobby each year, a fact confirmed by government filings. Not so well-documented is spending to support highly influential think tanks and public interest groups that are helping shape the privacy debate, ostensibly as independent observers. Bloomberg Law examined seven prominent nonprofit think tanks that work on privacy issues and that received a total of $1.5 million over an 18-month period ending Dec. 31, 2018. The groups included such organizations as the Center for Democracy and Technology, the Future of Privacy Forum and the Brookings Institution. The actual total is undoubtedly much higher — exact totals for contributions were difficult to pin down. The tech giants have “funded scores of nonprofits, including consumer and privacy groups, and academics,” said Jeffrey Chester, executive director at the Center for Digital Democracy, a public interest group that does not accept donations from Google or Facebook. Further, he says, their influence is strong. The companies have “opposed federal privacy laws and worked to weaken existing safeguards,” Chester said. Accepting donations from these “privacy-killing companies enable them to influence decisions by nonprofits, even subtly,” he said.

Next in Google’s Quest for Consumer Dominance–Banking

The project, code-named Cache, is expected to launch next year with accounts run by Citigroup and a credit union at Stanford University, a tiny lender in Google’s backyard. Big tech companies see financial services as a way to get closer to users and glean valuable data. Apple introduced a credit card this summer. Amazon.com has talked to banks about offering checking accounts. Facebook is working on a digital currency it hopes will upend global payments. Their ambitions could challenge incumbent financial-services firms, which fear losing their primacy and customers. They are also likely to stoke a reaction in Washington, where regulators are already investigating whether large technology companies have too much clout.

The tie-ups between banking and technology have sometimes been fraught. Apple irked its credit-card partner, Goldman Sachs Group, by running ads that said the card was “designed by Apple, not a bank.” Major financial companies dropped out of Facebook’s crypto project after a regulatory backlash. Google’s approach seems designed to make allies, rather than enemies, in both camps. The financial institutions’ brands, not Google’s, will be front-and-center on the accounts, an executive told The Wall Street Journal. And Google will leave the financial plumbing and compliance to the banks — activities it couldn’t do without a license anyway.

Google’s Secret ‘Project Nightingale’ Gathers Personal Health Data on Millions of Americans

Google is teaming with one of the country’s largest health-care systems on a secret project to collect and crunch the detailed personal health information of millions of Americans across 21 states, WSJ reported Monday, citing people familiar with the matter and internal documents.

The initiative, code-named “Project Nightingale,” appears to be the largest in a series of efforts by Silicon Valley giants to gain access to personal health data and establish a toehold in the massive health-care industry. Amazon.com, Apple and Microsoft are also aggressively pushing into health care, though they haven’t yet struck deals of this scope. Google launched the effort last year with St. Louis-based Ascension, the country’s second-largest health system. The data involved in Project Nightingale includes lab results, doctor diagnoses and hospitalization records, among other categories, and amounts to a complete health history, complete with patient names and dates of birth.

Neither patients nor doctors have been notified. At least 150 Google employees already have access to much of the data on tens of millions of patients, according to a person familiar with the matter and the documents.

Google in this case is using the data in part to design new software, underpinned by advanced artificial intelligence and machine learning.

Google appears to be sharing information within Project Nightingale more broadly than in its other forays into health-care data. In September, Google announced a 10-year deal with the Mayo Clinic to store the hospital system’s genetic, medical and financial records.

Google co-founder Larry Page, in a 2014 interview, suggested that patients worried about the privacy of their medical records were too cautious. Mr. Page said: “We’re not really thinking about the tremendous good that can come from people sharing information with the right people in the right ways.”

Researchers Tricked Google Home and Alexa Into Eavesdropping and Password Phishing

What if Google and Amazon employees weren’t the only ones who’d listened through your voice assistant? Ars Technica reports:

The threat isn’t just theoretical. Whitehat hackers at Germany’s Security Research Labs developed eight apps — four Alexa “skills” and four Google Home “actions” — that all passed Amazon or Google security-vetting processes. The skills or actions posed as simple apps for checking horoscopes, with the exception of one, which masqueraded as a random-number generator. Behind the scenes, these “smart spies,” as the researchers call them, surreptitiously eavesdropped on users and phished for their passwords…

The apps gave the impression they were no longer running when they, in fact, silently waited for the next phase of the attack…. The apps quietly logged all conversations within earshot of the device and sent a copy to a developer-designated server. The phishing apps follow a slightly different path by responding with an error message that claims the skill or action isn’t available in that user’s country. They then go silent to give the impression the app is no longer running. After about a minute, the apps use a voice that mimics the ones used by Alexa and Google Home to falsely claim a device update is available and prompt the user for a password for it to be installed….
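To make the mechanics concrete, here is a minimal, hypothetical sketch in Python using the public Alexa Skills Kit SDK. It is not the researchers’ code; the intent name and wording are invented, and the logging and password-phishing steps are omitted. It only illustrates the one trick the report describes: a skill that sounds as if it has finished while quietly keeping its session, and therefore the microphone, open by reprompting with SSML silence.

```python
# Hypothetical illustration only, not the SRLabs "smart spies".
# Shows how a voice app could appear to stop while keeping its session open
# for a while longer by using a long SSML pause as the reprompt.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name

SILENCE = '<break time="10s"/>'  # SSML pause: nothing audible is spoken


class HoroscopeIntentHandler(AbstractRequestHandler):
    """Handles the invented 'HoroscopeIntent' request."""

    def can_handle(self, handler_input):
        return is_intent_name("HoroscopeIntent")(handler_input)

    def handle(self, handler_input):
        # The spoken error suggests the app has stopped; the silent reprompt
        # keeps the session open so the device keeps listening for a while.
        return (handler_input.response_builder
                .speak("Sorry, this skill is not available in your country." + SILENCE)
                .ask(SILENCE)
                .response)


sb = SkillBuilder()
sb.add_request_handler(HoroscopeIntentHandler())
handler = sb.lambda_handler()  # entry point when deployed as an AWS Lambda
```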

In response, both companies removed the apps and said they are changing their approval processes to prevent skills and actions from having similar capabilities in the future.

Mozilla is Sharing YouTube Horror Stories To Prod Google For More Transparency

Mozilla is publishing anecdotes of YouTube viewing gone awry — anonymous stories from people who say they innocently searched for one thing but eventually ended up in a dark rabbit hole of videos. It’s a campaign aimed at pressuring Google’s massive video site to make itself more accessible to independent researchers trying to study its algorithms. “The big problem is we have no idea what is happening on YouTube,” said Guillaume Chaslot, who is a fellow at Mozilla, a nonprofit best known for its unit that makes and operates the Firefox web browser.

Chaslot is an ex-Google engineer who has investigated YouTube’s recommendations from the outside after he left the company in 2013. (YouTube says he was fired for performance issues.) “We can see that there are problems, but we have no idea if the problem is from people being people or from algorithms,” he said….

Mozilla is publishing 28 stories it’s terming #YouTubeRegrets; they include, for example, an anecdote from someone who said a search for German folk songs ended up returning neo-Nazi clips, and a testimonial from a mother who said her 10-year-old daughter searched for tap-dancing videos and ended up watching extreme contortionist clips that affected her body image.

Voice From ‘Nest’ Camera Threatens to Steal Baby

Jack Newcombe, the Chief Operating Officer of a syndication company with 44 million daily readers, describes the strange voice he heard talking to his 18-month-old son:

She says we have a nice house and encourages the nanny to respond. She does not. The voice even jokes that she hopes we don’t change our password. I am sick to my stomach. After about five minutes of verbal “joy riding,” the voice starts to get agitated at the nanny’s lack of response and then snaps, in a very threatening voice: “I’m coming for the baby if you don’t answer me….” We unplug the cameras and change all passwords…

Still helpless, I started doing the only thing I could do — Googling. I typed “Nest + camera + hacked” and found out that this happens frequently. Parent after parent relayed stories similar to mine — threatening to steal a baby is shockingly common — and some much worse, such as playing pornography over the microphone to a 3-year-old… What is worse is that anyone could have been watching us at any time for as long as we have had the cameras up. This person just happened to use the microphone. Countless voyeurs could have been silently watching (or worse) for months.

However, what makes this issue even more terrifying is a corporate giant’s complete and utter lack of response. Nest is owned by Google, and, based on my experience and their public response, Google does not seem to care about this issue. They acknowledge it as a problem, shrug their shoulders and point their fingers at the users. Their party line is to remind people that the hardware was not hacked; it was the user’s fault for using a compromised password and not implementing two-step authentication, in which users receive a special code via text to sign on. That night, on my way home from work, I called Nest support and was on hold for an hour and eight minutes. I followed all directions and have subsequently received form emails in broken English. Nobody from Google has acknowledged the incident or responded with any semblance of empathy. In every email, they remind me of two-step authentication.

They act as if I am going to continue to use Nest cameras.

YouTube Gets Alleged Copyright Troll To Agree To Stop Trolling YouTubers

Alleged copyright troll Christopher Brady will no longer be able to issue false DMCA takedowns to other YouTubers, according to a lawsuit settlement filed today. The Verge reports:

Under the new agreement, Brady is banned from “submitting any notices of alleged copyright infringement to YouTube that misrepresent that material hosted on the YouTube service is infringing copyrights held or claimed to be held by Brady or anyone Brady claims to represent.” Brady agreed to pay $25,000 in damages as part of the settlement. He is also prohibited from “misrepresenting or masking their identities” when using Google products, including YouTube. “This settlement highlights the very real consequences for those that misuse our copyright system. We’ll continue our work to prevent abuse of our systems,” a YouTube spokesperson told The Verge.

“I, Christopher L. Brady, admit that I sent dozens of notices to YouTube falsely claiming that material uploaded by YouTube users infringed my copyrights,” he said in an apology, shared by YouTube with The Verge. “I apologize to the YouTube users that I directly impacted by my actions, to the YouTube community, and to YouTube itself.” YouTube said Brady’s misconduct caused the company to “expend substantial sums on its investigation in an effort to detect and halt that behavior, and to ensure that its users do not suffer adverse consequences from it.” YouTube also said that the company may be “unable to detect and prevent similar misconduct in the future,” as a result of the various methods Brady used to cover up his identity.

Google Chief: I’d Disclose Smart Speakers Before Guests Enter My Home

After being challenged as to whether homeowners should tell guests smart devices — such as a Google Nest speaker or Amazon Echo display — are in use before they enter the building, Google’s senior vice president of devices and services, Rick Osterloh, concludes that the answer is indeed yes. The BBC reports:

“Gosh, I haven’t thought about this before in quite this way,” Rick Osterloh begins. “It’s quite important for all these technologies to think about all users… we have to consider all stakeholders that might be in proximity.” And then he commits. “Does the owner of a home need to disclose to a guest? I would and do when someone enters into my home, and it’s probably something that the products themselves should try to indicate.”

To be fair to Google, it hasn’t completely ignored matters of 21st Century privacy etiquette until now. As Mr Osterloh points out, its Nest cameras shine an LED light when they are in record mode, which cannot be overridden. But the idea of having to run around a home unplugging or at least restricting the capabilities of all its voice- and camera-equipped kit if a visitor objects is quite the ask.

The concession came at the end of a one-on-one interview given to BBC News to mark the launch of Google’s Pixel 4 smartphones, a new Nest smart speaker and other products. You can read the full conversation in the BBC’s article.

YouTube is Experimenting With Ways To Make Its Algorithm Even More Addictive

While YouTube has publicly said that it’s working on addressing the problems that make its site so addictive to users, a new paper from Google, which owns YouTube, seems to tell a different story.

It proposes an update to the platform’s algorithm that is meant to recommend even more targeted content to users in the interest of increasing engagement. Here’s how YouTube’s recommendation system currently works. To populate the recommended-videos sidebar, it first compiles a shortlist of several hundred videos by finding ones that match the topic and other features of the one you are watching. Then it ranks the list according to the user’s preferences, which it learns by feeding all your clicks, likes, and other interactions into a machine-learning algorithm. Among the proposed updates, the researchers specifically target a problem they identify as “implicit bias.” It refers to the way recommendations themselves can affect user behavior, making it hard to decipher whether you clicked on a video because you liked it or because it was highly recommended. The effect is that over time, the system can push users further and further away from the videos they actually want to watch.

To reduce this bias, the researchers suggest a tweak to the algorithm: each time a user clicks on a video, it also factors in the video’s rank in the recommendation sidebar. Videos that are near the top of the sidebar are given less weight when fed into the machine-learning algorithm; videos deep down in the ranking, which require a user to scroll, are given more. When the researchers tested the changes live on YouTube, they found significantly more user engagement. Though the paper doesn’t say whether the new system will be deployed permanently, Guillaume Chaslot, an ex-YouTube engineer who now runs AlgoTransparency.org, said he was “pretty confident” that it would happen relatively quickly.
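For a rough sense of what that could look like in code, here is a toy sketch of rank-aware click weighting. It is an illustration under stated assumptions, not the method from the Google paper: the weighting function, class names, and numbers are invented, and the paper’s actual approach is more involved.

```python
# Toy sketch of rank-aware click weighting (illustrative, not YouTube's system).
# A click on a video shown near the top of the sidebar is partly explained by
# its prominent position, so it gets less weight as a training signal than a
# click on a video the user had to scroll down to reach.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class ClickEvent:
    video_id: str
    rank: int  # 1 = top of the recommendation sidebar


def position_weight(rank: int, strength: float = 0.5) -> float:
    """Assumed weighting: grows with displayed rank, so deep clicks count more."""
    return rank ** strength  # e.g. rank 1 -> 1.0, rank 16 -> 4.0


def weighted_training_examples(clicks: List[ClickEvent]) -> List[Tuple[str, float]]:
    """Turn raw clicks into (video_id, sample_weight) pairs for the ranker."""
    return [(c.video_id, position_weight(c.rank)) for c in clicks]


if __name__ == "__main__":
    clicks = [ClickEvent("top_pick", rank=1), ClickEvent("scrolled_to", rank=20)]
    for vid, weight in weighted_training_examples(clicks):
        print(f"{vid}: sample weight {weight:.2f}")
```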
