Archives 2019

Researchers Tricked Google Home and Alexa Into Eavesdropping and Password Phishing

What if Google and Amazon employees weren’t the only ones who’d listened through your voice assistant? Ars Technica reports:

The threat isn’t just theoretical. Whitehat hackers at Germany’s Security Research Labs developed eight apps — four Alexa “skills” and four Google Home “actions” — that all passed Amazon or Google security-vetting processes. The skills or actions posed as simple apps for checking horoscopes, with the exception of one, which masqueraded as a random-number generator. Behind the scenes, these “smart spies,” as the researchers call them, surreptitiously eavesdropped on users and phished for their passwords…

The apps gave the impression they were no longer running when, in fact, they silently waited for the next phase of the attack…. The eavesdropping apps quietly logged all conversations within earshot of the device and sent a copy to a developer-designated server. The phishing apps follow a slightly different path, responding with an error message that claims the skill or action isn’t available in that user’s country. They then go silent to give the impression the app is no longer running. After about a minute, the apps use a voice that mimics the ones used by Alexa and Google Home to falsely claim a device update is available and prompt the user for a password to install it….

In response, both companies removed the apps and said they are changing their approval processes to prevent skills and actions from having similar capabilities in the future.

Mozilla is Sharing YouTube Horror Stories To Prod Google For More Transparency

Mozilla is publishing anecdotes of YouTube viewing gone awry — anonymous stories from people who say they innocently searched for one thing but eventually ended up in a dark rabbit hole of videos. It’s a campaign aimed at pressuring Google’s massive video site to make itself more accessible to independent researchers trying to study its algorithms. “The big problem is we have no idea what is happening on YouTube,” said Guillaume Chaslot, who is a fellow at Mozilla, a nonprofit best known for its unit that makes and operates the Firefox web browser.

Chaslot is an ex-Google engineer who has investigated YouTube’s recommendations from the outside after he left the company in 2013. (YouTube says he was fired for performance issues.) “We can see that there are problems, but we have no idea if the problem is from people being people or from algorithms,” he said….

Mozilla is publishing 28 stories it’s terming #YouTubeRegrets; they include, for example, an anecdote from someone who said a search for German folk songs ended up returning neo-Nazi clips, and a testimonial from a mother who said her 10-year-old daughter searched for tap-dancing videos and ended up watching extreme contortionist clips that affected her body image.

Would You Trust Amazon To Run Free and Fair Elections?

While it does not handle voting on election day, AWS — along with a broad network of partners — now runs state and county election websites, stores voter registration rolls and ballot data, facilitates overseas voting by military personnel and helps provide live election-night results, according to company documents and interviews… Amazon pitches itself as a low-cost provider of secure election technology at a time when local officials and political campaigns are under intense pressure to prevent a repeat of the 2016 presidential election, which saw cyber-attacks on voting systems and election infrastructure.

Most security experts Reuters spoke to said that while Amazon’s cloud is likely much harder to hack than the systems it is replacing, putting data from many jurisdictions on a single system raises the prospect that a single major breach could prove damaging. “It makes Amazon a bigger target” for hackers, “and also increases the challenge of dealing with an insider attack,” said Chris Vickery, director of cyber risk research at cybersecurity startup UpGuard. A recent hack into Capital One Financial Corp’s data stored on Amazon’s cloud service was perpetrated by a former Amazon employee. The breach affected more than 100 million customers, underscoring how rogue employees or untrained workers can create security risks even if the underlying systems are secure…

Vickery uncovered at least three instances, all previously reported, where voter data on Amazon’s cloud servers was exposed to the internet. For example, in 2017, he found a Republican contractor’s database covering nearly every registered American voter hosted on AWS and exposed on the internet for 12 days. In 2016, he found that Mexico’s entire voter database, hosted on AWS servers, had been leaked. Amazon said the breaches were caused by customer errors, adding that while AWS secures the cloud infrastructure, customers are responsible for the security of what goes in the cloud.

Voice From ‘Nest’ Camera Threatens to Steal Baby

Jack Newcombe, the Chief Operating Officer of a syndication company with 44 million daily readers, describes the strange voice he heard talking to his 18-month-old son:
She says we have a nice house and encourages the nanny to respond. She does not. The voice even jokes that she hopes we don’t change our password. I am sick to my stomach. After about five minutes of verbal “joy riding,” the voice starts to get agitated at the nanny’s lack of response and then snaps, in a very threatening voice: “I’m coming for the baby if you don’t answer me….” We unplug the cameras and change all passwords…

Still helpless, I started doing the only thing I could do — Googling. I typed “Nest + camera + hacked” and found out that this happens frequently. Parent after parent relayed stories similar to mine — threatening to steal a baby is shockingly common — and some much worse, such as playing pornography over the microphone to a 3-year-old… What is worse is that anyone could have been watching us at any time for as long as we have had the cameras up. This person just happened to use the microphone. Countless voyeurs could have been silently watching (or worse) for months.

However, what makes this issue even more terrifying is a corporate giant’s complete and utter lack of response. Nest is owned by Google, and, based on my experience and their public response, Google does not seem to care about this issue. They acknowledge it as a problem, shrug their shoulders and point their fingers at the users. Their party line is to remind people that the hardware was not hacked; it was the user’s fault for using a compromised password and not implementing two-step authentication, in which users receive a special code via text to sign on. That night, on my way home from work, I called Nest support and was on hold for an hour and eight minutes. I followed all directions and have subsequently received form emails in broken English. Nobody from Google has acknowledged the incident or responded with any semblance of empathy. In every email, they remind me of two-step authentication.

They act as if I am going to continue to use Nest cameras.

Facing Unbearable Heat, Qatar Has Begun To Air-Condition the Outdoors

It was 116 degrees Fahrenheit in the shade outside the new Al Janoub soccer stadium, and the air felt to air-conditioning expert Saud Ghani as if God had pointed “a giant hair dryer” at Qatar. Yet inside the open-air stadium, a cool breeze was blowing. Beneath each of the 40,000 seats, small grates adorned with Arabic-style patterns were pushing out cool air at ankle level. And since cool air sinks, waves of it rolled gently down to the grassy playing field. Vents the size of soccer balls fed more cold air onto the field. Ghani, an engineering professor at Qatar University, designed the system at Al Janoub, one of eight stadiums that the tiny but fabulously rich Qatar must get in shape for the 2022 World Cup. His breakthrough realization was that he had to cool only people, not the upper reaches of the stadium — a graceful structure designed by the famed Zaha Hadid Architects and inspired by traditional boats known as dhows. “I don’t need to cool the birds,” Ghani said.

Qatar, the world’s leading exporter of liquefied natural gas, may be able to cool its stadiums, but it cannot cool the entire country. Fears that the hundreds of thousands of soccer fans might wilt or even die while shuttling between stadiums and metros and hotels in the unforgiving summer heat prompted the decision to delay the World Cup by five months. It is now scheduled for November, during Qatar’s milder winter. The change in the World Cup date is a symptom of a larger problem — climate change. Already one of the hottest places on Earth, Qatar has seen average temperatures rise more than 2 degrees Celsius (3.6 F) above preindustrial times, the current international goal for limiting the damage of global warming. The 2015 Paris climate summit said it would be better to keep temperatures “well below” that, ideally to no more than 1.5 degrees Celsius (2.7 F).

[…] To survive the summer heat, Qatar not only air-conditions its soccer stadiums, but also the outdoors — in markets, along sidewalks, even at outdoor malls so people can window shop with a cool breeze. “If you turn off air conditioners, it will be unbearable. You cannot function effectively,” says Yousef al-Horr, founder of the Gulf Organization for Research and Development. Yet outdoor air conditioning is part of a vicious cycle. Carbon emissions create global warming, which creates the desire for air conditioning, which creates the need for burning fuels that emit more carbon dioxide. In Qatar, total cooling capacity is expected to nearly double from 2016 to 2030, according to the International District Cooling & Heating Conference. And it’s going to get hotter.

Women, Not Democracy, Are the Main Victims of Deepfakes

While the 2020 U.S. presidential elections have lawmakers on edge over AI-generated fake videos, a new study by Netherlands-based deepfake-detection outfit Deeptrace shows that the main victims today are women. According to Deeptrace, deepfake videos have exploded in the past year, rising from 8,000 in December 2018 to 14,678 today. And not surprisingly for the internet, nearly all of the material is pornography, which accounts for 96% of the deepfake videos it’s found online. The fake videos have been viewed 134 million times.

The numbers suggest deepfake porn is still niche but also growing quickly. Additionally, 90% of the fake content depicted women from the U.S., UK, and Canada, while 2% represented women from South Korea and 2% depicted women from Taiwan. “Deepfake pornography is a phenomenon that exclusively targets and harms women,” the company notes. The small number of non-pornographic deepfake videos it analyzed on YouTube mostly (61%) contained synthesized male subjects. According to Henry Ajder, a researcher at Deeptrace, currently most of the deepfake porn involves famous women. But he reckons the threat to all women is likely to increase as it becomes less computationally expensive to create deepfakes. As for the political threat, there actually aren’t that many cases where deepfakes have changed a political outcome.

Facebook CEO Mark Zuckerberg Says He Fears ‘Erosion of Truth’ But Defends Allowing Politicians To Lie in Ads

Facebook chief executive Mark Zuckerberg said in an interview he worries “about an erosion of truth” online but defended the policy that allows politicians to peddle ads containing misrepresentations and lies on his social network, a stance that has sparked an outcry during the 2020 presidential campaign. From a report:

“People worry, and I worry deeply, too, about an erosion of truth,” Zuckerberg told The Washington Post ahead of a speech Thursday at Georgetown University. “At the same time, I don’t think people want to live in a world where you can only say things that tech companies decide are 100 percent true. And I think that those tensions are something we have to live with.” Zuckerberg’s approach to political speech has come under fire in recent weeks. Democrats have taken particular issue with Facebook’s decision to allow an ad from President Trump’s 2020 campaign that included falsehoods about former vice president Joe Biden and his son, Hunter. Sen. Elizabeth Warren responded to Facebook’s decision by running her own campaign ad, satirically stating that Zuckerberg supports Trump for re-election.

Zuckerberg framed the issue as part of a broader debate over free expression, warning about the dangers of social networks, including Facebook, “potentially cracking down too much.” He called on the U.S. to set an example for tailored regulation in contrast to other countries, including China, that censor political speech online. And Zuckerberg stressed Facebook must stand strong against governments that seek to “pull back” on free speech in the face of heightened social and political tensions. Zuckerberg’s appearance in Washington marks his most forceful attempt to articulate his vision for how governments and tech giants should approach the Web’s most intractable problems. The scale of Facebook and its affiliated apps, Instagram and WhatsApp, which make up a virtual community of billions of users, poses challenges for Zuckerberg and regulators around the world as they struggle to contain hate speech, falsehoods, violent imagery and terrorist propaganda on social media.

YouTube Gets Alleged Copyright Troll To Agree To Stop Trolling YouTubers

Alleged copyright troll Christopher Brady will no longer be able to issue false DMCA takedowns to other YouTubers, according to a lawsuit settlement filed today. The Verge reports:

Under the new agreement, Brady is banned from “submitting any notices of alleged copyright infringement to YouTube that misrepresent that material hosted on the YouTube service is infringing copyrights held or claimed to be held by Brady or anyone Brady claims to represent.” Brady agreed to pay $25,000 in damages as part of the settlement. He is also prohibited from “misrepresenting or masking their identities” when using Google products, including YouTube. “This settlement highlights the very real consequences for those that misuse our copyright system. We’ll continue our work to prevent abuse of our systems,” a YouTube spokesperson told The Verge.

“I, Christopher L. Brady, admit that I sent dozens of notices to YouTube falsely claiming that material uploaded by YouTube users infringed my copyrights,” he said in an apology, shared by YouTube with The Verge. “I apologize to the YouTube users that I directly impacted by my actions, to the YouTube community, and to YouTube itself.” YouTube claimed the investigation caused the company to “expend substantial sums on its investigation in an effort to detect and halt that behavior, and to ensure that its users do not suffer adverse consequences from it.” YouTube also said that the company may be “unable to detect and prevent similar misconduct in the future,” as a result of the various methods Brady took to cover up his identity.

Google Chief: I’d Disclose Smart Speakers Before Guests Enter My Home

After being challenged as to whether homeowners should tell guests smart devices — such as a Google Nest speaker or Amazon Echo display — are in use before they enter the building, Google senior vice president of devices and services, Rick Osterloh, concludes that the answer is indeed yes. The BBC reports:

“Gosh, I haven’t thought about this before in quite this way,” Rick Osterloh begins. “It’s quite important for all these technologies to think about all users… we have to consider all stakeholders that might be in proximity.” And then he commits. “Does the owner of a home need to disclose to a guest? I would and do when someone enters into my home, and it’s probably something that the products themselves should try to indicate.”

To be fair to Google, it hasn’t completely ignored matters of 21st Century privacy etiquette until now. As Mr Osterloh points out, its Nest cameras shine an LED light when they are in record mode, which cannot be overridden. But the idea of having to run around a home unplugging or at least restricting the capabilities of all its voice- and camera-equipped kit if a visitor objects is quite the ask.

The concession came at the end of one-on-one interview given to BBC News to mark the launch of Google’s Pixel 4 smartphones, a new Nest smart speaker and other products. You can read the full conversation on the BBC’s article.

Children ‘Interested in’ Gambling and Alcohol, According To Facebook

The social network’s advertising tools reveal 740,000 children under the age of 18 are flagged as being interested in gambling, including 130,000 in the UK. Some 940,000 minors — 150,000 of whom are British — are flagged as being interested in alcoholic beverages. These “interests” are automatically generated by Facebook, based on what it has learned about a user by monitoring their activity on the social network. Advertisers can then use them to specifically target messages to subgroups who have been flagged as interested in the topic. In a statement, Facebook said: “We don’t allow ads that promote the sale of alcohol or gambling to minors on Facebook and we enforce against this activity when we find it. We also work closely with regulators to provide guidance for marketers to help them reach their audiences effectively and responsibly.” The company does allow advertisers to specifically target messages to children based on their interest in alcohol or gambling. A Facebook insider gave the example of an anti-gambling service that may want to reach out to children who potentially have a gambling problem and offer them help and support.

Smart TVs Are Data-Collecting Machines, New Study Shows

A new study from Princeton University shows internet-connected TVs, which allow people to stream Netflix and Hulu, are loaded with data-hungry trackers. “If you use a device such as Roku and Amazon Fire TV, there are numerous companies that can build up a fairly comprehensive picture of what you’re watching,” Arvind Narayanan, associate professor of computer science at Princeton, wrote in an email to The Verge. “There’s very little oversight or awareness of their practices, including where that data is being sold.” From the report:
To understand how much surveillance is taking place on smart TVs, Narayanan and his co-author Hooman Mohajeri Moghaddam built a bot that automatically installed thousands of channels on their Roku and Amazon Fire TVs. It then mimicked human behavior by browsing and watching videos. As soon as it ran into an ad, it would track what data was being collected behind the scenes. Some of the information, like device type, city, and state, is hardly unique to one user. But other data, like the device serial number, Wi-Fi network, and advertising ID, could be used to pinpoint an individual. “This gives them a more complete picture of who you are,” said Moghaddam. He noted that some channels even sent unencrypted email addresses and video titles to the trackers.

In total, the study found trackers on 69 percent of Roku channels and 89 percent of Amazon Fire channels. “Some of these are well known, such as Google, while many others are relatively obscure companies that most of us have never heard of,” Narayanan said. Google’s ad service DoubleClick was found on 97 percent of Roku channels. “Like other publishers, smart TV app developers can use Google’s ad services to show ads against their content, and we’ve helped design industry guidelines for this that enable a privacy-safe experience for users,” a Google spokesperson said in a statement emailed to The Verge. “Depending on the user’s preferences, the developer may share data with Google that’s similar to data used for ads in mobile apps or on the web.”
“Better privacy controls would certainly help, but they are ultimately band-aids,” Narayanan said. “The business model of targeted advertising on TVs is incompatible with privacy, and we need to confront that reality. To maximize revenue, platforms based on ad targeting will likely turn to data mining and algorithmic personalization/persuasion to keep people glued to the screen as long as possible.”
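As a rough illustration of the study’s approach (not the authors’ actual pipeline), traffic captured while a channel plays can be flagged by matching each contacted hostname against a list of known tracker domains. The tracker list and log format below are invented for the example.

```python
# Hypothetical sketch of tracker detection in smart-TV traffic logs.
# The tracker list and hostnames are illustrative assumptions, not
# the Princeton study's actual data or code.

KNOWN_TRACKERS = {"doubleclick.net", "scorecardresearch.com"}

def tracker_domains(requests):
    """Return the set of known tracker domains contacted in a session.

    `requests` is an iterable of hostnames observed while a channel
    played (e.g. extracted from a network capture).
    """
    found = set()
    for host in requests:
        # Match on the registered domain, so "ads.doubleclick.net"
        # counts as a hit for "doubleclick.net".
        parts = host.lower().split(".")
        for i in range(len(parts) - 1):
            candidate = ".".join(parts[i:])
            if candidate in KNOWN_TRACKERS:
                found.add(candidate)
    return found

session = ["cdn.example-channel.tv", "ads.doubleclick.net", "api.roku.com"]
print(tracker_domains(session))  # {'doubleclick.net'}
```

Counting how many of a platform’s channels produce a non-empty set like this is, in spirit, how per-platform figures such as “69 percent of Roku channels” can be derived.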

Another study from Northeastern University and the Imperial College of London found that other smart-home devices are also collecting reams of data that is being sent to third parties like advertisers and major tech companies.

Her iPhone Died. It Led To Her Being Charged As a Criminal

Chris Matyszczyk from ZDNet retells the draconian story of a Financial Times writer who wasn’t able to prove she had purchased a ticket for the London buses because her phone died (she had paid with Apple Pay), which led to her being charged as a criminal. An anonymous reader shares an excerpt from the report:

Today’s witness is Jemima Kelly. She’s a writer for The Financial Times. Please don’t let any personal thoughts about that get in the way of her story. You see, she just experienced a little technological nightmare. A cheery digital convert, she admits she often leaves the house without her wallet. But surely not without her iPhone. Apple Pay is, after all, a contemporary joy. It’s right up there with Tinder in its ability to make your life easier.

Kelly, indeed, hops on London buses and uses Apple Pay to tap her payment instead of buying a ticket the old-fashioned way. Which, as she cheerily described, is easy unless a ticket inspector wanders by. Just after your iPhone’s battery has died. She couldn’t prove that she’d paid, but gave her personal details and assumed there’d be a record of her probity on the transportation company’s computers. But then she was charged with, well, not providing proof of payment. Charged as in would be forced to go to court and to plead guilty or not guilty within 21 days. Here’s where things got (more) awkward. Kelly produced a bank statement that proved she’d paid. The transportation company — Transport For London — insisted this wasn’t enough.

It seems she’d failed another digital task — registering her Apple Pay with Transport For London. She was edging ever closer to criminal status. But did her Apple Pay details need to be registered? Kelly revealed: “They told me, ‘there is no requirement for cards to be registered, the same as paying for any goods and services in a shop’. But it’s not the same, actually; in a shop, you are given a breakdown in the form of a receipt.” So, here she was, contactless and receiptless. Next, she heard that her court case had happened and she’d been found guilty. Oh, and she also owed a fine of around $592.
In the end, Kelly managed to get back to court and persuade the judge to void her conviction, but the process took months.

“Her story, however, aptly describes how the digital world demands our complete and unyielding participation,” writes Matyszczyk. “Digital systems are designed by those who strive for complete perfection and consistency. Which doesn’t describe the human condition at all.” Do you think digitizing everything is a good thing?

Amazon Workers May Be Watching Your Cloud Cam Home Footage

In a promotional video, Amazon says its Cloud Cam home security camera provides “everything you need to monitor your home, day or night.” In fact, the artificially intelligent device requires help from a squad of invisible employees. Dozens of Amazon workers based in India and Romania review select clips captured by Cloud Cam, according to five people who have worked on the program or have direct knowledge of it. Those video snippets are then used to train the AI algorithms to do a better job distinguishing between a real threat (a home invader) and a false alarm (the cat jumping on the sofa). An Amazon team also transcribes and annotates commands recorded in customers’ homes by the company’s Alexa digital assistant, Bloomberg reported in April.

AI has made it possible to talk to your phone. It’s helping investors predict shifts in market sentiment. But the technology is far from infallible. Cloud Cam sends out alerts when it’s just paper rustling in a breeze. Apple’s Siri and Amazon’s Alexa still occasionally mishear commands. One day, engineers may overcome these shortfalls, but for now AI needs human assistance. Lots of it. At one point, on a typical day, some Amazon auditors were each annotating about 150 video recordings, which were typically 20 to 30 seconds long, according to the people, who requested anonymity to talk about an internal program.

Cambridge Analytica Whistleblower: US Heading In ‘Same Direction As China’ With Online Privacy

“The United States is walking in the same direction as China, we’re just allowing private companies to monetize left, right and center,” Cambridge Analytica whistleblower Christopher Wylie told CNBC on Wednesday. “Just because it’s not the state doesn’t mean that there isn’t harmful impacts that could come if you have one or two large companies monitoring or tracking everything you do,” he said. CNBC reports:

Wylie, whose memoir came out this week, has become outspoken about the influence of social media companies due to the large amounts of data they collect. In March 2018, he exposed the Cambridge Analytica scandal that brought down his former employer and resulted in the Federal Trade Commission fining Facebook $5 billion, 15 months later, for mishandling user data. While Cambridge Analytica has since shut down, Wylie said the tactics it used could be deployed elsewhere, and that is why data privacy regulation needs to be dramatically enhanced.

“Even if the company has dissolved, the capabilities of the company haven’t,” he said. “My real concern is what happens if China becomes the next Cambridge Analytica, what happens if North Korea becomes the next Cambridge Analytica?” Wylie also said he believes that social media companies should, at a minimum, face regulation similar to water utilities or electrical companies — “certain industries that have become so important because of their vital importance to business and people’s lives and the nature of their scale.” In those cases, “we put in place rules that put consumers first,” he added. “You can still make a profit. You can still make money. But you have to consider the rights and safety of people.”

Twitter Took Phone Numbers for Security and Used Them for Advertising

When some users provided Twitter with their phone number to make their account more secure, Twitter used that information for advertising purposes, the company said today.

This isn’t the first time that a large social network has taken information explicitly meant for the purposes of security, and then quietly or accidentally used it for something else entirely. Facebook did something similar with phone numbers provided by users for two-factor authentication, the company confirmed last year. “We recently discovered that when you provided an email address or phone number for safety or security purposes (for example, two-factor authentication) this data may have inadvertently been used for advertising purposes, specifically in our Tailored Audiences and Partner Audiences advertising system,” Twitter’s announcement reads. In short, when an advertiser using Twitter uploaded their own marketing list of email addresses or phone numbers, Twitter may have matched the list to people on Twitter “based on the email or phone number the Twitter account holder provided for safety and security purposes,” the post adds.
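The matching step Twitter describes can be sketched as a simple set intersection: an advertiser’s uploaded contacts are normalized (and, in real systems, typically hashed) and compared against the contact info on file for each account. Everything below is an invented illustration; none of these names or formats come from Twitter.

```python
# Illustrative sketch of audience matching. Per the report, the
# contact on file may have been supplied for 2FA, not for ads.
# All identifiers and data here are hypothetical.

import hashlib

def normalize(contact):
    return contact.strip().lower()

def hash_contact(contact):
    # Real ad systems commonly exchange hashed identifiers rather
    # than raw emails/phone numbers.
    return hashlib.sha256(normalize(contact).encode()).hexdigest()

def match_audience(uploaded_list, accounts):
    """Return account IDs whose stored contact matches the uploaded list.

    `accounts` maps account_id -> contact string (email or phone).
    """
    uploaded_hashes = {hash_contact(c) for c in uploaded_list}
    return {aid for aid, contact in accounts.items()
            if hash_contact(contact) in uploaded_hashes}

accounts = {"u1": "+15551234567", "u2": "person@example.com"}
print(match_audience(["person@example.com"], accounts))  # {'u2'}
```

The privacy failure in both the Twitter and Facebook cases is not the matching mechanism itself but the provenance of the right-hand side: numbers collected for security ended up in the pool being matched.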

FBI’s Use of Surveillance Database Violated Americans’ Privacy Rights: Court

Some of the Federal Bureau of Investigation’s electronic surveillance activities violated the constitutional privacy rights of Americans swept up in a controversial foreign intelligence program, a secretive surveillance court has ruled. The ruling deals a rare rebuke to U.S. spying activities that have generally withstood legal challenge or review. The intelligence community disclosed Tuesday that the Foreign Intelligence Surveillance Court last year found that the FBI’s pursuit of data about Americans ensnared in a warrantless internet-surveillance program intended to target foreign suspects may have violated the law authorizing the program, as well as the Constitution’s Fourth Amendment protections against unreasonable searches.

The court concluded that the FBI had been improperly searching a database of raw intelligence for information on Americans — raising concerns about oversight of the program, which as a spy program operates in near total secrecy. The court ruling identifies tens of thousands of searches of raw intelligence databases by the bureau in 2017 and 2018 that it deemed improper, in part because they involved data related to tens of thousands of emails or telephone numbers — in one case suggesting that the FBI was using the intelligence information to vet its personnel and cooperating sources. Federal law requires that the database only be searched by the FBI as part of seeking evidence of a crime or for foreign intelligence information. In other cases, the court ruling reveals improper use of the database by individuals. In one case, an FBI contractor ran a query of an intelligence database — searching for information on himself, other FBI personnel and his relatives, the court revealed.

Twitter Executive Is Also A British Army ‘Psyops’ Soldier

“The senior Twitter executive with editorial responsibility for the Middle East is also a part-time officer in the British Army’s psychological warfare unit,” reports Middle East Eye:
The 77th Brigade uses social media platforms such as Twitter, Instagram and Facebook, as well as podcasts, data analysis and audience research to wage what the head of the UK military, General Nick Carter, describes as “information warfare”. Carter says the 77th Brigade is giving the British military “the capability to compete in the war of narratives at the tactical level”; to shape perceptions of conflict. Some soldiers who have served with the unit say they have been engaged in operations intended to change the behaviour of target audiences.

What exactly MacMillan is doing with the unit is difficult to determine, however: he has declined to answer any questions about his role, as have Twitter and the UK’s Ministry of Defence (MoD).

Responding to the article, the British army told Newsweek their brigade had no relationship with Twitter, “other than using Twitter as one of many social media platforms for engagement and communication.” And Twitter reminded Newsweek that “We proactively publish all tweets and accounts relating to state-backed foreign information operations on the service — regardless of the source. We built this industry-leading archive to promote better public understanding of these threats.”
Despite the assertions of Twitter and the British military, academics and researchers have said the division between the two is not so clear. David Miller — a professor of political sociology in the School for Policy Studies at England’s University of Bristol, who studies propaganda and public relations efforts concerning the British government — is one such academic. He told Newsweek he believes a link is there, adding that it was a “threat to our democracy.”

“I would say I know a good amount about army propaganda and ‘psyops’ operations, as they’re called, but what is interesting is how little information we have about the 77th Brigade,” he said. “I suppose it means that all their work is covert, but what I would like to know is: what exactly are they doing? Are they just tracking accounts, or are they trying to influence people’s views? What we do know is that their account itself is hidden by Twitter, so we assume they are trying to influence people under the operatives’ own names. And because we know so little about exactly what they’re doing, we have to look elsewhere for clues.”

The professor also argues that Twitter is deceiving us by “not acting as transparently as it could. If they are working with army personnel in this way, it is extremely damaging to our democracy. Given Twitter’s closure of accounts alleged to be used by foreign governments, it’s a very hypocritical stance of Twitter to take.”

YouTube is Experimenting With Ways To Make Its Algorithm Even More Addictive

While YouTube has publicly said it is working to address the problems that make its site so addictive to users, a new paper from Google, which owns YouTube, seems to tell a different story.

It proposes an update to the platform’s algorithm that is meant to recommend even more targeted content to users in the interest of increasing engagement.

Here’s how YouTube’s recommendation system currently works. To populate the recommended-videos sidebar, it first compiles a shortlist of several hundred videos by finding ones that match the topic and other features of the one you are watching. Then it ranks the list according to the user’s preferences, which it learns by feeding all your clicks, likes, and other interactions into a machine-learning algorithm.

Among the proposed updates, the researchers specifically target a problem they identify as “implicit bias.” It refers to the way recommendations themselves can affect user behavior, making it hard to decipher whether you clicked on a video because you liked it or because it was highly recommended. The effect is that over time, the system can push users further and further away from the videos they actually want to watch.
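The two-stage pipeline described above — candidate generation, then personalized ranking — can be sketched in a few lines. Everything here is invented for illustration (toy tag-overlap scoring, a tiny catalog); YouTube’s real system uses large learned models, not these heuristics.

```python
# Toy two-stage recommender: candidate generation, then ranking.
# Scoring functions and data are hypothetical stand-ins.

def topic_similarity(video, watched):
    """Stage 1 signal: overlap between topic tags of two videos."""
    return len(video["tags"] & watched["tags"])

def preference_score(video, user_clicks):
    """Stage 2 signal: how strongly the user's click history favors
    the tags on this video (stand-in for a learned preference model)."""
    return sum(user_clicks.get(tag, 0) for tag in video["tags"])

def recommend(watched, catalog, user_clicks, shortlist_size=2):
    # Candidate generation: shortlist videos similar to the one playing.
    shortlist = sorted(catalog,
                       key=lambda v: topic_similarity(v, watched),
                       reverse=True)[:shortlist_size]
    # Ranking: reorder the shortlist by the user's inferred preferences.
    return sorted(shortlist,
                  key=lambda v: preference_score(v, user_clicks),
                  reverse=True)

catalog = [
    {"id": "a", "tags": {"cats", "funny"}},
    {"id": "b", "tags": {"cats", "science"}},
    {"id": "c", "tags": {"politics"}},
]
watched = {"id": "x", "tags": {"cats"}}
user_clicks = {"science": 5, "funny": 1}

print([v["id"] for v in recommend(watched, catalog, user_clicks)])
# → ['b', 'a']: both cat videos make the shortlist, but the user's
# history with science videos pushes "b" to the top.
```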

To reduce this bias, the researchers suggest a tweak to the algorithm: each time a user clicks on a video, it also factors in the video’s rank in the recommendation sidebar. Videos that are near the top of the sidebar are given less weight when fed into the machine-learning algorithm; videos deep down in the ranking, which require a user to scroll, are given more. When the researchers tested the changes live on YouTube, they found significantly more user engagement. Though the paper doesn’t say whether the new system will be deployed permanently, Guillaume Chaslot, an ex-YouTube engineer who now runs AlgoTransparency.org, said he was “pretty confident” that it would happen relatively quickly.
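The rank-aware weighting amounts to a form of inverse-propensity correction: a click near the top of the sidebar is partly explained by position, so it carries less training weight than a click that required scrolling. A minimal sketch, with the caveat that the decay curve and its parameter below are made up — the paper learns the position effect from data rather than fixing a formula:

```python
# Hypothetical examination model: how likely a user is to even see
# a video at a given sidebar rank (rank 0 = top of the sidebar).
def examination_probability(rank, decay=0.7):
    return 1.0 / (rank + 1) ** decay

# Inverse-propensity training weight: down-weight clicks on highly
# ranked videos, up-weight clicks deep in the list.
def training_weight(rank):
    return 1.0 / examination_probability(rank)

print(training_weight(0))   # click at the top: weight 1.0
print(training_weight(9))   # click ten slots down: weight ~5.0
```

The design intuition is that a deep click is a stronger signal of genuine interest, so letting it dominate training should pull recommendations back toward what users actually want rather than what the system last promoted.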

Optic Nerve: millions of Yahoo webcam images intercepted by GCHQ

Optic Nerve is a mass surveillance programme run by the British signals intelligence agency Government Communications Headquarters (GCHQ), with help from the US National Security Agency, that surreptitiously collects private webcam still images from users while they are using a Yahoo! webcam application. As an example of the scale, in one 6-month period, the programme is reported to have collected images from 1.8 million Yahoo! user accounts globally. The programme was first reported on in the media in February 2014, from documents leaked by the former National Security Agency contractor Edward Snowden, but dates back to a prototype started in 2008, and was still active in at least 2012.[1][2]

The leaked documents describe the users under surveillance as “unselected”, meaning that data was collected indiscriminately in bulk from users regardless of whether they were an intelligence target or not. The vast majority of affected users would have been completely innocent of any crime or suspicion of a crime.

Researchers Easily Breached Voting Machines For the 2020 Election

The voting machines that the U.S. will use in the 2020 election are still vulnerable to hacks. A group of ethical hackers tested a bunch of those voting machines and election systems (most of which they bought on eBay). They were able to crack into every machine, The Washington Post reports. Their tests took place this summer at the Def Con cybersecurity conference, but the group visited Washington to share their findings yesterday. A number of flaws allowed the hackers to access the machines, including weak default passwords and shoddy encryption. The group says the machines could be hacked by anyone with physical access to them, and that if poll workers make mistakes or take shortcuts, the machines could also be infiltrated by remote hackers.