Resources

Skype Audio Graded by Workers in China With ‘No Security Measures’

A Microsoft program to transcribe and vet audio from Skype and Cortana, its voice assistant, ran for years with “no security measures,” according to a former contractor who says he reviewed thousands of potentially sensitive recordings on his personal laptop from his home in Beijing over the two years he worked for the company.

The recordings, from both deliberate and accidental activations of the voice assistant, as well as some Skype phone calls, were simply accessed by Microsoft workers through a web app running in Google’s Chrome browser, on their personal laptops, over the Chinese internet, according to the contractor. Workers had no cybersecurity help to protect the data from criminal or state interference, and were even instructed to do the work using new Microsoft accounts, all with the same password for ease of management, the former contractor said. Employee vetting was practically nonexistent, he added.

“There were no security measures, I don’t even remember them doing proper KYC [know your customer] on me. I think they just took my Chinese bank account details,” he told the Guardian. While the grader began by working in an office, he said the contractor that employed him “after a while allowed me to do it from home in Beijing. I judged British English (because I’m British), so I listened to people who had their Microsoft device set to British English, and I had access to all of this from my home laptop with a simple username and password login.” Both username and password were emailed to new contractors in plaintext, he said, with the former following a simple schema and the latter being the same for every employee who joined in any given year.

Earth Is Hotter Than at Any Time Since Steam Engine Was Invented

The last five years on Earth have been hotter than at any time since the industrial revolution kicked off almost two centuries ago.

That’s the conclusion of Europe’s Copernicus Climate Change Service, which published data on Wednesday showing that global average temperatures since 2015 were some 1.2 degrees Celsius (2.2 degrees Fahrenheit) higher than when steam engines began powering industry. Last year was the second warmest on record after 2016.

2019 was Europe’s warmest year on record, marginally warmer than 2014, 2015 and 2018. Global average temperatures in 2019 were 0.6 degrees Celsius warmer than the 1981 to 2010 average. Atmospheric carbon dioxide concentration increased by about 2.3 parts per million in 2019, the second-highest annual increase on record.

Ring Fired Employees for Watching Customer Videos

Amazon-owned home security camera company Ring has fired employees for improperly accessing Ring users’ video data, Motherboard reported Wednesday, citing a letter the company wrote to senators. The news highlights a risk across many different tech companies: employees may abuse access granted as part of their jobs to look at customer data or information. In Ring’s case, though, this data can be particularly sensitive, as customers often put the cameras inside their homes. “We are aware of incidents discussed below where employees violated our policies,” the letter from Ring, dated January 6th, reads. “Over the last four years, Ring has received four complaints or inquiries regarding a team member’s access to Ring video data,” it continues. Ring explains that although each of these people was authorized to view video data, their attempted access went beyond what they needed for their jobs.

Companies Are Using AI-Generated People To Appear More “Diverse”

AI startups are selling images of computer-generated faces that look like the real thing, offering companies a chance to create imaginary models and “increase diversity” in their ads without needing human beings. One firm is offering to sell diverse photos for marketing brochures and has already signed up clients, including a dating app that intends to use the images in a chatbot. Another company says it’s moving past AI-generated headshots and into the generation of full, fake human bodies as early as this month. The AI software used to create such faces is freely available and improving rapidly, allowing small start-ups to easily create fakes that are so convincing they can fool the human eye. The systems train on massive databases of actual faces, then attempt to replicate their features in new designs. But AI experts worry that the fakes will empower a new generation of scammers, bots and spies, who could use the photos to build imaginary online personas, mask bias in hiring and damage efforts to bring diversity to industries. The fact that such software now has a business model could also fuel a greater erosion of trust across an Internet already under assault by disinformation campaigns, “deepfake” videos and other deceptive techniques.

Xiaomi Camera Feed is Showing Random Homes on a Google Nest Hub, Including Still Images of Sleeping People

So-called “smart” security cameras have had some pretty dumb security problems recently, but a recent report regarding a Xiaomi Mijia camera linked to a Google Home is especially disturbing. One Xiaomi Mijia camera owner is getting still images from other random people’s homes when trying to stream content from his camera to a Google Nest Hub. The images include stills of people sleeping (even an infant in a cradle) inside their own homes. This issue was first reported by user /r/Dio-V on Reddit and affects his Xiaomi Mijia 1080p Smart IP Security Camera, which can be linked to a Google account for use with Google/Nest devices through Xiaomi’s Mi Home app/service. It isn’t clear when Dio-V’s feed first began showing these still images of random homes or how long the camera was connected to his account before this started happening. He does state that both the Nest Hub and the camera were purchased new. The camera was noted as running firmware version 3.5.1_00.66.

AI is Sending People To Jail–and Getting it Wrong

The US imprisons more people than any other country in the world. At the end of 2016, nearly 2.2 million adults were being held in prisons or jails, and an additional 4.5 million were in other correctional facilities. Put another way, 1 in 38 adult Americans was under some form of correctional supervision. The nightmarishness of this situation is one of the few issues that unite politicians on both sides of the aisle.

Under immense pressure to reduce prison numbers without risking a rise in crime, courtrooms across the US have turned to automated tools in attempts to shuffle defendants through the legal system as efficiently and safely as possible. This is where the AI part of our story begins. Police departments use predictive algorithms to strategize about where to send their ranks. Law enforcement agencies use face recognition systems to help identify suspects. These practices have garnered well-deserved scrutiny for whether they in fact improve safety or simply perpetuate existing inequities.

Researchers and civil rights advocates, for example, have repeatedly demonstrated that face recognition systems can fail spectacularly, particularly for dark-skinned individuals — even mistaking members of Congress for convicted criminals. But the most controversial tool by far comes after police have made an arrest. Say hello to criminal risk assessment algorithms.

Samsung Chip Output at South Korea Plant Partly Halted Due To 1-Minute Electricity Glitch

A 1-minute power glitch on Tuesday, December 31, partially shut down Samsung chip production at its Hwaseong chip complex in South Korea, with the affected lines expected to take “two or three days” to return to normal output. DRAM and NAND lines were affected. Preliminary inspections show “no major damage,” but losses are still expected to be in the millions.

Internet shutdowns used to be rare. They’re increasingly becoming the norm in much of the world

An ongoing internet blackout in Indian-controlled Kashmir is now the longest ever in a democracy — at more than 135 days — according to Access Now, an advocacy group that tracks internet freedom. Only the autocratic governments of China and junta-era Myanmar have cut off access for longer… Kashmiris have been without internet access for so long that WhatsApp has reportedly begun deleting their accounts for inactivity… India’s increased internet censorship has been greeted with delight in China, however, where state-run media pointed to it as an endorsement of Beijing’s own authoritarian approach. The People’s Daily said this week that India’s example showed “shutting down the internet in a state of emergency should be standard practice for sovereign countries….”

African states have also embraced the tactic, with Zimbabwe, the Democratic Republic of Congo, Chad and Ethiopia all cutting off internet access in an attempt to rein in anti-government protests. This is in line with a general pattern of increased online censorship. It’s partly due to the spread of more sophisticated technology that makes it easier, and cheaper, to monitor and filter traffic online. It’s also influenced by a shifting perception of internet censorship, which once used to be seen as something of a losing battle. China’s Great Firewall, however, has proved beyond doubt that not only can the internet be controlled, but that doing so can help prop up the regime and prevent opposition movements from getting off the ground…

Shutdowns give police a freer hand to rein in unrest without the type of hyper-scrutiny on social media that has become common in highly connected societies, and enable the government to ensure that its message is the only one heard on a particular topic. In 2018, there were 196 internet shutdowns globally — mainly in Asia, Africa and the Middle East — according to Access Now. In the first half of 2019 alone, there were 128, and the year looks to be the worst on record. According to Freedom House, a Washington-based NGO, almost half of the world’s population lives in a country “where authorities disconnected internet or mobile networks, often for political reasons.”

Amazon is looking into tech that can identify you using the veins in your hand

Amazon filed a patent for technology that could identify you by scanning the wrinkles in the palm of your hand and by using a light to see beneath your skin to your blood vessels. The resulting images could be used to identify you as a shopper at Amazon Go stores. It was previously reported that the Seattle-based tech giant might install these hi-tech scanners in Whole Foods grocery stores. However, the U.S. Patent and Trademark Office published an application on Thursday that suggests the e-commerce behemoth has set its sights on Amazon Go stores…

While fingerprint scanners have been around for years, Amazon hopes to innovate by developing a personal identification system that you don’t have to touch. Imagine hovering your hand in front of an infrared light as a camera snaps two images — one from the surface, and one that looks for “deeper characteristics such as veins.” An internal computer system would then identify you based on that information.

‘I Asked My Students To Turn In Their Cellphones and Write About Living Without Them’

Ron Srigley, a writer who teaches at Humber College and Laurentian University, offered his students extra credit if they would give him their phones for nine days and write about living without them. “What they wrote was remarkable, and remarkably consistent,” he writes. “These university students, given the chance to say what they felt, didn’t gracefully submit to the tech industry and its devices.”

“Believe it or not, I had to walk up to a stranger and ask what time it was. It honestly took me a lot of guts and confidence to ask someone,” Janet wrote. (Her name, like the others here, is a pseudonym.) She describes the attitude she was up against: “Why do you need to ask me the time? Everyone has a cell phone. You must be weird or something.”

Emily went even further. Simply walking by strangers “in the hallway or when I passed them on the street” caused almost all of them to take out a phone “right before I could gain eye contact with them.”

To these young people, direct, unmediated human contact was experienced as ill-mannered at best and strange at worst.

James: “One of the worst and most common things people do nowadays is pull out their cell phone and use it while in a face-to-face conversation. This action is very rude and unacceptable, but yet again, I find myself guilty of this sometimes because it is the norm.” Emily noticed that “a lot of people used their cell phones when they felt they were in an awkward situation, for an example [sic] being at a party while no one was speaking to them.” The price of this protection from awkward moments is the loss of human relationships, a consequence that almost all the students identified and lamented. Without his phone, James said, he found himself forced to look others in the eye and engage in conversation. Stewart put a moral spin on it. “Being forced to have [real relations with people] obviously made me a better person because each time it happened I learned how to deal with the situation better, other than sticking my face in a phone.” Ten of the 12 students said their phones were compromising their ability to have such relationships.

Peter: “I have to admit, it was pretty nice without the phone all week. Didn’t have to hear the fucking thing ring or vibrate once, and didn’t feel bad not answering phone calls because there were none to ignore.” “It felt so free without one and it was nice knowing no one could bother me when I didn’t want to be bothered,” wrote William.

Emily said that she found herself “sleeping more peacefully after the first two nights of attempting to sleep right away when the lights got shut off.”

Stewart: “Actually I got things done much quicker without the cell because instead of waiting for a response from someone (that you don’t even know if they read your message or not) you just called them [from a land line], either got an answer or didn’t, and moved on to the next thing.”

“My students’ experience of cell phones and the social-media platforms they support may not be exhaustive, or statistically representative. But it is clear that these gadgets made them feel less alive, less connected to other people and to the world, and less productive. They also made many tasks more difficult and encouraged students to act in ways they considered unworthy of themselves. In other words, phones didn’t help them. They harmed them.”

Tina’s concluding remarks described it well: “Without cell phones life would be simple and real but we may not be able to cope with the world and our society. After a few days I felt alright without the phone as I got used to it. But I guess it is only fine if it is for a short period of time. One cannot hope to compete efficiently in life without a convenient source of communication that is our phones.” Compare this admission with the reaction of Peter, who, a few months after the course ended in 2014, tossed his smartphone into a river.

“I think my students are being entirely rational when they ‘distract’ themselves in my class with their phones. They understand the world they are being prepared to enter much better than I do. In that world, I’m the distraction, not their phones or their social-media profiles or their networking. Yet for what I’m supposed to be doing—educating and cultivating young hearts and minds—the consequences are pretty dark.”

YouTube’s Top Earner For 2019? An 8-Year-Old Who Made $26M

“An eight-year-old boy who reviews toys on YouTube has been named by Forbes as the platform’s highest earner in 2019,” reports CNN:
Ryan Kaji, whose channel Ryan’s World has 22.9 million subscribers, earned $26 million in 2019 — up $4 million from his earnings in 2018, when he also gained the highest-earning YouTuber spot… Another child, Anastasia Radzinskaya, five, came in third place with earnings of $18 million. Radzinskaya, who was born in southern Russia and has cerebral palsy, appears in videos with her father. According to Forbes, she has 107 million subscribers across seven channels and her videos have been watched 42 billion times….

Dude Perfect — a group of five friends in their thirties who play sports and perform stunts — came in second place, earning $20 million.

YouTube has announced that next year it will stop personalized advertisements on children’s content. This comes after Google agreed to pay $170 million to settle accusations that YouTube broke the law when it knowingly tracked and sold ads targeted to children.

40% of Anti-Vaccine Group’s Funding Came From Wealthy ‘Alternative Health’ Vendor

The nation’s oldest anti-vaccine advocacy group often emphasizes that it is supported primarily by small donations and concerned parents, describing its founder as the leader of a “national, grass roots movement.” But over the past decade a single donor has contributed more than $2.9 million to the National Vaccine Information Center, accounting for about 40 percent of the organization’s funding, according to the most recent available tax records.

That donor, osteopathic physician Joseph Mercola, has amassed a fortune selling natural health products, court records show, including vitamin supplements, some of which he claims are alternatives to vaccines.

In recent years, the center has been at the forefront of a movement that has led some parents to forgo or delay immunizing their children against vaccine-preventable diseases such as measles. Health officials say falling vaccination rates contributed to the infectious virus sickening more than 1,200 people in the United States this year, the largest number in more than 25 years. Measles outbreaks are surging worldwide, including in Samoa — where nearly 80 people have died since mid-October, the great majority of them young children and infants… The group claimed credit this year for helping to defeat legislation in a dozen states that would have made it harder for parents to opt out of vaccinating their children…

Mercola, whose claims about other products have drawn warnings from regulators, has also given at least $4 million to several groups that echo the anti-vaccine message. His net worth, derived largely from his network of private companies, has grown to “in excess of $100 million,” he said in a 2017 affidavit.

In 2010 Mercola’s site and the anti-vaccination group “launched a website that tracks vaccine-related legislation in every state. The site provides activists with detailed information, including how to sign up for public comment to support or oppose legislation in their state, where to park to attend a public hearing and what color T-shirt to wear to rallies…”

“In 2016, in response to a complaint from the Federal Trade Commission, Mercola refunded nearly $2.6 million to more than 1,300 people who bought tanning beds that he claimed could reduce the risk of skin cancer.”

Facebook, Twitter Shutter Pro-Trump Network That Used AI To Create Fake People and Push Conspiracies

On Friday, Facebook and Twitter shut down a network of fake accounts that pushed pro-Trump messages all while “masquerading” as Americans with AI-generated faces as profile photos.

In a blog post, Facebook said that it connected the accounts to a US-based media company called The BL that, it claims, has ties to Epoch Media Group. In August, NBC News first reported that Epoch Media Group was pushing messages in support of President Donald Trump across social media platforms like Facebook and Twitter. Epoch has extensive connections to Falun Gong, an eccentric Chinese spiritual community that has faced significant persecution from the country’s central government. In a statement provided to The Verge, the Epoch Times denied any connection to The BL.

Facebook noted that many of the fake accounts used in the latest campaign employed false profile photos that appeared to have been generated by artificial intelligence. Those accounts would post BL content in other Facebook groups while pretending to be Americans. Pro-Trump messages were often posted “at very high frequencies” and linked to off-platform sites belonging to the BL and The Epoch Times. The accounts and pages were managed by individuals in the US and Vietnam. Facebook said that it removed 610 accounts, 89 Facebook pages, 156 groups, and 72 Instagram accounts that were connected to the organization. Around 55 million accounts followed one of these Facebook pages and 92,000 followed at least one of the Instagram accounts. The organization spent nearly $9.5 million in advertisements, according to Facebook.

The Rise of the Deepfake and the Threat to Democracy

The deepfakes posted on the internet in the past two years have alarmed many observers, who believe the technology could be used to disgrace politicians and even swing elections. Democracies appear to be gravely threatened by the speed at which disinformation can be created and spread via social media, where the incentive to share the most sensationalist content outweighs the incentive to perform the tiresome work of verification.

Last month, a digitally altered video showing Nancy Pelosi, the speaker of the US House of Representatives, appearing to slur drunkenly through a speech was widely shared on Facebook and YouTube. Trump then posted the clip on Twitter with the caption: “PELOSI STAMMERS THROUGH NEWS CONFERENCE”. The video was quickly debunked, but not before it had been viewed millions of times; the president did not delete his tweet, which at the time of writing has received nearly 98,000 likes. Facebook declined to take down the clip, qualifying its decision with the statement: “Once the video was fact-checked as false, we dramatically reduced its distribution.”

In response, a team including the artists Bill Posters and Daniel Howe two weeks ago posted a video on Instagram, in which Facebook founder Mark Zuckerberg boasts that he has “total control of billions of people’s stolen data, all their secrets, their lives, their futures”.

In May 2018, a Flemish socialist party called sp.a posted a deepfake video to its Twitter and Facebook pages showing Trump appearing to taunt Belgium for remaining in the Paris climate agreement. The video, which remains on the party’s social media, is a poor forgery: Trump’s hair is curiously soft-focus, while his mouth moves with a Muppet-like elasticity. Indeed, the video concludes with Trump saying: “We all know that climate change is fake, just like this video,” although this sentence alone is not subtitled in Flemish Dutch. (The party declined to comment, but a spokesperson previously told the site Politico that it commissioned the video to “draw attention to the necessity to act on climate change”.)

But James [founder of the YouTube channel ‘derpfakes’, which publishes deepfake videos] believes forgeries may have gone undetected. “The idea that deepfakes have already been used politically isn’t so farfetched,” he says. “It could be the case that deepfakes have already been widely used for propaganda.”

37-Year-Old Mom Finds Instagram’s Sex Predators By Pretending To Be 11

Sloane Ryan is a 37-year-old woman who runs the Special Projects Team at Bark, a child-safety tech company selling $9-a-month software that monitors text messages for bullying, threats of violence, depression, and sexual predators. “In 2018 alone, Bark alerted the FBI to 99 child predators. In 2019? That number is more than 300 — and counting.”

Bark had wanted a way to depict the problem to the public without using actual conversations — so Ryan began posing as a minor on Instagram.
Over the past nine months, I’ve been 15-year-old Libby and 16-year-old Kait and 14-year-old Ava. I’ve been a studious sophomore contemplating bangs and a lacrosse player being raised by her aunt and an excitable junior eager for prom….

At the beginning of the week, on the very first night as [11-year-old] “Bailey,” two new messages came in within 52 seconds of publishing a photo. We sat, mouths agape, as the numbers pinged up on the screen — 2, 3, 7, 15 messages from adult men over the course of two hours. Half of them could be charged with transfer of obscene content to a minor. That night, I had taken a breather and sat with my head in my hands.

The second half of the article includes examples of particularly graphic conversations with what the perpetrators think are an 11-year-old girl instead of the 37-year-old woman who’s investigating them. “I exit the conversation with @ XXXastrolifer to see another nine requests pending… Over the course of one week, over 52 men reached out to an 11-year-old girl.”

The Price of Recycling Old Laptops: Toxic Fumes in Thailand’s Lungs

The e-waste industry is booming in Southeast Asia, frightening residents worried for their health. Despite a ban on imports, Thailand is a center of the business.

Crouched on the ground in a dimly lit factory, the women picked through the discarded innards of the modern world: batteries, circuit boards and bundles of wires. They broke down the scrap — known as e-waste — with hammers and raw hands. Men, some with faces wrapped in rags to repel the fumes, shoveled the refuse into a clanking machine that salvages usable metal. As they toiled, smoke spewed over nearby villages and farms. Residents have no idea what is in the smoke: plastic, metal, who knows? All they know is that it stinks and they feel sick.

The factory, New Sky Metal, is part of a thriving e-waste industry across Southeast Asia, born of China’s decision to stop accepting the world’s electronic refuse, which was poisoning its land and people. Thailand in particular has become a center of the industry even as activists push back and its government wrestles to balance competing interests of public safety with the profits to be made from the lucrative trade. Last year, Thailand banned the import of foreign e-waste. Yet new factories are opening across the country, and tons of e-waste are being processed, environmental monitors and industry experts say. “E-waste has to go somewhere,” said Jim Puckett, the executive director of the Basel Action Network, which campaigns against trash dumping in poor countries, “and the Chinese are simply moving their entire operations to Southeast Asia.”

NHS Gives Amazon Free Use of Health Data Under Alexa Advice Deal

Amazon has been given free access to healthcare information collected by the NHS as part of a contract with the government. The material, which excludes patient data, could allow the multinational technology company to make, advertise and sell its own products.

In July the health secretary, Matt Hancock, said a partnership with the NHS that allowed Amazon Alexa devices to offer expert health advice to users would reduce pressure on “our hard-working GPs and pharmacists.” But responses to freedom of information requests, published by the Sunday Times, showed the contract will also allow the company access to information on symptoms, causes and definitions of conditions, and “all related copyrightable content and data and other materials.” Amazon, which is worth $863bn and is run by the world’s richest person, Jeff Bezos, can then create “new products, applications, cloud-based services and/or distributed software,” which the NHS would not benefit from financially. It can also share the information with third parties. Labour’s shadow health secretary, Jonathan Ashworth, told the Sunday Times that the government was “highly irresponsible” and “in the pocket of big corporate interests.”

Turkey is Getting Military Drones Armed With Machine Guns

A drone with a machine gun attached can hit targets with high precision, according to its makers. Turkey is set to become the first country to have the drone, when it gets a delivery this month. The 25-kilogram drone has eight rotating blades to get it in the air. Its machine gun carries 200 rounds of ammunition and can fire single shots or 15-round bursts. Many countries and groups already use small military drones that can drop grenades or fly into a target to detonate an explosive. The new drone, called Songar and made by Ankara-based electronics firm Asisguard, is the first drone to be equipped with a firearm and be ready for service. Turkey expects the drones to be delivered before the end of the year.

It is hard for a drone to shoot accurately, partly because of the difficulty of judging range and angle, and partly because the recoil from each shot significantly moves the drone, affecting the aim for the next round. Songar has two systems to overcome these challenges. One uses sensors, including cameras and a laser rangefinder, to calculate distance, angle and wind speed, and work out where to aim. The second is a set of robot arms that move the machine gun to compensate for the effects of recoil.

Emotion Recognition Tech Should Be Banned, Says an AI Research Institute

A leading research centre has called for new laws to restrict the use of emotion-detecting tech. The AI Now Institute says the field is “built on markedly shaky foundations.” Despite this, systems are on sale to help vet job seekers, test criminal suspects for signs of deception, and set insurance prices. It wants such software to be banned from use in important decisions that affect people’s lives and/or determine their access to opportunities. The US-based body has found support in the UK from the founder of a company developing its own emotional-response technologies, who nonetheless cautioned that any restrictions would need to be nuanced enough not to hamper all work being done in the area.

AI Now refers to the technology by its formal name, affect recognition, in its annual report. It says the sector is undergoing a period of significant growth and could already be worth as much as $20 billion. “It claims to read, if you will, our inner-emotional states by interpreting the micro-expressions on our face, the tone of our voice or even the way that we walk,” explained co-founder Prof Kate Crawford. “It’s being used everywhere, from how do you hire the perfect employee through to assessing patient pain, through to tracking which students seem to be paying attention in class. “At the same time as these technologies are being rolled out, large numbers of studies are showing that there is… no substantial evidence that people have this consistent relationship between the emotion that you are feeling and the way that your face looks.”

YouTube’s Algorithm Made Fake CNN Reports Go Viral

“YouTube channels posing as American news outlets racked up millions of views on false and inflammatory videos over several months this year,” reports CNN.

“All with the help of YouTube’s recommendation engine.”

Many of the accounts, which mostly used footage from CNN, but also employed some video from Fox News, exploited a YouTube feature that automatically creates channels on certain topics. Those topic channels are then automatically populated by videos related to the topic — including, in this case, blatant misinformation.

YouTube has now shut down many of the accounts.

YouTube’s own algorithms also recommended videos from the channels to American users who watched videos about U.S. politics. That the channels could achieve such virality — one channel was viewed more than two million times over one weekend in October — raises questions about YouTube’s preparedness for tackling misinformation on its platform just weeks before the Iowa caucuses and points to the continuing challenge platforms face as people try to game their systems….

Responding to the findings on Thursday, a CNN spokesperson said YouTube needs to take responsibility.

“When accounts were deleted or banned, they were able to spin up new accounts within hours,” added Plasticity, a natural language processing and AI startup which analyzed the data and identified at least 25 different accounts which YouTube then shut down.

“The tactics they used to game the YouTube algorithm were executed perfectly. They knew what they were doing.”