Resources

AI is Sending People To Jail–and Getting it Wrong

The US imprisons more people than any other country in the world. At the end of 2016, nearly 2.2 million adults were being held in prisons or jails, and an additional 4.5 million were in other correctional facilities. Put another way, 1 in 38 adult Americans was under some form of correctional supervision. The nightmarishness of this situation is one of the few issues that unite politicians on both sides of the aisle.

Under immense pressure to reduce prison numbers without risking a rise in crime, courtrooms across the US have turned to automated tools in attempts to shuffle defendants through the legal system as efficiently and safely as possible. This is where the AI part of our story begins. Police departments use predictive algorithms to strategize about where to send their ranks. Law enforcement agencies use face recognition systems to help identify suspects. These practices have garnered well-deserved scrutiny for whether they in fact improve safety or simply perpetuate existing inequities.

Researchers and civil rights advocates, for example, have repeatedly demonstrated that face recognition systems can fail spectacularly, particularly for dark-skinned individuals — even mistaking members of Congress for convicted criminals. But the most controversial tool by far comes after police have made an arrest. Say hello to criminal risk assessment algorithms.

Samsung Chip Output at South Korea Plant Partly Halted Due To 1-Minute Electricity Glitch

A 1-minute power glitch on Tuesday, December 31, partially shut down Samsung chip production at its Hwaseong chip complex in South Korea for “two or three days”. DRAM and NAND lines were affected. Preliminary inspections show “no major damage” but losses are still expected to be in the millions.

Internet shutdowns used to be rare. They’re increasingly becoming the norm in much of the world

An ongoing internet blackout in Indian-controlled Kashmir is now the longest ever in a democracy — at more than 135 days — according to Access Now, an advocacy group that tracks internet freedom. Only the autocratic governments of China and junta-era Myanmar have cut off access for longer… Kashmiris have been without internet access for so long that WhatsApp has reportedly begun deleting their accounts for inactivity… India’s increased internet censorship has been greeted with delight in China, however, where state-run media pointed to it as an endorsement of Beijing’s own authoritarian approach. The People’s Daily said this week that India’s example showed “shutting down the internet in a state of emergency should be standard practice for sovereign countries….”

African states have also embraced the tactic, with Zimbabwe, the Democratic Republic of Congo, Chad and Ethiopia all cutting off internet access in an attempt to rein in anti-government protests. This is in line with a general pattern of increased online censorship. It’s partly due to the spread of more sophisticated technology that makes it easier, and cheaper, to monitor and filter traffic online. It’s also influenced by a shifting perception of internet censorship, which once used to be seen as something of a losing battle. China’s Great Firewall, however, has proved beyond doubt that not only can the internet be controlled, but that doing so can help prop up the regime and prevent opposition movements from getting off the ground…

Shutdowns give police a freer hand to rein in unrest without the type of hyper-scrutiny on social media that has become common in highly connected societies, and enable the government to ensure that its message is the only one heard on a particular topic. In 2018, there were 196 internet shutdowns globally — mainly in Asia, Africa and the Middle East — according to Access Now. In the first half of this year alone, there were 128, and 2019 looks to be the worst year on record. According to Freedom House, a Washington-based NGO, almost half of the world’s population lives in a country “where authorities disconnected internet or mobile networks, often for political reasons.”

Amazon is looking into tech that can identify you using the veins in your hand

Amazon filed a patent for technology that could identify you by scanning the wrinkles in the palm of your hand and by using a light to see beneath your skin to your blood vessels. The resulting images could be used to identify you as a shopper at Amazon Go stores. It was previously reported that the Seattle-based tech giant might install these hi-tech scanners in Whole Foods grocery stores. However, the U.S. Patent and Trademark Office published an application on Thursday that suggests the e-commerce behemoth has set its sights on Amazon Go stores…

While fingerprint scanners have been around for years, Amazon hopes to innovate by developing a personal identification system that you don’t have to touch. Imagine hovering your hand in front of an infrared light as a camera snaps two images — one from the surface, and one that looks for “deeper characteristics such as veins.” An internal computer system would then identify you based on that information.

‘I Asked My Students To Turn In Their Cellphones and Write About Living Without Them’

Ron Srigley, a writer who teaches at Humber College and Laurentian University, offered his students extra credit if they would give him their phones for nine days and write about living without them. “What they wrote was remarkable, and remarkably consistent,” he writes. “These university students, given the chance to say what they felt, didn’t gracefully submit to the tech industry and its devices.”

“Believe it or not, I had to walk up to a stranger and ask what time it was. It honestly took me a lot of guts and confidence to ask someone,” Janet wrote. (Her name, like the others here, is a pseudonym.) She describes the attitude she was up against: “Why do you need to ask me the time? Everyone has a cell phone. You must be weird or something.”

Emily went even further. Simply walking by strangers “in the hallway or when I passed them on the street” caused almost all of them to take out a phone “right before I could gain eye contact with them.”

To these young people, direct, unmediated human contact was experienced as ill-mannered at best and strange at worst.

James: “One of the worst and most common things people do nowadays is pull out their cell phone and use it while in a face-to-face conversation. This action is very rude and unacceptable, but yet again, I find myself guilty of this sometimes because it is the norm.” Emily noticed that “a lot of people used their cell phones when they felt they were in an awkward situation, for an example [sic] being at a party while no one was speaking to them.” The price of this protection from awkward moments is the loss of human relationships, a consequence that almost all the students identified and lamented. Without his phone, James said, he found himself forced to look others in the eye and engage in conversation. Stewart put a moral spin on it. “Being forced to have [real relations with people] obviously made me a better person because each time it happened I learned how to deal with the situation better, other than sticking my face in a phone.” Ten of the 12 students said their phones were compromising their ability to have such relationships.

Peter: “I have to admit, it was pretty nice without the phone all week. Didn’t have to hear the fucking thing ring or vibrate once, and didn’t feel bad not answering phone calls because there were none to ignore.” “It felt so free without one and it was nice knowing no one could bother me when I didn’t want to be bothered,” wrote William.

Emily said that she found herself “sleeping more peacefully after the first two nights of attempting to sleep right away when the lights got shut off.”

Stewart: “Actually I got things done much quicker without the cell because instead of waiting for a response from someone (that you don’t even know if they read your message or not) you just called them [from a land line], either got an answer or didn’t, and moved on to the next thing.”

“My students’ experience of cell phones and the social-media platforms they support may not be exhaustive, or statistically representative. But it is clear that these gadgets made them feel less alive, less connected to other people and to the world, and less productive. They also made many tasks more difficult and encouraged students to act in ways they considered unworthy of themselves. In other words, phones didn’t help them. They harmed them.”

Tina’s concluding remarks described it well: “Without cell phones life would be simple and real but we may not be able to cope with the world and our society. After a few days I felt alright without the phone as I got used to it. But I guess it is only fine if it is for a short period of time. One cannot hope to compete efficiently in life without a convenient source of communication that is our phones.” Compare this admission with the reaction of Peter, who, a few months after the course in 2014, tossed his smartphone into a river.

“I think my students are being entirely rational when they ‘distract’ themselves in my class with their phones. They understand the world they are being prepared to enter much better than I do. In that world, I’m the distraction, not their phones or their social-media profiles or their networking. Yet for what I’m supposed to be doing—educating and cultivating young hearts and minds—the consequences are pretty dark.”

YouTube’s Top Earner For 2019? An 8-Year-Old Who Made $26M

“An eight-year-old boy who reviews toys on YouTube has been named by Forbes as the platform’s highest earner in 2019,” reports CNN:
Ryan Kaji, whose channel Ryan’s World has 22.9 million subscribers, earned $26 million in 2019 — up $4 million from his earnings in 2018, when he also gained the highest-earning YouTuber spot… Another child, Anastasia Radzinskaya, five, came in third place with earnings of $18 million. Radzinskaya, who was born in southern Russia and has cerebral palsy, appears in videos with her father. According to Forbes, she has 107 million subscribers across seven channels and her videos have been watched 42 billion times….

Dude Perfect — a group of five friends in their thirties who play sports and perform stunts — came in second place, earning $20 million.

YouTube has announced that next year it will stop personalized advertisements on children’s content. This comes after Google agreed to pay $170 million to settle accusations that YouTube broke the law when it knowingly tracked and sold ads targeted to children.

40% of Anti-Vaccine Group’s Funding Came From Wealthy ‘Alternative Health’ Vendor

The nation’s oldest anti-vaccine advocacy group often emphasizes that it is supported primarily by small donations and concerned parents, describing its founder as the leader of a “national, grass roots movement.” But over the past decade a single donor has contributed more than $2.9 million to the National Vaccine Information Center, accounting for about 40 percent of the organization’s funding, according to the most recent available tax records.

That donor, osteopathic physician Joseph Mercola, has amassed a fortune selling natural health products, court records show, including vitamin supplements, some of which he claims are alternatives to vaccines.

In recent years, the center has been at the forefront of a movement that has led some parents to forgo or delay immunizing their children against vaccine-preventable diseases such as measles. Health officials say falling vaccination rates contributed to the infectious virus sickening more than 1,200 people in the United States this year, the largest number in more than 25 years. Measles outbreaks are surging worldwide, including in Samoa — where nearly 80 people have died since mid-October, the great majority of them young children and infants… The group claimed credit this year for helping to defeat legislation in a dozen states that would have made it harder for parents to opt out of vaccinating their children…

Mercola, whose claims about other products have drawn warnings from regulators, has also given at least $4 million to several groups that echo the anti-vaccine message. His net worth, derived largely from his network of private companies, has grown to “in excess of $100 million,” he said in a 2017 affidavit.

In 2010 Mercola’s site and the anti-vaccination group “launched a website that tracks vaccine-related legislation in every state. The site provides activists with detailed information, including how to sign up for public comment to support or oppose legislation in their state, where to park to attend a public hearing and what color T-shirt to wear to rallies…”

“In 2016, in response to a complaint from the Federal Trade Commission, Mercola refunded nearly $2.6 million to more than 1,300 people who bought tanning beds that he claimed could reduce the risk of skin cancer.”

Facebook, Twitter Shutter Pro-Trump Network That Used AI To Create Fake People and Push Conspiracies

On Friday, Facebook and Twitter shut down a network of fake accounts that pushed pro-Trump messages all while “masquerading” as Americans with AI-generated faces as profile photos.

In a blog post, Facebook said that it connected the accounts to a US-based media company called The BL that, it claims, has ties to Epoch Media Group. In August, NBC News first reported that Epoch Media Group was pushing messages in support of President Donald Trump across social media platforms like Facebook and Twitter. Epoch has extensive connections to Falun Gong, an eccentric Chinese spiritual community that has faced significant persecution from the country’s central government. In a statement provided to The Verge, the Epoch Times denied any connection to The BL.

Facebook noted that many of the fake accounts used in the latest campaign employed false profile photos that appeared to have been generated by artificial intelligence. Those accounts would post BL content in other Facebook groups while pretending to be Americans. Pro-Trump messages were often posted “at very high frequencies” and linked to off-platform sites belonging to the BL and The Epoch Times. The accounts and pages were managed by individuals in the US and Vietnam. Facebook said that it removed 610 accounts, 89 Facebook pages, 156 groups, and 72 Instagram accounts that were connected to the organization. Around 55 million accounts followed one of these Facebook pages and 92,000 followed at least one of the Instagram accounts. The organization spent nearly $9.5 million in advertisements, according to Facebook.

The Rise of the Deepfake and the threat to Democracy

Deepfakes posted on the internet in the past two years have alarmed many observers, who believe the technology could be used to disgrace politicians and even swing elections. Democracies appear to be gravely threatened by the speed at which disinformation can be created and spread via social media, where the incentive to share the most sensationalist content outweighs the incentive to perform the tiresome work of verification.

Last month, a digitally altered video showing Nancy Pelosi, the speaker of the US House of Representatives, appearing to slur drunkenly through a speech was widely shared on Facebook and YouTube. Trump then posted the clip on Twitter with the caption: “PELOSI STAMMERS THROUGH NEWS CONFERENCE”. The video was quickly debunked, but not before it had been viewed millions of times; the president did not delete his tweet, which at the time of writing has received nearly 98,000 likes. Facebook declined to take down the clip, qualifying its decision with the statement: “Once the video was fact-checked as false, we dramatically reduced its distribution.”

In response, a team including the artists Bill Posters and Daniel Howe two weeks ago posted a video on Instagram, in which Facebook founder Mark Zuckerberg boasts that he has “total control of billions of people’s stolen data, all their secrets, their lives, their futures”.

In May 2018, a Flemish socialist party called sp.a posted a deepfake video to its Twitter and Facebook pages showing Trump appearing to taunt Belgium for remaining in the Paris climate agreement. The video, which remains on the party’s social media, is a poor forgery: Trump’s hair is curiously soft-focus, while his mouth moves with a Muppet-like elasticity. Indeed, the video concludes with Trump saying: “We all know that climate change is fake, just like this video,” although this sentence alone is not subtitled in Flemish Dutch. (The party declined to comment, but a spokesperson previously told the site Politico that it commissioned the video to “draw attention to the necessity to act on climate change”.)

But James [founder of the YouTube channel ‘derpfakes’, which publishes deepfake videos] believes forgeries may have gone undetected. “The idea that deepfakes have already been used politically isn’t so farfetched,” he says. “It could be the case that deepfakes have already been widely used for propaganda.”

37-Year-Old Mom Finds Instagram’s Sex Predators By Pretending To Be 11

Sloane Ryan is a 37-year-old woman who runs the Special Projects Team at Bark, a child-safety tech company selling a $9-a-month software that monitors text messages for bullying, threats of violence, depression, and sexual predators. “In 2018 alone, Bark alerted the FBI to 99 child predators. In 2019? That number is more than 300 — and counting.”

Bark had wanted a way to depict the problem to the public without using actual conversations — so Ryan began posing as a minor on Instagram.
Over the past nine months, I’ve been 15-year-old Libby and 16-year-old Kait and 14-year-old Ava. I’ve been a studious sophomore contemplating bangs and a lacrosse player being raised by her aunt and an excitable junior eager for prom….

At the beginning of the week, on the very first night as [11-year-old] “Bailey” two new messages came in within 52 seconds of publishing a photo. We sat mouths agape as the numbers pinged up on the screen — 2, 3, 7, 15 messages from adult men over the course of two hours. Half of them could be charged with transfer of obscene content to a minor. That night, I had taken a breather and sat with my head in my hands.

The second half of the article includes examples of particularly graphic conversations with what the perpetrators think are an 11-year-old girl instead of the 37-year-old woman who’s investigating them. “I exit the conversation with @ XXXastrolifer to see another nine requests pending… Over the course of one week, over 52 men reached out to an 11-year-old girl.”

The Price of Recycling Old Laptops: Toxic Fumes in Thailand’s Lungs

The e-waste industry is booming in Southeast Asia, frightening residents worried for their health. Despite a ban on imports, Thailand is a center of the business.

Crouched on the ground in a dimly lit factory, the women picked through the discarded innards of the modern world: batteries, circuit boards and bundles of wires. They broke down the scrap — known as e-waste — with hammers and raw hands. Men, some with faces wrapped in rags to repel the fumes, shoveled the refuse into a clanking machine that salvages usable metal. As they toiled, smoke spewed over nearby villages and farms. Residents have no idea what is in the smoke: plastic, metal, who knows? All they know is that it stinks and they feel sick.

The factory, New Sky Metal, is part of a thriving e-waste industry across Southeast Asia, born of China’s decision to stop accepting the world’s electronic refuse, which was poisoning its land and people. Thailand in particular has become a center of the industry even as activists push back and its government wrestles to balance competing interests of public safety with the profits to be made from the lucrative trade. Last year, Thailand banned the import of foreign e-waste. Yet new factories are opening across the country, and tons of e-waste are being processed, environmental monitors and industry experts say. “E-waste has to go somewhere,” said Jim Puckett, the executive director of the Basel Action Network, which campaigns against trash dumping in poor countries, “and the Chinese are simply moving their entire operations to Southeast Asia.”

NHS Gives Amazon Free Use of Health Data Under Alexa Advice Deal

Amazon has been given free access to healthcare information collected by the NHS as part of a contract with the government. The material, which excludes patient data, could allow the multinational technology company to make, advertise and sell its own products.

In July the health secretary, Matt Hancock, said a partnership with the NHS that allowed Amazon Alexa devices to offer expert health advice to users would reduce pressure on “our hard-working GPs and pharmacists.” But responses to freedom of information requests, published by the Sunday Times, showed the contract will also allow the company access to information on symptoms, causes and definitions of conditions, and “all related copyrightable content and data and other materials.” Amazon, which is worth $863bn and is run by the world’s richest person, Jeff Bezos, can then create “new products, applications, cloud-based services and/or distributed software,” which the NHS would not benefit from financially. It can also share the information with third parties. Labour’s shadow health secretary, Jonathan Ashworth, told the Sunday Times that the government was “highly irresponsible” and “in the pocket of big corporate interests.”

Turkey is Getting Military Drones Armed With Machine Guns

A drone with a machine gun attached can hit targets with high precision, according to its makers. Turkey is set to become the first country to have the drone, when it gets a delivery this month. The 25-kilogram drone has eight rotating blades to get it in the air. Its machine gun carries 200 rounds of ammunition and can fire single shots or 15-round bursts. Many countries and groups already use small military drones that can drop grenades or fly into a target to detonate an explosive. The new drone, called Songar and made by Ankara-based electronics firm Asisguard, is the first drone to be equipped with a firearm and be ready for service. Turkey expects the drones to be delivered before the end of the year.

It is hard for a drone to shoot accurately, partly because of the difficulty of judging range and angle, and partly because the recoil from each shot significantly moves the drone, affecting the aim for the next round. Songar has two systems to overcome these challenges. One uses sensors, including cameras and a laser rangefinder, to calculate distance, angle and wind speed, and work out where to aim. The second is a set of robot arms that move the machine gun to compensate for the effects of recoil.

Emotion Recognition Tech Should Be Banned, Says an AI Research Institute

A leading research centre has called for new laws to restrict the use of emotion-detecting tech. The AI Now Institute says the field is “built on markedly shaky foundations.” Despite this, systems are on sale to help vet job seekers, test criminal suspects for signs of deception, and set insurance prices. It wants such software to be banned from use in important decisions that affect people’s lives and/or determine their access to opportunities. The US-based body has found support in the UK from the founder of a company developing its own emotional-response technologies — but it cautioned that any restrictions would need to be nuanced enough not to hamper all work being done in the area.

AI Now refers to the technology by its formal name, affect recognition, in its annual report. It says the sector is undergoing a period of significant growth and could already be worth as much as $20 billion. “It claims to read, if you will, our inner-emotional states by interpreting the micro-expressions on our face, the tone of our voice or even the way that we walk,” explained co-founder Prof Kate Crawford. “It’s being used everywhere, from how do you hire the perfect employee through to assessing patient pain, through to tracking which students seem to be paying attention in class. “At the same time as these technologies are being rolled out, large numbers of studies are showing that there is… no substantial evidence that people have this consistent relationship between the emotion that you are feeling and the way that your face looks.”

YouTube’s Algorithm Made Fake CNN Reports Go Viral

“YouTube channels posing as American news outlets racked up millions of views on false and inflammatory videos over several months this year,” reports CNN.

“All with the help of YouTube’s recommendation engine.”

Many of the accounts, which mostly used footage from CNN, but also employed some video from Fox News, exploited a YouTube feature that automatically creates channels on certain topics. Those topic channels are then automatically populated by videos related to the topic — including, in this case, blatant misinformation.

YouTube has now shut down many of the accounts.

YouTube’s own algorithms also recommended videos from the channels to American users who watched videos about U.S. politics. That the channels could achieve such virality — one channel was viewed more than two million times over one weekend in October — raises questions about YouTube’s preparedness for tackling misinformation on its platform just weeks before the Iowa caucuses and points to the continuing challenge platforms face as people try to game their systems….

Responding to the findings on Thursday, a CNN spokesperson said YouTube needs to take responsibility.

“When accounts were deleted or banned, they were able to spin up new accounts within hours,” added Plasticity, a natural language processing and AI startup which analyzed the data and identified at least 25 different accounts which YouTube then shut down.

“The tactics they used to game the YouTube algorithm were executed perfectly. They knew what they were doing.”

Chinese Newspaper Touts Videogame Where Players ‘Hunt Down Traitors’ in Hong Kong

An online game calling on players to hunt down traitors who seek to separate Hong Kong from China and fuel street violence has reportedly begun to attract players across Chinese mainland social media platforms. The game, “fight the traitors together,” is set against the backdrop of the social unrest that has persisted in Hong Kong. The script asks the player to find eight secessionists hidden in the crowd participating in Hong Kong protests.

Players can knock them down with slaps or rotten eggs until they are captured. Online gamers claim the game allows them to vent their anger at the separatist behavior of secessionists during the recent Hong Kong riots. The eight traitors in the game, caricatured based on real people, include Jimmy Lai Chee-ying, Martin Lee Chu-ming and Joshua Wong Chi-fung, prominent opposition figures who have played a major role in inciting unrest in Hong Kong. There are also traitor figures in ancient China…

In the game, amid a crowd of black-clad rioters wearing yellow hats and face masks, Anson Chan Fang On-sang, another leading opposition figure, carries a bag with a U.S. flag, clutches a stack of U.S. dollars and holds a loudspeaker to incite violence in the streets.

Cops Around the World Are Using An Outlandish Mind-Reading Tool

ProPublica reports that dozens of state and local agencies have purchased “SCAN” training from a company called LSI for reviewing a suspect’s written statements — even though there’s no scientific evidence that it works.

Local, state and federal agencies from the Louisville Metro Police Department to the Michigan State Police to the U.S. State Department have paid for SCAN training. The LSI website lists 417 agencies nationwide, from small-town police departments to the military, that have been trained in SCAN — and that list isn’t comprehensive, because additional ones show up in procurement databases and in public records obtained by ProPublica. Other training recipients include law enforcement agencies in Australia, Belgium, Canada, Israel, Mexico, the Netherlands, Singapore, South Africa and the United Kingdom, among others…

For Avinoam Sapir, the creator of SCAN, sifting truth from deception is as simple as one, two, three.

1. Give the subject a pen and paper.
2. Ask the subject to write down his/her version of what happened.
3. Analyze the statement and solve the case.

Those steps appear on the website for Sapir’s company, based in Phoenix. “SCAN Unlocks the Mystery!” the homepage says, alongside a logo of a question mark stamped on someone’s brain. The site includes dozens of testimonials with no names attached. “Since January when I first attended your course, everybody I meet just walks up to me and confesses!” one says. Another testimonial says “The Army finally got its money’s worth…” SCAN saves time, the site says. It saves money. Police can fax a questionnaire to a hundred people at once, the site says. Those hundred people can fax it back “and then, in less than an hour, the investigator will be able to review the questionnaires and solve the case.”

In 2009 the U.S. government created a special interagency task force, drawing on the FBI, CIA and the U.S. Department of Defense, to review scientific studies and independently investigate which interrogation techniques worked. “When all 12 SCAN criteria were used in a laboratory study, SCAN did not distinguish truth-tellers from liars above the level of chance,” the review said, also challenging two of the method’s 12 criteria. “Both gaps in memory and spontaneous corrections have been shown to be indicators of truth, contrary to what is claimed by SCAN.”
In a footnote, the review identified three specific agencies that use SCAN: the FBI, CIA and U.S. Army military intelligence, which falls under the Department of Defense…

In 2016, the same year the federal task force released its review of interrogation techniques, four scholars published a study on SCAN in the journal Frontiers in Psychology. The authors — three from the Netherlands, one from England — noted that there had been only four prior studies in peer-reviewed journals on SCAN’s effectiveness. Each of those studies (in 1996, 2012, 2014 and 2015) concluded that SCAN failed to help discriminate between truthful and fabricated statements. The 2016 study found the same. Raters trained in SCAN evaluated 234 statements — 117 true, 117 false. Their results in trying to separate fact from fiction were about the same as chance….
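To make “about the same as chance” concrete: with 234 true/false statements, a pure guesser’s score follows a binomial distribution, so there is a band of accuracies statistically indistinguishable from guessing. A minimal illustrative sketch (not part of the study itself, just a standard normal approximation to the binomial):

```python
import math

def chance_interval(n, p=0.5, z=1.96):
    """Approximate 95% interval for the number of correct labels a
    pure guesser would get on n true/false statements (normal
    approximation to the binomial distribution)."""
    mean = n * p
    sd = math.sqrt(n * p * (1 - p))
    return mean - z * sd, mean + z * sd

# 234 statements (117 true, 117 false), as in the 2016 study
lo, hi = chance_interval(234)
print(f"Guessing alone lands between {lo:.0f} and {hi:.0f} correct "
      f"({lo/234:.0%} to {hi/234:.0%} accuracy) about 95% of the time.")
```

Any SCAN-trained rater scoring inside that band (roughly 44% to 56% accuracy here) is performing no better than a coin flip, which is what the five studies found.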

Steven Drizin, a Northwestern University law professor who specializes in wrongful convictions, said SCAN and assorted other lie-detection tools suffer from “over-claim syndrome” — big claims made without scientific grounding. Asked why police would trust such tools, Drizin said: “A lot has to do with hubris — a belief on the part of police officers that they can tell when someone is lying to them with a high degree of accuracy. These tools play into that belief and confirm that belief.”

SCAN’s creator “declined to be interviewed for this story,” but ProPublica spoke to some users of the technique. Travis Marsh, the head of an Indiana sheriff’s department, has been using the tool for nearly two decades, while acknowledging that he can’t explain how it works. “It really is, for lack of a better term, a faith-based system because you can’t see behind the curtain.”

ProPublica also reports that “Years ago his wife left a note saying she and the kids were off doing one thing, whereas Marsh, analyzing her writing, could tell they had actually gone shopping. His wife has not left him another note in at least 15 years…”

Scientists Propose Destroying Mountains To Build a New Type of Battery For Long-Term Energy Storage

“One of the big challenges of making 100 percent renewable energy a reality is long-term storage,” says Julian Hunt, an engineering scientist at the International Institute for Applied Systems Analysis in Austria. Hunt and his collaborators have devised a novel system to complement lithium-ion battery use for energy storage over the long run: Mountain Gravity Energy Storage, or MGES for short. Similar to hydroelectric power, MGES involves storing material at elevation to produce gravitational energy. The energy is recovered when the stored material falls and turns turbines to generate electricity. The group describes its system in a paper published November 6 in Energy.

"Instead of building a dam, we propose building a big sand or gravel reservoir," explains Hunt. The key to MGES lies in finding two mountaintop sites that have a suitable difference in elevation — 1,000 meters is ideal. "The greater the height difference, the cheaper the technology," he says. The sites will look similar, each consisting of a mine-like station to store the sand or gravel, and a filling station directly below it. Valves release the material into waiting vessels, which are then transported via cranes and motor-run cables to the upper site. There, the sand or gravel is stored — for weeks, months, or even years — until it's ready to be used. When the material is moved back down the mountain, that stored gravitational energy is released and converted into electrical energy.

Not only is the system more environmentally friendly than pumped-storage hydropower and dams, but it is also flexible enough to meet varying energy demands.

"Hunt estimates that the annual cost of storing energy via this system will vary between $50 and $100 per megawatt-hour (MWh)," the report adds. "And he says that the energy expended to transport materials to the upper site will be offset by the amount of gravitational energy the system produces."
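To get a feel for the scale involved, the energy stored is ordinary gravitational potential energy, E = mgh. A minimal back-of-the-envelope sketch follows; the 1,000-meter height difference comes from the article, while the one-tonne vessel mass is purely an illustrative assumption:

```python
# Gravitational potential energy stored by lifting mass m through height h: E = m * g * h
G = 9.81          # gravitational acceleration, m/s^2
HEIGHT_M = 1000   # elevation difference the article calls ideal, in meters
VESSEL_KG = 1000  # illustrative one-tonne vessel of sand (assumption, not from the article)

energy_joules = VESSEL_KG * G * HEIGHT_M   # = 9.81e6 J
energy_kwh = energy_joules / 3.6e6         # convert joules to kilowatt-hours

# Tonnes of sand that must sit at the upper site to store one megawatt-hour:
tonnes_per_mwh = 1000 / energy_kwh

print(f"One tonne lifted 1,000 m stores about {energy_kwh:.2f} kWh")
print(f"Storing 1 MWh therefore requires roughly {tonnes_per_mwh:.0f} tonnes at elevation")
```

A tonne lifted a full kilometer stores only about 2.7 kWh (before conversion losses, which this sketch ignores), which is why Hunt stresses that a greater height difference makes the technology cheaper: the same stored energy needs far less material moved.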

Deepfake Porn Is Total Control Over Women’s Bodies

A lineup of female celebrities stands in front of you. Their faces move, smile, and blink as you move around them. They're fully nude, hairless, waiting for you to decide what you'll do to them as you peruse a menu of sex positions. This isn't just another deepfake porn video, or the kind of interactive, 3D-generated porn Motherboard reported on last month, but a hybrid of both, which gives people even more control over women's virtual bodies. This new type of nonconsensual porn uses custom 3D models that can be articulated and animated, which are then made to look exactly like specific celebrities with deepfaked faces. Until recently, deepfake porn consisted of taking the face of a person — usually a celebrity, almost always a woman — and swapping it onto the face of an adult performer in an existing porn video. With this method, a user can make a 3D avatar with a generic face, capture footage of it performing any kind of sexual act, then run that video through an algorithm that swaps the generic face with a real person's.

Keep Your IoT Devices on a Separate Network, FBI Says

The FBI says owners of IoT (Internet of Things) devices should isolate this equipment on a separate WiFi network, different from the one they're using for their primary devices, such as laptops, desktops, or smartphones. "Your fridge and your laptop should not be on the same network," the FBI's Portland office said in a weekly tech advice column. "Keep your most private, sensitive data on a separate system from your other IoT devices," it added. The same advice — to keep IoT devices on a separate WiFi network or LAN — has been shared in the past by multiple IT and security experts. The reasoning behind it is simple: by keeping all the IoT equipment on a separate network, any compromise of a "smart" device will not grant an attacker a direct route to a user's primary devices — where most of their data is stored. Jumping across the two networks would require considerable effort from the attacker. However, placing primary devices and IoT devices on separate networks may not be that easy for non-technical users. The simplest way is to use two routers.
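In practice, the separation the FBI describes amounts to putting IoT gear in its own IP subnet. As an illustrative sketch (the two subnet ranges below are common router defaults, assumed for the example, not taken from the FBI's advice), Python's standard `ipaddress` module can check whether a given device falls inside the isolated range:

```python
import ipaddress

# Hypothetical example subnets: primary devices on one network, IoT gear on another.
PRIMARY_NET = ipaddress.ip_network("192.168.1.0/24")  # laptops, desktops, phones
IOT_NET = ipaddress.ip_network("192.168.2.0/24")      # fridge, cameras, smart plugs

def is_isolated(device_ip: str) -> bool:
    """Return True if the device sits on the IoT subnet rather than the primary one."""
    addr = ipaddress.ip_address(device_ip)
    return addr in IOT_NET and addr not in PRIMARY_NET

print(is_isolated("192.168.2.50"))   # a smart fridge on the IoT network -> True
print(is_isolated("192.168.1.10"))   # a laptop on the primary network -> False
```

With the two-router setup the article suggests, each router hands out addresses from one of these ranges, so a compromised smart device never shares a broadcast domain with the machines holding sensitive data.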