Archives 2019

Airbnb Has a Hidden-Camera Problem

Airbnb’s rules allow cameras outdoors and in living rooms and common areas, but never in bathrooms or anywhere guests plan to sleep, including rooms with foldout beds. Starting in early 2018, Airbnb added another layer of disclosure: If hosts indicate they have cameras anywhere on their property, guests receive a pop-up informing them where the cameras are located and where they are aimed. To book the property, the guests must click “agree,” indicating that they’re aware of the cameras and consent to being filmed.

Of course, hosts have plenty of reason to train cameras on the homes they rent out to strangers. They can catch guests who attempt to steal, or who trash the place, or who initially say they’re traveling alone, then show up to a property with five people. A representative for Airbnb’s Trust & Safety communications department told me the company tries to filter out hosts who may attempt to surveil guests by matching them against sex-offender and felony databases. The company also uses risk scores to flag suspicious behavior, in addition to reviewing and booting hosts with consistently poor scores.

If a guest contacts Airbnb’s Trust & Safety team with a complaint about a camera, employees offer new accommodations if necessary and open an investigation into the host. […] But four guests who found cameras in their rentals told The Atlantic the company has inconsistently applied its own rules when investigating their claims, providing them with incorrect information and making recommendations that they say risked putting them in harm’s way. “There have been super terrible examples of privacy violations by AirBnB hosts, e.g., people have found cameras hidden in alarm clocks in their bedrooms,” wrote Jeff Bigham, a computer-science professor at Carnegie Mellon whose claim was initially denied after he reported cameras in his rental. “I feel like our experience is in some ways more insidious. If you find a truly hidden camera in your bedroom or bathroom, Airbnb will support you. If you find an undisclosed camera in the private living room, Airbnb will not support you.”

Facebook Ad Platform Could Be Inherently Discriminatory, Researchers Say

Researchers from Northeastern University, the University of Southern California, and the tech-accountability non-profit Upturn have released a paper arguing that Facebook’s ad delivery system itself can steer ads intended to be inclusive toward discriminatory outcomes, even without explicit intent. “In a paper titled, ‘Discrimination through optimization: How Facebook’s ad delivery can lead to skewed outcomes,’ co-authors Muhammad Ali, Piotr Sapiezynski, Miranda Bogen, Aleksandra Korolova, Alan Mislove, and Aaron Rieke find that advertiser budgets and ad content affect ad delivery, skewing it along gender and racial lines even when neutral ad targeting settings are used,” reports The Register. From the report:
The researchers found that Facebook ads tend to be shown to men because women tend to click on ads more often, making them more expensive to reach through Facebook’s system. That divide becomes apparent when ad budgets are compared, because the ad budget affects ad distribution. As the paper explains, “the higher the daily budget, the smaller the fraction of men in the audience.” Such segregation may be appropriate and desirable for certain types of marketing pitches, but when applied to credit, employment and housing ads, the consequences can be problematic.

Ad content — text and images — also has a strong effect on whether ads get shown to men or women, even when the bidding strategy is the same and gender-agnostic targeting is used. In particular, the researchers found images had a surprisingly large effect on ad delivery. Ad URL destination has some effect — an ad pointing to a bodybuilding site and an ad pointing to a cosmetics site had a baseline delivery distribution of 48 percent men and 40 percent men respectively. The addition of a title and headline doesn’t change that much. But once the researchers added an image to the ad, the distribution pattern changed, with the bodybuilding site ad reaching an audience that was 75 percent male and the cosmetics ad reaching an audience that was 90 percent female. According to the researchers, their tests suggest, “Facebook has an automated image classification mechanism in place that is used to steer different ads towards different subsets of the user population.”

“In terms of credit, employment and housing ads, the problem with this system is that it discriminates where it shouldn’t: Five ads for lumber industry jobs were delivered to an audience that was more than 90 percent men and more than 70 percent white; five ads for janitorial work were delivered to an audience that was more than 65 percent women and 75 percent black,” the report adds. “Housing ads also showed a racial skew.”

The latest findings come after years of criticism of Facebook’s ad system. Last month, Facebook announced changes to the platform intended to prevent advertisers from deploying unfair credit, employment and housing ads. One week later, the U.S. Department of Housing and Urban Development sued Facebook for violating the Fair Housing Act.

Mark Zuckerberg Leveraged Facebook User Data To Fight Rivals and Help Friends, Leaked Documents Show

Facebook CEO Mark Zuckerberg once considered making deals with third-party developers just to help him find out how much users’ data is worth, NBC News reported on Tuesday. The report, which cites 4,000 leaked pages of internal documents, shines a light on the way senior company executives viewed attaching a dollar sign to sensitive user data, despite Facebook’s public commitment to protect such information. From the report:

In the same week, Zuckerberg floated the idea of pursuing 100 deals with developers “as a path to figuring out the real market value” of Facebook user data and then “setting a public rate” for developers. “The goal here wouldn’t be the deals themselves, but that through the process of negotiating with them we’d learn what developers would actually pay (which might be different from what they’d say if we just asked them about the value), and then we’d be better informed on our path to set a public rate,” Zuckerberg wrote in a chat. Facebook told NBC News that it was exploring ways to build a sustainable business, but ultimately decided not to go forward with these plans.

Zuckerberg was unfazed by the potential privacy risks associated with Facebook’s data-sharing arrangements. “I’m generally skeptical that there is as much data leak strategic risk as you think,” he wrote in the email to Lessin. “I think we leak info to developers but I just can’t think of any instances where that data has leaked from developer to developer and caused a real issue for us.”

The report also outlines how Facebook’s PR people try to spin things. An excerpt: In a March 2014 email discussing Zuckerberg’s keynote speech at the event, where he was due to announce the removal of developers’ access to friends’ data, Jonny Thaw, a director of communications, wrote that it “may be a tough message for some developers as it may inhibit their growth.” “So one idea that came up today was potentially talking in the keynote about some of the trust changes we’re making on Facebook itself. So the message would be: ‘trust is really important to us — on Facebook, we’re doing A, B and C to help people control and understand what they’re sharing — and with platform apps we’re doing D, E and F.'” If that doesn’t work, he added, “we could announce some of Facebook’s trust initiatives in the run up to F8” to make the changes for developers “seem more natural.”

Facebook Should Notify Users Who Interact With Fake Police ‘Sock Puppet’ Accounts

Despite Facebook’s repeated warnings that law enforcement is required to use “authentic identities” on the social media platform, cops continue to create fake and impersonator accounts to secretly spy on users. By pretending to be someone else, cops are able to sneak past the privacy walls users put up and bypass legal requirements that might require a warrant to obtain that same information.

EFF is now calling on Facebook to escalate the matter with law enforcement in the United States. In addition to suspending the fake accounts, Facebook should take the following actions to address the proliferation of fake/impersonator Facebook accounts operated by law enforcement. As part of its regular transparency reports, Facebook should publish data on the number of fake/impersonator law enforcement accounts identified, which agencies they belonged to, and what action was taken. When a fake/impersonator account is identified, Facebook should alert the users and groups that interacted with the account, whether directly or indirectly.

The article also suggests updating Facebook’s Terms of Service to explicitly prohibit fake/impersonator profiles by law enforcement groups, and updating Facebook pages of law enforcement groups to inform visitors when those groups have a written policy allowing fake/impersonator law enforcement accounts. “These four changes are relatively light lifts that would enhance transparency and establish real consequences for agencies that deliberately violate the rules…”

“Facebook’s practice of taking down these individual accounts when they learn about them from the press (or from EFF) is insufficient to deter what we believe is a much larger iceberg beneath the surface.”

Are Phone-Addicted Drivers More Dangerous Than Drunk Drivers?

After crunching data on 4.5 billion miles of driving, road-safety analytics company Zendrive concludes there’s a new threat which just last year claimed the lives of 6,227 pedestrians: drivers “under the influence of a smartphone.”

The study points out that drunk-driving fatalities peak after midnight, while distracted driving happens all day, concluding that distracted driving is now a bigger threat than drunk driving.

“Phone addicts are the new drunk drivers,” Zendrive concludes bluntly in its annual distracted driving study. The big picture: The continued increase in unsafe driving comes despite stricter laws in many states, as well as years of massive ad campaigns from groups ranging from cell phone carriers to orthopedic surgeons. “They hide in plain sight, blatantly staring at their phones while driving down the road,” Zendrive says in the study.

And it’s a growing problem. Over just the past year, Zendrive, which analyzes driver behavior for fleets and insurers, said the number of hardcore phone addicts doubled, now accounting for one in 12 drivers. If the current trend continues, that number will be one in five by 2022.

The report concludes drivers are 10 percent more distracted this year than last — and that phone addicts have their eyes off the road for 28% of their drive. Yet when asked to describe their driving, 93% of phone addicts said they believed they were “safe” — or “extremely safe” — drivers.

One even insisted that they never texted while driving, “but I like to FaceTime my friends while driving since it makes time go by faster.”

How DNA Companies Like Ancestry And 23andMe Are Using Your Genetic Data

In the past couple of years, genetic-testing companies like Ancestry and 23andMe have become popular ways to learn about family history and DNA. More than 12 million Americans have sent their DNA to companies like 23andMe and AncestryDNA to be analyzed. The spit-in-a-tube DNA you send in is anonymized and used for genetic drug research, and both companies have for some time been selling the data to third parties, like P&G Beauty and Pepto-Bismol, and to universities, like the University of Chicago. In fact, just last week the pharmaceutical giant GlaxoSmithKline announced a $300 million deal with 23andMe that lets it analyze stored samples, use the data to investigate new drugs to develop, and use genetic data in selecting patients for clinical trials. Both 23andMe and Ancestry say they will not share genetic information without a court order, though people are welcome to share the information online themselves, sometimes in order to find lost relatives or biological parents.

Amazon and Google Fight Bill That Prohibits Secretly Recording You

On Wednesday, the Illinois State Senate passed the Keep Internet Devices Safe Act, a bill that would ban manufacturers of devices that can record audio from doing so remotely without disclosing it to the customer. But after lobbying from trade associations that represent the interests of Google, Amazon — makers of the microphone-enabled Google Home and Alexa smart speakers, respectively — and Microsoft, among other companies, the interests of big tech won out… In its current, neutered form, the bill provides exclusive authority to the Attorney General to enforce the Act, which means regular citizens won’t be able to bring forward a case regarding tech giants recording them in their homes.

Ars Technica notes the move comes after Amazon admitted thousands of its employees listen to Alexa recordings — “something not mentioned in Echo’s terms of service or FAQ pages.”

Vice points out that sometimes those recordings are shared “even after users opt out of having their data used in the program.”

Police Are Using Google’s Location Data From ‘Hundreds of Millions’ of Phones

Police have used information from the search giant’s Sensorvault database to aid in criminal cases across the country, according to a report Saturday by The New York Times. The database has detailed location records from hundreds of millions of phones around the world, the report said. It’s meant to collect information on the users of Google’s products so the company can better target them with ads, and see how effective those ads are. But police have been tapping into the database to help find missing pieces in investigations.

Law enforcement can get “geofence” warrants seeking location data. Those kinds of requests have spiked in the last six months, and the company has received as many as 180 requests in one week, according to the report…. For geofence warrants, police carve out a specific area and time period, and Google can gather information from Sensorvault about the devices that were present during that window, according to the report. The information is anonymous, but police can analyze it and narrow it down to a few devices they think might be relevant to the investigation. Then Google reveals those users’ names and other data, according to the Times…
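The narrowing step described above can be pictured with a toy sketch — all data, field names, and coordinates here are hypothetical, not Sensorvault’s actual schema. Given a store of anonymized location records, a geofence query selects the device IDs seen inside a bounding box during a time window:

```python
from datetime import datetime, timezone

# Hypothetical anonymized location records (made-up IDs and coordinates).
records = [
    {"anon_id": "a1", "lat": 40.7130, "lon": -74.0060,
     "ts": datetime(2019, 4, 13, 21, 15, tzinfo=timezone.utc)},
    {"anon_id": "a2", "lat": 40.7130, "lon": -74.0059,
     "ts": datetime(2019, 4, 13, 23, 45, tzinfo=timezone.utc)},  # outside time window
    {"anon_id": "a3", "lat": 41.0000, "lon": -74.0060,
     "ts": datetime(2019, 4, 13, 21, 20, tzinfo=timezone.utc)},  # outside fence
]

def geofence(records, lat_min, lat_max, lon_min, lon_max, start, end):
    """Return anonymized IDs of devices seen in the box during [start, end]."""
    return sorted({
        r["anon_id"] for r in records
        if lat_min <= r["lat"] <= lat_max
        and lon_min <= r["lon"] <= lon_max
        and start <= r["ts"] <= end
    })

hits = geofence(records,
                40.7125, 40.7135, -74.0065, -74.0055,
                datetime(2019, 4, 13, 21, 0, tzinfo=timezone.utc),
                datetime(2019, 4, 13, 22, 0, tzinfo=timezone.utc))
print(hits)  # → ['a1']
```

In the process the Times describes, it is only after investigators shortlist a few of these anonymous IDs that Google reveals the corresponding users’ names and other account data.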

[T]he AP reported last year that Google tracked people’s location even after they’d turned off location-sharing on their phones.

Google’s data dates back “nearly a decade,” the Times reports — though in a statement, Google’s director of law enforcement and information security insisted “We vigorously protect the privacy of our users while supporting the important work of law enforcement.” (The Times also interviewed a man who was arrested and jailed for a week last year based partly on Google’s data — before eventually being released after the police found a more likely suspect.)

More jails replace in-person visits with awful video chat products

After April 15, inmates at the Adult Detention Center in Lowndes County, Mississippi will no longer be allowed to visit with family members face to face. Newton County, Missouri, implemented an in-person visitor ban last month. The Allen County Jail in Indiana phased out in-person visits earlier this year. All three changes are part of a nationwide trend toward “video visitation” services. Instead of seeing their loved ones face to face, inmates are increasingly limited to talking to them through video terminals. Most jails give family members a choice between using video terminals at the jail — which are free — and paying fees to make calls from home using a PC or mobile device.

Even some advocates of the change admit that it has downsides for inmates and their families. Ryan Rickert, jail administrator at the Lowndes County Adult Detention Center, acknowledged to The Commercial Dispatch that inmates were disappointed they wouldn’t get to see family members anymore. Advocates of the approach point to an upside for families: they can now make video calls to loved ones from home instead of having to physically travel to the jail. But the services are ludicrously expensive. Video calls cost 40 cents per minute in Newton County, 50 cents per minute in Lowndes County, and $10 per call in Allen County. Outside of prison, of course, video calls on Skype or FaceTime are free.

These “visitation” services are often “grainy and jerky, periodically freezing up altogether,” reports Ars. As for why so many jails are adopting them, it has a lot to do with money. “In-person visits are labor intensive. Prison guards need to escort inmates to and from visitation rooms, supervise the visits, and in some cases pat down visitors for contraband. In contrast, video terminals can be installed inside each cell block, minimizing the need to move inmates around the jail.” The video-visitation systems also directly generate revenue for jails.

Two-Thirds of Consumers Don’t Expect Google To Track Them the Way It Does

Last summer, an AP investigation found that Google’s location tracking remains on even if you turn it off in Google Maps, Search, and other apps. Research from Vanderbilt professor Douglas Schmidt found that Google engages in “passive” data collection, often without the user’s knowledge. His research also showed that Google utilizes data collected from other sources to de-anonymize existing user data.

Digital Content Next, the trade association of online publishers, surveyed a nationally representative sample to find out what people expect from Google — and, as with a similar study we conducted last year about Facebook, the results were unsettling.

Our findings show that many of Google’s data practices deviate from consumer expectations. We find it even more significant that consumers’ expectations are at an all-time low even after 2018, a year in which awareness of consumer privacy reached new heights.

The results of the study are consistent with our Facebook study: People don’t want surveillance advertising. A majority of consumers indicated they don’t expect to be tracked across Google’s services, let alone be tracked across the web in order to make ads more targeted.

There was only one question where a small majority of respondents felt that Google was acting according to their expectations: merging data from search queries with other data it collects on its own services. By a similarly small majority, respondents said they don’t expect Google to connect that data back to a user’s personal account. Google began doing both of these things in 2016, after previously promising it wouldn’t.

‘Highly confidential’ documents reveal Facebook used VPN app to track competitors

Internal documents, made public as part of a cache of documents released by the UK parliament’s Select Committee investigating Facebook, show just how close an eye the social network was keeping on competitors like WhatsApp and Snapchat, both of which became acquisition targets.

The documents, which are labeled “highly confidential,” show slides from an internal presentation in 2013 that compares Facebook’s reach to competing apps, including WhatsApp and Snapchat.

Facebook’s presentation relied on data from Onavo, the virtual private network (VPN) service which Facebook also acquired several months later. Facebook’s use of Onavo, which has been likened to “corporate spyware,” has itself been controversial.

The company was forced to remove Onavo from Apple’s App Store earlier this year after Apple changed its developer guidelines to prohibit apps from collecting data about which other services are installed on its users’ phones.

Survey: 25% Think AI Could Make Better Policy Decisions Than Politicians

A new survey on Europeans’ attitudes towards technology found that a quarter of people would prefer it if policy decisions were made by artificial intelligence instead of politicians. The Center for the Governance of Change at Spain’s IE University polled 2,500 adults in the UK, Spain, Germany, France, Ireland, Italy, and the Netherlands in January. The results reflect an intense anxiety about the changes brought about by advances in tech, with more than half of respondents worried that jobs would be replaced by robots, and 70% saying that unchecked technological innovation could do more harm than good to society. Respondents also expressed concerns about the impact of digital relationships replacing human contact as more people spend time online. Perhaps most interestingly, a quarter of the respondents said they would prefer AI to guide decisions about governance of their country over politicians.

Around the world, citizens have expressed a growing disillusionment with democracy, and an increased skepticism that their voice has an impact on political decisions. But algorithmic decisions aren’t a problem-free solution: they can be embedded with the prejudice and bias of their programmers or manipulated to achieve specific outcomes, making the results as potentially problematic as the ones made by humans. The study also found that respondents expected governments to reduce the disruption that technology might have on their lives with regulation, limits on automation, and support for people affected by job losses. This “highlights the paradox in which we live,” the authors wrote. “People are disillusioned with governments, yet at the same time ask them to tackle the societal and economic negative effects that emerging technologies might have.”

You Will Soon Be Able To Pay Your Subway Fare With Your Face in China

China has led the world in adoption of smartphone-based mobile payments to the point where the central bank had to remind merchants not to discriminate against cash. The next phase of development may be to pay with your face.

In Shenzhen, the local subway operator is testing various advanced technologies backed by the ultra-fast 5G network, including facial-recognition ticketing.

At the Futian station, instead of presenting a ticket or scanning a QR bar code on their smartphones, commuters can scan their faces on a tablet-sized screen mounted on the entrance gate and have the fare automatically deducted from their linked accounts.

Currently in trial mode, the facial-recognition ticketing service could in future help improve the efficiency of handling up to 5 million rides per day on the city’s subway network. Shenzhen Metro did not say when it will roll out the facial-payment service.

The introduction of facial recognition-and-payment services to the public transit system marks another step by China toward integrating facial recognition and other artificial intelligence-based technology into everyday life in the world’s most populous nation.

Consumers can already pay for fried chicken at KFC in China with its “Smile to Pay” facial recognition system, first introduced at an outlet in Hangzhou in January 2017.

“To use facial ticketing in the future, passengers will also need preregistration of their facial information and link their payment methods to their accounts, just like them making payments at the KFC restaurant,” said a staff member at the Futian station’s demonstration area in Shenzhen.

Chinese cities are among the most digitally savvy and cashless in the world, with about 583 million people in China using their smartphones to make payments last year, according to the China Internet Network Information Center. Nearly 68 percent of China’s internet users used a mobile wallet for their offline payments.

Paris street to ‘shut out Instagrammers’

Instagrammers love the colorful homes in Paris’s Rue Cremieux. But residents of Rue Cremieux have now had enough and are calling on the city council to restrict access at certain times.

Residents have asked the city council to provide a gate that can be closed at peak times — evenings, weekends and at sunrise and sunset, when good light attracts people searching for a perfect Instagram picture. One resident told radio station France Info: “We sit down to eat and just outside we have people taking photos, rappers who take two hours to film a video right beneath the window, or bachelorette parties who scream for an hour. Frankly, it’s exhausting.”

Hard Disks Can Be Turned Into Listening Devices

Researchers from the University of Michigan and Zhejiang University in China have found that hard disk drives can be turned into listening devices, using malicious firmware and signal-processing calculations.

For a study titled “Hard Drive of Hearing: Disks that Eavesdrop with a Synthesized Microphone,” computer scientists Andrew Kwong, Wenyuan Xu, and Kevin Fu describe an acoustic side-channel that can be accessed by measuring how sound waves make hard disk parts vibrate. “Our research demonstrates that the mechanical components in magnetic hard disk drives behave as microphones with sufficient precision to extract and parse human speech,” their paper, obtained by The Register ahead of its formal publication, stated. “These unintentional microphones sense speech with high enough fidelity for the Shazam service to recognize a song recorded through the hard drive.”

The team’s research work, scheduled to be presented in May at the 2019 IEEE Symposium on Security and Privacy, explores how it’s possible to alter HDD firmware to measure the offset of a disk drive’s read/write head from the center of the track it’s seeking. The offset is referred to as the Positional Error Signal (PES) and hard drives monitor this signal to keep the read/write head in the optimal position for reading and writing data. PES measurements must be very fine because drive heads can only be off by a few nanometers before data errors arise. The sensitivity of the gear, however, means human speech is sufficient to move the needle, so to speak. Vibrations from HDD parts don’t yield particularly good sound, but with digital filtering techniques, human speech can be discerned, given the right conditions.
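The filtering step can be illustrated with a rough sketch — a hypothetical simulation, not the researchers’ actual pipeline. Model a PES-like trace as a faint audio-band tone buried under slow head-tracking drift and noise, then recover the tone by zeroing FFT bins outside the speech band:

```python
import numpy as np

fs = 16_000                      # assumed PES sampling rate, in Hz
t = np.arange(fs) / fs           # one second of samples

speech = 0.05 * np.sin(2 * np.pi * 300 * t)   # faint 300 Hz "speech" tone
drift = 1.0 * np.sin(2 * np.pi * 2 * t)       # slow head-positioning drift
noise = 0.02 * np.random.default_rng(0).standard_normal(fs)
pes = speech + drift + noise                  # simulated PES trace

# Crude band-pass at 100-4000 Hz: zero spectrum outside the speech band.
spectrum = np.fft.rfft(pes)
freqs = np.fft.rfftfreq(fs, d=1 / fs)
spectrum[(freqs < 100) | (freqs > 4000)] = 0
recovered = np.fft.irfft(spectrum)

# After filtering, the buried tone dominates the recovered signal.
dominant = freqs[np.argmax(np.abs(np.fft.rfft(recovered)))]
print(dominant)  # → 300.0
```

The real attack faces a much harder version of this problem — broadband speech rather than a pure tone, and far worse signal-to-noise ratios — which is why, as noted below, it only works on loud conversations.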

“Flashing HDD firmware is a prerequisite for the snooping […] because the ATA protocol does not expose the PES,” The Register reports. “To exfiltrate captured data, the three boffins suggest transmitting it over the internet by modifying Linux operating system files to create a reverse shell with root privileges or storing it to disk for physical recovery at a later date.”

The researchers note that this technique does require a fairly loud conversation to take place near the eavesdropping hard drive. “To record comprehensible speech, the conversation had to reach 85 dBA, with 75 dBA being the low threshold for capturing muffled sound,” the report says. “To get Shazam to identify recordings captured through a hard drive, the source file had to be played at 90 dBA. Which is pretty loud. Like lawn mower or food blender loud.”

Age of Surveillance Capitalism: “We Thought We Were Searching Google, But Google Was Searching Us”

Corporations have created a new kind of marketplace out of our private human experiences. That is the conclusion of an explosive new book that argues big tech platforms like Facebook and Google are elephant poachers, and our personal data is ivory tusks. Author Shoshana Zuboff writes in “The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power”: “At its core, surveillance capitalism is parasitic and self-referential. It revives Karl Marx’s old image of capitalism as a vampire that feeds on labor, but with an unexpected turn. Instead of labor, surveillance capitalism feeds on every aspect of every human’s experience.”

Facebook’s global lobbying against data privacy laws

Facebook has targeted politicians around the world – including the former UK chancellor, George Osborne – promising investments and incentives while seeking to pressure them into lobbying on Facebook’s behalf against data privacy legislation, an explosive new leak of internal Facebook documents has revealed.

The documents, which have been seen by the Observer and Computer Weekly, reveal a secretive global lobbying operation targeting hundreds of legislators and regulators in an attempt to procure influence across the world, including in the UK, US, Canada, India, Vietnam, Argentina, Brazil, Malaysia and all 28 states of the EU. The documents include details of how Facebook:

– Lobbied politicians across Europe in a strategic operation to head off “overly restrictive” GDPR legislation. They include extraordinary claims that the Irish prime minister said his country could exercise significant influence as president of the EU, promoting Facebook’s interests even though technically it was supposed to remain neutral.

– Used chief operating officer Sheryl Sandberg’s feminist memoir Lean In to “bond” with female European commissioners it viewed as hostile.

– Threatened to withhold investment from countries unless they supported or passed Facebook-friendly laws.

The documents appear to emanate from a court case against Facebook by the app developer Six4Three in California, and reveal that Sandberg considered European data protection legislation a “critical” threat to the company. A memo written after the Davos economic summit in 2013 quotes Sandberg describing the “uphill battle” the company faced in Europe on the “data and privacy front” and its “critical” efforts to head off “overly prescriptive new laws”.

Most revealingly, it includes details of the company’s “great relationship” with Enda Kenny, the Irish prime minister at the time, one of a number of people it describes as “friends of Facebook”. Ireland plays a key role in regulating technology companies in Europe because its data protection commissioner acts for all 28 member states. The memo has inflamed data protection advocates, who have long complained about the company’s “cosy” relationship with the Irish government.

Facebook admits 18% of Research spyware users were teens, not <5%

Facebook has changed its story after initially trying to downplay how it targeted teens with its Research program, which a TechCrunch investigation revealed was paying them gift cards to monitor all their mobile app usage and browser traffic. “Less than 5 percent of the people who chose to participate in this market research program were teens,” a Facebook spokesperson told TechCrunch and many other news outlets in a damage-control effort 7 hours after we published our report on January 29th. At the time, Facebook claimed that it had removed its Research app from iOS. The next morning we learned that wasn’t true, as Apple had already forcibly blocked the Facebook Research app for violating its Enterprise Certificate program, which is supposed to be reserved for companies distributing internal apps to employees.

It turns out that wasn’t the only time Facebook deceived the public in its response regarding the Research VPN scandal. TechCrunch has obtained Facebook’s unpublished February 21st response to questions about the Research program in a letter from Senator Mark Warner, who wrote to CEO Mark Zuckerberg that “Facebook’s apparent lack of full transparency with users – particularly in the context of ‘research’ efforts – has been a source of frustration for me.”

In the response from Facebook’s VP of US public policy Kevin Martin, the company admits that (emphasis ours) “At the time we ended the Facebook Research App on Apple’s iOS platform, less than 5 percent of the people sharing data with us through this program were teens. Analysis shows that number is about 18 percent when you look at the complete lifetime of the program, and also add people who had become inactive and uninstalled the app.” So 18 percent of Research testers were teens. It was only less than 5 percent when Facebook got caught. Given that users aged 13 to 35 were eligible for Facebook’s Research program, 13-to-18-year-olds made up 22 percent of the eligible age range. That means Facebook clearly wasn’t trying to minimize teen involvement, nor were teens just a tiny fraction of users.

Oceans Are Getting Louder, Posing Potential Threats to Marine Life

Slow-moving, hulking ships crisscross miles of ocean in a lawn mower pattern, wielding an array of 12 to 48 air guns blasting pressurized air repeatedly into the depths of the ocean.

The sound waves hit the sea floor, penetrating miles into it, and bounce back to the surface, where they are picked up by hydrophones. The acoustic patterns form a three-dimensional map of where oil and gas most likely lie.

The seismic air guns probably produce the loudest noise that humans use regularly underwater, and it is about to become far louder in the Atlantic. As part of the Trump administration’s plans to allow offshore drilling for gas and oil exploration, five companies are in the process of seeking permits to carry out seismic mapping with the air guns all along the Eastern Seaboard, from Central Florida to the Northeast, for the first time in three decades. The surveys haven’t started yet in the Atlantic, but now that the ban on offshore drilling has been lifted, companies can be granted access to explore regions along the Gulf of Mexico and the Pacific.

Even Years Later, Twitter Doesn’t Delete Your Direct Messages

Twitter retains direct messages for years, including messages you and others have deleted, as well as data sent to and from accounts that have been deactivated and suspended, according to security researcher Karan Saini. In an archive of his data downloaded through Twitter’s website, Saini found years-old messages involving accounts that were no longer on Twitter. He also reported a similar bug, found a year earlier but not disclosed until now, that allowed him to use a since-deprecated API to retrieve direct messages even after a message was deleted by both the sender and the recipient — though the bug couldn’t retrieve messages from suspended accounts.

Direct messages once let users “unsend” messages from someone else’s inbox, simply by deleting it from their own. Twitter changed this years ago, and now only allows a user to delete messages from their account. “Others in the conversation will still be able to see direct messages or conversations that you have deleted,” Twitter says in a help page. Twitter also says in its privacy policy that anyone wanting to leave the service can have their account “deactivated and then deleted.” After a 30-day grace period, the account disappears, along with its data. But, in our tests, we could recover direct messages from years ago — including old messages that had since been lost to suspended or deleted accounts. By downloading your account’s data, it’s possible to download all of the data Twitter stores on you.