Archives June 2023

US Surgeon General Warns on Possible Social Media Harms for Teens

“A recent advisory from U.S. Surgeon General Dr. Vivek Murthy says there’s not enough evidence to determine whether social media is safe enough for children and adolescents when it comes to their mental health.” (Although a CNN news anchor points out that “Nearly all of the research points to negative impacts.”)

CNN’s Chief Medical Correspondent interviewed U.S. Surgeon General Dr. Vivek Murthy “to examine what led him to sound the alarm, and who should be responsible for tackling the issue.” And the surgeon general remembers when his five-year-old daughter asked to post a picture on social media. “I think finding the right balance is not easy, in part because, you know, the platforms weren’t necessarily designed for balance. They were designed to maximize how much time we spend on them.”
CNN: How worried are you? When people hear something coming from the surgeon general’s office, they think of, you know, smoking, opioids, things like this. Social media — is it at that level of concern for you?

Surgeon General: Yes, I would say yes, it is. But it’s more complicated… because we know that some kids do actually get benefit from their experience of social media. Some are able to connect more easily with friends and family, to express themselves more creatively and more openly than they otherwise would, and to find community… But one of the things that has become an increasing source of worry for me is the association between social media use and harmful outcomes… [W]e’re asking parents to somehow figure it out all on their own. And the reason I issued an advisory on this topic is I worry that we have not taken enough action to support parents and kids…

CNN: What is the level of evidence about the dangers of social media and what is the level of evidence that you want? I mean, what does it take for you as a surgeon general to act on this…?

Surgeon General: I think the first question I’m asking is where is the evidence of safety…? There’s a lot of association data, right, that’s showing an association between use and certain negative outcomes. For example, kids who use more than 3 hours of social media a day face double the risk of depression and anxiety symptoms. But we also know that kids are telling us in their own words and their own experience how they’re experiencing social media. So, for example, nearly half of adolescents are saying that using social media makes them feel worse about their body image…

And one of the consistent messages I hear from researchers who have been studying this area for a long time is that they are having a hard time getting access to the data from social media companies. You know, as a parent, I don’t ever want to feel like someone or anyone is hiding information from me about how a product affects my child. But that’s how a lot of parents are feeling right now. And so that’s a place where I think transparency matters. Let’s get the data out there so independent researchers can assess it and can help us understand the harms and benefits and which kids are most impacted so we can design, you know, our approach, you know, in a more informed way…

One of the things we call for in my advisory is for policymakers to step in and establish actual, transparent, enforceable safety standards like we do for other products so that parents have some reassurance around safety… This technology is already being used by 95% of kids, right? And I don’t think it’s realistic to put the genie back in the bottle here, or to say somehow nobody should be using social media; that’s not the goal here… We don’t leave it up to car manufacturers to determine whether or not they’ve hit the standards. We don’t do that with medications either. There should be, you know, an independent authority that parents can trust is looking solely out for the welfare of their kids, and they should be the ones who enforce these standards…

You know, just to put it bluntly, I do not think we have done our job as a society to have the backs of kids and parents on this because we haven’t moved fast enough to get the information to ultimately guide them on safe use… [P]arents across the country, people are trying to do the best they can with limited information.

The surgeon general also says his ideal legislation would “help to reduce kids exposure to harmful content” and include “restrictions on features that seek to manipulate kids into spending excessive amounts of time on these platforms.”

10 Years After Snowden’s First Leak, What Have We Learned?

The world got a first glimpse into the US government’s far-reaching surveillance of American citizens’ communications — namely, their Verizon telephone calls — 10 years ago this week when Edward Snowden’s initial leaks hit the press. […] In the decade since then, “reformers have made real progress advancing the bipartisan notion that Americans’ liberty and security are not mutually exclusive,” [US Senator Ron Wyden (D-OR)] said. “That has delivered tangible results: in 2015 Congress ended bulk collection of Americans’ phone records by passing the USA Freedom Act.” This bill sought to end the daily snooping into Americans’ phone calls by forcing telcos to collect the records and make the Feds apply for the information.

That same month, a federal appeals court unanimously ruled that the NSA’s phone-records surveillance program was unlawful. The American Civil Liberties Union (ACLU) and the New York Civil Liberties Union sued to end the secret phone spying program, which had been approved by the Foreign Intelligence Surveillance Court, just days after Snowden disclosed its existence. “Once it was pushed out into open court, and the court was able to hear from two sides and not just one, the court held that the program was illegal,” Ben Wizner, director of the ACLU Speech, Privacy and Technology project, told The Register. The Freedom Act also required the federal government to declassify and release “significant” opinions of the Foreign Intelligence Surveillance Court (FISC), and authorized the appointment of independent amici — friends of the court intended to provide an outside perspective. The FISC was established in 1978 under FISA — the legislative instrument that allows warrantless snooping. And prior to the Freedom Act, this top-secret court only heard the government’s perspective on things, like why the FBI and NSA should be allowed to scoop up private communications.

“To its credit, the government has engaged in reforms, and there’s more transparency now that, on the one hand, has helped build back some trust that was lost, but also has made it easier to shine a light on surveillance misconduct that has happened since then,” Jake Laperruque, deputy director of the Center for Democracy and Technology’s Security and Surveillance Project, told The Register. Wyden also pointed to the sunsetting of the “deeply flawed surveillance law,” Section 215 of the Patriot Act, as another win for privacy and civil liberties. That law expired in March 2020 after Congress did not reauthorize it. “For years, the government relied on Section 215 of the USA Patriot Act to conduct a dragnet surveillance program that collected billions of phone records (Call Detail Records or CDR) documenting who a person called and for how long they called them — more than enough information for analysts to infer very personal details about a person, including who they have relationships with, and the private nature of those relationships,” Electronic Frontier Foundation’s Matthew Guariglia, Cindy Cohn and Andrew Crocker said.
James Clapper, the former US Director of National Intelligence, “stated publicly that the Snowden disclosures accelerated by seven years the adoption of commercial encryption,” Wizner said. “At the individual level, and at the corporate level, we are more secure.”

“And at the corporate level, what the Snowden revelations taught big tech was that even as the government was knocking on the front door, with legal orders to turn over customer data, it was breaking in the backdoor,” Wizner added. “Government was hacking those companies, finding the few points in their global networks where data passed unencrypted, and siphoning it off.” “If you ask the government — if you caught them in a room, and they were talking off the record — they would say the biggest impact for us from the Snowden disclosures is that it made big tech companies less cooperative,” he continued. “I regard that as a feature, not a bug.”

The real issue that the Snowden leaks revealed is that America’s “ordinary system of checks and balances doesn’t work very well for secret national security programs,” Wizner said. “Ten years have gone by,” since the first Snowden disclosures, “and we don’t know what other kinds of rights-violating activities have been taking place in secret, and I don’t trust our traditional oversight systems, courts and the Congress, to ferret those out,” Wizner said. “When you’re dealing with secret programs in a democracy, it almost always requires insiders who are willing to risk their livelihoods and their freedom to bring the information to the public.”

US Intelligence Confirms It Buys Americans’ Personal Data

A newly declassified government report confirms for the first time that U.S. intelligence and spy agencies purchase vast amounts of commercially available information on Americans, including data from connected vehicles, web browsing activity, and smartphones. From a report:
By the U.S. government’s own admission, the data it purchases “clearly provides intelligence value,” but also “raises significant issues related to privacy and civil liberties.” The Office of the Director of National Intelligence (ODNI) declassified and released the January 2022-dated report on Friday, following a request by Sen. Ron Wyden (D-OR) to disclose how the intelligence community uses commercially available data. This kind of data is generated by internet-connected devices, such as phone apps and vehicles that collect granular location data and web trackers that follow users as they browse the internet, and is made available for purchase by data brokers.

The declassified report is the U.S. government’s first public disclosure revealing the risks associated with commercially available data of Americans that can be readily purchased by anyone, including adversaries and hostile nations. The United States does not have a privacy or data protection law governing the sharing or selling of Americans’ private information. “In a way that far fewer Americans seem to understand, and even fewer of them can avoid, [commercially available information] includes information on nearly everyone that is of a type and level of sensitivity that historically could have been obtained” by other intelligence gathering capabilities, such as search warrants, wiretaps and surveillance, the report says.

Autonomous Waymo Car Runs Over Dog In San Francisco

One of Alphabet’s Waymo autonomous cars has killed a pet dog. TechCrunch spotted the public report of the incident, which says one of the Waymo Jaguar I-Pace cars ran over a dog in San Francisco while in autonomous mode with a safety driver behind the wheel.

Waymo’s collision report says: “On May 21, 2023 at 10:56 AM PT a Waymo Autonomous Vehicle (“Waymo AV”) operating in San Francisco, California was in a collision involving a small dog on Toland Street at Toland Place. The Waymo AV was traveling southwest on Toland Street when a small dog ran into the street in front of the Waymo AV. The Waymo AV then made contact with the dog, which did not survive. At the time of the impact, the Waymo AV’s Level 4 ADS was engaged in autonomous mode, and a test driver was present (in the driver’s seating position). The Waymo AV sustained damage.”

The collision was a block from Waymo’s Toland Depot, a 120,000-square-foot warehouse that houses at least 50 autonomous cars. The speed limit on Toland Street is 25 mph, according to posted signs viewable on Google Maps. From that Street View link, the road looks like a busy industrial area with many warehouses, truck delivery areas, and barbed-wire fences. The incident is Waymo’s first reported fatality.
Waymo sent along a statement: “On May 21 in San Francisco, a small dog ran in front of one of our vehicles with an autonomous specialist present in the driver’s seat, and, unfortunately, contact was made. The investigation is ongoing, however the initial review confirmed that the system correctly identified the dog which ran out from behind a parked vehicle but was not able to avoid contact. We send our sincere condolences to the dog’s owner. The trust and safety of the communities we are in is the most important thing to us and we’re continuing to look into this on our end.”

In early 2018, an autonomous Uber vehicle in Tempe, Arizona, hit and killed a woman. “According to Tempe PD, the car was in autonomous mode at the time of the incident, with a vehicle operator sitting behind the wheel,” reported Gizmodo at the time. The company went on to suspend self-driving car tests in all North American cities after the fatal accident.

‘Extremely Remorseful’ Lawyers Confronted by Judge Over ‘Legal Gibberish’ Citations from ChatGPT

Two apologetic lawyers responding to an angry judge in Manhattan federal court blamed ChatGPT Thursday for tricking them into including fictitious legal research in a court filing… [Attorney Steven A. Schwartz] told U.S. District Judge P. Kevin Castel he was “operating under a misconception … that this website was obtaining these cases from some source I did not have access to.” He said he “failed miserably” at doing follow-up research to ensure the citations were correct.

“I did not comprehend that ChatGPT could fabricate cases,” Schwartz said…

The judge confronted Schwartz with one legal case invented by the computer program. It was initially described as a wrongful death case brought by a woman against an airline only to morph into a legal claim about a man who missed a flight to New York and was forced to incur additional expenses. “Can we agree that’s legal gibberish?” Castel asked.

Schwartz said he erroneously thought that the confusing presentation resulted from excerpts being drawn from different parts of the case. When Castel finished his questioning, he asked Schwartz if he had anything else to say. “I would like to sincerely apologize,” Schwartz said. He added that he had suffered personally and professionally as a result of the blunder and felt “embarrassed, humiliated and extremely remorseful.”

He said that he and the firm where he worked — Levidow, Levidow & Oberman — had put safeguards in place to ensure nothing similar happens again.

An attorney for the law firm also told the judge that lawyers have historically had a hard time with technology, particularly new technology. “And it’s not getting easier.”

FBI Abused Spy Law 280,000 Times In a Year

The FBI misused surveillance powers granted by Section 702 of the Foreign Intelligence Surveillance Act (FISA) over 278,000 times between 2020 and early 2021 to conduct warrantless searches on George Floyd protesters, January 6 Capitol rioters, and donors to a congressional campaign, according to a newly unclassified court opinion. The Register reports:

On Friday, the US Foreign Intelligence Surveillance Court made public a heavily redacted April 2022 opinion [PDF] that details hundreds of thousands of violations of Section 702 of the Foreign Intelligence Surveillance Act (FISA) — the legislative instrument that allows warrantless snooping. The Feds were found to have abused the spy law in a “persistent and widespread” manner, according to the court, repeatedly failing to adequately justify the need to go through US citizens’ communications using a law aimed at foreigners.

The court opinion details FBI queries run on thousands of individuals between 2020 and early 2021. This includes 133 people arrested during the George Floyd protests and more than 19,000 donors to a congressional campaign. In the latter, “the analyst who ran the query advised that the campaign was a target of foreign influence, but NSD determined that only eight identifiers used in the query had sufficient ties to foreign influence activities to comply with the querying standard,” the opinion says, referring to the Justice Department’s National Security Division (NSD). In other words, there wasn’t a strong enough foreign link to fully justify the communications search.

For the Black Lives Matter protests, the division determined that the FBI queries “were not reasonably likely to retrieve foreign intelligence information or evidence of a crime.” Again, an overreach of foreign surveillance powers. Additional “significant violations of the querying standard” occurred in searches related to the January 6, 2021 breach of the US Capitol, domestic drug and gang investigations, and domestic terrorism probes, according to the court. It’s said that more than 23,000 queries were run on people suspected of storming the Capitol.

Supreme Court Declines To Hear Challenge To Warrantless Pole Camera Surveillance

The U.S. Supreme Court [Monday] declined to hear Moore v. United States, leaving in place a patchwork of lower court decisions on an important and recurring question about privacy rights in the face of advancing surveillance technology. In this case, police secretly attached a small camera to a utility pole, using it to surveil a Massachusetts home 24/7 for eight months — all without a warrant. Law enforcement could watch the camera’s feed in real time, and remotely pan, tilt, and zoom close enough to read license plates and see faces. They could also review a searchable, digitized record of this footage at their convenience. The camera captured every coming and going of the home’s residents and their guests over eight months. As a result, the government targeted the home of a community pillar — a lawyer, respected judicial clerk, devoted church member, and a grandmother raising her grandkids — to cherry-pick images from months of unceasing surveillance in an effort to support unwarranted criminal charges against an innocent person.

Federal courts of appeals and state supreme courts have divided on the question of whether such sweeping surveillance is a Fourth Amendment search requiring a warrant. The highest courts of Massachusetts, Colorado, and South Dakota have held that long-term pole camera surveillance of someone’s home requires a warrant. In Moore v. United States, the members of the full en banc U.S. Court of Appeals for the First Circuit split evenly on the question, with three judges explaining that a warrant is required, and three judges expressing the belief that the Fourth Amendment imposes no limit on this invasive surveillance. This issue will continue to arise in the lower courts; the ACLU filed an amicus brief on the question in the U.S. Court of Appeals for the Tenth Circuit earlier this month.

Instagram’s Recommendation Algorithms Are Promoting Pedophile Networks

Accounts found by the researchers are advertised using blatant and explicit hashtags like #pedowhore, #preteensex, and #pedobait. They offer “menus” of content for users to buy or commission, including videos and imagery of self-harm and bestiality. When researchers set up a test account and viewed content shared by these networks, they were immediately recommended more accounts to follow. As the WSJ reports: “Following just a handful of these recommendations was enough to flood a test account with content that sexualizes children.”

In addition to problems with Instagram’s recommendation algorithms, the investigation also found that the site’s moderation practices frequently ignored or rejected reports of child abuse material. The WSJ recounts incidents where users reported posts and accounts containing suspect content (including one account that advertised underage abuse material with the caption “this teen is ready for you pervs”) only for the content to be cleared by Instagram’s review team, or for the reporting user to be told in an automated message […]. The report also looked at other platforms but found them less amenable to growing such networks. According to the WSJ, the Stanford investigators found “128 accounts offering to sell child-sex-abuse material on Twitter, less than a third the number they found on Instagram” despite Twitter having far fewer users, and that such content “does not appear to proliferate” on TikTok. The report noted that Snapchat did not actively promote such networks as it’s mainly used for direct messaging.

In response to the report, Meta said it was setting up an internal task force to address the issues raised by the investigation. “Child exploitation is a horrific crime,” the company said. “We’re continuously investigating ways to actively defend against this behavior.” Meta noted that in January alone it took down 490,000 accounts that violated its child safety policies and over the last two years has removed 27 pedophile networks. The company, which also owns Facebook and WhatsApp, said it’s also blocked thousands of hashtags associated with the sexualization of children and restricted these terms from user searches.

Arctic Could Be Sea Ice-Free in the Summer by the 2030s

“We are very quickly about to lose the Arctic summer sea-ice cover, basically independent of what we are doing,” Dirk Notz, a climate scientist at the University of Hamburg in Germany tells the New York Times’ Raymond Zhong. “We’ve been waiting too long now to do something about climate change to still protect the remaining ice.” An ice-free summer, also called a “blue ocean event,” will happen when the sea ice drops below one million square kilometers (386,102 square miles), writes Jonathan Bamber, a professor of physical geography at the University of Bristol, in the Conversation. This equates to just 15 percent of the Arctic’s seasonal minimum ice cover of the late 1970s, per the Times.
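The two figures above can be cross-checked with a quick back-of-the-envelope calculation: if one million square kilometers is about 15 percent of the late-1970s seasonal minimum, the implied baseline works out to roughly 6.7 million square kilometers. A minimal sketch (the constants below are taken from the article; the baseline is derived, not reported):

```python
# "Blue ocean event" threshold: Arctic sea ice extent below 1 million km^2.
THRESHOLD_KM2 = 1_000_000

# The article says this threshold equals ~15% of the seasonal minimum
# ice cover of the late 1970s.
FRACTION_OF_1970S_MINIMUM = 0.15

# Implied late-1970s seasonal minimum extent (a derived estimate).
implied_1970s_minimum = THRESHOLD_KM2 / FRACTION_OF_1970S_MINIMUM
print(f"Implied late-1970s seasonal minimum: {implied_1970s_minimum:,.0f} km^2")
# roughly 6.7 million km^2
```

In other words, reaching the "ice-free" threshold would mean losing about 85 percent of the late-1970s summer minimum.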

Previous assessments using models have estimated an ice-free summer under high and intermediate emissions scenarios by 2050. But researchers noticed differences between what climate models predicted about what would happen to sea ice and what they’ve actually seen through observations, according to Bob Weber of the Canadian Press. “The models, on average, underestimate sea ice decline compared with observations,” says Nathan Gillett, an environment and climate change Canada scientist, to Weber.

Now, in a new study published in Nature Communications, Notz, Gillett and their colleagues tweaked these models to more closely fit satellite data collected over the past 40 years. Using these modified models, the researchers projected ice changes under different possible levels of greenhouse gas emissions. Their paper suggests that regardless of emissions scenario, “we may experience an unprecedented ice-free Arctic climate in the next decade or two.” Under a high emissions scenario, the Arctic could see a sustained loss of sea ice from August until as late as October before the 2080s, lead author Seung-Ki Min, a climate scientist at Pohang University of Science and Technology in South Korea, tells CNN’s Rachel Ramirez.

The AI Firm That Conducted ‘State Surveillance’ of Social Media Posts

An industrial estate in Yorkshire is an unlikely location for a state surveillance operation. But these blank-looking warehouses are home to an artificial intelligence (AI) company used by the Government to monitor people’s posts on social media.

Logically has been paid more than £1.2 million of taxpayers’ money to analyse what the Government terms “disinformation” – false information deliberately seeded online – and “misinformation”, which is false information that has been spread inadvertently.

It does this by “ingesting” material from hundreds of thousands of media sources and “all public posts on major social media platforms”, using AI to identify those that are potentially problematic.

It also has a “partnership” with Facebook, which appears to grant Logically’s fact-checkers huge influence over the content other people see.

A joint press release issued in July 2021 suggests that Facebook will limit the reach of certain posts if Logically says they are untrue.

“When Logically rates a piece of content as false, Facebook will significantly reduce its distribution so that fewer people see it, apply a warning label to let people know that the content has been rated false, and notify people who try to share it,” states the press release.