Resources

Stare Into The Lights My Pretties

Children as young as 13 are attending ‘smartphone rehab’

Children refusing to put down their phones is a common flashpoint in many homes, with a third of British children aged 12 to 15 admitting they do not have a good balance between screen time and other activities.

But in the US, the problem has become so severe for some families that children as young as 13 are being treated for digital technology addiction.

One ‘smartphone rehab’ centre near Seattle has started offering residential “intensive recovery programs” for teenagers who have trouble controlling their use of electronic devices.

The Restart Life Centre says parents have been asking it to offer courses of treatment to their children for more than eight years.

Hilarie Cash, the Centre’s founder, told Sky News smartphones, tablets and other mobile devices can be so stimulating and entertaining that they “override all those natural instincts that children actually have for movement and exploration and social interaction”.

Child psychotherapist Julie Lynn Evans, who has worked with hospitals, schools and families for 25 years, said her workload has significantly increased since the use of smartphones became widespread among young people.

“It’s a simplistic view, but I think it is the ubiquity of broadband and smartphones that has changed the pace and the power and the drama of mental illness in young people,” she told The Telegraph.

A ComRes poll of more than 1,000 parents of children aged under 18, published in September 2015, found 47 per cent of parents said they thought their children spent too much time in front of screens, with 43 per cent saying this amounts to an emotional dependency.


The Internet of Things is a surveillance nightmare

… or a dream come true for those in power. And those in power are the same entities pushing IoT technologies.

A little background reading about JTRIG from the Snowden documents is helpful. It’s the modern-day equivalent of the Zersetzung—the special unit of the Stasi that was used to attack, repress and sabotage political opponents. It is a power greatly expanded in a society driven by the IoT.

Full article from Daily Dot:

“In 2014, security guru Bruce Schneier said, “Surveillance is the business model of the Internet. We build systems that spy on people in exchange for services. Corporations call it marketing.” The abstract and novel nature of these services tends to obscure our true relationship to companies like Facebook or Google. As the old saying goes, if you don’t pay for a product, you are the product.

But what happens when the Internet stops being just “that fiddly thing with a mouse” and becomes “the real world”? Surveillance becomes the business model of everything, as more and more companies look to turn the world into a collection of data points.

If we truly understood the bargain we were making when we give up our data for free or discounted services, would we still sign on the dotted line (or agree to the Terms and Conditions)? Would we still accept constant monitoring of our driving habits in exchange for potential insurance breaks, or allow our energy consumption to be uploaded into the cloud in exchange for “smart data” about it?

Nowhere is our ignorance of the trade-offs greater, or the consequences more worrisome, than our madcap rush to connect every toaster, fridge, car, and medical device to the Internet.

Welcome to the Internet of Things, what Schneier calls “the World Size Web,” already growing around you as we speak, which creates such a complete picture of our lives that Dr. Richard Tynan of Privacy International calls them “doppelgängers”—mirror images of ourselves built on constantly updated data. These doppelgängers live in the cloud, where they can easily be interrogated by intelligence agencies. Nicholas Weaver, a security researcher at the University of California, Berkeley, points out that “Under the FISA Amendments Act 702 (aka PRISM), the NSA can directly ask Google for any data collected on a valid foreign intelligence target through Google’s Nest service, including a Nest Cam.” And that’s just one legal way of questioning your digital doppelgänger; we’ve all heard enough stories about hacked cloud storage to be wary of trusting our entire lives to it.

 
But with the IoT, the potential goes beyond simple espionage, into outright sabotage. Imagine an enemy that can remotely disable the brakes in your car, or (even more subtly) give you food poisoning by hacking your fridge. That’s a new kind of power. “The surveillance, the interference, the manipulation … the full life cycle is the ultimate nightmare,” says Tynan.

The professional spies agree that the IoT changes the game. “‘Transformational’ is an overused word, but I do believe it properly applies to these technologies,” then-CIA Director David Petraeus told a 2012 summit organized by the agency’s venture capital firm, In-Q-Tel, “particularly to their effect on clandestine tradecraft,” according to Wired.

Clandestine tradecraft is not about watching, but about interfering. Take, for example, the Joint Threat Research Intelligence Group (JTRIG), the dirty tricks division of GCHQ, the British intelligence agency. As the Snowden documents reveal, JTRIG wants to create “Cyber Magicians” who can “make something happen in the real…world,” including ruining business deals, intimidating activists, and sexual entrapment (“honeypots”). The documents show that JTRIG operatives will ignore international law to achieve their goals, which are not about fighting terrorism, but, in fact, targeting individuals who have not been charged with or convicted of any crime.

The Internet of Things “is a JTRIG wet dream,” says security researcher Rob Graham. But you don’t have to be a spy to take advantage of the IoT. Thanks to widespread security vulnerabilities in most IoT devices, almost anyone can take advantage of it. That means cops, spies, gangsters, anyone with the motivation and resources—but probably bored teenagers as well. “I can take any competent computer person and take them from zero to Junior Hacker 101 in a weekend,” says security researcher Dan Tentler. The security of most IoT devices—including home IoT, but also smart cities, power plants, gas pipelines, self-driving cars, and medical devices—is laughably bad. “The barrier to entry is not very tall,” he says, “especially when what’s being released to consumers is so trivial to get into.”

That makes the IoT vulnerable—our society vulnerable—to any criminal with a weekend to spend learning how to hack. “When we talk about vulnerabilities in computers…people are using a lot of rhetoric in the abstract,” says Privacy International’s Tynan. “What we really mean is, vulnerable to somebody. That somebody you’re vulnerable to is the real question.”

“They’re the ones with the power over you,” he added. That means intelligence agencies, sure, but really anyone with the time and motivation to learn how to hack. And, as Joshua Corman of I Am the Cavalry, a concerned group of security researchers, once put it, “There are as many motivations to hacking as there are motivations in the human condition. Hacking is a form of power.”

The authorities want that power; entities like JTRIG, the NSA, the FBI and the DOJ want to be able to not just surveil but also to disrupt, to sabotage, to interfere. Right now the Bureau wants to force Apple to create the ability to deliver backdoored software updates to iPhones, allowing law enforcement access to locally stored, encrypted data. Chris Soghoian, a technologist at the ACLU, tweeted, “If DOJ get what they want in this Apple case, imagine the surveillance assistance they’ll be able to force from Internet of Things companies.”

“The notion that there are legal checks and balances in place is a fiction,” Tynan says. “We need to rely more on technology to increase the hurdles required. For the likes of JTRIG to take the massive resources of the U.K. state and focus them on destroying certain individuals, potentially under flimsy pretenses—I just can’t understand the mentality of these people.”

Defending ourselves in this new, insecure world is difficult, perhaps impossible. “If you go on the Internet, it’s a free-for-all,” Tentler says. “Despite the fact that we have these three-letter agencies, they’re not here to help us; they’re not our friends. When the NSA and GCHQ learn from the bad guys and use those techniques on us, we should be worried.”

If the Internet is a free-for-all, and with the Internet of Things we’re putting the entire world on the Internet, what does that make us?

“Fish in a barrel?”


The most striking thing about the WikiLeaks CIA data dump is how little most people cared

“On March 7, the US awoke to a fresh cache of internal CIA documents posted on WikiLeaks. They detail the spy organization’s playbook for cracking digital communications.

Snowden’s NSA revelations sent shockwaves around the world. Despite WikiLeaks’ best efforts at theatrics—distributing an encrypted folder and tweeting the password “SplinterItIntoAThousandPiecesAndScatterItIntoTheWinds”—the Vault 7 leak has elicited little more than a shrug from the media and the public, even if the spooks are seriously worried. Maybe it’s because we already assume the government can listen to everything.”


Google’s crowd-sourced CAPTCHA “intelligence” turning invisible

We train the machine so well, and its use is so ubiquitous, that it can become invisible: Google is making CAPTCHAs invisible using “a combination of machine learning and advanced risk analysis that adapts to new and emerging threats,” Ars Technica reports. Emphasis added.

“The old reCAPTCHA system was pretty easy — just a simple “I’m not a robot” checkbox would get people through your sign-up page. The new version is even simpler, and it doesn’t use a challenge or checkbox. It works invisibly in the background, somehow, to identify bots from humans.

When sites switch over to the invisible CAPTCHA system, most users won’t see CAPTCHAs at all, not even the “I’m not a robot” checkbox. If you are flagged as “suspicious” by the system, then it will display the usual challenges.

reCAPTCHA was bought by Google in 2009 and was used to put unsuspecting website users to work for Google. Some CAPTCHA systems create arbitrary problems for users to solve, but older reCAPTCHA challenges actually used problems Google’s computers needed to solve but couldn’t. Google digitizes millions of books, but sometimes the OCR (optical character recognition) software can’t recognize a word, so that word is sent into the reCAPTCHA system for solving by humans. If you’ve ever solved a reCAPTCHA that looks like a set of numbers, those were from Google’s camera-covered Street View cars, which whizz down the streets and identify house numbers. If the OCR software couldn’t figure out a house number, that number was made into a CAPTCHA for solving by humans. The grid of pictures that would ask you to “select all the cats” was used to train computer image recognition algorithms.”
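The crowd-labeling loop described above (an image the OCR software cannot read is shown to many humans until their answers converge) can be sketched roughly as follows. The function name, vote threshold, and simple plurality rule here are illustrative assumptions, not Google’s actual implementation:

```python
from collections import Counter

def consensus_label(responses, min_votes=3):
    """Crowd-labeling in the old reCAPTCHA style: an image the OCR
    software could not read is served to many humans, and a word is
    accepted as the label once enough independent answers agree."""
    votes = Counter(r.strip().lower() for r in responses)
    word, count = votes.most_common(1)[0]
    # Not enough agreement yet: keep serving the image to more users
    return word if count >= min_votes else None

# Three users agree, one mistypes -> the word is accepted
print(consensus_label(["overlook", "overlook", "overl0ok", "overlook"]))
# Too few matching answers -> no label yet
print(consensus_label(["cat", "bat"]))
```

In the deployed system, an unknown word was typically paired with a control word whose answer was already known, so that only users who solved the control correctly contributed votes on the unknown one.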


WikiLeaks reveals CIA’s secret hacking tools and spy operations

“WikiLeaks has unleashed a treasure trove of data to the internet, exposing information about the CIA’s arsenal of hacking tools. Code-named Vault 7, the first data is due to be released in serialized form, starting off with “Year Zero” as part one. A cache of over 8,500 documents and files has been made available via BitTorrent in an encrypted archive. Password to the files is:

SplinterItIntoAThousandPiecesAndScatterItIntoTheWinds

The documents reveal that the CIA worked with MI5 in the UK to infect Samsung smart TVs so their microphones could be turned on at will. Investigations were carried out into gaining control of modern cars and trucks, and there is even a specialized division of the CIA focused on accessing, controlling and exploiting iPhones and iPads. These exploits, along with Android zero-days, enable the CIA “to bypass the encryption of WhatsApp, Signal, Telegram, Wiebo, Confide and Cloackman by hacking the “smart” phones that they run on and collecting audio and message traffic before encryption is applied.”


Social Media are “driving Americans insane”

“If you pull out your phone to check Twitter while waiting for the light to change, or read e-mails while brushing your teeth, you might be what the American Psychological Association calls a “constant checker.” And chances are, it’s hurting your mental health.

Last week, the APA released a study finding that Americans were experiencing the first statistically significant stress increase in the survey’s 10-year history. In January, 57 percent of respondents of all political stripes said the U.S. political climate was a very or somewhat significant source of stress, up from 52 percent who said the same thing in August. On Thursday, the APA released the second part of its findings, “Stress In America: Coping With Change,” examining the role technology and social media play in American stress levels.

Social media use has skyrocketed from 7 percent of American adults in 2005 to 65 percent in 2015. For those in the 18-29 age range, the increase is larger, from 12 percent to a remarkable 90 percent. But while an increase in social media usage is hardly surprising, the number of people who just can’t tear themselves away is stark: Nowadays, 43 percent of Americans say they are checking their e-mails, texts, or social media accounts constantly. And their stress levels are paying for it: On a 10-point scale, constant checkers reported an average stress level of 5.3. For the rest of Americans, the average level is a 4.4.

If the first step toward recovery, however, is admitting there is a problem, Americans are on their way. Some 65 percent of respondents said “unplugging” or taking a “digital detox” is important. But alas, knowing you have a problem is not the same as fixing it: Only 28 percent of those Americans say they take their own advice.”


German watchdog tells parents to destroy Wi-Fi-connected doll

“A German government watchdog has ordered parents to “destroy” an internet-connected doll for fear it could be used as a surveillance device. According to a report from BBC News, the German Federal Network Agency said the doll (which contains a microphone and speaker) was equivalent to a “concealed transmitting device” and therefore prohibited under German telecom law.

The doll in question is “My Friend Cayla,” a toy which has already been the target of consumer complaints in the EU and US. In December last year, privacy advocates said the toy recorded kids’ conversations without proper consent, violating the Children’s Online Privacy Protection Act.

Cayla uses a microphone to listen to questions, sending this audio over Wi-Fi to a third-party company (Nuance) that converts it to text. This is then used to search the internet, allowing the doll to answer basic questions, like “What’s a baby kangaroo called?” as well as play games. In addition to privacy concerns over data collection, security researchers found that Cayla can be easily hacked. The doll’s insecure Bluetooth connection can be compromised, letting a third party record audio via the toy, or even speak to children using its voice.

Although the FTC has not yet taken any action against Cayla or its manufacturer, Genesis Toys, German data and privacy laws are more stringent than those in America. The legacy of the Stasi, the secret police force that set up one of the most invasive mass-surveillance regimes ever in Communist East Germany, has made the country’s legislators vigilant against such infringements.”


How algorithms (secretly) run the world

“When you browse online for a new pair of shoes, pick a movie to stream on Netflix or apply for a car loan, an algorithm likely has its word to say on the outcome.

The complex mathematical formulas are playing a growing role in all walks of life: from detecting skin cancers to suggesting new Facebook friends, deciding who gets a job, how police resources are deployed, who gets insurance at what cost, or who is on a “no fly” list.

Algorithms are being used—experimentally—to write news articles from raw data, while Donald Trump’s presidential campaign was helped by behavioral marketers who used an algorithm to locate the highest concentrations of “persuadable voters.”

But while such automated tools can inject a measure of objectivity into erstwhile subjective decisions, fears are rising over the lack of transparency algorithms can entail, with pressure growing to apply standards of ethics or “accountability.”

Data scientist Cathy O’Neil cautions about “blindly trusting” formulas to determine a fair outcome.

“Algorithms are not inherently fair, because the person who builds the model defines success,” she said.

O’Neil argues that while some algorithms may be helpful, others can be nefarious. In her 2016 book, “Weapons of Math Destruction,” she cites some troubling examples in the United States:

  • Public schools in Washington DC in 2010 fired more than 200 teachers—including several well-respected instructors—based on scores in an algorithmic formula which evaluated performance.
  • A man diagnosed with bipolar disorder was rejected for employment at seven major retailers after a third-party “personality” test deemed him a high risk based on its algorithmic classification.
  • Many jurisdictions are using “predictive policing” to shift resources to likely “hot spots.” O’Neil says that depending on how data is fed into the system, this could lead to discovery of more minor crimes and a “feedback loop” which stigmatizes poor communities.
  • Some courts rely on computer-ranked formulas to determine jail sentences and parole, which may discriminate against minorities by taking into account “risk” factors such as their neighborhoods and friend or family links to crime.
  • In the world of finance, brokers “scrape” data from online and other sources in new ways to make decisions on credit or insurance. This too often amplifies prejudice against the disadvantaged, O’Neil argues.

Her findings were echoed in a White House report last year warning that algorithmic systems “are not infallible—they rely on the imperfect inputs, logic, probability, and people who design them.”


“Your browsing history alone can give away your identity”

“Researchers at Stanford and Princeton universities have found a way to connect the dots between people’s private online activity and their Twitter accounts—even for people who have never tweeted.

When the team tested the technique on 400 real people who submitted their browsing history, they were able to correctly pick out the volunteers’ Twitter profiles nearly three-quarters of the time.

Here’s how the de-anonymization system works: The researchers figured that a person is more likely to click a link that was shared on social media by a friend—or a friend of a friend—than any other random link on the internet. (Their model controls for the baseline popularity of each website.) With that in mind, and the details of an anonymous person’s browser history in hand, the researchers can compute the probability that any one Twitter user created that browsing history. People’s basic tendency to follow links they come across on Twitter unmasks them—and it usually takes less than a minute.

“You can even be de-anonymized if you just browse and follow people, without actually sharing anything.”
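The scoring idea behind this de-anonymization can be sketched with a toy model: each link in the anonymous history that also appears in a candidate user’s feed adds evidence, weighted by how rare that link is overall (an obscure link shared by few feeds is far more identifying than a viral one). All names and data below are hypothetical, and the researchers’ actual model is more sophisticated:

```python
import math

def deanonymize(history, feeds, popularity):
    """Rank candidate Twitter users by a likelihood score that their
    feed produced the observed (anonymous) browsing history.

    history:    set of URLs from the anonymous browsing history
    feeds:      dict mapping user -> set of URLs shared in that user's feed
    popularity: dict mapping URL -> baseline probability that any random
                user visits it (controls for globally popular links)
    """
    scores = {}
    for user, feed in feeds.items():
        score = 0.0
        for url in history:
            if url in feed:
                # Rarer links carry more identifying weight
                score += math.log(1.0 / popularity.get(url, 1.0))
        scores[user] = score
    # Highest score first: the most likely owner of the history
    return sorted(scores, key=scores.get, reverse=True)

history = {"example.com/a", "example.com/rare"}
feeds = {
    "alice": {"example.com/a", "example.com/rare"},
    "bob":   {"example.com/a"},
}
popularity = {"example.com/a": 0.5, "example.com/rare": 0.01}
print(deanonymize(history, feeds, popularity))  # alice ranks first
```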


“DragonflEye” project is turning insects into cyborg drones

“R&D company Draper is developing an insect control “backpack” with integrated energy, guidance, and navigation systems, shown here on a to-scale dragonfly model.

To steer the dragonflies, the engineers are developing a way of genetically modifying the nervous system of the insects so they can respond to pulses of light. Once they get it to work, this approach, known as optogenetic stimulation, could enable dragonflies to carry payloads or conduct surveillance…”


Controlling the minds of cockroaches using Microsoft’s Xbox Kinect

“Late last year, you were introduced to real, live, remote-controlled cockroaches. Well, the insect hackers at the North Carolina State University are at it again, this time with a Microsoft Kinect and a software program that can boss the bugs around without human input. In other words, we have successfully co-opted cockroach sovereignty — and given it to the machines.

The goal is to ultimately use this kind of technology to create armies of biobots capable of things bio-inspired robots can only dream of.

Now, instead of those impulses being controlled remotely by a human, they’re tapped into the software program, which takes cues from the Xbox Kinect’s tracking data. If the cockroach veers away from the target, the Kinect observes the change and relays it to the software, which in turn makes a split-second decision about how much correctional impulse should be sent to the roach. Longer stimulation is designed to produce more drastic correction, just like pulling hard on a steering wheel.

The results are pretty impressive. Their previous work with remote control yielded only about a 10 per cent success rate, but the new technology has bumped them up to 27 per cent. You can see it for yourself below with a roach that really seems to want nothing in the world but to turn right.”
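The split-second correction described above is essentially a proportional controller: the stimulation pulse grows with how far the roach has strayed from the target heading, like pulling harder on a steering wheel. A minimal sketch, where the gain, cap, and left/right mapping are invented placeholders rather than the NC State team’s actual parameters:

```python
def correction_pulse(heading_deg, target_deg, gain=2.0, max_ms=100.0):
    """Proportional steering correction: pulse length (in ms) scales
    with the heading error reported by the Kinect tracking data."""
    # Signed error, wrapped into [-180, 180) degrees
    error = (target_deg - heading_deg + 180.0) % 360.0 - 180.0
    # Longer stimulation for larger deviations, capped at max_ms
    duration = min(abs(error) * gain, max_ms)
    # The sign of the error picks which antenna to stimulate
    side = "left" if error > 0 else "right"
    return side, duration

print(correction_pulse(90.0, 100.0))   # small drift -> short pulse
print(correction_pulse(100.0, 40.0))   # large drift -> capped pulse
```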


Pentagon successfully tests micro-drone swarm

“The Pentagon may soon be unleashing a 21st-century version of locusts on its adversaries after officials on Monday said it had successfully tested a swarm of 103 micro-drones.

The important step in the development of new autonomous weapon systems was made possible by improvements in artificial intelligence, holding open the possibility that groups of small robots could act together under human direction.

Military strategists have high hopes for such drone swarms that would be cheap to produce and able to overwhelm opponents’ defenses with their great numbers.

The test of the world’s largest micro-drone swarm in California in October included 103 Perdix micro-drones measuring around six inches (16 centimeters) launched from three F/A-18 Super Hornet fighter jets, the Pentagon said in a statement.

“The micro-drones demonstrated advanced swarm behaviors such as collective decision-making, adaptive formation flying and self-healing,” it said.

“Perdix are not pre-programmed synchronized individuals, they are a collective organism, sharing one distributed brain for decision-making and adapting to each other like swarms in nature,” said William Roper, director of the Pentagon’s Strategic Capabilities Office. “Because every Perdix communicates and collaborates with every other Perdix, the swarm has no leader and can gracefully adapt to drones entering or exiting the team.”

Defense Secretary Ash Carter—a technophile and former Harvard professor—created the SCO when he was deputy defense secretary in 2012.

The department is tasked with accelerating the integration of technological innovations into the US weaponry.

It particularly strives to marry already existing commercial technology—in this case micro-drones and artificial intelligence software—in the design of new weapons.

Originally created by engineering students from the Massachusetts Institute of Technology in 2013 and continuously improved since, Perdix drones draw “inspiration from the commercial smartphone industry,” the Pentagon said.”


Japan researchers warn of fingerprint theft from ‘peace’ sign, selfies

“Could flashing the “peace” sign in photos lead to fingerprint data being stolen? Research by a team at Japan’s National Institute of Informatics (NII) says so, raising alarm bells over the popular two-fingered pose. Fingerprint recognition technology is becoming widely available to verify identities, such as when logging on to smartphones, tablets and laptop computers. But the proliferation of mobile devices with high-quality cameras and social media sites where photographs can be easily posted is raising the risk of personal information being leaked, reports said. The NII researchers were able to copy fingerprints based on photos taken by a digital camera three meters (nine feet) away from the subject.”


Facebook buys data from third-party brokers to fill in user profiles

“It comes as no surprise to any Facebook user that the social network gathers a considerable amount of information based on their actions and interests. But according to a report from ProPublica, the world’s largest social network knows far more about its users than just what they do online.

What Facebook can’t glean from a user’s activity, it’s getting from third-party data brokers. ProPublica found the social network is purchasing additional information including personal income, where a person eats out and how many credit cards they keep.

That data all comes separate from the unique identifiers that Facebook generates for its users based on interests and online behavior. A separate investigation by ProPublica in which the publication asked users to report categories of interest Facebook assigned to them generated more than 52,000 attributes.

The data Facebook pays for from other brokers to round out user profiles isn’t disclosed by the company beyond a note that it gets information “from a few different sources.” Those sources, according to ProPublica, come from commercial data brokers who have access to information about people that isn’t linked directly to online behavior.”

From ProPublica:

“When asked this week about the lack of disclosure, Facebook responded that it doesn’t tell users about the third-party data because it’s widely available and was not collected by Facebook.

Facebook has been working with data brokers since 2012, when it signed a deal with Datalogix. This prompted Jeff Chester, the privacy advocate at the Center for Digital Democracy, to file a complaint with the Federal Trade Commission alleging that Facebook had violated a consent decree with the agency on privacy issues. The FTC has never publicly responded to that complaint, and Facebook subsequently signed deals with five other data brokers.

Oracle’s Datalogix provides about 350 types of data to Facebook.”


Police request Echo recordings for investigation

“You have the right to remain silent — but your smart devices might not.

Amazon’s Echo and Echo Dot are in millions of homes now, with holiday sales more than quadrupling from 2015. Always listening for its wake word, the breakthrough smart speakers boast seven microphones waiting to take and record your commands.

Now, Arkansas police are hoping an Echo found at a murder scene in Bentonville can aid their investigation.

First reported by The Information, investigators filed search warrants to Amazon, requesting any recordings between November 21 and November 22, 2015, from James A. Bates, who was charged with murder after a man was strangled in a hot tub.

While investigating, police noticed the Echo in the kitchen and pointed out that the music playing in the home could have been voice activated through the device. While the Echo records only after hearing the wake word, police are hoping that ambient noise or background chatter could have accidentally triggered the device, leading to some more clues.

Amazon stores all the voice recordings on its servers, in the hopes of using the data to improve its voice assistant services. While you can delete your personal voice data, there’s still no way to prevent any recordings from being saved on a server.

[…]

Even without Amazon’s help, police may be able to crack into the Echo, according to the warrant. Officers believe they can tap into the hardware on the smart speakers, which could “potentially include time stamps, audio files or other data.”

The investigation has focused on other smart devices as well. Officers seized Bates’ phone but were unable to break through his password, which only served to delay the investigation.

“Our agency now has the ability to utilize data extraction methods that negate the need for passcodes and efforts to search Victor and Bates’ devices will continue upon issuance of this warrant.”

Police also found a Nest thermostat, a Honeywell alarm system, wireless weather monitoring in the backyard and WeMo devices for lighting at the smart home crime scene.

Ultimately, it might have been information from a smart meter that proved to be the most useful. With every home in Bentonville hooked up to a smart meter that measures hourly electricity and water usage, police looked at the data and noticed Bates used an “excessive amount of water” during the alleged drowning.”


US: Innocent people placed on watch list to meet quota

“You could be on a secret government database or watch list for simply taking a picture on an airplane. Some federal air marshals say they’re reporting your actions to meet a quota, even though some top officials deny it.

The air marshals, whose identities are being concealed, told 7NEWS that they’re required to submit at least one report a month. If they don’t, there’s no raise, no bonus, no awards and no special assignments.

“Innocent passengers are being entered into an international intelligence database as suspicious persons, acting in a suspicious manner on an aircraft … and they did nothing wrong,” said one federal air marshal.”


Leaked files reveal scope of Cellebrite’s phone cracking technology

“Earlier this year, [ZDNet was] sent a series of large, encrypted files purportedly belonging to a U.S. police department as a result of a leak at a law firm, which was insecurely synchronizing its backup systems across the internet without a password. Among the files was a series of phone dumps created by the police department with specialist equipment, which was created by Cellebrite, an Israeli firm that provides phone-cracking technology. We obtained a number of these so-called extraction reports. One of the more interesting reports by far was from an iPhone 5 running iOS 8. The phone’s owner didn’t use a passcode, meaning the phone was entirely unencrypted. The phone was plugged into a Cellebrite UFED device, which in this case was a dedicated computer in the police department. The police officer carried out a logical extraction, which downloads what’s in the phone’s memory at the time. (Motherboard has more on how Cellebrite’s extraction process works.) In some cases, it also contained data the user had recently deleted. To our knowledge, there are a few sample reports out there floating on the web, but it’s rare to see a real-world example of how much data can be siphoned off from a fairly modern device. We’re publishing some snippets from the report, with sensitive or identifiable information redacted.”


U.S. government begins asking foreign travelers about their social media at border

“Foreign travelers arriving in the United States on the visa waiver program have been presented with an “optional” request to “enter information associated with your online presence,” a government official confirmed Thursday. The prompt includes a drop-down menu that lists platforms including Facebook, Google+, Instagram, LinkedIn and YouTube, as well as a space for users to input their account names on those sites. The new policy comes as Washington tries to improve its ability to spot and deny entry to individuals who have ties to terrorist groups like the Islamic State. But the government has faced a barrage of criticism since it first floated the idea last summer. The Internet Association, which represents companies including Facebook, Google and Twitter, at the time joined with consumer advocates to argue the draft policy threatened free expression and posed new privacy and security risks to foreigners. Now that it is final, those opponents are furious the Obama administration ignored their concerns. The question itself is included in what’s known as the Electronic System for Travel Authorization, a process that certain foreign travelers must complete to come to the United States. ESTA and a related paper form specifically apply to those arriving here through the visa-waiver program, which allows citizens of 38 countries to travel and stay in the United States for up to 90 days without a visa.”

642

“Information glut no problem for most Americans: survey”

“Most Americans do not see “information overload” as a problem for them despite the explosion of internet data and images, according to a Pew Research Center survey on Wednesday.

Only 20 percent of U.S. adults feel they get more information than they can handle, down from 27 percent a decade ago. Just over three-quarters like having so much information at hand, the survey of 1,520 people showed.

“Generally, Americans appreciate lots of information and access to it,” said the report into how U.S. adults cope with information demands.

Roughly four in five Americans agree that they are confident about using the internet to keep up with information demands, that a lot of information gives them a feeling of more control over their lives, and that they can easily determine what information is trustworthy.

Americans who are 65 or older, have a high school diploma or less and earn less than $30,000 a year are more likely to say they face a glut of information.

Eighty-four percent of Americans with online access through three sources – home broadband, smartphone and tablet computer – say they like having so much information available.

By contrast, 55 percent of those with no online source felt overwhelmed by the amount of possible information.

The term “information overload” was popularized by author Alvin Toffler in his 1970 bestseller “Future Shock.” It refers to difficulties that people face from getting too much information or data.

The Pew survey involved people over 18 interviewed by landline or cell phones from March 7 to April 4. The margin of error was 2.9 percentage points, meaning results could vary by that much either way.”
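The reported margin of error can be sanity-checked against the standard simple-random-sample formula. A minimal sketch (my own illustration, not from the article; the formula yields about 2.5 points for n = 1,520, so Pew's 2.9-point figure presumably also folds in a design effect from survey weighting):

```python
import math

n = 1520   # Pew sample size
p = 0.5    # worst-case proportion, maximizes the margin
z = 1.96   # z-score for a 95% confidence level

# Margin of error for a simple random sample, in percentage points
moe = z * math.sqrt(p * (1 - p) / n) * 100
print(f"{moe:.1f} points")  # prints "2.5 points"
```

Because Pew weights its samples to match population demographics, the effective sample size is smaller than the raw n, which pushes the published margin above this naive estimate.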

611

“Smart” toys are spying on kids

Emphasis added:

“Some people consider dolls creepy enough, but what if that deceptively cute toy was listening to everything you said and, worse yet, letting creeps speak through it?

According to The Center for Digital Democracy, a pair of smart toys designed to engage with children in new and entertaining ways are rife with security and privacy holes. The watchdog group was so concerned that it filed a complaint with the Federal Trade Commission on Dec. 6 (you can read the full complaint here). A similar one was also filed in Europe by the Norwegian Consumer Council.

“This complaint concerns toys that spy,” reads the complaint, which claims the Genesis Toys’ My Friend Cayla and i-QUE Intelligent Robot can record and collect private conversations and offer no limitations on the collection and use of personal information.

Both toys use voice recognition, internet connectivity and Bluetooth to engage with children in a conversational manner and answer questions. The CDD claims they do all of this in wildly insecure and invasive ways.

Both My Friend Cayla and i-QUE use Nuance Communications’ voice-recognition platform to listen and respond to queries. On the Genesis Toy site, the manufacturer notes that while “most of Cayla’s conversational features can be accessed offline,” searching for information may require an internet connection.

The promotional video for Cayla encourages children to “ask Cayla almost anything.”

The dolls work in concert with mobile apps. Some questions can be asked directly, but the apps maintain a constant Bluetooth connection to the dolls, so the toys can also react to actions in the app and even appear to identify objects the child taps on screen.

The CDD takes particular issue with that app and lists all the questions it asks children (or their parents) up front during registration: everything from the child’s and her parents’ names to their school and where they live.

711