Resources

As Google Maps Renames Neighbourhoods, Residents Fume

For decades, the district south of downtown and alongside San Francisco Bay here was known as either Rincon Hill, South Beach or South of Market. This spring, it was suddenly rebranded on Google Maps to a name few had heard: the East Cut. The peculiar moniker immediately spread digitally, from hotel sites to dating apps to Uber, which all use Google’s map data. The name soon spilled over into the physical world, too. Real-estate listings beckoned prospective tenants to the East Cut. And news organizations referred to the vicinity by that term.

“It’s degrading to the reputation of our area,” said Tad Bogdan, who has lived in the neighborhood for 14 years. In a survey of 271 neighbors that he organized recently, he said, 90 percent disliked the name. The swift rebranding of the roughly 170-year-old district is just one example of how Google Maps has now become the primary arbiter of place names. With decisions made by a few Google cartographers, the identity of a city, town or neighborhood can be reshaped, illustrating the outsize influence that Silicon Valley increasingly has in the real world.

Facebook, Google, and Microsoft Use Design to Trick You Into Handing Over Your Data, New Report Warns

A study from the Norwegian Consumer Council dug into the underhanded tactics used by Microsoft, Facebook, and Google to collect user data. “The findings include privacy intrusive default settings, misleading wording, giving users an illusion of control, hiding away privacy-friendly choices, take-it-or-leave-it choices, and choice architectures where choosing the privacy friendly option requires more effort for the users,” states the report, which includes images and examples of confusing design choices and strangely worded statements involving the collection and use of personal data.

Google makes opting out of personalized ads more of a chore than it needs to be and uses multiple pages of text, unclear design language, and, as described by the report, “hidden defaults” to push users toward the company’s desired action. “If the user tried to turn the setting off, a popup window appeared explaining what happens if Ads Personalization is turned off, and asked users to reaffirm their choice,” the report explained. “There was no explanation about the possible benefits of turning off Ads Personalization, or negative sides of leaving it turned on.” Those who wish to completely avoid personalized ads must traverse multiple menus, making that “I agree” option seem like the lesser of two evils.

In Windows 10, if a user wants to opt out of “tailored experiences with diagnostic data,” they have to click a dimmed lightbulb, while the symbol for opting in is a brightly shining bulb, says the report.

Another example has to do with Facebook. The social media site makes the “Agree and continue” option much more appealing and less intimidating than the grey “Manage Data Settings” option. The report says the company-suggested option is the easiest to use. “This ‘easy road’ consisted of four clicks to get through the process, which entailed accepting personalized ads from third parties and the use of face recognition. In contrast, users who wanted to limit data collection and use had to go through 13 clicks.”

Google plan for data-driven ‘smart city’ sparks privacy, democracy concerns

In Toronto, Canada, city officials are negotiating a project that would hand a section of the city’s waterfront to the US tech giant Google. Under the arrangement, Google affiliate Sidewalk Labs will build and run a high-tech “radical mixed-use” site called Quayside. This “smart city” plan involves creating a neighbourhood “from the internet up”, powered and served by data, with sensors monitoring everything from air quality to pedestrian traffic, even the flushing of toilets. Amenities like garbage disposal and goods delivery are to be coordinated and driven by AI and robotics.

The proposed parcel of land isn’t huge, but it’s not insubstantial either – it covers about half a square kilometre, and there are already suggestions it could be extended.

For Eric Schmidt, executive chairman of Alphabet — the parent company of both Google and Sidewalk Labs — it’s the culmination of a long-held ambition.

“Give us a city and put us in charge,” he once famously declared.

Following the Facebook/Cambridge Analytica scandal, some, like Dr Jathan Sadowski at the University of Sydney, worry about the implications of putting a private tech company in charge of both urban development and urban life.

“What’s in it for them? It’s data,” he says. “It allows them to get really massive amounts of granular data about urban life and urban environments.”

“You’ll have a city that’s based on, or built around, proprietary platforms, data harvesting, corporate control.”

Google hasn’t stopped reading your e-mails

If you’re a Gmail user, your messages likely aren’t as private as you might think. Google reads each and every one (even if you definitely don’t), scanning your painfully long email chains and vacation auto-replies to collect more data on you. Google uses the data gleaned from your messages to inform a whole host of other products and services, NBC News reported Thursday.

Though Google announced last July that it would stop using consumer Gmail content for ad personalization, the language permitting it to do so is still included in its current privacy policy, and it undoubtedly still scans users’ emails for other purposes. Aaron Stein, a Google spokesperson, told NBC that Google also automatically extracts keyword data from users’ Gmail accounts, which is then fed into machine learning programs and other products within the Google family. Stein told NBC that Google also “may analyze [email] content to customize search results, better detect spam and malware,” a practice the company first announced back in 2012.

“We collect information about the services that you use and how you use them…” says Google’s privacy policy. “This includes information like your usage data and preferences, Gmail messages, G+ profile, photos, videos, browsing history, map searches, docs, or other Google-hosted content. Our automated systems analyze this information as it is sent and received and when it is stored.”

While Google doesn’t sell this information to third parties, it has used it to power its own advertising network and inform search results, among other things. And this is far from a closely guarded secret. The company has included disclosures relating to these practices in its privacy policy since at least 2012: “When you share information with us, for example by creating a Google Account, we can make those services even better – to show you more relevant search results and ads…,” says Google’s March 2012 privacy policy.
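
The “automated systems” language is vague, but the keyword extraction Stein describes is a standard text-mining step. As a rough, hypothetical illustration (nothing here is Google’s actual pipeline), a tf-idf pass over message bodies surfaces the kind of keywords that could feed downstream models:

```python
# A hedged sketch of automated keyword extraction over email text, using
# scikit-learn's tf-idf. The messages are invented examples; Google's
# real systems are proprietary and far more elaborate.
from sklearn.feature_extraction.text import TfidfVectorizer

messages = [
    "Your flight to Denver departs Friday at 7am",
    "Receipt for your hotel booking in Denver",
    "Team lunch moved to Thursday",
]

vec = TfidfVectorizer(stop_words="english")
tfidf = vec.fit_transform(messages)
terms = vec.get_feature_names_out()

# Top-scoring terms per message: candidate "keywords" that a downstream
# product (travel cards, calendar suggestions, spam models) could consume.
for i, row in enumerate(tfidf.toarray()):
    top = sorted(zip(row, terms), reverse=True)[:3]
    print(i, [term for score, term in top if score > 0])
```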

YouTube, the Great Radicalizer

At one point during the 2016 presidential election campaign, I watched a bunch of videos of Donald Trump rallies on YouTube. I was writing an article about his appeal to his voter base and wanted to confirm a few quotations.

Soon I noticed something peculiar. YouTube started to recommend and “autoplay” videos for me that featured white supremacist rants, Holocaust denials and other disturbing content.

Since I was not in the habit of watching extreme right-wing fare on YouTube, I was curious whether this was an exclusively right-wing phenomenon. So I created another YouTube account and started watching videos of Hillary Clinton and Bernie Sanders, letting YouTube’s recommender algorithm take me wherever it would.

Before long, I was being directed to videos of a leftish conspiratorial cast, including arguments about the existence of secret government agencies and allegations that the United States government was behind the attacks of Sept. 11. As with the Trump videos, YouTube was recommending content that was more and more extreme than the mainstream political fare I had started with.

Intrigued, I experimented with nonpolitical topics. The same basic pattern emerged. Videos about vegetarianism led to videos about veganism. Videos about jogging led to videos about running ultramarathons.

It seems as if you are never “hard core” enough for YouTube’s recommendation algorithm. It promotes, recommends and disseminates videos in a manner that appears to constantly up the stakes. Given its billion or so users, YouTube may be one of the most powerful radicalizing instruments of the 21st century.

This is not because a cabal of YouTube engineers is plotting to drive the world off a cliff. A more likely explanation has to do with the nexus of artificial intelligence and Google’s business model. (YouTube is owned by Google.) For all its lofty rhetoric, Google is an advertising broker, selling our attention to companies that will pay for it. The longer people stay on YouTube, the more money Google makes.

What keeps people glued to YouTube? Its algorithm seems to have concluded that people are drawn to content that is more extreme than what they started with — or to incendiary content in general.

Is this suspicion correct? Good data is hard to come by; Google is loath to share information with independent researchers. But we now have the first inklings of confirmation, thanks in part to a former Google engineer named Guillaume Chaslot.

Mr. Chaslot worked on the recommender algorithm while at YouTube. He grew alarmed at the tactics used to increase the time people spent on the site. Google fired him in 2013, citing his job performance. He maintains the real reason was that he pushed too hard for changes in how the company handles such issues.

The Wall Street Journal conducted an investigation of YouTube content with the help of Mr. Chaslot. It found that YouTube often “fed far-right or far-left videos to users who watched relatively mainstream news sources,” and that such extremist tendencies were evident with a wide variety of material. If you searched for information on the flu vaccine, you were recommended anti-vaccination conspiracy videos.

It is also possible that YouTube’s recommender algorithm has a bias toward inflammatory content. In the run-up to the 2016 election, Mr. Chaslot created a program to keep track of YouTube’s most recommended videos as well as its patterns of recommendations. He discovered that whether you started with a pro-Clinton or pro-Trump video on YouTube, you were many times more likely to end up with a pro-Trump video recommended.
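
The article doesn’t detail how Mr. Chaslot’s program worked, but its basic shape is easy to sketch: seed a crawl with a starting video, repeatedly follow the top “up next” suggestion, and tally where the chains converge. In the sketch below, fetch_up_next is a hypothetical stand-in for whatever scraping or API access the real program used:

```python
# A minimal, hypothetical sketch of a recommendation-chain crawl of the
# kind the article attributes to Mr. Chaslot. fetch_up_next() stands in
# for however the real program obtained YouTube's ranked suggestions.
from collections import Counter

def follow_chain(seed_id, fetch_up_next, depth=5):
    """From a seed video, follow the top suggestion `depth` hops deep."""
    chain = [seed_id]
    for _ in range(depth):
        suggestions = fetch_up_next(chain[-1])  # ranked list of video ids
        if not suggestions:
            break
        chain.append(suggestions[0])  # always take the top recommendation
    return chain

def tally_endpoints(seed_ids, fetch_up_next, depth=5):
    """Count which videos the chains end on, across many seed videos."""
    return Counter(follow_chain(s, fetch_up_next, depth)[-1] for s in seed_ids)
```

Run over both pro-Clinton and pro-Trump seeds, a tally of this sort is what would surface the asymmetry he reported.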

Combine this finding with other research showing that during the 2016 campaign, fake news, which tends toward the outrageous, included much more pro-Trump than pro-Clinton content, and YouTube’s tendency toward the incendiary seems evident.

YouTube has recently come under fire for recommending videos promoting the conspiracy theory that the outspoken survivors of the school shooting in Parkland, Fla., are “crisis actors” masquerading as victims. Jonathan Albright, a researcher at Columbia, recently “seeded” a YouTube account with a search for “crisis actor” and found that following the “up next” recommendations led to a network of some 9,000 videos promoting that and related conspiracy theories, including the claim that the 2012 school shooting in Newtown, Conn., was a hoax.

What we are witnessing is the computational exploitation of a natural human desire: to look “behind the curtain,” to dig deeper into something that engages us. As we click and click, we are carried along by the exciting sensation of uncovering more secrets and deeper truths. YouTube leads viewers down a rabbit hole of extremism, while Google racks up the ad sales.

Human beings have many natural tendencies that need to be vigilantly monitored in the context of modern life. For example, our craving for fat, salt and sugar, which served us well when food was scarce, can lead us astray in an environment in which fat, salt and sugar are all too plentiful and heavily marketed to us. So too our natural curiosity about the unknown can lead us astray on a website that leads us too much in the direction of lies, hoaxes and misinformation.

In effect, YouTube has created a restaurant that serves us increasingly sugary, fatty foods, loading up our plates as soon as we are finished with the last meal. Over time, our tastes adjust, and we seek even more sugary, fatty foods, which the restaurant dutifully provides. When confronted about this by the health department and concerned citizens, the restaurant managers reply that they are merely serving us what we want.

This situation is especially dangerous given how many people — especially young people — turn to YouTube for information. Google’s cheap and sturdy Chromebook laptops, which now make up more than 50 percent of the pre-college laptop education market in the United States, typically come loaded with ready access to YouTube.

This state of affairs is unacceptable but not inevitable. There is no reason to let a company make so much money while potentially helping to radicalize billions of people, reaping the financial benefits while asking society to bear so many of the costs.

How Do You Vote? 50 Million Google Images Give a Clue

What vehicle is most strongly associated with Republican voting districts? Extended-cab pickup trucks. For Democratic districts? Sedans.

Those conclusions may not be particularly surprising. After all, market researchers and political analysts have studied such things for decades.

But what is surprising is how researchers working on an ambitious project based at Stanford University reached those conclusions: by analyzing 50 million images and location data from Google Street View, the street-scene feature of the online giant’s mapping service.

For the first time, helped by recent advances in artificial intelligence, researchers are able to analyze large quantities of images, pulling out data that can be sorted and mined to predict things like income, political leanings and buying habits. In the Stanford study, computers collected details about cars in the millions of images they processed, including makes and models.

Identifying so many car images in such detail was a technical feat. But it was linking that new data set to public collections of socioeconomic and environmental information, and then tweaking the software to spot patterns and correlations, that makes the Stanford project part of what computer scientists see as the broader application of image data.
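
That linking step, joining image-derived counts to public records, is conceptually simple once the recognition work is done. A toy sketch of the kind of correlation the study reports (the CSV file and its column names are hypothetical):

```python
# A toy sketch of the study's linking step: per-precinct vehicle counts
# (produced upstream by image recognition) correlated with public voting
# records. The CSV file and its column names are hypothetical.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("precinct_vehicles.csv")  # one row per voting precinct
df["pickup_share"] = df["extended_cab_pickups"] / df["total_vehicles"]

r, p = pearsonr(df["pickup_share"], df["republican_vote_share"])
print(f"pickup share vs. Republican vote share: r={r:.2f} (p={p:.3g})")
```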

Google and Facebook are watching our every move online

You may know that hidden trackers lurk on most websites you visit, soaking up your personal information. What you may not realize, though, is that 76 percent of websites now contain hidden Google trackers, and 24 percent have hidden Facebook trackers, according to the Princeton Web Transparency & Accountability Project. The next highest is Twitter, with 12 percent. It is likely that Google or Facebook is watching you on many sites you visit, in addition to tracking you when you use their products. As a result, these two companies have amassed huge data profiles on each person, which can include your interests, purchases, search, browsing and location history, and much more. They then make your sensitive data profile available for invasive targeted advertising that can follow you around the Internet.
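
Measurements like Princeton’s come from crawling pages and recording which third-party domains they load. Their actual crawler, OpenWPM, drives a fully instrumented browser; the sketch below only inspects static HTML against a small illustrative domain list, so it understates what a page really loads:

```python
# A rough sketch of third-party tracker detection on a single page.
# Real measurement tools (e.g. Princeton's OpenWPM) observe every network
# request from an instrumented browser; scanning static HTML, as here,
# is a crude approximation. The tracker domain list is illustrative.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urlparse

TRACKER_DOMAINS = {
    "google-analytics.com", "googletagmanager.com", "doubleclick.net",
    "connect.facebook.net", "platform.twitter.com",
}

def find_trackers(url):
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    hits = set()
    for tag in soup.find_all(["script", "img", "iframe"], src=True):
        host = urlparse(tag["src"]).netloc
        hits.update(d for d in TRACKER_DOMAINS if host.endswith(d))
    return hits

print(find_trackers("https://example.com/"))
```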

So how do we move forward from here? Don’t be fooled by claims of self-regulation: any useful long-term reforms of Google and Facebook’s data privacy practices fundamentally oppose their core business models of hyper-targeted advertising based on ever more intrusive personal surveillance. Change must come from the outside. Unfortunately, we’ve seen relatively little from Washington. Congress and federal agencies need to take a fresh look at what can be done to curb these data monopolies. They first need to demand more algorithmic and privacy policy transparency, so people can truly understand how their personal information is being collected, processed and used by these companies. Only then can informed consent be possible. They also need to legislate that people own their own data, enabling real opt-outs. Finally, they need to restrict how data can be combined, including being more aggressive at blocking acquisitions that further consolidate data power, which would pave the way for more competition in digital advertising. Until we see such meaningful changes, consumers should vote with their feet.

Ex-Google engineer establishes new religion with ambition to develop an AI god

“One of the engineers behind Google’s self-driving car has established a nonprofit religious corporation with one main aim – to create a deity with artificial intelligence. According to newly uncovered documents filed to the state of California in September 2015, Anthony Levandowski serves as the CEO and president of religious organisation Way of the Future.”

Way of the Future’s startling mission: “To develop and promote the realization of a Godhead based on artificial intelligence and through understanding and worship of the Godhead contribute to the betterment of society.”

Facebook and Google own the top apps

“On mobile, where the majority of the world’s content is now consumed, Google and Facebook own eight of the top 10 apps, with apps devouring 87% of our time spent on smartphones and tablets, according to new comScore data (Figure A; chart not reproduced here).

“In sum, the majority of our time online is now mediated by just a few megacorporations, and for the most part their top incentive is to borrow our privacy just long enough to target an ad at us.”

Google forming ‘smart cities’

“An ambitious project to blanket New York and London with ultrafast Wi-Fi via so-called “smart kiosks,” which will replace obsolete public telephones, is the work of a Google-backed startup.

Each kiosk is around nine feet high and relatively flat. Each flat side houses a big-screen display that pays for the whole operation with advertising.

Each kiosk provides free, high-speed Wi-Fi for anyone in range. By selecting the Wi-Fi network at one kiosk, and authenticating with an email address, each user will be automatically connected to every other LinkNYC kiosk they get within range of. Eventually, anyone will be able to walk around most of the city without losing the connection to these hotspots.

Wide-angle cameras on each side of the kiosks point up and down the street and sidewalk, approximating a 360-degree view. If a city wants to use those cameras and sensors for surveillance, it can.

Over the next 15 years, the city will go through two further phases, in which sensor data will be processed by artificial intelligence to gain unprecedented insights about traffic, environment and human behavior, and eventually used to intelligently re-direct traffic and shape other city functions.”

Google Glass makes an official return

“Google parent company Alphabet just unveiled an enterprise version of Google Glass, licensing the smart glasses technology for business-focused applications. This is the first time in years the company has publicly talked about its plans for Glass.”

“In a blog post Tuesday, Glass project leader Jay Kothari said partners such as GE Aviation, AGCO, DHL, Dignity Health, NSF International, Sutter Health, Boeing and Volkswagen have been using Glass over the past several years, and make up just a sampling of 50 companies using the wearable.

Wired said several of these companies found the original Google Glass to be very useful in factories and other enterprise environments. Google discovered this and set up a dedicated team to build a new version of Glass for the enterprise.”

Renée DiResta: The Lunatics are Running the Asylum

Google’s crowd-sourced CAPTCHA “intelligence” turning invisible

We train the machine so well, and its use is so ubiquitous, that it can become invisible: Google is making CAPTCHAs invisible using “a combination of machine learning and advanced risk analysis that adapts to new and emerging threats,” Ars Technica reports.

“The old reCAPTCHA system was pretty easy — just a simple “I’m not a robot” checkbox would get people through your sign-up page. The new version is even simpler, and it doesn’t use a challenge or checkbox. It works invisibly in the background, somehow, to identify bots from humans.

When sites switch over to the invisible CAPTCHA system, most users won’t see CAPTCHAs at all, not even the “I’m not a robot” checkbox. If you are flagged as “suspicious” by the system, then it will display the usual challenges.

reCAPTCHA was bought by Google in 2009 and was used to put unsuspecting website users to work for Google. Some CAPTCHA systems create arbitrary problems for users to solve, but older reCAPTCHA challenges actually used problems Google’s computers needed to solve but couldn’t. Google digitizes millions of books, but sometimes the OCR (optical character recognition) software can’t recognize a word, so that word is sent into the reCAPTCHA system for solving by humans. If you’ve ever solved a reCAPTCHA that looks like a set of numbers, those were from Google’s camera-covered Street View cars, which whizz down the streets and identify house numbers. If the OCR software couldn’t figure out a house number, that number was made into a CAPTCHA for solving by humans. The grid of pictures that would ask you to “select all the cats” was used to train computer image recognition algorithms.”
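
From a site owner’s perspective, the invisible version changes little: the page’s widget (visible or not) produces a token, and the site’s backend asks Google whether that token passed the risk analysis. A sketch of the publicly documented server-side check, with the actual “risk analysis” happening entirely on Google’s side:

```python
# A sketch of the server-side half of reCAPTCHA, per Google's public
# documentation: the backend forwards the widget's token to the
# siteverify endpoint and acts on the verdict. None of the invisible
# bot-detection logic runs here; it all happens on Google's servers.
import requests

def verify_token(secret_key, token, user_ip=None):
    resp = requests.post(
        "https://www.google.com/recaptcha/api/siteverify",
        data={"secret": secret_key, "response": token, "remoteip": user_ip},
        timeout=10,
    )
    result = resp.json()
    # "success" is the main signal; later score-based versions of the API
    # also return a 0.0-1.0 "score" field for finer-grained decisions.
    return result.get("success", False)
```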

Is Google’s AI-driven image resizing algorithm ‘dishonest’?

The Stack reports on Google’s “new research into upscaling low-resolution images using machine learning to ‘fill in’ the missing details,” arguing this is “a questionable stance…continuing to propagate the idea that images contain some kind of abstract ‘DNA’, and that there might be some reliable photographic equivalent of polymerase chain reaction which could find deeper truth in low-res images than either the money spent on the equipment or the age of the equipment will allow.”

“Rapid and Accurate Image Super Resolution (RAISR) uses low and high resolution versions of photos in a standard image set to establish templated paths for upward scaling… This effectively uses historical logic, instead of pixel interpolation, to infer what the image would look like if it had been taken at a higher resolution.

It’s notable that neither their initial paper nor the supplementary examples feature human faces. It could be argued that using AI-driven techniques to reconstruct images raises some questions about whether upscaled, machine-driven digital enhancements are a legal risk, compared to the far greater expense of upgrading low-res CCTV networks with the necessary resolution, bandwidth and storage to obtain good quality video evidence.”

The article points out that “faith in the fidelity of these ‘enhanced’ images routinely convicts defendants.”
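
RAISR itself learns thousands of small filters bucketed by local edge geometry. As a heavily simplified, hypothetical sketch of the underlying idea (learning a patch-to-pixel mapping from training pairs, rather than interpolating), one can fit a single linear filter:

```python
# A toy, heavily simplified sketch of example-based upscaling in the spirit
# the article describes: learn how low-res patches map to high-res pixels
# from training pairs, instead of interpolating. RAISR learns many filters
# selected by edge angle/strength; this learns one global linear filter,
# purely to illustrate the idea.
import numpy as np
from PIL import Image

def train_filter(hi_img, scale=2, patch=5):
    """Fit one least-squares filter mapping low-res patches to high-res pixels."""
    hi = np.asarray(hi_img.convert("L"), dtype=np.float64)
    lo_small = hi_img.convert("L").resize(
        (hi_img.width // scale, hi_img.height // scale), Image.BICUBIC)
    # Cheaply upscale back to target size; the filter learns the lost detail.
    lo = np.asarray(lo_small.resize(hi_img.size, Image.BICUBIC), dtype=np.float64)
    half = patch // 2
    X, y = [], []
    for i in range(half, hi.shape[0] - half):
        for j in range(half, hi.shape[1] - half):
            X.append(lo[i - half:i + half + 1, j - half:j + half + 1].ravel())
            y.append(hi[i, j])
    w, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
    return w

def upscale(lo_img, w, scale=2, patch=5):
    """Bicubic-upscale, then re-predict each pixel with the learned filter."""
    up = lo_img.convert("L").resize(
        (lo_img.width * scale, lo_img.height * scale), Image.BICUBIC)
    lo = np.asarray(up, dtype=np.float64)
    half = patch // 2
    out = lo.copy()
    for i in range(half, lo.shape[0] - half):
        for j in range(half, lo.shape[1] - half):
            out[i, j] = lo[i - half:i + half + 1, j - half:j + half + 1].ravel() @ w
    return Image.fromarray(np.clip(out, 0, 255).astype(np.uint8))
```

The sketch makes the article’s point plain: every “recovered” pixel is a guess conditioned on training data, not detail retrieved from the original scene.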

Vint Cerf: Modern Media Are Made for Forgetting

“Vint Cerf, the living legend largely responsible for the development of the Internet protocol suite, has some concerns about history. In his current column for the Communications of the ACM, Cerf worries about the decreasing longevity of our media, and, thus, about our ability as a civilization to self-document—to have a historical record that one day far in the future might be remarked upon and learned from. Magnetic films do not quite have the staying power of clay tablets.

At stake, according to Cerf, is “the possibility that the centuries well before ours will be better known than ours will be unless we are persistent about preserving digital content. The earlier media seem to have a kind of timeless longevity while modern media from the 1800s forward seem to have shrinking lifetimes. Just as the monks and Muslims of the Middle Ages preserved content by copying into new media, won’t we need to do the same for our modern content?”

The more ephemeral media becomes across technological generations, the more it depends on the technological generation that comes next.”

It also depends on the mindset of the generation that comes next… What if we don’t even want to remember?

This isn’t a Google Streetview van, it’s a government spy truck (insofar as there’s a difference) running ANPR

“The Philadelphia Police Department admitted today that a mysterious unmarked license plate surveillance truck disguised as a Google Maps vehicle is its own.

“We have been informed that this unmarked vehicle belongs to the police department; however, the placing of any particular decal on the vehicle was not approved through any chain of command. With that being said, once this was brought to our attention, it was ordered that the decals be removed immediately.”

Brandon Worf, who for three years worked at Busch and Associates, a sales group that specializes in public safety technology, described the ALPR gear installed on the vehicle as “scary efficient” after reviewing yesterday’s photos.

Worf says that this particular model, called the ELSAG MPH-900, “is based on the use of infrared cameras to find plate numbers and letters via temperature differentials between those characters and the surrounding background through optical character recognition.”

The cameras are able to read and process “several plates simultaneously” and “in a fraction of a second.” All plates swept up in such a dragnet fashion “are logged with the time/date of the read, GPS latitude/longitude coordinates of where the read occurred, and a photo of the plate and surrounding vehicle,” he added.”
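
The per-read record Worf describes is straightforward to picture. Here is a minimal, hypothetical sketch, with the open-source Tesseract OCR engine standing in for ELSAG’s proprietary recognizer and the GPS fix passed in from the vehicle’s receiver:

```python
# A minimal, hypothetical sketch of the per-read record an ALPR system
# logs, as described above: plate text, timestamp, GPS fix, and the photo.
# Tesseract OCR (via pytesseract) stands in for ELSAG's proprietary
# character recognition; real systems also isolate the plate region first.
import json
import time
import pytesseract
from PIL import Image

def read_plate(photo_path, gps_fix):
    img = Image.open(photo_path)
    # Single-line OCR, restricted to characters that appear on plates.
    text = pytesseract.image_to_string(
        img,
        config="--psm 7 -c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789",
    )
    return {
        "plate": text.strip(),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "lat": gps_fix[0],
        "lon": gps_fix[1],
        "photo": photo_path,
    }

print(json.dumps(read_plate("plate_0042.jpg", gps_fix=(39.9526, -75.1652)), indent=2))
```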

Silicon Valley tech firms exacerbating income inequality

“Google democratized information, Uber democratized car rides, and Twitter democratized publishing a single sentence. But to the World Bank, the powerful Washington-based organisation that lends money to developing countries, Silicon Valley’s technology firms appear to be exacerbating economic inequality rather than improving it.”

Google AI has access to 1.6M people’s health records (UK)

“A document obtained by New Scientist reveals that the tech giant’s collaboration with the UK’s National Health Service goes far beyond what has been publicly announced. The document — a data-sharing agreement between Google-owned artificial intelligence company DeepMind and the Royal Free NHS Trust — gives the clearest picture yet of what the company is doing and what sensitive data it now has access to. The agreement gives DeepMind access to a wide range of healthcare data on the 1.6 million patients who pass through three London hospitals.

It includes logs of day-to-day hospital activity, such as records of the location and status of patients – as well as who visits them and when. The hospitals will also share the results of certain pathology and radiology tests.

As well as receiving this continuous stream of new data, DeepMind has access to the historical data that the Royal Free trust submits to the Secondary User Service (SUS) database – the NHS’s centralised record of all hospital treatments in the UK. This includes data from critical care and accident and emergency departments.

Google says it has no commercial plans for DeepMind’s work with Royal Free and that the current pilots are being done for free. But the data to which Royal Free is giving DeepMind access is hugely valuable. It may have to destroy its copy of the data when the agreement expires next year, but that gives ample time to mine it for health insights.”

Google files patent for injecting a device directly into your eyeball

“Following on from its contact lenses that monitor for diabetes, Google’s parent company Alphabet has filed a patent which takes that development to another level. The patent specifically covers a method for “injecting a fluid into a lens capsule of an eye, wherein a natural lens of the eye has been removed from the lens capsule.” It’s powered by “radio frequency energy” received by a small antenna inside. The gadget even has its own data storage. Forbes reports it is designed to “improve vision.”

Samsung is also one of the most recent companies to receive a patent for smart contact lenses. Their lenses are for “experimenting with new methods of delivering augmented reality interfaces and data.”

“From Uber To Eric Schmidt, Tech Is Closer To the US Government Than You’d Think”

“Alphabet’s [Google] executive chairman, Eric Schmidt, recently joined a Department of Defense advisory panel. Facebook recently hired a former director at the U.S. military’s research lab, Darpa. Uber employs Barack Obama’s former campaign manager David Plouffe and Amazon.com tapped his former spokesman Jay Carney. Google, Facebook, Uber and Apple collectively employ a couple of dozen former analysts for America’s spy agencies, who openly list their resumes on LinkedIn.

These connections are neither new nor secret. But the fact they are so accepted illustrates how tech’s leaders — even amid current fights over encryption and surveillance — are still seen as mostly U.S. firms that back up American values. Christopher Soghoian, a technologist with the American Civil Liberties Union, said low-level employees’ government connections matter less than leading executives’ ties to government. For instance, at least a dozen Google engineers have worked at the NSA, according to publicly available records on LinkedIn. And, this being Silicon Valley, not everyone who worked for a spy agency advertises that on LinkedIn. Soghoian, a vocal critic of mass surveillance, said Google hiring an ex-hacker for the NSA to work on security doesn’t really bother him. “But Eric Schmidt having a close relationship with the White House does…”
