Resources

Proposed Toronto development from Google’s Sidewalk Labs sparks concerns over data

Heated streets will melt ice and snow on contact. Sensors will monitor traffic and protect pedestrians. Driverless shuttles will carry people to their doors.

A unit of Google’s parent company Alphabet is proposing to turn a rundown part of Toronto’s waterfront into what may be the most wired community in history — to “fundamentally redefine what urban life can be.”

Dan Doctoroff, the CEO of Sidewalk Labs, envisions features like pavement that lights up to warn pedestrians of approaching streetcars. Flexible heated enclosures — described as “raincoats” for buildings — will be deployed based on weather data during Toronto’s bitter winters. Robotic waste-sorting systems will detect when a garbage bin is full and remove it before raccoons descend.

“Those are great uses of data that can improve the quality of life of people,” he said. “That’s what we want to do.”

But some Canadians are rethinking the privacy implications of giving one of the most data-hungry companies on the planet the means to wire up everything from street lights to pavement.

The concerns have intensified following a series of privacy scandals at Facebook and Google. A recent Associated Press investigation found that many Google services on iPhones and Android devices store location-tracking data even if you use privacy settings that are supposed to prevent it.

Adam Vaughan, the federal lawmaker whose district includes the development, said debate about big data and urban infrastructure is coming to cities across the world and he would rather have Toronto at the forefront of discussion.

“Google is ahead of governments globally and locally. That’s a cause for concern but it’s also an opportunity,” Vaughan said.


A Look at the Dark Side of the Lives of Some Prominent YouTubers, Who Are Increasingly Saying They’re Stressed, Depressed, Lonely, and Exhausted

Many YouTubers are finding themselves stressed, lonely and exhausted. For years, YouTubers have believed that they are loved most by their audience when they project a chirpy, grateful image. But what happens when the mask slips? This year there has been a wave of videos by prominent YouTubers talking about their burnout, chronic fatigue and depression. “This is all I ever wanted,” said Elle Mills, a 20-year-old Filipino-Canadian YouTuber in a (monetised) video entitled Burnt Out At 19, posted in May. “And why the fuck am I so un-fucking-happy? It doesn’t make any sense. You know what I mean? Because, like, this is literally my fucking dream. And I’m fucking so un-fucking-happy.”

… The anxieties are tied up with the relentless nature of their work. Tyler Blevins, AKA Ninja, makes an estimated $500,000 every month via live broadcasts of him playing the video game Fortnite on Twitch, a service for livestreaming video games that is owned by Amazon. Most of Blevins’ revenue comes from Twitch subscribers or viewers who provide one-off donations (often in the hope that he will thank them by name “on air”). Blevins recently took to Twitter to complain that he didn’t feel he could stop streaming. “Wanna know the struggles of streaming over other jobs?” he wrote, perhaps ill-advisedly for someone with such a stratospheric income. “I left for less than 48 hours and lost 40,000 subscribers on Twitch. I’ll be back today… grinding again.” There was little sympathy on Twitter for the millionaire. But the pressure he described is felt at every level of success, from the titans of the content landscape all the way down to the people with channels with just a few thousand subscribers, all of whom feel they must be constantly creating, always available and responding to their fans.

… At the end of the month he was pale, gaunt and tired in a way that, he recalls, seemed “impervious to rest”. His work, he noticed, had become increasingly rushed and harsh in tone. Yet the angry, provocative quality of his videos seemed only to be making them more popular. “Divisive content is the king of online media today, and YouTube heavily boosts anything that riles people up,” he says. “It’s one of the most toxic things: the point at which you’re breaking down is the point at which the algorithm loves you the most.”

“Constant releases build audience loyalty,” says Austin Hourigan, who runs ShoddyCast, a YouTube channel with 1.2 million subscribers. “The more loyalty you build, the more likely your viewers are to come back, which gives you the closest thing to a financial safety net in what is otherwise a capricious space.” When a YouTuber passes the 1 million subscribers mark, they are presented with a gold plaque to mark the event. Many of these plaques can be seen on shelves and walls in the background of presenters’ rooms. In this way, the size of viewership and quantity of uploads become the main markers of value.


India’s Biometric Database Is Creating A Perfect Surveillance State — And U.S. Tech Companies Are On Board

Big U.S. technology companies are involved in the construction of one of the most intrusive citizen surveillance programs in history. For the past nine years, India has been building the world’s biggest biometric database by collecting the fingerprints, iris scans and photos of nearly 1.3 billion people. For U.S. tech companies like Microsoft, Amazon and Facebook, the project, called Aadhaar (which means “proof” or “basis” in Hindi), could be a gold mine. The CEO of Microsoft has repeatedly praised the project, and local media have carried frequent reports on consultations between the Indian government and senior executives from companies like Apple and Google (in addition to South Korean-based Samsung) on how to make tech products Aadhaar-enabled. But when reporters from HuffPost and HuffPost India asked these companies in recent weeks to confirm they were integrating Aadhaar into their products, only one company — Google — gave a definitive response.

That’s because Aadhaar has become deeply controversial, and the subject of a major Supreme Court of India case that will decide the future of the program as early as this month. Launched nine years ago as a simple and revolutionary way to streamline access to welfare programs for India’s poor, the database has become Indians’ gateway to nearly any type of service — from food stamps to a passport or a cell phone connection. Practical errors in the system have caused millions of poor Indians to lose out on aid. And the exponential growth of the project has sparked concerns among security researchers and academics that Aadhaar is the first step toward a surveillance state to rival China’s.


Google tracks you even if you tell it not to

Google wants to know where you go so badly that it records your movements even when you explicitly tell it not to. An Associated Press investigation found that many Google services on Android devices and iPhones store your location data even if you’ve used privacy settings that say they will prevent it from doing so.

An app like Google Maps will remind you to allow access to location if you use it for navigating. If you agree to let it record your location over time, Google Maps will display that history for you in a “timeline” that maps out your daily movements. Storing your minute-by-minute travels carries privacy risks and has been used by police to determine the location of suspects — such as a warrant that police in Raleigh, North Carolina, served on Google last year to find devices near a murder scene. So the company will let you “pause” a setting called Location History. Google says that will prevent the company from remembering where you’ve been. Google’s support page on the subject states: “You can turn off Location History at any time. With Location History off, the places you go are no longer stored.” That isn’t true. Even with Location History paused, some Google apps automatically store time-stamped location data without asking.

For example, Google stores a snapshot of where you are when you merely open its Maps app. Automatic daily weather updates on Android phones pinpoint roughly where you are. And some searches that have nothing to do with location, like “chocolate chip cookies,” or “kids science kits,” pinpoint your precise latitude and longitude — accurate to the square foot — and save it to your Google account. The privacy issue affects some two billion users of devices that run Google’s Android operating software and hundreds of millions of iPhone users worldwide who rely on Google for maps or search.
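One way to grasp how much is retained is to pull your own archive. The sketch below reads a Location History export from Google Takeout and prints the stored points; the file name and the JSON fields (timestampMs, latitudeE7, longitudeE7) follow the legacy export layout, which Google has changed over time, so treat them as assumptions.

```python
import json
from datetime import datetime, timezone

# Inspect a Google Takeout "Location History" export. Assumes the legacy
# JSON schema: each record carries a millisecond timestamp and
# coordinates scaled by 10^7. The export format has varied over the
# years, so the field names here are an assumption.
with open("Location History.json") as f:
    records = json.load(f).get("locations", [])

for rec in records[:10]:  # the first ten stored points
    ts = datetime.fromtimestamp(int(rec["timestampMs"]) / 1000, tz=timezone.utc)
    lat = rec["latitudeE7"] / 1e7
    lon = rec["longitudeE7"] / 1e7
    print(f"{ts.isoformat()}  {lat:.6f}, {lon:.6f}")
```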


As Google Maps Renames Neighbourhoods, Residents Fume

For decades, the district south of downtown and alongside San Francisco Bay here was known as either Rincon Hill, South Beach or South of Market. This spring, it was suddenly rebranded on Google Maps to a name few had heard: the East Cut. The peculiar moniker immediately spread digitally, from hotel sites to dating apps to Uber, which all use Google’s map data. The name soon spilled over into the physical world, too. Real-estate listings beckoned prospective tenants to the East Cut. And news organizations referred to the vicinity by that term.

“It’s degrading to the reputation of our area,” said Tad Bogdan, who has lived in the neighborhood for 14 years. In a survey of 271 neighbors that he organized recently, he said, 90 percent disliked the name. The swift rebranding of the roughly 170-year-old district is just one example of how Google Maps has now become the primary arbiter of place names. With decisions made by a few Google cartographers, the identity of a city, town or neighborhood can be reshaped, illustrating the outsize influence that Silicon Valley increasingly has in the real world.


Facebook, Google, and Microsoft Use Design to Trick You Into Handing Over Your Data, New Report Warns

A study from the Norwegian Consumer Council dug into the underhanded tactics used by Microsoft, Facebook, and Google to collect user data. “The findings include privacy intrusive default settings, misleading wording, giving users an illusion of control, hiding away privacy-friendly choices, take-it-or-leave-it choices, and choice architectures where choosing the privacy friendly option requires more effort for the users,” states the report, which includes images and examples of confusing design choices and strangely worded statements involving the collection and use of personal data.

Google makes opting out of personalized ads more of a chore than it needs to be and uses multiple pages of text, unclear design language, and, as described by the report, “hidden defaults” to push users toward the company’s desired action. “If the user tried to turn the setting off, a popup window appeared explaining what happens if Ads Personalization is turned off, and asked users to reaffirm their choice,” the report explained. “There was no explanation about the possible benefits of turning off Ads Personalization, or negative sides of leaving it turned on.” Those who wish to completely avoid personalized ads must traverse multiple menus, making that “I agree” option seem like the lesser of two evils.

In Windows 10, if a user wants to opt out of “tailored experiences with diagnostic data,” they have to click a dimmed lightbulb, while the symbol for opting in is a brightly shining bulb, says the report.

Another example has to do with Facebook. The social media site makes the “Agree and continue” option much more appealing and less intimidating than the grey “Manage Data Settings” option. The report says the company-suggested option is the easiest to use. “This ‘easy road’ consisted of four clicks to get through the process, which entailed accepting personalized ads from third parties and the use of face recognition. In contrast, users who wanted to limit data collection and use had to go through 13 clicks.”


Google plan for data-driven ‘smart city’ sparks privacy, democracy concerns

In the Canadian city of Toronto, city officials are negotiating a project that will give a section of the city’s waterfront to the US tech giant Google. Under the arrangement, Google affiliate Sidewalk Labs will build and run a high-tech “radical mixed-use” site called Quayside. This “smart city” plan involves creating a neighbourhood “from the internet up”, powered and served by data, with sensors monitoring everything from air quality to pedestrian traffic, even the flushing of toilets. Amenities like garbage disposal and goods delivery are to be coordinated and driven by AI and robotics.

The proposed parcel of land isn’t huge, but it’s not insubstantial either – it covers about half a square kilometre, and there are already suggestions it could be extended.

For Eric Schmidt, executive chairman of Alphabet — the parent company of both Google and Sidewalk Labs — it’s the culmination of a long-held ambition.

“Give us a city and put us in charge,” he once famously declared.

Following the Facebook/Cambridge Analytica scandal, some, like Dr Jathan Sadowski at the University of Sydney, worry about the implications of putting a private tech company in charge of both urban development and urban life.

“What’s in it for them? It’s data,” he says. “It allows them to get really massive amounts of granular data about urban life and urban environments.”

“You’ll have a city that’s based on, or built around, proprietary platforms, data harvesting, corporate control.”


Google hasn’t stopped reading your e-mails

If you’re a Gmail user, your messages and emails likely aren’t as private as you’d think. Google reads each and every one (even if you definitely don’t), scanning your painfully long email chains and vacation responders in order to collect more data on you. Google uses the data gleaned from your messages in order to inform a whole host of other products and services, NBC News reported Thursday.

Though Google announced that it would stop using consumer Gmail content for ad personalization last July, the language permitting it to do so is still included in its current privacy policy, and it without a doubt still scans users’ emails for other purposes. Aaron Stein, a Google spokesperson, told NBC that Google also automatically extracts keyword data from users’ Gmail accounts, which is then fed into machine learning programs and other products within the Google family. Stein told NBC that Google also “may analyze [email] content to customize search results, better detect spam and malware,” a practice the company first announced back in 2012.

“We collect information about the services that you use and how you use them…” says Google’s privacy policy. “This includes information like your usage data and preferences, Gmail messages, G+ profile, photos, videos, browsing history, map searches, docs, or other Google-hosted content. Our automated systems analyze this information as it is sent and received and when it is stored.”

While Google doesn’t sell this information to third parties, it has used it to power its own advertising network and inform search results, among other things. And this is far from a closely guarded secret. The company has included disclosures relating to these practices in its privacy policy since at least 2012: “When you share information with us, for example by creating a Google Account, we can make those services even better – to show you more relevant search results and ads…,” says Google’s March 2012 privacy policy.


YouTube, the Great Radicalizer

At one point during the 2016 presidential election campaign, I watched a bunch of videos of Donald Trump rallies on YouTube. I was writing an article about his appeal to his voter base and wanted to confirm a few quotations.

Soon I noticed something peculiar. YouTube started to recommend and “autoplay” videos for me that featured white supremacist rants, Holocaust denials and other disturbing content.

Since I was not in the habit of watching extreme right-wing fare on YouTube, I was curious whether this was an exclusively right-wing phenomenon. So I created another YouTube account and started watching videos of Hillary Clinton and Bernie Sanders, letting YouTube’s recommender algorithm take me wherever it would.

Before long, I was being directed to videos of a leftish conspiratorial cast, including arguments about the existence of secret government agencies and allegations that the United States government was behind the attacks of Sept. 11. As with the Trump videos, YouTube was recommending content that was more and more extreme than the mainstream political fare I had started with.

Intrigued, I experimented with nonpolitical topics. The same basic pattern emerged. Videos about vegetarianism led to videos about veganism. Videos about jogging led to videos about running ultramarathons.

It seems as if you are never “hard core” enough for YouTube’s recommendation algorithm. It promotes, recommends and disseminates videos in a manner that appears to constantly up the stakes. Given its billion or so users, YouTube may be one of the most powerful radicalizing instruments of the 21st century.

This is not because a cabal of YouTube engineers is plotting to drive the world off a cliff. A more likely explanation has to do with the nexus of artificial intelligence and Google’s business model. (YouTube is owned by Google.) For all its lofty rhetoric, Google is an advertising broker, selling our attention to companies that will pay for it. The longer people stay on YouTube, the more money Google makes.

What keeps people glued to YouTube? Its algorithm seems to have concluded that people are drawn to content that is more extreme than what they started with — or to incendiary content in general.
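That suspected dynamic is easy to reproduce in a toy model. The sketch below is purely illustrative and is not YouTube’s system: a greedy recommender serves whatever maximizes expected watch time for a simulated viewer whose attention peaks on content slightly more intense than what they are used to, and whose tastes adapt to what they watch. Run it and the recommended intensity ratchets upward, step after step.

```python
# Toy model, not YouTube's actual system: a greedy watch-time maximizer
# recommending to a viewer whose attention peaks just beyond their
# current tolerance, and whose tolerance drifts toward what they watch.

def expected_watch_time(intensity, tolerance):
    # Watch time peaks slightly above the viewer's current tolerance.
    return max(0.0, 1.0 - abs(intensity - (tolerance + 0.05)))

catalog = [i / 100 for i in range(101)]  # content "intensity" from 0.00 to 1.00
tolerance = 0.2                          # the viewer starts with mild tastes

for step in range(10):
    # Greedy choice: recommend whatever maximizes expected watch time.
    pick = max(catalog, key=lambda c: expected_watch_time(c, tolerance))
    tolerance = 0.7 * tolerance + 0.3 * pick  # tastes adapt to what is watched
    print(f"step {step}: recommended intensity {pick:.2f}")
```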

Is this suspicion correct? Good data is hard to come by; Google is loath to share information with independent researchers. But we now have the first inklings of confirmation, thanks in part to a former Google engineer named Guillaume Chaslot.

Mr. Chaslot worked on the recommender algorithm while at YouTube. He grew alarmed at the tactics used to increase the time people spent on the site. Google fired him in 2013, citing his job performance. He maintains the real reason was that he pushed too hard for changes in how the company handles such issues.

The Wall Street Journal conducted an investigation of YouTube content with the help of Mr. Chaslot. It found that YouTube often “fed far-right or far-left videos to users who watched relatively mainstream news sources,” and that such extremist tendencies were evident with a wide variety of material. If you searched for information on the flu vaccine, you were recommended anti-vaccination conspiracy videos.

It is also possible that YouTube’s recommender algorithm has a bias toward inflammatory content. In the run-up to the 2016 election, Mr. Chaslot created a program to keep track of YouTube’s most recommended videos as well as its patterns of recommendations. He discovered that whether you started with a pro-Clinton or pro-Trump video on YouTube, you were many times more likely to end up with a pro-Trump video recommended.
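The general shape of such a program is simple; the hard part is gathering the data. Here is a minimal sketch of the approach, with get_recommendations() left as a hypothetical placeholder, since in practice it meant scraping the “up next” sidebar:

```python
from collections import Counter, deque

def get_recommendations(video_id):
    """Hypothetical placeholder: return the 'up next' video IDs shown
    alongside a video. In practice this means scraping the watch page."""
    raise NotImplementedError

def crawl(seed_id, depth=3, per_video=5):
    # Breadth-first walk of the recommendation graph from a seed video,
    # tallying how often each video is recommended along the way.
    counts, seen = Counter(), {seed_id}
    queue = deque([(seed_id, 0)])
    while queue:
        vid, d = queue.popleft()
        if d == depth:
            continue
        for rec in get_recommendations(vid)[:per_video]:
            counts[rec] += 1
            if rec not in seen:
                seen.add(rec)
                queue.append((rec, d + 1))
    # The most-recommended videos are the ones the algorithm pushes hardest.
    return counts.most_common(20)
```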

Combine this finding with other research showing that during the 2016 campaign, fake news, which tends toward the outrageous, included much more pro-Trump than pro-Clinton content, and YouTube’s tendency toward the incendiary seems evident.

YouTube has recently come under fire for recommending videos promoting the conspiracy theory that the outspoken survivors of the school shooting in Parkland, Fla., are “crisis actors” masquerading as victims. Jonathan Albright, a researcher at Columbia, recently “seeded” a YouTube account with a search for “crisis actor” and found that following the “up next” recommendations led to a network of some 9,000 videos promoting that and related conspiracy theories, including the claim that the 2012 school shooting in Newtown, Conn., was a hoax.

What we are witnessing is the computational exploitation of a natural human desire: to look “behind the curtain,” to dig deeper into something that engages us. As we click and click, we are carried along by the exciting sensation of uncovering more secrets and deeper truths. YouTube leads viewers down a rabbit hole of extremism, while Google racks up the ad sales.

Human beings have many natural tendencies that need to be vigilantly monitored in the context of modern life. For example, our craving for fat, salt and sugar, which served us well when food was scarce, can lead us astray in an environment in which fat, salt and sugar are all too plentiful and heavily marketed to us. So too our natural curiosity about the unknown can lead us astray on a website that leads us too much in the direction of lies, hoaxes and misinformation.

In effect, YouTube has created a restaurant that serves us increasingly sugary, fatty foods, loading up our plates as soon as we are finished with the last meal. Over time, our tastes adjust, and we seek even more sugary, fatty foods, which the restaurant dutifully provides. When confronted about this by the health department and concerned citizens, the restaurant managers reply that they are merely serving us what we want.

This situation is especially dangerous given how many people — especially young people — turn to YouTube for information. Google’s cheap and sturdy Chromebook laptops, which now make up more than 50 percent of the pre-college laptop education market in the United States, typically come loaded with ready access to YouTube.

This state of affairs is unacceptable but not inevitable. There is no reason to let a company make so much money while potentially helping to radicalize billions of people, reaping the financial benefits while asking society to bear so many of the costs.


How Do You Vote? 50 Million Google Images Give a Clue

What vehicle is most strongly associated with Republican voting districts? Extended-cab pickup trucks. For Democratic districts? Sedans.

Those conclusions may not be particularly surprising. After all, market researchers and political analysts have studied such things for decades.

But what is surprising is how researchers working on an ambitious project based at Stanford University reached those conclusions: by analyzing 50 million images and location data from Google Street View, the street-scene feature of the online giant’s mapping service.

For the first time, helped by recent advances in artificial intelligence, researchers are able to analyze large quantities of images, pulling out data that can be sorted and mined to predict things like income, political leanings and buying habits. In the Stanford study, computers collected details about cars in the millions of images they processed, including makes and models.

Identifying so many car images in such detail was a technical feat. But it was linking that new data set to public collections of socioeconomic and environmental information, and then tweaking the software to spot patterns and correlations, that makes the Stanford project part of what computer scientists see as the broader application of image data.
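In outline, the pipeline works like this: classify the vehicles in each district’s images, reduce the counts to a per-district profile, then correlate that profile with public records such as vote shares. The sketch below is a simplification under those assumptions; the classifier and the data joins are placeholders, not the Stanford team’s code.

```python
from collections import Counter

def classify_vehicle(image):
    """Placeholder for a trained make/model classifier (the study used a
    deep convolutional network)."""
    raise NotImplementedError

def vehicle_profile(images):
    # Fraction of each vehicle type seen in one district's images.
    counts = Counter(classify_vehicle(img) for img in images)
    total = sum(counts.values())
    return {kind: n / total for kind, n in counts.items()}

def pickup_vs_vote(districts):
    # districts: [{"profile": {...}, "gop_share": float}, ...]
    # Pearson correlation between pickup-truck share and Republican vote share.
    pairs = [(d["profile"].get("extended_cab_pickup", 0.0), d["gop_share"])
             for d in districts]
    xs, ys = zip(*pairs)
    n = len(pairs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    varx = sum((x - mx) ** 2 for x in xs)
    vary = sum((y - my) ** 2 for y in ys)
    return cov / (varx * vary) ** 0.5
```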


Google and Facebook are watching our every move online

You may know that hidden trackers lurk on most websites you visit, soaking up your personal information. What you may not realize, though, is that 76 percent of websites now contain hidden Google trackers, and 24 percent have hidden Facebook trackers, according to the Princeton Web Transparency & Accountability Project. The next highest is Twitter with 12 percent. It is likely that Google or Facebook is watching you on many of the sites you visit, in addition to tracking you when you use their products. As a result, these two companies have amassed huge data profiles on each person, which can include your interests, purchases, search, browsing and location history, and much more. They then make your sensitive data profile available for invasive targeted advertising that can follow you around the Internet.
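You can get a crude sense of this on any page you like. The sketch below is a rough heuristic, not the Princeton project’s methodology (which used an instrumented browser): it fetches a page and scans the raw HTML for a few well-known Google and Facebook tracking domains, so trackers loaded dynamically by scripts will escape it.

```python
import urllib.request

# A small, illustrative list of known tracking domains; far from complete.
TRACKER_DOMAINS = {
    "Google": ["google-analytics.com", "googletagmanager.com",
               "doubleclick.net", "googlesyndication.com"],
    "Facebook": ["connect.facebook.net", "facebook.com/tr"],
}

def find_trackers(url):
    # Fetch the page and report which tracking domains its HTML references.
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
    return {company: [d for d in domains if d in html]
            for company, domains in TRACKER_DOMAINS.items()}

if __name__ == "__main__":
    print(find_trackers("https://example.com/"))
```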

So how do we move forward from here? Don’t be fooled by claims of self-regulation, as any useful long-term reforms of Google and Facebook’s data privacy practices fundamentally oppose their core business models: hyper-targeted advertising based on more and more intrusive personal surveillance. Change must come from the outside. Unfortunately, we’ve seen relatively little from Washington. Congress and federal agencies need to take a fresh look at what can be done to curb these data monopolies. They first need to demand more algorithmic and privacy policy transparency, so people can truly understand the extent of how their personal information is being collected, processed and used by these companies. Only then can informed consent be possible. They also need to legislate that people own their own data, enabling real opt-outs. Finally, they need to restrict how data can be combined, including being more aggressive at blocking acquisitions that further consolidate data power, which will pave the way for more competition in digital advertising. Until we see such meaningful changes, consumers should vote with their feet.


Ex-Google engineer establishes new religion with ambition to develop an AI god

“One of the engineers behind Google’s self-driving car has established a nonprofit religious corporation with one main aim – to create a deity with artificial intelligence. According to newly uncovered documents filed to the state of California in September 2015, Anthony Levandowski serves as the CEO and president of religious organisation Way of the Future.”

Way of the Future’s startling mission: “To develop and promote the realization of a Godhead based on artificial intelligence and through understanding and worship of the Godhead contribute to the betterment of society.”


Facebook and Google own the top apps

“On mobile, where the majority of the world’s content is now consumed, Google and Facebook own eight of the top 10 apps, with apps devouring 87% of our time spent on smartphones and tablets, according to new comScore data.

“In sum, the majority of our time online is now mediated by just a few megacorporations, and for the most part their top incentive is to borrow our privacy just long enough to target an ad at us.”


Google forming ‘smart cities’

“An ambitious project to blanket New York and London with ultrafast Wi-Fi via so-called “smart kiosks,” which will replace obsolete public telephones, is the work of a Google-backed startup.

Each kiosk is around nine feet high and relatively flat. Each flat side houses a big-screen display that pays for the whole operation with advertising.

Each kiosk provides free, high-speed Wi-Fi for anyone in range. By selecting the Wi-Fi network at one kiosk, and authenticating with an email address, each user will be automatically connected to every other LinkNYC kiosk they get within range of. Eventually, anyone will be able to walk around most of the city without losing the connection to these hotspots.

Wide-angle cameras on each side of the kiosks point up and down the street and sidewalk, approximating a 360-degree view. If a city wants to use those cameras and sensors for surveillance, it can.

Over the next 15 years, the city will go through two further phases, in which sensor data will be processed by artificial intelligence to gain unprecedented insights about traffic, the environment and human behavior, and eventually used to intelligently re-direct traffic and shape other city functions.”


Google Glass makes an official return

“Google parent company Alphabet just unveiled an enterprise version of Google Glass, licensing the smart glasses technology for business-focused applications. This is the first time in years the company has publicly talked about its plans for Glass.”

“In a blog post Tuesday, Glass project leader Jay Kothari said partners such as GE Aviation, AGCO, DHL, Dignity Health, NSF International, Sutter Health, Boeing and Volkswagen have been using Glass over the past several years, and make up just a sampling of 50 companies using the wearable.

Wired said several of these companies found the original Google Glass to be very useful in factories and other enterprise environments. Google took notice and assigned a dedicated team to build a new version of Glass for the enterprise.”


Renée DiResta: The Lunatics are Running the Asylum


Google’s crowd-sourced CAPTCHA “intelligence” turning invisible

We train the machine so well, and its use is so ubiquitous, that it can become invisible: Google is making CAPTCHAs invisible using “a combination of machine learning and advanced risk analysis that adapts to new and emerging threats,” Ars Technica reports.

“The old reCAPTCHA system was pretty easy — just a simple “I’m not a robot” checkbox would get people through your sign-up page. The new version is even simpler, and it doesn’t use a challenge or checkbox. It works invisibly in the background, somehow, to identify bots from humans.

When sites switch over to the invisible CAPTCHA system, most users won’t see CAPTCHAs at all, not even the “I’m not a robot” checkbox. If you are flagged as “suspicious” by the system, then it will display the usual challenges.

reCAPTCHA was bought by Google in 2009 and was used to put unsuspecting website users to work for Google. Some CAPTCHA systems create arbitrary problems for users to solve, but older reCAPTCHA challenges actually used problems Google’s computers needed to solve but couldn’t. Google digitizes millions of books, but sometimes the OCR (optical character recognition) software can’t recognize a word, so that word is sent into the reCAPTCHA system for solving by humans. If you’ve ever solved a reCAPTCHA that looks like a set of numbers, those were from Google’s camera-covered Street View cars, which whizz down the streets and identify house numbers. If the OCR software couldn’t figure out a house number, that number was made into a CAPTCHA for solving by humans. The grid of pictures that would ask you to “select all the cats” was used to train computer image recognition algorithms.”
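The labeling trick is worth spelling out. Each challenge pairs a word the system already knows (the control) with one its software could not read; typing the control correctly proves you are human, and your answer for the unknown word is banked as a candidate label until enough users agree. A sketch of that consensus scheme, with thresholds and names that are illustrative rather than Google’s:

```python
from collections import Counter

votes = {}                 # unknown-word image ID -> Counter of answers
AGREEMENT_THRESHOLD = 3    # illustrative, not Google's actual threshold

def submit(control_answer, control_truth, unknown_id, unknown_answer):
    # The control word gates the whole exchange: fail it and nothing counts.
    if control_answer.strip().lower() != control_truth.strip().lower():
        return False       # treated as a bot
    tally = votes.setdefault(unknown_id, Counter())
    tally[unknown_answer.strip().lower()] += 1
    return True            # verified human; a label is banked as a side effect

def accepted_label(unknown_id):
    # Promote the most common answer once enough users agree on it.
    tally = votes.get(unknown_id)
    if tally:
        answer, count = tally.most_common(1)[0]
        if count >= AGREEMENT_THRESHOLD:
            return answer  # consensus transcription for the unknown word
    return None
```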


Is Google’s AI-driven image resizing algorithm ‘dishonest’?

The Stack reports on Google’s “new research into upscaling low-resolution images using machine learning to ‘fill in’ the missing details,” arguing this is “a questionable stance…continuing to propagate the idea that images contain some kind of abstract ‘DNA’, and that there might be some reliable photographic equivalent of polymerase chain reaction which could find deeper truth in low-res images than either the money spent on the equipment or the age of the equipment will allow.”

“Rapid and Accurate Image Super Resolution (RAISR) uses low and high resolution versions of photos in a standard image set to establish templated paths for upward scaling… This effectively uses historical logic, instead of pixel interpolation, to infer what the image would look like if it had been taken at a higher resolution.

It’s notable that neither their initial paper nor the supplementary examples feature human faces. It could be argued that using AI-driven techniques to reconstruct images raises some questions about whether upscaled, machine-driven digital enhancements are a legal risk, compared to the far greater expense of upgrading low-res CCTV networks with the necessary resolution, bandwidth and storage to obtain good quality video evidence.”

The article points out that “faith in the fidelity of these ‘enhanced’ images routinely convicts defendants.”
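To see why the “filled in” detail is statistical rather than recovered, strip the idea down. Real RAISR learns many small filters, indexed by local gradient features; the sketch below learns a single 3x3 filter by least squares from pairs of cheaply upscaled and ground-truth images, which is the same principle under simplifying assumptions. Whatever detail the filter adds to a new image comes from its training set, not from the photograph itself, which is exactly the concern when such output is treated as evidence.

```python
import numpy as np

def extract_patches(img, size=3):
    # All overlapping size-by-size patches of a 2-D image, one per row.
    h, w = img.shape
    r = size // 2
    return np.array([img[i - r:i + r + 1, j - r:j + r + 1].ravel()
                     for i in range(r, h - r) for j in range(r, w - r)])

def learn_filter(cheap_upscaled, ground_truth, size=3):
    # Solve min ||A f - b||^2: each row of A is a patch of the cheap
    # (e.g. bilinear) upscale, and b holds the true high-res pixel that
    # patch should have produced. The learned filter f encodes the
    # statistics of the training images.
    A = extract_patches(cheap_upscaled, size)
    r = size // 2
    b = ground_truth[r:-r, r:-r].ravel()
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return f.reshape(size, size)
```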


Vint Cerf: Modern Media Are Made for Forgetting

“Vint Cerf, the living legend largely responsible for the development of the Internet protocol suite, has some concerns about history. In his current column for the Communications of the ACM, Cerf worries about the decreasing longevity of our media, and, thus, about our ability as a civilization to self-document—to have a historical record that one day far in the future might be remarked upon and learned from. Magnetic films do not quite have the staying power of clay tablets.

At stake, according to Cerf, is “the possibility that the centuries well before ours will be better known than ours will be unless we are persistent about preserving digital content. The earlier media seem to have a kind of timeless longevity while modern media from the 1800s forward seem to have shrinking lifetimes. Just as the monks and Muslims of the Middle Ages preserved content by copying into new media, won’t we need to do the same for our modern content?”

The more ephemeral our media becomes across technological generations, the more it depends on the technological generation that comes next.”

It also depends on the mindset of the generation that comes next… What if we don’t even want to remember?


This isn’t a Google Street View van, it’s a government spy truck (insofar as there’s a difference) running ANPR

“The Philadelphia Police Department admitted today that a mysterious unmarked license plate surveillance truck disguised as a Google Maps vehicle is its own.

“We have been informed that this unmarked vehicle belongs to the police department; however, the placing of any particular decal on the vehicle was not approved through any chain of command. With that being said, once this was brought to our attention, it was ordered that the decals be removed immediately.”

Brandon Worf, who for three years worked at Busch and Associates, a sales group that specializes in public safety technology, described the ALPR gear installed on the vehicle as “scary efficient” after reviewing yesterday’s photos.

Worf says that this particular model, called the ELSAG MPH-900, “is based on the use of infrared cameras to find plate numbers and letters via temperature differentials between those characters and the surrounding background through optical character recognition.”

The cameras are able to read and process “several plates simultaneously” and “in a fraction of a second.” All plates swept up in such a dragnet fashion “are logged with the time/date of the read, GPS latitude/longitude coordinates of where the read occurred, and a photo of the plate and surrounding vehicle,” he added.”
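Put together, every read amounts to a small structured record swept into a database. A sketch of what one such log entry plausibly contains, going only by Worf’s description; the field names are illustrative, not the ELSAG MPH-900’s actual schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PlateRead:
    # Illustrative fields, following Worf's description of what is logged.
    plate: str            # characters recovered by OCR
    read_at: datetime     # time/date of the read
    lat: float            # GPS latitude where the read occurred
    lon: float            # GPS longitude
    plate_image: bytes    # photo of the plate
    context_image: bytes  # photo of the surrounding vehicle

def log_read(log, read):
    # Every passing plate is stored, not just watch-list hits; that
    # indiscriminate retention is what makes the collection a dragnet.
    log.append(read)
```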
