Resources

Seemingly Normal Lightning Cable Will Leak Everything You Type

It looks like a Lightning cable, it works like a Lightning cable, and I can use it to connect my keyboard to my Mac. But it is actually a malicious cable that can record everything I type, including passwords, and wirelessly send that data to a hacker who could be more than a mile away. This is the new version of a series of penetration testing tools made by the security researcher known as MG. MG previously demoed an earlier version of the cables for Motherboard at the DEF CON hacking conference in 2019. Shortly after that, MG said he had successfully moved the cables into mass production, and cybersecurity vendor Hak5 started selling the cables. But the more recent cables come in new physical variations, including Lightning to USB-C, and include more capabilities for hackers to play with.

“There were people who said that Type C cables were safe from this type of implant because there isn’t enough space. So, clearly, I had to prove that wrong. :),” MG told Motherboard in an online chat. The OMG Cables, as they’re called, work by creating a Wi-Fi hotspot of their own that a hacker can connect to from their own device. From there, an interface in an ordinary web browser lets the hacker start recording keystrokes. The malicious implant itself takes up around half the length of the plastic shell, MG said. MG said the new cables also have geofencing features, where a user can trigger or block the device’s payloads based on the physical location of the cable. “It pairs well with the self-destruct feature if an OMG Cable leaves the scope of your engagement and you do not want your payloads leaking or being accidentally run against random computers,” he said. “We tested this out in downtown Oakland and were able to trigger payloads at over 1 mile,” he added. He said that the Type C cables allow the same sort of attacks to be carried out against smartphones and tablets. Other improvements include the ability to change keyboard mappings and to forge the identity of specific USB devices, such as pretending to be a device that leverages a particular vulnerability on a system.

Investigation Reveals Widespread Cellphone Surveillance of the Innocent

Cellphones “can be transformed into surveillance devices,” writes the Guardian, reporting startling new details about which innocent people are still being surveilled (as part of a collaborative reporting project with 16 other media outlets led by the French nonprofit Forbidden Stories).

Long-time Slashdot reader shanen shared the newspaper’s critique of a “privatised government surveillance industry” that’s made NSO a billion-dollar company, thanks to its phone-penetrating spy software Pegasus:
[NSO] insists only carefully vetted government intelligence and law enforcement agencies can use Pegasus, and only to penetrate the phones of “legitimate criminal or terror group targets”. Yet in the coming days the Guardian will be revealing the identities of many innocent people who have been identified as candidates for possible surveillance by NSO clients in a massive leak of data… The presence of their names on this list indicates the lengths to which governments may go to spy on critics, rivals and opponents.

First we reveal how journalists across the world were selected as potential targets by these clients prior to a possible hack using NSO surveillance tools. Over the coming week we will be revealing the identities of more people whose phone numbers appear in the leak. They include lawyers, human rights defenders, religious figures, academics, businesspeople, diplomats, senior government officials and heads of state. Our reporting is rooted in the public interest. We believe the public should know that NSO’s technology is being abused by the governments who license and operate its spyware.

But we also believe it is in the public interest to reveal how governments look to spy on their citizens and how seemingly benign processes such as HLR lookups [which track the general locations of cellphone users] can be exploited in this environment.

It is not possible to know without forensic analysis whether the phone of someone whose number appears in the data was actually targeted by a government or whether it was successfully hacked with NSO’s spyware. But when our technical partner, Amnesty International’s Security Lab, conducted forensic analysis on dozens of iPhones that belonged to potential targets at the time they were selected, they found evidence of Pegasus activity in more than half.

The investigators say that potential targets included nearly 200 journalists around the world, including numerous reporters from CNN, the Associated Press, Voice of America, the New York Times, the Wall Street Journal, Bloomberg News, Le Monde in France, and even the editor of the Financial Times.

In addition, the investigators say they found evidence the Pegasus software had been installed on the phone of the fiancée of murdered Saudi journalist Jamal Khashoggi. NSO denied this to the Washington Post, but also insisted that it simply licenses its software to clients and that the company “has no insight” into those clients’ specific intelligence activities.

The Washington Post reports that Amnesty’s Security Lab found evidence of Pegasus attacks on 37 of 67 smartphones from the list which they tested. But beyond that “for the remaining 30, the tests were inconclusive, in several cases because the phones had been replaced. Fifteen of the phones were Android devices, none of which showed evidence of successful infection. However, unlike iPhones, Androids do not log the kinds of information required for Amnesty’s detective work.”
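Much of the forensic approach described here comes down to indicator matching: artifacts extracted from a device, such as domains contacted or process names, are compared against a published list of indicators of compromise. The sketch below shows that general pattern in Python; the indicator domains and log entries are hypothetical placeholders, not real Pegasus infrastructure, and actual investigations rely on Amnesty's published tooling and indicator lists.

```python
# A minimal sketch of indicator-of-compromise matching, the general pattern
# behind forensic checks like Amnesty's: compare artifacts pulled off a device
# (here, domains seen in network or process logs) against a list of known-bad
# domains. The indicator values and log entries are hypothetical placeholders.

from urllib.parse import urlparse

# Hypothetical indicators, standing in for a published IOC list.
EXAMPLE_IOCS = {
    "example-infrastructure-1.com",
    "example-infrastructure-2.net",
}

def extract_domain(entry: str) -> str:
    """Normalize a log entry (full URL or bare hostname) to a lowercase domain."""
    parsed = urlparse(entry if "://" in entry else f"//{entry}", scheme="https")
    return (parsed.hostname or "").lower()

def scan_device_log(entries, iocs):
    """Return entries whose domain matches an indicator or one of its subdomains."""
    hits = []
    for entry in entries:
        domain = extract_domain(entry)
        if any(domain == ioc or domain.endswith("." + ioc) for ioc in iocs):
            hits.append(entry)
    return hits

if __name__ == "__main__":
    observed = [
        "https://www.apple.com/",
        "https://cdn.example-infrastructure-1.com/payload",  # hypothetical hit
    ]
    for hit in scan_device_log(observed, EXAMPLE_IOCS):
        print("possible indicator match:", hit)
```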

Familiar privacy measures like strong passwords and encryption offer little help against Pegasus, which can attack phones without any warning to users. It can read anything on a device that a user can, while also stealing photos, recordings, location records, communications, passwords, call logs and social media posts. Spyware also can activate cameras and microphones for real-time surveillance.

Edward Snowden Calls For Spyware Trade Ban Amid Pegasus Revelations

Snowden, who in 2013 blew the whistle on the secret mass surveillance programs of the US National Security Agency, described for-profit malware developers as “an industry that should not exist.” He made the comments in an interview with the Guardian after the first revelations from the Pegasus project, a journalistic investigation by a consortium of international media organizations into the NSO Group and its clients. […] Snowden said the consortium’s findings illustrated how commercial malware had made it possible for repressive regimes to place vastly more people under the most invasive types of surveillance. For traditional police operations to plant bugs or wiretap a suspect’s phone, law enforcement would need to “break into somebody’s house, or go to their car, or go to their office, and we’d like to think they’ll probably get a warrant,” he said. But commercial spyware made it cost-efficient for targeted surveillance against vastly more people. “If they can do the same thing from a distance, with little cost and no risk, they begin to do it all the time, against everyone who’s even marginally of interest,” he said. “If you don’t do anything to stop the sale of this technology, it’s not just going to be 50,000 targets. It’s going to be 50 million targets, and it’s going to happen much more quickly than any of us expect.”

Part of the problem arose from the fact that different people’s mobile phones were functionally identical to one another, he said. “When we’re talking about something like an iPhone, they’re all running the same software around the world. So if they find a way to hack one iPhone, they’ve found a way to hack all of them.” He compared companies commercializing vulnerabilities in widely used mobile phone models to an industry of “infectioneers” deliberately trying to develop new strains of disease. “It’s like an industry where the only thing they did was create custom variants of Covid to dodge vaccines,” he said. “Their only products are infection vectors. They’re not security products. They’re not providing any kind of protection, any kind of prophylactic. They don’t make vaccines — the only thing they sell is the virus.”

Snowden said commercial malware such as Pegasus was so powerful that ordinary people could in effect do nothing to stop it. Asked how people could protect themselves, he said: “What can people do to protect themselves from nuclear weapons? There are certain industries, certain sectors, from which there is no protection, and that’s why we try to limit the proliferation of these technologies. We don’t allow a commercial market in nuclear weapons.” He said the only viable solution to the threat of commercial malware was an international moratorium on its sale. “What the Pegasus project reveals is the NSO Group is really representative of a new malware market, where this is a for-profit business,” he said. “The only reason NSO is doing this is not to save the world, it’s to make money.” He said a global ban on the trade in infection vectors would prevent commercial abuse of vulnerabilities in mobile phones, while still allowing researchers to identify and fix them. “The solution here for ordinary people is to work collectively. This is not a problem that we want to try and solve individually, because it’s you versus a billion-dollar company,” he said. “If you want to protect yourself you have to change the game, and the way we do that is by ending this trade.”

Amazon Wants To Monitor You In Your Sleep, For Your Benefit

The Federal Communications Commission on Friday granted Amazon approval to use a radar sensor to sense motion and “enable contactless sleep tracing functionalities.” Amazon on June 22 asked the FCC, which regulates airwave uses, for permission to market a device that uses radar. The technology captures movement in three dimensions, enabling a user to control its features through simple gestures and movements, the company said in a filing. The capability, according to Amazon, could help people “with mobility, speech, or tactile impairments,” and it could monitor sleep with a high degree of precision.

“The use of Radar Sensors in sleep tracking could improve awareness and management of sleep hygiene, which in turn could produce significant health benefits for many Americans,” Amazon said in its filing. “Radar Sensors will allow consumers to recognize potential sleep issues.” Amazon didn’t immediately respond to a request for comment. The company didn’t fully describe the device in its filing, but did say it would not be a mobile device. The FCC earlier granted similar permission to Google for radar to enable touchless control of Pixel smartphones, the agency said in its letter approving Amazon’s request.

Your Credit Score Should Be Based On Your Web History, IMF Says

In a new blog post for the International Monetary Fund, four researchers presented their findings from a working paper that examines the current relationship between finance and tech as well as its potential future. Gazing into their crystal ball, the researchers see the possibility of using the data from your browsing, search, and purchase history to create a more accurate mechanism for determining the credit rating of an individual or business. They believe that this approach could result in greater lending to borrowers who would potentially be denied by traditional financial institutions. At its heart, the paper is trying to wrestle with the dawning notion that the institutional banking system is facing a serious threat from tech companies like Google, Facebook, and Apple. The researchers identify two key areas in which this is true: Tech companies have greater access to soft-information, and messaging platforms can take the place of the physical locations that banks rely on for meeting with customers.

The concept of using your web history to inform credit ratings is framed around the notion that lenders rely on hard-data that might obscure the worthiness of a borrower or paint an unnecessarily dire picture during hard times. Citing soft-data points like “the type of browser and hardware used to access the internet, the history of online searches and purchases” that could be incorporated into evaluating a borrower, the researchers believe that when a lender has a more intimate relationship with the potential client’s history, they might be more willing to cut them some slack. […] But how would all this data be incorporated into credit ratings? Machine learning, of course. It’s black boxes all the way down. The researchers acknowledge that there will be privacy and policy concerns related to incorporating this kind of soft-data into credit analysis. And they do little to explain how this might work in practice.
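To make the mechanics concrete, here is a deliberately tiny sketch of the "soft data plus machine learning" idea in Python with scikit-learn. Every feature, value, and label below is fabricated purely for illustration; nothing here reflects how any actual lender, or the paper's authors, would build such a model.

```python
# Toy sketch of the idea in the working paper: augment "hard" credit data with
# "soft" behavioral signals and let a model learn the weighting. Every feature,
# value, and label below is fabricated purely for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: [income_to_debt_ratio, years_of_credit_history,        <- hard data
#           uses_up_to_date_browser, late_night_purchase_share]   <- soft data
X = np.array([
    [2.5, 10, 1, 0.05],
    [0.8,  1, 0, 0.60],
    [1.9,  6, 1, 0.10],
    [0.6,  2, 0, 0.45],
    [3.1, 12, 1, 0.02],
    [0.7,  1, 1, 0.55],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = repaid, 0 = defaulted (made-up labels)

model = LogisticRegression().fit(X, y)

applicant = np.array([[1.2, 3, 1, 0.20]])
print("estimated repayment probability:", model.predict_proba(applicant)[0, 1])

# The "black box" criticism in a nutshell: these weights come from historical
# correlations, not from any credit rule a borrower could inspect or contest.
print("learned feature weights:", model.coef_[0])
```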

How Big Tech Created a Data ‘Treasure Trove’ for Police

When U.S. law enforcement officials need to cast a wide net for information, they’re increasingly turning to the vast digital ponds of personal data created by Big Tech companies via the devices and online services that have hooked billions of people around the world.

Data compiled by four of the biggest tech companies shows that law enforcement requests for user information — phone calls, emails, texts, photos, shopping histories, driving routes and more — have more than tripled in the U.S. since 2015. Police are also increasingly savvy about covering their tracks so as not to alert suspects of their interest.

That’s the backdrop for recent revelations that the Trump-era U.S. Justice Department sought data from Apple, Microsoft and Google about members of Congress, their aides and news reporters in leak investigations — then pursued court orders that blocked those companies from informing their targets.

In just the first half of 2020 — the most recent data available — Apple, Google, Facebook and Microsoft together fielded more than 112,000 data requests from local, state and federal officials. The companies agreed to hand over some data in 85% of those cases. Facebook, including its Instagram service, accounted for the largest number of disclosures.

Consider Newport, Rhode Island, a coastal city of 24,000 residents that attracts a flood of summer tourists. Fewer than 100 officers patrol the city — but they make multiple requests a week for online data from tech companies.

That’s because most crimes — from larceny and financial scams to a recent fatal house party stabbing at a vacation rental booked online — can be at least partly traced on the internet. Tech providers, especially social media platforms, offer a “treasure trove of information” that can help solve them, said Lt. Robert Salter, a supervising police detective in Newport.

TikTok Sued For Billions Over Use of Children’s Data

Lawyers will allege that TikTok takes children’s personal information, including phone numbers, videos, exact location and biometric data, without sufficient warning, transparency or the necessary consent required by law, and without children or parents knowing what is being done with that information. TikTok has more than 800 million users worldwide and parent firm ByteDance made billions in profits last year, with the vast majority of that coming via advertising revenue.

US Intelligence May Partner With Private Firms To Monitor "Extremist Chatter" Online

The Biden administration is considering using outside firms to track extremist chatter by Americans online, an effort that would expand the government’s ability to gather intelligence but could draw criticism over surveillance of US citizens. The Department of Homeland Security is limited in how it can monitor citizens online without justification and is banned from activities like assuming false identities to gain access to private messaging apps used by extremist groups such as the Proud Boys or Oath Keepers. Instead, federal authorities can only browse through unprotected information on social media sites like Twitter and Facebook and other open online platforms. The plan being discussed inside DHS, according to multiple sources, would, in effect, allow the department to circumvent those limits. A source familiar with the effort said it is not about decrypting data but rather using outside entities who can legally access these private groups to gather large amounts of information that could help DHS identify key narratives as they emerge.

Even as the DHS eyes a more robust use of its intelligence authorities, it continues to face fierce scrutiny on Capitol Hill over its handling of the Portland protests last summer — raising the possibility that at least some lawmakers will push back on the effort. The department — then led by Trump appointees but staffed by career officials, some of whom remain on the job — collected and disseminated open source reports on U.S. journalists who were publicly reporting on the protests.

Amazon Delivery Drivers Forced to Sign ‘Biometric Consent’ Form or Lose Job

Amazon delivery drivers nationwide have to sign a “biometric consent” form this week that grants the tech behemoth permission to use AI-powered cameras to access drivers’ location, movement, and biometric data. If the company’s delivery drivers, who number around 75,000 in the United States, refuse to sign these forms, they lose their jobs. The form requires drivers to agree to facial recognition and other biometric data collection within the trucks they drive.

“Amazon may… use certain Technology that processes Biometric Information, including on-board safety camera technology which collects your photograph for the purposes of confirming your identity and connecting you to your driver account,” the form reads. “Using your photograph, this Technology, may create Biometric Information, and collect, store, and use Biometric Information from such photographs.”

It adds that “this Technology tracks vehicle location and movement, including miles driven, speed, acceleration, braking, turns, and following distance … as a condition of delivering packages for Amazon, you consent to the use of Technology.”

Intelligence Analysts Use US Smartphone Location Data Without Warrants, Memo Says

A military arm of the intelligence community buys commercially available databases containing location data from smartphone apps and searches it for Americans’ past movements without a warrant, according to an unclassified memo obtained by The New York Times. Defense Intelligence Agency analysts have searched for the movements of Americans within a commercial database in five investigations over the past two and a half years, agency officials disclosed in a memo they wrote for Senator Ron Wyden, Democrat of Oregon.

The disclosure sheds light on an emerging loophole in privacy law during the digital age: In a landmark 2018 ruling known as the Carpenter decision, the Supreme Court held that the Constitution requires the government to obtain a warrant to compel phone companies to turn over location data about their customers. But the government can instead buy similar data from a broker — and does not believe it needs a warrant to do so. “D.I.A. does not construe the Carpenter decision to require a judicial warrant endorsing purchase or use of commercially available data for intelligence purposes,” the agency memo said.

Mr. Wyden has made clear that he intends to propose legislation to add safeguards for Americans’ privacy in connection with commercially available location data. In a Senate speech this week, he denounced circumstances “in which the government, instead of getting an order, just goes out and purchases the private records of Americans from these sleazy and unregulated commercial data brokers who are simply above the law.” He called the practice unacceptable and an intrusion on constitutional privacy rights. “The Fourth Amendment is not for sale,” he said.

NSO Used Real People’s Location Data To Pitch Its Contact-Tracing Tech

NSO, a private intelligence company best known for developing and selling governments access to its Pegasus spyware, earlier this year pitched its contact-tracing system, dubbed Fleming, aimed at helping governments track the spread of COVID-19. Fleming is designed to allow governments to feed location data from cell phone companies to visualize and track the spread of the virus. NSO gave several news outlets each a demo of Fleming, which NSO says helps governments make public health decisions “without compromising individual privacy.” But in May, a security researcher told TechCrunch that he found an exposed database storing thousands of location data points used by NSO to demonstrate how Fleming works — the same demo seen by reporters weeks earlier. TechCrunch reported the apparent security lapse to NSO, which quickly secured the database but said that the location data was “not based on real and genuine data.” NSO’s claim that the location data wasn’t real differed from reports in Israeli media, which said NSO had used phone location data obtained from advertising platforms, known as data brokers, to “train” the system. Academic and privacy expert Tehilla Shwartz Altshuler, who was also given a demo of Fleming, said NSO told her that the data was obtained from data brokers, which sell access to vast troves of aggregate location data collected from the apps installed on millions of phones.
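NSO has not published Fleming's internals, but the core operation such a system performs is straightforward: bucket raw location pings into coarse grid cells so analysts see densities rather than individual movements. The sketch below illustrates that aggregation step with fabricated coordinates; it is a conceptual stand-in, not a description of Fleming itself.

```python
# Conceptual stand-in for the aggregation step a system like Fleming performs:
# bucket raw (latitude, longitude) pings into coarse grid cells so analysts see
# densities rather than individual movements. All coordinates are fabricated.

from collections import Counter

def grid_cell(lat, lon, cell_deg=0.01):
    """Snap a coordinate to a roughly 1 km grid cell (0.01 degrees)."""
    return (round(lat // cell_deg * cell_deg, 4), round(lon // cell_deg * cell_deg, 4))

pings = [
    (32.0850, 34.7820),   # fabricated device pings
    (32.0853, 34.7829),
    (32.0951, 34.7751),
]

density = Counter(grid_cell(lat, lon) for lat, lon in pings)
for cell, count in density.most_common():
    print(f"cell {cell}: {count} pings")

# The privacy problem the article describes: the raw feed behind this view,
# left exposed or resold, still contains the individual-level points.
```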

NSO is currently embroiled in a lawsuit with Facebook-owned WhatsApp, which last year blamed NSO for exploiting an undisclosed vulnerability in WhatsApp to infect some 1,400 phones with Pegasus, including journalists and human rights defenders. NSO says it should be afforded legal immunity because it acts on behalf of governments.

Dozens of Journalists’ iPhones Hacked With NSO ‘Zero-Click’ Spyware, Says Citizen Lab

For more than a year, London-based reporter Rania Dridi and at least 36 journalists, producers and executives working for the Al Jazeera news agency were targeted with a so-called “zero-click” attack that exploited a now-fixed vulnerability in Apple’s iMessage. The attack invisibly compromised the devices without having to trick the victims into opening a malicious link. Citizen Lab, the internet watchdog at the University of Toronto, was asked to investigate earlier this year after one of the victims, Al Jazeera investigative journalist Tamer Almisshal, suspected that his phone may have been hacked. In a technical report out Sunday and shared with TechCrunch, the researchers say they believe the journalists’ iPhones were infected with the Pegasus spyware, developed by Israel-based NSO Group. The researchers analyzed Almisshal’s iPhone and found that between July and August it had connected to servers known to be used by NSO for delivering the Pegasus spyware. The device revealed a burst of network activity that suggests the spyware may have been delivered silently over iMessage. Logs from the phone show that the spyware was likely able to secretly record the microphone and phone calls, take photos using the phone’s camera, access the victim’s passwords, and track the phone’s location.

Surveillance Compounded: Real-Time Crime Centers in the United States

Over the last two decades, law enforcement agencies across the United States have been obtaining more and more sophisticated surveillance technologies to collect data. Technologies such as networked cameras, automated license plate readers, and gunshot detection are deployed around the clock, as are the tools to process this data, such as predictive policing software and AI-enhanced video analytics. The last five years have seen a distinct trend in which police have begun deploying all of this technology in conjunction with one another. The technologies, working in concert, are being consolidated and fed into physical locations called Real-Time Crime Centers (RTCCs). These high-tech hubs, filled with walls of TV monitors and computer workstations for sworn officers and civilian analysts, not only exploit huge amounts of data, but also are used to justify an increase in surveillance technology through new “data-driven” or “intelligence-led” policing strategies.

As part of the Atlas of Surveillance project, the Electronic Frontier Foundation and students from the Reynolds School of Journalism at the University of Nevada, Reno have identified more than 80 RTCCs across the United States, with heavy concentrations in the South and the Northeast. In this report, we highlight the capabilities and controversies surrounding 7 of these facilities. As this trend expands, it is crucial that the public understands how the technologies are combined to collect data about people as they move through their day-to-day lives.

What Modern Video Surveillance Looks Like

A few years ago, when you saw a security camera, you may have thought that the video feed went to a VCR somewhere in a back office that could only be accessed when a crime occurred. Or maybe you imagined a sleepy guard who paid only half-attention, and only when they discovered a crime in progress. In the age of internet connectivity, it’s now easy to imagine footage sitting on a server somewhere, with any image inaccessible except to someone willing to fast-forward through hundreds of hours of footage.

That may be how it worked in 1990s heist movies, and it may be how a homeowner still sorts through their own home security camera footage. But that’s not how cameras operate in today’s security environment. Instead, advanced algorithms are watching every frame on every camera and documenting every person, animal, vehicle, and backpack as they move through physical space, and thus from camera to camera, over an extended period of time.
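As a rough illustration of that pattern, the sketch below watches a video file frame by frame, flags moving objects, and logs where and when they appear. It uses only OpenCV background subtraction, which is far cruder than the learned detectors and cross-camera re-identification real analytics platforms use; the video path is a placeholder.

```python
# A deliberately crude stand-in for the pipeline described above: watch a video
# feed frame by frame, detect moving objects, and log when and where they
# appear. Real analytics platforms use learned detectors and cross-camera
# re-identification; this uses only OpenCV background subtraction (OpenCV 4.x),
# and "camera1.mp4" is a placeholder path.

import cv2

capture = cv2.VideoCapture("camera1.mp4")
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
frame_index = 0

while True:
    ok, frame = capture.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        if cv2.contourArea(contour) < 500:   # ignore small noise blobs
            continue
        x, y, w, h = cv2.boundingRect(contour)
        # A real system would attach a track ID here that follows the same
        # object from frame to frame, and from camera to camera.
        print(f"frame {frame_index}: object at x={x} y={y} w={w} h={h}")
    frame_index += 1

capture.release()
```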

US Used Patriot Act To Gather Logs of Website Visitors

The government has interpreted a high-profile provision of the Patriot Act as empowering F.B.I. national security investigators to collect logs showing who has visited particular web pages, documents show. But the government stops short of using that law to collect the keywords people submit to internet search engines because it considers such terms to be content that requires a warrant to gather, according to letters produced by the Office of the Director of National Intelligence. The disclosures come at a time when Congress is struggling with new proposals to limit the law, known as Section 215 of the Patriot Act. The debate ran aground in the spring amid erratic messages from President Trump, but is expected to resume after President-elect Joseph R. Biden Jr. takes the oath of office in January.

In May, 59 senators voted to bar the use of Section 215 to collect internet search terms or web browsing activity, but negotiations broke down in the House. During that period, Senator Ron Wyden, Democrat of Oregon and one of the sponsors of the proposed ban, wrote to the director of national intelligence seeking clarity about any such use. Six months later, the Trump administration finally replied — initially, it turned out, in a misleading way. In a Nov. 6 letter to Mr. Wyden, John Ratcliffe, the intelligence director, wrote that Section 215 was not used to gather internet search terms, and that none of the 61 orders issued last year under that law by the Foreign Intelligence Surveillance Court involved collection of “web browsing” records. Mr. Wyden’s office provided that letter to The New York Times, arguing that it meant Mr. Wyden’s proposal in May — which he sponsored with Senator Steve Daines, Republican of Montana — could be enacted into law without any operational costs.

But The Times pressed Mr. Ratcliffe’s office and the F.B.I. to clarify whether it was defining “web browsing” activity to encompass logging all visitors to a particular website, in addition to a particular person’s browsing among different sites. The next day, the Justice Department sent a clarification to Mr. Ratcliffe’s office, according to a follow-up letter he sent to Mr. Wyden on Nov. 25. In fact, “one of those 61 orders resulted in the production of information that could be characterized as information regarding browsing,” Mr. Ratcliffe wrote in the second letter. Specifically, one order had approved collection of logs revealing which computers “in a specified foreign country” had visited “a single, identified U.S. web page.” Mr. Ratcliffe expressed regret “that this additional information was not included in my earlier letter” to the senator, and suggested his staff might take further “corrective action.” In a statement, Mr. Wyden said the letters raise “all kinds of new questions, including whether, in this particular case, the government has taken steps to avoid collecting Americans’ web browsing information.” “More generally,” Mr. Wyden continued, “the D.N.I. has provided no guarantee that the government wouldn’t use the Patriot Act to intentionally collect Americans’ web browsing information in the future, which is why Congress must pass the warrant requirement that has already received support from a bipartisan majority in the Senate.”

Google Illegally Spied On Workers Before Firing Them, US Labor Board Alleges

Google violated US labor laws by spying on workers who were organizing employee protests, then firing two of them, according to a complaint to be filed by the National Labor Relations Board (NLRB) today. The complaint names two employees, Laurence Berland and Kathryn Spiers, both of whom were fired by the company in late 2019 in connection with employee activism. Berland was organizing against Google’s decision to work with IRI Consultants, a firm widely known for its anti-union efforts, when he was let go for reviewing other employees’ calendars. Now, the NLRB has found Google’s policy against employees looking at certain coworkers’ calendars is unlawful. “Google’s hiring of IRI is an unambiguous declaration that management will no longer tolerate worker organizing,” Berland said in a statement. “Management and their union busting cronies wanted to send that message, and the NLRB is now sending their own message: worker organizing is protected by law.”

Spiers was fired after she created a pop-up for Google employees visiting the IRI Consultants website. “Googlers have the right to participate in protected concerted activities,” the notification read, according to The Guardian. The company said Spiers had violated security policies, a statement that hurt her reputation in the tech community. Now, the NLRB has found the firing was unlawful. “This week the NLRB issued a complaint on my behalf. They found that I was illegally terminated for trying to help my colleagues,” Spiers said. “Colleagues and strangers believe I abused my role because of lies told by Google management while they were retaliating against me. The NLRB can order Google to reinstate me, but it cannot reverse the harm done to my credibility.”

Microsoft Also Patented Tech to Score Meetings Using Filmed Body Language, Facial Expressions

Newly surfaced Microsoft patent filings describe a system for deriving and predicting “overall quality scores” for meetings using data such as body language, facial expressions, room temperature, time of day, and number of people in the meeting. The system uses cameras, sensors, and software tools to determine, for example, “how much a participant contributes to a meeting vs performing other tasks (e.g., texting, checking email, browsing the Internet).”

The “meeting insight computing system” would then predict the likelihood that a group will hold a high-quality meeting. It would flag potential challenges when an organizer is setting the meeting up, and recommend alternative venues, times, or people to include in the meeting, for example… A patent application made public Nov. 12 notes, “many organizations are plagued by overly long, poorly attended, and recurring meetings that could be modified and/or avoided if more information regarding meeting quality was available.” The approach would apply to in-person and virtual meetings, and hybrids of the two…
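The filings describe inputs and outputs but no formula, so the following is purely a toy illustration of how a "meeting insight" predictor could fold such signals into a single score. The features, thresholds, and weights are invented.

```python
# Toy illustration only: how a "meeting insight" predictor could fold the kinds
# of signals the filings mention into one score. The features, thresholds, and
# weights are invented; the patent does not disclose a formula.

def predicted_meeting_quality(attendees, scheduled_minutes, start_hour,
                              fraction_multitasking):
    """Return a 0..1 score where higher means a 'better' predicted meeting."""
    score = 1.0
    if attendees > 8:
        score -= 0.05 * (attendees - 8)        # large meetings penalized
    if scheduled_minutes > 60:
        score -= 0.2                           # overly long meetings penalized
    if start_hour < 9 or start_hour >= 17:
        score -= 0.1                           # outside core working hours
    score -= 0.5 * fraction_multitasking       # inferred texting/email share
    return max(0.0, min(1.0, score))

print(predicted_meeting_quality(attendees=12, scheduled_minutes=90,
                                start_hour=16, fraction_multitasking=0.3))
```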

The filings do not detail any potential privacy safeguards. A Microsoft spokesperson declined to comment on the patent filings in response to GeekWire’s inquiry. To be sure, patents are not products, and there’s no sign yet that Microsoft plans to roll out this hypothetical system. Microsoft has established an internal artificial intelligence ethics office and a companywide committee to ensure that its AI products live by its principles of responsible AI, including transparency and privacy. However, the filings are a window into the ideas floating around inside Microsoft, and they’re consistent with the direction the company is already heading.

Demand For Employee Surveillance Increased As Workers Transitioned To Home Working

A new study shows that the demand for employee surveillance software was up 55% in June 2020 compared to the pre-pandemic average. From webcam access to random screenshot monitoring, these surveillance software products can record almost everything an employee does on their computer. VPN review website Top10VPN used its global monitoring data to analyze over 200 terms related to employee surveillance software. It took into account both generic and brand-specific queries for its study, which compared searches during March-May 2020 with internet searches in the preceding year. Global demand for employee monitoring software increased by 108% in April and 70% in May 2020 compared with searches carried out the preceding year. Queries for “How to monitor employees working from home” increased by 1,705% in April and 652% in May 2020 compared with searches carried out the preceding year.

The surge in popularity of such an open-ended phrase reveals how unprepared many companies were for the abrupt shift to mass home-working. The most popular surveillance tools are Time Doctor, Hubstaff, and FlexiSPY. The tools with the biggest increase in demand include Teramind, DeskTime, Kickidler, and Time Doctor, with interest in the last of these tripling compared to pre-pandemic levels. The top three tools account for almost 60% of global demand for surveillance software because of the range of features offered. The radical shift away from office working has clearly made employers nervous about a reduction in productivity and its potential impact on their business. Greater surveillance, however, may actually reduce long-term productivity: a boss watching your every move can make you less productive in the long run and significantly affect how you feel about the company itself.

LidarPhone Attack Converts Smart Vacuums Into Microphones

This week a team of academics detailed novel research that converted a smart vacuum cleaner into a microphone capable of recording nearby conversations. Named LidarPhone, the technique works by taking the vacuum’s built-in LiDAR laser-based navigational component and converting it into a laser microphone. […] They tested the LidarPhone attack with various objects, varying the distance between the robot and the object and the distance between the sound origin and the object. Tests focused on recovering numerical values, which the research team said they managed to recover with 90% accuracy. But the academics said the technique could also be used to identify speakers based on gender or even determine their political orientation from the music played during news shows, captured by the vacuum’s LiDAR.
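Conceptually, the attack treats the LiDAR's reflected-intensity readings as a very noisy, low-sample-rate audio signal: nearby sound vibrates objects, the vibration modulates the reflected light, and filtering the intensity time series recovers a crude waveform that a classifier can be trained on. The sketch below illustrates only that signal-processing step on a synthetic signal; the sample rate and the signal itself are assumptions, and the real attack additionally requires modified vacuum firmware.

```python
# Conceptual sketch of the signal-processing step only: treat a series of
# reflected-intensity readings as a noisy, low-rate audio signal, band-pass
# filter it, and normalize it. The sample rate and the synthetic signal below
# are assumptions; the real attack also requires modified vacuum firmware and
# trains classifiers on the recovered waveforms.

import numpy as np
from scipy.signal import butter, filtfilt

fs = 1800                              # assumed effective sample rate
t = np.arange(0, 1.0, 1 / fs)

# Synthetic "reflected intensity": a 300 Hz vibration buried in sensor noise.
intensity = 0.02 * np.sin(2 * np.pi * 300 * t) + np.random.normal(0, 0.05, t.size)

# Keep the part of the speech band the low sample rate can represent.
b, a = butter(4, [80, 700], btype="bandpass", fs=fs)
recovered = filtfilt(b, a, intensity)
recovered /= np.max(np.abs(recovered))

print("first recovered samples:", recovered[:5])
```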

But while the LidarPhone attack sounds like a gross invasion of privacy, users need not panic for the time being. This type of attack depends on many prerequisites that most attackers won’t bother with. There are far easier ways of spying on users than overwriting a vacuum’s firmware to control its laser navigation system, such as tricking the user into installing malware on their phone. The LidarPhone attack is merely novel academic research that can be used to bolster the security and design of future smart vacuum robots. In fact, the research team’s main recommended countermeasure for smart vacuum cleaning robot makers is to shut down the LiDAR component when it is not rotating. Additional details about the research are available in a research paper titled “Spying with Your Robot Vacuum Cleaner: Eavesdropping via Lidar Sensors.”

Cheating-Detection Software Provokes ‘School-Surveillance Revolt’

New webcam-based anti-cheating monitoring is so stressful, it’s made some students cry, the Washington Post reports.

“Online proctoring” companies saw in coronavirus shutdowns a chance to capitalize on a major reshaping of education, selling schools a high-tech blend of webcam-watching workers and eye-tracking software designed to catch students cheating on their exams. They’ve taken in millions of dollars, some of it public money, from thousands of colleges in recent months. But they’ve also sparked a nationwide school-surveillance revolt, with students staging protests and adopting creative tactics to push campus administrators to reconsider the deals. Students argue that the testing systems have made them afraid to click too much or rest their eyes for fear they’ll be branded as cheats…

One system, Proctorio, uses gaze-detection, face-detection and computer-monitoring software to flag students for any “abnormal” head movement, mouse movement, eye wandering, computer window resizing, tab opening, scrolling, clicking, typing, and copies and pastes. A student can be flagged for finishing the test too quickly, or too slowly, clicking too much, or not enough. If the camera sees someone else in the background, a student can be flagged for having “multiple faces detected.” If someone else takes the test on the same network — say, in a dorm building — it’s potential “exam collusion.” Room too noisy, Internet too spotty, camera on the fritz? Flag, flag, flag.
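As a simplified illustration of one of those checks, the sketch below counts faces in webcam frames and records a flag whenever there isn't exactly one. It uses OpenCV's bundled Haar cascade face detector; real proctoring products layer gaze tracking, screen monitoring, and human review on top of far more elaborate models.

```python
# Simplified illustration of one class of checks described above: count faces
# in each webcam frame and record a flag whenever there isn't exactly one.
# This uses only OpenCV's bundled Haar cascade detector.

import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
capture = cv2.VideoCapture(0)          # default webcam
flags = []

for frame_index in range(300):         # sample a few hundred frames
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        flags.append((frame_index, "no face detected"))
    elif len(faces) > 1:
        flags.append((frame_index, "multiple faces detected"))

capture.release()
print(flags)
```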

As an unusually disrupted fall semester churns toward finals, this student rebellion has erupted into online war, with lawsuits, takedowns and viral brawls further shaking the anxiety-inducing backdrop of college exams. Some students have even tried to take the software down from the inside, digging through the code for details on how it monitors millions of high-stakes exams… Some students said the experience of having strangers and algorithms silently judge their movements was deeply unnerving, and many worried that even being accused of cheating could endanger their chances at good grades, scholarships, internships and post-graduation careers. Several students said they had hoped for freeing, friend-filled college years but were now resigned to hours of monitored video exams in their childhood bedrooms, with no clear end in sight….

[T]he systems’ technical demands have made just taking the tests almost comically complicated. One student at Wilfrid Laurier University in Ontario shared the instructions for his online Introduction to Linear Algebra midterm: five pages, totaling more than 2,000 words, requiring students to use a special activity-monitoring Web browser and keep their face, hands and desk in view of their camera at all times…

Students who break the rules or face technical difficulties can be investigated for academic misconduct. “The instructions,” the student said, “are giving me more anxiety than the test itself.”

Company executives “say a semester without proctors would turn online testing into a lawless wasteland” according to the article. But one long-time teacher counters that “the most clear value conveyed to students is ‘We don’t trust you.'”

Yet the education tech nonprofit Educause reported that 54% of the higher education institutions it surveyed “are currently using online or remote proctoring services,” and another 23% are planning or considering using them.