Resources

US Schools Are Normalizing Intrusive Surveillance

As the authors detail, among the technologies are surveillance cameras. These are often linked to software for facial recognition, access control, behavior analysis, and weapon detection. That is, cameras scan student faces and then algorithms identify them, allow or deny them entry based on that ID, decide if their activities are threatening, and determine if objects they carry may be dangerous or forbidden.

“False hits, such as mistaking a broomstick, three-ring binder, or a Google Chromebook laptop for a gun or other type of weapon, could result in an armed police response to a school,” cautions the report.

That’s not a random assortment of harmless-until-misidentified items; a footnoted 2022 Charlotte Observer piece points out such objects were tagged as weapons by scanners in the Charlotte-Mecklenburg Schools. “A how-to video posted earlier this year by administrators at Butler High School instructs students to remove certain belongings from their backpacks — and walk through the scanner holding their laptops above their heads — to avoid setting off a false alarm,” it adds.

Huh. What happens if behavior analysis algorithms decide that brandished laptops are threatening?

Also called out is software that monitors social media, students’ communications, and web-surfing habits. Audio monitors that are supposed to detect gunshots—but can be triggered by slammed doors (as at Greenwood High School in Arkansas earlier this year)—also feature in many schools.

Of students aged 14–18 surveyed by the ACLU, 62 percent saw video cameras in their schools (the U.S. Department of Education says cameras are used by 91 percent of public schools), and 49 percent reported monitoring software. Understandably, this affects their behavior. Thirty-two percent say, “I always feel like I’m being watched,” and 26 percent fret over what their “school and the companies they contract with do with the data.”

“Research demonstrates the damaging effect of surveillance on children’s ability to develop in healthy ways,” Fedders added. “Pervasive surveillance can create a climate in which adults are seen as overestimating and overreacting to risk. Children, in turn, cannot develop the ability to evaluate and manage risk themselves in order to function effectively.”

Notably, school surveillance normalizes the idea that constant monitoring is good and necessary for preserving safety.


School surveillance tech does more harm than good, ACLU report finds

An ACLU report has found that, despite claims from companies, surveillance technology in US schools does not improve student safety, and that constant surveillance can in fact cause a number of harms to students, including making them less likely to report dangerous behavior.

Schools typically use technologies such as cameras, facial recognition software and communication monitoring and filtering technology, which have been marketed by education technology surveillance companies as intervention tools against school shootings, suicides and bullying. In 2021, US schools and colleges spent $3.1bn on these products and this number is expected to grow by 8% every year, according to the report.

But the ACLU’s report concludes that there is little to no independent research or evidence showing that this technology works.


Supreme Court Declines To Hear Challenge To Warrantless Pole Camera Surveillance

The U.S. Supreme Court [Monday] declined to hear Moore v. United States, leaving in place a patchwork of lower court decisions on an important and recurring question about privacy rights in the face of advancing surveillance technology. In this case, police secretly attached a small camera to a utility pole, using it to surveil a Massachusetts home 24/7 for eight months — all without a warrant. Law enforcement could watch the camera’s feed in real time, and remotely pan, tilt, and zoom close enough to read license plates and see faces. They could also review a searchable, digitized record of this footage at their convenience. The camera captured every coming and going of the home’s residents and their guests over eight months. As a result, the government targeted the home of a community pillar — a lawyer, respected judicial clerk, devoted church member, and a grandmother raising her grandkids — to cherry-pick images from months of unceasing surveillance in an effort to support unwarranted criminal charges against an innocent person.

Federal courts of appeals and state supreme courts have divided on the question of whether such sweeping surveillance is a Fourth Amendment search requiring a warrant. The highest courts of Massachusetts, Colorado, and South Dakota have held that long-term pole camera surveillance of someone’s home requires a warrant. In Moore v. United States, the members of the full en banc U.S. Court of Appeals for the First Circuit split evenly on the question, with three judges explaining that a warrant is required, and three judges expressing the belief that the Fourth Amendment imposes no limit on this invasive surveillance. This issue will continue to arise in the lower courts; the ACLU filed an amicus brief on the question in the U.S. Court of Appeals for the Tenth Circuit earlier this month.


Ring Cameras Are Being Used To Control and Surveil Overworked Delivery Workers

Networked doorbell surveillance cameras like Amazon’s Ring are everywhere, and have changed the nature of delivery work by letting customers take on the role of bosses to monitor, control, and discipline workers, according to a recent report (PDF) by the Data & Society tech research institute. “The growing popularity of Ring and other networked doorbell cameras has normalized home and neighborhood surveillance in the name of safety and security,” Data & Society’s Labor Futures program director Aiha Nguyen and research analyst Eve Zelickson write. “But for delivery drivers, this has meant their work is increasingly surveilled by the doorbell cameras and supervised by customers. The result is a collision between the American ideas of private property and the business imperatives of doing a job.”

Thanks to interviews with surveillance camera users and delivery drivers, the researchers are able to dive into a few major developments interacting here to bring this to a head. Obviously, the first one is the widespread adoption of doorbell surveillance cameras like Ring. Just as important as the adoption of these cameras, however, is the rise of delivery work and its transformation into gig labor. […] As the report lays out, Ring cameras allow customers to surveil delivery workers and discipline their labor by, for example, sharing shaming footage online. This dovetails with the “gigification” of Amazon’s delivery workers in two ways: labor dynamics and customer behavior.

“Gig workers, including Flex drivers, are sold on the promise of flexibility, independence and freedom. Amazon tells Flex drivers that they have complete control over their schedule, and can work on their terms and in their space,” Nguyen and Zelickson write. “Through interviews with Flex drivers, it became apparent that these marketed perks have hidden costs: drivers often have to compete for shifts, spend hours trying to get reimbursed for lost wages, pay for wear and tear on their vehicle, and have no control over where they work.” That competition between workers manifests in other ways too, namely acquiescing to and complying with customer demands when delivering purchases to their homes. Even without cameras, customers have made onerous demands of Flex drivers even as the drivers are pressed to meet unrealistic and dangerous routes alongside unsafe and demanding productivity quotas. The introduction of surveillance cameras at the delivery destination, however, adds another level of surveillance to the gigification. […] The report’s conclusion is clear: Amazon has deputized its customers and made them partners in a scheme that encourages antagonistic social relations, undermines labor rights, and provides cover for a march towards increasingly ambitious monopolistic exploits.


America’s Funniest Home Surveillance Network Isn’t Funny

Amazon is normalizing neighborhood panopticons by turning its doorbell videos into a TV show. Orwell wouldn’t be laughing.

When smartphones first came on the scene, their built-in cameras were limited to personal use. Then social media sites like Facebook and Instagram created a beast that millions wanted to feed, and photos became a public spectacle. The same phenomenon is happening to doorbell cameras. Initially marketed to make customers feel safer in their homes, their footage is now being uploaded for entertainment. On TikTok, the hashtag Ringdoorbell has more than 2.7 billion views.

Amazon.com Inc., which owns market-dominating Ring, has seen and grabbed a lucrative opportunity, and is contributing to the gradual erosion of our privacy in the process.

On Monday, the company premiered Ring Nation, a television show syndicated across more than 70 American cities. Hosted by the comedian Wanda Sykes and produced by Metro-Goldwyn-Mayer, which Amazon finished buying in March, the 20-minute program features videos captured on smartphones and Amazon’s Ring doorbell cameras, which the company sells for about $105.


Google’s Nest Will Provide Data to Police Without a Warrant

Google “reserves the right” to make emergency disclosures to law enforcement even when there is no legal requirement to do so. “A provider like Google may disclose information to law enforcement without a subpoena or a warrant ‘if the provider, in good faith, believes that an emergency involving danger of death or serious physical injury to any person requires disclosure without delay of communications relating to the emergency,'” a Nest spokesperson tells CNET.

While Amazon and Google have both said they would hand over a user’s data to law enforcement without a warrant, Arlo, Apple, Wyze, and Anker, owner of Eufy, all confirmed to CNET that they won’t give authorities access to a user’s smart home camera footage unless they’re shown a warrant or court order. These companies would be legally bound to provide data to the authorities if shown such a legal document, but, unlike Google and Amazon, they will not otherwise share camera footage with law enforcement, even in response to an emergency request. Apple’s default setting for video cameras connected via HomeKit is end-to-end encryption, which means the company is unable to share user video at all.


Amazon’s Ring and Google Can Share Footage With Police Without Warrants (or Your Consent)

U.S. law lets companies like Google and Amazon’s Ring doorbell/security camera system “share user footage with police during emergencies without consent and without warrants.” Ring “came under renewed criticism from privacy activists this month after disclosing it gave video footage to police in more than 10 cases without users’ consent thus far in 2022 in what it described as ’emergency situations’.”

“That includes instances where the police didn’t have a warrant.”

“So far this year, Ring has provided videos to law enforcement in response to an emergency request only 11 times,” Amazon vice president of public policy Brian Huseman wrote. “In each instance, Ring made a good-faith determination that there was an imminent danger of death or serious physical injury to a person requiring disclosure of information without delay….” Of the 11 emergency requests Ring has complied with so far in 2022, the company said they include cases involving kidnapping, self-harm and attempted murder, but it won’t provide further details, including information about which agencies or countries the requests came from.

We also asked Ring if it notified customers after the company had granted law enforcement access to their footage without their consent.

“We have nothing to share,” the spokesperson responded.

It’s been barely a year since Ring made the decision to stop allowing police to email users to request footage. Facing criticism that requests like those were subverting the warrant process and contributing to police overreach, Ring directed police instead to post public requests for assistance in the Neighbors app, where community members are free to view and comment on them (or opt out of seeing them altogether)… That post made no mention of a workaround for the police during emergency circumstances.

When CNET asked why that workaround wasn’t mentioned, Amazon’s response was that law enforcement requests, “including emergency requests, are directed to Ring (the company), the same way a warrant or subpoena is directed to Ring (and not the customer), which is why we treat them entirely separately.”

CNET notes there’s also no mention of warrantless emergency requests without independent oversight in Ring’s own transparency reports about law enforcement requests from past years.

CNET adds that it’s not just Amazon. “Google, Ring and other companies that process user video footage have a legal basis for warrantless disclosure without consent during emergency situations, and it’s up to them to decide whether or not to do so when the police come calling….” (Although Google told CNET that while it reserves the right to comply with warrantless requests for user data during emergencies, to date it has never actually done so.) The article also points out that “Others, most notably Apple, use end-to-end encryption as the default setting for user video, which blocks the company from sharing that video at all… Ring enabled end-to-end encryption as an option for users in 2021, but it isn’t the default setting, and Ring notes that turning it on will break certain features, including the ability to view your video feed on a third-party device like a smart TV, or even Amazon devices like the Echo Show smart display.”

The bottom line?

[C]onsumers have a choice to make about what they’re comfortable with… That said, you can’t make informed choices when you aren’t well-informed to begin with, and the brands in question don’t always make it easy to understand their policies and practices. Ring published a blog post last year walking through its new, public-facing format for police footage requests, but there was no mention of emergency exceptions granted without user consent or independent oversight, the details of which only came to light after a Senate probe. Google describes its emergency sharing policies within its Terms of Service, but the language doesn’t make it clear that those cases include instances where footage may be shared without a warrant, subpoena or court order compelling Google to do so.


How Beijing’s surveillance cameras crept into widespread use across UK schools, hospitals and government buildings

In the confines of his small cell, Ovalbek Turdakun was watched 24/7. At any attempt to speak to others he was instantly told to be quiet, while lights in the room were on round the clock, making it impossible to know what time of day it was.

Turdakun and his fellow detainees in the Xinjiang camp were not watched by guards, but by software. Cameras made by the Chinese company Hikvision monitored his every move, according to an account he gave to US surveillance website IPVM.

More than a million of the same company’s cameras are in Britain’s schools, hospitals and police departments. Tesco, Costa Coffee and McDonald’s have purchased Hikvision cameras. They are present in a string of Government buildings.

Britain’s population is caught on CCTV more than any nation outside of China, with 6m cameras in use – one for every 11 people. Hikvision is the biggest provider of them.


Uber Asked Contractor To Allow Video Surveillance In Employee Homes, Bedrooms

Teleperformance, one of the world’s largest call center companies, is reportedly requiring some employees to consent to video monitoring in their homes. Employees in Colombia told NBC News that their new contract granted the company the right to use AI-powered cameras to observe and record their workspaces. The contract also requires employees to share biometric data like fingerprints and photos of themselves, and workers have to agree to share data and images that may include children under 18.

Teleperformance employs over 380,000 people in 83 countries to provide call center services for a range of companies, including Amazon, Apple, and Uber. A company spokesperson told NBC that it is “constantly looking for ways to enhance the Teleperformance Colombia experience for both our employees and our customers, with privacy and respect as key factors in everything we do.” Amazon and Apple said that they did not ask Teleperformance for this extra monitoring, and an Apple spokesperson said the company forbids video monitoring of employees by suppliers. A recent Apple audit reportedly found Teleperformance in compliance with this requirement. But Uber apparently requested the ability to monitor some workers. Uber said it wouldn’t observe the entire workforce, but the company did not specify which employees would be subject to the new policies. The ride-sharing company asked for the monitoring of Teleperformance’s remote employees because call center staff have access to customers’ credit cards and trip details, an Uber spokesperson told NBC News.


What Modern Video Surveillance Looks Like

A few years ago, when you saw a security camera, you may have thought that the video feed went to a VCR somewhere in a back office that could only be accessed when a crime occurred. Or maybe you imagined a sleepy guard who only paid half-attention, and only when they discovered a crime in progress. In the age of internet connectivity, it’s now easy to imagine footage sitting on a server somewhere, inaccessible except to someone willing to fast-forward through hundreds of hours of footage.

That may be how it worked in 1990s heist movies, and it may be how a homeowner still sorts through their own home security camera footage. But that’s not how cameras operate in today’s security environment. Instead, advanced algorithms are watching every frame on every camera and documenting every person, animal, vehicle, and backpack as they move through physical space, and thus camera to camera, over an extended period of time.
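A toy version of that pipeline can make the idea concrete: per-frame detections come in as bounding boxes, and a tracker links them across frames so that one persistent ID follows a person through the scene. This is a hedged sketch, not any vendor’s actual system; real deployments use learned detectors and appearance features, while this one assumes detections are already given and matches them with a simple intersection-over-union (IoU) rule.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

class SimpleTracker:
    """Assign persistent IDs to detections by greedy IoU matching."""

    def __init__(self, threshold=0.3):
        self.threshold = threshold
        self.next_id = 0
        self.tracks = {}  # track id -> last seen box

    def update(self, detections):
        assigned = {}
        unmatched = dict(self.tracks)  # tracks not yet claimed this frame
        for box in detections:
            # Find the best-overlapping existing track above the threshold.
            best_id, best_iou = None, self.threshold
            for tid, prev in unmatched.items():
                score = iou(box, prev)
                if score > best_iou:
                    best_id, best_iou = tid, score
            if best_id is None:
                best_id = self.next_id  # no match: start a new track
                self.next_id += 1
            else:
                del unmatched[best_id]  # claimed; can't match twice
            self.tracks[best_id] = box
            assigned[best_id] = box
        return assigned
```

Feeding one such tracker detections from overlapping cameras is, conceptually, how a single identity gets documented “camera to camera” over an extended period of time.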


Police Will Pilot a Program to Live-Stream Amazon Ring Cameras

The police surveillance center in Jackson, Mississippi, will be conducting a 45-day pilot program to live stream the Amazon Ring cameras of participating residents.

While people buy Ring cameras and put them on their front door to keep their packages safe, police use them to build comprehensive CCTV camera networks blanketing whole neighborhoods. This serves two police purposes. First, it allows police departments to avoid the cost of buying surveillance equipment and to put that burden onto consumers by convincing them they need cameras to keep their property safe. Second, it evades the natural reaction of fear and distrust that many people would have if they learned police were putting up dozens of cameras on their block, one for every house.

Now, our worst fears have been confirmed. Police in Jackson, Mississippi, have started a pilot program that would allow Ring owners to patch the camera streams from their front doors directly to a police Real Time Crime Center. The footage from your front door includes you coming and going from your house, your neighbors taking out the trash, and the dog walkers and delivery people who do their jobs in your street. In Jackson, this footage can now be live streamed directly onto a dozen monitors scrutinized by police around the clock. Even if you refuse to allow your footage to be used that way, your neighbor’s camera pointed at your house may still be transmitting directly to the police.


London Installed AI Cameras To Monitor Social Distancing, Lockdown Restrictions

Artificial intelligence cameras are being used in London and other cities in the UK to monitor social distancing. The sensors were initially developed by Vivacity to track the flow of traffic, cyclists and pedestrians and monitor how roads are being used. But when the country went into lockdown in March, Vivacity added an extra feature to the sensors so they could register the distance between pedestrians. This data is shared in a monthly report with the Government.

Vivacity Labs said they have more than 1,000 sensors installed across the UK, in cities including London, Manchester, Oxford, Cambridge and Nottingham. Vivacity’s Chief Operating Officer, Peter Mildon, told BBC Radio Kent on Wednesday that the data is potentially “useful for informing policy decisions” regarding lockdown measures. He stressed that the cameras are not CCTV: they operate as data-collating devices rather than cameras that store footage. “They are not recording any footage, they are not streaming any footage and no one is actually watching it,” he said.

Mr Mildon added: “We’re creating a set of statistics on how behavior is changing in terms of how people are staying close together or apart. And it is that data that is then useful for informing policy decisions on whether there should be a two meter rule or a one meter plus rule or whether local lockdown measures are having the impact they are envisioned to.”
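The kind of aggregate statistic described, pairwise distances with no footage retained, can be sketched in a few lines. The positions and the two-metre threshold below are illustrative assumptions, not Vivacity’s actual implementation:

```python
from itertools import combinations
from math import dist

def close_pairs(positions, threshold=2.0):
    """Count pedestrian pairs closer than `threshold` metres.

    Only the aggregate count leaves the device; no imagery is kept,
    matching the 'data-collating device' description above.
    """
    return sum(1 for a, b in combinations(positions, 2)
               if dist(a, b) < threshold)

# Three pedestrians at (0, 0), (1, 1), (10, 10): one pair within 2 m.
print(close_pairs([(0, 0), (1, 1), (10, 10)]))  # → 1
```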


Voice From ‘Nest’ Camera Threatens to Steal Baby

Jack Newcombe, the Chief Operating Officer of a syndication company with 44 million daily readers, describes the strange voice he heard talking to his 18-month-old son:
She says we have a nice house and encourages the nanny to respond. She does not. The voice even jokes that she hopes we don’t change our password. I am sick to my stomach. After about five minutes of verbal “joy riding,” the voice starts to get agitated at the nanny’s lack of response and then snaps, in a very threatening voice: “I’m coming for the baby if you don’t answer me….” We unplug the cameras and change all passwords…

Still helpless, I started doing the only thing I could do — Googling. I typed “Nest + camera + hacked” and found out that this happens frequently. Parent after parent relayed stories similar to mine — threatening to steal a baby is shockingly common — and some much worse, such as playing pornography over the microphone to a 3-year-old… What is worse is that anyone could have been watching us at any time for as long as we have had the cameras up. This person just happened to use the microphone. Countless voyeurs could have been silently watching (or worse) for months.

However, what makes this issue even more terrifying is a corporate giant’s complete and utter lack of response. Nest is owned by Google, and, based on my experience and their public response, Google does not seem to care about this issue. They acknowledge it as a problem, shrug their shoulders and point their fingers at the users. Their party line is to remind people that the hardware was not hacked; it was the user’s fault for using a compromised password and not implementing two-step authentication, in which users receive a special code via text to sign on. That night, on my way home from work, I called Nest support and was on hold for an hour and eight minutes. I followed all directions and have subsequently received form emails in broken English. Nobody from Google has acknowledged the incident or responded with any semblance of empathy. In every email, they remind me of two-step authentication.

They act as if I am going to continue to use Nest cameras.


The 120 Most CCTV Surveilled Cities In the World

Comparitech.com has published a report and spreadsheet laying out how many CCTV cameras are in operation in 120 different cities around the world, and data for the crime rates in these cities. The report notes “We found little correlation between the number of public CCTV cameras and crime or safety.”

Eight of the 10 most-surveilled cities are in China, though London and Atlanta also make the cut, and the report says that, depending on what numbers you believe, China will have between 200 million and 626 million CCTV cameras in operation by 2020, or possibly even more. That would be almost one CCTV camera for every two citizens in the country, and the number could go up.
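The per-capita figure is easy to sanity-check with back-of-the-envelope arithmetic, assuming a population of roughly 1.4 billion:

```python
# Back-of-the-envelope: people per camera under the high-end estimate.
population = 1_400_000_000   # approximate population of China (assumed)
cameras = 626_000_000        # high-end estimate cited in the report

people_per_camera = population / cameras
print(round(people_per_camera, 1))  # → 2.2, i.e. about one camera per two citizens
```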

Outside of China, the top most-surveilled cities in the world are:

London – 68.40 cameras per 1,000 people
Atlanta – 15.56 cameras per 1,000 people
Singapore – 15.25 cameras per 1,000 people
Abu Dhabi – 13.77 cameras per 1,000 people
Chicago – 13.06 cameras per 1,000 people
Sydney – 12.35 cameras per 1,000 people
Baghdad – 12.30 cameras per 1,000 people
Dubai – 12.14 cameras per 1,000 people
Moscow – 11.70 cameras per 1,000 people
Berlin – 11.18 cameras per 1,000 people
New Delhi – 9.62 cameras per 1,000 people


Microsoft Turned Down Facial-Recognition Sales over “Human Rights Concerns”

Microsoft recently rejected a California law enforcement agency’s request to install facial recognition technology in officers’ cars and body cameras due to human rights concerns, company President Brad Smith said on Tuesday. Microsoft concluded it would lead to innocent women and minorities being disproportionately held for questioning because the artificial intelligence has been trained on mostly white and male pictures. AI has more cases of mistaken identity with women and minorities, multiple research projects have found.

Smith explained the decisions as part of a commitment to human rights that he said was increasingly critical as rapid technological advances empower governments to conduct blanket surveillance, deploy autonomous weapons and take other steps that might prove impossible to reverse. Smith also said at a Stanford University conference that Microsoft had declined a deal to install facial recognition on cameras blanketing the capital city of an unnamed country that the nonprofit Freedom House had deemed not free. Smith said it would have suppressed freedom of assembly there.

On the other hand, Microsoft did agree to provide the technology to an American prison, after the company concluded that the environment would be limited and that it would improve safety inside the unnamed institution.


AI Mistakes Ad On a Bus For an Actual CEO, Then Publicly Shames Them For ‘Jaywalking’

Since last year, many Chinese cities have cracked down on jaywalking by investing in facial recognition systems and AI-powered surveillance cameras. Jaywalkers are identified and shamed by displaying their photographs on large public screens… Developments are also underway to engage the country’s mobile network operators and social media platforms, such as Tencent Holdings’ WeChat and Sina Weibo, to establish a system in which offenders will receive personal text messages as soon as they are caught violating traffic rules….

Making a compelling case for change is the recent experience of Dong Mingzhu, chairwoman of China’s biggest maker of air conditioners Gree Electric Appliances, who found her face splashed on a huge screen erected along a street in the port city of Ningbo… That artificial intelligence-backed surveillance system, however, erred in capturing Dong’s image on Wednesday from an advertisement on the side of a moving bus. The traffic police in Ningbo, a city in the eastern coastal province of Zhejiang, were quick to recognise the mistake, writing in a post on microblog Sina Weibo on Wednesday that it had deleted the snapshot. It also said the surveillance system would be completely upgraded to cut incidents of false recognition in future.


Australia’s near-real-time facial recognition system, chilling effects

Civil rights groups have warned a vast, powerful system allowing the near real-time matching of citizens’ facial images risks a “profound chilling effect” on protest and dissent.

The technology – known in shorthand as “the capability” – collects and pools facial imagery from various state and federal government sources, including driver’s licences, passports and visas.

The biometric information can then rapidly – almost in real time – be compared with other sources, such as CCTV footage, to match identities.

The system, chiefly controlled by the federal Department of Home Affairs, is designed to give intelligence and security agencies a powerful tool to deter identity crime, and quickly identify terror and crime suspects.

But it has prompted serious concern among academics, human rights groups and privacy experts. The system sweeps up and processes citizens’ sensitive biometric information regardless of whether they have committed or are suspected of an offence.


Chinese ‘Gait Recognition’ Tech IDs People By How They Walk; Police Have Started Using It on Streets of Beijing and Shanghai

Already used by police on the streets of Beijing and Shanghai, “gait recognition” is part of a push across China to develop artificial-intelligence and data-driven surveillance that is raising concern about how far the technology will go. Huang Yongzhen, the CEO of Watrix, said that its system can identify people from up to 50 meters (165 feet) away, even with their back turned or face covered. This can fill a gap in facial recognition, which needs close-up, high-resolution images of a person’s face to work. “You don’t need people’s cooperation for us to be able to recognize their identity,” Huang said in an interview in his Beijing office. “Gait analysis can’t be fooled by simply limping, walking with splayed feet or hunching over, because we’re analyzing all the features of an entire body.”


Police Bodycams Can Be Hacked To Doctor Footage, Install Malware

Josh Mitchell’s Defcon presentation analyzes the security of five popular brands of police bodycams (Vievu, Patrol Eyes, Fire Cam, Digital Ally, and CeeSc) and reveals that they are universally terrible. All the devices use predictable network addresses that can be used to remotely sense and identify the cameras when they switch on. None of the devices use code-signing. Some of the devices can form ad-hoc Wi-Fi networks to bridge in other devices, but they don’t authenticate these sign-ons, so you can just connect with a laptop and start raiding the network for accessible filesystems and gank or alter videos, or just drop malware on them.


UK Police Plan To Deploy ‘Staggeringly Inaccurate’ Facial Recognition in London

Millions of people face the prospect of being scanned by police facial recognition technology that has sparked human rights concerns. The controversial software, which officers use to identify suspects, has been found to be “staggeringly inaccurate”, while campaigners have branded its use a violation of privacy. But Britain’s largest police force is set to expand a trial across six locations in London over the coming months.

Police leaders claimed officers make the final decision to act on potential matches, and that images which do not spark an alert are immediately deleted. But last month The Independent revealed the Metropolitan Police’s software was returning “false positives” — images of people who were not on a police database — in 98 percent of alerts… Detective Superintendent Bernie Galopin, the lead on facial recognition for London’s Metropolitan Police, said the operation was targeting wanted suspects to help reduce violent crime and make the area safer. “It allows us to deal with persons that are wanted by police where traditional methods may have failed,” he told The Independent, after statistics showed police were failing to solve 63 per cent of knife crimes committed against under-25s….
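The 98 percent figure is a textbook base-rate effect: even a matcher with a low per-face error rate produces mostly false alarms when genuine suspects are a tiny fraction of everyone scanned. The numbers below are made up for illustration, not the Met’s actual parameters:

```python
# Hypothetical inputs showing the base-rate effect behind "98% false positives".
scanned = 100_000             # faces scanned at an event (assumed)
suspects = 10                 # of whom are actually on the watchlist (assumed)
hit_rate = 0.9                # chance a real suspect triggers an alert (assumed)
false_positive_rate = 0.005   # chance an innocent face triggers an alert (assumed)

true_alerts = suspects * hit_rate                          # 9 genuine alerts
false_alerts = (scanned - suspects) * false_positive_rate  # ~500 false alerts
share_false = false_alerts / (true_alerts + false_alerts)
print(f"{share_false:.0%} of alerts are false")  # → 98% with these inputs
```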

Det Supt Galopin said the Met was assessing how effective facial recognition was at tackling different challenges in British policing, which is currently being stretched by budget cuts, falling officer numbers, rising demand and the terror threat.

A policy officer from the National Council for Civil Liberties called the technology “lawless,” adding “the use of this technology in a public place is not compatible with privacy, and has a chilling effect on society.”

But a Home Office minister said the technology was vital for protecting people from terrorism, though “we must ensure that privacy is respected. This strategy makes clear that we will grasp the opportunities that technology brings while remaining committed to strengthening safeguards.”
