Resources

Children ‘Interested in’ Gambling and Alcohol, According To Facebook

The social network’s advertising tools reveal 740,000 children under the age of 18 are flagged as being interested in gambling, including 130,000 in the UK. Some 940,000 minors — 150,000 of whom are British — are flagged as being interested in alcoholic beverages. These “interests” are automatically generated by Facebook, based on what it has learned about a user by monitoring their activity on the social network. Advertisers can then use them to target messages specifically at subgroups who have been flagged as interested in the topic. In a statement, Facebook said: “We don’t allow ads that promote the sale of alcohol or gambling to minors on Facebook and we enforce against this activity when we find it. We also work closely with regulators to provide guidance for marketers to help them reach their audiences effectively and responsibly.” However, the company does allow advertisers to target messages specifically at children based on their interest in alcohol or gambling. A Facebook insider gave the example of an anti-gambling service that may want to reach out to children who potentially have a gambling problem and offer them help and support.

Smart TVs Are Data-Collecting Machines, New Study Shows

A new study from Princeton University shows internet-connected TVs, which allow people to stream Netflix and Hulu, are loaded with data-hungry trackers. “If you use a device such as Roku and Amazon Fire TV, there are numerous companies that can build up a fairly comprehensive picture of what you’re watching,” Arvind Narayanan, associate professor of computer science at Princeton, wrote in an email to The Verge. “There’s very little oversight or awareness of their practices, including where that data is being sold.” From the report:
To understand how much surveillance is taking place on smart TVs, Narayanan and his co-author Hooman Mohajeri Moghaddam built a bot that automatically installed thousands of channels on their Roku and Amazon Fire TVs. It then mimicked human behavior by browsing and watching videos. As soon as it ran into an ad, it would track what data was being collected behind the scenes. Some of the information, like device type, city, and state, is hardly unique to one user. But other data, like the device serial number, Wi-Fi network, and advertising ID, could be used to pinpoint an individual. “This gives them a more complete picture of who you are,” said Moghaddam. He noted that some channels even sent unencrypted email addresses and video titles to the trackers.
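The auditing step lends itself to a simple illustration. Below is a minimal Python sketch of that idea, assuming captured network requests in a simple form; the tracker list, field names, and sample request are hypothetical stand-ins, not the study's actual pipeline.

```python
from urllib.parse import urlparse

# Hypothetical tracker-domain list; the study matched against real
# blocklists, for which this small set is only a stand-in.
TRACKER_DOMAINS = {"doubleclick.net", "example-tracker.com"}

# Payload fields that can pinpoint a device or person.
SENSITIVE_KEYS = {"serial", "ad_id", "wifi_ssid", "email"}

def audit_request(url: str, params: dict) -> list:
    """Return a list of findings for one captured network request."""
    findings = []
    parsed = urlparse(url)
    host = parsed.hostname or ""
    # Match the request host against known tracker domains.
    if any(host == d or host.endswith("." + d) for d in TRACKER_DOMAINS):
        findings.append(f"tracker contacted: {host}")
    # Flag unique identifiers in the payload.
    for key in params:
        if key in SENSITIVE_KEYS:
            findings.append(f"identifier sent: {key}")
    # Unencrypted transport makes any leak worse.
    if parsed.scheme == "http" and findings:
        findings.append("sent over unencrypted HTTP")
    return findings

# Hypothetical request of the kind the bot logged while an ad played.
print(audit_request(
    "http://ads.example-tracker.com/pixel",
    {"serial": "XYZ123", "email": "user@example.com"},
))
```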

In total, the study found trackers on 69 percent of Roku channels and 89 percent of Amazon Fire channels. “Some of these are well known, such as Google, while many others are relatively obscure companies that most of us have never heard of,” Narayanan said. Google’s ad service DoubleClick was found on 97 percent of Roku channels. “Like other publishers, smart TV app developers can use Google’s ad services to show ads against their content, and we’ve helped design industry guidelines for this that enable a privacy-safe experience for users,” a Google spokesperson said in a statement emailed to The Verge. “Depending on the user’s preferences, the developer may share data with Google that’s similar to data used for ads in mobile apps or on the web.”
“Better privacy controls would certainly help, but they are ultimately band-aids,” Narayanan said. “The business model of targeted advertising on TVs is incompatible with privacy, and we need to confront that reality. To maximize revenue, platforms based on ad targeting will likely turn to data mining and algorithmic personalization/persuasion to keep people glued to the screen as long as possible.”

Another study from Northeastern University and Imperial College London found that other smart-home devices are also collecting reams of data that is being sent to third parties like advertisers and major tech companies.

Her iPhone Died. It Led To Her Being Charged As a Criminal

Chris Matyszczyk from ZDNet retells the draconian story of a Financial Times writer who wasn’t able to prove she had paid for a London bus ride because her phone died (she had used Apple Pay), which led to her being charged as a criminal. An anonymous reader shares an excerpt from the report:

Today’s witness is Jemima Kelly. She’s a writer for The Financial Times. Please don’t let any personal thoughts about that get in the way of her story. You see, she just experienced a little technological nightmare. A cheery digital convert, she admits she often leaves the house without her wallet. But surely not without her iPhone. Apple Pay is, after all, a contemporary joy. It’s right up there with Tinder in its ability to make your life easier.

Kelly, indeed, hops on London buses and uses Apple Pay to tap her payment instead of buying a ticket the old-fashioned way. Which, as she cheerily described, is easy unless a ticket inspector wanders by. Just after your iPhone’s battery has died. She couldn’t prove that she’d paid, but gave her personal details and assumed there’d be a record of her probity on the transportation company’s computers. But then she was charged with, well, not providing proof of payment. Charged as in would be forced to go to court and to plead guilty or not guilty within 21 days. Here’s where things got (more) awkward. Kelly produced a bank statement that proved she’d paid. The transportation company — Transport For London — insisted this wasn’t enough.

It seems she’d failed another digital task — registering her Apple Pay with Transport For London. She was edging ever closer to criminal status. But did her Apple Pay details need to be registered? Kelly revealed: “They told me, ‘there is no requirement for cards to be registered, the same as paying for any goods and services in a shop’. But it’s not the same, actually; in a shop, you are given a breakdown in the form of a receipt.” So, here she was, contactless and receiptless. Next, she heard that her court case had happened and she’d been found guilty. Oh, and she also owed a fine of around $592.
In the end, Kelly managed to get back to court and persuade the judge to void her conviction, but the process took months.

“Her story, however, aptly describes how the digital world demands our complete and unyielding participation,” writes Matyszczyk. “Digital systems are designed by those who strive for complete perfection and consistency. Which doesn’t describe the human condition at all.” Do you think digitizing everything is a good thing?

Amazon Workers May Be Watching Your Cloud Cam Home Footage

In a promotional video, Amazon says its Cloud Cam home security camera provides “everything you need to monitor your home, day or night.” In fact, the artificially intelligent device requires help from a squad of invisible employees. Dozens of Amazon workers based in India and Romania review select clips captured by Cloud Cam, according to five people who have worked on the program or have direct knowledge of it. Those video snippets are then used to train the AI algorithms to do a better job distinguishing between a real threat (a home invader) and a false alarm (the cat jumping on the sofa). An Amazon team also transcribes and annotates commands recorded in customers’ homes by the company’s Alexa digital assistant, Bloomberg reported in April.

AI has made it possible to talk to your phone. It’s helping investors predict shifts in market sentiment. But the technology is far from infallible. Cloud Cam sends out alerts when it’s just paper rustling in a breeze. Apple’s Siri and Amazon’s Alexa still occasionally mishear commands. One day, engineers may overcome these shortfalls, but for now AI needs human assistance. Lots of it. At one point, on a typical day, some Amazon auditors were each annotating about 150 video recordings, which were typically 20 to 30 seconds long, according to the people, who requested anonymity to talk about an internal program.

Cambridge Analytica Whistleblower: US Heading In ‘Same Direction As China’ With Online Privacy

“The United States is walking in the same direction as China, we’re just allowing private companies to monetize left, right and center,” Cambridge Analytica whistleblower Christopher Wylie told CNBC on Wednesday. “Just because it’s not the state doesn’t mean that there isn’t harmful impacts that could come if you have one or two large companies monitoring or tracking everything you do,” he said. CNBC reports:

Wylie, whose memoir came out this week, has become outspoken about the influence of social media companies due to the large amounts of data they collect. In March 2018, he exposed the Cambridge Analytica scandal that brought down his former employer and, 15 months later, resulted in the Federal Trade Commission fining Facebook $5 billion for mishandling users’ data. While Cambridge Analytica has since shut down, Wylie said the tactics it used could be deployed elsewhere, and that is why data privacy regulation needs to be dramatically enhanced.

“Even if the company has dissolved, the capabilities of the company haven’t,” he said. “My real concern is what happens if China becomes the next Cambridge Analytica, what happens if North Korea becomes the next Cambridge Analytica?” Wylie also said he believes that social media companies should, at a minimum, face regulation similar to water utilities or electrical companies — “certain industries that have become so important because of their vital importance to business and people’s lives and the nature of their scale.” In those cases, “we put in place rules that put consumers first,” he added. “You can still make a profit. You can still make money. But you have to consider the rights and safety of people.”

Twitter Took Phone Numbers for Security and Used Them for Advertising

When some users provided Twitter with their phone number to make their account more secure, the company used this information for advertising purposes, Twitter said today.

This isn’t the first time that a large social network has taken information explicitly meant for security purposes and then quietly or accidentally used it for something else entirely. Facebook did something similar with phone numbers provided by users for two-factor authentication, the company confirmed last year. “We recently discovered that when you provided an email address or phone number for safety or security purposes (for example, two-factor authentication) this data may have inadvertently been used for advertising purposes, specifically in our Tailored Audiences and Partner Audiences advertising system,” Twitter’s announcement reads. In short, when an advertiser using Twitter uploaded their own marketing list of email addresses or phone numbers, Twitter may have matched the list to people on Twitter “based on the email or phone number the Twitter account holder provided for safety and security purposes,” the post adds.
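Mechanically, list-matching systems of this kind typically compare normalized, hashed identifiers rather than raw ones. The sketch below illustrates that general technique; the hashing scheme, field names, and sample data are illustrative assumptions, not Twitter's documented internals. The failure described above is not in the matching step itself but in which identifiers were allowed into the matchable pool, namely ones collected for security.

```python
import hashlib

def normalize_and_hash(identifier: str) -> str:
    # Trim and lowercase so equivalent entries hash to the same value.
    return hashlib.sha256(identifier.strip().lower().encode("utf-8")).hexdigest()

# Identifiers the platform holds for each account. The flaw reported
# here is that numbers supplied for 2FA ended up in this pool at all.
account_contacts = {
    normalize_and_hash("alice@example.com"): "@alice",
    normalize_and_hash("+15550001111"): "@bob",  # provided for 2FA
}

# An advertiser's uploaded marketing list (hypothetical sample data).
advertiser_list = [" Alice@Example.com", "+15550001111", "carol@example.com"]

matched = [
    account_contacts[h]
    for h in map(normalize_and_hash, advertiser_list)
    if h in account_contacts
]
print(matched)  # ['@alice', '@bob']
```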

FBI’s Use of Surveillance Database Violated Americans’ Privacy Rights: Court

Some of the Federal Bureau of Investigation’s electronic surveillance activities violated the constitutional privacy rights of Americans swept up in a controversial foreign intelligence program, a secretive surveillance court has ruled. The ruling deals a rare rebuke to U.S. spying activities that have generally withstood legal challenge or review. The intelligence community disclosed Tuesday that the Foreign Intelligence Surveillance Court last year found that the FBI’s pursuit of data about Americans ensnared in a warrantless internet-surveillance program intended to target foreign suspects may have violated the law authorizing the program, as well as the Constitution’s Fourth Amendment protections against unreasonable searches.

The court concluded that the FBI had been improperly searching a database of raw intelligence for information on Americans — raising concerns about oversight of the program, which as a spy program operates in near-total secrecy. The ruling identifies tens of thousands of searches of raw intelligence databases by the bureau in 2017 and 2018 that the court deemed improper, in part because they involved data related to tens of thousands of emails or telephone numbers — in one case suggesting that the FBI was using the intelligence information to vet its personnel and cooperating sources. Federal law requires that the database be searched by the FBI only as part of seeking evidence of a crime or foreign intelligence information. The ruling also reveals improper use of the database by individuals: in one instance, an FBI contractor ran a query of an intelligence database, searching for information on himself, other FBI personnel and his relatives.

Twitter Executive Is Also A British Army ‘Psyops’ Soldier

“The senior Twitter executive with editorial responsibility for the Middle East is also a part-time officer in the British Army’s psychological warfare unit,” reports Middle East Eye:
The 77th Brigade uses social media platforms such as Twitter, Instagram and Facebook, as well as podcasts, data analysis and audience research to wage what the head of the UK military, General Nick Carter, describes as “information warfare”. Carter says the 77th Brigade is giving the British military “the capability to compete in the war of narratives at the tactical level”; to shape perceptions of conflict. Some soldiers who have served with the unit say they have been engaged in operations intended to change the behaviour of target audiences.

What exactly the executive, Gordon MacMillan, is doing with the unit is difficult to determine, however: he has declined to answer any questions about his role, as have Twitter and the UK’s Ministry of Defence (MoD).

Responding to the article, the British army told Newsweek their brigade had no relationship with Twitter, “other than using Twitter as one of many social media platforms for engagement and communication.” And Twitter reminded Newsweek that “We proactively publish all tweets and accounts relating to state-backed foreign information operations on the service — regardless of the source. We built this industry-leading archive to promote better public understanding of these threats.”
Despite the assertions of Twitter and the British military, academics and researchers have said the division between the two is not so clear. David Miller — a professor of political sociology in the School for Policy Studies at England’s University of Bristol, who studies propaganda and public relations efforts concerning the British government — is one such academic. He told Newsweek he believes a link is there, adding that it was a “threat to our democracy.”

“I would say I know a good amount about army propaganda and ‘psyops’ operations, as they’re called, but what is interesting is how little information we have on the 77th Brigade,” he said. “I suppose it means that all their work is covert, but what I would like to know is: what exactly are they doing? Are they just tracking accounts or are they trying to influence people’s views? What we do know is that their account itself is hidden by Twitter, so we assume they are trying to influence people under the operatives’ own names. And because we know so little about exactly what they’re doing, we have to look elsewhere for clues, for example.”

The professor also argues that Twitter is deceiving us by “not acting as transparently as it could. If they are working with army personnel in this way, it is extremely damaging to our democracy. Given Twitter’s closure of accounts alleged to be used by foreign governments, it’s a very hypocritical stance of Twitter to take.”

YouTube is Experimenting With Ways To Make Its Algorithm Even More Addictive

While YouTube has publicly said that it’s working on addressing problems that are making its website ever so addictive to users, a new paper from Google, which owns YouTube, seems to tell a different story.

It proposes an update to the platform’s algorithm that is meant to recommend even more targeted content to users in the interest of increasing engagement. Here’s how YouTube’s recommendation system currently works. To populate the recommended-videos sidebar, it first compiles a shortlist of several hundred videos by finding ones that match the topic and other features of the one you are watching. Then it ranks the list according to the user’s preferences, which it learns by feeding all your clicks, likes, and other interactions into a machine-learning algorithm. Among the proposed updates, the researchers specifically target a problem they identify as “implicit bias.” It refers to the way recommendations themselves can affect user behavior, making it hard to decipher whether you clicked on a video because you liked it or because it was highly recommended. The effect is that over time, the system can push users further and further away from the videos they actually want to watch.
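To make that two-stage pipeline concrete, here is a toy sketch of candidate generation followed by preference ranking. The data and scoring are invented stand-ins, not YouTube's actual model.

```python
# A toy two-stage recommender mirroring the described pipeline:
# (1) build a shortlist of candidates related to the current video,
# (2) rank it by a per-user preference score. All data is made up.
videos = {
    "v1": {"topic": "privacy", "user_preference": 0.9},
    "v2": {"topic": "privacy", "user_preference": 0.2},
    "v3": {"topic": "cooking", "user_preference": 0.8},
}

def recommend(current_topic: str, k: int = 2) -> list[str]:
    # Stage 1: candidate generation -- match the watched video's topic.
    shortlist = [v for v, meta in videos.items() if meta["topic"] == current_topic]
    # Stage 2: ranking -- order by the user's learned preference signal
    # (a stand-in for the model trained on clicks, likes, etc.).
    shortlist.sort(key=lambda v: videos[v]["user_preference"], reverse=True)
    return shortlist[:k]

print(recommend("privacy"))  # ['v1', 'v2']
```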

To reduce this bias, the researchers suggest a tweak to the algorithm: each time a user clicks on a video, it also factors in the video’s rank in the recommendation sidebar. Videos that are near the top of the sidebar are given less weight when fed into the machine-learning algorithm; videos deep down in the ranking, which require a user to scroll, are given more. When the researchers tested the changes live on YouTube, they found significantly more user engagement. Though the paper doesn’t say whether the new system will be deployed permanently, Guillaume Chaslot, an ex-YouTube engineer who now runs AlgoTransparency.org, said he was “pretty confident” that it would happen relatively quickly.
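The tweak described here is a form of position-bias correction: each click is reweighted by how likely a user was to even examine that slot. A minimal sketch, with a hard-coded examination curve standing in for what a real system would learn from data:

```python
# Estimated probability that a user examines a given sidebar slot
# (hypothetical numbers purely for illustration).
EXAMINE_PROB = {1: 0.90, 2: 0.60, 3: 0.40, 4: 0.25, 5: 0.15}

def click_weight(rank: int) -> float:
    """Down-weight clicks on highly ranked videos and up-weight clicks
    that required scrolling, so training reflects genuine preference
    rather than the recommender's own placement."""
    propensity = EXAMINE_PROB.get(rank, 0.10)
    return 1.0 / propensity

# A click at rank 1 counts far less than a click at rank 5:
for rank in (1, 5):
    print(rank, round(click_weight(rank), 2))  # 1 -> 1.11, 5 -> 6.67
```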

Optic Nerve: millions of Yahoo webcam images intercepted by GCHQ

Optic Nerve is a mass surveillance programme run by the British signals intelligence agency Government Communications Headquarters (GCHQ), with help from the US National Security Agency, that surreptitiously collects private webcam still images from users while they are using a Yahoo! webcam application. As an example of the scale, in one six-month period the programme is reported to have collected images from 1.8 million Yahoo! user accounts globally. The programme was first reported in the media in February 2014, from documents leaked by the former National Security Agency contractor Edward Snowden; it dates back to a prototype started in 2008 and was still active as late as 2012.

The leaked documents describe the users under surveillance as “unselected”, meaning that data was collected indiscriminately in bulk from users regardless of whether they were an intelligence target or not. The vast majority of affected users would have been completely innocent of any crime or suspicion of a crime.

Researchers Easily Breached Voting Machines For the 2020 Election

The voting machines that the U.S. will use in the 2020 election are still vulnerable to hacks. A group of ethical hackers tested a bunch of those voting machines and election systems (most of which they bought on eBay). They were able to crack into every machine, The Washington Post reports. Their tests took place this summer at the Def Con cybersecurity conference, but the group visited Washington yesterday to share their findings. A number of flaws allowed the hackers to access the machines, including weak default passwords and shoddy encryption. The group says the machines could be hacked by anyone with access to them, and if poll workers make mistakes or take shortcuts, the machines could be infiltrated by remote hackers.

Facebook Confirms Its ‘Standards’ Don’t Apply To Politicians

Facebook this week finally put into writing what users — especially politically powerful users — have known for years: its community “standards” do not, in fact, apply across the whole community. Speech from politicians is officially exempt from the platform’s fact checking and decency standards, the company has clarified, with a few exceptions. Facebook communications VP Nick Clegg, himself a former member of the UK Parliament, outlined the policy in a speech and company blog post Tuesday. Facebook has had a “newsworthiness exemption” to its content guidelines since 2016. That policy was formalized in late October of that year, amid a contentious and chaotic US political season, three weeks before the presidential election that would land Donald Trump in the White House.

Facebook at the time was uncertain how to handle posts from the Trump campaign, The Wall Street Journal reported. Sources told the paper that Facebook employees were sharply divided over the candidate’s rhetoric about Muslim immigrants and his stated desire for a Muslim travel ban, which several felt were in violation of the service’s hate speech standards. Eventually, the sources said, CEO Mark Zuckerberg weighed in directly and said it would be inappropriate to intervene. Months later, Facebook finally issued its policy. “We’re going to begin allowing more items that people find newsworthy, significant, or important to the public interest — even if they might otherwise violate our standards,” Facebook wrote at the time.
Facebook by default “will treat speech from politicians as newsworthy content that should, as a general rule, be seen and heard.” It won’t be subject to fact-checking because the company does not believe that it is appropriate for it to “referee political debates” or prevent a politician’s speech from both reaching its intended audience and “being subject to public debate and scrutiny.”

Newsworthiness, Clegg added, will be determined by weighing the “public interest value of the piece of speech” against the risk of harm. The exception to all of this is advertising. “Standards are different for content for which the company receives payment, so if someone — even a politician or political candidate — posts ads to Facebook, those ads in theory must still meet both the community standards and Facebook’s advertising policies,” reports Ars.

Politicians Can Break Our Content Rules, YouTube CEO Says

YouTube CEO Susan Wojcicki said this week that content by politicians would stay up on the video-sharing website even if it violates the company’s standards, echoing a position staked out by Facebook this week.

“When you have a political officer that is making information that is really important for their constituents to see, or for other global leaders to see, that is content that we would leave up because we think it’s important for other people to see,” Wojcicki told an audience at The Atlantic Festival this morning. Wojcicki said the news media is likely to cover controversial content regardless of whether it’s taken down, giving context to understand it. YouTube is owned by Google. A YouTube spokesperson later told POLITICO that politicians are not treated differently than other users and must abide by its community guidelines. The company grants exemptions to some political speech if the company considers it to be educational, documentary, scientific, or artistic in nature.

Uber Stopped Its Own Investigators From Reporting Crimes To the Police

The special investigations team inside Uber, which fields complaints from riders and drivers, is not allowed to escalate those issues to law enforcement or file official police reports “even when they get confessions of felonies,” according to The Washington Post. They are also not allowed to advise victims or potential victims of crimes to seek legal counsel, according to the report, which was based on interviews with “more than 20 current and former investigators” who worked at Uber’s investigations unit in Arizona.

The investigators are also allegedly instructed “first to protect Uber” and to make sure it is “not held liable” for any crimes that are committed by people using the company’s ride-hailing platform. In that vein, the investigators told the paper that even the language they use when communicating with alleged victims is carefully worded to avoid the appearance that Uber is taking a side. The investigators also said they’re not supposed to specifically ask alleged perpetrators about claims against them.

Vimeo Sued For Storing Faceprints of People Without Their Consent

Vimeo is collecting and storing thousands of people’s facial biometrics without their permission or knowledge, according to a complaint filed on September 20 on behalf of potentially thousands of plaintiffs under the Illinois Biometric Information Privacy Act (BIPA).

The suit takes aim at Vimeo’s Magisto application: a short-form video creation platform purchased by Vimeo in April 2019 that uses facial recognition to automatically index the faces of people in videos so they can be face-tagged. BIPA bans collecting and storing biometric data without explicit consent, including “faceprints.” The complaint against Vimeo claims that users of Magisto “upload millions of videos and/or photos per day, making videos and photographs a vital part of the Magisto experience.”

The complaint maintains that unbeknownst to the average consumer, Magisto scans “each and every video and photo uploaded to Magisto for faces” and analyzes “biometric identifiers,” including facial geometry, to “create and store a template for each face.” That template is later used to “organize and group together videos based upon the particular individuals appearing in the videos” by “comparing the face templates of individuals who appear in newly-edited videos or photos with the facial templates already saved in Magisto’s face database.”

The complaint also asserts that Magisto analyzes and face-matches the biometrics of non-Magisto users who happen to appear in the photos and videos, which is a violation of BIPA.
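Stripped of specifics, the pipeline the complaint describes is standard face-recognition machinery: compute a numeric template (an embedding) for each detected face, store it, and identify new faces by their distance to stored templates. A minimal sketch of the matching step, with invented vectors standing in for Magisto's proprietary embeddings:

```python
import numpy as np

# Stored face templates keyed by identity. In a real system these are
# embeddings produced by a face-recognition model; these vectors are
# invented for illustration.
face_db = {
    "person_a": np.array([0.10, 0.90, 0.30]),
    "person_b": np.array([0.80, 0.20, 0.50]),
}

def match_face(template: np.ndarray, threshold: float = 0.9):
    """Return the closest stored identity by cosine similarity,
    or None if no stored template is similar enough."""
    best_name, best_score = None, threshold
    for name, stored in face_db.items():
        score = float(template @ stored /
                      (np.linalg.norm(template) * np.linalg.norm(stored)))
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# A face detected in a newly uploaded video (hypothetical embedding):
print(match_face(np.array([0.12, 0.85, 0.31])))  # -> person_a
```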

How the FBI targeted environmental activists in domestic terror investigations

The investigations, which targeted individual activists and some environmental organizations, were opened in 2013-2014, at the height of opposition to the Keystone XL Pipeline and the expansion of fossil fuel production in North America.

The new FOIA documents reveal the bureau’s motivation for investigating a broad cross section of the environmental movement and its characterization of non-violent protesters as a potential threat to national security.

In 2010, the DOJ’s inspector general criticized the FBI for using non-violent civil disobedience as grounds to open domestic terrorism investigations. US citizens swept up in such investigations can be placed on terrorism watchlists and subjected to surveillance and restrictions on international travel. The designation can also lead local law enforcement to take a more confrontational approach when engaging with non-violent activists.

Google Loans Cameras To Volunteers To Fill Gaps in ‘Street View’

Tawanda Kanhema, who works as a product manager in Silicon Valley and is a freelance photographer in his spare time, volunteered to carry Google’s Street View gear to map what amounted to 2,000 miles of his home country, Zimbabwe. The Berkeley, Calif., resident has filled in the map of other areas in Africa and Canada as well.

“We start in the large metropolitan areas where we know we have users, where it’s easy for us to drive and we can execute quickly,” says Stafford Marquardt, a product manager for Street View.

He says the team is working to expand the service’s reach. To do that, Google often relies on volunteers who can either borrow the company’s camera equipment or take photos using their own. Most images on Street View are collected by drivers, and most of these drivers are employed by third parties that work with Google. But when it comes to the places Google hasn’t prioritized, people like Kanhema can fill in the gaps.

“It’s so conspicuous to have a 4-foot contraption attached to the roof of your car,” Kanhema says. “People are walking up and asking questions about, ‘Is that a camera? What are you recording? What are you filming? Is it for Google Maps? Will my house be on the map? Will my face be on the map?'”

Google doesn’t pay him or the other volunteers — whom the company calls “contributors” — for the content they upload. Kanhema, for example, spent around $5,000 of his own money to travel across Zimbabwe for the project.

Google says it currently has no plans to compensate its volunteers, adding that it pays contributors “in a lot of other ways” by offering “a platform to host gigabytes and terabytes of imagery and publish it to the entire world, absolutely for free.”

The 120 Most CCTV Surveilled Cities In the World

Comparitech.com has published a report and spreadsheet laying out how many CCTV cameras are in operation in 120 different cities around the world, and data for the crime rates in these cities. The report notes “We found little correlation between the number of public CCTV cameras and crime or safety.”

Eight of the 10 most surveilled cities are in China, though London and Atlanta also make the cut, and the report says that — depending on what numbers you believe — China will have between 200 million and 626 million CCTV cameras, or possibly even more, in operation by 2020. That would be almost one CCTV camera per two citizens, and the number could go up.
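As a quick sanity check on that ratio, using the report's camera figures and an assumed population of roughly 1.4 billion (not a figure from the report), the "one camera per two citizens" claim holds only at the upper estimate:

```python
# The report's projected camera counts for China by 2020, against an
# assumed population of ~1.4 billion.
population = 1_400_000_000
for cameras in (200_000_000, 626_000_000):
    print(f"{cameras:,} cameras -> {cameras / population:.2f} per person")
# 200,000,000 cameras -> 0.14 per person
# 626,000,000 cameras -> 0.45 per person  (almost 1 per 2 citizens)
```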

Outside of China, the most-surveilled cities in the world are:

London – 68.40 cameras per 1,000 people
Atlanta – 15.56 cameras per 1,000 people
Singapore – 15.25 cameras per 1,000 people
Abu Dhabi – 13.77 cameras per 1,000 people
Chicago – 13.06 cameras per 1,000 people
Sydney – 12.35 cameras per 1,000 people
Baghdad – 12.30 cameras per 1,000 people
Dubai – 12.14 cameras per 1,000 people
Moscow – 11.70 cameras per 1,000 people
Berlin – 11.18 cameras per 1,000 people
New Delhi – 9.62 cameras per 1,000 people

Ex-Google Engineer Says That Robot Weapons May Cause Accidental Mass Killings

“A former Google engineer who worked on the company’s infamous military drone project has sounded a warning against the building of killer robots,” reports Business Insider.

Laura Nolan had been working at Google four years when she was recruited to its collaboration with the US Department of Defense, known as Project Maven, in 2017, according to the Guardian. Project Maven was focused on using AI to enhance military drones, building AI systems which would be able to single out enemy targets and distinguish between people and objects. Google canned Project Maven after employee outrage, with thousands of employees signing a petition against the project and about a dozen quitting in protest. Google allowed the contract to lapse in March this year. Nolan herself resigned after she became “increasingly ethically concerned” about the project, she said…

Nolan fears that the next step beyond AI-enabled weapons like drones could be fully autonomous AI weapons. “What you are looking at are possible atrocities and unlawful killings even under laws of warfare, especially if hundreds or thousands of these machines are deployed,” she said…. Although no country has yet come forward to say it’s working on fully autonomous robot weapons, many are building more and more sophisticated AI to integrate into their militaries. The US navy has a self-piloting warship, capable of spending months at sea with no crew, and Israel boasts of having drones capable of identifying and attacking targets autonomously — although at the moment they require a human middle-man to give the go-ahead.

Nolan is urging countries to declare an outright ban on autonomous killing robots, similar to conventions around the use of chemical weapons.

Social media stress can lead to social media addiction

Social networking sites (SNS) such as Facebook and Instagram are known to cause stress in users, a phenomenon known as technostress. However, when faced with such stress, instead of switching off or using these sites less, people move from one aspect of a social media platform to another — escaping the causes of their stress without leaving the medium on which it originated.

Research into the habits of 444 Facebook users revealed they would switch between activities such as chatting to friends, scanning news feeds and posting updates as each began to cause stress. This leads to an increased likelihood of technology addiction, as they use the various elements of the platform over a greater timespan.

Researchers from Lancaster University, the University of Bamberg and Friedrich-Alexander-Universität Erlangen-Nürnberg, writing in Information Systems Journal, found that users were seeking distraction and diversion within the Facebook platform as a coping mechanism for stress caused by the same platform, rather than switching off and undertaking a different activity.