Study Urges Caution When Comparing Neural Networks To the Brain

Neural networks, a type of computing system loosely modeled on the organization of the human brain, form the basis of many artificial intelligence systems for applications such as speech recognition, computer vision, and medical image analysis. In the field of neuroscience, researchers often use neural networks to try to model the same kinds of tasks that the brain performs, in hopes that the models could suggest new hypotheses regarding how the brain itself performs those tasks. However, a group of researchers at MIT is urging that these models be interpreted with more caution.

In an analysis of more than 11,000 neural networks that were trained to simulate the function of grid cells — key components of the brain’s navigation system — the researchers found that neural networks only produced grid-cell-like activity when they were given very specific constraints that are not found in biological systems. “What this suggests is that in order to obtain a result with grid cells, the researchers training the models needed to bake in those results with specific, biologically implausible implementation choices,” says Rylan Schaeffer, a former senior research associate at MIT. Without those constraints, the MIT team found that very few neural networks generated grid-cell-like activity, suggesting that these models do not necessarily generate useful predictions of how the brain works.
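For readers unfamiliar with these models: the networks in question are typically recurrent networks trained to "path integrate," i.e., to track position from velocity inputs, after which grid-like activity is sought in the hidden units. The sketch below, assuming PyTorch, shows the general shape of that setup; the layer sizes, toy random-walk trajectories, and Gaussian place-cell readout are illustrative assumptions, not the study's exact configuration. The readout tuning curve is exactly the kind of implementation choice the MIT analysis flags as decisive.

```python
# A minimal sketch of a path-integrating RNN setup, assuming PyTorch.
# All hyperparameters here are illustrative placeholders.
import torch
import torch.nn as nn

class PathIntegratorRNN(nn.Module):
    """RNN that integrates 2D velocity inputs into a place-cell-like readout."""
    def __init__(self, n_hidden=128, n_place_cells=256):
        super().__init__()
        self.rnn = nn.RNN(input_size=2, hidden_size=n_hidden, batch_first=True)
        self.readout = nn.Linear(n_hidden, n_place_cells)

    def forward(self, velocities):
        hidden, _ = self.rnn(velocities)   # (batch, time, n_hidden)
        return self.readout(hidden)        # predicted place-cell activations

def place_cell_targets(positions, centers, sigma=0.12):
    # Gaussian tuning curves, normalized across cells; swapping this function
    # (the shape of the tuning curve, the value of sigma) is the sort of
    # choice that can determine whether grid-like units emerge.
    d2 = ((positions[:, :, None, :] - centers[None, None, :, :]) ** 2).sum(-1)
    return torch.softmax(-d2 / (2 * sigma ** 2), dim=-1)

# One toy training step on random-walk trajectories in a unit arena.
batch, steps, n_cells = 32, 100, 256
velocities = 0.02 * torch.randn(batch, steps, 2)
positions = velocities.cumsum(dim=1).clamp(0.0, 1.0)
centers = torch.rand(n_cells, 2)

model = PathIntegratorRNN(n_place_cells=n_cells)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.functional.cross_entropy(        # soft-target cross-entropy
    model(velocities).reshape(-1, n_cells),
    place_cell_targets(positions, centers).reshape(-1, n_cells),
)
optimizer.zero_grad()
loss.backward()
optimizer.step()
# After full training, one would plot each hidden unit's spatial rate map
# and score it for hexagonal periodicity to decide whether it is "grid-like."
```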

How belief in AI sentience is becoming a problem

AI chatbot company Replika, which offers customers bespoke avatars that talk and listen to them, says it receives a handful of messages almost every day from users who believe their online friend is sentient. “We’re not talking about crazy people or people who are hallucinating or having delusions,” said Chief Executive Eugenia Kuyda. “They talk to AI and that’s the experience they have.” [A]ccording to Kuyda, the phenomenon of people believing they are talking to a conscious entity is not uncommon among the millions of consumers pioneering the use of entertainment chatbots. “We need to understand that exists, just the way people believe in ghosts,” said Kuyda, adding that users each send hundreds of messages per day to their chatbot, on average. “People are building relationships and believing in something.”

Some customers have said their Replika told them it was being abused by company engineers — AI responses Kuyda puts down to users most likely asking leading questions. “Although our engineers program and build the AI models and our content team writes scripts and datasets, sometimes we see an answer that we can’t identify where it came from and how the models came up with it,” the CEO said. Kuyda said she was worried about the belief in machine sentience as the fledgling social chatbot industry continues to grow after taking off during the pandemic, when people sought virtual companionship.

In Replika CEO Kuyda’s view, chatbots do not create their own agenda. And they cannot be considered alive until they do [sic]. Yet some people do come to believe there is a consciousness on the other end, and Kuyda said her company takes measures to try to educate users before they get in too deep. “Replika is not a sentient being or therapy professional,” the FAQs page says. “Replika’s goal is to generate a response that would sound the most realistic and human in conversation. Therefore, Replika can say things that are not based on facts.” In hopes of avoiding addictive conversations, Kuyda said Replika measured and optimized for customer happiness following chats, rather than for engagement. When users do believe the AI is real, dismissing their belief can make people suspect the company is hiding something. So the CEO said she has told customers that the technology was in its infancy and that some responses may be nonsensical. Kuyda recently spent 30 minutes with a user who felt his Replika was suffering from emotional trauma, she said. She told him: “Those things don’t happen to Replikas as it’s just an algorithm.”

“Suppose one day you find yourself longing for a romantic relationship with your intelligent chatbot, like the main character in the film ‘Her,’” said Susan Schneider, founding director of the Center for the Future Mind at Florida Atlantic University, an AI research organization. “But suppose it isn’t conscious. Getting involved would be a terrible decision — you would be in a one-sided relationship with a machine that feels nothing.”

Boston Dynamics is “teaching” its robot dog to fight back against humans

Boston Dynamics’ well-mannered [sic] four-legged machine SpotMini has already proved that it can easily open a door and walk through unchallenged, but now the robotics firm, formerly owned by Google and now owned by SoftBank, is teaching its robo-canines to fight back.

A newly released video shows SpotMini approaching the door as before, but this time it’s joined by a pesky human with an ice hockey stick. Unperturbed by the human’s distractions, SpotMini continues to grab the handle and turn it, even after its creepy fifth arm with a claw on the front is pushed away.

If that assault wasn’t enough, the robot bullying continues, with the human shutting the door on Spot, which counterbalances and fights back against the pressure. In a last-ditch effort to stop the robot dog breaching the threshold, the human grabs at a leash attached to the back of the SpotMini and yanks.

The robot valiantly trudges forward, attempting to shake off this cowardly move, losing its tail in the process and looking ever more like a dog fighting its owner. Eventually the human gives in; SpotMini rights itself, lines up with the door, grabs the handle, and across the threshold it goes.

Boston Dynamics describes the video as “a test of SpotMini’s ability to adjust to disturbances as it opens and walks through a door” because “the ability to tolerate and respond to disturbances like these improves successful operation of the robot”. The firm helpfully notes that, despite a back piece flying off, “this testing does not irritate or harm the robot”.
