Resources

Researchers Create First Flying Wireless Robotic Insect

You might remember RoboBee, an insect-sized robot that flies by flapping its wings. Unfortunately, though, it has to be hard-wired to a power source. Now one of RoboBee’s creators has helped develop RoboFly, which flies without a tether. Slightly heavier than a toothpick, RoboFly was designed by a team at the University of Washington — one member of that team, assistant professor Sawyer Fuller, was also part of the Harvard University team that first created RoboBee. That earlier robot receives its power via a wire attached to an external power source, as an onboard battery would simply be too heavy to allow the tiny craft to fly.

Instead of a wire or a battery, RoboFly is powered by a laser. The laser shines on a photovoltaic cell mounted on top of the robot. On its own, that cell produces just seven volts, so a built-in circuit boosts the output to the 240 volts needed to flap the wings. That circuit also contains a microcontroller, which tells the robot when and how to flap its wings — on RoboBee, that sort of “thinking” is handled by a tether-linked external controller.
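The seven-to-240-volt step-up can be sanity-checked with the textbook ideal boost-converter relation V_out = V_in / (1 − D), where D is the switching duty cycle. This is only a first-order sketch — the article does not describe RoboFly’s actual converter topology, and real converters at ratios this extreme are far from ideal:

```python
def boost_duty_cycle(v_in: float, v_out: float) -> float:
    """Duty cycle an ideal boost converter needs to step v_in up to v_out.

    From the ideal relation V_out = V_in / (1 - D), so D = 1 - V_in / V_out.
    Real circuits (like RoboFly's) have losses, so treat this as a lower bound.
    """
    if not 0 < v_in < v_out:
        raise ValueError("boost converters require 0 < v_in < v_out")
    return 1.0 - v_in / v_out

# Photovoltaic cell delivers ~7 V; the wing actuators need ~240 V.
d = boost_duty_cycle(7.0, 240.0)
print(f"required duty cycle: {d:.3f}")  # 0.971
```

A duty cycle above 97% hints at why this conversion stage is a nontrivial piece of the robot’s weight and power budget.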


Artificial intelligence can create a 3D model of a person—from just a few seconds of video

Artificial intelligence has been used to create 3D models of people’s bodies for virtual reality avatars, surveillance, visualizing fashion, or movies. But it typically requires special camera equipment to detect depth or to view someone from multiple angles. A new algorithm creates 3D models using standard video footage from one angle.

The system has three stages. First, it analyzes a video a few seconds long of someone moving—preferably turning 360° to show all sides—and for each frame creates a silhouette separating the person from the background. Based on machine learning techniques—in which computers learn a task from many examples—it roughly estimates the 3D body shape and location of joints. In the second stage, it “unposes” the virtual human created from each frame, making them all stand with arms out in a T shape, and combines information about the T-posed people into one, more accurate model. Finally, in the third stage, it applies color and texture to the model based on recorded hair, clothing, and skin.
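The three stages lend themselves to a pipeline sketch. Everything below is a structural stand-in, not the authors’ method: the stage functions (`extract_silhouette`, `estimate_body`, `unpose`, `fuse`, `texture`) have trivial placeholder bodies, and the SMPL-style vertex count is an assumption — the sketch shows only how data would flow through the three stages:

```python
import numpy as np

N_VERTICES = 6890  # SMPL-style body-mesh size; an assumption, not from the article

def extract_silhouette(frame: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Stage 1 (placeholder): separate the person from the background per frame."""
    return (frame > threshold).astype(np.uint8)

def estimate_body(silhouette: np.ndarray) -> np.ndarray:
    """Stage 1 continued (placeholder): rough 3D shape and joints from a silhouette."""
    rng = np.random.default_rng(int(silhouette.sum()) % 2**32)
    return rng.normal(size=(N_VERTICES, 3))

def unpose(mesh: np.ndarray) -> np.ndarray:
    """Stage 2 (placeholder): map a posed mesh into the canonical T-pose."""
    return mesh

def fuse(t_posed: list) -> np.ndarray:
    """Stage 2: combine per-frame T-posed estimates into one model (mean here)."""
    return np.mean(t_posed, axis=0)

def texture(mesh: np.ndarray) -> dict:
    """Stage 3 (placeholder): attach per-vertex color for hair, clothing, skin."""
    colors = np.full((mesh.shape[0], 3), 0.5)
    return {"vertices": mesh, "colors": colors}

def reconstruct(frames: list) -> dict:
    t_posed = [unpose(estimate_body(extract_silhouette(f))) for f in frames]
    return texture(fuse(t_posed))

# A few seconds of fake grayscale video of someone turning in place.
video = [np.random.rand(64, 64) for _ in range(90)]
model = reconstruct(video)
print(model["vertices"].shape)  # (6890, 3)
```

The averaging in `fuse` is the simplest possible stand-in for the paper’s “combine information about the T-posed people into one, more accurate model” step; the real system aligns and refines the per-frame estimates rather than averaging them.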

The researchers tested the method with a variety of body shapes, clothing, and backgrounds and found that it achieved an average accuracy within 5 millimeters, results they will report in June at the Computer Vision and Pattern Recognition conference in Salt Lake City. The system can also reproduce the folding and wrinkles of fabric, but it struggles with skirts and long hair. With a model of you, the researchers can change your weight, clothing, and pose — and even make you perform a perfect pirouette. No practice necessary.


Researchers create simulation of a worm’s neural network

Researchers at the Technische Universität Wien have created a simulation of a simple worm’s neural network, and have been able to completely replicate the worm’s natural reflexive behavior. According to the article, using a simple neural network of 300 neurons, the simulation of “the worm can find its way, eat bacteria and react to certain external stimuli. It can, for example, react to a touch on its body. A reflexive response is triggered and the worm squirms away. This behavior is determined by the worm’s nerve cells and the strength of the connections between them. When this simple reflex network is recreated on a computer, the simulated worm reacts in exactly the same way to a virtual stimulation — not because anybody programmed it to do so, but because this kind of behavior is hard-wired in its neural network.” Using the same neural network without adding any additional nerve cells, Mathias Lechner, Radu Grosu, and Ramin Hasani were able to have the nematode simulation learn to balance a pole “just by tuning the strength of the synaptic connections. This basic idea (tuning the connections between nerve cells) is also the characteristic feature of any natural learning process.”
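The closing idea — learning by tuning connection strengths alone, with no new neurons — can be illustrated with a toy hill-climbing sketch. Everything here is invented for illustration (the one-synapse reflex, the training data, the perturbation scheme); only the principle of fixed wiring with tunable weights comes from the article:

```python
import math
import random

random.seed(0)

def reflex(weight: float, bias: float, touch: float) -> float:
    """A one-synapse 'reflex arc': touch stimulus in, withdrawal strength out."""
    return math.tanh(weight * touch + bias)

# Hypothetical training data: stronger touches should trigger stronger squirming.
stimuli = [0.0, 0.25, 0.5, 0.75, 1.0]
targets = [0.0, 0.2, 0.45, 0.65, 0.8]

def loss(w: float, b: float) -> float:
    """Squared error of the reflex against the desired responses."""
    return sum((reflex(w, b, s) - t) ** 2 for s, t in zip(stimuli, targets))

# Learning = tuning connection strengths: the wiring (one synapse) never
# changes; we just keep random weight perturbations that reduce the error.
w, b = 0.0, 0.0
initial = best = loss(w, b)
for _ in range(2000):
    w2, b2 = w + random.gauss(0, 0.1), b + random.gauss(0, 0.1)
    cand = loss(w2, b2)
    if cand < best:
        w, b, best = w2, b2, cand

print(f"loss before tuning: {initial:.3f}, after: {best:.3f}")
```

Random hill climbing stands in here for whatever optimizer the researchers actually used; the point it demonstrates is the article’s, that adjusting synaptic strengths within a fixed network is enough for the behavior to improve.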
