Robot is given the ability to apply 'acquired learning'
Aug. 8, 2011
by Mark Ollig

Each day brings us closer to when humans will truly be living among the intelligent robots we observe in science fiction movies.

The latest robot one step nearer to becoming more human-like was created by researchers with the Hasegawa Group at the Tokyo Institute of Technology.

This group just released a new demonstration video of their robot’s ability to comprehend its surroundings and perform a task that, until then, it didn’t know how to do.

This robot thinks and learns through an artificial intelligence technology called a Self-Organizing Incremental Neural Network (SOINN), designed to perform online unsupervised learning tasks.

The experimental robot was connected to a computing system operating the SOINN.

SOINN creates algorithms (or sets of rules) the robot uses to acquire new learning.
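The core idea behind online unsupervised learning is that the system builds up its own set of learned patterns as new inputs arrive, without a human labeling anything in advance. Here is a minimal sketch of that idea in Python; this is a simplified illustration I wrote, not the actual SOINN algorithm, and the class and threshold names are my own inventions:

```python
import math

class IncrementalLearner:
    """Toy illustration of online unsupervised learning: the model grows
    its own set of pattern nodes as unfamiliar inputs arrive.
    A simplified sketch, not the real SOINN algorithm."""

    def __init__(self, novelty_threshold):
        self.nodes = []                  # learned pattern prototypes
        self.threshold = novelty_threshold

    def observe(self, pattern):
        # Compare the new input against everything already learned.
        if self.nodes:
            nearest = min(self.nodes, key=lambda n: math.dist(n, pattern))
            if math.dist(nearest, pattern) < self.threshold:
                return "recognized"      # familiar input; nothing new to learn
        self.nodes.append(pattern)       # unfamiliar input becomes new knowledge
        return "learned"

learner = IncrementalLearner(novelty_threshold=1.0)
print(learner.observe((0.0, 0.0)))   # first input is always new: "learned"
print(learner.observe((0.1, 0.1)))   # close to a known pattern: "recognized"
print(learner.observe((5.0, 5.0)))   # far from anything known: "learned"
```

Notice there is no training phase and no labels; the learner simply accumulates knowledge one observation at a time, which is what "incremental" and "unsupervised" mean here.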

The researchers were successful in getting a robot to appear to “think” as a human would when deciding on the best course of action to take when presented with an unfamiliar situation.

The human brain-like decision making taking place within the SOINN computing program is based, in part, upon the information communicated to it from the robot’s surveillance of its surroundings.

The experiment begins.

The video shows the robot (who is nameless) being instructed to fill a glass cup with water from a bottle sitting on a table.

I noticed they are not using real liquid water in this demonstration, but small plastic pellets simulating water.

On this table there is also a tray with one (plastic) ice cube in it.

“I’ll get the glass!” the robot verbally declares, while reaching with its left hand and picking up the cup.

“I’ll get the bottle!” says the robot as it picks up the water bottle using its right hand.

“I’ll put water in the glass!” the robot announces just before pouring the water into the cup it is holding in its left hand.

The robot then exclaims, “I’ll put the glass down!”

The robot then places the newly filled cup of water onto the table.

This part of the demonstration was completed satisfactorily. While holding the cup in its left hand, the robot reached over with the bottle in its right hand, poured the correct amount of water into the cup, and then placed the cup down on the table.

In this instance, the robot was following a predetermined set of computing instructions, which was impressive, but not overly extraordinary.

However, things get a bit more interesting when the robot is asked to perform the very same task again; this time, while it is pouring the water into the cup, the robot is told that the water needs to be cold.

Who really enjoys drinking lukewarm water?

Now the robot faces a conundrum: with its right hand holding the water bottle and its left hand holding the cup, it must “think” of a way to get the ice cube into the cup.

The robot is contemplating how to accomplish a task it has never done before.

This is where the robot, using SOINN, determines what it needs to do in order to cool the water in the cup.

The robot then decides on what actions to take and in what order.

The robot chooses to place the water bottle down on the table, then reaches over, picks up the ice cube from the tray, and successfully places it into the cup.
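The reasoning the robot works through here amounts to recognizing a precondition: grasping the ice cube requires a free hand, so an occupied hand must be emptied first. A loose sketch of that kind of precondition check, with hand names and action strings that are purely my own illustration:

```python
def plan_ice_cube(hands):
    """Sketch of precondition reasoning: picking up the ice cube needs a
    free hand, so if both hands are occupied, free one first.
    `hands` maps a hand name to the object it holds (or None)."""
    plan = []
    if all(obj is not None for obj in hands.values()):
        # The bottle's job (pouring) is done, so it can be set down.
        plan.append("place bottle on table")
        hands["right"] = None
    free_hand = next(h for h, obj in hands.items() if obj is None)
    plan.append(f"pick up ice cube with {free_hand} hand")
    plan.append("drop ice cube into cup")
    return plan

print(plan_ice_cube({"right": "bottle", "left": "cup"}))
# → ['place bottle on table', 'pick up ice cube with right hand',
#    'drop ice cube into cup']
```

The interesting part of the demonstration is that the robot was never explicitly programmed with this sequence; it assembled the ordering itself from what it had learned.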

“So far, robots, including industrial robots, have been able to do specific tasks quickly and accurately,” says Osamu Hasegawa, associate professor, Imaging Science and Engineering Laboratory of the Tokyo Institute of Technology.

Hasegawa goes on to say that if one teaches the robot the things it can’t do, it will incorporate them as “new knowledge.” The robot will then attempt to solve new problems by applying this newly learned knowledge.

In addition to the auditory, visual, and tactile sensory data the robot observes for use in creating SOINN algorithms, SOINN also collects information from other sources, including the Internet and other robots’ experiences and knowledge.

Hasegawa also discussed another example of robot learning by verbally illustrating one possible scenario encountered when a robot was sent to assist an elderly person living alone.

The person asks the robot to make a cup of green tea; however, the robot does not know how to prepare green tea.

Being connected to the Internet, the robot asks other robots around the world (which are also connected to the Internet) how to make green tea.

A robot in the UK responds by saying it knows how to make a British-style tea.

Hasegawa explained how the robot would transfer the knowledge from the UK robot’s British-style tea-making method and apply it to making green tea using a Japanese teapot.
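The tea scenario describes a query-and-adapt pattern: ask networked peers for a related skill, then substitute local ingredients and tools. A hypothetical sketch of that idea; the robot names, skill steps, and substitution table below are all invented for illustration:

```python
# Hypothetical registry of skills shared by peer robots over the network.
peer_skills = {
    "uk_robot": {"make british tea": ["boil water", "add black tea leaves",
                                      "steep", "add milk", "serve"]},
    "jp_robot": {"clean room": ["vacuum floor", "empty bin"]},
}

# Local adaptations for turning the borrowed skill into green tea making.
substitutions = {"add black tea leaves": "add green tea leaves",
                 "add milk": "pour into japanese teapot"}

def learn_related_skill(task_keyword):
    """Ask every peer for a skill whose name mentions the keyword,
    then adapt each borrowed step using local substitutions."""
    for robot, skills in peer_skills.items():
        for name, steps in skills.items():
            if task_keyword in name:
                return [substitutions.get(step, step) for step in steps]
    return None  # no peer knows a related skill

print(learn_related_skill("tea"))
# → ['boil water', 'add green tea leaves', 'steep',
#    'pour into japanese teapot', 'serve']
```

The point of the example is that the robot does not need the exact skill; a related one, borrowed from a peer and adapted locally, is enough.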

The lessons learned by the robot, over time, will allow it to become smarter.

The robot, like all of us, will learn by doing.

To view the Tokyo Institute of Technology demonstration video, go to http://tinyurl.com/3b4zr9d.
