Does hearing about ingenious gadgets and technologies that promise to make our lives a bit more enjoyable and productive catch your attention?
Of course, yours truly is usually quick to embrace these promising new technological revolutions, even while they are still in development.
Six years ago, I wrote a column titled “Cell phones may soon be your tourist guide.”
In it, I described a proposed new cellphone software application that would provide instant information about whatever you photographed.
The picture would be transmitted from your cellphone to a central computer system, which would interface with databases located on the Internet.
The system would retrieve information about the pictured object from these databases.
That information would then be transmitted back to your cellphone and displayed in real time.
The technology would be used to identify buildings, scenery, plants, and even animals.
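To make that flow concrete, here is a minimal Python sketch of such a picture-lookup pipeline. The fingerprinting step, the database contents, and the function names are all my own illustrative assumptions, not SuperWise's actual design.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Reduce a picture to a compact key; this stands in for whatever
    feature extraction the central computer system would really do."""
    return hashlib.sha256(image_bytes).hexdigest()

def identify(image_bytes: bytes, database: dict) -> str:
    """Look the picture's fingerprint up in a database, the way the
    central server would, and return a label to display on the phone."""
    return database.get(fingerprint(image_bytes), "unknown object")

# A toy "landmark database" the central system might consult.
landmarks = {fingerprint(b"eiffel-tower-pixels"): "Eiffel Tower"}

print(identify(b"eiffel-tower-pixels", landmarks))
print(identify(b"random-snapshot", landmarks))
```

In the real proposal the match would of course be done with image analysis rather than an exact fingerprint, but the round trip — phone to server, server to databases, label back to phone — is the same.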
It’s six years later, and I have not read anything new from SuperWise Technologies AG, located in Wolfratshausen, Germany, about its concept.
Five years ago, yours truly wrote a column about a similar technology that was actually in use: Google Goggles.
In this column, I talked about how the Google Goggles application “takes the picture from our cell phone and sends it to the web, where a search for information about it is performed. The information is then returned to us in real-time.”
Google Goggles can be used to identify objects in pictures, such as landmarks, artwork, bar codes and quick response (QR) codes, and a few others. It is currently being used on today’s smartphones.
For more information about installing Google Goggles, visit http://tinyurl.com/bytes-goggles1.
I did note on Google’s website that Goggles is not very good at identifying animals in pictures.
This is where I jump to 2014 and reveal in this column the latest object-recognition technology, which appears to be very good at identifying animals.
This new machine-learning, artificial-intelligence computing technology is named Project Adam.
Yes, indeed, our old friends at Microsoft Research are the ones developing this new object-identification artificial intelligence.
According to Microsoft, the objective of Project Adam “is to enable software to visually recognize any object.”
Microsoft’s current artificially intelligent virtual personal assistant, Cortana, was integrated with Project Adam’s technology.
Cortana is to Microsoft’s Windows Phone, what Siri is to Apple’s iPhone.
Cortana, by the way, is also the name of the artificial intelligence character from Microsoft Studios’ Halo video games.
Project Adam was shown during last week’s annual Microsoft Research Faculty Summit in Redmond, WA.
The demonstration of Project Adam’s capabilities was given on stage before a live audience, using three different breeds of dogs.
Johnson Apacible, a Project Adam researcher, aimed his smartphone and took a picture of Cowboy, a Dalmatian sitting on the stage.
“Cortana, what dog breed is this?” asked Apacible into the smartphone.
On the smartphone’s display screen, the word “Dalmatian” appeared.
Apacible then pointed his smartphone at another dog (without taking a picture) and asked, “Cortana, what kind of dog is this?”
“Could you take a picture for me?” Cortana’s voice over the smartphone’s speaker asked.
Laughter could be heard from the people in the audience.
Apacible pointed the smartphone’s camera at the dog named Millie and snapped a picture.
Project Adam’s technology came through again by correctly identifying the breed, with Cortana saying, “I believe this is a Rhodesian Ridgeback.”
The audience showed its appreciation with applause.
The last breed of dog on stage, an Australian Cobberdog named Ned, was also correctly identified by Cortana.
Apacible wanted the audience to know Project Adam’s technology could tell the difference between a dog and a person, and so he directed the smartphone’s camera at Harry Shum, Microsoft’s executive vice president of Technology and Research.
“I believe this is not a dog,” Cortana correctly stated.
The human brain uses trillions of neural pathway connections to identify objects; Project Adam uses 2 billion connections in its artificial neural network.
Eventually, it is hoped, this research will also allow you to take a smartphone picture of what you are eating and instantly obtain its nutritional value.
This technology may someday lead to being able to take a picture of a rash, or other unusual skin condition, and receive an accurate medical diagnosis.
Imagine you’re camping out in the woods and come across some unfamiliar plants; by taking their picture and having it analyzed, you would be able to determine which ones are edible and which are poisonous.
Indeed, Project Adam shows the promise of developing into a very useful new technology.
A short video of Cortana identifying breeds of dogs can be seen at http://tinyurl.com/bytes-adam.
The 2014 Microsoft Research Faculty Summit webpage is at http://tinyurl.com/bytes-Summit.