ARCHIVES: This is legacy content from before Marketing Dive acquired Mobile Marketer in early 2017. Some information, such as publication dates, may not have migrated over.

Google expands mobile search to include images, voice, location

At a launch event at the Computer History Museum in Mountain View, CA, Google announced that its mobile search platform will now support search by image, voice and location.

According to Google, mobile devices straddle the intersection of three significant industry trends: computing, connectivity and the cloud. Phones get more powerful and less expensive all the time; they are connected to the Internet more often and from more places; and they tap into computational power available in datacenters around the world.

"These 'Cs' aren't new; we've discussed them in isolation for over 40 years," said Vic Gundotra, vice president of engineering at Google, Mountain View, CA, in a blog post. "But today's smartphones, for the first time, combine all three into a personal, handheld experience.

"We've only begun to appreciate the impact of these converged devices, but we're pretty sure about one thing: we've moved past the PC-only era, into a world where search is forever changed," he said.

"And we're excited to share Google's early contributions to this new era of computing."

With a sensor-rich phone that is connected to the cloud, users can now search by voice using the microphone, by location using GPS and the compass and by sight using the camera.

Search by sight
Yesterday Google announced Google Goggles, a Labs product for Android 1.6+ devices that lets users search by sight.

Goggles lets users search for objects using images rather than words. Users can take a picture with their phone's camera, and if Google recognizes the item, it returns relevant search results.

Currently Goggles identifies landmarks, works of art, text, books, DVDs and other products, logos, businesses, bar codes and contact information.

"When you connect your phone's camera to datacenters in the cloud, it becomes an eye to see and search with," Mr. Gundotra said. "It sees the world like you do, but it simultaneously taps the world's info in ways that you can't, and this makes it a perfect answering machine for your visual questions.

"Perhaps you're vacationing in a foreign country, and you want to learn more about the monument in your field of view," he said. "Maybe you're visiting a modern art museum, and you want to know who painted the work in front of you, or maybe you want wine tasting notes for the Cabernet sitting on the dinner table.

"In every example, the query you care about isn't a text string or a location: it's whatever you're looking at, and in all cases Google's ability to 'see further' is rooted in powerful computing, pervasive connectivity and the cloud."

Google first sends the user's image to its datacenters, where computer vision algorithms create signatures of the objects in the image. Google then compares those signatures against all known items in its image recognition databases.

Finally, Google determines how many matches exist and returns one or more search results, ranked using available metadata and ranking signals.
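
The pipeline described above can be sketched in miniature. This is purely illustrative and assumes nothing about Google's actual system: real image signatures come from computer-vision feature descriptors, not hashes, and the function and database names below are invented for the example.

```python
import hashlib

def image_signature(image_bytes: bytes) -> str:
    """Stand-in for a computer-vision descriptor: here, just a content hash."""
    return hashlib.sha256(image_bytes).hexdigest()

def match_image(image_bytes: bytes, database: dict) -> list:
    """Compare the image's signature against every known item; return matching labels."""
    sig = image_signature(image_bytes)
    return [label for known_sig, label in database.items() if known_sig == sig]

# A toy "recognition database" of two known objects.
db = {
    image_signature(b"eiffel-tower-photo"): "Eiffel Tower",
    image_signature(b"mona-lisa-photo"): "Mona Lisa",
}

print(match_image(b"mona-lisa-photo", db))  # prints ['Mona Lisa']
print(match_image(b"unknown-photo", db))    # prints []
```

An exact-hash lookup only matches identical bytes, which is why real visual search needs robust descriptors and approximate matching; the structure of the steps, though, mirrors the send-sign-compare-rank flow described in the article.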

"We do all of this in just a few seconds," Mr. Gundotra said. "Now, with all this talk of algorithms, image corpora and metadata, you may be wondering, 'Why is Goggles in Labs?'

"The answer, as you might guess, lies in both the nascence of the technology and the scope of our ambitions," he said.

Computer vision, like all of Google's extra-sensory efforts, is still in its infancy. Today Goggles recognizes certain images in certain categories, but its goal is to return high-quality results for any image.

"Today you frame and snap a photo to get results, but one day visual search will be as natural as pointing a finger: like a mouse for the real world," Mr. Gundotra said. "Either way we've got plenty of work to do, so please download Goggles from Android Market and help us get started."

Search by voice
Google first launched search by voice about a year ago, enabling mobile users to speak to Google.

"And we're constantly reminded that the combination of a powerful device, an Internet connection and datacenters in the cloud is what makes it work," Mr. Gundotra said.

Google first streams sound files to its datacenters in real time. The company then converts utterances into phonemes, words and phrases, and compares those phrases against Google's billions of daily queries to assign probability scores to all possible transcriptions.
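
The last step, scoring candidate transcriptions against query frequency, can be sketched as follows. This is a toy model under invented assumptions: Google's actual language models are far richer, and the query log and function names here are made up for illustration.

```python
from collections import Counter

# A tiny stand-in for "billions of daily queries": phrase -> observed count.
query_log = Counter({
    "pictures of cats": 900,
    "pizza near me": 700,
    "pictures of carts": 3,
})

def rank_transcriptions(candidates: list) -> list:
    """Score each candidate transcription by its relative frequency
    among logged queries, most probable first."""
    total = sum(query_log.values())
    scored = [(phrase, query_log[phrase] / total) for phrase in candidates]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# The recognizer hears something acoustically close to both phrases;
# the query log makes "pictures of cats" the far likelier transcription.
print(rank_transcriptions(["pictures of carts", "pictures of cats"]))
```

The design point the article hints at is that acoustics alone are ambiguous, and aggregate query behavior supplies the prior that disambiguates them.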

"We do all of this in the time it takes to speak a few words," Mr. Gundotra said.

Over the past 12 months, Google has introduced the product for the iPhone, Nokia, Android and RIM's BlackBerry devices and in more languages.

Yesterday Google announced that search by voice understands Japanese, joining English and Mandarin.

"Looking ahead, we dream of combining voice recognition with our language translation infrastructure to provide in-conversation translation: a UN interpreter for everyone!" Mr. Gundotra said. "And we're just getting started."

Search by location
Google has launched "What's Nearby" for Google Maps on Android 1.6+ devices, available as an update from Android Market.

To use the feature, consumers can long press anywhere on the map and Google will return a list of the 10 closest places, including restaurants, shops and other points of interest.
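
The "closest places" lookup can be sketched with a straight-line distance ranking. This is a minimal illustration, not Google's implementation: a real service would use geospatial indexes and road distances, and the place names and coordinates below are invented.

```python
import math

def distance_km(a, b):
    """Approximate great-circle distance between two (lat, lon) points
    in degrees, using the haversine formula with Earth radius 6371 km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def whats_nearby(here, places, n=10):
    """Return the n places closest to `here`, nearest first."""
    return sorted(places, key=lambda place: distance_km(here, place[1]))[:n]

# Hypothetical points of interest around a long-pressed map coordinate.
places = [
    ("Cafe", (37.4225, -122.0850)),
    ("Museum", (37.4143, -122.0770)),
    ("Bookshop", (37.4300, -122.1000)),
]
here = (37.4220, -122.0841)

print([name for name, _ in whats_nearby(here, places, n=2)])  # prints ['Cafe', 'Museum']
```

The point of the feature is that the coordinate itself is the query: no text is typed, and ranking is purely by proximity.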

And in a few weeks, users who visit Google.com from an iPhone or Android device will be able to click "Near me now" for the same experience.

"Your phone's location is usually your location: it's in your pocket, in your purse or on your nightstand, and as a result it's more personal than any PC before it," Mr. Gundotra said. "This intimacy is what makes location-based services possible, and for its part, Google continues to invest in things like My Location, real-time traffic and turn-by-turn navigation.

"Today we're tackling a question that's simple to ask but surprisingly difficult to answer: 'What's around here, anyway?'" he said.

Mr. Gundotra offered an illustration.

"Suppose you're early to pick up your child from school, or your drive to dinner was quicker than expected, or you've just checked into a new hotel," Mr. Gundotra said. "Chances are you've got time to kill, but you don't want to spend it entering addresses, sifting through POI categories or even typing a search.

"Instead you just want stuff nearby, whatever that might be," he said. "Your location is your query, and we hear you loud and clear."

Google said that its future plans include more than just nearby places.

In the new year, the company will begin showing local product inventory in search results and Google Suggest will include location-specific search terms.

"All thanks to powerful, Internet-enabled mobile devices," Mr. Gundotra said. "All of today's mobile announcements, from Japanese Voice Search to a new version of Maps to Google Goggles, are just early examples of what's possible when you pair sensor-rich devices with resources in the cloud.

"After all, we've only recently entered this new era, and we'll have more questions than answers for the foreseeable future," he said. "But something has changed.

"Computing has changed, and the possibilities inspire us."