The Jelly Bean version of Google’s Android OS is coveted primarily for the Project Butter responsiveness enhancements that smooth out UI navigation. The more I use the OS, though, the more enamored I become with Google Now. This information aggregator on steroids combines search, voice recognition, and a measure of intelligence that surfaces relevant details before you ask for them. The Verge has taken a closer look at what’s going on behind the scenes, offering interesting insight into what seems to be an integral part of Google’s vision for the future.
For me, the best part of Google Now is the voice recognition engine—which, according to The Verge, uses neural network technology initially developed to recognize cats in video clips. When applied to speech, the neural network focuses on base sounds, or phonemes. This approach is reportedly tolerant of different accents, tones, background environments, and microphones. Switching to a neural network for speech recognition cut the error rate by 20-25%, Google says, a pretty substantial improvement.
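To make the phoneme idea concrete, here is a purely illustrative sketch, not Google's actual model: a toy one-layer classifier that scores a tiny, made-up phoneme inventory against a single frame of acoustic features. The phoneme labels, feature vector, and hand-picked weights are all assumptions for demonstration; a real system learns millions of weights across many layers.

```python
import math

# Toy phoneme inventory and hand-picked weights -- purely for illustration,
# not learned from data and not representative of any production model.
PHONEMES = ["ah", "ee", "oo"]
WEIGHTS = {                      # phoneme -> one weight per acoustic feature
    "ah": [2.0, -0.5, 0.0],
    "ee": [-1.0, 1.5, -0.5],
    "oo": [0.5, -1.0, 2.0],
}

def classify_frame(features):
    """Score each phoneme for one feature frame and return the best match.

    Computes a linear score per phoneme, then a softmax so the scores
    can be read as probabilities.
    """
    logits = {p: sum(w * f for w, f in zip(ws, features))
              for p, ws in WEIGHTS.items()}
    total = sum(math.exp(v) for v in logits.values())
    probs = {p: math.exp(v) / total for p, v in logits.items()}
    return max(probs, key=probs.get), probs

best, probs = classify_frame([1.0, 0.2, 0.1])
print(best)  # -> "ah" for this toy input
```

The appeal of working at the phoneme level, as the article notes, is that the same small sound inventory underlies every accent and recording condition, so the network only has to map noisy features onto a few dozen targets rather than onto whole words.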
Google Now’s real potential lies in its ability to pull information from a number of different sources, including your inbox and calendar, your smartphone or tablet’s GPS, and the wealth of data spread over the Internet. Increasingly, it relies on Google’s new Knowledge Graph, a sort of mini-Wiki that provides relevant—and, more importantly, accurate—information to accompany traditional search results. Responding to searches is one thing, but accurately predicting what data users need at any given moment is where the real magic happens.
As one might expect given its history, Google is taking its time developing Now. New information categories, otherwise known as cards, are being added slowly, with only a handful making the cut for the latest 4.2 revision of Android. Hundreds of cards are apparently in the pipeline, though, and it will be interesting to see how the service evolves.