Speaking to Robots: The State of AI
The human race has a rocky relationship with artificial intelligence as it's portrayed in popular entertainment. Sometimes AI is a helpful, laundry-folding member of the family like Rosie from The Jetsons. Sometimes it's an evil hell-raiser bent on human destruction like The Terminator. Either way, we're fascinated with a future where AI technology can function and feel like a human.
IBM thinks that this future is close. Last week in Las Vegas, IBM held its second annual World of Watson conference to discuss the progress of its Jeopardy-winning AI and the future of the industry. Since its creation in 2007, Watson has been programmed with algorithms that allow it to access massive amounts of data and learn from what it searches. Techniques such as natural language processing, machine learning and sentiment analysis are helping Watson move closer to sentience. It can read more than 800 million pages per second, but it still can't fold your laundry. It is unclear whether IBM ever plans to pursue such features, but the company says that in five years, Watson will be able to handle much more important tasks. Watson is being groomed as a tool to benefit several industries. It could help diagnose patients' symptoms by reading their genomes and offering doctors treatment suggestions. It could read a market and advise business leaders on the state of their companies. IBM expects Watson to change the world in crucial ways, and all within the next decade.
Watson may soon be on the go with us too, thanks to IBM's partnership with GM's new OnStar Go service. On top of OnStar's signature hands-free calling and remote diagnostics features, Watson will help navigate OnStar customers through high-traffic areas and complete small tasks like ordering coffee or recommending restaurants.
While these OnStar features are impressive and, no doubt, helpful, they are not a far cry from what Siri, Cortana and OK Google already do on our mobile devices.
With the introduction of iOS 10 to new devices, Apple has finally caught up to the rest of the mobile AI pack. Lucky for users, there aren't many differences between the virtual assistants. They can all do the basics: send texts for you, initiate calls, and read or send your emails. You also aren't limited to just one of them regardless of which device you choose, because both Cortana and OK Google can be downloaded onto an Apple device.
Outlined in the chart below are some of the ways in which the three most popular virtual assistants differ in their features. Siri is now integrated with third-party apps (which the others have been able to do for a while) and can call you an Uber, send money over Venmo, make calls over Skype and more. But she can also be taught to pronounce your name correctly, a feature not offered on Cortana or OK Google.
Smart speakers are also making themselves at home with Amazon's Alexa and the Google Home. Google Home is pretty new and doesn't have a wide range of features yet beyond searching the web and playing music. It has a lot of catching up to do, because the Amazon Echo can tap into third-party apps and control your lights and other connected devices. There are some brand limitations, but it's still the most impressive smart speaker available today, and it's encouraging consumers to integrate AI into their homes.
Google may be behind in smart speakers, but it set itself apart in October when it released its Pixel smartphone with Google Assistant built right into the phone. This translates into a more personalized assistant that remembers your past interactions and adjusts future ones. Google brags that no other AI assistant understands your speech like its own, plus it has the notable bonus of being directly tethered to the world's largest search engine.
So what will separate portable Watson from the portable virtual assistants that already exist in our pockets? One possible difference could simply be the ability to initiate or hold a conversation.
One of IBM's focuses for Watson is the ability to communicate huge amounts of data. While Siri and Cortana can answer specific questions and make jokes, their language does not deviate from pre-set responses. In October, IBM acquired Expert Personal Shopper (XPS), a bot that can hold conversations with users to help them shop online. With this tech, Watson gains even more access to data (even data about pants is important), and XPS' software will give Watson's language skills a boost.
Like any large industry advancement, this one has several notable figures pushing back. Tesla and SpaceX founder Elon Musk is famously not crazy about the direction Google is taking with its programmable AI. Earlier this year, Musk established a research startup called OpenAI that hopes to develop friendly AI that is accessible to and benefits everyone. It also aims to stop AI from one day becoming too powerful or being used for sinister purposes. While that may seem like an irrational fear, given the rate at which AI tech is growing and IBM's predictions, we could be seeing sentience in AI within our lifetimes.