Editor’s note: The opinions expressed in this commentary are the author’s alone.
Artificial intelligence is surrounded by misconceptions.
The first images that come to mind are the dystopian classics: HAL locking Dave out of the airlock, Terminators firing machine guns into a crowd, or (one of my favorites) V’ger taking pot-shots at the Federation.
The premise is nearly always the same — humanity gets ballsy and invents a superintelligence, which decides humans are petty and illogical and then tries to wipe us out by whatever means available.
But there are the good ones too, right? Iron Man wouldn’t feel the same without the witty byplay between Tony Stark and JARVIS. And we learned how to feel more human because of Samantha in Jonze’s Her.
In reality, artificial intelligence carries the same moral duality — that something can be used for either good or evil but isn’t automatically one or the other — as any formidable tool, albeit one of the smartest we’ve invented yet.
Even more to the point, most experts agree that when A.I. reaches its full potential, it will be less “new sentience” and more a “very flexible computer.” We might call it A.I., but it’s probably more akin to its sibling Intelligence Augmentation, or I.A. — advanced systems that take in a lot of data and organize it contextually to help us make decisions.
These are the systems that will move us beyond the technology lineworkers we are today into the information directors we’ll be tomorrow.
Redefining the interface
Today, we digest information through user interfaces on monitors or the smartphones in our pockets. The push of a button can move mountains (or order pizza).
Call me unimaginative, but in the future I’m betting we’ll still use laptops and tablets — although they will probably seem like magic pieces of plastic compared to what we have now. The huge difference will be that the systems we use will be a lot smarter and understand us a lot better than our machines today.
Context, coupled with the massive stores of data we’ve collected over the last few years, will require systems that can distill that data into forms we can understand.
Artificial intelligence systems are the difference between the car you drive red-eyed in morning rush hour traffic and the car you’ll sleep in as it drives itself to your job. It will be the difference between a conversation with a human on customer support who yells at you when they’re frustrated and a call system that assesses the tone of your voice and changes the way it talks to you.
Systems will be advanced enough to start perceptibly blurring the lines between human and machine, likely passing a lot of Turing-grade milestones. They will, however, be unlikely to develop a sense of self-actualization.
It will be easier to interact with technology as it learns to understand our intentions, read our body language, and “see” the world around it. It will change the way we work, the way we play and even the way we interact with each other.
Although it’s hard to say exactly what this will look like, the basic idea will be that eventually, smart technology could be embedded into most of the things you interact with. Think everything from your toothbrush to instant messaging at work.
As a user experience designer, this excites me more than it should. Gone will be the days of struggling to find a way to shove massive spreadsheets of data into a phone screen because “the client likes Excel a lot” when you can train a system to build graphical models and data distributions on the fly. These will be systems that incorporate mood, tone, and even dialect to be more approachable.
We’re already seeing these changes today. The likes of Siri, Google Now, and Amazon Echo are the primitive real-world tests we have in our hands right now. While these examples are largely voice-based (and kind of suck), designers are working on other communication methods, like the rising trend of chatbots. Popularized by Slack, chatbots are being deployed by companies like Facebook to assist with tasks inside text conversations or to provide new ways of interacting with brands.
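To make the idea concrete, here is a minimal sketch of how a rule-based chatbot matches intent in a text conversation. Everything in it — the intents, patterns, and canned replies — is a hypothetical illustration, not any particular company’s bot; real systems use trained language models rather than regular expressions.

```python
import re

# Hypothetical intent table: regex pattern -> canned reply.
# A production bot would use an NLU model; this sketch just pattern-matches.
INTENTS = [
    (re.compile(r"\b(hi|hello|hey)\b", re.I),
     "Hello! How can I help you today?"),
    (re.compile(r"\border\b.*\bpizza\b", re.I),
     "Sure — what toppings would you like?"),
    (re.compile(r"\b(hours|open)\b", re.I),
     "We're open 9am-9pm, every day."),
]

FALLBACK = "Sorry, I didn't catch that. Could you rephrase?"

def reply(message: str) -> str:
    """Return the first matching canned reply, or a fallback."""
    for pattern, response in INTENTS:
        if pattern.search(message):
            return response
    return FALLBACK

print(reply("Hey there"))                      # greeting intent
print(reply("I'd like to order a pizza"))      # order intent
print(reply("What is the meaning of life?"))   # no match -> fallback
```

Even this toy version shows the shape of the interaction: the user stays inside a conversation they already understand, and the system maps free-form text onto actions.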
In short, plugging in specialized data engines with truly human-like communication abilities — both in delivering information and in reading our inputs in context — will drastically reduce the friction we experience with our technology today.
With further advancements, we’ll start to interact with technology in far more organic ways that rival our interactions with other humans. Information will soon be even cheaper, easier, and more enjoyable to attain.
While it might be a bit before you can Amazon Prime a fresh set of Iron Man armor, that doesn’t mean Siri won’t be locking people out of their houses in the next decade.
“I’m sorry, Dave. I’m afraid I can’t do that.”
Coty Beasley is co-founder of Edge Up Sports, which uses machine learning and IBM Watson to provide sports data insights. Connect with Coty on Twitter @beacrea.