Tonight, the final episode of man-versus-machine Jeopardy! will air, pitting Watson, the IBM supercomputer, against Jeopardy! champions Ken Jennings and Brad Rutter one last time.
During the first episode, Alex Trebek (who really does look better with a moustache) explained the genesis of Watson: “A little over three years ago, the folks at IBM came to us with a proposal that they considered to be the next grand challenge in computing. And that was designing a computer system that could understand the complexities of human language well enough to compete against Jeopardy!’s best players.”
So far, Watson’s Jeopardy! record remains impressive. But can artificial intelligence compete with humans in a more meaningful context? And if so, will human-Cylon wars commence?
As Dartmouth professor Richard Granger explains in January’s Cerebrum article, computational neuroscientists and others working to create artificial intelligence systems that exceed human brain capabilities have miles to go.
“Brains, alone among organs, produce thought, learning, recognition,” writes Granger. “No amount of engineering has yet equaled, let alone surpassed, brains’ abilities at these tasks. Despite huge efforts and large budgets, we have no artificial systems that rival humans at recognizing faces, nor understanding natural languages, nor learning from experience.”
IBM scientists appear to have made headway in getting a machine to recognize natural language—at least language as presented by Jeopardy! But can Watson have a conversation, perceive emotions, or distinguish a U.S. city from a Canadian one?
Here’s what Granger has to say:
“Even our simplest perceptions often rely on top-down processing: using stored memory representations to inform our ongoing perception and recognition. In some circumstances, we can recognize objects in just tens of milliseconds, so rapidly that it is unlikely that any top-down pathways are yet engaged. Yet once we’re beyond simple recognition, to the far richer range of inference, association, and even language, memories strongly influence our perceptions. Merely thinking of a car is sufficient to activate the same early visual areas that would have been triggered by actually seeing the car, including its shape, size, color, and other features.”
Watson may be able to parse clues and compute responses, but it certainly cannot derive meaning from them. For that, we have to look to people like Ken Jennings, Brad Rutter, and Alex Trebek. At least until the development of Watson 2.0.
For more on the development of Watson, watch this NOVA special.