
Thursday, February 24, 2011

Artificial Intelligence

I ended up having an email debate with family members when Dan sent an article about Watson, the IBM computer that beat two human contenders on "Jeopardy!" - a story I had been following with some interest.

I heard a scientist say on NPR that a "thinking" computer is not going to happen in the foreseeable future, and I used his argument when I had a great conversation with Caleb about this, because he was all caught up in the idea of computers taking over, as portrayed in so many movies (e.g., I, Robot; Terminator; Eagle Eye).  I also emphasized to Caleb that those movie scenarios depend on computers making a huge leap from processing to considering and interpreting and deciding and choosing and judging, which they can't do now and aren't expected to do any time soon.

A great example comes from a mistake Watson made during Jeopardy!: when asked "What do grasshoppers eat?", it answered "kosher."  As one of Watson's developers noted, grasshoppers are apparently a kosher food, and Watson made a connection between the two words without comprehending the real point of the question.

Via email, Lisa sent a link to the Time magazine article based largely on Kurzweil's theories - considered very "optimistic" regarding the potential of AI (I have it, but haven't read it yet).

Then I read the Atlantic cover story, an excerpt from Brian Christian's book The Most Human Human.  Here are a couple of key paragraphs from that story:

Who would have imagined that the computer’s earliest achievements would be in the domain of logical analysis, a capacity once held to be what made us most different from everything else on the planet? That it could fly a plane and guide a missile before it could ride a bike? That it could create plausible preludes in the style of Bach before it could make plausible small talk? That it could translate before it could paraphrase? That it could spin half-discernible essays on postmodern theory before it could be shown a chair and say, as most toddlers can, “chair”?


As computers have mastered rarefied domains once thought to be uniquely human, they simultaneously have failed to master the ground-floor basics of the human experience—spatial orientation, object recognition, natural language, adaptive goal-setting—and in so doing, have shown us how impressive, computationally and otherwise, such minute-to-minute fundamentals truly are.

Though I must say that the Turing test - a five-minute typed exchange, in which the computer "passes" if 30% of judges can't tell whether they're conversing with a person or a computer - seems like an extremely low (and contrived) threshold for AI.  It's fun, but I don't think it captures what really makes us HUMAN, as Mr. Christian discusses to some extent in his article (and what makes it so unlikely that computers will surpass us in the most important cognitive areas).
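
Just to make that 30% threshold concrete, here is a toy sketch of the pass criterion - my own illustration, nothing from the article, and the function name and numbers are made up:

    # Toy illustration of the Turing test pass criterion described above.
    # Each judge has a five-minute typed chat, then records whether they
    # believed their hidden partner was human.  The machine "passes" if
    # at least 30% of judges were fooled.

    def passes_turing_test(judge_was_fooled, threshold=0.30):
        """judge_was_fooled: list of booleans, True if that judge was fooled."""
        return sum(judge_was_fooled) / len(judge_was_fooled) >= threshold

    # Example: 10 judges, 3 fooled - just enough to "pass."
    print(passes_turing_test([True, True, True] + [False] * 7))  # True

Which just underscores how low the bar is: fool three judges out of ten for five minutes, and you're officially "intelligent."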

Computing quickly, even very, very quickly, as Watson was able to do, is not the same as thinking.  Here is an excerpt from an essay that I think covers the distinction pretty well - I like the way the author repeatedly notes that since we don't really know what makes our brains work the way they do, it will be hard to make a computer that can do the same things.

How close does something like Watson bring us to the goal of creating true artificial intelligences? The longstanding benchmark for an AI to pass is the Turing test, meaning that the machine could not be distinguished as nonhuman from its replies.



Even those close to the Watson project dismiss the idea that the system represents a Turing-level intelligence. Eric Brown, for example, remarks that Watson might be indistinguishable from a human playing Jeopardy!, but it lacks any good capability for general conversation. Stephen Wolfram, the computer scientist behind Mathematica and Wolfram Alpha, argues that Watson can only answer questions with objectively knowable facts and that it cannot offer a judgment.

. . . However, it is also surely true that the human brain does not think simply by hatching and evaluating thousands of possible responses to every situation in the way that Watson does. Machine intelligences certainly do not need to work the way our brains do. But if the goal is to create an artificial intelligence that can match a human one, science will also need to be alert to efficient alternatives in our neurosystems that can help machines scale up.



As effective a general savant as Watson is in the context of Jeopardy!, it is still a computer optimized to do one thing: play that game. A machine with exactly the same approach that could be equally versatile “in the wild” would need to be much, much more powerful. That sort of brute force approach might work; it is, after all, a big part of how Deep Blue beat Garry Kasparov in their chess match. But it is probably a wildly inefficient way to build a machine with human-level cognition. Computing power may indeed be increasing exponentially, but expanding the capabilities of something like Watson toward that end might involve a processing problem that escalates even faster.


. . . piling on computational resources without any clear regard for what might be a biologically guided way of deploying them makes it preposterous to think that anyone will bother with such a project. And because we currently have only the faintest glimmers about how such higher cognitive abilities emerge from our brains, the day when we can translate those mechanisms into something suitable for AI seems remote, too.
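
To picture what "hatching and evaluating thousands of possible responses" looks like in practice, here is a toy sketch of a generate-and-score pipeline.  This is purely my own illustration - the names, scoring, and little "knowledge base" are all made up, and the real Watson is of course vastly more sophisticated:

    # Toy generate-and-score answering pipeline, loosely in the spirit of
    # the "hatch and evaluate" description in the essay above.  Everything
    # here is hypothetical and grossly simplified.

    def generate_candidates(question, knowledge_base):
        # Naively propose any stored fact that shares a word with the question.
        q_words = set(question.lower().split())
        return [fact for fact in knowledge_base
                if q_words & set(fact.lower().split())]

    def score(question, candidate):
        # Score by word overlap - pure association, no comprehension.
        q = set(question.lower().split())
        c = set(candidate.lower().split())
        return len(q & c) / len(q | c)

    def answer(question, knowledge_base):
        candidates = generate_candidates(question, knowledge_base)
        return max(candidates, key=lambda c: score(question, c), default=None)

    facts = ["grasshoppers are a kosher food",
             "grasshoppers eat grass and leaves"]
    print(answer("what do grasshoppers eat", facts))

The point is that nothing in a pipeline like this understands the question; it just ranks word associations, and when the overlap tips the wrong way, you get answers like "kosher."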
