Feb 18, 2011

What happened to AI?

I don't know when it happened, but it feels like at some point AI researchers gave up. They gave up on the idea, the goal, of Artificial Intelligence: breaking past that barrier between computation and something more -- intelligence, consciousness, self-awareness.

They stopped being AI researchers, and just worked on computers.

Computers like Watson.


I haven't followed IBM's Jeopardy!-playing computer very closely, but every time I do, I find I'm disappointed. The interesting part is how the computer has been tooled to deal with language -- to respond to it, to handle the vagueness and indeterminacy of human speech. Yet this is more applied linguistics than an attempt at AI, and the Watson people seem devoted to building better computers, not to advancing the exploration of consciousness.

Reports are, for example, that the workings of Watson will be kept as trade secrets, so no one else will be able to investigate what worked and what didn't, build on IBM's successes, or study the limitations. The Watson web site talks about possible industrial uses for the computer, but nothing more than that.

Watson's game show victory doesn't seem to mean anything more than that either -- IBM built a pretty advanced computer, though one that still seems basically to do what Google does. It's an advance, but an advance in computing machines, which seems to be what they wanted.

There's still this aura of AI, but when you look into it, none of the substance is there. None of those questions is actually being asked. Below the headline level of this story, it's all about computers, not about minds.

Part of the idea behind Watson -- or at least behind one of the people behind Watson, who earlier worked on Brutus -- was to build computers that were functionally indistinguishable from humans. The idea, as I understood it, was that AI had spent too much time looking for a substance of consciousness and not enough time working on the behavior. After all, I can't even know with (philosophical) certainty whether anybody besides me is really conscious, so why would I hold computers to a higher standard than the one I hold humans to? We assume, charitably, that other humans have minds because they act like they do -- insert the "like a duck" here -- and so, the AI people said, we should try to build AI that acts like it's conscious, and not worry about whether it really is, whatever that would mean.

I took the whole idea as interestingly phenomenological.

As far as I can tell, though, that idea was abandoned somewhere along the way, and Watson represents not the next step in AI, but computation, computation, computation.

The more computers advance, basically, the further away they seem from anything we would want to call intelligence. Google seems further away from being self-aware than Deep Blue did. Watson's machine-ness seems more apparent than the car-building robots'. Maybe "The Cloud" or "The Internet" could make that evolutionary leap into thinking, but that seems more like sci-fi fantasy than practical possibility.

The question is: have we given up on AI just in practice, or just lost interest, or have we also given up on AI as possible in principle? If we don't think it's possible anymore, why not?

Maybe we do think it's possible, at least in theory, but just got tired of it?