no insight

Kelsey Piper, writing at theargumentmag.com:
Everybody else continues to reject the premises, but they do so at their own peril. Over and over again, the crazy people have insisted that they are building AIs that can do everything humans can do. They have been slightly too aggressive in their expectations for AI development, but while you're busy dunking on them, did you even notice that AIs have improved far faster than anyone else predicted?

I think "assume things will proceed a little slower than the most aggressive estimates show — but only a little slower" was a better strategy for predicting 2025 than any other. It's certainly vastly, vastly better than "assume all this AI nonsense is about as good as it's going to get."

I also hope things slow down. But the stochastic parrot crowd is just plain wrong. For example, Luciano Floridi and Massimo Chiriatti wrote in 2020 that:

GPT-3 does not do what it is not supposed to do, and that any interpretation of GPT-3 as the beginning of the emergence of a general form of artificial intelligence is merely uninformed science fiction.

This piece reflects a kind of confidence that I hope we will see less of.

Anyone who makes confident predictions about the future of AI progress is, in my opinion, overreaching. But "exponential growth continues" is not an outcome you can rule out, while "AI systems are useless and economically irrelevant" is now empirically wrong (and suggests the speaker doesn't know the current state of capabilities).

Of course, the jury is still out on whether "AI is evil"; that can certainly still turn out to be true. But as Piper writes, "I don't blame anyone for hating AI, but I will say this: It makes you really bad at predicting it."