8/2/24

Tower of Babel, Robots, destruction --ar 16:9 --sref 2214001679

Note: This PoV was largely shoplifted from a discussion with David Bieber, who laid most of this argument out to me.

I believe that AGI is on the horizon behind us, having been achieved sometime in ‘21/’22 with GPT-2 or GPT-3 level LLMs. This puts me largely out of step with the current belief that AGI is still some ways away and is either a Holy Grail or a harbinger of P(Doom) = 1. A large part of this disagreement comes from a lack of consensus on the definition of AGI. My own definition of AGI is pretty simple - I’ll just decompose the parts required to make Artificial General Intelligence.

Artificial

General

Intelligence

Why does it matter?

Is arguing about AGI a little pedantic? Probably - but I think it’s part of a broader malaise in AI/ML these days, whereby we cannot agree on the inputs, outputs, or markers of things like AGI, or consciousness, or alignment, etc. Without consensus on what these look like, it’s hard to agree on what they will do, or how we should respond. This leads us down a path littered with hyperbolic expectations of what AGI will look like when it arrives. By recognizing AGI as already achieved (without any major societal upheaval - or value creation, for that matter), I hope we can defuse the discourse around these things.

If we are to achieve something like ASI (Artificial Super Intelligence), the path has to go through AGI first. The parable of the Tower of Babel comes to mind - without a common language or vocabulary, it becomes impossible for us to coordinate large projects.

I don’t consider myself a Safety-ist; I find it a very paternalistic philosophy that a handful of wealthy bros in NY and SF should set the timetable for how the rest of the world enjoys synthetic intelligence. Proponents are closeted Puritans who want to use the specter of ASI as an opportunity to anoint themselves as global thought police. I don’t consider myself an Accelerationist either - it’s a fundamentally nihilistic and juvenile worldview that good or bad outcomes are equally (un)interesting and (un)likely, so we should fast-forward to the end and get the upheaval over with. They are Jihadis who believe in a moral crusade to purge the unbelievers prior to the arrival of ASI.