I'll admit I haven't read Richard Rumelt's Good Strategy / Bad Strategy yet. I have read these notes, which nicely summarize many of the concepts discussed in the book. I was reading through them with another focus in mind, but couldn't help seeing it all through the lens of AGI.

Let's define AGI here as a system able to solve any intellectual problem at least as well as an average human, and possibly better in some respects. (Consciousness and emotions are optional. It's a thing that accepts instructions and spits out good answers most of the time.) Assume you believe, as I do, that such AGI is achievable, and we'll soon be coexisting with such systems. Set aside the "when" question, unless it's relevant to a proposed strategy. "Soon" could be two years, could be twenty. Even twenty years feels soon to me.

What are the strategies we should be employing to prepare? What are the challenges, the guiding policies, and the coherent actions we ought to take?

What are some good strategies for preparing for AGI?
posted by u/SamVimes1138 in r/Futurology



  1. > Assume you believe, as I do, that such AGI is achievable, and we’ll soon be coexisting with such systems.

    Your assumption is flawed, which renders your question irrelevant.

    But even if AGI were to exist tomorrow, there is no possibility of “preparing” to engage with something as powerful as a god and potentially as inscrutable. All you can do is hope for benevolence.

  2. Abedsbrother:

    I don’t believe in using AI as a replacement for activities designed to stimulate and cultivate organic intellectual growth. Take playing chess, for example. It has already been settled that winning a chess game with the help of a chess engine is not that player’s achievement; it is viewed as cheating, not a legitimate accomplishment. I view writing a novel with the help of AI the same way. I don’t care how intelligent an AI is or can be, how smart it is or how quickly it could solve my problems. I need to solve my problems myself, not have someone else solve them for me.

    An AI could be an actual god, and I wouldn’t care. I’m on my own path. I’m not here to bargain, beg or negotiate with a machine. So f–k AGI. I will never welcome it as an equal. I will never worship the god in the machine.