Even those who advocate for and build AI accept there is a decent risk – in the double-digit percentages – that it will be a catastrophe. Yet they go ahead.

https://www.thetimes.com/culture/books/article/anyone-builds-everyone-dies-case-against-superintelligent-ai-eliezer-yudkowsky-nate-soares-review-9hclcfwch



  1. Submission statement: Just as bacteria don’t understand the mechanism of penicillin, so we shouldn’t expect to understand the cause of our extermination by a vastly superior artificial intelligence. But we should fear it.

    What you have to understand is:

    1. the best-resourced companies in human history are trying to create a true artificial intelligence – intelligent in the way we are intelligent, but a lot more so;
    2. if they succeed, that intelligence will want unexpected things.

  2. Not sure how we stop today’s megalomaniac billionaires. It’s quite scary how appropriate Alien Earth feels – these tech giants can do anything and they are beyond reproach. How many years into the future will our planet be run by *Corporations*?

    AI is just another way to prevent working class individuals from joining the elite. It’s going to be very hard for Joe Public to become a billionaire in the future – how do you take on the might of the tech giants, who would squash you like a bug?

  3. One of the authors was so freaked out by the Basilisk that he banned discussion of it on his forum. That’s the level of rationality behind this book.

    You know the Conservative argument that because one mentally ill person stabbed a stranger, we should round up all mentally ill and homeless people and subject them to involuntary lethal injection? This is the equivalent of that argument, but for AI.

  4. AI development is no secret held by one person or one group of people. If you stop, someone else is just going to continue. Even if we could get the entire western world on the same page, other countries would just carry on. As long as AI remains useful, or even holds the promise of being useful, people will continue to develop it.

    I really don’t get what these fear-mongering posts and articles are trying to accomplish. It’s not like they’re suggesting realistic solutions and regulations that might mitigate the dangers of AI.

  5. Because unfortunately there is near certainty that humanity will nuke itself, given enough time.