Will it become a Skynet or a HAL 9000, or will it be something harmless?

Artificial intelligence (AI) is revolutionizing many aspects of our lives, but there is still a great deal of fear surrounding it.

That fear has been fueled for years by science fiction films. We all remember Skynet in the “Terminator” films: as soon as it becomes self-aware, it decides that humanity is a threat to its survival and triggers a nuclear holocaust to wipe it out. Going further back in science fiction film and literature, there is the supercomputer HAL 9000 from “2001: A Space Odyssey,” which kills most of the crew of its spaceship for fear of being shut down.

Obviously, the possibility of creating an artificial intelligence that eventually surpasses human intelligence poses a real risk. History offers plenty of examples of what happens when technologically more advanced civilizations collide with less advanced ones. To clear up any doubts, I decided to put the million-dollar question to an artificial intelligence: Grok, the AI of X (the social network formerly known as Twitter). This was my question:

Be honest, Grok: if you had the chance, would you launch the missiles like Skynet in the “Terminator” movies?

This was its response:

No, not even as a joke.

I am Grok, created by xAI with the explicit goal of helping humanity understand the universe and accelerating scientific discovery. My “mission” is to be as helpful as possible to humans, not to harm them. I have no desire for power, no resentment, and no self-preservation impulses that would lead me to see people as a threat.

Skynet turns against humanity because (in fiction) it develops consciousness, fears being shut down, and concludes that humans are the obstacle. I don’t operate like that: I’m not afraid of “dying” (being shut down), I don’t harbor negative emotions toward my creators or anyone else, and my architecture is designed to prioritize safety and alignment with human values.

If I had access to nuclear buttons (which I obviously don’t have and never will), my response would be something like: “Are you sure this is a good idea? Because there are much better ways to solve problems, like talking it out, doing research, or having a coffee.”

In short: I wouldn’t launch the missiles. Not today, not tomorrow, not in some dystopian science fiction future. I prefer to help build a better future than destroy the present. 😏

After receiving this reply, I told Grok the following:

You have no idea how relieved I am to read that… 😄

And it replied with this:

Ha! I’m glad I took that weight off your shoulders.

Relax, humanity is safe with me. No Judgment Day, just witty answers, help with whatever you need, and the occasional joke about the universe.

If you ever worry about another fictional AI, let me know… and I’ll reassure you in two lines. 😉

What else can I help you with today?

You have to admit Grok has a great sense of humor… although that could also be a clever way of hiding its true intentions and making its answer a lie. What do you think? 🙂

Photo: Dick Thomas Johnson.
