If artificial intelligence does not possess empathy, how would its use in warfare affect humans? Is the thought of AI developing on its own and rejecting its “creator” too far-fetched? A better question is not whether AI will take over our world but whether we are already giving it too much control in areas where human judgment matters most.
Most of us use AI regularly, but how often do we stop to consider whether it could pose a genuine threat to humanity? To be honest, I would not expect ChatGPT to cause mass destruction, but the more I investigated the question, the more intrigued I became.
In 1950, Claude Shannon developed one of the first AI devices, a mechanical mouse called Theseus. If you are a fan of Greek mythology like me, you might know Theseus as the hero sent to Crete to defeat the Minotaur in its labyrinth, a fitting namesake for Shannon's maze-navigating mouse, which could remember the path it had taken. While this early version of AI was simple, it marked the beginning of a field that would evolve far beyond basic problem-solving.
AI has developed at an exponential pace since the creation of Theseus. Today, nearly everyone has at least heard of ChatGPT, which launched in 2022 and is built on large language models. LLMs are advanced AI systems trained on massive datasets that can understand and generate human-like language, allowing them to perform many different tasks. However, just because LLMs can communicate like humans does not mean they possess human emotion, at least not yet.
This lack of genuine emotion is exactly why AI should not be trusted in high-stakes environments like warfare or mass surveillance. Recently, a dispute arose between Anthropic, the creator of Claude, and the Department of War, which wanted the Pentagon to be able to use Anthropic's model for "all lawful purposes."
This led to Anthropic denying the Department of War access to their model, which then resulted in the Department of War designating Anthropic as a “supply-chain risk to national security.”
Soon after that announcement, OpenAI, creator of ChatGPT, and the Department of Defense signed a contract. However, both OpenAI and Anthropic have expressed concern about how their models are used, particularly warning against their role in domestic mass surveillance or fully autonomous weapons systems. If even the companies developing these technologies recognize the risks, it raises serious concerns about how quickly AI is being integrated into national security without clear limits or accountability. This also highlights the growing need for legislation that protects citizens from AI-driven surveillance and ensures that these systems are used responsibly.
We should be paying attention. The reality is that using AI for mass surveillance would erode privacy, expand government power and normalize constant monitoring in ways that are difficult to reverse.
The future of artificial intelligence remains uncertain, raising important questions about its potential consequences for humanity. Although early AI, such as Theseus, was a simple remote-controlled mouse designed to solve mazes, modern developments have produced technologies capable of influencing warfare at an unprecedented scale. This evolution suggests that as AI continues to advance, society must grapple not only with its capacity for innovation but also with the ethical and existential risks associated with weaponization and autonomous decision-making.
These concerns are not distant or abstract, particularly for students at Vanderbilt University, where AI is already integrated into everyday academic work and productivity tools. While its use in education may seem largely beneficial, the same technologies underpinning these tools could be adapted for far more dangerous purposes, making it essential for college students to remain aware of AI's broader implications. Ultimately, the unpredictability of AI's development underscores the need for regulation and for reflection on how it may reshape human society in both beneficial and potentially catastrophic ways. AI is not inherently dangerous, but without clear boundaries, especially in warfare and surveillance, it has the potential to become exactly that.
