
As the "Are We the Baddies?" meme suggests: if you're a country's military, in a democracy, that wants to carry out mass civilian surveillance and use killer robots, maybe you're the one with the problem. Anthropic can be as principled as they like; there are plenty who'll be happy to help. Peter Thiel's Palantir is eager and enthusiastic about implementing this agenda.
It's depressing that none of the other Big Tech firms have any scruples about this.
Pentagon threatens to cut off Anthropic in AI safeguards dispute
The US military is threatening to cut ties with AI firm Anthropic over the company's refusal to allow its AI to be used for mass civilian surveillance and fully AI-controlled weapons.
by u/lughnasadh in Futurology
The United States' argument is that everything they're doing is "all legal," as a way to convince Anthropic to let them use their AI without guardrails. It's legal because you can basically throw anything under the Patriot Act at this point.
They can still use robots to do the mass killing of Americans. The Pentagon just needs to find some chap willing to pull triggers.
When the American government tries to prosecute senators for daring to publicly remind soldiers that their loyalty is to the Constitution and their duty is to disobey illegal orders, you have to believe Anthropic is right to refuse to cooperate with them.
This word "democracy" is, I think, overused and inappropriately used. The United States, for example, is not a democracy but a constitutional republic. Many people call it a democracy, but it is not one.
If you are discussing a country's military, in a democracy, you will notice it is subject to a focus on national security that treats civilians as a threat. For example, China is a democracy, and mass civilian surveillance there is an absolute fact of life. It is not that they want to conduct mass civilian surveillance; it is that they will conduct mass civilian surveillance. "Big Tech" firms tend to be owned by other firms, which are owned by other firms, and in reality the mass surveillance people worry about is readily and openly conducted market research, agreed to in the EULA or EUSA. By using the product, the consumer agrees to be monitored, and also agrees, even if they don't want to, that the data can be sold to other companies, even if those companies are shells owned by alternative organizations such as an intelligence apparatus.
OpenAI would be all too happy to swoop in and let the government do as they please
So what I'm hearing is that if I need an off-again-on-again AI for work, and Mistral isn't cutting it, Claude is IN.
We are heading towards a dystopia, if we're not already there.
This sounds like the Captain America 2 plot: a gun loaded and aimed at everyone, ready to fire once they're labeled "traitors."
Well that just makes me want to use Anthropic more.