Taco carts have caused more deaths than AI. Regulations have historically been reactive, not proactive.
MetaKnowing
“When people ask me why I lose sleep over artificial intelligence, I don’t talk about killer robots. My fear is more prosaic: that we will hand over so many decisions to opaque algorithms that we end up no longer controlling our future.
This “[gradual disempowerment](https://gradual-disempowerment.ai/)” is the default path ahead, if we go on treating AI as less risky than a taco cart.
The “taco cart” comparison is not a joke: In New York you need a license, a food safety course, and a Department of Health inspection before you can sell a plate of tacos on the sidewalk. Yet any company with enough money and talent can train a powerful AI model capable of drafting legislation, writing malware or optimizing content for addictiveness — without even writing a safety plan.
The regulatory asymmetry would be comical if it were not so dangerous.
The truth is, as even the CEOs of AI labs admit, [we are still a long way from understanding how AI systems work](https://www.darioamodei.com/post/the-urgency-of-interpretability) and how to make them safe.
To their credit, the heads of the leading AI companies acknowledge that their technology could create risks to public safety, including future [existential risks](https://safe.ai/work/statement-on-ai-risk).
With no common standard or baseline legal requirements, AI companies face perverse incentives to rush products out the door with minimal safety checks.”
Pentanubis
Because tacos have a far greater probability of killing you in a miserable fashion. This is not hyperbole.
Midnight_Whispering
> The truth is, as even the CEOs of AI labs admit, we are still a long way from understanding how AI systems work and how to make them safe.
So he wants idiot politicians to regulate something they don’t understand.
rqx82
Regulations are generally written in blood, so it’ll be a while (and too late to make a difference) before we see regulation on AI, especially while it’s printing money for the people who buy Congress.
SignificantRain1542
Why can’t AI just be a military thing? I don’t care if the military has hover tech. I don’t care if the military has mutant hybrid people. I would care if they brought it to the consumer market, though. If you don’t understand something, you need further R&D, not release to the masses. If the private money dried up and you have nothing to show for it? Guess it wasn’t a successful business. It’s happened before and it will happen again. That’s capitalism. They just realize that this is the one shot they have at making AI ubiquitous at every level with zero pushback and oversight, if they pay the right price to the king. That’s not capitalism.
aplundell
I should **hope** so. Food is one of the most fundamental requirements of human life.
I know some people read this and think, *“But profit motives are causing AI companies to do bad things.”*
Ok, sure. You think the tacos are being given away for free? What do you think is stopping the taco people from doing bad things for profit?
MattGraverSAIC
Yes, food can kill. AI is a nascent industry; when AI is as old as taco technology, I’m sure there will be as many deaths, if not more.
haarschmuck
Improperly handled food can and does kill people. AI doesn’t.
This is such a dumb article.
UnifiedQuantumField
So I just did a quick count, and 21 of the posts on /r/Futurology are about AI.
Can we please have a filter for AI posts?
This is getting ridiculous.
Ben_Thar
Just wait. AI is going to kill off the humans with bad tacos.