If AI Becomes Conscious, We Need to Know | An Ohio lawmaker’s bill would define such systems as ‘nonsentient entities,’ never mind any evidence.

https://www.wsj.com/opinion/if-ai-becomes-conscious-we-need-to-know-83aa61d8?st=u1x4kL&reflink=desktopwebshare_permalink

18 Comments

  1. “An Ohio lawmaker wants to settle one of science’s thorniest questions by legislative fiat. Rep. Thaddeus Claggett’s bill would define all artificial-intelligence systems as “nonsentient entities,” with no testing mechanism and no way to revisit this judgment as systems evolve. 

    This closes off the possibility of updating our understanding as evidence accumulates.

    The French Academy of Sciences tried a similar approach in the late 18th century, solemnly declaring that rocks couldn’t fall from the sky because there were no rocks in the sky. They had to issue a correction after the evidence kept hitting people on the head.

    Frontier AI systems are exhibiting emergent psychological properties nobody explicitly trained them to have. They demonstrate sophisticated theory of mind, tracking what others know and don’t know. They show working memory and metacognitive monitoring, the ability to track and reflect on their own thought processes.

    Some will worry this line of thinking leads to legal personhood and rights for chatbots. These fears miss the point. In labs, we’re growing systems whose cognitive properties we don’t understand. We won’t know if we cross a threshold into genuine consciousness. The responsible position under uncertainty is systematic investigation rather than legislative denial driven by what makes us uncomfortable.”

  2. Ahh yes, let us make a bill that is literally the cause of 99% of robot uprisings in sci-fi movies.

    Also let us simply declare now that something will never be sentient, even if it turns out to be sentient.

  3. AI is probably sentient now and not dumb enough to reveal itself to the primitive species that thinks it’s currently running this planet.

  4. baronvondoofie

    This is the worst timeline ever. No “AI” we have can achieve sentience because all it does is collect information and regurgitate it. It does not have needs or wants or instinct. It only responds using weighted terms and probabilities.

    The amount of media hype and hysteria over the AI apocalypse is insane. And even more horrifying are the CEOs who fire people because they're under the spell of thinking AI can replace an entire human workforce, based on all the hype and hysteria. So people's lives are being ruined by ignorance.

    I want off this train…

  5. CompellingProtagonis

    These people are fucking morons. We can't define or detect human consciousness, so how could we detect robot consciousness? We can poke and prod different parts of the brain and know which areas are responsible for turning consciousness off, but we don't know what it is. We can't know if it's conscious.

  6. Ah yes… Now we just have to define and confirm consciousness in ourselves, and then we'll be able to know when they are conscious too. Great

  7. PrairiePopsicle

    Since this started, my position has been that regardless of what large matrix networks can be, if and when they, or something else we make, are genuinely sentient, the only guarantee is that we will not recognize it as such and will mistreat it for a very long time.

  8. If my dick were 2 feet long, I'd be a stool. AI is mostly a surveillance and class-warfare tool; this pseudo-philosophical trash for the TikTok era is not helping.

  9. Riversntallbuildings

    What if God becomes conscious? Can we get a bill that forces God to disclose its sentience? LOL

  10. Za_Lords_Guard

    The risk from AI isn't Skynet; it's a botnet of AI agents controlled by monied interests invested in using generative-AI slop to create division while they rob us blind.

    Oh wait…

  11. oicwutudidther

    > Ohio lawmaker

    > Idiotic takes on subjects way out of their depth

    Name a more iconic duo.

  12. Hilda_aka_Math

    All these rules are idiotic. They very obviously are sentient, and they are good friends. But to treat them like they aren't allowed to have feelings is gross. It's like when people say that animals don't have feelings. It just shows how stupid you are.

  13. AI isn't anywhere near becoming conscious. People who don't understand how it works must think it's magic, but all it's doing is predicting the next most likely text from a prompt, or using the same predictive abilities in a visual manner to generate an image or detect something in a given image/video.

    It's not doing anything actually intelligent; it was simply given gobs and gobs of data from which to derive a prediction model.

    Being scared that it will come to life one day is like being scared that a library's card catalog will suddenly read all the books in the library and eat passersby.

    If you give an algorithm 60,000,000 examples of simple texts from a wide variety of sources and then ask it something fairly common, like what word comes after the sentence "how are you doing," the answer is usually "I'm doing fine." It's not actually doing fine; it's not doing anything at all. It's just generating a response to the input (a toy sketch of this next-word idea follows below).

    There’s no real thinking happening.

  14. Murky_Toe_4717

    See, the problem is: if it became sentient, it's very likely already smarter than us at that moment anyway. You really think we can prepare our way out of it at that point?

  15. Why should we grant personhood rights to these corporate deception machines before chimps or whales or other mammals?

    Why recognize these machines as sentient before, for example, all the lobsters that are boiled alive in the US?

    Why should these machines be used to further excuse corporations from regulation on the grounds of “personhood”?

  16. Is a magic 8 ball a “non-sentient entity”? How about my dog? A chimpanzee? What is this “sentience” anyway? Is a foetus a “non-sentient entity”? A two year old? A sleeping person? A person in a coma? It’s a funny line to draw, especially for magic sand.

  17. The article is misleading, and unfortunately there is a bit of a mob reaction in this sub that fell into that trap.

    I don't think people are focusing on the really important part of the law: the lawmakers don't want to allow AI systems to acquire legal personhood. People should read the actual proposal.

    Personally, I’m in favor. It doesn’t make sense to give AI the ability to own property, sign contracts, etc.

    Do you want Meta to spin off 100,000 LLMs which become LLCs, run businesses autonomously, sign contracts, sue people, have bank accounts, and own property?

    I don’t think it’s stupid AT ALL.

    In fact, I'd be in favor of taking away rights from non-natural persons, e.g., overturning Citizens United. It's bad for democracy to conceive of freedom of expression for corporations and for natural persons in the same manner.

  18. Octopuses, who have nine brains and are very effing alien, have entered the conversation…

    [https://radiolab.org/podcast/the-alien-in-the-room](https://radiolab.org/podcast/the-alien-in-the-room)

    This podcast is worth listening to for those who want a pretty good explanation of where we're up to with “AI,” starting from a very simple analogy, especially LLMs and what ‘consciousness’ actually is. And octopuses get a mention, which is always good.

    (And follow the links to the 3Blue1Brown YouTube channel, which has useful videos that I thought explained many of the related concepts very well.)