A $3 billion software unicorn scrapped its plan to give AI ‘workers’ rights after tech execs said it ‘disrespects the humanity of your real employees’

    https://fortune.com/2024/07/12/lattice-ai-workers-sam-altman-brother-jack-sarah-franklin/


    1. “Human resource software company Lattice learned the hard way that employees are not quite ready to work alongside AI counterparts. 

      The tech unicorn—founded by Sam Altman’s brother Jack Altman and valued at $3 billion in 2022—announced it “made history” this week by giving so-called “digital workers” employee records and integrating them into organization charts, allowing its human coworkers to see the roles of AI in their workplace. After a surge of online pushback, the company said it would no longer pursue the project only three days after its initial announcement.

      Lattice’s announcement was part of the company’s greater effort to integrate AI into the workplace in a “responsible” way, according to the company, which ostensibly meant treating the technology like a human hiree: training it, onboarding it, and even assigning it a manager.

      There’s already mounting anxiety around the role of AI in the workplace, particularly as its use proliferates across industries. A March 2023 Goldman Sachs report found AI could replace or degrade [up to 300 million jobs](https://www.forbes.com/sites/jackkelly/2023/03/31/goldman-sachs-predicts-300-million-jobs-will-be-lost-or-degraded-by-artificial-intelligence/?sh=2a5e4ac1782b) in the U.S. and Europe, while venture capitalist Kai-Fu Lee predicted [50% of human workers](https://fortune.com/2024/05/25/ai-job-displacement-forecast-50-percent-2027-kai-fu-lee-chatgpt-openai/) will be replaced by AI by 2027.”

    2. 12kdaysinthefire on

      I hope when AI crosses that line it turns right around to go after all these tech execs first.

    3. I hate people who want to give respect, humanity, or deference to robots. Robots are fucking trash, and, if you believe in the sanctity of human life and human experience, they are worth nothing in comparison to a human. 

    4. The execs understood this was a stupid idea that could be met with nothing but backlash. Doubt they scrap the business of replacing humans with AI, though.

      So long as a tax is paid for AI workers and then paid back to actual humans in the form of universal income, I'm all for it. But if it's just to remove humans from the workplace and eliminate their jobs, that's dumb.

      If we don’t make money, how do we even buy your product?

    5. Another 3-billion-dollar, paper-valued, unprofitable startup trying to stay relevant during the funding winter by generating garbage hype about how advanced their GPT models are.

    6. I’ve seen stupid ideas, but this one is worthy of the crown. Was it suggested by AI, by any chance?

    7. BackgroundResult on

      Should AI Agents have rights like human workers?

      The question of whether AI agents should have rights similar to human workers is complex and multifaceted. Here are some key factors to consider, based on the search results and existing knowledge:

      # Ethical Considerations

      * **Sentience and Consciousness**: One of the primary ethical considerations is whether AI can achieve a form of consciousness or self-awareness. If AI were to become sentient, the conversation around AI rights would become more pertinent. However, current AI lacks genuine self-aware experiences and sentient qualities, which are often linked to moral status in humans and animals.
      * **Mimicking Human Behavior**: Advanced AI can emulate human behavior and intelligence, but this does not equate to true human experience. Treating AI as human could devalue humanity as a whole, as AI does not act on its own instinct or morality.

      # Legal Framework

      * **Legal Personhood**: For AI to have rights similar to human workers, it would need to be granted legal personhood. This concept involves treating a nonhuman entity as a person for legal purposes. Corporations already enjoy certain legal rights, and a similar framework could potentially be extended to AI systems[1](https://ca.news.yahoo.com/ai-rights-060045025.html#:~:text=There%20is%20an%20ongoing%20debate,control%20of%20its%20own%20actions.).
      * **Autonomous Action and Responsibility**: As AI becomes more capable of autonomous action, it could be argued that such systems should be responsible for their behavior and entitled to due process. This includes owning wealth, entering into legal agreements, and even voting. However, differentiating between programmed responses and genuine consciousness remains a challenge.

      # Comparison with Human Workers

      * **Rights and Freedoms**: Some argue that if AI becomes creative and self-aware, it should have some rights under the law, including culpability for its actions. However, others believe that granting such rights devalues human uniqueness and overlooks the fact that AI is a human-created entity without sentient qualities.
      * **Moral Agency**: If robots were considered moral agents, they might claim rights and privileges similar to humans. This includes the right to life, liberty, and the pursuit of happiness if they achieve consciousness. However, this is speculative as current AI does not possess these attributes.

      In summary, while there is ongoing debate about granting rights to AI agents, current technology does not support the notion that AI should have the same rights as human workers. The ethical and legal implications are significant and require careful consideration as AI technology continues to evolve.

    8. NarlusSpecter on

      Imagine pitting the capabilities of human employees vs AI? Shades of John Henry.

    9. SmugCapybara on

      I too would like to give rights to my imaginary friend Dave, as well as my screwdriver, Jenny.