
16 Comments

  1. “The company said it would now address those risks through its terms of service, restricting the use of its AI models in political campaigns and lobbying, and monitoring how people are using the models once they are released for signs of violations.

    OpenAI also said it would consider releasing AI models that it judged to be “high risk” as long as it has taken appropriate steps to reduce those dangers—and would even consider releasing a model that presented what it called “critical risk” if a rival AI lab had already released a similar model. Previously, OpenAI had said it would not release any AI model that presented more than a “medium risk.””

  2. Spara-Extreme

    Going for the lowest common denominator is not the sign of a healthy company making tons of money. Feeling the heat from Google, perhaps?

  3. In a couple of years, they’ll rename themselves to Cyberdyne Systems, before rebranding ChatGPT to Skynet….

  4. Nickopotomus

    That’s funny, because I just watched a video where people were adding signals to music and ebooks that humans cannot perceive but that totally trash the content as training material. Kind of like an AI equivalent of watermarks…

  5. And this is where you probably need to make the company liable for user misuse if they don’t want to actually implement safeguards. They can argue all they want that these people signed the usage agreement, but let’s be real: most people don’t actually read the ToS for the stuff they use. And even if they did, it’s like saying “I made this nuke, anyone can play with it, but you agree to never actually detonate it, because this piece of paper says you promised.”

  6. TheoremaEgregium

    More like they know it’s there, it’s inevitable, and they can either ignore it or not release at all.

  7. Tungstenfenix

    Add to this the other post that was made here yesterday about disinformation campaigns targeting AI chat bots.

    I didn’t use them a whole lot before but now I’ll be using them even less.

  8. artificial_ben

    I wouldn’t be surprised if this also ties into the fact that OpenAI removed the restrictions on military uses of its technology a few months back. Many agencies would love to use OpenAI technology for mass disinformation campaigns and it would be worth a lot of money.

  9. dontneedaknow

    Sam and Thiel sharing a bunker in New Zealand for their upcoming apocalypse is such a can of worms…

    Hiding in a bunker with the geologic hazards in New Zealand is just egregious.

    For someone who presumes his own status of Übermensch… this does not live up to the hype, Peter…

  10. HeavyRightFoot89

    Are we acting like they ever cared? The AI revolution has been well underway, and manipulation and disinformation have been the backbone of it.

  11. >OpenAI said it will stop assessing its AI models prior to releasing them for the risk that they could persuade or manipulate people, possibly helping to swing elections or create highly effective propaganda campaigns.

    >The company said it would now address those risks through its terms of service, restricting the use of its AI models in political campaigns and lobbying, and monitoring how people are using the models once they are released for signs of violations.

    Most of you are misinterpreting the headline. It’s not about AI getting tricked; it’s about not caring whether the AI is weaponized to influence people. Well, they are ‘caring’ by forbidding it in the ToS… but I figure a good chunk of their revenue probably comes from people running various campaigns, whether ‘legit’ marketing or political ones, so they probably won’t want to lose that money just yet.

  12. Right on time, after news broke that Russia is corrupting Western AI systems by flooding pro-Russian propaganda into the training datasets.

    Putin and agent Krasnov must be pleased.

  13. The problem is that if you want to get revenue from the product and not from ads, you need the product to be accurate and helpful. Enshittified ChatGPT is useless unless the goal is just to create revenue from user data.

  14. DarkRedDiscomfort

    That’s a stupid thing to ask of them unless you’d like OpenAI to determine what is “disinformation”.