Instagram CEO: More practical to label real content versus AI

https://mashable.com/article/instagram-ceo-label-real-content

22 Comments

  1. 8hotsteamydumplings

    Why don’t they just make a separate app for AI only and make Instagram only for real people?

  2. IncorrectAddress

    This will ring truer as time goes on, but really it’s a nothingburger: by the time someone needs to know whether something was made with AI assistance, it will be too late, because it will already be popular and accepted, and popularity > everything else.

  3. Acrobatic_Switches

    Don’t give a shit about practical. It’s not the consumer’s obligation to worry about the company’s problems. Real content is the baseline. AI, and the companies peddling this nonsense, have the responsibility to be transparent about the process.

    People are begging for a new platform away from the tech companies gargling Trump’s nuts.

  4. Stannis_Loyalist

    That’s supposed to be the government’s job: regulate these tech companies to differentiate between what is real and what is fake.

    Both China and the EU have already codified the ‘Right to Know’ into law. The U.S. is currently the only AI superpower choosing to leave its citizens unprotected in the name of ‘deregulation,’ effectively prioritizing corporate greed over social stability and safety.

  5. Every app is going to die because of AI slop apart from Instagram. Instagram will die because of the unbearable amount of ads.

  6. More practical for me to ditch another social media app. Fuck Meta. Corpo dipshits ruin everything they touch.

    “Cryptographically sign images at capture, creating a chain of custody” is a slippery slope (a sketch of what that scheme involves follows at the end of this comment). More Big Brother bullshit.

    Doesn’t anyone realize how this can be used to stifle dissent? This would ensure the government knows exactly who took every photo & video uploaded to these services. Maybe not even only uploaded ones, if he’s asking device manufacturers to get on board. You think the government can be trusted not to use this to identify & intimidate people when they are currently kidnapping people & making Antifa boogeyman lists right now?

    This is a terrible idea and is Adam’s excuse for not policing his own service for bots, because Adam has a financial incentive to allow bots on his service for engagement.

    I already deactivated FB, and my accounts on IG and Threads are looking like they will be next.
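    The "cryptographically sign images at capture" idea quoted above is a content-provenance scheme (C2PA is the best-known public example): the capture device signs the image so that later tampering can be detected. Purely as a hypothetical sketch of that mechanism, not Meta's actual implementation, assuming an Ed25519 per-device key and the third-party `cryptography` package (the function names are invented for illustration):

    ```python
    from hashlib import sha256

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
        Ed25519PublicKey,
    )


    def sign_at_capture(image_bytes: bytes, device_key: Ed25519PrivateKey) -> bytes:
        """Hash the captured image and sign the digest with the device's private key."""
        return device_key.sign(sha256(image_bytes).digest())


    def verify_provenance(image_bytes: bytes, signature: bytes,
                          device_pub: Ed25519PublicKey) -> bool:
        """Recompute the digest and check the signature; any edit to the bytes breaks it."""
        try:
            device_pub.verify(signature, sha256(image_bytes).digest())
            return True
        except InvalidSignature:
            return False


    if __name__ == "__main__":
        key = Ed25519PrivateKey.generate()   # stands in for a key provisioned in the camera
        photo = b"raw sensor data..."        # stands in for real image bytes
        sig = sign_at_capture(photo, key)

        print(verify_provenance(photo, sig, key.public_key()))             # True: untouched capture
        print(verify_provenance(photo + b"edit", sig, key.public_key()))   # False: bytes changed
    ```

    Note that in this naive sketch the verifier needs the device's public key, so signatures are linkable to a specific device; that linkability is exactly what the privacy objection above is reacting to.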

  7. Practical for who?

    You tech motherfuckers decided to unleash an extraordinarily powerful technology that can make people question what is real, and that steals from the real-world work of people in order to do it.

    And that technology was unleashed seemingly without any regulation or controls.

    So you motherfuckers want to do that without any effort on your part to control its use. Go fuck yourselves.

  8. i_am_13th_panic

    lol, how about they just enforce the use of the phone’s camera and label everything uploaded from outside the phone as AI or potentially AI? Oh that’s right, AI content still makes them money.

  9. Kinda like how the Wongs on Mars own everything so it’s easier to brand things they don’t own.

  10. Just scan for AI artifacts. Yeah, it will be expensive, but that’s not a problem when you are a billion-dollar company.

  11. AnythingNo6910

    You might think that, with all the hype surrounding AI and the life-altering consequences it will supposedly bless the world with, the same companies that create the AI tools would be able to develop something that can detect AI-generated content.

  12. Shoddy-Pie-5816

    Software-wise, if you can identify that something is OC, then you can logically deduce when something isn’t OC. But the article mentions fingerprinting OC, not labeling it. So the article is unclear, but it sounds like adding a marker to genuine OC will make it easier to label both content types (a toy sketch of that deduction follows below).
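    As a toy illustration of that deduction, assuming some positive test for genuine OC exists (for instance the signature check sketched under comment 6; `has_valid_marker` is a hypothetical stand-in), everything that fails the test lands in the other bucket by elimination:

    ```python
    from typing import Callable


    def label_content(item_bytes: bytes,
                      has_valid_marker: Callable[[bytes], bool]) -> str:
        """Two-bucket labeling: a positive test for original content implies
        the complementary label for everything else."""
        if has_valid_marker(item_bytes):
            return "verified original"
        # Not provably original: could be AI-generated, edited, or simply unmarked.
        return "unverified / possibly AI"
    ```

    The catch is that the negative bucket is noisy: unmarked human content ends up lumped in with AI output, which is why the label can only honestly say "unverified" rather than "AI".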

  13. Specialist_Heron_986

    It’s a race to see which will do more damage to creators and consumers: the plethora of AI-generated junk content flooding social and traditional media and communications, or the growing distrust in the authenticity of what we consume. Consumers assuming or accusing creators’ content of being AI-generated is growing in frequency.

  14. Facebook: “*AI is the future and we’re investing billions!*”…

    Facebook subsidiary: “*We’re struggling with and really need to get a handle on this AI as it’s increasingly out of control!*”

    Foh. 🤦🏻‍♂️

  15. Angry comment section here, but I kinda think this makes sense, in a sort of unfortunate way.

    If AI slop is going to proliferate endlessly, which it will, then marking human-generated or “organic” content is the more controllable and easier-to-implement option. It will still be challenging, but maybe less challenging than the alternative, and it also leads to an outcome where society puts a premium on human-generated content.

  16. this_my_sportsreddit

    seriously, can you imagine using a social media app that uses AI to generate content and engagement? Thank god I only use Reddit, which uses AI to generate content and engagement.

  17. This is a joke, right? There is no real way to separate AI from real people. There are tons of AI filters for lighting, skin smoothing, etc., so either you go purist no-AI or call it all AI, because within the next 2 years a computer won’t be able to tell the difference between an AI edit and a 100% AI generation.

  18. Labeling ‘real’ content is a massive pivot toward the ‘real-first’ authentication model we’re seeing for 2026. With 82% of users demanding transparency on deepfakes, this isn’t just about ethics—it’s about platform trust as an asset. The algorithm shift toward conversation depth and ‘sends per reach’ means authenticity is the new engagement moat. Practicality wins over perfection here.