AI-generated content should be clearly labelled to help people spot fakes, committee says

https://www.theglobeandmail.com/politics/article-ai-generated-content-should-be-clearly-labelled-to-help-people-spot/

2 Comments

  1. MenudoMenudo on

    What’s depressing here is that no matter how much you agree with this idea, it won’t actually work. Suppose they pass a law making it mandatory that all AI content be tagged as such. Are they really going to go after your grandma when she reposts a meme on Facebook? How would they determine whether someone had violated the law? Are they going to automatically audit every post? Is it something that people need to report? And report to whom? Are we going to set up an agency that takes these reports, reviews the material, and determines whether or not it’s AI?

    A law is only as useful as its enforcement, and while I agree that AI content is a concern, I don’t think we should be spending hundreds of millions of dollars trying to police it. I would support this if they made it apply specifically to mainstream media. I would want to know if a movie, TV show, or song on the radio is AI generated, but even then, it’s going to get increasingly complicated and hard to enforce.

  2. Now I’m not remotely saying we don’t have an issue with AI fakes.

    But right now, proving whether something is AI-generated is difficult.

    There are no reliable programs out there to detect AI content; the existing detectors are notoriously unreliable, with false positive rates of anywhere up to 40%.
    [Human recognition of AI is basically a coin toss](https://cacm.acm.org/research/as-good-as-a-coin-toss-human-detection-of-ai-generated-content/)

    That is going to be the crux of anything trying to regulate it. Again, I’m not saying we shouldn’t do anything, but it’s not going to be nearly as simple as a watermark or an app you can use.