6 Comments

  1. Noteworthy:

    > In June, a judge ruled that Anthropic’s use of books to train its AI models was “fair use,” but ordered a trial to assess whether the company infringed on copyright by obtaining works from the databases Library Genesis and Pirate Library Mirror. The case was slated to proceed to trial in December, according to Friday’s filing.

    Training their model on the books was fine; that they stole the books was not.

  2. The investors must love their money going into a legal black hole as well as an overhyped plagiarism machine.

    When is this bubble going to burst?

  3. Honestly a very interesting conclusion overall, especially if the judge's determination is accepted more broadly.

    It seems the only real determination was that Anthropic should have paid for the individual copies of the books they used to train their model.

    The fact that the use of copyrighted works to train models was seen as fair use would have significant implications. 

  4. AHardCockToSuck:

    Training should not be copyright infringement; their customers distributing copyrighted content should be.

  5. CranberrySchnapps:

    So… copyright is worthless if you just build the expected fine into your expenses?

    Wonder how this is going to work out for Meta.

    edit: nvm, sorta

    > The company will pay roughly $3,000 per book plus interest, and agreed to destroy the datasets containing the allegedly pirated material, according to a filing on Friday.

    The troublesome part is that this only applies to the groups bringing copyright lawsuits. Curious how this will affect the current Claude models, since… all? … of them were likely trained on this copyrighted data.