Fair Use or Foul Play? Navigating the AI Copyright Mess
If you steal a song, you’re a pirate. If you steal millions of books, you’re an industry leader. At least for now.
Last week, a federal court handed down a major ruling in the wave of AI copyright lawsuits sweeping the U.S. Judge William Alsup of the Northern District of California ruled in Bartz v. Anthropic that training large language models on copyrighted books is protected under fair use. However, pirating those books is not.
That distinction has been widely hailed in headlines as a win for AI companies. But let’s not forget: Anthropic is still headed to trial for unlawfully acquiring millions of books through known pirate libraries. They reportedly downloaded more than 7.5 million pirated books from sources like Books3, LibGen, and Pirate Library Mirror.
The judge did not pull punches. He called the downloads theft, saying “…Anthropic had many places from which it could have purchased books but it preferred to steal them…” He even quoted internal emails showing that Anthropic leadership referred to buying books as a “legal/practice/business slog” that should be avoided.
It seems from the judge’s comments that piracy was not a careless mistake; it was a business decision.
The ruling makes it clear that how training data is acquired matters, a point that resonates deeply with authors and creators. But at the same time, the court essentially said: as long as you acquire content legally, you can train on it. That sounds simple, until you consider that much of the industry’s training data may not have been obtained that way.
The whole thing feels like a flashback to the Napster era, when individuals were fined hundreds of thousands of dollars for sharing a few songs, and lawsuits ultimately shut Napster down. Now, Anthropic stands accused of mass-scale piracy, and some feel the company may get off with a slap on the wrist — the cost of doing business. Big AI feels like it’s too big to fail.
For agencies, this raises complicated questions about risk. While the big AI companies may continue to operate with few consequences, smaller players like us and our clients could find ourselves easy targets for future lawsuits. Our contracts must be clear on how AI tools are used and who holds the liability for what they generate. We can’t control the industry, but we can protect ourselves.
AI Search Overviews Are Ranking High and Delivering Low
- Google’s AI Overviews are not just cutting into traditional clicks, they’re bringing in visitors who are less engaged.
- A new study by Adthena, based on 23 million clicks across more than 1,000 websites, found that traffic from AI Overviews tends to be lower quality, with higher bounce rates, shorter visits, and fewer pages viewed.
- Paid search is also being affected, as AI Overviews push ads further down the page and reduce overall impressions.
- The impact is especially noticeable on mobile, where AI content can take up nearly the entire screen.
- Agencies need to keep a close eye on traffic quality and revisit both SEO and PPC strategies in light of these changes.
- If you are providing SEO services, now more than ever it is important to communicate clearly with clients about shifting search behaviors and the metrics that matter most.
Worth a Look
- Rich Tabor on AI Coding… “When you’re not burned out from wrestling with dependencies and import statements, you have brain space for the interesting questions.”
- How to create a portfolio that stands out, a six-part series from design expert Eric Kennedy.
- Using Semrush to find SEO opportunities on Reddit – some interesting ideas from SEJ.
- A detailed look from Anthropic into how people use Claude for support, advice and companionship.