AI’s most powerful creative tools, like Stable Diffusion and Midjourney, are built on scraped datasets containing uncounted works of art, music, video, and photography. No permission. No licensing. No compensation. Just extraction at scale.
A 0.1% sample audit of the DataComp CommonPool dataset confirms what creators are starting to discover: millions of creative works have been fed into AI models without your knowledge, and they now power commercial products you’ll never benefit from.
These models were trained on data scraped between 2014 and 2022, before most people understood what AI even was. There was no way to consent, no licensing framework, and no protection for creators. Today, that content generates billions in value. And creators see none of it.
Someone had to step up and build infrastructure that protects creators. Camp is equipping creators with the tools to define, track, and monetize their IP on their terms:
→ Register IP onchain via Proof of Provenance
→ Usage terms are enforced by creators
→ AI must license what it trains on
→ Royalties flow automatically
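To make that flow concrete, here is a minimal sketch of how a provenance record with creator-set terms and royalties could be modeled. It is illustrative only: the interfaces, field names, and the quoteTrainingLicense helper are assumptions made for this example, not Camp’s actual contracts or API.

```typescript
// Hypothetical sketch only: Camp's real contracts and APIs are not detailed in this post.
// It models the flow above: a provenance record, creator-set usage terms,
// and the fee a model developer would owe before training on the work.

interface UsageTerms {
  aiTrainingAllowed: boolean; // creator decides whether models may train on the work
  licenseFeeUsd: number;      // up-front fee the licensee must pay
  royaltyBps: number;         // ongoing royalty share, in basis points (100 bps = 1%)
}

interface IpRecord {
  contentHash: string;        // e.g. a SHA-256 of the original file, anchored onchain
  creator: string;            // creator's wallet address (illustrative)
  registeredAt: Date;
  terms: UsageTerms;
}

// Returns the fee owed if training is permitted under the creator's terms,
// or null if the creator has not allowed AI training at all.
function quoteTrainingLicense(record: IpRecord): number | null {
  if (!record.terms.aiTrainingAllowed) return null;
  return record.terms.licenseFeeUsd;
}

// Example: a creator registers a work and a model developer requests a quote.
const artwork: IpRecord = {
  contentHash: "0x3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
  creator: "0xCreatorWalletAddress",
  registeredAt: new Date(),
  terms: { aiTrainingAllowed: true, licenseFeeUsd: 250, royaltyBps: 500 },
};

const fee = quoteTrainingLicense(artwork);
console.log(
  fee === null
    ? "Training not licensed"
    : `License fee: $${fee}, royalty ${artwork.terms.royaltyBps / 100}%`
);
```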
AI isn’t going away. But the way it sources data must change. Camp is building the infra for creators to become stakeholders, not data points. We will end the age of silent data scraping. ⛺