The Association of American Publishers (AAP) argues in its filing in the Meta case that mass copying of protected text to train LLMs is commercially exploitative and incompatible with the limits of fair use, particularly where licensed markets for training data already exist. It also stresses that developers had lawful licensing pathways available and chose not to use them.
The broader message is that AI disputes are shifting from abstract ethics to market-structure arguments: substitution risk, licensing displacement, and compensation mechanisms. For rights holders, that framing may prove more durable in court than purely rhetorical objections to AI.