Several days later, after this article was published, Wacom issued a contrite statement saying that the images in question had been purchased from a third-party vendor and had slipped past the online AI-detection tools it used for vetting.
Despite this, Wizards of the Coast shared a new marketing campaign for its Magic: The Gathering card game on January 4th that was quickly scrutinized for containing strangely deformed elements commonly associated with AI-generated imagery.
The company initially denied AI was involved, insisting that the image was made by a human artist, only to back down three days later and acknowledge that it did in fact contain AI-generated components.
AI detectors are notoriously unreliable and regularly produce false positives, and other methods, like the Adobe-backed Content Credentials metadata, can only provide provenance information for images created using specific software or platforms.
Some creative professionals argue these are simply tools that artists can benefit from, but others believe any generative AI features are exploitative because they’re often trained on masses of content collected without creators’ knowledge or consent.
Wacom and WotC eventually provided similar responses to their respective situations: that the offending images had come from a third-party vendor, that the companies were unaware that AI had been used to make them, and that they promised to do better in the future.
The original article contains 1,181 words, the summary contains 223 words. Saved 81%. I’m a bot and I’m open source!