On Wednesday, Stability AI announced that it would allow artists to remove their work from the training dataset for an upcoming Stable Diffusion 3.0 release. The move comes after an artist advocacy group called Spawning tweeted that Stability AI would honor opt-out requests collected on its Have I Been Trained website. Details of how the plan will be implemented, however, remain incomplete and unclear.
As a brief summary: Stable Diffusion, an AI image synthesis model, gained its ability to generate images by "learning" from a large dataset of images scraped from the Internet without consulting the rights holders for permission. Some artists are upset about this because Stable Diffusion can generate images that potentially rival those of human artists, in unlimited quantities. We have been following the ethical debate since Stable Diffusion's public launch in August 2022.
To understand how the Stable Diffusion 3 opt-out system is supposed to work, we created an account on Have I Been Trained and uploaded an image of the Atari Pong arcade flyer (which we do not own). Once the site's search engine found matches in the Large-scale Artificial Intelligence Open Network (LAION) image database, we right-clicked several thumbnails individually and selected "Disable This Image" from a context menu.
Once flagged, the images appeared in a list of images we had marked as opted out. We encountered no attempt to verify our identity, nor any legal check on whether we actually held rights to the images we had supposedly "disabled."
Other hitches: to remove an image from training, it must already be in the LAION dataset and must be searchable on Have I Been Trained. And there is currently no way to opt out large groups of images, or the many copies of the same image that might exist in the dataset.
The system, as currently implemented, raises questions that have echoed through the announcement threads on Twitter and YouTube. For example, if Stability AI, LAION, or Spawning undertook the enormous effort of legally verifying ownership to control who opts out images, who would pay for the labor involved? Would people entrust these organizations with the personal information needed to verify their rights and identity? And why attempt verification at all when Stability's CEO has said that, legally, permission is not required to use the images?
Moreover, placing the burden on artists to register with a site that has no binding connection to Stability AI or LAION, and then to hope their request is honored, seems unpopular. In response to Spawning's statements about consent in its announcement video, some people noted that the opt-out process does not meet the definition of consent in the European General Data Protection Regulation, which states that consent must be actively given, not assumed by default ("Consent must be freely given, specific, informed and unambiguous. In order to obtain freely given consent, it must be given on a voluntary basis."). Along these lines, many argue that the process should be opt-in only, and that all artwork should be excluded from AI training by default.
Currently, it appears that Stability AI is operating within US and EU law by training Stable Diffusion on scraped images collected without permission (although this issue has yet to be tested in court). But the company is also taking steps to acknowledge the ethical debate that has sparked large protests against AI-generated artwork.
Is there a balance that can satisfy artists while allowing advancements in AI synthesis technology to continue? For now, Stability CEO Emad Mostaque is open to suggestions, tweeting: "The @laion_ai team are very open to feedback and want to create better datasets for everyone and are doing a great job. From our end, we believe this is transformative technology and we are happy to engage with all parties and try to be as transparent as possible. Everything is moving and maturing, fast."