A few months ago I made a proof-of-concept showing that finetuning Stable Diffusion XL on known bad/incoherent images can actually let it output "better" images when those images are used as a negative prompt, i.e. specifying a high-dimensional region of the latent space that model generation should stay away from: https://news.ycombinator.com/item?id=37211519
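Mechanically, negative prompts work through classifier-free guidance: in typical Stable Diffusion implementations, the negative-prompt embedding takes the place of the unconditional (empty-prompt) embedding, so each denoising step is extrapolated away from what the negative prompt describes. A minimal sketch of just that guidance arithmetic, with toy NumPy vectors standing in for the U-Net's noise predictions:

```python
import numpy as np

def guided_noise(pred_cond, pred_neg, scale=7.5):
    # Classifier-free guidance: extrapolate from the negative-prompt
    # prediction toward the positive-prompt prediction, pushing the
    # denoising step away from the region the negative prompt encodes.
    return pred_neg + scale * (pred_cond - pred_neg)

# Hypothetical toy values, not real latents.
pred_cond = np.array([1.0, 0.0])
pred_neg = np.array([0.0, 1.0])
print(guided_noise(pred_cond, pred_neg))
```

With scale > 1 the result lands further from `pred_neg` than `pred_cond` already is, which is why training a model to concentrate "bad" images into a recognizable region makes the negative prompt more effective.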
There's a nonzero chance that encouraging the creation of a large dataset of known tampered data could ironically improve generative AI art models, by letting the model learn to recognize tampered data so the training process can work around it.