Hacker News

A few months ago I made a proof of concept showing that finetuning Stable Diffusion XL on known bad/incoherent images can actually let it output "better" images when those images are used as a negative prompt, i.e. specifying a region of the high-dimensional latent space that model generation should stay away from: https://news.ycombinator.com/item?id=37211519
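To make the negative-prompt mechanism concrete: in classifier-free guidance, the negative prompt's noise prediction takes the place of the unconditional one, so each denoising step extrapolates away from the "bad image" region. A minimal sketch of that arithmetic on toy vectors (all names and values here are illustrative, not the actual diffusers API):

```python
import numpy as np

def guided_noise(eps_pos: np.ndarray, eps_neg: np.ndarray, scale: float) -> np.ndarray:
    """Classifier-free guidance: combine the prompt-conditioned prediction
    (eps_pos) and the negative-prompt-conditioned prediction (eps_neg).
    With scale > 1 the result is pushed past eps_pos, away from eps_neg."""
    return eps_neg + scale * (eps_pos - eps_neg)

# Toy 2-D stand-ins for the model's noise predictions at one denoising step.
eps_pos = np.array([1.0, 0.0])  # conditioned on the prompt
eps_neg = np.array([0.0, 1.0])  # conditioned on the negative prompt

out = guided_noise(eps_pos, eps_neg, scale=7.5)
```

A finetuned "bad images" LoRA just makes `eps_neg` a sharper estimate of where incoherent outputs live, so the same subtraction steers generation away from them more precisely.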

There's a nonzero chance that encouraging the creation of a large dataset of known tampered data could ironically improve generative AI art models, by letting the model recognize tampered data and letting the training process work around it.



Great LoRA post, thanks for sharing this again! Not sure how I missed it, as I'm especially interested in SD content.





