He allegedly used Stable Diffusion, a text-to-image generative AI model, to create “thousands of realistic images of prepubescent minors,” prosecutors said.
Yeah, it’s very similar to the “is loli porn unethical” debate. No victim, it could supposedly help reduce actual CSAM consumption, etc… But it’s icky, so many people still think it should be illegal.
There are two big differences between AI and loli, though. The first is that AI would supposedly need to be trained on CSAM to be able to generate it. An artist can create loli porn without ever using CSAM references. The second difference is that AI is much, much easier for the layman to use. It doesn’t take years of practice to create passable porn. Anyone with a decent GPU can spin up a local instance and be generating within a few hours.
In my mind, the former difference is much more impactful than the latter. AI becoming easier to access is likely inevitable, so combating it now is likely only delaying the inevitable. But if that AI is trained on CSAM, it is inherently unethical to use.
Whether that makes the porn generated by it unethical by extension is still difficult to decide, though, because if artists hate AI, then CSAM producers likely do too. Artists are worried AI will put them out of business; couldn’t the same be said about CSAM producers? If AI has the potential to run CSAM producers out of business, then it would be a net positive in the long term, even if the images being created in the short term are unethical.
Just a point of clarity: an AI model capable of generating CSAM doesn’t necessarily have to be trained on CSAM.
That honestly brings up more questions than it answers.
Why is that? The whole point of generative AI is that it can combine concepts.
You train it on the concept of a chair using only red chairs. You train it on the color red, and the color blue. With this info and some repetition, you can have it output a blue chair.
The same applies to any other concepts: larger, smaller, older, younger; man, boy, woman, girl; clothed, nude, etc. You can train each of them individually and gradually, then generate images that combine these concepts.
Obviously this is harder than just training on data of exactly what you want. It’s slower, it takes more effort, and the results are inconsistent, but they are results. You can then curate the most viable of the images created this way to train a new, refined model.
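To make the “blue chair” point concrete, here’s a minimal sketch of concept composition at inference time, assuming the Hugging Face diffusers library, a CUDA GPU, and a generic Stable Diffusion checkpoint (the checkpoint name and prompt below are placeholders for illustration, not anything tied to this case):

```python
# Minimal sketch: a text-to-image diffusion model composing concepts it
# learned separately ("chair" and "blue") into an image it may never have
# been explicitly shown. Assumes the `diffusers` library and a CUDA GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Even if every chair in the training data were red, the model represents
# "blue" and "chair" separately, so the prompt can combine them at
# generation time.
image = pipe("a photo of a blue chair", num_inference_steps=30).images[0]
image.save("blue_chair.png")
```

The curate-and-retrain step described above would then roughly correspond to a standard fine-tuning pass over the selected outputs, rather than anything specific to this workflow.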
Yeah, there are photorealistic furry models, and I have yet to meet an anthropomorphic dragon IRL.