
At least for AI image generators it is a giant liability. As of two years ago, AI-generated CSAM that is indistinguishable from real photographic CSAM is treated as equally criminal. If users can spawn severely illegal content at will using your product, you will find yourself in a boiling cauldron 30 seconds after going live.

Stable Diffusion no longer includes even adult NSFW material in its training dataset because the model is too good at extrapolating. There are very few pictures of iguanas wearing army uniforms, but it has seen lots of iguanas and lots of uniforms and can skillfully combine them. Unfortunately, the same is true for NSFW pictures of adults and SFW pictures of children.



I realize this is a highly taboo topic, but I think there are studies which suggest that access to (traditional) pornography reduces the frequency of rape. So maybe Stable Diffusion could actually reduce the rate of abuse? (Disclaimer: I know nothing about the empirical research here; I'm just saying the right answer isn't obvious.)

Edit: It also seems that language models are a very different topic, since they block any erotic writing outright.


Yep. No sane company wants to deal with the legal and PR nightmare of their product being used to generate realistic CSAM based on a child star and/or photos taken in public of some random person's kid.



