I was using V2 to restyle a picture in different ways, just to try it out, because I assume V1 will be going away at some point. It rendered six of the styles and said one didn't meet the content moderation guidelines.
I retried it with the same prompt, and the second time it went through fine. That means nothing was wrong with the prompt or the image; it just arbitrarily decided it needed to render porn.
I don't know what it found objectionable, since the image was literally just a face from the neck up, which doesn't typically include any naughty bits. I'm only left to assume the AI took it upon itself to draw boobies or something and charged me 30 credits for its own insanity.
I didn't ask for porn; the AI decided to draw porn and then rejected its own output. In V2 that costs anywhere from 30 to 150 credits for the AI to interpret the prompt WRONG and blame the user for it.
This isn't a fluke either; it has happened more than once over the past few weeks, and it's not limited to renders of people. It's enough to make me suspect a pattern, a design choice, or a bug.
At this cost, the AI should be following the prompt to the letter, and if it detects that it made porn, it should keep regenerating until it unmakes the porn instead of charging us for nothing.
Otherwise V2 is quickly going to become not worth the cost, when V1 only charges one credit to get the prompt entirely wrong.
Also, for 30 to 150 credits, the error message needs to tell us exactly what in the prompt was flagged as objectionable so we can avoid it in the future.