It's basically defacing at this point. Most think it will stop "AI", but in the end whether it does or not is up to whoever controls the AI, not the artist.
Given that the basic concept (heavily simplified) of AI image-gen is that you hand a denoising tool a canvas of nonsense and see what it hallucinates the "real" image under the noise to be, I doubt it would be a significant detour to also train denoising tools on common forms of adversarial noise, to clean up their training set. Not perfect, since in the end the image is being irrevocably defaced, but good enough. It'd just need the same tools put to a different use. I remember seeing extensions for AI frontends to handle common varieties like Glaze a year and a half ago. And that's assuming the adversarial noise works as well as its authors claim in the first place.
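For what it's worth, a minimal sketch of that "same tools, different use" idea: run the image through an img2img pass at low strength, so a little noise gets added and then denoised away, taking most of the high-frequency adversarial perturbation with it. Everything here (the diffusers usage, the model ID, the 0.15 strength) is my own assumption for illustration, not any actual tool's pipeline:

```python
# Sketch: "wash out" adversarial noise with a low-strength img2img pass.
# Assumes diffusers + torch installed and a CUDA GPU; the model ID and
# file names are placeholders.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

src = Image.open("glazed_artwork.png").convert("RGB")

# Low strength = re-run only a small tail of the diffusion schedule.
# The high-frequency adversarial perturbation tends not to survive the
# noise-then-denoise round trip, while the image content mostly does.
cleaned = pipe(
    prompt="",          # no guidance needed; we just want the denoising pass
    image=src,
    strength=0.15,      # fraction of the schedule to re-noise and denoise
    guidance_scale=1.0, # effectively disables classifier-free guidance
).images[0]

cleaned.save("cleaned_for_training.png")
```

The knob that matters is strength: high enough to disturb the perturbation, low enough that the artwork itself comes back mostly intact.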
The ugly watermarks won't do anything because in large-scale AI efforts there will probably never be a human who even looks at the image within the training set; at best it'll get included anyway, and at worst images detected as watermarked will be passed through an automated removal tool, which will be imperfect but good enough. A hobbyist making a LoRA can use more precise semi-automated watermark removal tools with manual touch-up, because the training sets involved are small enough for that to be viable.
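To make "automated removal tool" concrete, here's a sketch of the simplest case, where the watermark sits at a known fixed position; the file names and box coordinates are hypothetical, and a real pipeline would detect the region first:

```python
# Sketch: inpaint over a watermark region with OpenCV.
# Assumes opencv-python and numpy; the bounding box is a placeholder for
# whatever a detector (or a human, for small LoRA datasets) marks.
import cv2
import numpy as np

img = cv2.imread("watermarked.png")

# Mask the watermark region (here: a fixed box in the bottom-right corner).
mask = np.zeros(img.shape[:2], dtype=np.uint8)
h, w = mask.shape
mask[h - 60:h, w - 220:w] = 255  # hypothetical watermark bounding box

# Telea inpainting fills the masked region from surrounding pixels --
# imperfect, as said above, but often "good enough" for a training set.
cleaned = cv2.inpaint(img, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
cv2.imwrite("cleaned.png", cleaned)
```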
In the end, the most "effective" aspect of these measures is that if an artist reduces his work to nothing but ugly garbage that isn't worth looking at, people won't pay it enough attention to even want to make LoRAs. But if someone does care enough to go through all the preprocessing, the irony is that what comes out of the LoRA will probably look a lot better than the artist's real work does with sabotage like this applied to it.
Agreed. As much as I avoid posting those watermarked images, since the watermarks ruin them for me, it's unlikely Elon or Jack or the other big tech companies would care enough not to scrape them just because "the artist put things on their art".
Isn't this going to have limited effect since they still have the absolutely massive trove of scraped data from before anyone knew this was a thing?
I was told by someone that AI can still process data from it. Worse, since the idiots use the same watermark image, it can easily be processed out automatically once dozens of artworks bear the exact same watermark... In short, their efforts are completely pointless.
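That's exactly the weakness: a watermark graphic reused verbatim across a whole gallery is trivially machine-findable. A hypothetical sketch with OpenCV template matching (the file names and the 0.8 threshold are made up):

```python
# Sketch: detect a shared watermark via template matching.
# Assumes opencv-python; works because the same watermark image recurs
# pixel-for-pixel (or nearly so) across many artworks.
import cv2

scene = cv2.imread("artwork.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("shared_watermark.png", cv2.IMREAD_GRAYSCALE)

result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

if max_val > 0.8:  # confident match: this image carries the shared watermark
    th, tw = template.shape
    print(f"watermark found at {max_loc}, size {tw}x{th}")
```

Once detected, the region could be fed to an inpainting step like the one sketched earlier.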
People can opt out of Grok using their material for training. I feel most antis didn't read that far into the agreement.
People can opt out of grok, but the only way to opt out of AI entirely is by not uploading the art to the internet. Making something public inevitably means losing some control. Twitter has no opt-out or ToS condition for art being reuploaded from there to Danbooru, but that doesn't stop people. And neither did this image's "do not reupload" watermark.
Edit: That said, I think you're right, even though people are downvoting. It does seem to be a common misunderstanding among anti-AI artists that Twitter's forcing it and they need to go elsewhere to avoid it, which is doubly wrong, for both of the reasons we state.