
/ic/ - Artwork/Critique



File: 238 KB, 1792x1024, AI-Poison.jpg
No.7054113

>TL;DR - Two tools to protect your art from AI image generators have been developed.
>One is defensive
>The other is borderline malware, but in an image format
>*CYBERPUNK INTENSIFIES*

>Watch it here, with the timestamp - https://youtu.be/SbCBaopgbqw?si=b7-6-K0HBku4GH2e&t=444
>It's worth your time, I promise

Hello, /ic/

First post here. My apologies if this thread isn't artwork related per se, but I find it quite relevant to many of you.
As most of you must be aware by now, today's AI image generators plagiarize on an extreme level, and we as artists have little to no say in how those tools and the companies behind them operate.

>Which is extremely unfair and plain evil, since they use our artwork as direct sampling pools for training.
>Personally I think this is industrial espionage and anyone pursuing such activities should be punished harshly

Anyhow, we have two tools, both developed by the University of Chicago.

>CONTINUES

>> No.7054117
File: 506 KB, 1024x967, AI-Poison_02.jpg

>FUCK I FORGOT THE THREAD TITLE
>Sorry

>Continuation

>One is named Glaze

This prevents AI models from "learning" your art style accurately; what they extract ends up as a very weird, abstract thingy.
I think this would be particularly good if you work in cartoonish/anime styles, since the clean line art could easily become a huge hazard for a model trying to learn it.

>The other is named Nightshade

This completely novel concept took me by huge surprise. It's super obvious in hindsight, yet nobody (that I know of, even in the fictional realm of ideas and literature) came up with such an idea.
These models are basically neural networks, which function similarly to a brain. The difference is that Nightshade permanently poisons them into learning erroneously, and potentially not just a single concept but multiple at once.

Potentially, you could waste months or years' worth of training cycles in a matter of days or weeks.
Eventually, it could become hard or impossible to tell what a "correct" shape is and what is an erroneous artifact in, say, a generated image.
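To make the idea concrete, here's a toy sketch of data poisoning. This is NOT Nightshade's actual algorithm (the real tool computes optimized perturbations against diffusion models); it's a made-up nearest-centroid "model" with invented numbers, just to show how samples that keep their label but shift their features drag a learned concept somewhere else:

```python
# Toy illustration of data poisoning: NOT the real Nightshade algorithm,
# just a made-up nearest-centroid "model" showing how poisoned samples
# drag a learned concept toward a different one.

def centroid(points):
    """Average position of a list of (x, y) feature vectors."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def dist(a, b):
    """Euclidean distance between two 2D points."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

# Clean training data: the "dog" concept lives near (0, 0),
# the "cat" concept lives near (10, 10).
clean_dog = [(0.0, 0.1), (0.2, -0.1), (-0.1, 0.0)]
cat = [(10.0, 9.9), (9.8, 10.1), (10.2, 10.0)]

# Poisoned samples: still *labeled* "dog", but their features
# have been nudged to look like "cat" to the model.
poison = [(9.5, 9.6), (9.7, 9.4), (9.6, 9.8), (9.9, 9.5), (9.4, 9.7), (9.8, 9.6)]

dog_clean_model = centroid(clean_dog)
dog_poisoned_model = centroid(clean_dog + poison)

# After poisoning, the learned "dog" concept has drifted most of the
# way toward "cat": asking for a dog produces something cat-like.
print(dist(dog_clean_model, centroid(cat)))     # far apart
print(dist(dog_poisoned_model, centroid(cat)))  # much closer after poisoning
```

The point is that the model's owner never sees a wrong label; the drift only shows up after training, which is what makes curation expensive.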

>Cool! Are you shilling for the team who made this?
No. I am a 3D artist myself and just wanted to share this news, since it could help your art career or aspirations.

>But what if they train AI in order to detect glazed and poisoned images? Wouldn't this render this whole technology useless?
Yes, but AI is very bad at this, at least as of right now, and I sincerely doubt it will ever be good at it. To this day it's hard to tell even what is photoshopped and what is raw and unaltered.


>Anyhow, have fun with Glaze and Nightshade :D

>> No.7054123

>>7054113
>>7054117
AI bros are just gonna develop a way to counter these. In the end the one with more funding wins, and we know which side that is.

>> No.7054137
File: 139 KB, 1080x1080, Disdain_Itachi_Pepe.jpg

>>7054123
Ok. In that case there would need to be three things:

>A - A.I models good enough to detect what is a poisoned image and what is not. There is no such thing right now, not even for forensics.
>B - A.I that can "revert" said changes to a satisfactory degree. How can you undo something you didn't do and don't know the workings of?
>C - A.I companies willing to slow down and downscale the whole size of their operations. The whole point is to automate, not to hire curators.

There isn't an A, B or C as of right now.

>By the way your post sounds like AIdiot cope

>> No.7054140

>>7054137
>By the way your post sounds like AIdiot cope
No, your post actually gave me some relief; we're on the same side

>> No.7054148

>>7054140
I wish you luck.

Give those scum the Night Shade.
Show them your whole fury.

Let their Data Centers BURN.

>> No.7054153

Artists got too comfortable with boorus. That's why it is so easy to have your shit scraped.

>> No.7054161

>>7054153
Perhaps.

If not for scraping, these models wouldn't have gotten this good this fast.

>> No.7054217
File: 178 KB, 2918x596, 1707425615899254.png

TBH a good amount of my stuff is /pol/ humor and was always gonna be questionably racist or political, so training on my stuff is gonna be kinda funny.

Even more so if companies really do post it haphazardly and it ends up as someone's advertisement. They'll need to explain why they're promoting nazism or ultranationalists, though I know Fiverr folks have already been used for that on a small scale.

>> No.7054223

>>7054217
What the fuck wrote this shit?

>> No.7054233

>>7054223
Not sure, though I found this reposted on /pol/ honestly
I think the original anon was a curry, since they've been spamming stuff there and on /int/ and for some reason are the most surprised by what is honestly just everyday 4chan stuff
>most prob are newfag call center guys

>> No.7054299
File: 229 KB, 938x962, 1707355884567466.jpg

>>7054223
Funny thing I saw too was a thread made a couple months ago by some prompt retard going
>look who's ok with AI training on their stuff
and it was Stonetoss
Then a week or two later some other AIbro made a thread asking how Anti-AI people could use AI to be racist
Honestly, next they'll train it on Lovecraft's writing and be shocked at what comes out

>> No.7054308
File: 72 KB, 526x524, tin man get fucked.jpg

>>7054113
a little too late
ai bros already drank all the milkshake
no new art will emerge from artists, rendering nightshade and glaze obsolete

>> No.7054322
File: 377 KB, 622x621, Angry_Cat_Purring_Stops.png

>>7054233
I think not even AI could write garbage this bad.

>> No.7054326

>>7054308
Once again though, if your thing is more kool-aid, they're retarded if they post that shit elsewhere

>> No.7054327
File: 115 KB, 1000x563, AI_Out_Of_Control_Stonetoss.jpg

>>7054299
This reminds me of something

>Pic. Extremely related

>> No.7054333
File: 52 KB, 501x720, American_Psycho_Advices.jpg

>>7054308
Not true.

Again -

>Nightshade is poison

That means models can be pushed from "learning" correct samples to learning poisoned ones.

And we're not even touching the biggest issue of all:

>Using sampling from already AI-generated graphics
>Literal AI models inbreeding their learning models
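The "inbreeding" point above has a standard toy illustration, often called model collapse. This is not a real image model, just a one-dimensional Gaussian repeatedly refit to its own outputs; the sample sizes and generation counts are made up:

```python
# Toy sketch of "model inbreeding" (model collapse): a Gaussian "model"
# trained only on the previous generation's outputs loses diversity.
import random

random.seed(0)

def fit(samples):
    """'Train' a model: estimate mean and standard deviation."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    return mean, var ** 0.5

def generate(mean, std, n):
    """'Generate art' by sampling from the fitted model."""
    return [random.gauss(mean, std) for _ in range(n)]

# Generation 0: real, diverse "art" (standard normal data).
data = [random.gauss(0, 1) for _ in range(1000)]
_, std0 = fit(data)

# Each later generation trains only on the previous generation's outputs,
# from a small sample (small samples under-represent the tails).
for _ in range(500):
    mean, std = fit(data)
    data = generate(mean, std, 20)

_, std_final = fit(data)
print(std0, std_final)  # diversity shrinks generation after generation
```

Each generation loses a little of the tail variety it never happened to sample, so diversity tends only to shrink; fresh human-made data is what restores it.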

>> No.7054334
File: 1.29 MB, 3573x2720, GD4CcwLagAAbgWZ.jpg

Nightshade is too little too late. LAION-5B has already given aibros all the training data they'll ever need

how "up to date" the data is is completely irrelevant now

>> No.7054348
File: 62 KB, 646x566, Become_Ungovernable_Orb.jpg

>>7054333
And by the way:

>AI only got this good from scraping entire websites with no curation whatsoever

Now they will have to deal with a literal sampling minefield.

>Am I going to fuck up my LLM if I add this artist?

Every sample added becomes a game of Russian roulette with Nightshade.

>> No.7054357
File: 127 KB, 720x369, IMG_20230629_195923.jpg

>>7054334
I wonder if this is gonna give /pol/ ideas about "reference photos" being "up to date"; I know there was a Taylor Swift thread there earlier last week.
Then again, I wonder how much of all that training data is just gonna be used for loli porn or fetishes
>truly the pinnacle of science is to provide more coom material
Also something fucked to think about
>China made an AI child
>it's a little girl
>some AIfags dream of robo waifus
>near the first thing made is a little girl
Also not too long ago a Chinese couple was executed for killing their kids. I wonder if this is a "training" thing
Some would say getting a dog is a warm-up for the responsibility of raising kids, but then again I don't think that analogy works too well for China
>funny screenshot semi related

>> No.7056093

>>7054357
meds

>> No.7056397

>>7056093
Well anon, I said I thought of something fucked; I'm not sure what you were expecting

>> No.7056428

This sounds nice and all, but last I checked both Glaze and Nightshade noticeably degraded image quality, and in Nightshade's case it was shown to be ineffective.

>> No.7056502
File: 3.01 MB, 2894x4093, 1699180331133974.jpg

I suspect AIfags will just hire thirdies to paint over the artefacts and noise. Take picrel: I could probably edit enough of it out to make it usable again for training in two or three hours, meaning a guy in China might be willing to do it for $10-20. It won't work for artists with busier rendering styles, but for anyone else it'll just mean making a LoRA takes time and effort instead of being free and taking a few hours of your GPU.

It might be useful against corpos: maybe if every artwork you made public depicting something you own the rights to was glazed to shit, and models made by another company still figured out how to draw it, you could argue there was intent to fuck you over and sue. But they're not stopping the CivitAI open-source neckbeards with this.

>> No.7056564
File: 21 KB, 230x219, c1.png

>>7056502
>AI autists will unironically spend hours drawing to avoid drawing

>> No.7057188

you fell for an academic shill project with no real-world application. i hate the aifags too but shilling glaze/nightshade is just actual seething

>> No.7057191

>>7056502
just to recap:

>noisy images are easily filtered in an early stage of processing when building a large model
>this technique is ineffective on LoRA models, which account for most of the egregious stealing right now

it's actually worthless.
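For context on the first point: "easily filtered" usually means a cheap statistical screen run before training, not an AI detector. A toy version with a made-up threshold (no real pipeline works exactly like this):

```python
# Toy high-frequency screen: images whose neighboring pixels differ a lot
# get flagged as "noisy" and dropped before training. The threshold is
# made up; real data pipelines are more sophisticated than this.
def roughness(img):
    """Mean squared difference between horizontally adjacent pixels."""
    total, count = 0, 0
    for row in img:
        for a, b in zip(row, row[1:]):
            total += (a - b) ** 2
            count += 1
    return total / count

smooth = [[10, 11, 12, 13], [10, 11, 12, 13]]   # gentle gradient: kept
noisy = [[10, 90, 5, 120], [130, 2, 99, 14]]    # static-looking: dropped

THRESHOLD = 100  # made-up cutoff
print(roughness(smooth) < THRESHOLD)  # below threshold: survives filtering
print(roughness(noisy) > THRESHOLD)   # above threshold: filtered out
```

The obvious counterpoint is that Glaze/Nightshade perturbations are deliberately low-magnitude, so a screen this crude wouldn't reliably separate them from clean images.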

>> No.7057192

>>7054113
>>7054117
all AI tech nerds have to do is create a separate program to screencrop images and offload those into the learning algorithm. how the fuck is hidden malware going to stop windows screencropping?

>> No.7057204

how is it theft? your art being in a model is no different than your art being in someone's memory, the only difference is that AI doesn't have to grind or practice

>> No.7057209
File: 697 KB, 701x1161, file.png

>>7057192
>how the fuck is hidden malware going to stop windows screencropping?
if they want to circumvent NS? nothing
only the law can effectively stop the training on a large scale
the NS team's task is thrice as hard as the AI team's, since the image has to look normal to the human eye yet still somehow trick the AI
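That "looks normal to a human but tricks the AI" requirement is usually formalized as a per-pixel budget: the poisoned image may differ from the original by at most a few intensity levels in any channel. A minimal sketch of just the clamping step, with a hypothetical EPSILON and toy pixel values (the real optimizer is far more involved):

```python
# Sketch of the "imperceptible change" constraint: every pixel of the
# perturbed image must stay within +/- EPSILON of the original (an
# L-infinity budget). Toy values; not Nightshade's actual optimizer.
EPSILON = 4  # max per-channel change out of 255: invisible to most eyes

def clamp_perturbation(original, perturbed, eps=EPSILON):
    """Project a perturbed image back inside the per-pixel budget."""
    out = []
    for o, p in zip(original, perturbed):
        low, high = max(0, o - eps), min(255, o + eps)
        out.append(min(max(p, low), high))
    return out

original = [10, 128, 250, 0, 77]
attack = [90, 120, 255, 30, 77]   # whatever the optimizer proposed
safe = clamp_perturbation(original, attack)
print(safe)  # [14, 124, 254, 4, 77]: every value within 4 of the original
```

An epsilon of a few levels out of 255 is below what most people notice; the hard part, which this sketch skips entirely, is finding a perturbation inside that budget that actually moves the model's features.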

>> No.7059720

>>7054123
already done btw, here's some tard on tumblr explaining why it's snake oil: https://www.tumblr.com/reachartwork/740023392920551424/not-remotely-the-same-ai-system-to-my-knowledge

>> No.7059825

>>7059720
Well you said he's a tard so there's no reason to waste time reading that

>> No.7059829

>>7054334
At this point the only solution is to invent a time machine, go back to the early stages of the internet and show people what happened with everything that got putting online for the next 25-30 years. Show them how it will all be swallowed up and compiled into a monster that will ruin their careers and passions and create the perfect deepfake propaganda machine.

>> No.7059830

>>7059829
*got put

>> No.7059831

>>7059829
no one is really using this in a workflow yet and the technology would need to undergo a pretty extreme revolution to allow it. Still 5 years out imo.

>> No.7059840

>>7059831
True, I hope it's longer than that and preferably some kind of wall, simply out of spite, because these wannabes don't deserve anything better than what they have available now.

>> No.7061884

>>7057192
>all AI tech nerds have to do is just create a seperate program to just screencrop images and offload those into the learning algorithm
If that's true, why make a forensics tool to detect it?
https://github.com/RichardAragon/NightshadeAntidote
Also, here are some (((ai art)))-loving tech bros taking this seriously
https://www.youtube.com/watch?v=SbCBaopgbqw
https://www.youtube.com/watch?v=nDrCC2Uee3k
https://www.youtube.com/watch?v=7pZ0kC3aSz0
https://www.youtube.com/watch?v=MGEtHqsFnHU

>> No.7061983

>>7054123
You take on a very minor inconvenience in order to inconvenience them an unknown amount. Seems like a good gamble to me.

>> No.7061988

>>7054333
And we're not even touching the biggest issue of all:

>Using sampling from already AI-generated graphics
>Literal AI models inbreeding their learning models

I don't know how practical AI learning from AI images will turn out to be. At the very least, chess neural nets learn by playing against themselves, although I realize it's not quite the same thing: self-play gets a ground-truth win/loss signal, while a model training on its own images has no external check on correctness.

>> No.7062058
File: 66 KB, 640x733, FsAxQWSaUAA-dHr.jpg

>>7061988
>chess analogy
I'll call you a curry for that, but what part of possibly spreading misinformation or helping spread racism is part of the chess strategy then?