
/ic/ - Artwork/Critique



File: 1.18 MB, 1470x3349, 340895890e4w80934859.png
No.6435921

Is this true? Are datasets just a static archive of images after all? If so, then that means that AI artists really aren't artists at all, because they aren't actually creating anything by prompting. That would also mean that this is a clear cut case of data laundering and copyright violation that doesn't even need a change in copyright law to be considered as such, since not only does the dataset still contain all the images that it's trained on but also only creates "new" images via interpolation. Right or wrong? What do you guys think?

>> No.6435930

I am an artist and I like ai.

>> No.6435947

>>6435921
The anon in the screencap is using pilpul to trick you into believing that. He's going "hurr everything is reduceable to math so it actually DOES contain the images" but then when the other anon takes him to task using the library of babel as an example, the original anon is dismissive and moves his position to being the opposite of what he originally claimed.

Also I love how fucking braindead people are to allow "You are a flesh automaton animated by neurotransmitters" to live rent-free in their head. It's especially ironic because that wasn't said by some AI nigger, it's from Cruelty Squad. A videogame MADE BY AN ARTIST which is more-like art than anything these anti-AI seethers have ever made and probably will ever make.

>> No.6435954

>>6435921
It’s over anon, let it go

>> No.6435957

>>6435947
>everything is reduceable to math
Because it is? How do you think that algorithms work, anon?
From the research that I've been doing on the topic, this seems to really be how they work. People who work with AI like SD even say as such.

>> No.6435959

https://youtu.be/YQ2QtKcK2dA?t=620
Emad himself even says in this video that they took 250 TB of data and compressed it down to 2 GB.
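
For scale, here's the back-of-envelope arithmetic those quoted figures imply. The training-image count below is an assumption for illustration (roughly LAION-scale), not a number from the video:

# Back-of-envelope on the quoted "250 TB in, ~2 GB of weights out" figures.
# assumed_image_count is an illustrative assumption, not a quoted number.
raw_dataset_bytes = 250e12      # ~250 TB of source images (as quoted)
model_bytes = 2e9               # ~2 GB of weights (as quoted)
assumed_image_count = 2.3e9     # assumption: a LAION-scale image count

print(f"overall ratio: {raw_dataset_bytes / model_bytes:,.0f} to 1")              # 125,000 to 1
print(f"weight budget per image: {model_bytes / assumed_image_count:.2f} bytes")  # under 1 byte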

>> No.6435964

>>6435947
> The anon in the screencap is using pilpul to trick you into believing that. He's going "hurr everything is reduceable to math so it actually DOES contain the images"...
Anon, how do you think computers work?

>> No.6435977

>>6435957
>>6435964
He's using a language trick to try to make people believe that the data in the actual datasets "are the actual images" when they're not. The data is distinct from the images themselves, but it is DERIVED from them.

That's why I said that when the other anon took this to the logical conclusion, and the original one got mad, it's because the logical conclusion of just saying "well it's all just data" means that EVERYTHING is "all just data" and you are now at the point where nobody is actually "making new art" because the library and canvas of babel (two different things) have already got anything that you could ever possibly make inside them.

His line of logic is shit and he doesn't actually believe it, he's just using whatever rhetoric he thinks will support his "AI BAD" conclusion.

>> No.6435980
File: 1.89 MB, 710x3995, nitter.garudalinux.org_svltart_status_1592227197921939456.png

Could anyone on /ic/ weigh in on this? It seems to me that if people actually had a simple, accurate explanation of how this works and what the latent space is then this would deal a pretty serious blow to the AI shills. I'm having trouble figuring it out since I'm not very technically minded.

>> No.6435982

>>6435980
Bad idea, if people knew how it worked they'd be pro AI.

>> No.6435987
File: 909 KB, 710x2860, nitter.garudalinux.org_svltart_status_1592233213514362881.png

>>6435980
So what I'm getting so far is that images are encoded into this "latent space", and they can be decoded back into "image space" practically unchanged. The latent space seems to be like a graph with every image being a point plotted on it, and every point in between the original image points is an "interpolation". Of course, this graph has way more than two or three dimensions, not really sure how many.
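
If it helps make that concrete, here's a minimal sketch of what "interpolating in latent space" looks like as an operation. The encode/decode functions below are hypothetical stand-ins (in a real model like Stable Diffusion they're trained neural networks, not matrix multiplies), so this only shows the shape of the idea, not how the actual thing is implemented:

import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for a trained encoder/decoder pair: a random projection
# from a 64-value "image" down to an 8-value latent, and a rough pseudo-inverse back.
W = rng.standard_normal((64, 8)) / 8.0

def encode(image_vec):
    return image_vec @ W        # z = E(x): image space -> latent space

def decode(latent_vec):
    return latent_vec @ W.T     # x_hat = D(z): latent space -> image space (approximate)

# Two toy "images" standing in for real pixel data.
x_a = rng.standard_normal(64)
x_b = rng.standard_normal(64)
z_a, z_b = encode(x_a), encode(x_b)

# Walking between the two points in latent space and decoding each step:
for t in np.linspace(0.0, 1.0, 5):
    z_mid = (1 - t) * z_a + t * z_b     # linear interpolation between latent points
    x_mid = decode(z_mid)
    print(f"t={t:.2f}  first values of decoded result: {np.round(x_mid[:3], 2)}")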

>> No.6435988

>>6435977
>He's using a language trick to try to make people believe that the data in the actual datasets "are the actual images" when they're not. The data is distinct from the images themselves, but it is DERIVED from them.
But isn't that bad enough? Given that the data in question was taken without consent via a bot scraping popular sites? There's a reason that certain styles of work need to have companion datasets made for them, because the datasets are made from artists' images and work.

>> No.6435995

>>6435977
But anon, from everything I'm reading, this "latent space" does exist and does contain every image that's been fed into it. Could you provide a counterpoint proving that it's not the case?

>> No.6436000

>>6435982
How is taking everyone's portfolios to train a program not theft? Diffusion models aren't even true AI. When there's an android with true AGI and sentience, then I'll agree that a computer can be inspired. Until then it's all just semantics to cover up your Artist Simulator video game.

>> No.6436005

>>6435977
>He's using a language trick to try to make people believe that the data in the actual datasets "are the actual images" when they're not. The data is distinct from the images themselves, but it is DERIVED from them.
Even if this were true (it's not), this would be a distinction without a difference in terms of ethics of use.

>> No.6436007

>>6435988
No, in the US and EU data scraping is protected by law because a fuckload of other technology requires it to function. It would cripple the internet as a whole if it weren't allowed.
Copyright law in the US also has a "de minimis" rule where small enough amounts of copying are not considered to be infringement. Any individual work scraped is such a small amount of the total that it easily passes that test. The only exception is if someone outputs something substantially similar to a copyrighted work, at which point they have to simply not share it.

Even the paid ones like MidJourney can probably avoid losing a case, for the same reason that games with microtransactions have you buy "credits" and then the credits are used for the actual store items rather than it being a direct monetary payment.

>> No.6436011

>>6436007
Web scraping might be legal but is it legal to use copyrighted data for profit in this manner?

>> No.6436017

>>6435995
Because flat out the images aren't there. If you could compress such a huge amount of files into 4GB and make them retrievable again you'd have a much more significant technological advancement than AI art.

It simply doesn't work that way.

>>6436000
Because de minimis copyright exemptions and laws that protect data scraping. Copying a 2x2 pixel square out of thousands of images to make a collage isn't unethical. And AI does less than that because it isn't copy-pasting.

>>6436005
It is a difference. Data "about something" or "derived from something" is not the same as "data that is the thing." You are allowed to use other peoples' work as long as your result lies within fair use.

>>6436011
Yes, it falls within fair use particularly because it's using such small pieces ("de minimis") of any individual work that it's not a violation.

Ethically, art has always been more about things like "If you copy one person that's considered bad, if you copy from everyone then you're hailed as a genius."

>> No.6436020

>>6435977
>The data is distinct from the images themselves, but it is DERIVED from them.
That doesn't make it any better. It ain't even about taking people's ideas. Everyone's data was harvested en masse to create software and the devs profited from enormous amounts of funding. In any other instance the AI art crowd would probably be horrified, but they don't care because it gave them something they wanted.
People making it about copying ideas or styles are ruining the conversation and missing the point. But most artists are too emotionally attached to their work to see the bigger picture and that's why we keep seeing bs arguments about copyright and computers being "inspired".

>> No.6436028

>>6436011
Sounds like what needs to happen is not more copyright laws, but laws specifically about web scraping. Art is only a tiny piece of the pie here, and I hope this issue has woken people up at least a little bit.

>> No.6436040

>>6436011
It's not legal.
https://www.socialmediatoday.com/news/LinkedIn-Wins-Latest-Court-Battle-Against-Data-Scraping/635938/

"LinkedIn has argued that this is against its user agreement (i.e. users had not agreed to allow the usage of their information in this way) and is therefore in violation of the Computer Fraud and Abuse Act. The case has gone back and forth ever since, and has become a precedent-setting example for data scraping, and what can be done, legally, with publicly available information online.

And in the latest ruling, the court has ruled in favor of LinkedIn.

As explained by LinkedIn’s Chief Legal Counsel Sarah Wight:

“Today in the hiQ legal proceeding, the Court announced a significant win for LinkedIn and our members against personal data scraping, among other platform abuses. The Court ruled that LinkedIn’s User Agreement unambiguously prohibits scraping and the unauthorized use of scraped data as well as fake accounts, affirming LinkedIn’s legal positions against hiQ for the past six years. The Court also found that hiQ knew for years that its actions violated our User Agreement, and that LinkedIn is entitled to move forward with its claim that hiQ violated the Computer Fraud and Abuse Act.”


Just because some skid stained paj said it isnt illegal doesnt make it so

>> No.6436046

>>6436020
The data harvesting is fine and legal, which is why you shouldn't put photos of yourself on the internet. If it's on the public net then you're consenting to having your data accessed, analyzed, and used by others within fair use.

The twitter trannies mad about this also have a lot of crossover with the shitheads like keffals who tried to get kiwifarms shut down. Literally if you put your shit out in public, it's in public and your options for fucking with people using it are limited to the narrow set of exclusive rights outlined in copyright law.

>> No.6436049

>>6436040
personal data is taken more seriously, for obvious reasons

>> No.6436051

>>6436040
Your gay american laws don't apply to me.

>> No.6436052

>>6436040
It is legal. The HiQ case is "special" because they made fake accounts to scrape linkedin which bound them to linkedin's TOS.

US law generally requires affirmative consent to a site's TOS, which is the little checky box you have to select when making, say, an artstation account, or clicking a button to enter a site. So this really only applies to scraping data which is behind a signup wall.

>> No.6436054

>>6436049
> and has become a precedent-setting example for data scraping, and what can be done, legally, with publicly available information online.

>> No.6436056
File: 2.19 MB, 2596x1070, perceptual image compression.png

>>6436017
>flat out isn't there
>it simply doesn't work that way
I've been trying to parse a research paper on the subject, "High Resolution Image Synthesis With Latent Diffusion Models", and even though I don't understand all the jargon and mathematical gobbledygook it actually does seem to be that the entire purpose of latent diffusion models is to encode images down into this "latent space" and then decode them. I mean, it's even in the name- "latent", as in latent space, and "diffusion", the method of encoding.

Thus they're doing exactly what you're denying that they do. Do you even know how the technology works in any meaningful capacity? Are you just ignorant of its inner workings and pretending that you're not?

>> No.6436058

>>6436052
>The Court also found that hiQ knew for years that its actions violated our User Agreement,
>also found
It wasn't the only thing. You're not reading.

>> No.6436061

>>6436054
yeah, in the context of personal data

>> No.6436067

>>6436061
Ah ok youre retarded

>> No.6436074

>>6436067
it's okay to say you don't know how legal precedent works

>> No.6436077

You should stick to your shitty drawings and leave all the legal stuff to smarter people.

>> No.6436079

>>6436046
Sadly in this day and age you are expected to have a web and social media presence and if you avoid doing so you're accused of being a schizo and a luddite. I think if people knew that their data could be used this way then a lot more people would have avoided social media. Or maybe not, since a lot of people these days don't value privacy and all and are cool with giving it all away.

>> No.6436083

>>6436074
Sure thing jeet. You really debunked me and convinced everyone else reading this thread how wrong I am by saying that
what will I do now!

>> No.6436087

>>6435982
Have you ever fucking spoken to an ML engineer? Most of them are in the pro-artist camp, gawking in horror at a bunch of AI fanboys spouting falsehoods about ML, circle jerking each other in their AI hype echo chambers about how AGI is coming within the next 5 years because ChatGPT told them so.

https://www.youtube.com/watch?v=pPP5JpPP4sU

Stop listening to people whose credentials are "well I read a forbes article about machine learning once"

>everything is reduceable to math

Ain't that fuckin simple bud.

https://www.youtube.com/watch?v=HeQX2HjkcNo&

I fucking guarantee you, we will hit the limits of computing technology before we come remotely close to AGI.

https://www.youtube.com/watch?v=hXgqik6HXc0

The success of the AI hype cult depends entirely on you being computer-science illiterate, don't let them scare you into submission.

>> No.6436110

>>6436056
That's what's being ATTEMPTED in some areas, but to my understanding it's not actually being used successfully at mass scale. There are a lot of problems with it, but it's really not compressing images in the same way anything else does. It's more like it's recording instructions on "how to make a similar image", to my understanding. Which is why it's such a nightmare trying to explain to people, because they assume that doing THAT is impossible, but it's how this seems to work.

It's also why claims of "it literally contains the images" are not true any more than instructions on how to make a cake contain a cake. The difference obviously is that displaying pixels is near 0-cost resourcewise while you still need to have ingredients to make a cake from the instructions.

>>6436058
I did read, they're only bound by the user agreement due to the affirmative consent required.

>>6436079
I'm an artist so people already think I'm a schizo for trying to make money by drawing. Their opinions of me can't get any lower!

>> No.6436146

>>6436110
You said they were special because they simply used fake accounts. You ignore the “and”, “as well as”, and “also”.

>> No.6436156

>>6436110
Just because it's not compressing images in the same way OTHER things do, doesn't mean that's not what's happening. You can read in the image that, and I quote:
>given an image x ∈ R^(H×W×3) in RGB space, the encoder E encodes x into a latent representation z = E(x), and the decoder D reconstructs the image from the latent
And you can see from the image that what the latent diffusion model is doing is exactly what I said earlier, encoding and decoding an image to match the original completely. I mean, it even SAYS that it's compressing the image in the damn paper, man.
The methodology or means of doing so don't matter, if you put in data and it spits it back out practically unchanged, then it's explicitly recording the data. And latent diffusion models do this for EVERY SINGLE IMAGE. You're just getting caught up in pointless semantics here.
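
For reference, here's the rest of that first-stage notation written out. The z = E(x) part is straight from the quoted sentence; the decoder line and the downsampling factor are my paraphrase of the same section of the paper:

% First-stage autoencoder of a latent diffusion model, notation as in the quoted paper:
\[
  x \in \mathbb{R}^{H \times W \times 3}, \qquad
  z = \mathcal{E}(x) \in \mathbb{R}^{h \times w \times c}, \qquad
  \tilde{x} = \mathcal{D}(z) = \mathcal{D}(\mathcal{E}(x))
\]
% The encoder downsamples by a factor f = H/h = W/w, and the round trip is
% approximate (x-tilde is close to x, not equal to it) -- the "lossy" part
% both sides keep arguing about.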

>> No.6436170

>>6436156
It's not pointless semantics though. You have to be precise.
The actual rights granted by copyright are EXTREMELY limited, and what you are proposing by trying to claim the difference "doesn't matter" only expands the ability to claim violations on less clear-cut grounds. And doing THAT hurts artists more than it helps them. Copyright infringement is - by its nature - more advantageous to those with the money for an army of lawyers. Making it even less clear only slants things in that realm harder.

I don't know about you anon, but I don't really want a world where the laws are even more fucking convoluted and oppressive.

>> No.6436179

>>6435947
Anon... it was all maths
you will never be an artist

>> No.6436182

>>6436170
Yes it is pointless semantics. If you mathematically transform image data into another kind of data in a way that the process is reversible and you get the same data back, that's pretty clear cut and simple. Even if the compression is lossy (and in fact I have heard latent diffusion referred to as a type of lossy compression before) you're still getting the same thing back. You might as well be arguing that, after you compress a batch of files with WinRAR, none of the original files exist in the .RAR file anymore, or that if you take a raw image file and save it as a .jpeg it's not the same image anymore. In a very, VERY persnickety technical sense you may be right, but in terms of what is happening in effect, that's not true at all.
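
Side note on the WinRAR vs .jpeg comparison: the two behave differently on a round trip, and that difference is the whole fight. A tiny self-contained demo (standard-library zlib for the lossless case; crude bit-masking standing in for a lossy codec like JPEG, which this is obviously not):

import zlib

data = bytes(range(256)) * 64               # 16 KB of toy "image" bytes

# Lossless (RAR/zip-style): decompressing gives back the identical bytes.
packed = zlib.compress(data, level=9)
assert zlib.decompress(packed) == data      # exact, bit for bit

# Lossy (JPEG-style, crudely imitated by throwing away the low bits):
# the round trip gives back something close, but not identical.
quantized = bytes(b & 0b11110000 for b in data)
assert quantized != data                    # information was discarded for good
avg_err = sum(abs(a - b) for a, b in zip(data, quantized)) / len(data)
print(f"lossless round trip: identical; lossy round trip: average error {avg_err:.1f}/255")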

>> No.6436183
File: 6 KB, 226x223, 87F5FC12-E191-432E-9961-FFC921583741.jpg

Sure is a lot of
“How do you do fellow artists, yeah ai is bad but dont you think we’re going a bit too far? you dont want the big bad corpos to win do ya? Lets just forget about the whole copyright thing :)))” going on lately.

>> No.6436192

>>6436170
You keep avoiding any mention of the latent space, and this is what is key here. Does the latent space not exist? Are the images not contained in the latent space? Everything that I read or watched on the subject says that both of these statements are true, and you aren't doing anything to disabuse me of the notion.

>> No.6436203

>>6435921

AI brainlets are delusional

https://www.youtube.com/watch?v=Ias8qaUylQ8

enjoy reading the comments

>> No.6436207

>>6436170
>If I take a .png and through a lossy compression algorithm save it as a .jpeg it is no longer the same work
This is what you are arguing.

>> No.6436212
File: 1.24 MB, 725x2850, well there it is.png

Holy shit, here it is, this is exactly what I've been saying this entire time! It really does store all of the images and interpolate between them! That's all that they do! FUCKING YES! HOLY SHIT GET IN HERE GUYS, I THINK WE'RE GOING TO BUST THIS THING WIDE FUCKING OPEN

Source is https://towardsdatascience.com/understanding-latent-space-in-machine-learning-de5a7c687d8d

>> No.6436221

>>6436203
Why is it always an Indian?

>> No.6436224

Why is /ic/ the only board that's pozzed by twitter trannies?

>> No.6436228

>>6436212
Explain this to a retard like me
Are you really saying that this whole time “ai art” was basically just a huge glorified RAR file?

>> No.6436229
File: 285 KB, 571x344, kjg-doodle.png

>>6436182
>In a very, VERY persnickety technical sense you may be right

Welcome to law dipshit.

>>6436192
To my understanding the images don't actually "exist" in the latent space except as a series of parameters used to construct new images. But I could be wrong.

>>6436183
I've been anti-copyright since before the AI crap. That's why I use Krita.

>>6436207
If you deep fry an image enough it does become a new work desu. I know that's not what you're really implying but there is a point where something has been edited enough to make it new.

>>6436212
It doesn't store "the images" anon. You're reading above your pay grade.

>>6436179
I already am an artist - and not an AI artist. Been shopping for copics to try to dip my hand into using them for colors cause I like the look. You should try giving less of a shit, it'll improve your art.

Unrelated: Look at this cute girl KJG drew.

>> No.6436231

>>6436224
>twitter trannies
Last time I checked they've been spamming the entire site with their AI shilling bullshit, I'm not sure what you mean anon.

>> No.6436232

>>6435921
How are the copium reserves doing?

>> No.6436240

>>6436203
I really hope you don't listen to this guy. I don't hate AI art but every other argument this guy makes is a shitty transparent straw-man, no wonder nobody is buying any of his arguments.

Have you tried forming your own opinions, instead of getting them from youtube brainlets?

>> No.6436244

>>6436229
>I've been anti-copyright since before the AI crap
Right
Into the toilet you go

>> No.6436249

>>6436170
AI doesn't learn like humans, it memorizes how to denoise images and interpolate them. The fact it does it "from scratch" doesn't change anything. I mean, the original trained image isn't there, but if it can perfectly reproduce it again, that's the same as storing it.

If you memorize an essay from a colleague, word by word, then type it yourself later, it doesn't matter that you didn't directly copy and paste it.

>> No.6436250

>>6436232
How much money have you made producing AI generated art?
Any? At all? Tell me who's coping here.

>> No.6436255
File: 572 KB, 405x619, 863.png

>>6436229
>it doesn't store "the images"
It stores the images as a lower-dimensional representation in the latent space. The only sense in which it's not "the image" is that it's in a completely different format and it's extremely compressed. I mean, that article is in pretty damn plain english my man. At this point I feel like I'm re-enacting that scene from spongebob where man ray is trying to give Patrick's wallet back, holy shit.

>> No.6436261

>>6436249
You are correct. But as long as you (the tool operator) aren't outputting said copies and trying to claim it as your own, then nobody actually cares.

>>6436250
Why are anti-ai artists obsessed with the idea of raw AI art outputs being used to make money? Simultaneously they keep screeching about losing jobs but then laugh when AI tards aren't making any money with their crap.

It's one or the other, either the handful of spiteful reactionary AI tards are right and they're replacing us or they're wrong and not actually a threat (this is the actual answer, so you don't have to burn 400 calories trying to figure it out.)

>>6436244
Anon I think you need to tell the pajeets that instead.

>> No.6436264
File: 65 KB, 699x261, Screenshot from 2022-12-23 19-54-34.png

>>6436229
I mean, how much more fucking plain does this get? Do you mind explaining how this is different from what I said?

>> No.6436265

>>6436261
>no im one of you i swear sirs
Youre not new

>> No.6436268

>>6436255
A "lower dimensional representation" is not THE IMAGE you dumbfuck.
The logic you are using would consider a traced silhouette of 2B as the same as the image of 2B it was traced from. They're not the fucking same and your entire theory is just trying to go
>"ITS THE SAME EVERYONE OWES ME MONEY FOR ANY USE OF MY SHIT FAIR USE IS FAKE FAKE KILL IT I WANT CONTROL I HATE AI ART I WANT THEM ALL DEAD FUCK EVERYONE WHO DISAGREES IS A TRANNY PAJEET TECHBRO INHUMAN MONSTER GIVE ME MY ART CONTROL"

>> No.6436270

>>6436250
I don't need to make money on that, I have a real job, and I spend a little money on that to make a cool rig I use to play games after work and proompt while I'm busy at work. Didn't you people always say that art should be for fun? Why are you mad at me having fun with art?

>> No.6436272
File: 137 KB, 822x1200, FjbnIs8XwAAIf2O.jpg

>>6436261
>But as long as you (the tool operator) aren't outputting said copies and trying to claim it as your own
That is indeed my stance right now. If the output doesn't look like a direct copy, ok. Now, people that make models based on 1 specific artist, or use img2img, should be treated as tracers.

>> No.6436279
File: 2.14 MB, 3519x2589, media_FjYoAyLakAAUAU2.jpg

>>6436272
I don't have a problem with tracers honestly. It just comes across as low-value and I'd never pay another artist for a comm where a significant portion is traced.
That said a few panels in a larger comic are almost always gonna be traced lol

>> No.6436290

>>6436268
Fucking emad is losing it lmao

>> No.6436292
File: 27 KB, 699x139, Screenshot from 2022-12-23 20-03-53.png

>>6436268
Ah, so you ARE just being extremely fucking pedantic. Just like I thought. So even though the whole entire purpose of a latent diffusion model is to compress an image down to a tiny fraction of its size and bring it back just as it was before, you're actually going to try and take this angle? Hilarious.
According to your logic, you really would argue that a .RAR file doesn't contain any of the files that were put into it.
>UM, ACKSHUALLY YOUR HONOR, THIS 2GB .RAR FILE DOES NOT ACTUALLY HAVE ANY CHILD PORNOGRAPHY ON IT, IT'S A COMPRESSED REPRESENTATION OF THE CHILD PORNOGRAPHY THUS IT IS NOT THE SAME, HEH, I AM VERY SMART, YOU HAVE JUST BEEN OWNED BY FAX AND LOGIC
I don't know, but I don't think a shit ass argument like THAT would hold up in court. FUCKing lol and lmao, gottem.

Oh, and I really do love this choice of wording in the article.
>generated images are not technically independent of the original data sample
I mean, damn, zamn, i dunno about you but that sounds like an issue, nawmsayin'?

>> No.6436298

>>6436261
I don't give a shit about art theft unless it takes my job. There is literally no issue with AI art, because it will never take my job. ergo, nothing to cope with. If anyone actually gets economically displaced by a glorified search engine then that's entirely their own fault.

The real problem is the horde of AI white knights who go around picking fights with windmills. AI reactionaries are a bit annoying, but what's more annoying are people who want to act like their lack of talent and inaction is somehow a better move than honing a skill for decades because suddenly AI art exists and now they can make a PNG appear by typing words.

They will go on someone's twitter account, outright say this as if they really believe it (they don't), this pisses off artist X, and now artist X is anti-AI because their first exposure to a supporter of AI art is one of the dumbest, most obnoxious people on earth. Simple.

Those are the dumbass grifters who are riling up this whole discussion and writing the narrative that AI will replace real artists in the first place.

>> No.6436302

>>6436292
If you think that's the logic then you really need to learn to read better.

And correct, they're not independent of the data sample. Because they're made from the data sample. Are you really this dumb? Do you not understand anything about copyright or art ethics?

Do you know what appropriation is? Did you skip the explanation of de minimis earlier?

Of course these use art for other purposes, that was never a contention. The contention was whether the use of any individual piece of art was enough to constitute infringement. And it's not, and you're proving that it's not with your articles. The only reason you seem to think that this is evidence in your favor is because you think:
>ANY use AT ALL, and ESPECIALLY if it's being used to make money is a copyright violation!
When this is 100% NOT the case.

>> No.6436326

>>6436302
I can read just fine. The article is in plain english, very simple and easy to understand.
>the image is encoded to the latent space
>the image can be decoded from the latent space exactly as it was put in (except for some data loss since the compression is lossy)
>every 'new' image is just an interpolation between two or more images
>therefore the dataset contains every single image input in full as well as every instance of interpolation between them
>when prompting, nothing 'new' is created, you just did the equivalent of a google search
This is very easy to understand, and it cannot be refuted.
If you did not create the image, you do not own the copyright.
If your program contains assets that are copyrighted, you cannot use it for profit. The dataset, containing all of the images it was trained on, is affected by the copyright status of its contents. Very simple. I highly doubt any new laws or changes to law would need to be made to come to this conclusion.

>> No.6436352

>>6436231
No just look at art station all the ones complaining about A.I. art are trannies.

>> No.6436361

>>6436326
>If your program contains assets that are copyrighted, you cannot use it for profit
100% wrong.

>> No.6436367

>>6436279
>That said a few panels in a larger comic are almost always gonna be traced lol
If that were true you'd actually post some examples of large comics being traced.

>> No.6436372

>>6435921
They don't literally store the images in the model, they store weights. The weights are way smaller in size and compressed. You're not going to be able to mine copyrighted images out of the weights. You might be able to generate something similar to an existing image with prompts, but it'll never be exactly the same as what was input. So when people say it's compressed, it's a (very) lossy compression.

>> No.6436377

>AIfags say that AI generators don't take images even though people can look up in a few seconds that the AI generator developers said so themselves that they take images along with every other AI generator from the past

What's with AIfags expecting people to believe obvious lies and expecting them to be unable to google simple things in just a few seconds?

>> No.6436380

>>6436377
they obviously were trained on real images, but it's irrelevant because the distributed model uses weights. No actual images are inside the model. Training is a reductive, compressive process.

>> No.6436381
File: 934 KB, 1533x874, 1662726535640711.png

>>6436372
>oh god oh god Im fucking retarded hurr hdurf fuck hurr durr durrfff

>> No.6436384
File: 1.73 MB, 3112x2060, f7b.jpg

>>6436367
Greg Land is a really really easy example but most are less obvious about it.

>> No.6436386

>>6436381
none of those are the original image, and there's nothing against derivative art in copyright law

>> No.6436388
File: 46 KB, 776x602, Waynes-World-Get-A-Load-Of-This-Guy-Cam.jpg

>>6436361
Sure I am. Try using a copyrighted image or song in a piece of software that you make for profit and see if you don't get a copyright strike from it. Fair use is a thing but it's very limited. I highly doubt that a data laundering scheme wherein you mathematically interpolate a shit ton of images to create 'new' images to use for the purpose of profit is going to fly.

>> No.6436390

>>6436388
the model used to create art does not ship with any copyrighted images
this discussion is moot desu, we'll just have to wait for the supreme court's interpretation at some point

>> No.6436391

>>6436384
That's the only example you faggots ever use

>> No.6436393

>>6435921
Yes you mong. Have you not noticed the blatant shill campaign going on since fucking summer? They're snake oil salesmen nothing more

>> No.6436399

>>6436388
>copyright strike
Wake me up when I actually get sued, not kicked off a platform because they're trigger-happy about the DMCA.

>mathematically interpolate a shit ton of images to create 'new' images to use for the purpose of profit
Yea I'm legally allowed to do this, lol. Try again.

>> No.6436402
File: 704 KB, 610x693, cave.png

>>6436380
Lemons are obviously used to make lemonade, but it's irrelevant because I used a squeezer. No actual lemons in the lemonade. Squeezing is a reductive, compressive process.

>> No.6436410

>>6436361
No he's right. According to the FTC at least.

https://www.weil.com/-/media/files/pdfs/2021/ftc-orders-destruction-of-algorithms-created-from-unlawfully-acquired-data.pdf

>> No.6436417

>>6436380
Not this guy but if the question is of "whether or not this qualifies as copyright infringement" just Imagine trying to explain this to a judge.

Pedantry doesn't matter here, the substance of the claim is that copyrighted works go in, infringed transformative works come out.

Copyright is not a cut and dry issue, cases are interpreted from an often ethical standpoint and well, seems like these arguments are a lot of "technical" and not a lot of "ethical."

Read more about transformative works cases and the basis they use to define a lot of the infringement. A lot of people losing these fights in court have much, much better arguments than these..

>> No.6436419

>>6436402
Mr. Johnson, the accurate analogy would be analyzing the DNA of thousands of samples of lemons across multiple continents, creating an averaged genome for a lemon from the data, and then cultivating a lemon in a lab from this new DNA, which would then be juiced.

>> No.6436420

>>6436410
Oh shit nigger lessgooooo

>> No.6436425

>>6436410
Read again:
>from UNLAWFULLY ACQUIRED data
As long as the data was lawfully acquired it's fine.

>> No.6436428

>>6436425
These retards can't read, what do you expect

>> No.6436429

>>6436417
well at the substance of the matter - can someone copyright a style? Because that's what's being done here primarily. Create X in the style of Y.
I imagine not given the sea of Sakimi-chan clones out there

>> No.6436430

>>6436425
>yes, your honor, I legally acquired this sprite sheet of megaman from the internet and thus my using it in my fangame that I sold for money constitutes fair use

>> No.6436433
File: 74 KB, 640x480, 1656672948704.jpg

>>6436425
>As long as the data was lawfully acquired it's fine.
Sure is.

>> No.6436434

>>6436425
it wasn't lawfully acquired, they laundered the data by scraping it under a non-profit research license then using it in a for-profit product for a for-profit company

>> No.6436436

>>6436429
>style
Completely irrelevant when what latent diffusion models do is interpolate between images. It's not even close to the issue at hand.
>>6436434
Gottem.

>> No.6436437

>>6436425
>>6436433
>>6436434
And that's the beauty, you cannot prove the generated image was generated using copyrighted material unless you get the particular model that was used, and even then you cannot prove anything without having access to the training dataset, get fucked

>> No.6436438

>>6436417
A lot of the arguments are "technical" because it's ethical to use art to make other art, as long as the resulting art isn't infringing.

The tool itself is what people are getting stuck on, because a lot of people are thinking that the tool "must" be doing something wrong, because it's allowing people to (in their view) "cheat at art." A lot of people see the PROCESS of art as important, especially if making art is something emotionally taxing for them, and so if you skip that then to such people you MUST be cheating, it HAS to be fake. It CANNOT be allowed because it would mean that their activity isn't defined as they formerly believed.

It's an attempt at preserving reality as they know it, because it's being threatened by other information.


That's why I don't mind AI art. The process was never painful for me, I have fun making art and I love the art I have made. I can empathize with these people BUT I wholeheartedly disagree that their viewpoint is grounded in reality and I think it's a particularly unhealthy and toxic mindset. If I were in their shoes I'd be better off if someone knocked me out of that horrific mentality, which could require ego death/rebirth or other shit lol.

>> No.6436439

>>6436434
Yeah that 2nd issue is the big one. Conducting shady and unethical practices as a nonprofit is bad enough but once money's involved the suits get real pissed.

>> No.6436440

>>6436437
>well, I actually AM stealing but you can't catch me NYAHAHAHA
Ah, so the mask comes off. The AI Jew reveals his true nature after all.

>> No.6436444
File: 672 KB, 825x906, 1671731794805859.png

>>6436437
Well I guess the AI must be sentient aliens then gg Emad

>> No.6436445

>>6436434
That's not what data laundering is. Data laundering is taking unlawfully acquired data and then republishing it in ways that make it "fine."

What you're actually describing is "for-profit company funding nonprofit research, because they benefit from that" which has ALWAYS been fine.

>> No.6436448

>>6436440
I don't represent others, but I don't give a shit, I pirate software, so I will also pirate artworks, even if for profit, and the most you can do is to cry

>> No.6436452
File: 352 KB, 861x715, snapshot835.png

>>6436444
That study was done by digging up the exact tags on each image in a much smaller dataset. They admitted that they couldn't "scientifically" determine how a normal prompt would work for the purposes of testing.

>> No.6436455

>>6436440
>Sir, this "Cure-All" hasn't cured my cancer I don't think it works-
>Well you can't PROVE it doesn't work

>> No.6436458

>>6436444
Read what you send you disgusting plebeian, they were only able to reproduce these results because they had the model, good fucking luck doing anything about model mixes, like the feature SD webui offers. And no, you won't win the court case with "well akshually it was generated so it MUST have used copyrighted material"

>> No.6436465

Funny how much faster they scramble once you point out the lawmen

>> No.6436471

>>6436437
>you cannot prove anything without having access to the training dataset, get fucked
Good thing LAION5B is public then

>> No.6436476

>>6436445
Interesting how many of those companies used licensed Disney and Nintendo media

>> No.6436483

>>6436471
And what about that fact? Do all models use LAION5B? The official stable diffusion model for sure uses it, you have a shit ton of other models. Besides, fucking prove that an AI generated image you seen on internet used a model that was based on LAION without having the metadata of the image. You people have designated shitting streets for your brains

>> No.6436489
File: 13 KB, 1118x119, 3000 hours on google dot com.png.png

>>6436438
Ah, you're right, it's not an ethical or technical issue. It's a legal one. Because changing someone else's art until it's unrecognizable and then calling it yours is illegal. Written plain as day.

From what I understand, even taking the most conservative and technical arguments that you people have made, the model compresses images into a weighted model, uses said weights to create an image that is a combination of several, probably copyrighted, images' weights, and then spits one out.

Is there any part of this process that you wouldn't define as "changing" another person's work? Maybe through some annoying semantic arguments, but courts don't tend to like those.

At best, generated images could be blanket considered transformative, but then it becomes a case-by-case basis on whether or not they infringe on the works of an original. And in the case of stylistically identical works? I think the court would tend to side with the owner of the original copyrighted works. And good luck trying to pass them off as "satire," read up on some infringement cases and tell me how often that works.

To clarify, I don't *believe* AI art is inherently a problem, it's a tool and like any tool there are positive and negative uses. I *believe* that it can and will be used by bad actors to infringe on the works of other artists, akin to tracing and theft, and that it's important that everyone understands that they are equivalent.

I don't think anyone would complain if websites or the law enforced putting a watermark or tag on any images generated by machine learning, and I don't think you can conceive of any negatives to that which wouldn't be hypocritical to your argument here. If it's really not cheating, then why would people have a problem tagging it as AI generated?

>> No.6436490

>>6436483
Buddy, you don't understand that without the LAION models contributing an actual FUCK TON of data points inside the latent space, you can't generate shit worth a damn. You saw what happened when all they did was remove some of the tags referring to certain artists, right? It completely FUCKED the efficacy of SD to generate good looking images.
If the LAION5B dataset gets fucking rekt, good luck scraping millions and billions of images and then training them on your own, you fucking idiot. Without those massive datasets you don't have shit. That's what we're going after.

>> No.6436491

>>6436483
You think law enforcement won't be able to prove that an image was generated using a specific model?
>so what if I murder someone, as long as I hide the knife nobody will know I did it kek

>> No.6436493

>>6436458
Do you know what a subpoena is?

>> No.6436496

>>6436493
Maybe if you explain it to him in prompts.

>> No.6436502

>>6436490
Dude really thinks that LAION5B hasn't been downloaded locally a fuckton of times, and even if it got taken down from the official site it wouldn't be spread around by torrents. Besides, with the pace AI is being developed at, I wouldn't be surprised if alternatives to LAION started popping up, especially in shithole countries that don't give a fuck. And as for existing models, you can quite literally cry about them, as that's the only thing that you'll realistically achieve.
>>6436491
They have no method of proving that I used model X without kicking in my door and decrypting my SSDs

>> No.6436504
File: 94 KB, 560x390, cariou-prince.jpg

>>6436489
Go look up fair use and de minimis. Go look up what "appropriation art" is.

Substantial similarity is a huge amount of copyright law. And the bar is really really weird since cases like cariou v prince (picrel) were considered noninfringing.

Again the main thing is that you have to ignore huge portions of the ENTIRE FIELD OF ART that use more portions of any individual piece than AI does, and are considered 100% fine.

>akin to tracing and theft
Tracing isn't bad and "art theft" properly only refers to physically stealing art pieces

>> No.6436511
File: 141 KB, 724x624, violator3.png

Hey fellas, you wanna hear something REALLY fucking funny? Okay, so, hear me out: if a dataset is an archive of every single image it was trained on, plus every single step of interpolation between all of the images...
Riddle me this, what happens when you have pictures of small children inside the same dataset as all kinds of nudes and porn?
Oh yeah, we're talking about possibly some of the biggest archives of CP the world has ever seen being freely shared around the internet, being used by certain companies for profit, etc. This is some JUICY fucking shit right here.

>> No.6436513

>>6436490
And maybe by then most artists will have developed half a brain to keep their work off of image hosting platforms with neatly tagged galleries and on their OWN websites and vehemently send DMCA takedowns for re-posters. People need to start valuing their data and privacy yesterday.

>> No.6436515

>>6436511
Stop getting aroused, no wonder you want to reverse engineer the models

>> No.6436518

>>6436502
The point isn't to make it impossible to use the programs trained on these datasets, the point is to make it ILLEGAL. Most people have the ability to murder or severely hurt somebody, that does not mean they are all running around killing each other every day. Why? Because most people care about following the law.

>> No.6436523

>>6436502
It wouldn't come to that. Any judge or jury could make the visual comparison plain as day. All the plaintiff would have to do is find the 4 or 5 images that your model used as the primary sources, generate their own example using another or the same model, then you're fucked anyway.

You seem young and impressionable. Maybe you should use your confusion as to how the legal system works to rob a bank or something. Not like those stupid lawyers could ever prove it was you, right?

>> No.6436525

>>6436518
>use the programs trained on these datasets
Small correction, it's the dataset itself that's the issue. If, say, the dataset contained no copyrighted data then there would be no problem whatsoever.

>> No.6436528

I think you should probably deal with the existing culture of rampant stealing in the art industry before you start fighting with the AI.

>> No.6436531

>>6436511
Medical records were already found in the dataset and that's completely illegal to use. I'm expecting cp to be found any day now.

>> No.6436534

>>6436017
Imagine for a minute it was only Greg Rutkowski's work that was trained on: royalty-free photos, and nothing else but Greg's portfolio, with a super tagging system for all his images. Rather than shitty tags, each image is tagged and bagged, with coordinates for those tags and whatnot. Then this model is used to create interpolations of his work.
Selling these interpolations would not fly under fair use. If someone did, it would be "stealing", in the sense that Greg could sue for damages. No company would ever try such a thing, they would lose in court 100% of the time.

Why is it suddenly ok to do as you add more artists to the mix? It doesn't change the underlying theft when you scale up the issue, it just obscures it.

>> No.6436536

>>6436534
right it's like agreeing murder is bad but genocide can be ignored

>> No.6436537

>>6435980
>>6435987
Latent space is basically the "memory" of the ML model. It's the data store of common features from the training data, e.g. a face has eyes, nose, mouth, has this shape, etc. If you really simplify ML down to the basics, ML is trying to find the optimal averages of various features of a subject and then storing those so it can roughly recreate the subject later. When you have millions of features the ability of an AI to generate various images becomes vast. This guy's entire argument is that ML can only generate something that it's been trained on but humans somehow produce outputs using data that they've never stored and is magical in that way. There's no real precedent for this sort of idea. Consciousness is a huge mystery for AGI researchers so there's no merit to making a claim one way or another. It's just a bullshit appeal to emotion and mysticism, e.g. "human creation should be deemed unique".

This is even funnier since the 3D art industry has been using procedural generation tools for over a decade. Some of the algorithms used are arguably AI, in the field of automated planning. Not to mention all those 3D smut artists with those juicy bouncing titties, which is done through physics simulations
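
If anyone wants a concrete toy version of "storing common features instead of the data itself", PCA is the textbook example. It's vastly simpler than anything inside a diffusion model, so treat it as an analogy only: you keep a handful of learned directions plus a few per-sample coordinates, and reconstructions come back close but not exact:

import numpy as np

rng = np.random.default_rng(1)
# 500 toy "images" of 64 values each, generated from only 3 underlying factors plus
# noise -- standing in for shared features like "has eyes, nose, mouth".
factors = rng.standard_normal((500, 3))
mixing = rng.standard_normal((3, 64))
data = factors @ mixing + 0.05 * rng.standard_normal((500, 64))

# "Training": find the few directions that explain most of the variation.
mean = data.mean(axis=0)
_, _, components = np.linalg.svd(data - mean, full_matrices=False)
basis = components[:3]                       # keep 3 of 64 possible directions

# "Latent" code for one sample, and its reconstruction from that code alone.
x = data[0]
z = (x - mean) @ basis.T                     # 64 numbers -> 3 numbers
x_hat = mean + z @ basis                     # 3 numbers -> 64 numbers again
print("latent size:", z.shape, " reconstruction error:", round(float(np.linalg.norm(x - x_hat)), 4))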

>> No.6436541

>>6436523
Impressive, I really want to see them get the model I used then, or anything similar, to generate anything looking similar. Maybe you have any bright ideas how they'd manage to do that with the pile of more and less popular models floating around, not to mention own mixes?
>>6436518
So you want to make it illegal to train neural networks on the LAION dataset, which is cool and all, and I'd also like to see LAION cleaned up a bit from all the private things that may have ended up in it, and republished, but what about things like what NovelAI did? Training their own model with the help of LAION AND their own dataset, and keeping everything private? LAION won't matter much if you use your own copyrighted images for retraining.

>> No.6436542

>>6436531
You're not picking up what I'm putting down. I'm saying that just by merit of photos of children and photos of pornography being in the SAME DATASET, CP is AUTOMATICALLY GENERATED by merit of how latent diffusion works. Remember, PROMPTERS DON'T CREATE ANYTHING, they only bring up what is already in there.
If you want to see evidence of cheese pizza being in a dataset, go look in the stable diffusion generals on /b/ and see what pedophiles are dredging up out of the dataset and posting of their own volition.
>>6436536
>one death is a tragedy, a million is a statistic

>> No.6436545

>>6436531
They're not illegal. People freaked out but as long as HIPAA law was followed properly for their release then there's nothing wrong with using them.

>> No.6436546
File: 206 KB, 771x804, Yes.png

>>6436537
>human creation should be deemed unique

>> No.6436548

>>6436534
If the outputs were different enough it would 100% fall under fair use. I don't know why you guys keep pushing this unless your understanding of copyright is based on youtube's overzealous system for responding to DMCA complaints, which is WAY stricter than actual law.

>> No.6436549
File: 23 KB, 887x274, duhrr just google it .png

>>6436504
Amazing! You've done the absolute bare minimum. Too bad you've made some classic legal mistakes, including: cherrypicking precedent, false equivalency, and the assumption that this will be decided with one or two cases. Please review for me the four criteria with which courts decide fair use, and answer the following questions. Return the worksheet by tomorrow.

What is the purpose and character of an individual generating an AI image, attempting to pass it off as their own work, and never revealing that it was AI generated?

What is the nature of an illustration, particularly in the context where the "fair use" work is also an illustration? (All generated images are generated in the same medium in which they are trained, IE a digital image.)

Certainly, one infringed work may qualify as a de minimis influence on a generated image, but what if said image is generated from exclusively copyrighted works, each likewise contributing? Would the generated image not then be 100% composed of copyrighted works?

Do you think there is perhaps a compelling argument for affecting the market and value of an artist's work if someone else begins generating art identical to theirs?

Thankfully, courts don't just decide these cases based on single, cut and dry pedantic arguments. They carefully consider every factor! How many of the above factors do you think would compel a judge to perhaps, side in favor of an original artist?

>> No.6436551

>>6436542
>If you want to see evidence of cheese pizza being in a dataset, go look in the stable diffusion generals on /b/ and see what pedophiles are dredging up out of the dataset and posting of their own volition.
Uuhhhh I'd rather not. Aren't the fuds surveilling /b/ and /pol/ 24/7 anyway? What the fuck are they doing?

>> No.6436555

>>6436537
>It's just a bullshit appeal to emotion and mysticism, e.g. "human creation should be deemed unique".
Anon ... the reason why people argue this is because we live in a world made BY humans FOR humans. It's not an "appeal to emotion" to demand that we should not equate tasks performed by machines to the human concept of inspiration. Machines don't have rights, laws about fair use don't apply to them. Any "thinking" the machine does is just the process of it trying to perform its task.

>> No.6436557

>>6436542
Except that the images don't actually LITERALLY exist until you make the program generate it. And if someone does that, it's on them.
Just like how a bunch of pixiv artists got banned recently because they made 3d shota/loli art that was using models which weren't stylized enough. That doesn't mean that honey select or blender should be banned.

>> No.6436558

>>6436548
Nope. Can I overlay copyrighted photos at 50% opacity and sell them as my work? Fuck no.
So why should interpolating between them count as fair use? LOOK UP THE FAIR USE LAWS BEFORE ARGUING FURTHER! There are 4 points to consider. Go now! Read all those points. There is a 5th, "are you a bad person" (really), that caused garbage pail kids to lose in court against cabbage patch kids, despite being parody.

>> No.6436560

>>6436537
You're not getting it. Algorithms interpolate between images, humans don't do that. What we create is based on what we have seen but is still OUTSIDE of the raw images that we have seen. Just look at how any human can stylize and create things completely unlike what we have seen in real life. ML algorithms can't do that at all, it's not even remotely comparable to what humans do. It's just factually incorrect to compare the two.
>>6436551
>what the fuck are they doing
implicating themselves, that's what they're doing. They're so desperate for validation and community and feel like they're "anonymous" enough that no one is going to do anything about them. Which to an extent is true, I guess.
>>6436557
>the files inside of a .RAR don't actually LITERALLY exist until you unpack them, yer honer

>> No.6436565
File: 343 KB, 512x1814, fair use.png

>>6436558
Here, I'll post a screenie for everyone's benefit.

>> No.6436568

>>6436549
The purpose of an AI image is whatever the person is generating it for. It literally is their work, they may not get copyright for the image under current copyright office policy but they still can claim they made it.

> Would the generated image not then be 100% composed of copyrighted works?
Yes this is fucking allowed. Collage is already allowed! Again - IF you took a 2x2 pixel square from thousands of images and collaged them it's passing that test! And AI uses less of any individual work!

Copyright is ABOUT THE RIGHT OF REPRODUCTION PRIMARILY! Mixing a lot of images together to where no individual one is easily represented by the resulting image is 100% fine.
IF you think this SHOULD NOT be fine then you're advocating for destruction of the entire fucking internet, death of memes, death of fucking simple shit like YOUTUBE POOPS and other inane bullshit (which is protected not SOLELY on the grounds that they're parody), not to mention "serious" works like collages.

>Do you think there is perhaps a compelling argument for effecting the market and value of an artist's work if someone else begins generating art identical to theirs?
No because it's 100% fine to make art in "the style" of someone else. It's NOT fine to affect the market for any SPECIFIC INDIVIDUAL PIECE under copyright law, but if for example I decided to really grind and learn to paint like RJ Palmer because I find him to be a cunt, and wanted to outcompete him in the same field, with the same subjects, but with less whining and less mammalian dinos, there's nothing stopping me from doing that.

And to show how individual images are used, the whole Prince silkscreen that Warhol did was decided to be infringing in great part because the market for the photo it was based on was literally the same magazine cover where both were commissioned, but the magazine never told the photographer that her art would be used in that way (which affected her rate).

>> No.6436570

>>6436558
>Can I overlay copyright photos at 50% opacity and sell them as my work? fuck no.
If you overlay enough of them to where they make a weird effect that looks nothing like the originals, yes you can. People do this! You are an ignorant fuck!

>> No.6436572

>>6436565
So, riddle me this- what happens, when your work is made up of countless other copyrighted works, for the use of profit because you can't be bothered to learn to fucking draw?
>b-b-but muh collage yer honer! It's l-l-l-like a collage, it's art, it's fair use!!! AIEEEEE
I don't know, when I see a collage, I kinda understand that it's collage, see? Kinda different from trying to algorithmically photobash a shit ton of images together to pass off as your own work, yeah?
Point four though, MMMM boy that seems like a pretty spicy point right there. Flooding the market with your shitty algorithmically derived bootleg bullshit and fucking with the original creators' livelihoods, devaluing artistic expression? Hoooeeee. I don't know about this one, AIsisters. Looks pretty dicey to me!

>> No.6436575

>>6436568
>but they still can claim they made it.
But the algorithm made it, they just requested it. You wouldn't claim you "made" an image that you found with a google search, either

>> No.6436576

>>6436568
>generating
It already existed inside the dataset. You did not create the generated image, the algorithm did :)

>> No.6436578

>>6436560
The latent space is not anything human-readable and contains no image data. There is not some near-infinite set of output images just sitting around in existence, you absolute schizo.

The entire fucking model is 4 gigs!
It was bad enough when twittards were claiming that it contained literally every input image, but now you're trying to claim that every fucking potential output image exists in there as less than a byte of data each. What the fuck

We are at copium levels I never thought possible.

>> No.6436581
File: 7 KB, 256x256, kavcwjrjvzudj9k24oam.jpg [View same] [iqdb] [saucenao] [google]
6436581

>former hedge fund manager who spent years trading oil for the government
>had a hand in government dealings with the pandemic and middle eastern political affairs
>has said himself he has a team of marketers on his discord in order to increase the demand of AI generators thus get more funding

>> No.6436582

You GATEKEEPERS are calling for SKILL SEGREGATION this is why everyone hates artists!!!

>> No.6436583

>>6436572
>I don't know, when I see a collage, I kinda understand that it's collage, see? Kinda different from trying to algorithmically photobash a shit ton of images together to pass off as your own work, yeah?

The point is that a collage uses A GREATER PROPORTION of any individual copyrighted materials than AI-generated art.

>devaluing artistic expression
Prove "artistic expression" has been "devalued." Explain what you mean by that. Because nobody fucking does, they just take it for granted because they have a fucking persecution complex.

>> No.6436587

>>6436560
>Algorithms interpolate between images
No, they literally do not do that. That's a misinterpretation at best, or at worst the graph did its job and misled your understanding. For one, the images aren't there to interpolate. The entire point of the latent space is to break down immense data sets into a smaller set of data that is not only optimal to store and access when generating outputs, it's also a collection of averaged data from the training data. Your data doesn't exist in its original form anymore in the latent space anyway. It's already been optimized into oblivion by the model engine because it's not efficient to store a million unique images and then look through all of them and scramble bits and pieces together in real time. You can hypothetically reproduce a facsimile of a single image by asking for very specific parameters in the prompt, which would indicate overfitting on a small dataset, but more generally you'll get a new image that's a jumble of feature soup that the model has deemed a suitable representation of the prompt. IF that twitterfag is being honest about the "Afghan girl" prompt (unlikely) and that's what midjourney produced for a generic prompt, then it was simply a poorly trained model. ML is closer to monkeys reading millions of books and compiling various recurring plot points, themes, words, phrases, etc. into an encyclopedia, in a way that's easy for them to understand, and then using the encyclopedia to produce what you want.
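If you want to poke at the "optimized into oblivion" part yourself, here's a rough sketch of encoding one image into SD's latent space through the diffusers library (API names from memory, the filename is a stand-in, treat it as illustrative rather than gospel):

```python
# Sketch: how much a 512x512 image shrinks when pushed into SD's latent space.
# Assumes the huggingface diffusers AutoencoderKL API; filename is a placeholder.
import torch
import numpy as np
from PIL import Image
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")

img = Image.open("some_training_image.png").convert("RGB").resize((512, 512))
x = torch.from_numpy(np.array(img)).float() / 127.5 - 1.0   # scale to [-1, 1]
x = x.permute(2, 0, 1).unsqueeze(0)                         # (1, 3, 512, 512)

with torch.no_grad():
    z = vae.encode(x).latent_dist.mode()                    # (1, 4, 64, 64)

print(x.numel(), "numbers in,", z.numel(), "numbers out")   # 786432 vs 16384, ~48x fewer
```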

>> No.6436596

>>6436583
You think people prompting for images in Greg Rutkowski's style using a generator trained on Greg Rutkowski's art are not harming Greg Rutkowski's future market?

>> No.6436598

>>6436581
I wonder how much he's seething about Disney getting involved

>> No.6436600

>>6436587
If stable diffusion is trained using images of only monkeys, is it gonna be able to generate an image of an elephant?

>> No.6436602

>>6436570
I meant to type "two images" (hence the 50%)

>> No.6436604

>>6436596
The "market" test is not based on the artist, it's based on the individual piece.

Honestly it's probably boosted his value because nobody outside of his fucking niche market of concept art knew who the fuck he was beforehand.

But besides that, I keep seeing people misunderstanding copyright as protecting "an artist's body of work" or "artists as a class" for some fucking reason. Stop doing it. That's not how it works.

>> No.6436605

>>6436568
>It's NOT fine to affect the market for any SPECIFIC INDIVIDUAL PIECE under copyright law,

Can the AI generate images nearly indistinguishable from an original piece, or that share undeniable traits of the artist's original piece?

"Style" cannot be considered, but "elements?" Absolutely. And AI doesn't use "style," it doesn't know what style *is,* but it knows the elements. Elements of an original are what are used to produce a new image, not style.

Curves, color choice, individual splotches or light sources. Anything the AI latches onto as part of a visual pattern. These are often, if not *always* taken wholesale from the original work, just masked through the layering of other elements.

> Collage is already allowed

This is a circular argument. Collage is only allowed if the collage itself qualifies as fair use, which would be determined using the aforementioned factors.

>Mixing a lot of images together where no individual one is easily represented

I would agree with you here, except it is extremely arguable whether or not generated images easily represent their constituent parts. I don't think you can say in good faith that you can't, with some digging, determine which images went into many generated works.

>destruction of the entire internet

You refuted this point yourself by mentioning parody. Youtube poops are easily defended as parody, as are memes, etc. What defines parody is plain to see, and while some AI works would probably qualify, to say that they all could be treated as parody based on the *tool that is used to create them* is asinine.

>> No.6436606

>>6436578
>y-yer honer, the .RAR file is not human-readable and contains NO image data!
Back on this one again, eh?
>b-but how could all the big image numbers fit into a smaller number, that's crazy you're crazy there's no way that can be real!!
Back on this one again, eh?
>>6436587
>Well it's not LITERALLY doing that, it doesn't LITERALLY have the images in there, just LOWER DIMENSIONAL REPRESENTATIONS, and because you can't see them with your eyes and they aren't TECHNICALLY LITERALLY THE SAME means that they are totally different! haha see I win because I'm a fucking pedantic piece of shit AHAHAHA!
Both of you can see >>6436212 and you can kiss my sweet honey glazed and buttered ass. We've been over this before, I'm not doing it again.
Those talking points ain't workin' anymore, sister.

>> No.6436608

>>6436555
>Machines don't have rights, laws about fair use don't apply to them
Yea, instead the operators do. AI will go to court and most likely win in the long run because humans have been using human-operated generative tools for ages, there's precedent for it. Those trees in your favorite game? Procedurally generated by a piece of software the game studio licensed. Those realistic textures for aliens in that generic hollywood blockbuster? Generated by a huge branching tree of steps a computer followed to poop out an image that was slapped onto the model later on. This category of tools isn't new; they've existed for a long time, but you didn't know about them because they were operating in different but adjacent industries.

>> No.6436609

>>6436604
>Honestly it's probably boosted his value
lmao nice cop-out

>> No.6436610

>>6436578
https://youtu.be/YQ2QtKcK2dA?t=626
is diffusion a kind of compression? Emad thinks so.

>> No.6436613

>>6436600
No. Similarly, a person who has never seen an elephant will find painting an elephant a pretty challenging task.

>> No.6436615

>>6436608
The actual algorithm itself isn't the problem. It's the usage of copyrighted data fed into them that is. Stop fucking deflecting, please and thank you UwU *nuzzles u*

>> No.6436617

>>6436608
>Those trees in your favorite game? Procedurally generated by a piece of software the game studio licensed. Those realistic textures for aliens in that generic hollywood blockbuster? Generated by a huge branching tree of steps a computer followed to poop out an image that was slapped onto the model later on.
And were the tools generating those trees and textures trained on other people's copyrighted images?

>> No.6436619

>>6436608
Procedural generation isn't based on a massive database of illegally sourced copyrighted images.

>> No.6436621

>>6436613
>uhm well if a human-
Let me stop ya right there, faggerino. An algorithm isn't a human and human concepts don't fucking apply to them. Again, stop fucking deflecting.

>> No.6436624

>>6436587
>IF that twitterfag is being honest about the "Afghan girl" prompt (unlikely) and that's what midjourney produced for a generic prompt
You can go to the midjourney discord and find that image being generated.

>> No.6436625

>>6436600
If a human has only ever seen monkeys his entire life will he be able to draw an image of an "elephant"? If he's never seen color will he know what color name matches with which color he sees? If he's been blind his entire life and only known shapes by touch will he know what they look like? We know the answer to the last question actually, it's no. He won't.

>In 2003, Pawan Sinha, a professor at the Massachusetts Institute of Technology, set up a program in the framework of the Project Prakash[8] and eventually had the opportunity to find five individuals who satisfied the requirements for an experiment aimed at answering Molyneux's question experimentally. Prior to treatment, the subjects (aged 8 to 17) were only able to discriminate between light and dark, with two of them also being able to determine the direction of a bright light. The surgical treatments took place between 2007 and 2010, and quickly brought the relevant subject from total congenital blindness to fully seeing. A carefully designed test was submitted to each subject within the next 48 hours. Based on its result, the experimenters concluded that the answer to Molyneux's problem is, in short, "no". Although after restoration of sight, the subjects could distinguish between objects visually almost as effectively as they would do by touch alone, they were unable to form the connection between an object perceived using the two different senses. The correlation was barely better than if the subjects had guessed. They had no innate ability to transfer their tactile shape knowledge to the visual domain.
>They had no innate ability to transfer their tactile shape knowledge to the visual domain.
They had to learn the connection
>However, the experimenters could test three of the five subjects on later dates (5 days, 7 days, and 5 months after, respectively) and found that the performance in the touch-to-vision case improved significantly, reaching 80–90%.[9][10]

>> No.6436627

>>6436605
>Can the AI generate images nearly indistinguishable from an original piece, or that share undeniable traits of the artist's original piece?
Yes, and if you AS AN INDIVIDUAL do this and post it you're liable. That isn't a problem with the tool since you can do the same by right-click->save an image.

>Curves, color choice, individual splotches or light sources.
No that's not how it works! I can go copy the lighting style, color pick, and study the shape language of a frame from Star Wars Ep 1's pod race scene and make a new non-Star Wars image out of these things, and Disney can't fucking sue me for it because it'd be nothing like the fucking Star Wars frame.

>This is a circular argument
It's not but you struggle to follow logic so I'll spell it out.
You are saying that AI-generated art crosses the line into not-being-fair-use. However, OTHER FORMS OF ART are very firmly considered to be fair use.
AND these other forms of art use MORE of any one particular image than AI art does. Therefore, your claim of "AI uses enough of its source images to be infringing" is bullshit. If AI was infringing on that line, it would mean undoing decades of precedent by moving the line.

>it is extremely arguable whether or not generated images easily represent their constituent parts.
It has to be determined on a case-by-case. The AI is simply a tool to make images, so if you use it to make images that are infringing then you are... making images that are infringing! But it isn't going to necessarily do that often.
As always, the only thing that matters is the OUTPUT. On a CASE BY CASE.

>You refuted this point yourself by mentioning parody.
No my point was that they don't rest entirely on the parody argument, and destroying the other parts would cause that to crumble.

>> No.6436628
File: 273 KB, 800x600, 800px_COLOURBOX1667281-3714313018.jpg [View same] [iqdb] [saucenao] [google]
6436628

>>6436613
idk this just reminded me of Dürer's Rhino (he never saw one before drawing it)

>> No.6436629

>>6436608
This one's a little different because AI corpo is up against Multimedia corpo. Guess who judges usually favor.

>> No.6436631

>>6436598
>Disney getting involved
They are? Sauce?

>> No.6436632

>>6436627
>you
>use
>to make images
You don't make images, you just google them
Just a lil' reminder, because you guys keep forgetting that

>> No.6436634

>>6436613
Then it can't create anything outside of what it was trained on. That's what people mean by "interpolation", the point is that it's not creating anything new but just mixing things together based on what it "learned"
>b-but muh humans
Irrelevant, machines don't have human rights

>> No.6436636

>>6436628
>the image is based on an anonymous written description and brief sketch of an Indian rhinoceros that had arrived in Lisbon in 1515.
It's literally based on a sketch someone made of a real rhino.

>> No.6436637

>>6436568
>but they still can claim they made it.
Not the art itself. They can claim ownership of the image, but stating that they did the art is objectively wrong.

If I commission a human artist, the final image is mine, the idea is mine, the characters can be my OCs, but I still can't take credit for the art itself. With AI art it's the same: you did not make the art, you requested the AI to do the art for you.

>> No.6436638

>>6436627
>Yes and if you AS AND INDIVIDUAL do this and post it you're liable. That isn't a problem with the tool since you can do the same by right-click->save an image.
as simple as typing in 'afghan girl'. if you were unaware of the original image, you might go ahead and use this image, sell books and posters with it, then cry when you are ordered to pay damages to the original copyright holder. it's on you for utilizing a program you know used copyrighted images as its training data.

>> No.6436639

>>6436625
Why do you "people" keep justifying a task performed by a machine by comparing it to human learning? Do you want to fuck a robot or something

>> No.6436641

>>6436636
I'd like to see that sketch(I know it's probably gone). I just posted it for interest, not to make a point

>> No.6436642

>>6436610
Explain how there's not just 5 billion full input images but also the infinite amount of potential output images just "compressed" into 4GB you fucking idiot.

And I say infinite because the amount of parameters you can set multiplies the number of potential PROOMPTS with their own combinations of words, word order, negatives, choices of emphasis/deemphasis, formatting and so on to numbers beyond human understanding.

You fucking absolute shitbrain.
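Back-of-envelope, just to put actual numbers on it (vocab size, token limit and seed range are ballpark assumptions, not exact figures):

```python
# Rough count of distinct (prompt, seed) inputs for an SD-style model.
# Ballpark: ~49k-token CLIP vocabulary, 77-token prompt window, 32-bit seeds.
vocab_size = 49408          # approximate CLIP BPE vocabulary size
max_tokens = 77             # prompt length the text encoder accepts
seeds = 2 ** 32             # typical RNG seed range

prompt_combos = sum(vocab_size ** n for n in range(1, max_tokens + 1))
total_inputs = prompt_combos * seeds

print(f"~10^{len(str(total_inputs)) - 1} distinct prompt+seed inputs")
# Comes out around 10^371 -- and that's before CFG scale, step count, samplers, etc.
```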

>> No.6436643

>>6436634
To put a finer point on it, what it "learned" was how to recreate the images fed into it and how to associate patterns in the data with words. And the latter "learning" is only used to search for data points matching the pattern.

>> No.6436644

>>6436598
Disney has their own in house AI technology they're working on but it's not getting the attention of stablediffusion and midjourney. They'll shut those guys down and keep developing theirs.

>> No.6436645

>>6436639
>Do you want to fuck a robot or something
this is what it all boils down to. By pushing back against this, we are pushing robot waifus further into the future. they will deny it of course, but who denied it, supplied it.

>> No.6436647

>>6436637
The wonderful fields of installation art and Found Object art disagree with you.

>> No.6436649

>>6436615
>>6436617
>>6436619
If you're the original poster posting >>6435980
then you're just shifting goalposts. If you're not, why are you responding to a totally different conversation?

The point made by the twitterfag in the pic boils down to
>generated images aren't legitimate and just stealing
See: >>6436560
>Algorithms interpolate between images, humans don't do that

If you have a problem with the copyright that's fine and reasonable. But companies will simply work around it with the power of money and licensing. AI won't simply disappear because some people opted out due to lack of compensation. AI is projected to be the next trillion dollar industry, it'll come one way or another. There's too much money involved.

>> No.6436650

>>6436642
Ok, easy. Images share features, color, texture, composition, with statistical variation, yada yada. if you store only the differences. you get 200tb into 4gb. bam.
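Toy version of "store only the differences" so it's not pure hand-waving (this is NOT what diffusion actually does under the hood, just the compression intuition):

```python
# 100 near-identical 64x64 images, compressed raw vs. compressed as one base + deltas.
import zlib
import numpy as np

rng = np.random.default_rng(0)
base = rng.integers(0, 256, (64, 64), dtype=np.uint8)
# 100 images that are all tiny variations on the same base image
images = [np.clip(base.astype(np.int16) + rng.integers(-2, 3, base.shape), 0, 255).astype(np.uint8)
          for _ in range(100)]

raw_c   = zlib.compress(np.stack(images).tobytes(), 9)
base_c  = zlib.compress(base.tobytes(), 9)
delta_c = zlib.compress(np.stack([i.astype(np.int16) - base for i in images]).tobytes(), 9)

# the shared structure gets stored once; only the small per-image differences remain
print("raw:", len(raw_c), "bytes   base + deltas:", len(base_c) + len(delta_c), "bytes")
```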

>> No.6436651

>>6436638
Basically yes, that's correct.

>> No.6436652

>>6436631
They're indirectly involved, by proxy, in the gofundme campaign. Some anons connected the fundraiser organizers to them.

>> No.6436654

>>6436650
Cool, so it's not storing images. Now do the outputs.

>>6436652
yea the gfm was trying to get an in with the copyright association, the fucks behind the DMCA, SOPA and PIPA

>> No.6436656

>>6436652
America's trillion $ companies and near trillion dollar companies are all heavily invested in AI. Apple, Google, MS, Tesla, Facebook, nvidia, etc. Disney? $160B.

>> No.6436657

>>6436649
I responded to a separate argument with a separate counterpoint. I have literally no fucking clue what you're on about.

>> No.6436658

>>6436649
>why are you responding to a totally different conversation?
Cause it's a public board, anon

>> No.6436661

>>6436647
Those guys at least studied art theory for years in order to inform their installations and found object art and articulate their creative choices. They would look down their noses at proompters.

>> No.6436662

>>6436654
>so it's not storing images
Idk why you keep repeating this, I didn't see anyone ITT claim that it "stores images"

>> No.6436666

>>6436656
Thing is those companies aren't dumb enough (or desperate enough for money) to publicly release their shit. Anything publicly available for profit is going to be fully licensed and copyrighted as their own. At least, anything we'll know about.

>> No.6436668

>>6436654
But you seem to be forgetting the laaaateeeent spaaaaace
See, the thing is, once you have the original data points, you don't need to have a separate set of values for every single point in between. That's what math is for.
That doesn't change the fact that they're still there. In the laaaatent spaaaaace.
>>6436662
I mean, going by the content of this article >>6436212 and the words of Emad himself, it really truly does, but not as literal discrete .pngs and .jpgs, which is what our friend here just can't seem to get through their thick skull. Because apparently compressing data in a way that's reversible and you can get the input back means that they aren't the same thing anymore.

>> No.6436669

>>6436661
The prompters just have shit skills. Doesn't mean they're not doing the same thing at a much lower value.
Low value art is still art. CWC is a really shitty artist, but still an artist.

>> No.6436674

>>6436654
? you asked for how to compress that data?
diffusion. latent space representation of data points. some images are more represented due to the nature of those points. just because it left out a lot doesn't mean they are not compressed. it's compression. get over it.

>> No.6436675

>>6436668
It all existing purely "in the latent space" for all intents and purposes means it doesn't fucking exist until it's generated you retard.

>> No.6436683

>>6436668
>Because apparently compressing data in a way that's reversible and you can get the input back
this hasn't been demonstrated by any of the examples in this thread so far. It's lossy.

>> No.6436684

>>6436627
Sorry i'm way too tired to make another full rebuttal but
>the only thing that matters is the OUTPUT
I think this is our core disagreement and it would take a lot more time and effort to argue that and I don't really feel like it tonight and this dumb dogshit thread will be closed by tomorrow so who cares

>No my point was that they don't rest entirely on the parody argument

you're right i misread that bit sorry

>AND these other forms of art, use MORE of any one particular image

The reason I consider this circular is because you're cherrypicking precedent to focus on one of the 4 fair use factors without considering the other 3, the amount used of the original image only matters when brought into context of an individual case so you're basically saying it matters because it matters here
sorry had to get the last word on that one bye bye

>> No.6436687

>>6436674
No I'm asking how the fuck you idiots believe there's more output images than we have numbers for "compressed" into the latent space.

The fact you can convert the data into these output images does not mean they already exist just because the math to do so can (but reasonably won't) be calculated ahead of time. That's such a fucking ignorant take on the level of
>"everything is predetermined maaaan nothing mattersss the big bang already set everything in motion maaaaan"

>> No.6436689

>>6436683
It gets the data back from similar "images" (points in the latent space). it's weird compression, but if you get the right prompt you get close enough. the prompt could be long, but still fewer bits than the original image.

>> No.6436691

>>6436684
Not cherrypicking precedent, only stating that you're basically arguing for destruction of the concept by chipping away at each in different ways.
I've addressed all parts of it. It's just that any individual portion of fair use being attacked here is a completely different beast. Some of the attacks ignore that copyright is focused on individual specific resulting pieces, and some just ignore precedent for those pieces. Additionally you don't need all 4 to support fair use, only enough from any of them to do so. Plenty of fair use work isn't parody or educational (which is basically one of the pillars) but is still considered fair use. Each pillar is moreso a shield, and can defend a work against being branded infringement, but the more pillars that fit the easier this is.

>> No.6436692

>>6436687
I believe anon's point is that the output images already exist "in theory". So if you would keep using the same prompt with the same seed, the output would always be the same. This does not mean there are literally images "stored" in the latent space.
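For example, something like this (diffusers API assumed, model id and prompt are placeholders) will spit out the same picture every run on the same setup:

```python
# Same model + same prompt + same seed + same settings -> the same image each run
# (on the same hardware/software stack). Sketch only; model id is a placeholder.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

def gen(seed):
    g = torch.Generator("cpu").manual_seed(seed)
    return pipe("oil painting of a lighthouse at dusk",
                num_inference_steps=30, guidance_scale=7.5, generator=g).images[0]

a, b = gen(1234), gen(1234)
print(list(a.getdata()) == list(b.getdata()))   # True: the output is fixed by the inputs
```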

>> No.6436697

>>6436675
>if I can't see it as a set of pixels in the image space, it doesn't exist! It's just gobbledygook numbers, see! It's totally, completely gone until you generate it, at which point it just automagically exists again! Wow!
>>6436683
That's like saying that because you saved a .png as a .jpg, it's not the same image anymore, even though for all intents and purposes it looks nearly identical. And if you didn't see it already >>6436056 latent diffusion models actually do output images that are nearly 99% identical to the input.
If you have the latent space coordinates of the original images and input them directly, bypassing the text prompt bullshit, then you would get them back completely unchanged (except, of course, for the negligible loss incurred by compression, which I shouldn't have to mention but will anyways).
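Rough sketch of that round trip with the SD autoencoder through diffusers (API and filename assumed, purely illustrative):

```python
# Encode an image once, keep the 4x64x64 latent "coordinates", decode later, compare.
# Assumes the huggingface diffusers AutoencoderKL API; filename is a placeholder.
import torch
import numpy as np
from PIL import Image
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")

img = Image.open("original.png").convert("RGB").resize((512, 512))
x = torch.from_numpy(np.array(img)).float().permute(2, 0, 1).unsqueeze(0) / 127.5 - 1.0

with torch.no_grad():
    z = vae.encode(x).latent_dist.mode()     # the stored "coordinates", no prompt involved
    x_hat = vae.decode(z).sample             # reconstruction from those coordinates

err = (x - x_hat).abs().mean().item() * 127.5   # mean error in 0..255 units
print(f"mean pixel error ~{err:.1f}/255")       # small but not zero: it's lossy
```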

>> No.6436699

>>6436669
>Low value art is still art. CWC is a really shitty artist, but still an artist.
I'll agree with that. This was one of the main fights I had with my undergrad program. They really hated all forms of digital art, functional pottery, and drawings, paintings, and sculptures that depicted representational subject matter even if it was just one element of the piece. It's fine if they consider it bad art, but to say it isn't art at all is pretty dumb. Those fuckers were stuck in the 60s.

>> No.6436701

>>6436697
Image compression algorithm powered by stable diffusion when?
Finally we'd get something better than jpg.

>> No.6436705

>>6436692
Yes, thank you. Although not in theory, in fact. They're just "hidden" behind mathematical operations, the outputs of which are immutable. This has nothing to do with causality or some philosophical bullshit like that.

>> No.6436709

>>6436701
If they changed it up a little, it literally could do that. Remove the functionality of interpolation from the algorithm, record a list of coordinates for each of the images, and bam. That actually would be pretty fucking cool, you could archive the entirety of human created media this way. But no, let's make a stupid fucking toy and grift people with the technology instead.

>> No.6436710

>>6435921
I use tools to create art.
I am an artist.

One of my tools is AI.
I am an AI artist.

Deal with it.

>> No.6436718

>>6436692
like .kkrieger, the 96kb game. The game is there, in the 96kb. it just uses other stuff to pull it out.
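Same idea in miniature (pure numpy toy, nothing to do with kkrieger's actual code):

```python
# A 512x512 "texture" that is never stored anywhere: only the recipe + a seed exist.
# Re-run it and you get the identical image back, which is the whole procedural trick.
import numpy as np

def texture(seed, size=512):
    rng = np.random.default_rng(seed)
    yy, xx = np.mgrid[0:size, 0:size] / size
    img = np.zeros((size, size))
    for _ in range(8):                        # sum a few seeded sine waves -> plasma-ish pattern
        fx, fy, phase = rng.uniform(1, 20, 3)
        img += np.sin(2 * np.pi * (fx * xx + fy * yy) + phase)
    lo, hi = img.min(), img.max()
    return ((img - lo) / (hi - lo) * 255).astype(np.uint8)

a, b = texture(42), texture(42)
print((a == b).all())   # True: identical "asset" every run, zero image data stored
```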

>> No.6436720

>>6436705
You are making a philosophical argument on the same level of "the canvas of babel has already got every image in existence so nothing new can be made as it already was made by those researchers."
Simply the fact that we know how to produce a thing mathematically does not mean we already have that thing in existence. You can quite literally calculate the brush strokes for creating some image in photoshop purely by math, and then run a tool to make those things happen automatically but that doesn't mean that the resulting art exists already.

>> No.6436721

>>6436701
>>6436709
https://towardsai.net/p/l/stable-diffusion-based-image-compresssion

>> No.6436723

>>6436710
>One of my tools is AI.
One of them, not the only. I don't have a problem with this.

>> No.6436727

>>6436710
>AI artist.
That's an oxymoron.

>> No.6436731

>>6435930
i am screeching and seething

>> No.6436736

>>6436720
The art produced by the model already exists within the latent space, and the model is simply a tool that we are using to explore and discover that art.
The latent space is a mathematical representation of the input data, and it captures the underlying structure and relationships within the data. It contains a wealth of information about the input data in a compressed form, and it represents the input data in a lower-dimensional space.

By using the model to generate new images, we are simply accessing and exploring the latent space, and we are finding images that already exist within that space. The model is not creating new art, but rather it is uncovering art that already exists within the latent space.

>> No.6436740

>>6436720
>philosophical argument
no it's not. Each dataset contains a LIMITED AND MATHEMATICALLY DEFINED SET of data. This data is fixed and immutable. We know what's in it because we know what we put in there.

Just because WOW WOW IT'S SO BIG MY LITTLE MONKEY MAN BRAIN CAN'T COMPREHEND IT WOOOOOWW THERE'S SO MUCH STUFF IN THERE doesn't mean that's not true. "Every image in existence" my ass. The only type of "infinity" in a dataset is the "infinitesimal", like the infinite amount of increments between two real numbers.

>> No.6436742

>>6436736
What this guy said. Put it far more eloquently than I could.

>> No.6436748

>>6436268
>The logic you are using would consider a traced silhouette of 2B as the same as the image of 2B it was traced from
That would be a traced image, and the person who traced it would be called out as a tracer.

>> No.6436749

>>6436721
Hey, that's pretty cool.

>> No.6436755
File: 1.26 MB, 710x3130, latentspace2.png [View same] [iqdb] [saucenao] [google]
6436755

>>6436721
This article is really interesting and does a lot more to explain exactly what the latent space is and how the data is stored. You're very helpful, anon.

>> No.6436759

>>6435947
AI-niggers don’t need to be creative. All they need is to consume creative work and regurgitate it back, like the very AI itself. I don’t think they will mull over AI replacing them in the next few years. They are too diverse in employment, so one-by-one replacement will just make them switch targets. Also, btw, AI art is not new. It has existed since 2015 and earlier. Pattern recognition software was able to create photorealistic landscapes from simple colors and shapes, and creating entire pictures with good composition was also possible for many years, though those images were littered with random patterns of spirals and rainbow colors. Photorealistic people and even original fursonas and anime girl designs were possible with AI 2-3 years ago. Style transfer able to copy art styles has also been possible for more than a decade. AI music has existed since 2016. The only revolutionary thing that appeared in early 2021 was CLIP technology, which allowed using prompts for creating images.

>> No.6436768

>>6435921
That last one is pure cope. Guy still thinks all AI can do is big titty anime pics. It can copy his art style too in just an hour of training.

>> No.6436769

>>6436087
>Have you ever fucking spoken to a ML engineer?
Can’t say I have, and I’ve only seen public-facing people like Emad and that Evans guy who interviewed on Proko, and some discord fuckos ooohing and aaahing at AI art.
So yes it’s fair to say I’ve never spoken to an ML engineer or even seen them speak about how their work is being used publicly.
Unless they are, as a group, trying to get funding like that Unstable Diffusion kickstarter that wanted everyone EXCEPT artists to weigh in on ethics.
So I’ve never seen them be pro-artist or be horrified at anything that’s been happening, no.

>> No.6436772

>release an image generation model using an artist's works, designed to ripoff their style
>"heh, take THAT, artist!"
>release a model using that person's publicly shared photos of themself
>NOOO YOU CAN'T DO THAT, THAT'S MY FACE!!
Why are they like this?

>> No.6436774
File: 15 KB, 348x345, Screenshot 2022-12-24 174343.jpg [View same] [iqdb] [saucenao] [google]
6436774

>>6436755
Ok, so all the images could be in the data set?! We don't know for sure, the math isn't human readable.
You could fit 250tb of images into 4gb with this level of compression. Why all the lying, AI-bros?

>> No.6436775

>>6436736
>>6436740
Have you ever used one of these fucking things? The amount of variation you have is so big it might as well be infinite as any individual person would never be able to see every single fucking image it could produce in their lifetime.
You're making the argument that because the earth is going at a stupidly fast speed around the sun, everyone is automatically speeding.
Nobody gives a shit that your definition of "exists" includes things that literally do not actually exist. There are no images until they are produced using the AI. The latent space isn't a real place, it's a representation of the potential combinations of input data, and is in fact missing parts because the prompt is entirely decided by the human operator.

You are trying to use a language trick to make it seem like it's "all in existence" and that we're just "exploring" it but it literally does not exist, you're just extending a metaphor.

You are in fact worse than the techbros saying that it "literally learns like a human with actual artificial neurons and is going to start thinking on its own in 2 years"

>> No.6436777
File: 179 KB, 972x1727, 1671750322939562.jpg [View same] [iqdb] [saucenao] [google]
6436777

>>6436769
>like that Unstable Diffusion kickstarter that wanted everyone EXCEPT artists to weigh in on ethics.
The pure seethe directed toward artists in that fundraiser was palpable. Funny how art is never a real job, but they needed money to... tag art?
I guess Arman Chaudhry is butthurt about being a community college zoomzoom, stuck in New Jersey, living with his parents.

>> No.6436781
File: 571 KB, 704x448, 20221010135516_2565871040.png [View same] [iqdb] [saucenao] [google]
6436781

>>6436775
I have. I got 10k images out of it during the SD beta. I used GAN+CLIP notebooks last year. I was also in the DALL-E 2 beta in April. I have a local install of SD too. "Might as well be infinite"? What do you mean? You will never find a coherent hand grasping a tool correctly in there, nor an old man knocking out a horse with a punch.

>> No.6436791

>>6436781
>>6436775
Maybe because you retards use old as fuck models. Grab the latest SD, make a burner account for Midjourney and try out the latest stuff for yourself. Also don’t forget to add --v 4 to the prompt on Midjourney. Also, there were already SD pics of an old man punching a horse on /g/ specifically because of that challenge.

>> No.6436793

>>6436768
Yeah it's always funny when people say "if your art was good AI couldn't copy it! Make something AI can't do!" It can copy literally any art unless it's sealed away in a storage unit. Even things as complex as Hieronymus Bosch. If you fed it enough pictures of conceptual art installations it could make that too. One day when AI video is out we'll see AI performance art.

>> No.6436796

>>6436781
"Might as well be infinite" in the sense that you can keep generating new stuff until you die of old age and never see every potential image that could exist. I doubt that you could even automatically generate every image with every set of words and parameters in any reasonable amount of time either.
It doesn't mean the outputs will be accurate to every idea that springs to mind, but it does mean that it can take those inputs and output something unique. I messed with SD yesterday and had a horrible time trying to get it to output anything visually related to MGS2 for example. None of them were anything like what I was thinking of, but still generated unique images.

In general I've found these tools to be horrible for trying to generate certain specific things, as it's a crapshoot as to whether you'll figure out a way to make something workable. But again, they still use the operator inputs to output "something" each time.

>> No.6436798

>>6436781
>Also, there were already SD pics of an old man punching a horse on /g/ specifically because of that challenge.
Never saw those, when was that?

>> No.6436805

>>6436791
>use old as fuck models.
Don't see how that changes the facts. "Exploring the latent space" is a perfectly accurate way of describing getting images from it. Yes, the space does not "exist", it's math. Does the game kkrieger "exist" in the 96kb?

>> No.6436806

>>6436791
>Also, there were already SD pics of an old man punching a horse on /g/ specifically because of that challenge.
pics?

>> No.6436812

>>6436805
It's not an accurate way of describing it at all.
Again with the language tricks. kkrieger exists but any specific elements of the gameplay do not exist in those 96kb until you're actually playing it. Similar to how "an AI art generator" exists but none of the art you make with it exists until you generate something

>> No.6436815

>>6436806
I checked 2 sdg around the time of the challenge and there was nothing. I doubt anyone got one without img2img, but would be happy to be shown otherwise.

>> No.6436818

>>6436229
>To my understanding the images don't actually "exist" in the latent space except as a series of parameters used to construct new images. But I could be wrong.
The real question is "does it matter"? When a painting is digitized it's turned into 0's and 1's and only a specific algorithm or set of algorithms can decode that for it to be visible on a screen.

>> No.6436819

>>6436812
>"Exploring the latent space" is a perfectly accurate way of describing getting images from it.
language tricks? are you esl by chance?
I didn't pull that term out of my head, I got it from machine learning experts, the people who built this tech you are using.

>> No.6436823

>>6435921
Why do people care so much, is it unironically boiling down to simply profit driven fear?
The ai produces good coom material at low effort and at the same time can do visually/aesthetically pleasing images.

Why does it make so many people upset? It's just content at the end of the day.

>> No.6436824

>>6436721
Hey thanks for sharing this.
This is actually the first tangible proof I've seen that the checkpoint could, in theory, be compressing images, though it seems like this implementation is only about 20% more efficient than jpg. So it doesn't fully explain how you get all those billions of images in a 4GB checkpoint.
Then again, it could be storing only some of the training data while it "forgets" the less influential images.

>> No.6436825
File: 233 KB, 512x512, 20220830162740_2677781675.png [View same] [iqdb] [saucenao] [google]
6436825

>>6436818
>When a painting is digitized it's turned into 0's and 1's and only a specific or set of algorithms can decode that for it to be visible on a screen.

Yes, that's correct. When a painting or other type of image is digitized, it is converted into a series of 0s and 1s, which is known as binary data. This process is often done using a process called sampling, which involves measuring the intensity of the colors in the image at regular intervals and representing them as a series of numbers.

To view the digitized image on a screen, a specific set of algorithms is used to decode the binary data and to create a visual representation of the image. These algorithms use the binary data to determine the colors and other visual characteristics of the image and to display them on the screen.
It is important to note that the process of digitizing an image involves a loss of information, as the original image is sampled and approximated using a finite number of bits. As a result, the digitized version of the image may not be an exact replica of the original, but rather a representation of it.
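Small illustration of that loss, if you want to see it for yourself (PIL/numpy, the filename is a stand-in):

```python
# Digitizing = sampling + quantizing, and common formats throw information away.
# Round-trip a picture through JPEG and measure what didn't survive.
import io
import numpy as np
from PIL import Image

original = Image.open("painting_scan.png").convert("RGB")   # placeholder filename

buf = io.BytesIO()
original.save(buf, format="JPEG", quality=75)               # lossy encode
decoded = Image.open(io.BytesIO(buf.getvalue()))

a = np.asarray(original, dtype=np.int16)
b = np.asarray(decoded, dtype=np.int16)
print("mean per-channel error:", np.abs(a - b).mean(), "out of 255")  # nonzero: a representation, not a replica
```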

>> No.6436834

>>6436818
You're getting somewhere. Yes it matters, the file actually exists and your computer is not a big "infringement machine" because it can decode the image data of a .jpeg to display an image. But that .jpeg file still exists as a concrete, finalized set of image data.
The latent space doesn't actually CONTAIN any image data. It contains the "potential" for image data.

>>6436819
ML dudes pitching shit for investors and dumbing things down in ways that don't actually match reality is not really what you wanna be doing.

If it worked the way you are assuming, then everything would just break, legally. As I said before - any art program can be said to have a limited set of potential images it can make, and you only need to figure out the math and parameters to produce those.

That's why this line of reasoning is nonsense. You are literally saying "things that don't exist actually exist because we can use a computer to calculate how to make them exist." This ignores things like LINEAR TIME.

>> No.6436835
File: 723 KB, 448x704, 20220830181040_2437861923.png [View same] [iqdb] [saucenao] [google]
6436835

>>6436824
>"forgets" the less influential images.
I'd think so, if some points in the latent space are close enough, just merge them. like when you make a model and decimate it by merging vertices that are within x units from each other.

>> No.6436837

>>6436823
It makes people realise that all art ever made was just content for fun, coom or looking pretty. All the masters of old were nothing but consoomer content of the old times. AI can produce anything you could, coom and pretty pics are just in demand, so they get more attention than other pics. It’s not that the AI can’t do that, it’s that people ask it to make these pics.

>> No.6436840

>>6436834
>ML dudes pitching shit for investors and dumbing things down in ways that don't actually match reality is not really what you wanna be doing
This is how they talk to each other. Why do you think you know better?

>> No.6436842

>>6436079
This is just horseshit
You can have social media things like IG (request to follow) and fb (only allow friends to see shit) and still have them private; no one thinks any less of you.
These days most normies do this anyway. It's unusual when I add someone on IG and they have a shitload of stuff set to public

>> No.6436846

>>6436840
see >>6436834

>> No.6436847

>>6436774
Could be? We knew that they were in there to begin with, but this gives us a very nice visual representation of what is in there that is very easy for the common viewer to understand. I think we just struck gold here.

>> No.6436849
File: 1.81 MB, 1024x1024, SaucyDoodles_sports_photography_of_a_woman_with_horns_punching__7f9f49df-bf7b-450c-b0f6-7fe384d6c9f7.png [View same] [iqdb] [saucenao] [google]
6436849

>>6436791
>try out the latest stuff for yourself
Well shit... it's over. lmaooooo

>> No.6436851

>>6436842
A shocking number of people I know have their personal IG on public. It's weird and I don't understand why. Putting your accounts on private will prevent random users from stealing your pictures but the site can still sell your data based on terms of service.

>> No.6436852

>>6436572
I learned how to prompt instead

>> No.6436853

>>6436846
see >>6436840

>> No.6436855

>>6436769
Steven Zapata says that he's met a lot of folks from the /g/ side of things that absolutely are on our side.
You're forgetting that the unethical usage of scraped data is going to affect them negatively too. See the Copilot fiasco. I think that this person in picrelated here >>6435980 >>6435987 is an example of a techfag that's got both of our best interests in mind.

>> No.6436856

>>6436853

again: >>6436834
>If it worked the way you are assuming, then everything would just break, legally. As I said before - any art program can be said to have a limited set of potential images it can make, and you only need to figure out the math and parameters to produce those.
>You are literally saying "things that don't exist actually exist because we can use a computer to calculate how to make them exist." This ignores things like LINEAR TIME.

Argue against what is actually stated, fuckhead.

>> No.6436860

>>6436834
>ML dudes pitching shit for investors and dumbing things down in ways that don't actually match reality is not really what you wanna be doing.
All of what you said would be true and would make sense. However, we have the tech here and everyone can try it out and see that it actually works really well and all of their promises are true. Even these professionals of hype are underselling their products and the future of them. We can see right now how influential and powerful these things are, and how quickly they improve. Not just with 2D art, but also with video, code, text, audio, 3D and so on. Creative jobs are not the only ones threatened, not by a long shot. I already used ChatGPT to make me some code for videogames, used it to write all of my assignments and lab protocols, and then asked it for legal advice and got a correct answer. Once it becomes more open how the quality data for ChatGPT is created (it is actually more than 100 times smaller than GPT-3 despite being better at everything, all thanks to an improved way of teaching the AIs) and these smaller trained models get released just like how SD training datasets get sent over the internet, then it is over for so many jobs it’s not funny. Like apocalyptic levels of job loss. Just to give you a reference, I made mods for video games with ChatGPT, something that has a very specific API that is not part of the programming language, and it worked.

>> No.6436861

>>6436855
Zapata is an idiot and grifter. He's using AI to boost his career by being very vocally opposed to it.
The copilot fiasco is a mess because of the potential use of blocks of code that's under various copyleft licenses that don't play nice with each other, so it's a licensing morass from what I understand. It's not the same as the art one as the amount of any piece of code used is very different from the amount of any piece of art.

>> No.6436865

>>6436860
I'm not talking about their "promises" I'm talking about the "hurr we just are EXPLORING LATENT SPACE" bullshit.

>> No.6436867

>>6435921
Inside a model there are no images. Unless it's overfitting hard, the AI only learns the patterns. This is common sense. These models were trained with billions of images, yet they are only 4 to 5gb. They can even be pruned to half of that.

>> No.6436869

>>6436860
>Once it becomes more open how the quality data for ChatGPT is created (it is actually more then 100 times smaller then GPT-3 despite being better at everything, all thanks to improved way of teaching the AIs) and these smaller trained models get released just like how SD training datasets get send over the internet, then it is over for so many jobs it’s not funny. Like apocalyptic levels of job loss.
That's horrifying. I can't help but think all this argument about AI art is just a distraction for the coming disaster. Oh well I guess I can make cool video game mods and anime titties until society collapses.

>> No.6436870

>>6436861
How is he a grifter boosting his own career by opposing AI? He won’t have a career if AI wins.

>> No.6436872

>>6436870
Because even if AI wins his career isn't going anywhere.

>> No.6436880

>>6436861
if you've ever worked an art job for a legitimate business that isn't just some coomer furry's neetbux then you would know that copyright & image licensing is not taken any more lightly than codershit.
90% of your job is making sure your company doesn't get sued or end up with a pr firestorm. the pure risk of any one of these models spitting out some faggot's donut steel oc or something that is *close enough* is a massive business liability.

>> No.6436881

>>6436861
Zapata is one of the realest niggas out there and loves creating art for art's sake. You don't know a single thing about the guy if you believe that.

>> No.6436885

>>6436856
>If it worked the way you are assuming
When you "explore" the latent space and use a machine learning model to generate new images or other data, the generated data is not literally created from scratch. Instead, it is generated based on the patterns and relationships that are present in the data and captured in the latent space.
In this sense, the generated data is "new" in the sense that it did not literally exist before it was generated by the model. However, it is not "created" in the sense that it is not produced by a creative process or human intervention. Instead, it is generated automatically by the machine learning model based on the patterns and relationships present in the data.

>> No.6436888

>>6436881
The tell tale sign of a grifter is if he ends up selling "anti-AI" t-shirts with very nebulous claims of where that money is going.
Zapata seems like a good guy though so I doubt he'd sellout like that.

>> No.6436894

>>6436880
Completely unrelated because the actual content of what the copilot lawsuit consists of is entirely divorced from the art controversy.

>>6436881
>>6436888
He did the research to make a long ass video and got everything basically wrong. So he's either a grifter (more likely from the amount of effort it would take to get that far and still be wrong) or an absolute fucking moron. But I'll give you the benefit of the doubt and just call him a moron, as a compromise.

>>6436885
It's not "exploring" anything because you can mathematically model almost fucking anything to predict shit the same way. There's mathematical spaces that contain all possible ANYTHING - software, image data, text, you name it. It's extremely reductive to call this all "not creating anything."

>it is not "created" in the sense that it is not produced by a creative process or human intervention
There is your fucking lynchpin. You don't consider people entering in fucking prompts and adjusting the various settings to be "human intervention" which is just plain stupid to try to claim.

>> No.6436896

>>6436885
"The latent space has structure when interpreted by the generator model, and this structure can be queried and navigated for a given model. A series of points can be created on a linear path between two points in the latent space, such as two generated images. These points can be used to generate a series of images that show a transition between the two generated images. Finally, the points in the latent space can be kept and used in simple vector arithmetic to create new points in the latent space that, in turn, can be used to generate images."
sounds like I am right. exploring is not a marketing buzz word for investors.
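That "linear path between two points" part is literally just this (plain numpy, the two latents here are random stand-ins for real encoded points):

```python
# A series of points on a linear path between two latent points, as the quote describes.
import numpy as np

rng = np.random.default_rng(0)
z_a = rng.standard_normal((4, 64, 64))     # stand-in for one encoded latent point
z_b = rng.standard_normal((4, 64, 64))     # stand-in for another

steps = [z_a + t * (z_b - z_a) for t in np.linspace(0.0, 1.0, 8)]   # linear walk a -> b

# each of these 8 points would be handed to the decoder/diffusion step to get one frame
print(len(steps), steps[0].shape)   # 8 (4, 64, 64)
```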

>> No.6436900

>>6436894
>There is your fucking lynchpin
While human intervention is involved in the process, such as by providing prompts or adjusting the various settings, the machine learning model is still the primary driving force behind the generation of new data.

>> No.6436901

>>6436896
There is a mathematical space that contains all possible 4chan posts, but that doesn't mean you're simply "exploring the latent space of retarded statements" when you post your bullshit. You're making new bullshit when you hit post!

>> No.6436905

>>6436900
The primary driving force of it is the human because this shit doesn't do anything on its own. You are doing the stupid thing where you're anthropomorphizing the tool. It's like saying a car is the primary force when you're driving it because "all you do is push some pedals and adjust the steering."

>> No.6436906

>>6436901
>There is a mathematical space
I think this is your problem. You are very hung up on the library of babel shit while the latent space is different.

>> No.6436909

>>6436906
The latent space is a mathematical space derived from the ways the AI generates images. It doesn't actually exist except as part of the conceptualization of the image generation process.

>> No.6436911

>>6436905
>It's like saying a car is the primary force
Oh no. nottuthisushittu again.
"take me to work" car proceeds to drive you to work navigating all the traffic, running over the old lady instead of the child.
>You murdered that lady!
it's true... I drove right into her...

>> No.6436912

>>6436909
It is not a physical space that exists in the real world, but rather a concept used to represent and understand the underlying patterns and relationships within the data.

In the context of generating images using a machine learning model, the latent space might contain information about the various colors, shapes, and textures that make up an image, as well as the relationships between these features. When you "explore" the latent space, you are using a machine learning model to generate new images or other data based on this underlying structure and these relationships.

>> No.6436913
File: 379 KB, 710x1556, exploration of the latent space.png [View same] [iqdb] [saucenao] [google]
6436913

>>6436856
We're not assuming anything because the article here >>6436212 states that this is exactly how it works. The AI "learns" all the images through the diffusion process and saves them as 64x64x32-bit blocks of data, as seen here >>6436721. When you plug in a block of data that is NOT one of those originals, it outputs an interpolation of whatever it is in between. The way that this works is laid out in the keras link below.
>>6436896
Here, check this out. https://keras.io/examples/generative/random_walks_with_stable_diffusion/

>> No.6436920

>>6436912
>>6436913
>you are using a machine learning model to generate new images

Thank you for finally fucking agreeing instead of sticking on that "hurr we're just EXPLORING this, all the images already exist!!" line.

>> No.6436927

>>6436913
my interpretation is that it uses carefully categorized jigsaw pieces that have been generated from millions of images
a form of data laundering, because if you have enough lobotomized "jigsaw" pieces (which are pieced together using statistical averaging), then other related pieces can make up for the deficiencies of a single one, and because of this you can't simply pick out a single piece and call it out as being directly part of a stolen image that was encoded, with some overfitting exceptions

>> No.6436928

>>6436920
I don't know who said the output images "already exist" before they are output, only that they are in theory "stored", as you can take the seed and regenerate the image anywhere with access to the model.
You can EXPLORE the existing latent space by generating "new" images.

>> No.6436932

>>6436913
That's how the VAE works. The VAE is specific to how Stable Diffusion is built, which is why it's getting explained in the article. But there are no images saved in the model. Anon, these 4gb models are trained with billions of images. Do the math.

>> No.6436933

>>6436927
>data laundering claim again
You don't know what data laundering is, stop using this phrase to mean "I think this shouldn't be a legal use of data even though it is."

>>6436928
The chucklefucks earlier in the thread who tried to claim that, since the "latent space contains all the images that could be output", the models themselves were violating laws related to copyright and CSEM by virtue of their existence.

>> No.6436937

>>6436933
if existing laws don't apply then nothing stops legislators from making new laws to catch up with the technology
all this discussion, and discussions like it, need to do is establish the connections from where things stand to where things should be

>> No.6436944

>>6436933
>violating laws related to copyright and CSEM by virtue of their existence.
By existing? or by being used? storing copyright images isn't illegal afaik.

>> No.6436946

>>6436937
I don't think damaging fair use is where things "should be" and I don't think there's a problem with AI as it's not actually a threat to anyone, save a few soulless, low-paying art jobs that people already hate doing. Like the book covers for shitty no-budget novels, or whatever will replace the globohomo corporate art that graphic designers - not illustrators - are already stuck doing outside of their job description.

>>6436944
By existing. The whole argument was from illiterate people who fucking thought they figured out AI's Achilles heel to take it down.

>> No.6436949
File: 8 KB, 86x86, data.gif [View same] [iqdb] [saucenao] [google]
6436949

>>6436932
If you have 2 billion images (laion 5b minus the "low quality images") that are 64x64 pixels and each image is 4 kilobytes (KB) in size, then the total size of the images in gigabytes (GB) would be:

2,000,000,000 images * 4 KB/image = 8,000,000,000 KB
8,000,000,000 KB / 1024 KB/MB = 7,812,500 MB
7,812,500 MB / 1024 MB/GB = 7,629.4 GB

So in this case, the total size of the 2 billion images would be approximately 7,629 GB, or roughly 7.5 TB.

It's important to note that this calculation assumes that the images are uncompressed. If the images are stored in a compressed format, using some newfangled diffusion tech, you might get that down to half that.
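Same arithmetic as a few lines of Python, so anyone can rerun it with their own assumptions:

```python
# The storage estimate above, spelled out. Adjust the assumptions and rerun.
n_images   = 2_000_000_000          # "2 billion images"
bytes_each = 64 * 64 * 1            # 64x64 pixels at 8 bits per pixel = 4 KB per image

total_bytes = n_images * bytes_each
print(total_bytes / 1024**3, "GiB uncompressed")   # ~7629 GiB, i.e. roughly 7.5 TiB
```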

>> No.6436980
File: 963 KB, 720x720, 0_6Nrb168aK7xoaV8b.png [View same] [iqdb] [saucenao] [google]
6436980

>>6436920
But they do already exist, just in the latent space, which is 100% fixed from the outset based on the input.
The libary of babel comparison is not really apt. The library of babel contains every single possible configuration of all 29 english characters, thus theoretically comprises all possible written words, infinite monkey theorem style. It extrapolates from a small number of elements to infinity.
However the latent space of a diffusion model is different. It is a set of X number of images plus all points of interpolated space between. It very much does NOT contain all possible images and pixel configurations, it is a set containing far less than that. The only possible "infinity" is that of a set of all decimal numbers between two real numbers, but this "infinity" is a smaller set contained within the set of ALL real numbers.

For all intents and purposes, everything that a latent diffusion model can generate is finite, and hard limited to what was input into it. If you trained a latent diffusion model on four images, the total set of images it could generate would be the original four images + every point of interpolation in between them. See picrelated for a good example. Everything within the model is derived from the input. All of it was determined from the start. You cannot get anything from outside of the interpolated space but noise. Even if you input billions of images and can generate more images than you could view in a lifetime, a human being can still create something that it cannot.
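The four-image picture as a toy calculation (numpy, the "images" are random arrays standing in for real ones):

```python
# Treat each training image as a point and look at what weighted mixes of the four reach.
import numpy as np

rng = np.random.default_rng(1)
four_images = rng.random((4, 8, 8))                  # four tiny 8x8 "training images"

def mix(weights):
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                  # convex combination of the four
    return np.tensordot(w, four_images, axes=1)      # an "in-between" image

blend = mix([0.1, 0.2, 0.3, 0.4])
print(blend.shape)                                   # (8, 8): a point inside the spanned region

# A genuinely unrelated image (say, fresh random noise) generally is NOT such a mix,
# which is the point about the model being limited to what its inputs span.
```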

>> No.6436985

>>6435977
Ok so I take several images, throw them in a zip file, extract pieces of data from the zip file, and combine them together through some algorithm.
boom I'm an artist

>> No.6436986

>>6436933
>you dont know what data laundering is you just dont ok!

>> No.6436987

>>6436946
>by existing
No, the argument is that everything generated from copyrighted data is still bound by those copyrights and thus cannot be used for profit. No one ever said anything about them violating copyright merely by existing.

>> No.6436989

>>6436949
>that are 64x64 pixels and each image is 4 kilobytes (KB) in size
A 64x64 image at 32 bits per pixel would be 16.384 KB, over 4x larger than your estimate. The latent space also needs to record relational data and metadata for the features, like distances between features. And Stable Diffusion's model isn't just the image latent space; it also contains the prompt model, which relates prompts to image features, which is what your pic is actually referencing. Also, >>6436913
is about a specific interpolation feature between outputs called Latent Space Walking.

You deliberately cut out the rest of the article when even a little more would reveal the actual context.
>Interpolating between text prompts
>In Stable Diffusion, a text prompt is first encoded into a vector, and that encoding is used to guide the diffusion process. The latent encoding vector has shape 77x768 (that's huge!), and when we give Stable Diffusion a text prompt, we're generating images from just one such point on the latent manifold.
>To explore more of this manifold, we can interpolate between two text encodings and generate images at those interpolated points:
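Since the quoted article is just blending those 77x768 prompt encodings, the step can be sketched with stand-in arrays. One detail worth flagging: for high-dimensional latents people often use spherical interpolation (slerp) instead of a straight lerp so the in-between points keep a sensible magnitude. This is a self-contained sketch with random arrays in place of real text encodings, not the article's actual code:

import numpy as np

def slerp(a, b, t):
    """Spherical interpolation between two vectors; keeps in-between points at a sane norm."""
    af, bf = a.ravel(), b.ravel()
    cos_omega = np.dot(af, bf) / (np.linalg.norm(af) * np.linalg.norm(bf))
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if np.isclose(np.sin(omega), 0.0):
        return (1.0 - t) * a + t * b   # nearly parallel vectors: fall back to lerp
    return (np.sin((1.0 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

rng = np.random.default_rng(42)
enc_a = rng.standard_normal((77, 768))   # stand-in for the encoding of prompt A
enc_b = rng.standard_normal((77, 768))   # stand-in for the encoding of prompt B

walk = [slerp(enc_a, enc_b, t) for t in np.linspace(0.0, 1.0, 8)]
# Each element of `walk` would be handed to the image generator in place of a
# single prompt's encoding; the endpoints reproduce the two original prompts.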

>> No.6436992

>>6436987
>>6436946
Think of it this way: if you are working on a free and open source project and include code or assets that are not free and open source and are bound by a copyright license, then distributing that software could be considered a copyright violation, and using said software for profit would DEFINITELY not be legal unless it was ruled fair use for some reason.

>> No.6436993

>>6436989
I was using 64 * 64 * 8 bpp just cos I knew it would come out around 8 TB lol.

>> No.6437002

>>6435921
>>6435980
>>6435987
This is so retarded. Nobody is born knowing everything. A person needs to train to know how to draw, but after training enough we know how to draw anything, and we don't need references anymore. AI always needs references; it's not learning anything. AI doesn't know how to draw Mickey Mouse without pictures of Mickey Mouse. AI doesn't store data of anything but pictures and whatever lines of code tell the program what it should and shouldn't copy.

>> No.6437011
File: 829 KB, 896x512, Palais_blanc_construit_dans_les_montagnes_geantes.png [View same] [iqdb] [saucenao] [google]
6437011

When we get real-time latent space explorer software, you could just stroll on through the temple. It would remember your initial prompt in the context of what to generate next as you press onward. "Latent space explorer" will be a new moniker for those who don't prompt, but instead just explore through joystick inputs.

>> No.6437015

>>6437011
What are you going to walk through in that pic? The conjoined pillars, the melting windows and doorways, or the balconies with random empty spaces?

>> No.6437020

>>6437015
when I get to a wall, it interprets me as an unstoppable force and just smashes through

>> No.6437021

>>6436989
2,000,000,000 images * 64 pixels * 64 pixels * 32 bits/pixel = 262,144,000,000,000 bits
This is equivalent to approximately 33 terabytes (TB) of data.

Here is a hypothetical example of how you might use different techniques to try to reduce the size of a dataset of 2 billion 64x64x32-bit images down to 4 gigabytes (GB):

Color quantization: By analyzing the distribution of colors in the images and selecting a set of representative colors, you might be able to reduce the number of bits required to represent each pixel, potentially decreasing the size of the dataset by 50%. This would bring the approximately 33 TB down to about 16.4 TB.

Image compression: Using a lossy image compression algorithm, you might be able to achieve a further reduction in the size of the dataset by removing redundant or irrelevant information from the images. Depending on the specific algorithm and parameters used, you might be able to achieve a further reduction of 50% or more. This would bring the roughly 16.4 TB down to about 8.2 TB.

Feature extraction: By identifying and extracting the most important or relevant features of the images, you might be able to further reduce the size of the dataset without losing too much information. Depending on the specific features that you extract and the method used, you might be able to achieve a further reduction of 50% or more. This would bring the roughly 8.2 TB down to about 4.1 TB.

Dimensionality reduction: Using techniques such as PCA or SVD, you might be able to reduce the number of dimensions in the data and remove redundancy or irrelevance, potentially decreasing the size of the dataset by 50% or more. This would bring the roughly 4.1 TB down to about 2 TB.
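Of those steps, dimensionality reduction is the easiest one to show actually working. Here is a small, hedged sketch using scikit-learn's PCA on fake flattened 64x64 greyscale images; the 50% figures above are illustrative guesses, and this only demonstrates the mechanics of compressing and (lossily) reconstructing, not the ratios a real diffusion model achieves:

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
images = rng.random((1_000, 64 * 64))   # 1,000 fake 64x64 greyscale images, flattened

pca = PCA(n_components=128)             # keep 128 of 4,096 dimensions (~32x fewer numbers)
codes = pca.fit_transform(images)       # compressed representation: shape (1000, 128)
recon = pca.inverse_transform(codes)    # lossy reconstruction back to shape (1000, 4096)

mse = float(np.mean((images - recon) ** 2))
print(codes.shape, recon.shape, f"reconstruction MSE: {mse:.4f}")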

>> No.6437022

>>6437021
wait, I asked for GB... dammit chatgpt...

>> No.6437024
File: 121 KB, 1289x844, Screenshot_2022-12-23_23-58-40.png [View same] [iqdb] [saucenao] [google]
6437024

So let me know if I understand this:

Programmers feed data into some program that distills each image into a set of parameters, and each distilled image becomes a data point

These data points are arranged in a multi-dimensional space, where similar data points are closer to one another.

Then the user enters in a prompt which creates a point that is somewhere in the multi-dimensional space.

Since that point probably won't land directly on a data point in the data set, it interpolates between data points to generate some kind of merged image from the neighboring data points.

Mathematically, there could be infinitely many data points to interpolate to, although with computers there are far fewer, and points that are close to one another would look essentially the same to the human eye. Which means there's effectively a limited number of points that can be interpolated to.

If I understand this correctly, no new images are being created. Images are just being interpolated between using whatever algorithm you pick. The AI doesn't know anything beyond the data set. It cannot extrapolate, which would create new images, like a human using their imagination. It can't mix and match different pieces of art based on whatever it feels like, the way humans do.

This isn't art and you're not an artist.

>> No.6437032
File: 485 KB, 512x512, 20221005175004_2112329109.png [View same] [iqdb] [saucenao] [google]
6437032

>>6437024
That's generally correct. In a process known as "training," a machine learning model is fed a large dataset of input data and corresponding desired outputs. The model then "learns" to map the input data to the outputs by finding patterns and relationships in the data and adjusting a set of internal parameters to minimize the error between the model's predictions and the true outputs.
Once the model is trained, it can be used to generate outputs for new input data that it has not seen before. The model does this by using the learned relationships and patterns from the training data to make predictions for the new input data.

The process you described, where the model interpolates between data points in a multi-dimensional space to generate a new output, is a common technique used in machine learning, especially in the field of generative modeling. In this case, the model is not creating new images from scratch, but rather combining and manipulating the features of the training data in novel ways to produce new outputs that are similar to the training data.
It's important to note that the capabilities of a machine learning model are limited to the patterns and relationships it has learned from the training data. While it may be able to generate new outputs that are similar to the training data, it is generally not able to create entirely new and original outputs that are not based on the training data. This is because the model has not been exposed to any information beyond the training data and therefore does not have the ability to "imagine" or generate novel ideas in the way that a human might.
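If "adjusting internal parameters to minimize error" sounds abstract, here is the whole idea on a toy problem: fitting a line to noisy points by gradient descent. It has nothing to do with diffusion models specifically; it just shows, in a few lines, what "training" means:

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=200)
y = 3.0 * x + 0.5 + rng.normal(scale=0.1, size=200)   # the "desired outputs", with noise

w, b = 0.0, 0.0       # internal parameters the model will learn
lr = 0.1              # learning rate
for step in range(500):
    pred = w * x + b                  # model's current predictions
    err = pred - y                    # error against the desired outputs
    w -= lr * 2.0 * np.mean(err * x)  # gradient of mean squared error w.r.t. w
    b -= lr * 2.0 * np.mean(err)      # gradient of mean squared error w.r.t. b

print(f"learned w={w:.2f}, b={b:.2f}")  # converges toward the true 3.0 and 0.5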

>> No.6437033

>>6436980
It does not exist. The mathematical model is purely theoretical. The Babel comparison is still constrained by limitations; I forget how long each entry is limited to. The Canvas of Babel is likewise constrained: it "contains" all images of a specified resolution.
Similarly, MSPaint has a hard limit on the potential sizes and pixel combinations on your computer. The set of all possible combinations of pixels in MSPaint on your specific system is a finite, though practically inexhaustible, number of images.

All of this is the same as the fucking latent space in a diffusion model. The images do not exist until they are made, however. You can calculate whatever the fuck you want but that doesn't mean the actual things those calculations point to exist until they ACTUALLY EXIST.

>>6436985
unironically yes, you could make some weird glitch art that way
>>6436986
Data laundering is taking illegally-obtained data and running it through things like "sending anonymous tips to yourself" to give it the appearance of legally-obtained data. If you were to break HIPAA law and upload a photo of a medical record to 4chan anonymously, then went and downloaded it onto another computer to claim you simply "found it on 4chan" then that would be data laundering and illegal.

>>6436987
>everything generated from copyrighted data is still bound by those copyrights and thus cannot be used for profit.
Completely ignorant of art history and copyright-illiterate. Fair use and de minimis bitch, learn it. I can make money off a youtube poop, I can make money off a collage. I can make money off whatever as long as it's within fair use.

>>6436992
Software is not the same as visual art, I'm not really familiar with how software copyrights work in detail but I do know it's different to a large degree and likely has its own set of case law.

>>6437024
Interpolation of images = the creation of new images. That's why people say artists do the same thing - "draw deku in this pose" is just that

>> No.6437036

>>6437032
Humans generating "novel ideas" is only "different" because we have experiences outside of what photos and art we have seen. We are constantly processing shit that goes into our eyes and associating other senses with our visual inputs. Not all of our memories have to do with art either (even for us artists).

The unexpected combinations of memory are where "creativity" really comes from. In that sense humans are just generally interpolating, but using less-understandable, more-chaotic but less numerous datapoints than an AI does.

>> No.6437037
File: 114 KB, 986x1024, A15E9E86-6F53-42DF-9546-02AB87D47DC2.jpg [View same] [iqdb] [saucenao] [google]
6437037

*ahem*
Fuck AI art

>> No.6437039

itt: a bunch of computer retards trying to justify why AI isn't stealing by claiming the brain does the same thing, without understanding how the brain works

>> No.6437040
File: 83 KB, 750x750, c16d7acad0450e43fca84cc3b2009f1a.jpg [View same] [iqdb] [saucenao] [google]
6437040

>>6437033
>Interpolation of images = the creation of new images. That's why people say artists do the same thing - "draw deku in this pose" is just that
Except artists do not do that. At all. Have you never heard of "construction drawing"? Artists work in perspective projection, the projection of 3D objects onto a 2D plane. In fact you might consider drawing to be a reverse engineering of our visual process, converting 2D image streamed from the eyes into an understanding of 3D object within the mind, back into 2D image upon the canvas.
I hope you're not the pro-AI guy who claims to be an artist because there is absolutely no way that you are if you think that artists work by interpolating a huge set of images from their brain. That's pants on head retarded, anon.

>> No.6437043

>>6437033
>>6436985
>unironically yes, you could make some weird glitch art that way
but that's not at all how ai works.
>Interpolation of images = the creation of new images. That's why people say artists do the same thing - "draw deku in this pose" is just that
not at all. artists take ideas and can extrapolate to new ideas.
if you asked an ai to "draw deku in this pose" it could only come up with whatever is in the data set. nothing more. there's no imagination.

>> No.6437046

>>6437039
Not claiming "the brain does the same thing", claiming "artists already use other peoples' shit without permission and it's legal so it's fine if you use AI to do the same thing"

>> No.6437048

>>6437036
>The unexpected combinations of memory
Much of my creativity is from understanding a subject deeply enough that I can extrapolate and invent. Like knowing about specific ways life evolves on earth, then thinking about conditions on a certain planet, that informs my designs of species living there. It's not unexpected, it's deliberate.

>> No.6437049

>>6437040
I literally grab references all the time when I draw, whether they're external to my brain or internal this is how I work. My art education has been focused on constructing shit, learning from other art, building a visual library inside my head and making things look nice, not the proko-esque side of things where you're "feeling the form" and doing a billion gesture drawings and other stuff like that.

Not every artist works the way you do, but plenty work the way I do. Enough of us exist to make copyright law have all these fair use exceptions.

>> No.6437051

>>6437048
That's really autistic, anon. I don't mean that as an insult, I mean that you have described a process which is something I've only ever seen autists claim they do.

>> No.6437052

>>6437051
>Concept artists are all autists
Do you only look at anime or something?

>> No.6437053

Lmao dude it's just images, like pictures n shit
some of you guys take this hobby too seriously I swear
go outside, have sex, please

>> No.6437054

>>6437049
your visual library is not just you interpolating between a bunch of things you've seen.
it's you taking things that you've seen and adding your own creative spin to it.

>> No.6437055

>>6437052
I have legit never met someone who went into such detail who wasn't trying to construct their own world in Tolkienesque depth, and they're all autistic. Most people who want to make some cool setting have some visuals they want to portray and they work backwards from that to justify it as other people ask questions.

>>6437054
The "Creative Spin" comes from other shit I've seen and experienced. Yes I am saying the quiet part out loud and demystifying how creativity works.

>> No.6437058

>>6437032
so theoretically, i could feed in the exact tags, tell the ai not to randomize too much, and I should be able to pull out an image that closely resembles one of the images in the data set?

>> No.6437060

>>6437046
That's an overgeneralization though. Artists draw from a much larger pool of data and furthermore think in three dimensions. They also extrapolate from points of data rather than interpolate. AI generated art is created solely from man made creations, thus what it can create is ALWAYS 100% derivative of that input.
To put it simply-
(Set of all 3D space in the universe)
|
v
(Set of all 2D images as seen by the eyes of humankind)
|
v
(Set of all 2D images created by humankind)
|
v
(Set of 2D images put inside of a dataset) <--- You are here

Humans do not "steal" when they take inspiration from something else, because there is always new data being added via this process. AI on the other hand is a different matter- there is only the images it has been fed and mathematical interpolation between them. They aren't even remotely comparable outside of the realm of extremely general analogy.

>> No.6437061

>>6437058
Even better, if you had the exact latent coordinates of one of the original images, you would get that image (or something nearly indistinguishable from it) back.

>> No.6437062

>>6437040
it's funny seeing ai trannies trying to equate algorithmic machines with humans

>> No.6437067
File: 194 KB, 1123x749, media_FjPWsfnVIAA9sV5.jpg [View same] [iqdb] [saucenao] [google]
6437067

>>6437060
The small amount of "new data" isn't really relevant as long as you're still making something new. A photobash like picrel is a new image made up of other images and is totally fine copyright-wise. And it uses far more of the inputs than AI does.

Heck you could say that, even for pure-unmodified AI output art, there's "human creativity" added by the curation of the outputs into a specific selection of what's shared.

>> No.6437071

>>6437055
>The "Creative Spin" comes from other shit I've seen and experienced. Yes I am saying the quiet part out loud and demystifying how creativity works.
your brain's "data set" is ever changing and is influenced all the time by things that you have and have not seen before. remember your brain is a really complicated chemical reaction that takes data from more than just your eyesight. it can take data from one sensation and turn it into data that resembles another sensation.

an ai's data set is static and can only create things that it has seen.

>> No.6437073

>>6437071
Doesn't matter for the purposes of copyright and art ethics. I keep saying they're not chemically, literally the same. But the part that's relevant is:
USING COPYRIGHTED IMAGE AS INPUT
|
v
MAKING SOMETHING NEW THAT DOESN'T RESEMBLE ANY OF THE INPUTS

At a basic core level this is what a human does with or without AI.

>> No.6437074

>>6437040
>>6436183

>> No.6437076

>>6437073
>MAKING SOMETHING NEW THAT DOESN'T RESEMBLE ANY OF THE INPUTS
it's not new.
it's like saying I created a new number between 1 and 2, I call it 1.345

>> No.6437077

>>6437073
Except when "using" AI it mathematically interpolates all possible images from the given set and you have no agency except picking which image you want out of the data points it presents to you. (You) did not create shit.

>> No.6437078

>>6437076
It 100% is new because no image like that has been produced before.
>>6437077
That's enough agency to qualify as creating art. WELCOME TO ART

>> No.6437080

>>6437078
>100% new because no one's ever written down this number before

>> No.6437081

>>6437080
The image is quite literally new. Unless you're saying "no art can actually be new"? It'd be an interesting way to interpret the line "all art is derivative" but I can accept using that definition for the purpose of discussion if that's what we're going with.

>> No.6437084

>>6437081
yes anon the data point at 1.345 is a creative spin on data points at 1 and 2.

>> No.6437087

>>6437084
>noooo you drawing naruto posed like superman isn't art
>NOO ITS EVEN MORE NOT-ART BECAUSE YOU USED AN AI TO DO IT NOOOO

>> No.6437090

>>6437055
>I have legit never met someone who went into such detail
You've never thought about the process of making something to fit into an established world?

>> No.6437092

>>6437078
Whether it's art or not is irrelevant, "art" is such a meaningless term that you can call literally anything "art". Even googling images from the latent space, apparently, makes you an artist. But it is actually quite safe to say that *YOU DID NOT CREATE SHIT*. You may as well argue that by inputting 2+2 into a calculator and getting 4 that you were the one who solved the problem, when it was the calculator which did the work and output the answer.
And you still don't seem to get the concept that a human artist working within a space several orders higher than what the AI works within makes it fundamentally different. Say that you are a god, and the images you create are your creations. You can see other creations from other gods, and from your position above the creations can create new things that are like, but fundamentally new from, what has already been created.
However, when speaking of an AI, it is like a creation creating creations from creations, and then proclaiming it is a god, when all it can do is work within the bounds of its creation. It cannot create anything that is fundamentally transformative and new.
The AI creates nothing new in the eyes of man any more than man could create anything new in the eyes of God.

>> No.6437096
File: 188 KB, 1385x508, sad-keanu-in-full-plate-armour-wounded-on-the-battlefie.jpg [View same] [iqdb] [saucenao] [google]
6437096

>>6437092
The only reason anyone thinks that any of the images output by AI are "new" is because they are not aware of the source images. However, if you knew what the source images of each output were, then you would be able to put the puzzle pieces together and see what part comes from where. It's like photobashing but without any of the transformative ability of a human being.

>> No.6437098

>>6437090
Most people who do that that I've met are autistic.
The vast majority of my fellow artists just make characters and monsters and ideas and such and then bend the world around them. Hell, even when doing tabletop RPGs I tend to just loosely define a lot of the world parameters and then randomly scatter shit around for the players to discover. The lack of direct sense makes it interesting. But for my art OCs, honest to god, I just go with whatever is cool.

The "worldbuilders" I know just endlessly worldbuild but refuse to actually make some kind of compiled document/wiki for it. So I really think they're making shit up as they go along, for the most part.

>>6437092
Using the tool to create a new image = making something.
I don't know why you keep pressing this idea that the latent space "already exists" when it only does so as a theoretical mathematical model which is ENTIRELY unrelated to the practical reality in which we live and make art in.

No I'm not going along with your god metaphor because it doesn't make sense at all, though you using it is really telling of your worldview.

The fact is the AI is a tool. It is NOT an "entity" equivalent to humans, it's a fucking program you use to make art. It is not a robot you commission, it's a tool where you fuck around with the settings and inputs until you get something you like, similar to a 3D printer, lathe or espresso machine.

>>6437096
It is in no way similar to photobashing.

>> No.6437100

>>6437096
Each image is only technically "new" in the sense that it is a unique arrangement of pixels, but in conceptual space it is 100% derivative of everything within the dataset. It cannot work with styles, concepts, or characters that aren't in it.
Sure, it can be said that human artwork is derivative of everything the human has ever seen, but in the case of AI art, it is a derivation of a derivation, and a poor derivation at that. It will be forever limited by its nature.

>> No.6437103

>>6437100
>Sure, it can be said that human artwork is derivative of everything the human has ever seen
Yep

>in the case of AI art, it is a derivation of a derivation, and a poor derivation at that. It will be forever limited by its nature.
Also true! There's no reason to think it's a threat at all because of this. To get high quality AI art requires a lot of extra work and skill on the part of the artist, even moreso if working only within a "pure AI" workflow (inpainting and such only, no image editor use).

>> No.6437104
File: 515 KB, 512x512, 20221029025524_833612426.png [View same] [iqdb] [saucenao] [google]
6437104

>>6437096
Sometimes I have no prompt at all, and it's just a seed (by accident). I wonder what that means for the image in the latent space?

>> No.6437106

>>6437104
It means that it picked a random point from within the latent space and decoded it into an image.
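In code terms (a hedged sketch: the noise sampling below is real PyTorch, but the decode step needs a trained model and is left as a commented placeholder), "just a seed" means the seed deterministically picks the starting noise, and everything downstream is fixed by the model:

import torch

seed = 1234567890                     # any fixed integer
gen = torch.Generator().manual_seed(seed)

# Stable-Diffusion-style starting point: a 4x64x64 latent full of Gaussian noise.
latent = torch.randn(1, 4, 64, 64, generator=gen)

# Turning this into pixels requires the trained model, e.g. (hypothetical placeholder):
# image = pipeline.decode(latent)
# Re-running with the same seed yields the exact same latent, hence the same image.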

>> No.6437107

>>6437098
Sounds like you mainly know hobbyists? I meet with pros who work with directors like James Cameron.

>> No.6437110

>>6437107
I do not really talk with many writers, no. I barely read anything that isn't nonfiction these days, and all the creatives I speak with are mostly internet visual artists. I occasionally investigate what manga/anime creators do but they also seem to primarily go in the directions I've said, not with the super deep worldbuilding.

>> No.6437114

>>6437103
>Also true! There's no reason to think it's a threat at all because of this.
>see fellow artists nothing to fear now just give up the lawsuits and we'll all be on our way :))))
Tell it to the judge

>> No.6437117

>>6437110
Ok, makes sense you would only meet autists in that context. I was fortunate to live in a town where AAA/Hollywood concept artists come to drink and draw at local bars, do demos, put on lectures at universities, etc.

>> No.6437120
File: 1.22 MB, 1280x976, 49736641-70A9-4ECA-8024-C64C763564C3.png [View same] [iqdb] [saucenao] [google]
6437120

>>6437117
I'm not meaning this as an insult, but it really feels like the kind of split in picrel. I don't often meet people who make their art the way shown on the left side. Most people I know fit in more with the right side.

Though I will say this as an insult: the fact you interact with Hollywood ghouls would make me keep you at arm's length if I knew you as anything other than anon. I do not trust those people, and I trust people who have good relations with them almost as little.

>> No.6437122

>>6437098
You keep calling it a "theoretical" mathematical model when there's nothing theoretical about it, it's 100% predictable. And not "ooh the whole universe is predictable because causality wooooo!" predictable, I'm talking "if you put 2+2 into a calculator you're going to get 4" kinds of predictable. Math itself also doesn't technically exist in the real world, yet the world can be (mostly) modeled by it.
You're trying to equivocate a limited mathematical model manipulating a limited amount of inputs with a real human living in the real world creating art.
>inb4 it's just a tewl
Yeah I know it's a tool. Therefore if your tool does all the fucking work for you, you really can't take the credit for that can you? Because no matter how much mental gymnastics you perform, an AI picking a point of data from a pre-set pattern does not you the artist make.
And going back to the copyright argument, if your tool uses copyrighted assets then you're going to be dealing with copyright issues. Since the human didn't do it, and the tool did, it's not just about the output, it's about the input and how the inputs are manipulated too because each and every part of that can be quantified.
You argue that if you manually photobashed a billion images together then it would be yours and it would be transformative enough to count as fair use, and you may well be right on that count. But YOU didn't do that, the AI did. Metaphorically, analogically, whateverthefuck, I'm sick and tired of the fucking semantics.

>> No.6437125

>>6437122
EDIT:
"Math itself also doesn't technically exist in the real world, yet the world can be (mostly) modeled by it. And what we're talking about is a purely mathematical model to begin with. If the math says that every single interpolated image possible within the model exists, then it exists. There is no ambiguity, everything is set in stone from the start."

>> No.6437126

>>6437120
>Hollywood ghouls
If you want salaried work as an artist here, you don't have much choice. It's not literally Hollywood, tho. I would say the concept artists I know are a blend of the two. I don't know the directors who come here, and often the artists working for them feel a bit stifled. The more out-there ideas (like the right pic) tend to get ground away until they fit modern audience expectations. Very rarely do the coolest works see the light of day.
>I do not trust those people and I trust people who have good relations with them almost as little.
I will have to say this as an insult: you sound autistic. Projection much?

>> No.6437129

Someone make a bingo card of AI shill talking points, I'm on mobile and can't do it right now.

>> No.6437132
File: 61 KB, 688x823, 20221217_090225.jpg [View same] [iqdb] [saucenao] [google]
6437132

>>6437129
been done yes

>> No.6437133

>>6437122
>>6437125

The "limits" of the mathematical model are extremely huge and can never be reached by a human in his own lifespan. It may be theoretically predictable but nobody is going to fucking do that calculation because it's a worthless waste of time and serves no purpose when the AI can just be used to DO WORK.

I keep calling it theoretical because you refuse to take it out of the theoretical and into the world of practical reality, while claiming the potentials are already real. Again nobody cares about it being "predictable" if there's nobody actually predicting shit. The reason I compare it to the big bang doomerism is because in both cases the fact that you "can" predict something is absolutely useless and has no bearing on how people behave regardless. You can perfectly mathematically model it.... and? That means very little when the only way you can practically "test" this is to... simply use the tool.

>You're trying to equivocate a limited mathematical model manipulating a limited amount of inputs with a real human living in the real world creating art.
I'm not and you need to stop lying and saying that I am. The only comparison I draw is:
>INPUT SOME ART -> OUTPUT NEW ART
Which is the extent of the similarities, but an important one as people like to deny that humans make derivative works.
>if your tool does all the fucking work for you, you really can't take the credit for that can you?
I can and I will, just like if I make coffee using an espresso machine or 3d print something. I still made something in those cases, and they have even LESS controls than AI does for me to mess with. And those can be art too.

>Since the human didn't do it, and the tool did
Only applicable to people who try to deflect responsibility. The human is responsible for ANYTHING they make. You're trying to remove responsibility from the human by claiming the human made nothing in order to pin every possible issue onto the AI but that's not how advanced tooling works.

>> No.6437135

>>6437133
>I can and I will
You will be lying to yourself. You are ok with that?

>> No.6437136
File: 386 KB, 830x387, mumxsn.png [View same] [iqdb] [saucenao] [google]
6437136

:^)

>> No.6437140

>>6437133
>The human is responsible for ANYTHING they make
makes sense. so when the AI makes an infringing image, by accident, the human who tries to profit off it is sued, and has no "the robot did it, I didn't know!" defense.

>> No.6437143

>>6437136
wat it mean? latent rep of data? this is how they fit ~33TB into 4gb?

>> No.6437147

>>6437061
how can one sift through a model and view the individual data points?

>> No.6437148

>>6437133
The limits of the mathematical model are hard set by the images it's trained on. Again, there is nothing "theoretical" about the predictability whatsoever. It's a single mathematical model, set in stone, 100% actually predictable, and we know that it's so because when you input a certain set of coordinates you will get the same output every single time. Every image generated that's not one of the input images is an interpolation of said images. You do not need to generate every possible image to know these statements are true.
AI: INPUT ART -> OUTPUT SAME ART + INTERPOLATIONS
Human: INPUT ART + UNCOUNTABLE QUALIA OF A HIGHER ORDER OF EXISTENCE THAN HUMAN ART -> OUTPUT NEW ART
"Derivative" in the human sense is an order of magnitude greater than the "derivative" of AI art.

>> No.6437151

I think the argument can be ended with this:
I am the author and copyright holder of my artwork, and you may not use my art to train AI.

>> No.6437153

>>6437135
It's not a lie, if I make a really nice cup of coffee and give it to someone then they'll be like "oh thank you for making me a cup of coffee!"
I am consistent in my principles. I don't say "I drew this using AI" I say "I generated this using AI." Very simple and truthful.

>>6437140
Correct. That's why the human has to curate the outputs carefully, and why people who try to make money using direct AI outputs are some of the dumbest motherfuckers in existence.

>>6437148
Those statements are true but practically useless information. It really doesn't matter because the numbers are so big they're impractical to do anything with. That's why it's "theoretical" - yes your math is there, but the actual outputs don't exist except in hypotheticals until you actually perform calculations. And the primary way you perform a calculation here is by generating an image.

Only the outputs matter for copyright purposes. The same laws apply to a human who generated art using an AI as they do the human who made it by hand. The "method of production" in effect doesn't matter, the resulting image does.

>>6437151
You don't get to restrict people from doing that (or various other things permitted by fair use.)

>> No.6437154

>>6437147
Don't think you can, because they designed the latent space to be navigated through prompts. It's possible to get the original images back, but the design of models like Stable Diffusion, the sheer number of images in question, and the huge (although still fixed) size of the latent space itself mean that finding them by chance is nearly impossible unless a keyword is correlated with only a highly limited number of images (like the Bloodborne guy that keeps showing up in Bloodborne prompts).

>> No.6437157
File: 532 KB, 512x512, 20221029013842_356090154.png [View same] [iqdb] [saucenao] [google]
6437157

>>6437153
>"I generated this using AI." Very simple and truthful.
The AI generated this for me. I didn't want it, should I still thank it?

>> No.6437158

>>6437154
Maybe a reverse-CLIP thing could get really good at figuring out prompts with enough data.

>> No.6437159

>>6437158
https://www.latentspace.dev/ there is this thing, don't know how well it would work though.

>> No.6437160

>>6437157
>the AI generated this for me
Anon the AI can't "generate it for you," you did that to yourself.

>> No.6437162

>>6437160
right, and a DJ is a musician

>> No.6437164

>>6437162
They can be!

>> No.6437175
File: 480 KB, 512x512, 20221029012340_176366627.png [View same] [iqdb] [saucenao] [google]
6437175

>>6437160
I asked it to generate it for me. but not like this! it has a sick sense of humor. this is not Kirby!

>> No.6437178

>>6437175
Anon if you keep generating these things without meaning to then you really are probably doing something very wrong in your inputs. Or using the wrong tool.

>> No.6437181
File: 540 KB, 512x512, 20221029012457_1232497025.png [View same] [iqdb] [saucenao] [google]
6437181

>>6437178
Make it stop!!!! please!! It's locked up, the cursor is doing that spinning thing when I try to interact. my hard drive is being filled with abominations!

>> No.6437184
File: 420 KB, 512x512, 20221029021818_525626284.png [View same] [iqdb] [saucenao] [google]
6437184

I'm gonna be sick.

>> No.6437185

>>6437181
>>6437184
Oogy spoogy have you ever seen that security footage of the russian factory worker getting caught in a big lathe? Human pieces everywhere.

>> No.6437187

>>6437153
>it doesn't exist until my eyes see it
I guess mathematically proving the earth being round or the sun being the center of the solar system was only "hypothetical" until we managed to confirm it with our own eyes, huh?
And we're not even talking about math being applied to the real world, we're talking about pure math divorced from the real world entirely. Even if it's a really big number, it's still a known quantity, one that we know the limits of. We don't need to calculate where each and every grain of sand is to know where the beach ends and begins, here.
You cannot compare what an AI does to what a human does.
You cannot even compare what a human using AI as a tool to what a human can do with a brush or MS Paint or whatever. Art coming from the human mind, directly from the mind, is fundamentally different from you typing shit into a prompt and seeing what the machine spits out the other end.

>> No.6437188
File: 399 KB, 512x512, 20221029014642_2044786033.png [View same] [iqdb] [saucenao] [google]
6437188

>>6437185

>> No.6437191

>>6437187
The limits of "where the beach ends" are ultimately arbitrary. Calculations of coastline are really weird for that reason.
And yes all that shit you mentioned is hypothetical until there's trustworthy confirmation. I don't know why you think this is a gotcha. We cannot operate, as humans, off pure faith in numbers. Or pure faith in anything.

>Art coming from the human mind, directly from the mind, is fundamentally different from you typing shit into a prompt and seeing what the machine spits out the other end.
I never said AI art was painting or illustration, it's a fundamentally different form of art more like a combination of Found Object, photography, fractal art, and the kind of 3d art where you only input parameters to manipulate primitives and do no sculpting.

>> No.6437198

>>6437187
>There's images in this space!!
Ok what images?
>Images made up of other images, that aren't PUBLIC DOMAIN
Ok... are the images infringing?
>Yes! Some of them
In what ways?
>Uh-
Can you show me some of these infringing images?
>I don't- I would need hours...
Hours for what?
>Hours to generate some
Generate? You mean you'd need to make copyright infringing images?
>NO see the tool does it itself
This tool is sentient?
>No but you see it does all of the work for me-
But you're the one operating it
>Yes but-
So you'd be proving images exist that violate copyright by intentionally making images that violate copyright, which would open yourself up to getting sued for that breach?

>> No.6437207

>>6437198
dumb nigger. "afghan girl"

>> No.6437210

>>6437207
>So you'd be proving images exist that violate copyright by intentionally making images that violate copyright, which would open yourself up to getting sued for that breach?

The violating images don't exist until you generate them. By doing so you fuck only yourself over.

>> No.6437233

AI "art", theft or plagiarism, or legitimate?
Can anyone give a clear answer with 100% irrefutable proof.

>> No.6437239

>>6437233
Art in content, plagiarism in practice.

>> No.6437241

>>6437239
Sure, assuming that, is there any conclusive proof that the contents of the databases used to train the AI were obtained illegitimately or "stolen"?

>> No.6437243

>>6437210
And if you didn't mean to, and just wanted an Afghan girl while unaware of the original, you still get fucked over. Think this through. These infringing images won't stop appearing. More are found every week, it seems. That rate will increase as more people use it. No company will allow workers to use it with this dataset.

>> No.6437245

>>6437241
There was no precedent. Laws would have prevented unfair training, if people had time to think about it.

>> No.6437248

>>6437245
Of course, there is no precedent. But that aside, is there proof that the images have been stolen, yes or no?

>> No.6437251

>>6437248
The labor of production was.

>> No.6437277

>>6437248
Nobody said scraping was stealing. It's the whole process that makes it stealing.

>> No.6437303

>>6436391
Why would you need more than that?

>> No.6437308

>>6436504
>and "art theft" properly only refers to physically stealing art pieces
Ever heard about digital copyright? It gives ownership over something's use.

>> No.6437395

>>6435987
>they can be decoded back unchanged
You fucking niggers. That only happens if some fraction of the data is “overfitted”. How the fuck can you encode 5 billion images on a 4gb file????

>> No.6437399

>>6437002
AI stores reference in a model the same way you store references in your memory.

>> No.6437401

>>6437395
considering the way images are generated, where one set of weights fills in the other's blanks so to speak, and that input images are also cropped and compressed, it seems possible. The same way an encoded video file doesn't actually store every lossless frame of a movie: 2 hours of 4K video footage (90GB raw) can easily be compressed down to a few gigabytes.
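The codec comparison is about lossy compression, which is easy to demo on a single image. A small runnable sketch with Pillow and numpy (the exact numbers will vary by content; this only illustrates that lossy formats trade fidelity for size, not that a diffusion model works like a video codec):

import io
import numpy as np
from PIL import Image

# A smooth synthetic 512x512 RGB gradient (smooth content compresses well).
xx, yy = np.meshgrid(np.linspace(0, 1, 512), np.linspace(0, 1, 512))
arr = np.stack([xx, yy, (xx + yy) / 2], axis=-1)
img = Image.fromarray((arr * 255).astype(np.uint8))

def size_as(fmt, **kwargs):
    buf = io.BytesIO()
    img.save(buf, format=fmt, **kwargs)
    return buf.tell()

print("PNG (lossless):", size_as("PNG"), "bytes")
print("JPEG quality=10 (lossy):", size_as("JPEG", quality=10), "bytes")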

>> No.6437402

>>6437399
there you go equating AI models to human memory. the two are not directly comparable, not in the least

>> No.6437404

>>6437402
Explain then please.

>> No.6437411

>>6437404
AI algorithms, for one, do not encompass the totality of human experience at all. Emotion, passion, etc. It cannot improvise, or understand intent/cues, or relate. Nothing informs the AI about any of this. All it can do is precisely what it's designed to do, which is statistical interpolation from its database of image data. The same input, with the same randomization seed, on the same model will always result in the same image. It's pointless to equate computer algorithmic image generation with a real human. And the human driver inputting prompts is nothing more than someone google-searching for what the machine can produce. Hence the DJ vs musician comparison.

>> No.6437413

>>6437401
>5 billion images
Literally impossible for that many images to be compressed in 4gb. Anons did the math in this thread and got into the tens of terabytes for a model to store that many images.

>> No.6437414

>>6437411
And that has nothing to do with what was said.
AI needs a picture of Mickey Mouse to generate a picture of Mickey Mouse the same way a human needed to learn what Mickey Mouse is to draw it later.

>> No.6437431

>>6437413
if you mean >>6437021

the example in this thread is rather half-assed and it looks like he asked the question to a bot
there is this too
>>6435959

but perhaps I'm responding to a bot as well that doesn't seem to be able to read?

>> No.6437436
File: 2.68 MB, 1008x9619, Q29VoIcj20eF.png [View same] [iqdb] [saucenao] [google]
6437436

>>6437431
>throwaway comment made by a ceo
It goes contrary to the scientific papers. If you think youtube shit like this is your gotcha moment then I'm sorry to tell you but you are a brainlet.
NOBODY has provided any clear evidence or reasoning for how they could have compressed 5 billion images into 4gb, and we aren't even fucking counting the language model and other technology embedded into this single file.

>> No.6437451

>>6437243
Sure they would, and already do. They just wouldn't, and don't, use raw outputs as-is, and they ensure a chain of command takes responsibility as normal.

They've been outsourcing to lazy SEA studios already; this is less risk, since there's no overworked sweatshop artist to intentionally copy something closely out of laziness or spite.

>> No.6437454

>>6437399
explain how humans store anything in their memory, explain what memory is, explain cognitive abilities (thoughts, memories, visual images in our consciousness) in physical terms, and only then can you compare it to an algorithm

>> No.6437458

>>6437308
Infringement and theft are two different things, anon.

>>6437251
Go away communist. Labor theory of value is bullshit.

>> No.6437463

>>6437454
Human brain = flawed lossy database of all life experiences
AI = less flawed, but still lossy, database of training data specific to the work it's designed for (aiding art creation).
Keeping a database of reference material is fine whether it's purely in your brain or loaded into some kind of tool.
Processing art into other art is fine as long as the result falls under fair use.

>> No.6437464

>>6435980
>color analogy
Basically, you know how gamuts are used to create specific palettes, but you cannot go beyond that without breaking the gamut? The latent space is that gamut of colors. You have a predefined set of colors: desaturated reddish-brown, bright pastel purple, neon turquoise, etc. Latent space can use these colors, but it cannot combine them properly. It also cannot go beyond the gamut for any reason.

Imagine if specific colors within a gamut had some sort of social-media-esque trendy upvote system. These upvotes assign priorities to some colors over others. Trendier colors gain priority over non-trendy colors within the gamut. This is bad for, say, Copic, a popular company with its own rules about certain predefined colors. Copic sees that the only colors being selected are their own colors, and gets ass-mad: the only reason those colors were "trending" to begin with was because of the Copic brand.

>> No.6437489

>>6437463
>Human brain = flawed lossy database of all life experiences
Citation needed

>> No.6437499

>>6437464
Why would copic get mad in your example? Seems like something not worth their time to bother being upset by.

>> No.6437502

>>6437489
not that anon by it’s a pretty apt description.

>> No.6437503

>>6437489
>do thing
>try to retrieve data(remember something) about thing you did
>get flawed copy in your mind's eye
>improve recall using adjacent data (other memories) to compare/contrast and get a better recall

>> No.6437507

>>6437503
>data
>flawed copy
>mind's eye
and now explain the same in physical terms, I'm not interested in concepts, they are not verifiable

>> No.6437514

>>6437507
Don't need to, the concepts are the parts that matter.

>> No.6437515

>>6437507
Images that you remember are loosely stored in adjacent neurons, and to recall them you activate those neurons

>> No.6437591

>>6437153
>You don't get to restrict people from doing that (or various other things permitted by fair use.)
it's not fair use if you're selling access to the model created with my art or are selling outputs of a model created with my art.
if this were just a bunch of people fucking around with ai models, i wouldn't care.
but no, these are corporations scraping my art off the internet with the intent to make a profit.
that's not fair use.

>> No.6437592

>>6437514
yes you do need to; until that concept is verified there's no way of knowing if it's true or false, therefore the comparison is pointless
>>6437515
what images and what neurons? in other words, what experiment should one do to demonstrate whether this is true?

>> No.6437830

>>6436212
>latent space
so this AI shit it’s just a glorified animoprh? lmao

>> No.6437847

>>6437830
Yes, it is. AI bros keep trying to pass it off as being okay because "it's just too big, bro, there's so many images that it's not stealing anymore bro, it's like a canvas of babel bro" but at the end of the day that's still all this is.

>> No.6437980

>>6436759
>AI-niggers don’t need to be creative. All they need is to consume creative work and regurgitate it back like the very AI itself.
/thread

>> No.6439365
File: 10 KB, 454x520, 1672015063298.png [View same] [iqdb] [saucenao] [google]
6439365

>>6436212
>FUCKING YES! HOLY SHIT GET IN HERE GUYS, THINK WE'RE GOING TO BUST THIS THING WIDE FUCKING OPEN
Mental capacity of your average d/ic/ksucker

>> No.6439371

>>6439365
cope, seethe, sneed, and chuck :3