
/ic/ - Artwork/Critique


>> No.6897048
File: 1.11 MB, 1201x614, 1681901909159424.webm

>>6897034
that video was made by the average anti-AI tard, except he pretends to understand how the tech works.
just like the retards here, actually: somehow you can read the explanations without truly understanding anything you read.

>>6897035
>ai needs to be trained on existing images to work
none of you truly understands this line of argument. the true extent of it is much more complicated than your tiny pea-brains can imagine.
AI needs to train on dogs to have an "understanding" (i.e. an internal representation) of dogs.
it needs to train on cats to understand cats
it needs to train on paintings in order to understand what makes something a painting.
it needs to learn from [thing] in order to understand it.
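to make the point concrete, here's a toy sketch (in python, with made-up numbers — obviously nothing like SD's actual implementation) of how "learning" compresses training examples into parameters and then throws the examples away:

```python
import numpy as np

# toy "training set": four example points on the line y = 2x + 1
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = 2.0 * xs + 1.0

# "learning": compress the examples into two parameters
slope, intercept = np.polyfit(xs, ys, 1)

# the originals can now be discarded; only the parameters remain
del xs, ys

# the model can still apply what it learned to an input it never saw
prediction = slope * 10.0 + intercept
```

the point of the analogy: what survives training is the fitted parameters (the "internal representation"), not the training data itself.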

now you can argue that AI should never be allowed to see a single piece of modern art and should learn only from photos and historical art, but that's not how you learned (if you can draw at all, lol). artists grow up seeing all kinds of art and let themselves be influenced by it. we even deliberately study other artists, copying their art for practice.

you call none of that "stealing"
but for some reason, when the AI does it, it's stealing. despite the fact that the training data is not recreated.
it's actually simple to understand the basic issue. and i already said it.
to summarize:
you retards, consciously or not, believe that when the AI makes the images in pic related, it is referencing actual pictures of dogs.
but what happens in reality is that the AI relies on what it has learned about dogs and RECREATES the features step by step: first blurry shapes, then eventually edges and details that converge into the features of a dog.
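the "converging from noise" part can be sketched like this — a toy nudge-toward-the-concept loop with an invented feature vector, not the real denoising math of SD:

```python
import numpy as np

# hypothetical feature vector the model has learned to associate with "dog"
learned_dog_features = np.array([0.8, 0.2, 0.5])

rng = np.random.default_rng(0)
x = rng.standard_normal(3)  # step 0: pure noise, no dog anywhere

for step in range(50):
    # each step moves the image a bit closer to the learned concept
    x = x + 0.2 * (learned_dog_features - x)

# after enough steps, x has converged near the learned features --
# recreated from noise, not copied from any training image
```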

>BUT AI CANT CREATE NOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
yeah yeah, whatever you say. you people lack the IQ to understand any of this anyway. i've given up and i'm disappointed.

>> No.6858567
File: 1.11 MB, 1201x614, 1683299606997327.webm

>>6858543
no. its job is to recreate the concepts tied to the keywords.
if you write "cat", it is not trying to create some specific cat, just a cat, based on all the cats it has seen.

again, the originals are all gone. the only thing remaining is the vector space of concepts and ideas, stored in the neural net.
and as for how the image is made: it starts from noise, then makes something that could maybe be a dog (based on what it knows about dogs), and then it gradually fills in the details as it goes. it "converges" towards an image of a dog.
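a toy illustration of that "vector space of concepts" idea — the vectors here are invented two-dimensional stand-ins, nothing like the real embeddings SD uses:

```python
import numpy as np

# hypothetical concept vectors left behind after training; the images are gone
concepts = {
    "cat": np.array([1.0, 0.0]),
    "dog": np.array([0.0, 1.0]),
}

def nearest_concept(query):
    # pick the stored concept whose direction best matches the query
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(concepts, key=lambda name: cosine(concepts[name], query))
```

a keyword like "cat" selects the region of this space to converge towards, not any stored image.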

>> No.6852663
File: 1.11 MB, 1201x614, 1686541651168356.webm

look. this is how SD operates.

>>6852643
it's very simple: the point is that if it is "learning", then you are calling learning theft, stealing, copying and everything else you tards accuse AI of.

again, it is not learning like a human; that is irrelevant.
the question here is: which one of us is more accurate, more truthful?
knowing what you know about AI, which description is actually true? if you don't call what the AI does with the training images "learning", what do you call it?

i can even call it ingesting, but learning is simply more accurate, BECAUSE it can apply what it has taken in from the training data.

i will repeat again: what do you call it? and can you actually reason why you call it that?
