/ic/ - Artwork/Critique

>> No.6903959 [DELETED]
File: 175 KB, 1072x1083, lmao what is this.png

>>6903886
>With AI it takes a few clicks.
that's what you assume, because you don't understand anything about it.

do you know how training actually works? the model starts out producing essentially random output, compares that output against the training image, and then nudges its own weights (parameters) a tiny bit so the next attempt lands closer. it isn't writing the image down anywhere; it's adjusting millions of numbers by gradient descent.
it does that over and over, until it gets close enough to move on.
does that process sound familiar to you in some way?
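here's a rough sketch of that loop in code, assuming a pytorch-style diffusion setup. names like add_noise and train_step are illustrative only, not from any real codebase, and the noise schedule is a toy one:

import torch
import torch.nn.functional as F

def add_noise(images, noise, t, num_timesteps=1000):
    # toy linear schedule, purely for illustration
    alpha = 1.0 - t.float() / num_timesteps
    alpha = alpha.view(-1, 1, 1, 1)
    return alpha.sqrt() * images + (1.0 - alpha).sqrt() * noise

def train_step(model, optimizer, images, num_timesteps=1000):
    # pick a random noise level for each image in the batch
    t = torch.randint(0, num_timesteps, (images.shape[0],), device=images.device)
    noise = torch.randn_like(images)
    noisy = add_noise(images, noise, t, num_timesteps)

    pred = model(noisy, t)           # the model guesses the noise that was added
    loss = F.mse_loss(pred, noise)   # how wrong was the guess?

    optimizer.zero_grad()
    loss.backward()                  # nudge the weights toward a better guess
    optimizer.step()
    return loss.item()

note that nowhere in this loop does the training image get copied into the model; the only thing that changes is the weights, by a tiny amount per step.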

it takes a lot of computing power to train an AI from the ground up.
a style LoRA is much cheaper to make, but it only works because it sits on top of a base model that has already learned a huge amount. on its own, a lora could do nothing except directly copy or mash together images in nonsensical ways.
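for reference, this is roughly what a LoRA adapter does under the hood. a pytorch-flavored sketch with made-up names, not the code of any particular trainer: the big pretrained weight matrix stays frozen and only two tiny low-rank matrices get trained, which is why it's cheap, and why it's worthless without the base model.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)          # the pretrained weights stay frozen
        # only these two small matrices are trained
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # frozen base output plus a small learned low-rank correction
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

the trainable part is rank * (in_features + out_features) numbers instead of in_features * out_features, which is why a style lora fits in a few megabytes while the base model is gigabytes.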

>I don't know why that's such a hard concept for you.
lol. because you cannot justify it in any way, or at least none of you have managed to justify it to me.

the way i see it, most of you fall back on one of two justifications:
>protect artists
which is fair enough, but i don't think artists should be shielded from the world changing; professions just change shape
>because AI is unethical/steals
which is simply not true

>>6903912
you're an idiot who only understands things through emotion.
i already said that direct plagiarism and direct impersonation are unethical.
but that is not at all what AI fundamentally does. it doesn't take anything directly from its training data; it only learns from it.

in fact,
>you can remove the 10,000 most-attributing images from the training set without changing what the model learned (https://arxiv.org/abs/2310.03149)
>you can train on an entirely different dataset of faces and still end up generating similar faces, because the model gains a similar understanding of faces in general (https://arxiv.org/abs/2310.02557)

of course you can't see past that because you're too fucking riled up in righteous anger :)

also pic related lol.
