
/ic/ - Artwork/Critique


>> No.6897015
File: 314 KB, 407x626, 1674865627552122.webm

>>6896986
AI can plagiarize. just like a human can choose to do so.
but the question is: does AI inherently copy?

>but you can't provide a single piece of evidence of your claims.
i can explain how the entire thing works again but you wouldn't read it anyway :)
you people deliberately choose to ignore it every time, thinking it doesn't matter.

also what single example? i can post anything that is context dependent, like reflections or mirrors, and that would already be proof that AI doesn't copy.


>>6896608
>Imagine being so desperate to feel special that you believe this
>noooo impossibruuuu!!!!! AI bad AI bad! no touch!
it's just the truth.
it's very simple to understand too: if you want to improve on the AI with your input (as in, manual input, like sketches, corrections, overpaints, photoshopping), you need to make decisions that make the result better, not worse.
otherwise every manual thing you do just makes the result worse.
shadiversity is the best example of this in action.

>> No.6869867
File: 314 KB, 407x626, 1664954439292945.webm

>>6869791
>The original images are encoded in the hidden layers of the NN
no, the training images are thoroughly TAKEN APART inside the neural network.
imagine some layers (layers of nodes) grasping the vague shapes, other layers grasping details, and others again capturing relationships and other elements of the image. and even the tag's relationships with other tags.

now consider that this is what happens with just one image.
each new image you add further READJUSTS the adjustments left by the previous images. THE SAME ONES, if it's the same tag.
so at the end, by training on a cat, you might end up with:
>parts that understand the rough shape of a cat, parts that understand the colors cats can have
>parts that understand how cats can look in different settings, which are tied to other tags (like daylight, night, snow, etc)
>parts that understand that a cat has two protrusions called ears, and many, many other details of a cat. or even that it has fur, just like dogs have
and much more.
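here's a toy sketch in python of that readjustment idea (made-up numbers, and a single weight vector standing in for a real network, purely to illustrate): every "cat" example nudges the SAME weights, and what's left at the end matches none of the training examples.

```python
# toy sketch, NOT real diffusion code: one shared weight vector
# gets READJUSTED by every training example carrying the same tag.
def train_step(weights, example, lr=0.1):
    # nudge every weight toward the example simultaneously;
    # no example is stored, only the accumulated adjustments remain
    return [w + lr * (x - w) for w, x in zip(weights, example)]

# three different "cat" images, as crude 4-number feature vectors
cats = [[1.0, 0.2, 0.9, 0.1],
        [0.8, 0.4, 1.0, 0.0],
        [0.9, 0.3, 0.8, 0.2]]

weights = [0.0, 0.0, 0.0, 0.0]
for epoch in range(200):
    for cat in cats:
        weights = train_step(weights, cat)

print([round(w, 1) for w in weights])  # → [0.9, 0.3, 0.9, 0.1]
```

notice the final weights are a blend of all three cats, not a copy of any one of them. that's the whole point: the "snapshot" never exists.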

think about it. every single thing i listed is being trained on the word cat, but the neural net is large enough to capture all of these features separately.
not only that, when training, even only on this single word, cat, all of these features are adjusted simultaneously.
so when you pull the word "cat" you pull all of this "knowledge" with it.
as the model trains on more cats, what the model gets out of training is not a snapshot of a single cat, but something more thorough: a representation of what a cat is and can be.
there is a reason the output of the AI is just "a cat" and not any specific cat from the training data.
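a toy sketch of what "pulling the word cat" means (the feature names and numbers here are completely made up for illustration): one tag indexes a whole bundle of learned features at once, including features shared with other tags like dog.

```python
# toy sketch, made-up feature names: a tag retrieves many learned
# features at once, not a stored image
learned = {
    "cat": {"rough_shape": 0.9, "has_ears": 0.97, "has_fur": 0.95,
            "typical_colors": 0.8},
    "dog": {"rough_shape": 0.85, "has_ears": 0.9, "has_fur": 0.96,
            "typical_colors": 0.7},
}

def pull(tag):
    # pulling one word pulls the whole bundle of adjusted features
    return learned[tag]

print(pull("cat")["has_ears"])  # → 0.97
```

and note that "cat" and "dog" share the same feature slots (like has_fur), which is why knowledge transfers between tags.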

>..rebuilds pieces of stolen images on a framework of a random noise.
there is more data in an image than just its pixels. if the model could only work with the pixels (like in photobashing), it would not be able to do any of this. it works because it can work with the actual relationships between pixels and analyze the image in a deeper way.
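a toy sketch of "relationships between pixels" (just neighbour differences here, far cruder than what a real network learns): two rows with nearly the same average pixel values have completely different structure, and the differences expose it.

```python
# toy sketch: the relationship between pixels (here, neighbour
# differences) captures structure that raw pixel values alone miss
row_a = [10, 10, 10, 200, 200, 200]   # hard edge
row_b = [60, 80, 100, 120, 140, 160]  # smooth gradient, similar average

def edges(row):
    # difference of neighbouring pixels = crude edge detector
    return [b - a for a, b in zip(row, row[1:])]

print(edges(row_a))  # → [0, 0, 190, 0, 0]   one big jump
print(edges(row_b))  # → [20, 20, 20, 20, 20] constant slope
```

same-ish pixel averages, totally different relationships. pixel-level copying can't see that difference; learned features can.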

cont.

>> No.6859498
File: 314 KB, 407x626, reflection.webm

Is AI learning and applying or is it stealing and copying?
If it is stealing, how do you think it works?
discuss.
