
/ic/ - Artwork/Critique


>> No.6901072
File: 239 KB, 1024x1024, 1672083393299116.jpg

>>6901044 >>6900058
continued:
it's funny because imo your hypothetical REALLY highlights how flawed your viewpoint is.

-normally only objects can be stolen.
-in terms of ideas, only protected ideas can be stolen
-so in your hypothetical photoshop example, without artistry or style, the AI is just learning processes that are free for anyone to learn.
so where is the theft happening? what is being stolen?

and i know you insist on the "it wouldn't be able to do this without the training data" part, but YOU CAN PAY SOMEONE FOR THAT LABOR, and then what? the end result is still a machine that produces MUCH more output than the labor that was put in.
and do you think the people providing that labor actually are the source of the photoshop techniques they are using?

let's say they make 10 professionals provide the training data.
let's say you actually value that labor by its output value. so in this case, the value of all photoshop skills in existence combined: a million billion poozillion dollars or whatever.
do you think those 10 professionals actually deserve that money? what they are providing is the accumulation of photoshop knowledge. but are THEY themselves actually deserving of that amount of compensation?
don't you see how something is CLEARLY not adding up here?

(and this argument can absolutely be extended beyond your hypothetical, with styles and artistic input. but this hypothetical is good at showing the core issue so thank you for that.)

>> No.6885958
File: 239 KB, 1024x1024, 1694599804882786.jpg

>>6885943
>talking about how it should be regulated or that artists should be compensated. lol, good luck to that
yes, even if they were to be FAIRLY compensated, it wouldn't even be pennies. the entire thing is just pointless.


proof: https://arxiv.org/abs/2310.03149
>By now there is substantial evidence that deep learning models learn certain human-interpretable features as part of their internal representations of data. As having the right (or wrong) concepts is critical to trustworthy machine learning systems, it is natural to ask which inputs from the model's original training set were most important for learning a concept at a given layer....
> ...We find some evidence for convergence, where removing the 10,000 top attributing images for a concept and retraining the model does not change the location of the concept in the network nor the probing sparsity of the concept. This suggests that rather than being highly dependent on a few specific examples, the features that inform the development of a concept are spread in a more diffuse manner across its exemplars, implying robustness in concept formation.
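the "diffuse" robustness finding in that abstract can be illustrated with a toy sketch (the setup and numbers here are made up for illustration, NOT the paper's actual method): if a learned feature is essentially an aggregate over thousands of exemplars, deleting even the top-attributing ones barely moves it.

```python
# Toy illustration: a "feature" learned as an aggregate over many noisy
# exemplars survives removal of its top-attributing examples almost unchanged.
import numpy as np

rng = np.random.default_rng(0)
pattern = rng.normal(size=64)                    # the underlying "concept"
images = pattern + rng.normal(scale=2.0, size=(10_000, 64))  # noisy exemplars

learned = images.mean(axis=0)                    # stand-in for a learned feature

# crude attribution score: how strongly each exemplar aligns with the feature
scores = images @ learned
top_k = np.argsort(scores)[-1000:]               # the 1000 top-attributing exemplars
ablated = np.delete(images, top_k, axis=0).mean(axis=0)  # "retrain" without them

cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
print(cos(learned, pattern))                     # close to 1: feature ~ concept
print(cos(ablated, pattern))                     # still close to 1 after ablation
```

the point is only directional: no single exemplar (or even the top 10%) carries the concept; it's spread across all of them, which matches the "diffuse" wording above.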

>>6885951
did you read and understand this part?
>that is not how the AI fundamentally works and learns.
the paper above also supports what i'm saying. it doesn't learn from individual images, it learns by picking up patterns across many images.

>>6885954
probably, since DALL-E has some manga creation capabilities, albeit very limited.
it isn't "saving" shit btw.
the data is separate from the model. the model trains on the data, but then it generates shit completely independently of its training data, using only its own internal learned representations.
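that separation can be sketched with a toy gaussian "model" standing in for a real generative model (purely illustrative, not how diffusion models actually work): fit parameters from data, throw the data away, and the model still generates.

```python
# Minimal sketch: the trained parameters are separate from the dataset.
# After "training", the data can be deleted and generation still works.
import numpy as np

rng = np.random.default_rng(42)
training_data = rng.normal(loc=3.0, scale=1.5, size=5000)

# "training": the model keeps only two learned parameters
mu, sigma = training_data.mean(), training_data.std()

del training_data                        # the data is gone...

samples = rng.normal(mu, sigma, size=5)  # ...but the model still generates
print(samples)
```

real models store learned weights instead of two numbers, but the structural point is the same: generation reads from the learned representation, not from a stored copy of the training set.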
