
/ic/ - Artwork/Critique


>> No.6931936
File: 2.80 MB, 500x281, 1684111162192748.gif

>>6931036
>The soul.
lol.

>>6931172
yes, but it was a gradual process. no one person created rock on their own. it was built upon previous influences, which in turn came from other influences.

no one person could possibly create rock without either listening to rock directly or at least some of the genres that directly influenced rock, like blues.
do you disagree?

>This seems to me the fundamental difference between LLMS and people, humans don't need direct outside influence and can think abstractly.
the machine is just worse in many ways and can't do all the things that we can do.
but it can do a narrow range of things, even things that traditionally required human cognition.
and learning from images is just what it does.

the main difference is actually that it can't learn iteratively.
humans can do this:
>see an image
>try to draw that image
>play with that image and adjust it as you want
>experiment with it in real time
>have the cognitive ability to evaluate the image as you make it and to aim for certain qualities.

an image AI on the other hand:
>"sees" an image during training
>and that is the only time it actually learns from the image.
>it cannot truly learn from its own output.
>it cannot evaluate anything on its own
>(other than just trying to get the token represented in the way it learned to)
>it does nothing in real time, it only learns during training

the difference currently is that humans can learn in real time, all the time.
the AI only learns during training. outside of that you can consider it a static, unchanging brain, regardless of what it outputs.
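the "static, unchanging brain" point above can be shown with a minimal sketch. this is not any real model's code, just a toy stand-in: once "training" has set the weights, generating output is a forward pass only, and the weights never change no matter how many images it produces.

```python
# Toy sketch (not any specific model): a trained network's weights are
# fixed at inference time; generating output does not update them.
import numpy as np

rng = np.random.default_rng(0)

class TinyModel:
    """Stand-in for a trained image model: weights set once by 'training'."""
    def __init__(self):
        self.weights = rng.normal(size=(4, 4))  # frozen after training

    def generate(self, noise):
        # inference is a forward pass only; no gradient update happens here
        return self.weights @ noise

model = TinyModel()
before = model.weights.copy()
for _ in range(100):                  # generate many "images"...
    model.generate(rng.normal(size=4))
after = model.weights
print(np.array_equal(before, after))  # True: the "brain" stayed static
```

a human, by contrast, would be updating the "weights" a little on every attempt.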


now despite everything i said, what the AI is doing is still considered "learning".
and it still isn't sapient. you people always seem to assume that these things have to go together. but as you can see, there are nuances to the matter.

>> No.6901044
File: 2.80 MB, 500x281, 1684986859977246.gif

>>6900058
>So it's not ok to do it to one artist, but it's ok to do it to millions, somehow the scale of the neo-theft makes it ok.
yes because it's fundamentally not theft. the AI is learning your style by analyzing your art.
and by having more things to learn from it literally is not even "stealing" anyone's style anymore, which is the ONLY issue where i'd agree with you.
unless you're saying that drawing pupils in a certain shape, or using certain colors together, is actually "stealing" and not just using techniques that NOBODY really owns. or should own.
if you do argue for that, i'm sure you can imagine what kind of parallels i would draw to humans doing the same throughout all of history...

>Let's go one more hypothetical, let's say artists working in Photoshop were all unwittingly sending all their pen clicks and image data to Adobe, then adobe comes out with an ai Photoshop user, that can do any task in Photoshop, replacing the need for a pro photoshoppers. Labor was stolen, uncompensated, it's just clicks and images no style was stolen, adobe is in the wrong here yes?
let's break it down. say an ai learns how to separate a character from its background. (or any photoshop task you can think of, really. like adjusting colors in an image, or changing the colors of an object to be something different.)
do you consider that "stolen labor" just because it wouldn't be able to do it without using examples to train on? even though all it did was learn how to do the task by seeing examples?
how can labor be stolen in the first place? how does the logic work here?
it seems to me like you're basing this ENTIRELY on the result, without considering what actually happens. for your case, it would be more accurate to say that JOBS were stolen.

but in actuality, [what] has been stolen? basically just techniques and processes. here, without the baggage of style and artistry, do you think the basic ability to adjust some curves is something that can be "stolen"?
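"learning a task purely from examples" can be made concrete with a toy sketch. this is a hypothetical illustration, not real segmentation code: a tiny classifier that separates "foreground" from "background" pixels by brightness. it has no built-in rule; the threshold it ends up using comes entirely from the labelled examples it saw.

```python
# Toy illustration: a "task" learned entirely from examples.
# No rule is hard-coded; the threshold is chosen from the training data.
def fit_threshold(pixels, labels):
    """Pick the brightness threshold that best splits the examples."""
    best_t, best_acc = 0.0, -1.0
    for t in sorted(set(pixels)):
        acc = sum((p > t) == l for p, l in zip(pixels, labels)) / len(pixels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# training examples: brightness value -> is-foreground label
train_pixels = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]
train_labels = [False, False, False, True, True, True]

t = fit_threshold(train_pixels, train_labels)
print(t)            # learned from the data, not programmed in
print(0.6 > t)      # a new bright pixel classifies as foreground
```

the only thing "taken" from the examples here is the pattern they share — which is the question the post above is asking about.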

>> No.6895553
File: 2.80 MB, 500x281, 1673310000593767.gif

>>6895511
why are you talking as if art is the first real application of machine learning..?
anon, AI can be used for anything.

https://www.youtube.com/watch?v=Dw3BZ6O_8LY

>> No.6858679
File: 2.80 MB, 500x281, 1691203002760738.gif

>>6858605
and something else i just realized in addition to >>6858647:

you seem to think that the AI should be able to do things even with only one image in its training data.
this suggests to me that you think of the AI as a traditional computer program.
it seems to me that you think the AI has inherent abilities, like any software, and that it's only using the training data to add to those abilities.

but that's not how it works. the AI gets ALL of its abilities from the training data.
(and maybe you recognize this, because this too is a line i've repeated over and over and over again.)

see for example pic related. a non-art example in hopes that it won't trigger and sidetrack you:
the AI does not recognize the elements in the video because it was programmed to do so. it can do it because it was trained on the relevant data. so it "learned" to do so by seeing the relevant data.

>> No.6850520
File: 2.80 MB, 500x281, 1680552681484245.gif

>>6850389
>the human doesn't make use of the data but goes through filters of abstraction and conceptualization, the ai doesn't
except it does.
what else do you think makes the AI realize that some arrangement of pixels is a "tree" or a "car"? that you don't normally have trees indoors, or my more advanced examples with reflections?
what is this if not conceptualization?

look at the gif in >>6847509 again.
the AI also works from loose shapes and only refines the details at the end. what is this, if not abstraction?
it clearly knows not only what a fish looks like, but also what an extremely blurry fish looks like, and how that can lead to depicting a detailed fish.
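the coarse-to-fine behavior in that gif can be sketched with a toy loop. this is a hypothetical stand-in, not a real diffusion sampler: the "denoiser" here just pulls a noisy sample a step toward a fixed target, so the big structure locks in first and fine detail last — the same shape-before-details pattern.

```python
# Toy coarse-to-fine loop (hypothetical stand-in for diffusion sampling):
# start from pure noise and repeatedly remove part of the error, so the
# rough structure emerges early and the fine detail settles last.
import numpy as np

target = np.array([0.0, 1.0, 0.5, 0.25])  # pretend "detailed fish" image
rng = np.random.default_rng(1)
x = rng.normal(size=4)                    # pure noise

errors = []
for step in range(50):
    x = x + 0.2 * (target - x)            # stand-in "denoising" step
    errors.append(float(np.abs(x - target).max()))

# the remaining "blur" shrinks every step: rough guess first, detail later
print(errors[0] > errors[10] > errors[-1])
```

a real sampler's denoising step is a trained network rather than this fixed pull toward a known target, but the iterative refinement is the point being made above.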

>diffusion is a direct use of pixel data.
you don't even know what this means.
or maybe you can explain to me what exactly you mean by this.
as the previous sentence is already wrong, i doubt it matters too much though.
