
/ic/ - Artwork/Critique

>> No.6912623
File: 924 KB, 814x610, 1695813762047678.png

>>6912614
but that is wrong.
AI does not have human-level understanding, but it does have some understanding of every token in its vocabulary.

when you tell it to make a bird, it makes a bird because it knows what a bird looks like and what features it has.

just like with the "bald" example earlier: it understands the features. it isn't just regurgitating one monolithic image or a memorized arrangement of pixels.
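here's a rough python sketch of what i mean, using the transformers library's CLIP text encoder (the checkpoint name and prompt are just examples, not any specific image model's setup). the point is that the prompt becomes a stack of per-token vectors the image model conditions on, not a lookup into stored pictures:

    # turn a prompt into per-token embedding vectors with a CLIP text encoder
    from transformers import CLIPTokenizer, CLIPTextModel

    tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
    text_model = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32")

    tokens = tokenizer(["a bird with red feathers"], return_tensors="pt")
    embeddings = text_model(**tokens).last_hidden_state  # one vector per token
    # the image model cross-attends to these vectors while generating,
    # so "bird" steers it toward learned bird features, not toward any stored image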

mirrors are still my go-to example for this. pic related. these were prompted using "mirror", among other things.
it clearly understands something about mirrors, but it lacks the spatial reasoning ability and the accuracy to fully understand what mirrors are.
but limited understanding is not no understanding.
this is not just pixel crunching, otherwise it could never generate something new together with a working mirror.

>> No.6869903
File: 924 KB, 814x610, 1676894114849542.png

>>6869791
>It cannot work any other way,
why not? i explained how it works.
in fact, you can tell it cannot possibly be photobashing just from how it generates images, even without looking at the training data.
because it starts with noise and is making up shit the entire time, all the way to the finish line.

even just the fact that it starts off blurry and slowly converges on the final image already disqualifies it from being photobashing.
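if you don't believe me, here's a minimal DDPM-style sampling loop in pytorch ("denoiser" stands in for a trained noise-prediction network; the schedule values are the usual textbook ones, not any specific model's). notice there is no source image anywhere to bash from:

    import torch

    def ddpm_sample(denoiser, steps=1000, shape=(1, 3, 64, 64)):
        # standard linear beta noise schedule
        betas = torch.linspace(1e-4, 0.02, steps)
        alphas = 1.0 - betas
        alpha_bars = torch.cumprod(alphas, dim=0)

        x = torch.randn(shape)  # start from pure noise
        for t in reversed(range(steps)):
            eps = denoiser(x, t)  # network predicts the noise mixed into x at step t
            # subtract the predicted noise (posterior mean update)
            x = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
            if t > 0:
                x = x + torch.sqrt(betas[t]) * torch.randn(shape)  # re-inject a little noise
        return x  # blurry early on, sharp at the end, made up the whole way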


>it has no understanding of what objects are or that it is projecting a 3D object onto a 2D plane. This is why it usually fails to depict stuff from unusual angles and why "robot dog" looks like outline of a dog paintbucketed with "robot parts" pattern.
it fails at things because they're hard. that is not proof of any of your claims. understanding doesn't require perfect mastery.
i'll again use the reflection example: pic related.
this time i'm using an actual mirror instead of just water reflections like >>6869867
here you can clearly see what the model grasps and what it doesn't grasp.
>it can grasp that a mirror's contents are related to what is in front of the mirror.
>but think about what it would need in order to do this perfectly: a perfect sense of spatial position, plus basically everything else. you'd even need anatomy to know how the exact same pose looks from a different angle.
>and you can see the model trying to do that anyway, but failing.
but even here, with more training examples, this is something the model can eventually grasp more fully. that's just the nature of machine learning: it extracts features and relationships from its training data. it "learns".
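for the curious, one training step looks roughly like this in pytorch (illustrative sketch; "model" is any noise-prediction network and "images" a batch from the dataset). the weights only ever absorb gradients, i.e. features and relationships, never the images themselves:

    import torch
    import torch.nn.functional as F

    def train_step(model, optimizer, images, steps=1000):
        betas = torch.linspace(1e-4, 0.02, steps)
        alpha_bars = torch.cumprod(1.0 - betas, dim=0)

        t = torch.randint(0, steps, (images.shape[0],))  # random noise level per image
        noise = torch.randn_like(images)
        ab = alpha_bars[t].view(-1, 1, 1, 1)
        noisy = torch.sqrt(ab) * images + torch.sqrt(1 - ab) * noise  # corrupt the batch

        loss = F.mse_loss(model(noisy, t), noise)  # learn to predict the added noise
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()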
