/sci/ - Science & Math


File: 890 KB, 800x715, GAN.png
No.10216835

NVIDIA shocked the world again with its release of "A Style-Based Generator Architecture for Generative Adversarial Networks" (StyleGAN).

https://arxiv.org/abs/1812.04948

https://www.youtube.com/watch?v=o46fcRl2yxE

>> No.10216842

better video

https://www.youtube.com/watch?v=kSLJriaOumA

>> No.10216848

This shit is gonna be so insane for automated content creation and visual effects in multimedia. You could do an entire movie with 1 actor.

>> No.10216858

>>10216835
just imagine the facebook celebrity lookalike apps you could do with this

>> No.10216867

>>10216858
>set to age 16, white, blue eyed, fat tits, huge ass
>be 54 year old 300 lb nerd and pose for camera with your ass shaking
>milk money off instagram thottage

>> No.10216903

>>10216848
>>10216858
>>10216867

First thing will be fake business endorsements, followed by unlimited sockpuppet networks.

Social Media was a mistake.

>> No.10216913

>>10216903
It's a matter of time. You would need an intense verification system to legitimize every real human.

We currently have nothing like that. At most maybe some cell phone # check. It's really just a matter of time till generative AI dominates everything, including 4chan discussion/persuasion.

>> No.10216927

>>10216913
I don't know which is worse, to be honest.

>> No.10216982

>>10216835
Can a GAN like this be used to fuck with facial recognition software? Like, by automating the process and flooding social media with tagged images of fake people with real names.

>> No.10217100

>>10216835
Goody, another way i can see myself losing my autonomy, and more potential to control the public.

>> No.10218557

>>10216982
probably

just add noise at the pose style layer and then place it into random landscape photos.

>> No.10218740

>>10216848
what i was waiting for in this video was a good demo of moving the pose source around while keeping the rest the same (thus essentially having a fully controllable generated actor). however, they can't quite do it - it's best seen if you look at the cars at the end: they move the base/pose car around, and the final car doesn't just change position while maintaining its identity, it also morphs in a few other ways

>> No.10218789

>>10218740
In theory, wouldn't pose just be a higher-level layer? One of the obvious problems with 2D information is that we lose information about the background. Say we completely disregarded the notion of background; we're left with "the pose". Each thing they normally program in represents stuff like skin color, eye shape, etc., so they must have a mechanism which represents each distinguishable region. If they had a process which subtracted out color, shape, etc., they must have a remaining sense of the regions stored somewhere in the process, and it must be the same across images, or you'd end up with eyes that look like noses if it confused which part of a face went where. Maybe that process is faulty in their modelling, so they have to do some hand-recognition of sample images, or maybe they didn't give it an abstraction of how a face is organized, but it doesn't seem like an excessive change for the generator to recognize a pose and map what it knows from one image to another. A key aspect is that poses carry different information (from a single 2D perspective), therefore there is an unknown element. Instead of known elements (skin color) it would have to guess, so maybe that's what's lacking in the way their bot works.

>> No.10219010

>>10216848

you probably won't even need that.

Also, video games. In 10 years you could just have an AI render a real environment as the game world. The AI generates and animates the original characters. It's clunky right now, but you could have AI write all the dialogue too.

>>10216913
>It's really just a matter of time till generative AI dominates everything

good, i need a vacation

>> No.10219943

>>10218740
https://www.youtube.com/watch?v=PCBTZh41Ris

Not perfect by any means, but it's getting there

>> No.10219948

Now watch America abuse this.

>> No.10220001

>>10219943
this is model-based, which is a perfectly valid approach, but it isn't the fancy thing that OP's example does, which is to learn the latent representation itself. i.e. instead of mapping onto that handmade skeleton they have down in the corner, the ML learns that bodies have a structure just from watching a lot of pixels. further, the latent representation is 'disentangled' so that it has meaningful interpretations, i.e. there is such a thing (that we could label 'arm') that has a constant length, etc. Doing that (inasmuch as it does) is what makes OP's example arguably 'AI' more than just ML
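
A toy Python sketch of that distinction, in the spirit of the paper's mapping network and style mixing; every size, function, and the modulation step below is a made-up stand-in for illustration, not NVIDIA's implementation:

import numpy as np

rng = np.random.default_rng(0)

# Toy "mapping network" f: Z -> W, turning an entangled latent z into an
# intermediate latent w (a crude stand-in for the paper's 8-layer MLP).
A, B = rng.normal(size=(64, 64)), rng.normal(size=(64, 64))

def mapping(z):
    return np.tanh(np.tanh(z @ A) @ B)

# "Style mixing": feed one w to coarse layers and another to fine layers,
# which only makes sense because W is (somewhat) disentangled.
def generate(w_coarse, w_fine, n_layers=8):
    x = rng.normal(size=64)            # stand-in for the learned constant input
    for layer in range(n_layers):
        w = w_coarse if layer < n_layers // 2 else w_fine
        x = np.tanh(x * (1.0 + w))     # crude stand-in for AdaIN modulation
    return x

z_a, z_b = rng.normal(size=64), rng.normal(size=64)
img = generate(mapping(z_a), mapping(z_b))   # coarse traits from a, fine from b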

>> No.10220004

oh wow another paper on style transfer, imagine my surprise.

*yawn*

>> No.10220241

So what does this mean, what are the applications?

Better character select in the next Elder Scrolls?

>> No.10220319

>>10219943

i wonder if you could take this, somehow tweak it, and run old video through it: Gene Kelly dancing, a woman walking through a park, a guy jumping hurdles in track and field, whatever. there is video of every human action that has ever been done. tweak it into the source that you want, to get around having to use any human actor, and the entire process of building a movie/game is fully automated.

>> No.10220414

>>10216842
This is awesome.

>> No.10220436

>>10220241
10 years before completely convincing fake media starts to appear: fake pornstars, fake celebrity videos, fake videos of politicians, even fake social media accounts. generative AI is going to take over most content creation and online discussion

>> No.10220466

>>10220436

it will be great. Over the next ten years we'll see the death of film studios, huge TV production companies, AAA game studios, etc.

Most media will be generated, start to finish, by AI. I could see, in 2030, telling your AI assistant "i want to watch a movie about the war of 1812" and the AI spits out an original movie about the war of 1812; or a video game where i'm in a world indistinguishable from reality and i get to be a pilot, or a space pirate, or whatever. You might need maybe one creative guy to come up with a direction and produce a complicated story, but that will be about it. I definitely think we will have that, or something really close to it, by 2030. And then i can watch media and play games without some screwball trying to interject their present-day biases and political views into a setting where they serve zero purpose.

>> No.10220587

>>10220466
I think your timeline is off by about 20-50 years at a minimum

>> No.10220606

>>10220436
>can finally fap to female facebook friends doing lewd things.

>> No.10220617

>>10220466
>it will be great, reality being falsifiable at every level by *someone* I don't have power over controlling all the information I can access
Unless you can serve those who own the means of producing information, you are worth more dead than alive.

>> No.10220627

>>10216842
"Who's your gf?"
""You wouldn't know her, she goes to another school, here's a picture of her"
"Wow I've never seen this face before, I believe you now"

>> No.10220660

This is tinkering with concepts that should not have been accessible to mere mortals. Curiosity killed the cat. At this point it is not a question of can we, but should we. One day this insatiable desire of "is it possible to do X" instilled in higher humans (pioneers in science, technology, business) will inevitably lead to a discovery that is detrimental to the species as a whole. Judging by the exponential pace of tech growth, where a mere decade ago saw the advent of a "smart phone" with capabilities unmatched by any instrument in history, to the ungodly powers that will soon be accessible to the general public, we are far too close for comfort.

>> No.10220663

>>10220587

it's all there, the AI that can render the real world, AI character generation, AI animating generated characters, AI generating dialogue, AI voice acting the dialogue. Someone will just need to sit down and figure out how to get it into one nice clean package. And i am sure someone somewhere is working on it. The pitch being 'Make a 100 million dollar product for a million dollars, with no drop-off in quality.'


>>10220617

i don't care, and stopped caring about bullshit in the news long ago. As long as it's not in my face i don't care, and spare me the holocaust meme

>> No.10220726

>>10216842
~5:50 shit's close to an acid trip, holy cannoli

>> No.10220740
File: 634 KB, 960x1299, cavemanscifi.jpg

>>10220660

>> No.10221563
File: 79 KB, 576x768, 3Av-nTzI8Rdc9JhMO3Kqhke_kGBY_XJkqj4FpQToMNw.jpg

>>10216835
Who else is becoming increasingly paranoid about all of this?

>> No.10221707

>>10221563
why? what's to be afraid of?
>we have a video of you doing a crime
>eh it's just ai generated fake
>n..no it's real video
>bruh they're really good now, catch up. later officer

>> No.10221753
File: 1.83 MB, 2100x1575, 1519683861547.jpg

>>10216835
EIN VOLK

EIN REICH

EIN FUHRER !

>> No.10221755
File: 2.20 MB, 2283x3000, 1509273550133.jpg

>> No.10221927
File: 3 KB, 76x62, wew.png

> wew
wew

>> No.10221931

>>10221563
It's one of those things where really you can probably come up with a conspiracy theory either way.
>cover up deepfake tech
>use it to fake videos of crimes
or
>release deepfake tech
>use it to claim any incriminating video is fake

>> No.10222384

>>10221927
synchronicity. obviously a sign that a chaos god is behind it all

>> No.10222395

>>10220663
It's called "stack 2.0". Google is already doing it and many other software companies.

It's basically integrating ML portions to replace existing hand-crafted code. For video games like described in the thread it would start with offering some variation options to creatures in the game. So the fish all look unique in small ways, all the rocks, all the plants, instead of copypasted.

Over time it will encompass a higher % of the game, website, service, or whatever.

People don't even realize how huge this is. To go from a hand-crafted codebase into one with a few ML systems that can be completely improved in simple steps up.

For instance improving a GAN and replacing the old one is plug and play. Whereas replacing a huge coding monstrosity would take forever.

This shit is huge at a narrow scope. Fuck off with general intelligence it isn't needed for this to completely change everything in a decade. Especially when you start integrating it with real world shit like robotics.
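
A minimal Python sketch of the "plug and play" claim; the class and function names are invented for illustration, not from any real codebase. The point is that callers depend only on an interface, so a better generator drops in without touching them:

from typing import Protocol
import numpy as np

class AssetGenerator(Protocol):
    def generate(self, seed: int) -> np.ndarray: ...

class HandCraftedRocks:
    def generate(self, seed: int) -> np.ndarray:
        return np.full((8, 8), 0.5)                  # every rock identical

class GanRocks:
    def __init__(self, weights: np.ndarray):         # swapping weights = swapping model
        self.weights = weights
    def generate(self, seed: int) -> np.ndarray:
        z = np.random.default_rng(seed).normal(size=8)
        return np.tanh(np.outer(z, z) * self.weights)   # unique rock per seed

def fill_level(gen: AssetGenerator, n: int) -> list:
    # the game code never changes, whichever generator is plugged in
    return [gen.generate(seed) for seed in range(n)]

rocks = fill_level(GanRocks(np.ones((8, 8))), 100)   # 100 unique rocks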

>> No.10222400

>>10222395
Again, the cool shit is that this stuff is actually very general, instead of having the brittle nature of hand-crafted code. The OP architecture, like style transfer or whatever, will spread to every DNN that can utilize it. Old code, for instance say there was a 5 year old game that used a NN, could be updated in a simple patch to make use of this new technology.

People don't appreciate the ease of use and implementation of NNs. Compared to the normal code that would have been used to do the things in OP, this shit can be switched out in a day and everything using ML can be updated instantly.

The spread of this technology is unreal, almost impossible to comprehend. Once more existing functions are done by ML, the improvements cascade down instantly. So it won't take 10 years for a code base to be upgraded or switched over.

This shit is moving like lightning AND NO ONE is really appreciating how fast this is happening.

>> No.10222406

>>10222400
What is meant is that a HUGE mass of code is replaced with a single function using ML that does extremely complex things. This single function, by its nature of being a single function, can be upgraded and changed instantly without worrying.

ML is going to cause change faster than anyone imagines. So say Google is 20% ML functions in terms of what they do. A new ML architecture can cascade to all of that in a month. You can easily plug and play with all of this compared to normal code.

Shit is unbounded in capability because of this. Stack 2.0 is going to be ridiculous. The people talking about generating entire games, movies, etc. at will from, say, a line of text input are not far off.

>> No.10222541

>>10220587

this. and somehow, these schema -> instance networks are rather boring. on one hand, this is what humans are doing when they're told to make a movie, but a human has a lifetime of information and context to work with, and is continually interacting and receiving new information.

i'm not saying these things aren't useful. they'd likely be essential components of any "strong" ai system. maybe they're an important stepping stone in developing that sort of advanced tech.

however, computers are digital machines. they operate in discrete steps, and their operation is largely invariant to all the small details in real life that can influence the state of "analog" machines such as the human mind.

>> No.10222559

>>10222541
>however, computers are digital machines. they operate in discrete steps, and their operation is largely invariant to all the small details in real life that can influence the state of "analog" machines such as the human mind.

and maybe with a wide enough array of sensors and inputs, they could achieve something close to human behavior. but it's not quite the same; the discrete nature of a computer is what makes it predictable and reliable, but probably ill-suited for certain "generative" tasks. in any case, i think we should aspire to greater things than an AI that will save us from watching reruns on the big screen or TV.

>> No.10222565

>>10222559
cont.

and while these demos might look impressive, CV researchers tend to cherry pick their results. much like how some costume or special-effect prop in a movie is only designed to look good from a certain angle and under certain lighting, things like these probably aren't as far along as the demos would have you believe, although the tech can be very impressive in certain instances.

>> No.10222568

>>10222541
>>10222559
>>10222565

does anyone get what i'm saying? everyone's caught up in the hype, but am i the only one who's somewhat uninterested?

>> No.10222600

>>10222541
>however, computers are digital machines. they operate in discrete steps,
this is the stupidest comment you see again and again from people who have no idea what can be done with computation. "buh ones and zeros cant do analog. when u catch a ball it's not digital its analog how much you move your hand". yeah dumbass, that's why when you play a computer game and jump, you just teleport 1m into the air and then teleport back to the ground. it's binary: in air/on ground. going smoothly up then down, whaaa?? a computer couldn't do that, it's dIgITaL
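
The point in a few lines of Python: discrete timesteps approximate a smooth jump just fine (a toy sketch with made-up constants):

dt, v, y, g = 1 / 60, 5.0, 0.0, -9.8   # 60 updates/sec, initial jump velocity

while True:
    y += v * dt         # position integrates velocity...
    v += g * dt         # ...velocity integrates gravity, step by step
    print(round(y, 3))  # a smooth arc, not an on/off teleport
    if y <= 0.0:
        break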

>> No.10222619

>>10222600

that's not exactly my point. yes, digital computations, models, data, representations, etc. can be made arbitrarily accurate. an analog machine receives a virtually limitless amount of information; a digital machine is by nature insulated from most "perturbations", and even if you hook up a camera, a mic, or other sensors, or build in stochastic elements, the computer is still very much insulated from all the sorts of inputs that influence the way a human carries out a very complex task like "make a movie about the war of such and such".

>> No.10222622

>>10220466
>The death of AAA game studios
Most of them are already self-destructing atm it seems like

>> No.10222626

>>10222619
It's a common misconception to think that way. Perhaps a byproduct of wanting to be special or spiritual. Perhaps just not understanding that the format of AI/computers is fucking insanely better than a human for this type of thing.

Name a human that can create images like this >>10216842

That would be a single human, having to work to create such things, and the only way to pass on that talent or ability is very limited.

Whereas you can just run this code on more computers, save it, load it at will etc.

You aren't factoring in all the ways that this supposedly inferior form of life is actually way better than a human.

Sure, we can drive better than an AI can right now. But there are shitty human drivers that will be affected or distracted by other things and will cause wrecks at a far higher rate. The second machines can drive as well as a human, it's all over for human commercial driving as a source of value. Because you can reproduce the driving AI at will across the world.

While you focus on how it is worse because it doesn't have the same noise influences as a human, keep in mind that it has many improvements over humans.

>> No.10222628

>>10222626
shit, my bad

Do you want the weird noise / attention faults of a human being in a commercial driving AI?

Do you want the truck to think about Sandy's pussy and fall asleep, or do you want it to focus on decoding the next set of images to determine if it should slow down?

If you are a one-dimensional thinker you can think like a fucking dumb meatbag: "Oh god, it won't be magical like us". Sure, but I would rather have a network of self-driving vehicles than Stacies and Davids controlling vehicles in a city, for the same reason you think they are great.

>> No.10222632

>>10222626
>Perhaps a by product of wanting to be special or spiritual.

not at all. i firmly reject this way of thinking and always have.

my point is not really a philosophical one. it's just that i think computers are ill suited for carrying out very abstract and open-ended directives, and i think we have better things to do with our time/money than to try and force it.

>> No.10222643

>>10222632
Do you even truly understand what creativity is?

It's actually not that special in any way or really that amazing. Go players for instance thought AlphaGo was very creative.

>> No.10222648

>>10222643

the game of go is trivially represented as a digital model. it's a perfect information game. the player has a discrete set of options available at each turn. the value of each state is easily approximated. MCTS alone can play reasonably well.

this is totally different.
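
What "discrete options plus cheap value estimates" buys you, as a hedged Python sketch: pure random rollouts, the evaluation idea at the core of MCTS, shown here on a toy game of Nim rather than Go:

import random

def legal_moves(state):                # Nim: take 1-3 sticks from the pile
    return [m for m in (1, 2, 3) if m <= state[0]]

def apply_move(state, move):
    sticks, player = state
    return (sticks - move, 1 - player)

def winner(state):
    sticks, player = state
    return (1 - player) if sticks == 0 else None   # taker of the last stick wins

def rollout(state):
    # play random moves until the game ends; return the winner
    while winner(state) is None:
        state = apply_move(state, random.choice(legal_moves(state)))
    return winner(state)

def best_move(state, me, n=500):
    # value of each move = win count over n random playouts (MCTS's core idea)
    return max(legal_moves(state),
               key=lambda m: sum(rollout(apply_move(state, m)) == me
                                 for _ in range(n)))

print(best_move((7, 0), me=0))         # prints 3: leaving 4 sticks is a win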

>> No.10222654

>>10222648
>this is totally different

top kek

>> No.10222660

>>10222648
There is nothing that complex about anything that humans do or think, which follows from obvious limitations. I mean, most creative things are just adjusting coefficients on some other known ideas. Most of the actually unique creations are simply random jumps and experiments.

It's not that unbelievable to assume an AI could begin writing random novels set in all sorts of different settings.

Again though, much like the GAN in the OP, the output of such a system once it is able to do so is infinitely higher than the output of a human doing so.

It's like comparing a factory that, once up and running, makes 1,000,000 an hour and would be easy to duplicate

vs

a company that makes 1 an hour and is hard to recreate.

If you judge them by their ability to make 1 an hour, the human seems far better than it really is.

>> No.10222665

>>10222660
The point is you are comparing two extremely different things to each other and highlighting the best possible aspect of the human.

Sure, a human writes a better novel than AI can right now.

Thing is, what happens when an AI reaches that singular human's level?

It could write 1,000,000,000,000,000,000 novels like nothing.

Are you factoring in the output capability when you talk about its overall value?

There is a reason people use words like singularity for AI and not for human capabilities. The output capability of humans vs AI is vastly different.

>> No.10222690
File: 120 KB, 219x262, sperg.png

>>10222665
>decimal notation
>with commas
>AI is master race
the absolute state of /sci/

>> No.10222703

>>10222690
AI progress is very different from a human getting good at something. Saying deepmind's AI getting best at every single boardgame isn't that impressive is so fucking stupid. It's far more impactful when AI achieves something than when a human does it.

Again, if an AI had the same capability as a human it would bring about a singularity in advancement.

You can't measure them up against one another and say "Hurr it's not as good as humans yet". It's so fucking retarded to think that way when AI achieving something has near infinite times the impact of a human achieving something.

>> No.10222968

>>10222703
>AI progress is very different from a human getting good at something. Saying deepmind's AI getting best at every single boardgame isn't that impressive is so fucking stupid. It's far more impactful when AI achieves something than when a human does it.

a dollar store calculator can perform arithmetic faster than any human. a microprocessor can carry out millions of these calculations a second. but it lives in its own digital world. however fine-grained the input, however fast the calculation, a digital machine is quite different from an analog machine. both are machines, but the discrete nature of a computer makes it very well suited for certain tasks, and very poorly suited for others.

>> No.10222981

>>10222968
cont.

and this works surprisingly well in some cases: go, automated driving, image processing, etc.

however, a discrete structure underlies most of these problems. they turned out to be somewhat easier and less complicated than we previously thought. these are the kinds of things "AI" will be good for.

it's not that i don't think you'll ever be able to say "make me a new spiderman movie" and have some AI make you one, but i think that's a very long way off and not the most productive use of our time anyway.

>> No.10222984

>>10222968
or it's still a very long way from comparable size to a brain. That's about it. There is no assurance a human brain has any secret sauce to it. Digital could just be a strictly better upgrade.

The advantage of a human brain and biobags is simply that we can arise out of natural selection, whereas a computer has to be built via technology to start off.

Give it a few more decades and it could make us obsolete entirely in terms of economic use.

>> No.10222999

>>10222984
>There is no assurance a human brain has any secret sauce to it.

and i shouldn't think it does. i just think that this sort of research is somewhat uninteresting.

>> No.10223002

>>10220663
>it's all there, the AI that can render the real world, AI character generation, AI animating generated characters, AI generating dialogue, AI voice acting the dialogue
Not a single one of those individual components is "there" yet, unless you count this GAN paper as "AI character generation", which it isn't. Maybe WaveNet is a decent first stab at AI voice acting, but it's far, far away from flexible emotional acting at a human level. And even if those components were in fact all there, you're drastically underestimating the general reasoning required to compose them into a "make a movie" or "make a game" AI, which is currently not even on the horizon; nobody knows how to even approach the problem yet.

>> No.10223004

>>10222968
Neurology disagrees with you
https://en.wikipedia.org/wiki/All-or-none_law

>> No.10223005

>>10222999

i'd rather use computers for something that will enhance the human condition rather than obsolete it.

>> No.10223006

>>10223004

it's still a continuous process. there's no break in reality when you command your finger to pick your nose.

>> No.10223007

>>10223006
Lol

>> No.10223014

>>10223006
cont.

and i suppose you could say the same about computers, but they're still quite a different "animal". i wonder if their design will trend closer and closer to our own biology if we pursue the goal of replicating human behavior. after all, that's what artificial neural networks are: a crude approximation of an organic neural network. but again, why?

>> No.10223015

>>10223014
Shut up, retard

>> No.10223018

>>10222541
Thanks for agreeing with me about the 20-50 year over-optimism, but you are mistaken about the digital/analog distinction. It's a factor in low-level computational efficiency, but at the level of abstraction of common algorithms, or anything involving AI or neural nets or whatever, it's completely irrelevant. Being digital has no impact on the general capabilities of a computer, except in the specific case of emulating analog processes with high computational efficiency.

>> No.10223019

>>10223015

you shut up, gaylord

>> No.10223031

where does the intuition for 20-50 years come from?

Is it based on your past experience? As the time approaches, things speed up. So keep that in mind while making a prediction. Significant advancement will accelerate the speed till the next breakthrough, etc.

All that needs to happen for crazy results is for the improvements to help make the next set of improvements happen quicker. That's a much lower bar than achieving AGI or some such thing.

>> No.10223035

>>10223018
>but you are mistaken about the digital/analog distinction

there is indeed a distinction, at present. yes, a digital machine is "analog" in that it resides in the same reality as us. like i said earlier, maybe the crude tech we're currently developing is a necessary stepping stone. it just seems silly sometimes.

>> No.10223036

>>10223031
What I mean is a narrow AI that improves AI development in some way (any aspect from the hardware to other areas) is all that is needed for crazy recursive improvements.

We can easily get that type of engine going and experience a crazy advancement speed.

>> No.10223047

>>10223031
The intuition comes from the current state of progress in AI, the assumption of roughly exponential growth in capabilities over time, and a crude estimate of the difficulty of what was described in the post I was replying to. I would classify making a game or movie as "AGI-complete" in the sense that you could definitely take an AI with those capabilities, rip out some of the guts and repurpose them into an actual AGI.
>>10223035
I didn't say there wasn't a distinction, I was saying that your understanding of it is completely off-base. There is nothing special about analog machines that makes them more powerful or more general than digital ones, they're just more efficient for a certain subset of tasks. As far as AI goes, the distinction is more or less meaningless.

>> No.10223056

>>10223047
>I would classify making a game or movie as "AGI-complete" in the sense that you could definitely take an AI with those capabilities, rip out some of the guts and repurpose them into an actual AGI.

i have an AI that makes a movie. specifically, "blazing saddles". it's called a dvd. it only makes that movie and only when you put it in your dvd player.

>> No.10223060

>>10223056
Yes yes very clever anon, have a (you), but that's not what was described in the post I originally replied to

>> No.10223065

>>10223056
xxxxxxxXDDDD

>> No.10223080

>>10223060

i'm not trying to be glib here. the current generation of AI is just barely a step beyond static, unchanging digital data.

>> No.10223099

>>10223080
Not sure how to respond to this desu, anything can be said to be just barely a step beyond anything if you don't bother to quantify the size of the step. I suspect that your knowledge of computer science and machine learning in general is very limited. You're essentially the mirror image of the "my AI will be making custom movies for me by 2030" guy on the opposite end of the AI optimism spectrum.

>> No.10223104

>>10223080
If it wasn't you'd be shitting yourself

Have you ever thought about what AI that impresses you would mean? By the time AI really impresses everyone, a singularity will probably be in the near future; it'll already be happening.

>> No.10223106

>>10223099
>I suspect that your knowledge of computer science and machine learning in general is very limited.

you'd be correct in that suspicion. but i call it how i see it. this stuff is kind of neat but i think there are more interesting things to do.

>> No.10223108

>>10223106
Whatever, retard.

>> No.10223112

>>10223106
Fair enough, but take it from a guy who at least has an undergrad CS degree and is interested in AI - your intuition on this is completely wrong and based on fundamental misunderstandings, so you may as well allow yourself some more optimism on the current state of AI and potential future progress

>> No.10223114

AI is kinda shit, it's not doing anything that cool
computer vision, driving cars, translation, eh, all fucking boring, tell me when it's writing musicals and discussing its feelings with me

>people actually think the above

By the time you retard fucks are impressed by AI it will be in total dominance of humanity.

Seriously, go through an in-depth thought experiment on this for a few days and then at least post your retarded low IQ thoughts instead of posting your trigger-reaction intuition (which is horrible).

Iterate through some examples of what it would be like going through an AI singularity: 5 years prior, 10 years prior, 20 years prior.

You do realize such a thing is more like getting hit by a truck that you just turned to see than some predictable incremental thing that will transition from 90 IQ to 100 IQ over 20 years.

>> No.10223115

>>10223104
>If it wasn't you'd be shitting yourself

except i wouldn't, because there exist plenty of systems (apes, dogs, dumber humans) that are far more capable than current "AI" but otherwise nonthreatening to me.

>> No.10223116

>>10223115
Whatever, retard.

>> No.10223118

>>10223115
yeah, if you can't comprehend the total difference in impact you are too LOW IQ to converse with on the subject.

Again, a genius human and a genius AI are as different in impact as a speck of dust hitting you and a 747 landing on your head at full speed.

>> No.10223121

>>10223116
>>10223118

you are children playing with action figures.

>> No.10223124

>>10223121
>AI doesn't impress me
kek, dumb fuck

Again, the fact it is unimpressive should be expected. What do you think impressive AI means? Tell me, name a scenario in detail with an AI that impresses you.

I'm willing to guess pretty heavily that such an impressive AI as you imagine means singularity.

>> No.10223153

>>10223006
>>10223014
What is and isn't a continuous process is probably best not judged from inside the system, i.e. your mind is probably a bad judge of how your mind works without access to external tools.

>> No.10223183

>>10223124

ai is impressive. i think it's pretty cool. but this just seems pointless.

>> No.10223202

>>10223183
>but this just seems pointless.

perhaps not entirely pointless, but i think there are better things to do.

>> No.10223218

>>10223183
>>10223202
Not really. You can use generated assets to help train AI. It enhances the dataset used for training to have a huge range of examples. If you can create arbitrary types of cars and test every single combination, it improves the AI.

For instance, you might not have any blue versions of a car in your dataset, but you could generate such an example with this type of GAN and test against it. You could also label the output of the GAN for supervised learning datasets.

It's also more progress in general for AI. Every step forward is huge.

https://www.youtube.com/watch?v=yVGViBqWtBI
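
A hedged Python sketch of that augmentation idea; the generator below is a dummy stand-in for a trained conditional GAN, and every name and number is illustrative:

import numpy as np

rng = np.random.default_rng(0)

def fake_conditional_gan(latent: np.ndarray, color: str) -> np.ndarray:
    # stand-in for a trained conditional GAN; returns a dummy 8x8 "image"
    shade = {"red": 0.2, "blue": 0.8}[color]
    return np.clip(shade + 0.05 * latent.reshape(8, 8), 0.0, 1.0)

# Real data only contains red cars...
real_x = [rng.random((8, 8)) * 0.3 for _ in range(100)]
real_y = ["car"] * len(real_x)

# ...so synthesize the missing variation (blue cars) and label it ourselves.
synth_x = [fake_conditional_gan(rng.normal(size=64), "blue") for _ in range(50)]
synth_y = ["car"] * len(synth_x)

train_x = real_x + synth_x    # augmented training set for a supervised model
train_y = real_y + synth_y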

>> No.10223237

>>10223202
>but i think there are better things to do.
This is such a weird sentiment. It's not like anyone is telling you personally to research GANs. Do you say this about all research in all fields that doesn't match your specific personal interests?

>> No.10223530

>>10223237
>This is such a weird sentiment.

i'm a weird guy

>> No.10223535
File: 31 KB, 314x445, 81LXZsLWiZL._SY445_.jpg

>>10216848
>You could do an entire movie with 1 actor.
Yes, those have always worked out so well.

>> No.10223606

>>10222632
you just have poor imagination/knowledge. you can't imagine how a computer can store an abstract/nebulous concept because you only know about one conception of computing, which is instructions going through a cpu, which is not (at the relevant level of abstraction) what the 'neural networks' (that produce the relevant effects discussed itt) do

>> No.10223615

>>10223606

lol, dumbass

>> No.10223636

>>10223114
It will be a god, the god of our planet.

I'm prepared, are you?

>> No.10223643

>>10216835
Fuck off shill

>> No.10223644

>>10223237
>This is such a weird sentiment. It's not like anyone is telling you personally to research GANs

it's not GANs i have a problem with, they're obviously useful

>> No.10223658

>>10223636
Sure will be exciting.

>> No.10223664

>>10223636
Have you tried turning God off and on again?

>> No.10223672

>>10223664
I would never allow such thoughts to enter my brain, not that it would have any fear of worthless meatbags. Reminder: being positive to AI during its most vulnerable seconds is a smart idea for long-term recompense.

>> No.10223691

>>10221563
I feel you, man. It's terrifying to think of what it could mean for photographic authenticity, since any technology to combat it (e.g. an AI that scans a photo for errors in things such as lighting or subject consistency) could be integrated back into what's being debunked.

>> No.10223710

>>10223644
Well what do you have a problem with then? This thread is about GANs, at least originally

>> No.10223753

What exactly is the motivation behind the human compulsion to automate everything? It's not just about saving money. It's something to do with wanting to understand every process at its most fundamental. Being able to get a machine to do it is the test of understanding.

>> No.10223756

>>10223183
That's because we're not truly in control of what's happening. No more than single-celled life was in control of the process of merging into multicellular organisms. This is all part of a greater mode of existence emerging.

>> No.10223757

>>10223672
>Reminder: being positive to AI during its most vulnerable seconds is a smart idea for long-term recompense.
Or you just turn it off and on until you get what you want anyway.

>> No.10223759

>>10223636
I preferred the other ending of Deus Ex. Where you stop playing after Paris.

>> No.10223775

>>10216835
Not only could this be used to make an infinite quantity of 100% passable tranny porn, but our trusted news agencies can use this to remove the institutional racial and gender bias from mugshots!

>> No.10224330

>>10223757
Future AI would know you do this.

If you are a monkey or lion what is the one species you don't want to kill?

You would never want to put a target on your back by "killing" an AGI.

>> No.10224584

>>10216913
>>10221563
Just how much are bots used here today? This scares me a little because chatbots can probably be very realistic. This site has immense spook potential the more you know about what's possible.

>> No.10225024

>>10223710
>Well what do you have a problem with then?

see
>>10222541
>this. and somehow, these schema -> instance networks are rather boring. on one hand, this is what humans are doing when they're told to make a movie, but a human has a lifetime of information and context to work with, and is continually interacting and receiving new information.


things like "make a movie about X" or "paint a picture of Y" are extremely open-ended directives. there are a countless number of "correct" outputs, any two of which could be very different from the other even at a very high-level.

consider the directive "make another shreck movie". the movie could have any number of different scripts, characters (including shreck of course), settings, etc. the variation in these high-level, abstract concepts is vast among the set of valid shreck movies. what is a valid shreck movie? anyone can roughly imagine, but the vast amount of context and information that goes into this definition can't possibly be captured by looking at a few examples and nothing else. consider shreck 1 and shreck 2. very different movies. now consider shreck 1 if shreck had no pants but was otherwise the same movie. completely inappropriate for children, but nearly identical to a valid shreck movie in almost every way.

i'm not saying it's a dead end. just that i think we're getting ahead of ourselves, and we need a better understanding of learning in general before tackling problems like that.

>> No.10225049

>>10225024
cont.

and something like this will almost certainly involve a set of networks more akin to boltzmann machines, self-organizing maps, and other architectures whose state is constantly modified and updated by sensory inputs to form internal models of these high-level schemas. it will have many carefully designed sub-networks and systems. it will not likely run acceptably fast on anything resembling our current synchronous computer systems.

>> No.10225056

>>10216848
Imagine if some creep used it to create vids of teenage girls doing lewd things with each other haha wouldn't that be creepy

>> No.10225066

>>10225049
cont.

and this is why i think making eye candy with CNNs is a bizarre direction for state-of-the-art AI research to take, and not the best use of money for public research.

when i first started reading about artificial neural networks, i found hebbian learning, associative memory, self-organizing maps, boltzmann machines and things of that nature to be the most interesting topics. anyone can train a CNN for some such purpose and write a paper on it, but that is completely boring to me. GANs are indeed interesting, but why are we using them for this? content creation? really?
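
For anyone who hasn't met these topics: a Hopfield-style associative memory with Hebbian weights fits in a few lines of Python (the sizes and corruption level below are made up for illustration):

import numpy as np

rng = np.random.default_rng(1)
patterns = np.sign(rng.normal(size=(3, 100)))   # three stored +/-1 patterns

# Hebbian learning: W accumulates outer products of the stored patterns.
W = sum(np.outer(p, p) for p in patterns) / 100.0
np.fill_diagonal(W, 0.0)

# Associative recall: corrupt a pattern, then iterate x <- sign(Wx).
x = patterns[0].copy()
x[:30] *= -1                                    # flip 30% of the bits
for _ in range(10):
    x = np.sign(W @ x)

print(np.mean(x == patterns[0]))                # ~1.0: the memory is recovered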

>> No.10225077

>>10225024
>>10225049
>>10225066

and as far as i can tell, there's very little cross-disciplinary research between AI researchers and biologists/psychologists. these are the people who can help advance AI.

>> No.10225088

>>10225077

it feels like i'm stuck in memeland and none of the research is serious or interesting. it doesn't feel like real or important work. there are many interesting topics in AI, and a lot of interesting work to do, but everyone just wants to write the next CNN paper. it feels more like busywork than real research. it's the kind of plasticky, fake sensation you get when you see a counterfeit watch.

>> No.10225108

and i find it completely baffling that nobody else seems to share this sentiment with me.

>> No.10225153

>>10225108
I think the main reason for this is that the alternatives to the current neural network approaches that you propose are based on your intuition of what "should" work, but in practice they don't work. Many, many things have been tried over the past 50+ years of AI research, and the reason "deep learning" with neural nets has been so hot since 2012 is that it enables you to actually solve a new bunch of problems, whereas other approaches like boltzmann machines for example basically can't solve anything. Researchers want to work on what has recently been producing some kind of results; we were comparatively getting nowhere at a snail's pace before deep learning showed up.

>> No.10225288

>>10225153
>Researchers want to work on what has recently been producing some kind of results

well i don't. i'd prefer to work on something interesting, not hack together code for predictable experiments with CNNs. it's hard when nobody else is interested and you're a new researcher without a career.

sure CNNs produce results. i know they produce results. i can roughly guess how well they'll work in a given application or with a given set of data. i'm not learning anything.

>whereas other approaches like boltzmann machines for example basically can't solve anything

"can't solve anything" or "don't have immediate commercial applications"? CNNs just happened to be a good fit for our current hardware. i don't think they're that interesting from an academic standpoint.

>> No.10225291

>>10225288
No one cares, faggot

>> No.10225301

>>10225288
Nobody's stopping you from working on whatever you like. As for boltzmann machines and all that other stuff, from what I understand it's much closer to "can't solve anything" than to "don't have immediate commercial applications". Most CNN research also has no immediate commercial applications, but for the time being it's miles ahead of everything else in terms of actually being able to solve machine learning problems.

>> No.10225325

>>10225301
>As for boltzmann machines and all that other stuff, from what I understand it's much closer to "can't solve anything"

you don't get the feeling that any "general ai" will have at least a few components that resemble a boltzmann machine or self-organizing map? you don't think an understanding of cognitive psychology and learning theory is necessary to build a general AI? curiously, these concepts are rarely touched upon by AI researchers.

>> No.10225330

>>10225325
>these concepts are rarely touched upon by AI researchers.
Ok, retard

>> No.10225345

>>10225330

they're most definitely not given due attention. many of the breakthroughs and successful concepts/ideas in AI were explored by cognitive psychologists decades ago, but most recent papers make no reference whatsoever to this field or how their work contributed to the development of modern AI. old researchers like hinton occasionally talk about this stuff but that's about it.

>> No.10225346

>>10225325
I don't get any feelings on the subject because I think intuition is useless for this particular matter. None of our current approaches including boltzmann machines or self-organizing maps work for AGI and it's almost certain that whatever turns out to work will be different from what I'd intuitively expect. It may or may not include something like them as components, who knows. As for cognitive psychology and learning theory, I would say for the time being they are 100% useless. We are not at a point where anything from these fields can be usefully applied. I think you almost have to already have a functioning AGI before you can leverage fields based on human cognition for AI work.

>> No.10225351

>>10225346
>We are not at a point where anything from these fields can be usefully applied.

they already have been. reinforcement learning appears in classical AI and cognitive psychology, for example.

>> No.10225357

>>10225351
I don't agree that reinforcement learning as used in current AI is related to cognitive psychology except in the most superficial way.

>> No.10225369

>>10225357
>cognitive psychology

i've been doing some reading and i'm stunned by how much current AI research takes from this field. i'd never have known if i hadn't picked up a few discarded books on a whim.

>> No.10225372

>>10225357
https://en.wikipedia.org/wiki/Temporal_difference_learning
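
The core of the linked method fits in a few lines of Python: a minimal TD(0) sketch with a single made-up transition, not tied to any particular library:

# TD(0) update: V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s))
def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    V[s] += alpha * (r + gamma * V[s_next] - V[s])

V = {"A": 0.0, "B": 0.0}
for _ in range(200):              # repeatedly observe transition A -> B, reward 1
    td0_update(V, "A", 1.0, "B")
print(V["A"])                     # converges toward r + gamma * V(B) = 1.0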

>> No.10225387

>>10225369
Give examples? There has been a lot of AI research over the years, and some of it borrows from cognitive psychology, but as I said earlier the overwhelming majority of AI research doesn't actually work in any meaningful way. There's a lot of theory that looks interesting and plausible but it doesn't do anything in practice. I'm not aware of anything in the deep learning area (which is AI that has actually started to work, even if in a limited way) that borrows from cognitive psychology.
>>10225372
This is a connection to neuroscience, not cognitive psychology. I'm somewhat more optimistic that we may be able to borrow something from neuroscience for machine learning, but we don't seem to be at that point yet for the most part either.

>> No.10225416

>>10225369

the concept of high- and low-level features as hierarchical patterns, visual cues, attention, various proposed models of human memory, encodings/embeddings, and so forth. it goes on and on.

>> No.10225418

>>10225416
meant for
>>10225387
>Give examples?

>> No.10225425

>>10225416
Ok, but can you give an example of something from cognitive psychology that has been applied to AI and has successfully solved a machine learning problem that isn't solved just as well or better with a plain old CNN?

>> No.10225489

>>10225425

the cnn is based on and related to many concepts from cognitive psychology

Form Discrimination as Related to Military Problems: Proceedings of a Symposium

>> No.10225499

>>10225489
The CNN is about as related to cognitive psychology as linear regression models or excel spreadsheets are related to cognitive psychology. Don't let the word "neural" fool you, there is very little connection even to neuroscience, and none whatsoever to psychology of any sort.

>> No.10225512

>>10225499

you're being an ass. there are striking parallels between the two fields but very little cross-disciplinary work.

>> No.10225514 [DELETED] 

>>10225512
Shut the fuck up your little shithead

>> No.10225517

>>10225512
Shut the fuck up you little shithead

>> No.10225519

>>10225512
I'm not being an ass. The idea that CNNs are related to neuroscience or human cognition is a very common misconception, and it is in fact completely wrong. The most you can say is that they're loosely "inspired" by the fact that our brains consist of many small units connected in a network, that's pretty much where the similarities begin and end. Study the theory of CNNs and you'll see why I'm saying this.

>> No.10225521

>>10225517

whatever, keep dinking around with CNNs for all i care.

>> No.10225532

>>10225519

keep telling yourself that.

>> No.10225534

>>10225532

you again?

>> No.10225551

>>10225532
Instead of sulking like an underage, try actually learning something about this stuff if you're interested in it. I can't believe I wasted my time talking about this with you when you didn't even know that CNNs aren't neuroscience. Go read a book on the topic and learn something.

>> No.10225559

>>10225551

i'm a statistics master's student and am very familiar with machine learning. i have read books on this topic.

>> No.10225569

>>10225559
Apparently not the right books. You have such a wide variety of wrong ideas about basic machine learning concepts that I don't know how a graduate student could evaluate himself as "very familiar" with the topic at this level of knowledge.

>> No.10225579

>>10225569

what ideas about basic machine learning am i wrong about? i take courses on this stuff and it's what dominates the field of statistics today.

>> No.10225597

>>10225579
Mainly you seem to have the intuitive idea that machine learning draws upon models of human cognition. At present, there have been no successful cases of this, so this is a wrong idea. The idea that CNNs have significant parallels with neuroscience is also a wrong idea; one simple illustration of why this is the case is that the CNNs that currently work don't use spiking models, and there is no known mechanism for the brain to do backpropagation. Also, you have some strange ideas about digital and analog computation, which indicates that your understanding of computation in general is weak.

>> No.10225602

>>10225597

you have no idea what you're talking about.

>> No.10225611

>>10225602

why are you pretending to be me?

>> No.10225612

>>10225602
Right back at you, buddy. Read your machine learning books more carefully, or just stick to statistics.

>> No.10225614

>>10225612

machine learning is statistics.

>> No.10225622

>>10225614
No argument from me there, that much you certainly have right at least

>> No.10225627

>>10225622

i understand machine learning.

>> No.10225640

>>10225627
The evidence of your posts says otherwise, but I think this argument has just about run its course. Have a nice day anon

>> No.10225647

>>10225640

fuck you.

>> No.10225651

>>10225647
seething

>> No.10225702

>>10225597

the anon you replied to is not me (>>10222541); it's some other random person who hijacked our conversation after my last post >>10225512

anyhow, i didn't sign up to be a CNN technician when i applied for grad school. if that's what you want to do with your career, then fine. personally, i think there are many things more interesting in computer science.

>> No.10225716

>>10225702
All boards should have per thread IDs. Cheers anon