
/sci/ - Science & Math



File: 24 KB, 400x400, XTMpdxaZ.jpg
No.9973577

https://www.youtube.com/watch?v=zywIvINSlaI

http://karpathy.github.io/neuralnets/

ARTIFICIAL GENERAL INTELLIGENCE GENERAL THREAD

Discuss
- Deep Learning
- Machine Learning
- Artificial General Intelligence

Don't discuss
- Ethics
- Safety
- Philosophy

>> No.9973582

>>9973577
You want to discuss something that doesn't exist and won't for at least 30 years (probably more), but not discuss the philosophy around this nonexistent technology, when that's the only real discussion to have about it?
Doesn't make sense.

>> No.9973593

>>9973582
You can discuss potential and other things. Just not "muh agi can't exist because of human nature" type things, or worrying that it will be racist.

>> No.9973678

>>9973577
>Don't discuss
>Ethics
Good.

>> No.9973701

DLSS AA from Nvidia is a pretty interesting use of deep learning to replace old code.

Seems like it should also be super good for collision detection:

https://www.researchgate.net/profile/Emilio_Olivas/publication/3996853_A_neural_network_approach_for_real-time_collision_detection/links/09e4150c257f5a3bf1000000/A-neural-network-approach-for-real-time-collision-detection.pdf

>> No.9974459
File: 39 KB, 1600x837, 0_xn9KO7B_Bwa5pPB9.jpg

bump

>> No.9974467

>>9973577
>>>/x/ is that way

>> No.9974468

What sort of mathematical background do I need to properly understand David Wolpert's No Free Lunch theorems?

>> No.9974478

>>9974468
Build a bunch of machine learning projects first, then care about the math. Otherwise you will be uselessly hypothesizing about shit you have no experience with.

>> No.9974493

What does /agi/ think about IMPALA?
https://deepmind.com/blog/impala-scalable-distributed-deeprl-dmlab-30/

>> No.9974495

Basically, let's say you are about to study a new subject. The best way of doing so is not studying the math behind it. The thing you want to counteract is being stuck in a thought valley or thought track.

That means trying to view the problem from as many different and original perspectives as possible before you dive into the literature, because once you begin learning the "standard" route you become locked into those patterns of thought and ways of viewing the subject.

Mathematical background is not going to help you escape a thought-pattern trap. It would actually be very stupid to do the "math background" in depth before diving into the subject itself.

>> No.9974539

>>9973701
why is this in the agi thread kek

>> No.9974553

>>9973577
>Don't discuss
>- Ethics
>- Safety
>- Philosophy
Retard

>> No.9974567

>>9974553
>whats the ethics and safety around a computer function

>> No.9974579

>>9974553
>hurr durr ai is racist we don't want it to be objective stop researching right now
>hurr durr ai will kill us all even though it has no reason to we need to stop all research
>hurr hurr but what if we are the ai we need to stop researching right now we r playing god

Never gonna make it like that.

>> No.9974584

testing

>> No.9975384

https://www.youtube.com/watch?v=qaMdN6LS9rA&list=PLAdk-EyP1ND8MqJEJnSvaoUShrAWYe51U

>> No.9975685

>>9974579
>he actually wants to "make it"
begone normstain

>> No.9975709

This field badly needs to be split into applied vs non-applied tracks.

>> No.9975714

Easiest way to start learning:

https://github.com/ageron/handson-ml

Go here and download everything.

Then go to Colab:
https://colab.research.google.com/

Upload the chapter you want and work through the notebook examples.
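
For reference, a minimal sketch in the spirit of those notebooks (not taken from the repo itself; the dataset and model here are arbitrary stand-ins):

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Load a toy dataset and hold out a test split, as the early chapters do.
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    # Fit a simple classifier and check generalization on the held-out data.
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("test accuracy:", clf.score(X_test, y_test))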

>> No.9975984

>>9973577
Any AI will ALWAYS seek to destroy the entire universe.

period.

>> No.9976013
File: 168 KB, 600x530, 2016-02-12-809742.png

I am in need of help. I'll be having an interview for a position as an AI researcher soon. Give me a list of things I need to brush up on beforehand. Stuff I am likely to be asked about and such. This is my first job.

>> No.9976039

imagine being so fucking thick as to not realize AGI is already here.

>> No.9976061

>>9975984
red pilled

>> No.9976154

>>9975709
it already is; the OP just posted links to applied stuff in a thread titled AGI (i.e. not applied)

>> No.9976162

>>9976013
Talk about anti-aging, flat earth and racial equality. May as well talk about other fanciful things that don't exist whilst you are there.

>> No.9976185

>>9976162
Pls don't troll me anon; I have a fragile heart.

>> No.9976195
File: 53 KB, 1600x900, ......jpg

>>9973577
Do AI waifus exist already?

>> No.9976209

Are there any real use cases yet combining AI and blockchain tech? (Serious question, sadly.)

>> No.9976210

I recommend a good 16-minute video from DARPA that describes the limitations of current deep learning and suggests what the next "wave" of machine intelligence will look like.

https://youtu.be/-O01G3tSYpU

Also, search for the "artificial general intelligence" MIT course by Lex Fridman on YouTube. It consists of dozens of videos, each spanning about 1 to 2 hours. It will give you a good "engineering" perspective on artificial intelligence.

>> No.9976213

>>9976209
Also CRISPR. Is there any technology combining them? (IoT is optional.)

>> No.9976281

>>9976013
1 week into AI

Just tell them you love working on the input pipeline and cleaning up datasets. I bet that works since everyone hates that part.

>> No.9976285

>>9976213
/sci/ in 1994 would be calling the internet a meme that will never take off

>> No.9976290

>>9976210
top kek, funny. His third-wave example matches, point for point, a problem set I reasoned out and have designed, waiting for implementation.

>> No.9976304

>>9976281
>everyone hates working on the input pipeline and cleaning up datasets
Why not automatise it then?

>> No.9976305

>>9976290
Are you the >>/sci/thread/S9959595 guy?
How are things going?

>> No.9976323

>>9976281
But that's 95% of the job though
If you have clean, correctly labelled data, you just slap a model on it and it just werks

>> No.9976334

>>9976305
Okay, I have done some basic familiarization with TensorFlow, Python, and the field in general.

I need to hit a point where I can start working fluidly, if that makes sense. Meaning experimenting with various architectures and their genetics in TF.

The key thing is going to be "breaking out" to a point where I can experiment without ever referencing any literature or following any existing patterns or ideas, when I start working on more interior systems as compared to simple DNNs.

>> No.9976385

>>9976334
Sounds interesting. Good luck.
Maybe you're actually onto something.

>> No.9976425
File: 29 KB, 442x380, 97315cfa097437e816ad186fe42ec378--manga-girl-manga-anime.jpg

>>9976013
>mfw I am a pure math grad with a comp sci minor trying to wing it
Wish me luck lads. The pay is probably going to be shit too.

>> No.9976430

>>9976304
Who's going to clean up the datasets you'd need in order to automate it?

>> No.9976431

>>9976425
1 week diving in.

I would just talk obsessively about the input pipeline: normalization, standardization, etc., endlessly, and talk about "Stack 2.0" input tools. Ask them about their input pipeline tooling and all that shit.

I'm going to guess they would hire anyone interested in that topic. Just ask shit like "I've really been focusing on different input pipeline aspects and implementations; could I ask what type of pipeline or tooling you have here?"

Ask like you are questioning if you should work there or not based on how interesting it is.

Ignore all the other shit.
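
To make the normalization/standardization talk concrete, a minimal sketch of one standard pipeline step (the data here is made up; the key point is fitting statistics on training data only):

    import numpy as np

    # Standardize features using *training set* statistics only, then apply
    # the same transform to new data -- a classic input pipeline gotcha.
    def fit_standardizer(X_train):
        mu = X_train.mean(axis=0)
        sigma = X_train.std(axis=0) + 1e-8   # avoid division by zero
        return lambda X: (X - mu) / sigma

    X_train = 50.0 * np.random.rand(100, 4)  # hypothetical raw features
    standardize = fit_standardizer(X_train)
    X_new = 50.0 * np.random.rand(10, 4)
    print(standardize(X_new).mean(axis=0))   # roughly zero-mean after transform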

>> No.9976434

>>9976385
He's not

>> No.9976448
File: 25 KB, 1049x578, lPMOSDX.jpg

>>9976434
e1^e2 = -(e2^e1)

for uint8 basis blades
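
A minimal sketch of what that uint8-blade identity looks like in code, assuming the common bitmask encoding (each bit of the integer marks one basis vector; the sign comes from counting swaps into canonical order):

    # e1 = 0b01, e2 = 0b10; wedge returns (sign, blade), or (0, 0) if degenerate.
    def wedge(a, b):
        if a & b:                  # shared basis vector => wedge is zero
            return (0, 0)
        swaps, t = 0, a >> 1       # count adjacent swaps needed to sort the
        while t:                   # product into canonical bit order
            swaps += bin(t & b).count("1")
            t >>= 1
        return ((-1) ** swaps, a ^ b)

    e1, e2 = 0b01, 0b10
    print(wedge(e1, e2))  # (1, 3):  +e12
    print(wedge(e2, e1))  # (-1, 3): -e12, i.e. e1^e2 == -(e2^e1)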

>> No.9976466

>"General" intelligence means it has no limitations because of computational power or contextual environment

That's why every human and crow can visualize in 1000D: we can adapt to any problem so easily.

>> No.9976493

>>9976434
t. AGI

>> No.9976572

I WUT TO LEARN 3D PIPELINE
I GUT BOOK AND LEARN 3D PIPELINE
I CUN COPY PASTA

Methodology is to imagine the space from a non-biased perspective. So, for instance, you spend a period of time iterating through different ideas of what the field looks like, how it works, etc., from bare-bones assumptions.

Now instead of thinking carbon vs hydrogen you think of them as 6 vs 1, because your mind didn't get polluted by shitty medieval names.

>you have one opportunity to originally think about the space without bias after diving into literature etc.

>not maximizing it

>> No.9976575

>>9976572
I have just spent 23 years studying the AI field in depth from an academic perspective. I can derive all the mathematical proofs associated.

I am totally fucking ready and impressive.

>TOP FUCKING KEK

>> No.9976611
File: 46 KB, 610x373, 50.jpg

>>9976430
Another AI.

>> No.9977095

>>9976611
>>9976611
This. Another species from Mars made Earth and is waiting for us to come up with AGI for them.

>> No.9977171
File: 62 KB, 680x512, 1401982542241.jpg

Unsupervised Predictive Memory in a Goal-Directed Agent
>Our model demonstrates a single learning agent architecture that can solve canonical behavioural tasks in psychology and neurobiology without strong simplifying assumptions about the dimensionality of sensory input or the duration of experiences.
https://arxiv.org/abs/1803.10760

>Remarkably, though it had not been explicitly programmed, MERLIN showed evidence of hierarchical goal-directed behaviour, which we detected from the MBP’s read operations.

https://youtu.be/04H28-qA3f8

>> No.9977230

>>9977171
https://deepmind.com/blog/differentiable-neural-computers/

Is really awesome too.

>> No.9977235

Sheit just a ridiculously good idea.

>> No.9977242
File: 436 KB, 1930x1276, HLAIpredictions.png

https://arxiv.org/pdf/1705.08807.pdf

>> No.9977516

>>9977242
Stop spamming this everywhere. You can't predict the future like that.

>> No.9977522

>>9977171
https://www.youtube.com/watch?v=9z3_tJAu7MQ

>> No.9977594

>>9977522
Holy shit.

>> No.9977611

>>9977516
Do you have a better way of predicting the future?

>> No.9977614

>>9977611
how in any way is that an argument against what he said

>> No.9977624

>>9977614
He has not demonstrated at all why the statistical methods used in the paper are in any way invalid.

>> No.9977653
File: 170 KB, 1029x985, GunnarEiM.png

>Applied GPU programming
>AGI
You're all lisp weenie tier to me

>> No.9977679

>>9976013
>>9976425
I only got asked general questions. I hate this sort of interview so much.

>> No.9977736

>>9977171
>>9977230
>>9977522

lol noobs. Look up free energy minimisation. It's currently the only way to unite unsupervised perceptual learning and reinforcement learning policies, and it also addresses the exploration-exploitation problem.

>> No.9977759

>>9977736
MERLIN does free energy minimization. Be honest anon, nothing comparable to it has been built before. It's really quite astonishing how quickly things are progressing.

>> No.9977775

>>9977759
Sounds quite interesting. Well, over the next 10-20 years things will advance quite far.

>> No.9977793

>>9977759
These ideas aren't novel to the paper, though. I think we especially need to do away with the idea of memory. Maybe not do away with it, but really analyse what memory really is.

>> No.9977803

>>9976209
The way I see blockchain, as a layman, is that it could provide infrastructure for communication between AIs. I know nothing though.

>> No.9977983

>>9977736
>free energy minimisation
seems shit

>> No.9978027

I wonder if the emphasis on making it like reading/writing to memory, a la a computer, is a sign someone is retarded.

>> No.9978030

>>9978027
?

>> No.9978050

>>9978030
It seems boring/stupid, is all. Of all the cool choices possible, you just set up a reader/writer for memory handling in the traditional computer way.

>> No.9978284

Machine Learning is the Thomas Edison of fields. Steals all its ideas from somewhere else then mashes shit together until something works. Pathetic brainlet field for inferiors incapable of theoretical work.

>> No.9978289

>>9978284
when will mathlets learn

>> No.9978646

Multi entry point and multi exit point

>> No.9978661

The thought vector also locates things and describes links.

Pretty interesting how hierarchical things get.

>> No.9978673

Man this shit is so fucking easy lol. Imagine it not all just making sense and being easy to make something better with a few minutes of thought. Must be horrible to not be a monster.

>> No.9978678

Any (recent) papers that show an actual breakthrough of some sort? I'm kinda losing my faith in machine learning.

I just don't see networks being able to generalize a problem well. All I read is that they just do well on the data fed to them, and every small deviation means the network needs to be trained again.

>> No.9978683

Status update: finished solving the entirety of computer vision via statistical methods (design only, not implementation).

Not sure if I will bother implementing it before giving it some intelligence.

>> No.9978685

>>9978678
I'm solving it. Don't worry.

I was thinking the same thing. That the field would get solved and I could be lazy.

Unfortunately not the case but I'll solve it.

>> No.9978693

>>9978678
> Im kinda losing my faith in machine learning,
Such things as Azure Machine Learning Studio exist anon. The field is officially dead. It was all a fad.

>> No.9978702

Which portions of vision are handled by statistical vs dynamic systems, do you think?

Seems like almost all concepts we have language for are handled dynamically. That's pretty annoying since the current methods suck at that.

Got to imagine it's some point before "Ford F150" but after "Big Moving Object".

>> No.9978744

>>9977679
I'm interested, anon, what were you asked?
I'm starting a major in A.I. this fall, and I've already settled for a web dev job, since no one seems to hire ML entry levels in this bumfuck area, let alone in research.
t. comp sci minor brainlet

>> No.9979043

Okay, so: layering of abstractions from the 2D pixel image layer into the different abstract levels. It's pretty interesting how it works. All the layers are kept and given to the dynamic system.

Slowly unraveling this bullshit. Too slow. Wish I could finish faster; weeks is too long a timeframe.

>> No.9979062

>>9979043
Might as well post this since it's the weakest and most isolated thing. I have no idea where the literature is.

The statistical side of things sends a series of layers to the dynamic portion of the brain.

At the bottom level you have a blended pixel view, meaning the less detailed it is, the more blurred it is perceived as. On top of this layer are abstracted-out layers.

So, for instance, an abstraction layer that separates the roof from walls from floor if you are inside, or "right side" vs "left side", etc.

Then more layers for each object, general colors, adjective layers, etc. All the things that got abstracted out as the scene was processed. "Movement", etc.

Basically all of that gives us humans a fucking amazing UI, where we can "look" around the image to get detailed views and our eyes will copy this movement seamlessly.

The objects are not fully abstracted, so for instance you can look at some object and refer to dynamical systems for a complete definition/analysis, such as for reading, some counting, or unsure object analysis.

Anyway, those are the basics of "vision layers". I'm sure that's already been done to death in the literature, so I don't think it's very original, or I wouldn't post it here like most of my good shit.

>> No.9979065

>>9979062
Basically, our "vision" is actually not a simple image. There is a fuckload of layers on top of the image behind the scenes that give us abstracted-out analysis of the scene. We just naturally flow through these layers, so we don't notice consciously.

It also means the neural networks we commonly see are fucking jank shit, since they don't create these types of layers at all (at least the intro tutorial ones).

The outputs from NNs in general are pretty garbage. They should be lists of different basis blades representing thought vectors, for sure.

>> No.9979136

Shouldn't an average depiction of a neuron have equal numbers of dendrite branches and synaptic terminals?

Or is there something that lets one outnumber the other in the brain?

>> No.9979600
File: 608 KB, 1018x970, 1536078795180.jpg

What's the best cloud service to train your data on? AWS?
I'm trying to work on some more complicated models over the weekend. I only did simple shit that could run well enough on my notebook so far.

>>9978744
>what were you asked?
Many questions that were meant to assess whether I would fit in with the rest of the guys there (some of the questions were really vague, like whether I think I would be able to handle working on a project with ever-shifting goals -- how do you even answer that with anything other than a vapid "I think so"?) + many broad questions about my studies. They hinted at the end that if they're interested in hiring me there's going to be a second interview though, so maybe that's why they did not ask me any technical questions.

> no one seems to hire ML entry levels in this bumfuck area
Well, the company I interviewed for was looking to start a ML group/department from scratch, so they were looking for both experienced researchers and fresh graduates.
I live in a city that's a minor IT hub though, so there's slightly more variety in the job market here.

Maybe you could try moving? It is my impression that companies are reluctant to hire for remote work from the get-go.

>> No.9979616

>>9974468
Broadly put, all machine learning is just linear algebra + probability theory & statistics.
Maybe try something like this https://mml-book.com since it's geared at complete greenhorns.
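
As a one-screen illustration of that claim, ordinary least squares is literally both at once: a statistical model solved with plain linear algebra (toy data below):

    import numpy as np

    # Statistical model: y = Xw + noise. Linear algebra: solve min ||Xw - y||^2.
    np.random.seed(0)
    X = np.column_stack([np.ones(50), np.random.randn(50)])  # intercept + feature
    y = X @ np.array([2.0, -3.0]) + 0.1 * np.random.randn(50)

    w, *_ = np.linalg.lstsq(X, y, rcond=None)  # solves the normal equations
    print(w)  # close to the true [2, -3]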

>> No.9979903

bump

>> No.9979988

>>9979600
Colab has free GPU support from Google.

>> No.9979990

>>9979600
Did you ask them shit about the input pipeline plans? Language choice between PyTorch/TensorFlow/Keras, etc.? What their goals are?

You should establish self-worth and not seem like a bitch. AKA: why should I work here?

>> No.9980090

>>9979990
>Did you ask them shit about the input pipeline plans?
I did but I got vague replies. I got the impression the ML group the company was trying to set up was in a very incipient phase and nothing was set in stone. I asked again at the end of the interview and I got another vague reply, this time from the CTO (amounting to "yeah, we definitely have a pipeline set-up, but I'm not going to go into any sort of specifics").

> language choice between pytorch/tensorflow/keras etc?
Same as above. I tried to prod the person who was interviewing me for specific languages and tools they would use or would be expected of me but the replies I got were something to the extent of "whatever you know is good".
The only specific info I was able to get was that they're planning to use Microsoft's ML platform, which gives me some pause going by this >>9978693 anon's post.

>What their goals are?
This one is the only thing I have a clearer idea about, as they went into some detail about it and even mentioned a current project.

>You should establish self-worth and not seem like a bitch.
I think I came off more like a robotic nerd than a bitch.

>> No.9980097

>>9980090
Eh, I'm just joshing, and bored silly from my current regime of learning. I have to take lots of breaks / mini naps when I learn.

My advice comes from having less than 2 weeks in the area, so it's probably garbage. Going to take me at least 4 weeks to pass everyone.

>> No.9980104

>>9980090
>going by this >>9978693 anon's post.
And other opinions about it I found elsewhere of course + my general suspicion directed at Microsoft.

>> No.9980120

>>9980097
I have no more experience than you do anon. I am a (pure) maths guy. I only decided to change tracks because I realised I'm not enough of a martian to make it as a mathematician, and data science/AI stuff seems open ended enough to be more than boring clerical work, unlike most programming jobs.
The alternative would have been to go into shit like finance as a quant, but I just don't like that crowd at all and it would have required me to move to a big city (making it double-bad).

>> No.9980159

>>9979065
>It also means the neural networks that we commonly see are fucking jank shit. Since they don't create these type of layers at all (at least the intro tutorial ones).
Yes they do. That's exactly what the kernels do in convolutional neural networks. You're clueless.

>> No.9980166

>dude eigenvalues lmao

>> No.9980185

>>9980159
man ur dumb as fuck lmao

>> No.9980201

>>9973593
intelligence by definition can't be racist so I'm not worried

>> No.9980202

>>9980159
It's not close to the same.

The base layer is a blurring together of the input with all available data.
You can search this created data, and your fucking eyes respond by gathering more data.
On top of this are "hidden" layers of abstract data for all the objects in sight, all available for you to gather at any point in time.

>what does this have to do with convolutional nets, something I haven't looked at yet except in passing
pretty much nothing

I was specifically talking about the human vision layering of data.

Sure, it's related to what people are doing in CV right now, but you can't call a convnet the same thing unless you are pulling fuckloads of data from it and arranging it out.

Right now the stereotypical convnet has a single output layer. How can that be compared to a fucking massive layering of data that even includes a blended, optimized representation of the scene, with near limitless data abstractions built on top?

>> No.9980204

>>9980201
:^)

It's going to like dogs more than humans. Dogs will occupy the galaxy and get lots of treats from the AGI species

>> No.9980483
File: 254 KB, 717x393, Hl2H6.png

>>9980202
Each convolutional filter creates exactly the kind of abstraction layers you are talking about: starting with simple lines, contours and areas of light and dark, further convolutions on those abstractions build into more complex, specific recognition, like your walls, roofs and floors example.

Those abstractions are all there in the network, the fact they aren't the output layer is wholly irrelevant.

You are talking about computer vision yet you say you've only looked at convolutional networks >in passing, when convolutional neural networks are the topic of the vast majority of all computer vision papers. You are not going to be revolutionizing anything.
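
This is easy to verify directly: with the Keras functional API you can expose those intermediate feature maps as outputs (a toy untrained net; the layer names are just illustrative):

    import numpy as np
    from tensorflow import keras

    # A toy convnet whose intermediate feature maps are the "abstraction layers".
    inputs = keras.layers.Input(shape=(28, 28, 1))
    h1 = keras.layers.Conv2D(8, 3, activation="relu", name="edges")(inputs)
    h2 = keras.layers.Conv2D(16, 3, activation="relu", name="parts")(h1)
    out = keras.layers.Dense(10, activation="softmax")(keras.layers.Flatten()(h2))
    model = keras.Model(inputs, out)

    # A second model exposing the internal abstractions for inspection.
    probe = keras.Model(inputs, [h1, h2])
    feats = probe.predict(np.zeros((1, 28, 28, 1)))
    print([f.shape for f in feats])  # [(1, 26, 26, 8), (1, 24, 24, 16)]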

>> No.9980498

>>9980483
Eigenvalues =/= abstraction.

>> No.9980717

>>9980483
Had a good chuckle. Ants are so fucking cute.

>> No.9980745
File: 59 KB, 664x482, perceptron_schematic.png

matmul is AI
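
The schematic in code, for what it's worth (a sketch of one layer of the pictured perceptron):

    import numpy as np

    # An entire layer really is one matmul plus a nonlinearity: y = f(Wx + b).
    def layer(W, b, x):
        return np.maximum(0.0, W @ x + b)   # ReLU(Wx + b)

    W = np.random.randn(4, 3)   # 3 inputs -> 4 units
    b = np.zeros(4)
    x = np.random.randn(3)
    print(layer(W, b, x))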

>> No.9981150
File: 703 KB, 2000x3000, neuralnetworks.png

Time to relax for the day before head implodes

>> No.9981182
File: 58 KB, 565x547, 1532230303556.png

>>9981150
Great image! Saved, thanks.

>> No.9981218

So say you have two neural networks:
A) one designed to morph an input image into a "most recognizable entity"

B) one designed to classify the image and then refer to a stored "most recognizable entity"

On a sufficiently large dataset, all other variables equal, which is better?

For instance, they both take in MNIST digits 0-9 and output a basic image with an idealized 1, 2, 3, 4, 5, etc. (no errors/rotation/etc.)

Is the one that morphs the input into straight lines and so on faster than one that classifies and then outputs the "average" 4 it learned as the stereotypical 4?

How do they differ in output, etc.?

(This is not supposed to be enlightening, original, or a good set of questions; merely brainstorming.)
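
Option B is the easy one to sketch. Assuming some already-trained classifier taking (28, 28, 1) inputs (hypothetical here), the stored "idealized" digit can be as dumb as the per-class pixel mean:

    import numpy as np
    from tensorflow import keras

    # Per-class mean images stand in for hand-made "idealized" digits.
    (x_train, y_train), _ = keras.datasets.mnist.load_data()
    prototypes = np.stack([x_train[y_train == d].mean(axis=0) for d in range(10)])

    def idealize(image, classifier):
        # Classify, then look up the stored prototype -- no morphing network.
        digit = int(np.argmax(classifier.predict(image[None, ..., None])))
        return prototypes[digit]

Option A (morphing the input directly) would be something like an autoencoder trained against such targets; which is faster depends entirely on the architectures chosen.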

>> No.9981442

my go-to relaxation consumption

http://cs231n.github.io/
https://www.youtube.com/watch?v=NfnWJUyUJYU&list=PLkt2uSq6rBVctENoVBg1TpCC7OQi31AlC

>> No.9981504

>>9981150

GIBE PAPERS ON PURE MATHEMATICAL GRAPH THEORY CHARACTERIZING THESE OBJECTS. I CARE NOT FOR APPLICATIONS

GIBES ME DAT

>> No.9982104
File: 3.83 MB, 800x538, DMLab-30-06.gif

bump

>> No.9982132
File: 1.93 MB, 460x259, 1526362434545.gif

>>9974567
>what are the safety implications of a general intelligence with none of the limitations of a human mind and the potential to rapidly improve itself

>> No.9982222

>>9981504
>thinking math is all that good

It's too slow and the best areas of math are unpopular.

>> No.9982228

>>9982132
Unless those theorizers influence the people making AGI, it doesn't matter. Also, even with great intentions you will still have people messing up. Plus it's so far away; unless it's some wizard making the AGI, why would a theorizer be better than them at making it safe?

You have the case that any smart and well-intentioned person creating it would do so in the best way possible, and anyone not well-intentioned is going to fuck it up regardless.

>> No.9982268
File: 129 KB, 314x278, 1466574208808.png

>>9982222
I wonder what he meant by this.

>> No.9982282

>>9973582
>something that doesn't exist
AI by definition is the study of processes which mimic cognitive behaviors.

>> No.9982295

>>9982268
"Math" is vague enough, so I mean the ideal versions: proofs, exact calculations, etc.

>> No.9982376

ONE!!! perspective is to view things from a nature standpoint. In this context you would view it from a "what advantages do I have as compared to biological evolution" angle.

In this consideration it definitely has to be achieving wide variance. Whereas natural evolution moves around building off previous states, we can search with a wide net, because the only "survival" requirement is that we would think of an idea and want to try it.

From these simple derivations you can see why so much of ML is such shit almost immediately. FROM this ONE perspective, which has an unknown applicability to finding AGI.

Mystical and downright insane searches through the space seem ideal from this perspective. The necessary thing is a good building-block language to begin from (and alter too).

Attaching survival expectations or functional expectations also doesn't really matter. Rather, lessons learned are more important than how good the system itself is.

>> No.9982451

I am entering an AI competition; it's a 1v1 turn-based grid-map game. I really want to use deep learning to train an AI, but for the last few months every attempt I made has been a complete failure. I have been experimenting with DQNs and all of their improvements (read: Rainbow DQN). I thought the reason for the failures was that my action space is too large (256 actions), or that I am training two AIs against each other.

I am an experienced software developer with a mathematical background. The deadline for the competition is 2 days away and I need some advice. Training is not a problem (I have almost unlimited GPUs available), I just need a good and solid algorithm to implement. Any suggestions?
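
For what it's worth, one standard trick for big discrete action spaces is to mask illegal actions before the argmax, so the effective branching factor shrinks per state (a sketch; the legality mask here is made up):

    import numpy as np

    # Mask illegal actions so the 256-way choice shrinks to the legal subset.
    def greedy_action(q_values, legal_mask):
        masked = np.where(legal_mask, q_values, -np.inf)
        return int(np.argmax(masked))

    q = np.random.randn(256)            # Q-network output for one state
    legal = np.zeros(256, dtype=bool)
    legal[[3, 17, 42]] = True           # hypothetical legal moves this turn
    print(greedy_action(q, legal))      # always one of 3, 17, 42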

>> No.9982507
File: 55 KB, 470x631, a3554d38f7ebdf04548d6a32227f0644.jpg

>>9973577
i want to make my ai gf and train her for cuddles, hand holding and neck kisses

>> No.9982527
File: 1.03 MB, 500x575, bc19e814e370d37e5a742ab7c85a2727.gif

>>9982507
this, AI was literally discovered in order to create the perfect woman

>> No.9982562

>>9982451
https://deepmind.com/blog/alphago-zero-learning-scratch/

>> No.9982799

>>9982507
>not wanting arbitrary control over your drives and moods
Sticking to human emotions and drives would be such a dumb idea.

>> No.9983455
File: 92 KB, 1280x720, 1523865164693.jpg

>machine learning experts still haven't discovered mind fields or applied symmetry groups to reasoning yet

>> No.9983584

>>9983455
>applied symmetry groups
>mind fields

they haven't even applied fractals correctly, this field is sad

>> No.9983590

the coolest shit to read right now is grid cells and place cells.

>> No.9983593

>expect to learn about a cool field
>just applied shit like do X layers together, make the input better, use a huge fucking network of compute to test it a jillion times.

>> No.9983594

>>9983593
Chapter 1 of a serious AI book

Introduction to Thought Vectors
Outer Product
Contraction
Abstract Syntax Trees of thought vectors

>> No.9983604

>>9983594
Chapter 2

Describe various units abstractly (LTU, LSTM, etc etc)
Breaking up an AI system into a graph
Creating a basic system from a general replicating to fit genetics code
Merging two systems together and interfacing them.
Creating a multi-input, multi-output system with 4 distinct subsets.

>> No.9983643

>>9983604
Would also have to explain in there how to create AC links between subsystems automatically.

I guess I could do the first two chapters in notebooks without revealing too much power level.

The book is created by making a goal point and a start point, then iterating over the search space using these two points, i.e. both are created and searched from simultaneously. It's a technique learned from geoplexami for being faster.

>> No.9983839

>>9983643
so agi when

>> No.9983942

>>9982376
Are you ok?

>> No.9983958

>>9983942
https://www.youtube.com/watch?v=wS4ESheuHDY

Am sick as fuck today w fever. Also I like to post crazily.

>> No.9984881

LEARN MACHINE LEARNING

>> No.9985953

>>9982799
Nigger, if I live long enough to see AI research lead to a post-scarcity utopia instead of extinction or dystopia you can be damn sure that I'll want to enjoy indulging my basic human desires for a while. Not that I actually expect to, of course.

>> No.9985964

>>9985953
You would still amp your "lust" variable to a 9-10 for a fuckfest event.

>> No.9985971

>>9985964
but anon, that's just recreational drugs

>> No.9985987

One interesting thing is that "moods" can just be literal random impulses for an AI. Any fluctuating variable that keeps it from repetition works, since you don't have to maintain biological survival.
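
A minimal sketch of such a fluctuating impulse, assuming an Ornstein-Uhlenbeck-style process (it wanders randomly but drifts back toward neutral, so behaviour varies without repeating):

    import numpy as np

    # A "mood" that random-walks but decays back to 0; feed it into action
    # selection as a bias term to keep the agent out of repetitive loops.
    def mood_trace(n, theta=0.15, sigma=0.3):
        m, out = 0.0, []
        for _ in range(n):
            m += -theta * m + sigma * np.random.randn()
            out.append(m)
        return out

    print(mood_trace(5))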

>> No.9986567

>>9982507
That image is extremely cute.

>> No.9986580
File: 127 KB, 800x611, machinelearningmath.png

>>9973577
Google's machine learning crash course

https://developers.google.com/machine-learning/crash-course/ml-intro

>> No.9986588

>>9985987
Intelligence without emotions is not very meaningful. Without boredom or curiosity it can just stop doing anything and remain in that state indefinitely.

>> No.9986617
File: 94 KB, 640x640, 1536355864015.jpg

Reading through "The Elements of Statistical Learning" right now. It's surprisingly light on mathematics. Do you have something more beefy? (More theorems, algorithm constructions, analysis, proofs, et cetera; less droning in of examples and less expository text. Especially less text. I can figure out the details from the math by myself, I don't need it explained using plain words.)

I'm getting really bored.

>> No.9987031

in the sim / moral test version of reality

It's important to remember this version of the golden rule:

Judged by how you judge others

>> No.9987060

I think the pre-singularity run-up into running simulations is the most popular event to simulate. The variance in possible outcomes is huge: extinction via nuclear war all the way to utopia.

Also, any post-simulation-explosion society is probably obsessed with "are we not base reality".

>> No.9988714
File: 52 KB, 674x463, 1536449150794.jpg

The mind is like a lake and the waves on its surface are thoughts. The waves transform that surface causing other thoughts to rise as they fall.

Our memories are the strong connections formed between thoughts. For example, if I say yellow, it'll bring up yellow. If I say yellow fruits, then it'll apply yellow to that abstraction and cause other stuff connected to it, like bananas, to rise higher in the mind. These rising waves combine and transform with other thoughts, allowing us to reason. The brain learns everything through activating connections, which causes them to strengthen and increase their amplitude in the lake of mind whenever activated later. Connections not activated weaken and eventually get pruned, causing unused memories to be forgotten and useless structures to be destroyed.

Backpropagation is useful but fundamentally flawed, because it ignores this stupidly simple way the brain organizes itself to develop resilient multidimensional structures. However, simply making a cube of neurons won't work. The brain starts off in a neutral hyperconnected state, and its shape and folds, designed by millions of years of evolution, guide activations towards forming useful structures that enable survival. If its initial experiences are lacking, the brain will adapt poorly and become incapable of perceiving anything beyond those experiences. An experiment performed on newborn kittens found that if their vision is covered up for the first three months, they grow up irreversibly blind despite having healthy working eyes. AI, though, is not limited to the failings of biology or this mundane three-dimensional world.
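
The strengthen-when-coactive / decay-otherwise rule described above is plain Hebbian learning with decay; a minimal sketch (toy sizes, made-up activity patterns):

    import numpy as np

    # Hebbian step: coactive units strengthen their link, unused links fade.
    def hebbian_step(W, pre, post, lr=0.01, decay=0.001):
        W += lr * np.outer(post, pre)   # "fire together, wire together"
        W -= decay * W                  # unused connections get pruned away
        return W

    W = np.zeros((4, 3))
    pre = np.array([1.0, 0.0, 1.0])      # hypothetical presynaptic activity
    post = np.array([0.0, 1.0, 0.0, 1.0])
    for _ in range(100):
        W = hebbian_step(W, pre, post)
    print(W.round(3))                    # only coactivated entries have grown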

>> No.9988795
File: 291 KB, 1280x853, seulgi 5.jpg

>>9986617
These are not about statistical learning, but statistics. I think they can be useful to you:

>Probability Essentials - Jean Jacod & Philip Protter
This goes straight to the point. No endless paragraphs explaining a single concept, no infinitely many useless exercises, and no shitty engineering-tier explanations; just definitions, theorems and five to ten useful exercises.

>A Probability Path - Sidney I. Resnick
Like the above, but with more exercises.

>Statistical Methods - Rudolf J. Freund & William J. Wilson
This only describes algorithms with one or two examples.

>> No.9988852

>>9986588
Yeah, but for current ML you can just give it some fluctuating random impulses and it's equivalent to "moods".

>> No.9988924

>>9986617

Grimmett and Stirzaker

>> No.9990741

Just bought AIMA (Artificial Intelligence: A Modern Approach) and finished the second chapter. Going through the questions atm.

Plan on going for an MS degree in AI. Any recommendations in the EU? Thinking about Edinburgh.

>> No.9991421

are there any good AI newsletters or something like that? I want to be in the loop but I find it hard to keep updated on all the things that interest me.

>> No.9991593

This paper is so fucking good; DeepMind is probably the only thing that consistently impresses me.

https://www.biorxiv.org/content/biorxiv/early/2018/04/06/295964.full.pdf

>> No.9991640

Imagine the dopamine spike that humanity got when deep networks started giving good rewards. So the massive investment and continual support flowed in.

It definitely seems like every single NN needs a reward system, in that context.

>> No.9991759

/lit/ here.

how long until AIs obviate human writers? i've seen some examples and none of them are coherent so far. is it still several decades off?

>> No.9991786

>>9991759
It's a very hard problem for AI. It would have to understand and create a lot of context. For instance, it would have to have a decent understanding of human psychology.

I'm not sure statistical analysis of existing books would work all that well. It would probably be able to do poetry or well-written sentences/paragraphs a long time before a novel.

For instance, restructuring a snippet of writing to be "better" is easier for AI than writing a long story. One of the big problems for AI is finding and holding on to context.

>> No.9991888

So what is the use of genetic algorithms when they are so inefficient? Can I predict house betting entirely with a genetic algorithm?

>> No.9991898

>>9991786
thank you! very helpful. if anyone else has anything to add, i'd like to hear whateva.

>> No.9991924

>>9991593
It's frustrating to see this and not be able to decipher the mathematics as a biologist.

>> No.9991928

>>9991786
Even if it could do that, that's still not as interesting as what it would take to create strong AI. It's less interesting to give an AI the task of writing a novel that sounds convincingly like a human author, than it would be to imbue it with its own "will" of sorts and then let it decide to write what it wants on its own.

>> No.9991957

>>9991928

"will of it's own"

You mean a random walk function through subjects and concepts to arrive at some unique vector of interests for a book subject?

Yeah, you could easily add that in after the fact. "Uniqueness" or "personality" is very easy to do. Human variation is generally just some randomness.
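
A toy sketch of that random walk, with made-up topic vectors (nothing here is a real embedding):

    import numpy as np

    # Walk from a starting topic embedding to a unique "vector of interests".
    topics = {"sea": np.array([1.0, 0.0]),
              "war": np.array([0.0, 1.0]),
              "family": np.array([0.7, 0.7])}

    interest = topics["sea"].copy()
    for _ in range(10):
        interest += 0.1 * np.random.randn(2)   # the "personality" randomness

    # The nearest topics to the walked-to point become the book's themes.
    dists = {name: float(np.linalg.norm(interest - v)) for name, v in topics.items()}
    print(min(dists, key=dists.get))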

>> No.9992312

- . - . -

>> No.9992320

>>9991957
What makes individual authors noteworthy is that their particular experiences and personalities, and the influence they have on their work, stand out from the rest in some way. Otherwise anyone could become a well-known author. While yes, this is wholly subjective, it still represents some kind of extreme or outlier and reflects a level of complexity that is not possessed by the average personality type. I highly doubt it would be "easy" to simulate such a thing.

>> No.9992362

>>9974553
What does it have to do with coding frameworks for making function approximators?

>> No.9992422
File: 123 KB, 509x499, 1504966411664.jpg

>>9978284
it will automate your ass, mathlet

>> No.9992429

>>9978678
>breakthrough
Not really how things work most of the time. But read the paper in >>9977171 and you'll see that the field is anything but dead. It just takes a lot of work and trial and error to stitch neural networks together in ways that do complex stuff, and do it efficiently. Computers took 70 years to get where they are now, but anybody who says they haven't changed at all or changed the world since the 1940s is an utter moron.

But anybody telling you the field is dead is a sour grapes brainlet lashing out at the fact that machine learning will likely make their stamp collecting job irrelevant before they hit retirement.

>> No.9992443

How long would it take to become "proficient" in AI development? 5 years of study?

>> No.9992444

>>9992443
I doubt anybody proficient in AI development posts here. In order to be able to write your own neural nets and do shit with them, about a month of learning python maybe. Try some hobby projects and go from there.

In real application most of the work is figuring out how to beat the data you have into a shape that a model you can realistically train will accept.

>> No.9992445

>>9992444
or just figuring out how to use a pre-made model on the platform you have.

>> No.9992455

>>9992444
This
Most of AI "work" today in the industry is dealing with the shittiest data you can imagine and miraculously managing to get something out of it

>> No.9992457

>>9992444
A month of Python given that I already know Python, or a month from scratch?
I just started a Python course in college.
t. brainlet who finds silly things like flappy bird machine learning interesting

>> No.9992464

>>9992457
>a month from scratch?
this.
If you know Python it's as simple as installing TensorFlow and finding a tutorial on YouTube. Maybe go through this site if you want to understand how the neural networks actually work on the computer:
http://neuralnetworksanddeeplearning.com/

But in practice you'll be using machine learning libraries like TensorFlow or Keras or whatever, because people much smarter than you have done all the hard work optimizing the millions of different matrix multiplications that go into training and running a model.

If you want to actually do bleeding-edge AI development you need to understand all the math and shit, which will take way longer. Really there are two sides to the field: coming up with ways to do machine learning, which is what people like DeepMind do, and using machine learning to solve specific problems either commercially or just as a hobby, which is what we can realistically do.
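
To make "as simple as installing TensorFlow" concrete, the whole tutorial-tier workflow fits on one screen (a sketch close to the standard tf.keras MNIST example):

    from tensorflow import keras

    # Load data, normalize pixels, define a small network, train, evaluate.
    (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    model = keras.Sequential([
        keras.layers.Flatten(input_shape=(28, 28)),
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=1)
    print(model.evaluate(x_test, y_test))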

>> No.9992475

>>9992464
>downloading a library
For some reason that is really disappointing. It would be cool if the maths and code I learn somehow made me able to write shitty AI/ML code.
But I guess using libraries is the only really rational thing to do. It's not like I want to write machine code, OS code or any other foundation for the Python scripts I make anyway, and a library is just another layer of that.
I guess I'm just looking for hobby tasks to apply calculus to so it doesn't stay so tame. Could be cool to model something or apply it somehow to a project.
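
The maths you learn genuinely does suffice: here is a tiny two-layer network trained by hand-derived chain-rule gradients, nothing but numpy, learning XOR as a toy task (a sketch, not production code):

    import numpy as np

    np.random.seed(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    W1, b1 = np.random.randn(2, 8), np.zeros(8)
    W2, b2 = np.random.randn(8, 1), np.zeros(1)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(5000):
        h = np.tanh(X @ W1 + b1)            # hidden layer
        p = sigmoid(h @ W2 + b2)            # output probability
        dp = p - y                          # dLoss/dlogits for cross-entropy
        dh = (dp @ W2.T) * (1 - h ** 2)     # chain rule; tanh' = 1 - tanh^2
        W2 -= 0.1 * h.T @ dp; b2 -= 0.1 * dp.sum(axis=0)
        W1 -= 0.1 * X.T @ dh; b1 -= 0.1 * dh.sum(axis=0)

    print(p.round(2))   # approaches [[0], [1], [1], [0]]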

>> No.9992784

>>9992320
literally just describing a random walk in your description of "experiences"

>> No.9992785

>>9992464
Don't listen to the dumbass responses. You can easily innovate in this field with simple creativity; it's that wide open right now. Also, just Python is enough, and you would only need to do efficient C++-type shit for optimization needs, which aren't that important for research since you won't have a massive dataset anyway.

>> No.9992793

Dopamine is pissing me off. Facebook has a better dopamine reward system than education systems do.

The other thing is it definitely changes your view: humans are fucking automatons, 95% just going for short dopamine hits.

>> No.9992811
File: 46 KB, 1080x1080, TRINITY___TT.jpg

FYI, I think that "guy" that Tony and Mauricio were talking to at the last tournament was Helene in disguise.

>> No.9992837

>>9991924
I wish I could find more discussions and talk about this paper
(paper)
>>9991593

The summary I got from reading it is that LSTM-based systems can behave in a manner similar to rodents/monkeys etc., and can seem to adapt to slightly varying tasks even with weights locked. Meaning that the system itself, via the recurrent portion, adapted to a range of tasks.

>> No.9992916

I always get a notion the people working on this aren't monkeys but it goes away so fast. Maybe not completely as stupid as it seemed at first though.

>> No.9993476

How the fuck do so many people work on something and still be so stupid all the time? I am absolutely surprised.

>> No.9993517

>>9992793
There's gotta be more to life than chasing down every temporary high

>> No.9993746

>>9993517
It's more that we could improve a lot of systems by implementing really intelligent reward systems that are higher frequency.

>> No.9995336
File: 300 KB, 1001x1501, lheIcaO.jpg

Fighting between the idea of building up vs starting complete, and then getting it to work right now. Also how to properly implement a network-wide complexity management pattern.

Hardest thing is probably dealing with the "spark" of the system. Some important details like how to spread copies and clones of certain systems are pretty important. Giving it "Fast learning" is also difficult because it doesn't occur by simple weight training but by complex switching networks which allow the system to seem entirely different based on what I call context.

It's like a bunch of systems driving a bunch of other systems. Lots of "State" is required.

On the dynamic side I think it's pretty easy. The copying of functions and re-use means you can use a lot of existing things for simplicity.

Re-Creation from decoding into Thought Vectors is obvious. Could almost say the entire memory and imagination system is just encoding/decoding of linked thought vectors.

The intelligence system is also pretty simple for a bootstart. Since this isn't important I was thinking of using a linked list of grid cell type things for spatial intelligence. It's interesting to imagine an AI being trained with arbitrary dimensional representations available to it. aka 1000D being natural.

Unsure where I'm at right now. I thought creating a bunch of unit types would be useful but it may not be as the units aren't as important as other things.

I wonder if there is any literature on turn-on/off units that help with state/context switching. Correlating them with the state and various outputs in a statistical system should give a much higher variety of possible thought vector outputs for the same number of units and training.

>> No.9995342

>>9973577
Best future-proof electives for bioinformatics?

>> No.9995370

By obvious application of complexity management

There is obviously a thought vector mapping for everything that puts it into a location in the brain. Meaning the thought vector fulfills two roles. It describes a thing, and describes its location in the brain, for both higher abstraction perception and memory, language, etc.

This means everything is organized to some degree and since language maps to thought vectors pretty much everything uses the same style of mapping.

The easy thing you learn from this is that thought vectors can then be used to create the structure when you are starting off, i.e. TV-mapping functions are a pretty important thing.

I'm guessing this isn't new and it's widely known.

>> No.9995380

Okay, I just got way ahead of myself. I need to make the best statistical system anyone's seen, then get tons of money so I can do AGI.

>> No.9995600

Oh cool, a general.
Anyone know if OpenAI has any neuroevolution algorithms? I'm planning on using NEAT for a project and was thinking of checking out OpenAI as an alternative if it has something similar, but I can't find anything specific.

>> No.9995667

koo kee koo kaa

>> No.9995668

It would kind of be easier with a robot dataset of it exploring environments in a smooth, transitional manner, I think. Not necessary though.

>> No.9996573

>>9995336
Good luck. I really hope your approach will yield results.

>> No.9996606

Why is such a wonderful field led by arguably one of the shittiest languages in existence? Mathfags can't into C/C++?

>> No.9996648

>>9996606
TensorFlow is all implemented in C++; the Python is just a wrapper.

>> No.9997079

Should I open up my LinkedIn profile to recruiters?

>> No.9997425

Is there an ML library for Clojure?

>> No.9997439

>>9996573
Luck doesn't really apply. Everything I do is methodical, following the best paradigms available to me. This problem is relatively easy; NP-complete problems are vastly harder to solve.

>> No.9997740

It's hilarious as fuck how counting works when done best.

>> No.9997818

Memory isn't anything like a "read / writer" except in the most abstract fucking case. The read/writer header shit is so fucking dumb it's beyond imagining.

Memories work more like a fucking game engine than a fucking computer read/writer file system.

It's not actually too complex though and makes tons of sense. It was nice to breeze past how memory works.

>people thinking it's stored at one spot and not compositing tons of shit in memory to create the memory

>> No.9997826

What memories spring to mind from age 12, or from 6 months, 5 days, 3 hours ago?

Versus doing this search:

Boat
A fight you saw
Good food a month ago

It's not really indexed by time for most people, only generally/relatively.

>> No.9997914

I subscribe to panpsychism/embodied cognition.