
/sci/ - Science & Math



File: 534 KB, 1000x647, machine_learning_ai.jpg
No.11252180

What should one study to make revolutionary achievements in the field of AI and machine learning?

Math? Stats? CS? Math and CS minor?

>> No.11252218

AI is a buzzword that doesn't mean anything. Machine Learning is just a fancy way of saying "statistics applied by a computer", so you should study statistics and math. CS is just a watered down degree that doesn't teach you shit
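
To make that concrete: the simplest "machine learning" model really is a stats-course exercise run by a computer. A toy sketch in numpy (data and numbers made up for illustration):

```python
import numpy as np

# "Statistics applied by a computer": fit a line by ordinary least
# squares, the same estimator derived in any intro stats course.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 3.0 * x + 1.0 + rng.normal(0, 0.1, 100)  # true slope 3, intercept 1

X = np.column_stack([np.ones_like(x), x])     # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # solves the least-squares problem

print(beta)  # close to [1.0, 3.0]
```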

>> No.11252266

>>11252180
This:
>>11252218

But you don't need a degree in all those fields. Get one in the area that interests you most, then learn whatever else you need. The math isn't that difficult, surprisingly. I'd recommend taking courses on statistical learning theory.

>> No.11252417

>>11252180
neuroscience

>> No.11252419

>>11252417
neuroinformatics

>> No.11252746

>>11252266
>>11252218
what a load of crock

>>11252180
if you understand AI to mean deep learning, then that has pretty much nothing to do with statistics or math.

it would be like saying you need to study physics to become a surgeon.

>> No.11252769

>>11252180
if you want to be a good goy codemonkey and make black box models with no understanding of their significance or assumptions: CS

if you are genuinely interested in computers 'understanding' data: statistics or math at a graduate level

>> No.11253030

>>11252180
Honestly some hardware stuff. ML won't be shit until real brain-like parallelism is achieved.

>inb4 graphics cards
Not even close

>> No.11253034

>>11252769
based on what evidence do you come to this conclusion

>> No.11253184

>>11252746
Then what do you study?

>> No.11253196

>>11252746
Deep learning is heavily grounded in statistics and math, you cumfuck. Yes, you can use it like a black box, but to make any advances you still have to understand the architecture.

>> No.11254295

>>11252180
Philosophy and linguistics

>> No.11254676

Cognitive science.

>> No.11254679

>>11252746
>deep learning has nothing to do with statistics
It's just regression using cost-function minimization..... it's 100% statistics.
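
Spelled out: the training loop is gradient descent on a cost function, same as any deep net, here on plain linear regression. A toy numpy sketch (numbers made up):

```python
import numpy as np

# Regression by explicit cost-function minimization: gradient descent
# on mean squared error -- the same loop a deep net runs, minus layers.
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 200)
y = 2.0 * x - 0.5 + rng.normal(0, 0.05, 200)  # true w=2.0, b=-0.5

w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    pred = w * x + b
    grad_w = 2 * np.mean((pred - y) * x)  # d(MSE)/dw
    grad_b = 2 * np.mean(pred - y)        # d(MSE)/db
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # close to 2.0, -0.5
```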

>> No.11254684

>>11254295
somewhat unironically this

>> No.11254695

>>11252180
I want to say math, but given that literally no ML paper would ever hesitate to use 'reals', I am going to say computer science. Trying to understand neural networks using 'real' math is like trying to do number theory without knowing how integers are constructed. It is not going to work. An indication that this is true is the absolute dominance of empirical methods at this moment. The theorems never say anything about the real life performance of the algorithms and are nothing better than ad-hoc rationalizations.

A lot of the 'math' that goes into the field will need to be redone in order to make progress.

This would be so much work that I am not sure whether just trying things out at random and seeing what works will get us where we want faster.

The entire field of ML is a joke right now. I thought it was the fault of the field itself, but the math community as a whole is to blame for not coming up with proper foundations to support it.

>> No.11254720

>>11254695

uh, progress is slow, but foundations are being developed, and they are interesting. don't be so pessimistic..

>> No.11254765

>>11254720
It is not fast enough. At this rate, even if there were an impetus for it, it will take at least a few generations for mathematicians to come around to the point of view that proofs should be computable.

Kurzweil's timeline has been holding pretty steadily, so we should get human level AI in the next decade. Deep learning really is a decent approximation of the low level unconscious processing.

And so the next level is within reach. There should be an evolutionary path to it somewhere.

But there is nothing to say that we will have any understanding of it when we finally hit it. That is what I am pessimistic on.

>> No.11256266

what's the deal with AI in the context of quantum computing?

>> No.11256285
File: 138 KB, 900x1200, 1573881955929.jpg

>>11252180
Computational mathematics and statistics (both Frequentist and Bayesian)

>> No.11256323

>>11252180
CS, because in most universities this is where most of the ML researchers will be. Some universities have ML-focused stats departments (or mixed CS+stats programs for ML), which are good too.

>> No.11256340

>>11252180
Quantum neurophysics. Anything else is just glorified statistics sitting on top of mass data collection.

>> No.11256456

>>11252180
CS is good enough; EE could work as well, since components such as the memristor now enable building AI in hardware.
Just email the faculty and ask about AI competence among the teachers.

>> No.11256574

>>11252180
Statistics, linear algebra & numerical analysis, optimization for the basics.
Probability, measure theory, signal processing, topology, learning theory for extra shit.
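
For a taste of how the linear algebra and numerical analysis basics show up in practice, here is power iteration, a workhorse behind PCA-style methods (toy sketch; the matrix is chosen arbitrarily):

```python
import numpy as np

# Power iteration: repeated matrix-vector products converge to the
# dominant eigenvector -- the same machinery that underlies PCA.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
v = np.array([1.0, 0.0])
for _ in range(100):
    v = A @ v
    v /= np.linalg.norm(v)  # renormalize to avoid overflow

eigval = v @ A @ v  # Rayleigh quotient of the converged vector
print(eigval)       # the largest eigenvalue, (7 + sqrt(5)) / 2
```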

>>11256266
See Quantum algorithms for search and optimization

>>11254695
Check out progress being made in adversarial learning theory and analysis of piecewise convex/piecewise linear neural networks. A lot of cool stuff.

>> No.11256748
File: 18 KB, 615x548, 1523056873083.gif

>>11252746
>no, others are retarded, I'm not retarded
The most important thing is to have written SOMETHING on here, right?

>> No.11256762

>>11256266
Some quantum algorithms are well-suited for machine learning tasks, for example QBoost or training in general. Everything involving optimization, basically.

I know only one useful application of ML for QC, where the best adiabatic step to take is determined by some ML algo.

>> No.11256857

>>11252180
what's the difference between CS and software engineering, and which is more advantageous?

>> No.11257504

>>11256857
do a google search you fucking white male

software engineering is a discipline that tries to make code monkeys less monkey

cs is a branch of mathematics

Software engineering has nothing to do with AI, AI is a branch of CS

>> No.11257835

>>11257504
Beautifully said

>> No.11258425

>>11257504
this

>> No.11258429

original thought

>> No.11258442

>>11256762
what kind of math is involved in quantum algorithms?

>> No.11258444

Isn't the point of all this A.I. to basically get humans to trust machine output more than human output when it comes to practical/real-world applications?

I'm just curious as to the preferred end result because I doubt machine adoption would be 100% amongst humans (e.g. Amish people).

Or are humans always stuck in a loop of having to explain the external world to other humans because of ever expanding vocabulary to explain ever increasing resolution of focus (Noam Chomsky) to an ever diminishing set of language subscribers?

>Personally I believe that we're already there with people trusting Google output more than, well, anything. Perhaps it has to do with the fact that the end-user still has to compose the question, thereby adding trust to the output because of compositional input?

>> No.11258446

>>11258444
this is pseudo philosophical nonsense, mental masturbation

>> No.11258491

The Gamma function. It is crucial in AI.

>> No.11258636

>>11258446
It complements your physical masturbation very well. Although I am not sure he could do it every day in front of his mother like you do.

>> No.11258668

>>11258446
How is it pseudo-philosophical when I'm asking for clarification and presenting my own interpretation of things in terms of practical application/real-world results? Isn't simply being dismissive more mental masturbation, since it doesn't require any real thought beyond slapping a label/reason on the observation in order to exclude it?

Oddly enough brings me back to an interesting point. When is a dismissive opinion, outside of a known emergency (e.g. medical), ever beneficial to the individual or the group?

>> No.11258718

>>11258668
Not the person you are responding to, but universities are dismissive of bad applicants, for one. Have to separate the shit from the cash somehow. Being dismissive in an academic setting is frowned upon, but this isn't an academic setting. The default assumption is that everyone on this board is fucking retarded.

Honestly, I can't understand the point of your og post either. It seems like you are trying to say something in an overly-intelligent manner, but it just reads like bullshit to me. Like the guy you are responding to, I'd rather not waste my time trying to understand what you are trying to say. Sorry if that's a little rough.

Do recommend Stuart Russell's new book though, as a follow-up to Bostrom's.

>> No.11258726

>>11258718
Rejection is never a rough experience. I was more curious as to how AI would be useful to the common citizen is all. Getting too wordy is kind of the polar trap of virtually any discipline because either it is too little, too specific, or too much.

Actually if A.I. could solve that then that'd be fucking interesting to me.

Most common people I speak with really would rather it just be like some sort of Uber for hedonistic pursuits, including people that just like to learn for learning's sake.

>> No.11258925

>>11258442
Pretty much just complex linear algebra. Or probability theory. Both are valid and fruitful approaches.
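
Concretely: a qubit state is a unit vector in C^2 and a gate is a unitary matrix. E.g. a Hadamard gate applied to |0> (toy numpy sketch):

```python
import numpy as np

# A qubit is a unit vector in C^2; gates are unitary matrices.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)   # Hadamard gate
zero = np.array([1, 0], dtype=complex)  # the |0> state

psi = H @ zero            # equal superposition of |0> and |1>
probs = np.abs(psi) ** 2  # Born rule: measurement probabilities
print(probs)              # [0.5, 0.5]
```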

>> No.11258933

>>11254765
>human level AI within the decade
W-what? Won’t that mean the end of the world?

>> No.11258938

>>11258933
he’s lying

>> No.11259299

>>11253184
>>11253196
>>11254679
>>11256748
He's right though. When you do, for example, reinforcement learning in a virtual environment, you're not doing something statistics is used to dealing with. Deep learning is an increasingly important tool in statistics, so you can find countless examples where the lines between fields are blurred, but deep learning is very much its own field, where it doesn't make sense to say that statistics is a "foundation".
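
To illustrate the RL point: the agent below learns arm values purely by interacting with a simulated environment; there is no fixed dataset anywhere. A toy epsilon-greedy bandit (all numbers made up):

```python
import random

# Toy reinforcement learning: an epsilon-greedy agent learning arm
# values from interaction with a simulated environment.
random.seed(0)
true_means = [0.2, 0.5, 0.8]  # hidden reward probability of each arm
q = [0.0, 0.0, 0.0]           # the agent's value estimates
counts = [0, 0, 0]
eps = 0.1

for _ in range(5000):
    if random.random() < eps:
        a = random.randrange(3)                # explore
    else:
        a = max(range(3), key=lambda i: q[i])  # exploit
    reward = 1.0 if random.random() < true_means[a] else 0.0
    counts[a] += 1
    q[a] += (reward - q[a]) / counts[a]        # incremental mean update

print(q)  # estimates approach [0.2, 0.5, 0.8]
```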

To answer OP, I'd say a three-year math degree is the best bet if you're going to uni. Almost all the shit you learn in undergrad is gonna come in handy sooner or later.

>> No.11259341

>>11258726
begone schizo

>> No.11259500

>>11254765
>Kurzweil's timeline has been holding pretty steadily
No it hasn't

>> No.11260620

>>11259299
I was saying you didn't need a degree in all of the mentioned fields to understand deep learning, just one, plus optionally some statistical learning theory. He said that was a load of crock. So he's not right and you're defending a retard.

>> No.11260650

>>11260620
>didn't need a degree in all of the mentioned fields to understand deep learning
I can be a god at ML and AI, learning this shit in my basement?

>> No.11261144
File: 21 KB, 500x483, patient-satisfaction-survey-did-you-die-oyes-27838524.png

>>11259341
Satisfaction vs Disambiguation vs Patience

>> No.11261161

>>11252218
True

>> No.11261162

are there a lot of industry jobs available in America for AI/ML?

>> No.11261163

>>11252180
Math major and then CS

>> No.11261277

>>11260650
You will never be good at AI because you're incapable of reading comprehension.
>don't need a degree in ALL of them
>do ONE degree that interests you most

Fucking retards on this board I swear.

>> No.11262050

Math or Stats

CS is not worth majoring in. CS minor is the best option and take only classes on data structures and algorithms. Everything else is useless.

>> No.11263067

>>11252180
all of it

>> No.11263105

>>11252180
>revolutionary
Find that machine that operates of itself, a harmonic catalyst that can be described but not be calculated.

Fuck current 'AI', I'm awaiting their downfall.

>> No.11263406

>>11252180
>to make revolutionary achievements
>revolutionary
These papers: https://ai6034.mit.edu/wiki/index.php?title=6.844_Info
Gerald Sussman did a similar class a few years ago: https://ai6034.mit.edu/wiki/index.php?title=6.S966:_A_Graduate_Section_for_6.034

A sample:
>we have seen several models of learning (nearest neighbor, neural nets, SVMs, etc.). All of them share the property that they require large numbers of examples in order to be effective. Yet human learning seems nothing like that. It's remarkable how much we manage to learn from just a few examples, sometimes just one. How might this work? This paper for this week explores that topic:

Those are all the real problems in AI, not all of them can be solved by statistics tricks
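
For contrast, here is how little is inside one of those "large numbers of examples" learners: a minimal 1-nearest-neighbor classifier just memorizes points and answers with the closest one (toy numpy sketch, made-up points):

```python
import numpy as np

# A minimal 1-nearest-neighbor classifier: no "understanding", just
# memorized examples -- which is why such models need lots of data
# to cover the input space.
train_x = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 0.8]])
train_y = np.array([0, 0, 1, 1])

def predict(x):
    dists = np.linalg.norm(train_x - x, axis=1)  # distance to every example
    return train_y[np.argmin(dists)]             # label of the closest one

print(predict(np.array([0.05, 0.1])))  # 0
print(predict(np.array([0.95, 0.9])))  # 1
```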

>> No.11264302

>>11260650
if you're geohot, yes.

>> No.11264535

>>11252218
>a math degree is just a watered down degree that doesn't teach you shit

>> No.11264627

>>11258933
Well, it is definitely the next stage in programming. You will really have to think about your relationship with the work that you do when it is literally alive.

The challenge once you get to that level is figuring out how to put it in your head.

You have to think - what exactly does 'put in your head' even mean in this context?

Consider that just putting some chip in your brain will not make you smarter, any more than taking a bunch of code from somewhere and copy-pasting it into the program you are working on can be expected to make it work better, even if both pieces are individually fine.

You have to think - what does it mean for the power of the program you are working on to be your own power? What does it mean to close that gap between yourself and the other? When you do programming and cache yourself into the process, it really does feel like the distance closes. Where is the spark coming from?

And there is also the matter of dealing with the terror of self-improvement. Because if you look closely at how agents get better now, it kind of looks like a persistent cycle of genocide and/or iterated suicide. These aspects are integral to optimization and won't really change as our understanding of how learning works increases; rather, they should just become more obvious to laymen.

Right now, you run some Python script, fiddle with some hyperparameters, and throw away models if they don't come up to snuff. You never think twice about it. What exactly would you do if you were inside that Python script? Would you throw away the model... if you were that model?
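
That loop, stripped down (toy numpy sketch; polynomial degree stands in for the hyperparameters, all data made up):

```python
import numpy as np

# The loop described above: sweep a hyperparameter, "train" one model
# per setting, keep whichever validates best, and discard the rest.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = np.sin(3 * x) + rng.normal(0, 0.1, 200)
xtr, ytr, xva, yva = x[:100], y[:100], x[100:], y[100:]

def val_error(degree):
    coeffs = np.polyfit(xtr, ytr, degree)  # fit a candidate model
    return np.mean((np.polyval(coeffs, xva) - yva) ** 2)

scores = {d: val_error(d) for d in range(1, 10)}
best = min(scores, key=scores.get)         # the sole survivor
print(best, scores[best])
```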

At literally any point in time, there are usually signs pointing to what the next breakthrough will look like to those bothering to look. And now in this era you can look at the present state and see how it reflects the future, and see the outline of the Singularity that is approaching.

>> No.11265438

>>11252180
>What should one study to make revolutionary achievements in the field of AI and machine learning?
Neuroscience, physics, and engineering.
First you need to invent scanning technology that can map everything going on in a living brain in realtime.
From there you can gather data to analyze which eventually will show you how a living human brain produces the phenomena we call 'consciousness', 'sentience', 'cognition', and so on.
Once you understand how all that works then you can begin figuring out how to build real Artificial Intelligence, not the cheap knock-offs they 'branded' 'AI'.

See, we have no idea how our own brains work, not really. We know how parts of it work -- sort of, at least. But as a whole system? No. No clue. As stated above, we don't even have the technology to 'see' it working, not anywhere near at the level we need to understand it. That's the first step.

All this crap they keep calling 'AI'? It's not much better than smoke and mirrors.
>Throw terabytes of data at it, and (hopefully!) it'll 'learn' to do 'X' job
That's what their marketing departments keep trotting out and selling everyone on.

>> No.11265763

>>11265438
Why would you need to understand how the brain works to build something similarly intelligent?

We don't need to know how to build artificial birds or fish to build planes and boats/submarines. Exactly deconstructing all mechanisms of a complex machine and rebuilding it is often much harder than building one yourself, especially if said machine was created in a non-straightforward evolutionary process and not by engineers.

Personally I think we will get to artificial human level intelligence before the brain is fully understood. And even that AI will not be fully understood at first just like the current models are only understood on a very "meta" level despite how well they perform.