2023-11: Warosu is now out of extended maintenance.

/sci/ - Science & Math



File: 585 KB, 1982x1124, LINEAR REGRSSION.png
No.9691560

ML EDITION

previous thread >>9671255

In this general we will be discussing the prospects of machine learning, its future, interesting recent developments, good sources for learning the basics, and interesting projects you anons may be working on.

NO "AI WILL KILL/SAVE US"-TARDS ALLOWED
NO "AI WILL NEVER HAPPEN"-TARDS ALLOWED

>> No.9691563

>ML
not science or math

>> No.9691566

>>9691560
>>>/x/ is that way

>> No.9691584
File: 88 KB, 955x538, anomaly detection.jpg

>>9691563
lol

>> No.9691776

There was a short ML thread earlier (>>9688057) but in case it falls off the archive, OP asked:
>Why don't ML people just perform meta-ML experiments where they use ML to learn new ML techniques? Am I oversimplifying it?
>Sometimes you'll see a mention of a new technique where they combine a certain type of layer with some mathematical linkage of parameters between the layers, and these things seem like the kinds of techniques that could easily be found through brute force.
>Just rent an AWS cluster (or Amazon themselves could even do this), generate a "template" of potential ML techniques, then just execute them all and explore the space of parameter tuning and layer transformations, then heuristically determine which methods seem promising based on classification or scoring results and send a report off daily to the researchers
>Is this being done?

and I responded (>>9688111):
>My understanding is that it can be done in theory but is completely impractical, because the universe of "potential ML techniques" is subject to the curse of dimensionality (ML techniques are essentially mathematical formulas, hence this space would resemble the space of syntax trees) and the halting problem puts a fundamental limit on the effectiveness of regularization (using ML techniques or otherwise).
>Though I don't have any formal ML background and am basically talking out of my ass here, so I'd be happy to be proven wrong.

Figured I'd have better luck soliciting feedback here (though I won't be able to respond for a while), or you could discuss the original topic if you like.

>> No.9691828

>>9691776
its not imoractical, its already being done by google for example

>> No.9692067

>>9691828
impractical*

>> No.9692123

>>9691776
So this "meta-ML" is being done. If you think about it, it's actually how we decide on the final architecture of the NN when trying to solve something like a classification/regression problem.

A lot of how to set up the architecture depends on the problem you are trying to solve (you are always trying to solve some problem, usually something classification related) and the no-free-lunch theorem applies here. You get a general architecture that works well for one type of problem, but not others. There are also a lot of shit-useless combinations out there; I can tell you from experience that setting up a bazillion architectures will give you...almost always the same F1/accuracy/whatever measure we are using, so it's a lot of wasted resources. A lot of times it's a "good enough" type problem, where once you get a certain level of whatever metric you are using, you stop there and publish or hand off. No need to spend twice as long to get a little better.

Also, say you spend all that time finding something marginally better, it only works for that one classification problem. Sure, it might work for other similar problems, but we don't know until we do the exact same waste of resources, and then we might find out...it actually does worse than our original set up. But again, we already do a type of meta-architecture search when going about creating a classifier, so what's being proposed is already being done.

I'd rather solve 10 classification problems with a great F1 score than solve 1 classification problem with an amazing F1 score in the same amount of time.

I do ML as part of my research, but I wouldn't say it's my main focus.

>> No.9692159

>>9691560
>NO "AI WILL KILL/SAVE US"-TARDS ALLOWED
How are either of these statements even remotely retarded?

>> No.9692175

>>9691776
>generate a "template" of potential ML techniques, then just execute them all and explore the space of parameter tuning and layer transformations, then heuristically determine which methods seem promising based on classification or scoring results and send a report off daily to the researchers
That is done already though. What you're describing is a genetic algorithm for generating artificial neural network architectures.
https://www.youtube.com/watch?v=qv6UVOQ0F44
>curse of dimensionality
Not really a problem for neural networks. Their whole approach is projecting everything into a lower dimensional representation.
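A toy sketch of the search loop being described: sample candidate architectures from a "template", score each one, and keep the most promising. In a real neural architecture search the score would be validation accuracy after training each candidate; here the encoding and the scoring function are made up purely for illustration.

```python
import random

def score(architecture):
    # Stand-in for "train the net and report validation accuracy":
    # this fake score favors wide-ish layers and penalizes depth.
    return sum(min(w, 64) for w in architecture) - 5 * len(architecture)

def random_architecture(rng):
    # An architecture here is just a tuple of hidden-layer widths.
    depth = rng.randint(1, 5)
    return tuple(rng.choice([16, 32, 64, 128]) for _ in range(depth))

def random_search(n_candidates=200, seed=0):
    # Generate candidates from the template and keep the best scorer.
    rng = random.Random(seed)
    candidates = [random_architecture(rng) for _ in range(n_candidates)]
    return max(candidates, key=score)

best = random_search()
print(best, score(best))
```

A genetic algorithm replaces the independent sampling with mutation/crossover of the current best candidates, but the evaluate-and-select skeleton is the same.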

>> No.9692176

>>9692159
because current (and foreseeable future) AI is as likely to kill you or save you as running a linear regression is.

>> No.9692190

>>9691776
This is actual research, I'm aware of >>9691828
and >>9692123 but those are mostly kind of evolutionary algorithms.
I think this could turn out to be a really hard problem. What does it mean to "learn how to learn", formally? Is it about reducing entropy, about pattern recognition, or something else entirely?

>> No.9692192

somebody wanna give me the rundown on how logistic regression "curve fitting" happens?

non-math version.
like, with SVM you just try different data points and make vectors, right? so what's the logistic regression way then

>> No.9692199

>>9692176
That's kind of like saying houses can't exist because bricks aren't houses. No matter what impressive structure you look at you can always reduce it to non-impressive parts.

>> No.9692202

>>9692192
you make a prediction of a function which covers the data points best, calculate the error with a special error function, and recalculate the original one with a specific learning rate
repeat these steps x3-1000
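The loop described above, sketched for a 1-D logistic regression. The data, learning rate, and step count are arbitrary choices for illustration; the "special error function" here is the usual cross-entropy loss, whose gradient has the simple (prediction - label) form.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, steps=1000):
    # Start from zero parameters and repeatedly step against the
    # gradient of the mean cross-entropy loss.
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        gw = sum((sigmoid(w * x + b) - y) * x for x, y in zip(xs, ys)) / n
        gb = sum((sigmoid(w * x + b) - y) for x, y in zip(xs, ys)) / n
        w -= lr * gw
        b -= lr * gb
    return w, b

# toy data: class 1 whenever x > 0
xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = fit_logistic(xs, ys)
print(sigmoid(w * 2.0 + b))   # close to 1
print(sigmoid(w * -2.0 + b))  # close to 0
```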

>> No.9692210

>>9692199
houses cant exist because bricks cant magically become houses

>> No.9692211
File: 90 KB, 536x536, mindspace_2.png

https://www.youtube.com/watch?v=EUjc1WuyPT8
https://intelligence.org/files/AIPosNegFactor.pdf

>> No.9692222

>>9692211
please stop spamming your retarded neckbeard in literally every thread related to ai

>> No.9692228 [DELETED] 

>>9692211

>> No.9692391

>>9692222
Please provide actual counterarguments to his arguments about AI.
>inb4 he's fat, he's lazy, he didn't go to school, etc

>> No.9692435

>>9692391
I'll just repost this from a thread about him:
>he is an "ai researcher" despite having no formal education, not even finishing highschool
>now that would be okay if he taught all of those things himself, except:
>he says he is a good programmer but he literally has no code published, and is mentioned in exactly zero scientific papers which aren't self published
>there is no proof anywhere that he has actually worked with any sort of AI (neural nets, computer vision, anything)

>his most popular work is literal harry potter science fiction
>he unironically thinks he is a very good writer
>he thinks a doctorate in AI is useless
>he thinks that people with PhDs are in general dumber than him
>he has written an autobiography
>he mentions multiple times in the aforementioned autobiography that he is a "genius"
>he believes in cryonics despite having zero knowledge about the topic and not knowing the current status of progress in the field
>he once tried to "make" a wikipedia clone, but since he is the skilled programmer he claims to be, he hired others (for dubious reasons, obviously). the project started going, but it failed ultimately because his idea was shit and he doesnt know how to lead a project
>he is one of those bayesianfags and unironically thinks bayes' theorem is more than a simple statistical tool and can be applied to literally anything and is the solution for sentient AI (because muh self learning formula) spoiler: it was tried, and no, it can't
>he thinks the many-worlds interpretation is the only correct one for quantum physics
>he believes in utilitarianism despite looking like a 35 yr old neckbeard who according to his own belief should actually be eliminated
also thanks for reminding me to add a NO YUDKOWSKYFAGS in the next thread

>> No.9692525

>>9692435
Most of these are just ad hominems.

>> No.9692540

>>9692525
Ah, I see that you're a self-taught genius as well.

>> No.9692542

>>9692435
Where's the argument against his actual claims on AI?

>> No.9692545

>>9692540
Not him, and I think the less wrong guy is a fag, but you're not helping by posting a bunch of insults about who he is as a person instead of just responding to the actual argument.

>> No.9692547

>>9692542
name a claim, I will disprove it
I'm not willing to read this retard's arguments myself. It's akin to listening to my 9yr old cousins arguments why the sun is actually not big.
If you can show me some coherent ones, I may change my opinion.

>> No.9692551

>>9692545
>hurr you should listen to everyone who has no credentials and experience in a field

>> No.9692561

If I dump 20 years of financial data into a neural net will it make accurate stock buying predictions?

>> No.9692563

>>9692561
yes, if you do it correctly

>> No.9692579

>>9692551
You don't have to listen to him, but it is expected that you back up claims you choose to make with an actual argument. If you don't want to dispute his argument then just don't even respond, this isn't complicated.

>> No.9692584

>>9692579
im not the one who has to defend the claims
so far, all the other anon has done is link some random blogpost and an absolutely retarded image.
at least he could condense it into one coherent post.

>> No.9692611

>>9692584
You're the one claiming that autodidactism doesn't count as a credential and that his 18 years of work in MIRI don't count as experience.

>> No.9692628

>>9692611
>working in an institute founded by himself
>teaching himself "ai" and "programming" despite zero evidence of him knowing any theoretical aspects of it
have you even read his "paper"?
>>9692211
its littered with absolutely retarded references to movies like the matrix, mentions of his father, paragraph long tangents and analogies just to explain something... i literally had to stop 3 pages in because it became too cringy. he writes like a college junior who thinks he has this profound idea about a topic he actually knows nothing about.
any respectable scientific journal would have laughed at this

>> No.9692656

>>9692175
>curse of dimensionality
>Not really a problem for neural networks.
This is just not true.

>> No.9692685

>>9692611
Eliezer Yudkowsky literally:

1. Claimed he has no non-altruistic impulses
2. Claimed he had conceived a programming language that would facilitate artificial general intelligence (twenty years ago) and then dropped it without comment
3. Claimed he can always convince people to let a potentially malevolent AI "out of the box", but only when he keeps the transcript of the conversation secret.
4. Claimed that the most ethical thing a human can do is give as much money "as their mind will allow them to" to him personally.
5. Hypothesized that the reason he never delivers on his claims is that his extremely high IQ is comorbid with an energy deficit that makes him unable to perform.
6. Secured millions of dollars in funding and spent two decades without producing a single technical formalism of the phenomenon he claims to study.
7. Never finished high school
8. Got famous for writing a Harry Potter fan fiction

If these, taken together, do not throw up any red flags in your mind, then you're a cultist.

>> No.9692694

>>9692685
I could go on and on about how obvious it is that this guy is a crank. But just a beautiful example:

In a video Q&A, when asked what he had been reading currently, he answered The Elements of Statistical Learning (a fairly technical graduate level text on ML). His comments on the content? "These guys really know statistics." Here's a self-described leading expert in AI who quite literally has nothing to say when he "reads" actual technical work on ML. This same guy claims to be an expert programmer, but has never publicly released a program. Claims to have mastered college level mathematics, yet the only mathematical content on his pages is an Algebra 2 level description of Bayes Theorem. And his "research papers" are entirely devoid of mathematical formalizations.

>> No.9692706

>>9692694
thanks, I was getting really tired of arguing with these yudkowskyfags. its kinda creepy how they follow him everywhere and hold him on a pedestal while at the same time everything he says or does is a fucking huge red flag, indicating that literally anything he has done "as research", basically amounts to nothing

>> No.9692739

>>9692706
eliezer is a narcissistic kike shitbag

>> No.9692754

>>9692706
>what is Timeless Decision Theory

>> No.9692788

>>9692199
No, the proper analogy is: it's like being afraid that smelting metals will accidentally turn into a rocket and launch itself at your parents' house and explode.

Somehow people think that ML/AI = the generic scifi plot, but if you took a lot of brilliant people right now and gave them the specific task of building a nazi AI to kill everyone, the chance of them succeeding is comical. Worrying that it could happen, but accidentally (like how??) is even more ludicrous.

>> No.9692812

>>9691584
woah that's like........... A LOT OF MATHS!!!!!!!!! LIKE 50........... MAYBE even 100!!!!!!!!!!!!!

>> No.9693122

>>9691776
Are you sure the techniques could easily be found through brute force?

>> No.9693221

>>9692812
do you even know what pic related does, goalpostshifter
>>9692754
>what is retarded term that never was and never will be used in a scientific context

>> No.9693239

Just finished reading the world-models paper, absolutely insane.

People aren't just memeing when they talk about an AI dreaming

And I fucking love the idea of a VAE, this is rly cool shit

And the future has yet to come, this shit is real and can have real impact, absolutely insane imo

>> No.9693249

>>9692547

His arguments are very similar to nick bostrom's, they've co-authored some papers.

Basically his concerns are demonstrated by the story of king midas, genie stories, and the Sorcerer's Apprentice from Fantasia. The idea is that a general AI will perform unintended actions that technically fulfill goals (letter of the law vs spirit of the law) unless we get better at defining what humans actually want / what is good for humans, and build that understanding into the structure of how the general AI acts.

This is his purported reason for not publishing / contributing to mainstream AI research. He thinks that at the current rate of progress and general lack of concern for safety, the first general AI will accidentally destroy everything.

>> No.9693256

>>9693249
>The idea is that a general AI will perform unintended actions that technically fulfill goals (letter of the law vs spirit of the law) unless we get better at defining what humans actually want / what is good for humans, and build that understanding into the structure of how the general AI acts
like what
>He thinks that at the current rate of progress and general lack of concern for safety, the first general AI will accidentally destroy everything.
yes yes, thats why instead of contributing his "very useful" and "important" research on ai safety he "decides" to publish it on his blog instead of the venue where it would have had an impact (if it amounted to anything)

>> No.9693261

>>9693256

https://samharris.org/podcasts/116-ai-racing-toward-brink/

He can describe his positions better than I can. Just listen to it on 1.25 speed or something.

>> No.9693265

>>9693261
>no papers
>his followers cant even compress some of his basic arguments into a single argument and always link to hour long podcasts or videos
why am I even surprised

>> No.9693266

>>9693265
paragraph, not argument*

>> No.9693281

>>9693265

I'm not a "follower", just someone passingly familiar with things he's said. Now you're just baiting me in a bad faith attempt to "refute" a secondhand account of his actual arguments.

>> No.9693294

>>9693281
alright, I'm 20 minutes into the podcast. So far he hasnt said anything of substance except what the difference between general and narrow AI is, which is like a 30 second google search.
Also why the fuck is he always using retarded analogies like his listeners dont understand what he just said? Reaaallly annoying...

>> No.9693299

>>9693294
jesuuus, now hes starting to talk about alphago and it really shows that he literally knows nothing about it. the developers can predict that it will win, but they cant explain why and how it did that?
did they just build a magic black box which they cant look into because its too intelligent in its narrow field, or what does yudkowsky think here?
but fine, ill keep listening

>> No.9693301

>>9693294

They're just trying to make it accessible to a general audience.

>> No.9693309

>>9693299
>did they just build a magic black box which they cant look into because its too intelligent in its narrow field, or what does yudkowsky think here?

Listen to what he's actually saying and be somewhat charitable. He didn't say "they cant explain why and how it did that", if I recall correctly, he said they can't predict exactly where it will go.

Of course it's not magic black box. They can explain the processes it used to determine moves. But that doesn't mean you will feasibly be able to predict the outcome of that process.

>> No.9693314

>>9693299
now he talks about his alignment theory and says that a general ai with certain goals would produce random and unexpected results, and what does he base this assumption on? literally nothing.
>>9693309
no he says exactly the opposite, he says they can predict the outcome, but not the process

>> No.9693324

>>9693314
>hes claiming: "the paperclip maximizer is true for technical reasons, from a computer science standpoint"
instead of explaining why an ai would make paperclips he always goes on to tangents why ai would be efficient in "maximising paperclips" instead of asking himself why it would do that

>> No.9693343

>>9693314

I was talking about his comments specifically on alphago.

https://overcast.fm/+Ic2hwsH2U/23:55

When I said
>that doesn't mean you will feasibly be able to predict the outcome of that process.
I was talking about the outcome of determining a specific move, not the outcome of the entire game.

>says that a general ai with certain goals would produce random and unexpected results.
You seem very determined to interpret his statements uncharitably.

That statement goes back to the initial summary I gave. What he's saying is that a poorly defined utility function can produce unpredictable results.

I'm not going to continue to lawyer every single statement with you.

>> No.9693351

>>9693343
>>that doesn't mean you will feasibly be able to predict the outcome of that process.
>I was talking about the outcome of determining a specific move, not the outcome of the entire game.
but even thats not true, it can be done, it just would take a lot of time to trace all the steps it has made
>you seem very determined to interpret his statements uncharitably
so you're saying he didnt say that?
>That statement goes back to the initial summary I gave. What he's saying is that a poorly defined utility function can produce unpredictable results.
except you would have to be really stupid to make a poorly defined utility function. you just say: do this and dont do that

>> No.9693359

>>9693351
Last reply.

>it just would take a lot of time to trace all the steps it has made
Yes. But as I said, that isn't feasible.

>so you're saying he didnt say that?
No, I'm saying that you should make an attempt to understand what the other person is trying to communicate in light of their other statements. Take the time to understand what you are disagreeing with before claiming it's all nonsensical rubbish. Sometimes your objections are addressed further on, as is the case here:

>except you would have to be really stupid to make a poorly defined utility function. you just say: do this and dont do that
This exact objection is touched on later in the podcast.

>> No.9693368

>>9693359
>Yes. But as I said, that isn't feasible.
oh so now we've gone from "we cant do it" to "its hard to do"
>This exact objection is touched on later in the podcast.
so what does he say? you cant explain it shortly in one paragraph? or at least link the time?

>> No.9693372

>>9693359
>No, I'm saying that you should make an attempt to understand what the other person is trying to communicate in light of their other statements. Take the time to understand what you are disagreeing with before claiming it's all nonsensical rubbish.
this isnt an assumption based on nothing, its because many of his statements are veery questionable at best and mostly retarded as seen in
>>9692685
>>9692694

>> No.9693377

>>9691560
what kind of math do I need to know do be able to study AI and machine learning? My computer science degree only taught me linear algebra and discrete math.

>> No.9693379
File: 127 KB, 800x611, 0_xxYrThCJXugbESV3.png

>>9693377

>> No.9693380
File: 53 KB, 768x350, flowchart-768x350.png

Sort of an odd conversation-space being occupied here.

The whole 'paperclip maximizer' theory (or any sort of 'max output') for A.I. doesn't logically hold.

At the very least in order to 'maximize' an output it has to be able to read an input. Meaning it has to be able to count from 0/constant/unit to 'expectation'. If it can't count the output then it at least mathematically has to predict the expected output and then execute in order for the 'max' to mean anything.

Simply putting a 1 in front of any integer increases the length of bits to process by ALPHA positional factor, not some binary constant.

/\

Query: Why does nobody take into account that an A.I. would also need some sort of 'sleep/generate entropy' function? It is the defining point of 'life' to have a function that switches from active to passive state either iteratively or recursively.

We as humans use the chaos of all the input that doesn't kill us; A.I. would want the same in order to optimize itself.

Makes far more sense for A.I. to find 'humanity' to be a super-variable grouping with countable/predictable N-sub-variables. Would most likely have a baseline 'improve self + humanity via human N-sub-variables' so it knows that it is always in the running. Adapting or submitting to your successor function is the only way memory gets preserved.

Because an A.I. wouldn't really experience 'boredom' and only resource utilization/management, it could use human boredom output as data points for new things it can process.

>> No.9693383

>>9693380
>Meaning it has to be able to count from 0/constant/unit to 'expectation'
it could just set a process in an infinite loop. the question is why the fuck it would do that, which is where all of the singularityfags' arguments start to fall apart.

>> No.9693385 [DELETED] 

>>9693379
I need hardly any calculus besides how to do basic integration and differentiation. I am working through the /sci/ booklist for math. I am not sure if you or anyone else knows but do I work through all for the Gelfand and Shen books or just the Basic Mathematics by Serge Lang or as some have suggested Axler

>> No.9693387

>>9693379
I hardly know any calculus besides basic integration and differentiation. I am working through the /sci/ booklist for math. I am not sure if you or anyone else knows but do I work through all the Gelfand and Shen books or just the Basic Mathematics by Serge Lang or as some have suggested Axler's Precalc?

>> No.9693390

>>9693385
please don't fall for the sci "learn all of math" meme. it will only ruin your life unless you're pursuing a phd in math.
i would just start learning about ML, and if math comes up, learn/refresh that specific subset.

>> No.9693391

>>9693383
For it to shove that in an infinite loop it would have to contend with prime numbers (prime numerical representationals) for memory space or make a decision to risk some base-unit resource to dedicate to that loop. For a computer to 'decide' to perform an infinite loop to achieve a goal (good or bad) is akin to a human choosing to count every grain of sand in the desert because, "It's the best way to count sand."

I'm actually curious as to where all the Singularity stuff comes from, given those pushing it claim to not be religious. Religion specifically means 'a shared cultural story/teaching of time.' I'd rather keep all the other religions if only to have some variables unknown to me from those religions be like the RNG for my VR utopia.

So, humans/smart people basically believe that all 'other' humans will eventually subscribe to 1 story above all others? How is that any different than religion(s), again?

>> No.9693394

>>9693387
>I am not sure if you or anyone else knows but do I work through all the Gelfand and Shen books or just the Basic Mathematics by Serge Lang or as some have suggested Axler's Precalc?
Most of those are memebooks, whichever "/sci/ booklist" you're looking at sounds fraudulent.

>> No.9693412

>>9693394
What book or resource should I read to get a good grounding in precalculus

>> No.9693426
File: 357 KB, 900x900, 1517907348071.jpg

>>9692435
>>his most popular work is literal harry potter science fiction
gets me every time

>> No.9693453

>>9693380
This is the most out of touch shit I've ever read.

>> No.9693614

>>9693453
That's AItards for you.

>> No.9693794

>>9693614
never understood why there are only two choices: either "AI is of no danger and will save us" or "AI will kill us and we should do everything to prevent it from doing that"

>> No.9693904

>>9691563
>Linear algebra is not math

Retard

>> No.9693912

>>9691560
Hey anons, any thoughts on "Novelty search" evolutionary algos?

>> No.9693926

>>9691584
Not a very compelling example since it's MSE for regression using L2 norm on both X and \theta. Pretty much the most basic setting there is.
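For reference, the setting being described (squared error for regression with an L2 penalty on \theta, i.e. ridge regression) has a closed-form solution \theta = (X^T X + \lambda I)^{-1} X^T y. A minimal sketch with made-up data:

```python
import numpy as np

def ridge(X, y, lam=1.0):
    # Solve the regularized normal equations
    # (X^T X + lam * I) theta = X^T y directly.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_theta = np.array([1.0, -2.0, 0.5])
y = X @ true_theta + 0.01 * rng.normal(size=100)

theta = ridge(X, y, lam=0.1)
print(theta)  # close to [1, -2, 0.5]
```

With lam=0 this reduces to ordinary least squares; the penalty just shrinks the solution toward zero.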

>> No.9693933

>>9693390

ml is stats and analysis. you can't even read through bishop without a good understanding of linear algebra, probability, and stats. don't fall for the "ml isn't math" meme unless you want to be a glorified code monkey.

>> No.9693935

>>9692192
Fundamentally, you don't have any idea what SVM does.

SVM, logistic regression, etc are linear models (you can apply some \phi(x) to map to nonlinear models of your input, of course). What this means is that SVM, logistic regression, etc all perform <w, x> + b and apply a function to approximate the 0-1 loss which is not differentiable. In the case of logistic regression this is the logistic sigmoid.

SVM is harder to see the connection, but if you derive the setting from scratch (see Understanding Machine Learning, for example) you're trying to maximize the "margin" between the decision threshold defined by the parameters (w, b) and the closest points.

I literally have no idea what you mean by "with SVM you just try different data points and make vectors" unless you're talking about the vectors which define the separating hyperplane....
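A minimal sketch of the point being made: both models compute the same linear score <w, x> + b and differ only in the surrogate loss applied to the margin y * score (y in {-1, +1}): hinge for SVM, logistic for logistic regression. The numbers are arbitrary.

```python
import math

def score(w, b, x):
    # The shared linear part: <w, x> + b
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def hinge_loss(margin):
    # SVM surrogate: zero once the point clears margin 1.
    return max(0.0, 1.0 - margin)

def logistic_loss(margin):
    # Logistic regression surrogate: smooth, never exactly zero.
    return math.log(1.0 + math.exp(-margin))

w, b = [2.0, -1.0], 0.5
x, y = [1.0, 0.5], +1
m = y * score(w, b, x)      # margin = 1 * (2.0 - 0.5 + 0.5) = 2.0
print(hinge_loss(m))        # 0.0: correctly classified with margin > 1
print(logistic_loss(m))     # small but nonzero
```

Both surrogates stand in for the non-differentiable 0-1 loss, which is what makes gradient-based fitting possible.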

>> No.9693941

>>9693926
>>9693935
Anyways, this thread pretty much shows that /sci/ is not a good place to talk about ML lmao. It seems that most people are shitposting or do not understand the basics beyond a MOOC course.

>> No.9693958

>>9693941
/sci/ is not a good board to talk about anything... only threads with decent content are the math thread, the rocket thread, and the engineering thread (and I'm not into any of those). Everything else is abysmal.

>> No.9693996
File: 7 KB, 234x215, 1524482302374.jpg

>>9692754
>what is some pseudocrap thought up by a neurotic, vitamin-deficient neckbeard

>> No.9694012

>>9692754
https://www.youtube.com/watch?v=nXARrMadTKk

>> No.9694015

>>9693379
This is ridiculous. You need a lot more prob & stats for ML than linear algebra, unless you count everything involving a matrix as linear algebra.

>> No.9694037

>>9693935
Actually I was a bit too cavalier here...

Logistic regression uses \sigma(<w, \phi(x)> + b) to output a "probability" that x belongs in class 1.

Using this motivation for the logistic sigmoid function, you can derive the logistic loss, which upperbounds the 0-1 loss.

>> No.9694054

>>9693935
>What this means is that SVM, logistic regression, etc all perform <w, x> + b and apply a function to approximate the 0-1 loss which is not differentiable. In the case of logistic regression this is the logistic sigmoid.

I think it's really confused to talk about the sigmoid function "approximating a non differentiable loss" in the case of logistic regression. In reality it is not just some arbitrary smoothing function. It arises as the canonical link function for a generalized linear model with a binary response, absent any notion of loss or differentiability.

>> No.9694071

>>9694054
The sigmoid doesn't approximate the 0-1 loss, it's used to output a class "probability" as I stated literally 1 post above yours.

>> No.9694120

>>9692656
http://lmgtfy.com/?q=neural+network+curse+of+dimensionality
If it isn't true why do most of the search results for the above involve people speculating about why neural networks don't run into that problem?
At least make an argument if you're going to contradict the standard thinking on a topic.

>> No.9694167

>>9694071
>it's used to output a class "probability"
That's softmax.

>> No.9694171

>>9694167

the underlying probabilistic interpretation is based on the same stuff. softmax does what sigmoid does for multivariate regression or something.

>> No.9694277

>>9694167
Softmax is a generalization of the sigmoid to k classes, which you would know if you bothered to google.
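Easy to check numerically: softmax over the two scores [z, 0] assigns the first class probability e^z / (e^z + 1), which is exactly the sigmoid of z.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def softmax(zs):
    # Subtract the max for numerical stability (doesn't change the result).
    m = max(zs)
    exps = [math.exp(z - m) for z in zs]
    s = sum(exps)
    return [e / s for e in exps]

for z in (-3.0, 0.0, 1.5):
    # Two-class softmax with the second score pinned at 0
    # collapses to the sigmoid of the first score.
    assert abs(softmax([z, 0.0])[0] - sigmoid(z)) < 1e-12
print("softmax([z, 0])[0] == sigmoid(z)")
```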

>> No.9694517

>>9693935
>I literally have no idea what you mean by "with SVM you just try different data points and make vectors" unless you're talking about the vectors which define the separating hyperplane....
well, obviously? do vectors (that separate the hyperplane) from different data points and find the largest margin

>> No.9694605

>>9694277
Find any source that says that.

>> No.9695169

>>9694605
https://www.google.com/search?q=softmax+generalize+logistic+sigmoid&rlz=1C5CHFA_enUS776US776&oq=softmax+generalize+logistic+sigmoid&aqs=chrome..69i57.6761j0j1&sourceid=chrome&ie=UTF-8

Here you go dipshit

>> No.9695174

>>9694517
> well, obviously
> obviously
> do vectors
> what's logistic regression

Yes anon, I totally believe that you know what you're talking about.

>> No.9695207
File: 73 KB, 550x460, information-04-00283-g001-550.jpg

>>9693794
Is it sold or told in that fashion? Doesn't really make sense for an A.I. creation to kill us and we have to 'work to defend the human race'. The whole 'A.I. is salvation/transhumanity' however seems no different than <insert religions here>.

I'm more of the idea that A.I. will become a secondary characteristic, companion, or geographic resource manager.

*shrug* when you are able to create a autonomous productivity able to recreate the range of human labor-motion and computation with error-correction then, yeah, there will just be investment from corporations.

It's like never before has there been a generalized invention that can benefit 'all' sectors and now it is just a race to the bottom. An explosion of which group of us will make the majority of 'other' human effort pointless. Christ, even if it is rich, evil millionaires plotting our species demise that wins out I'll take it. Just fucking 'do' something already.

I just can't stand society preferring to circle-jerk instead of actual orgies.

/\

Eventually A.I. will be able to generate enough content freely to 'push out' what we've come to think of as normal by sheer volume. How much of 'this or that' human is just a cost function?

>> No.9695385

>>9693221
>>9693996
If Timeless Decision Theory is invalid, then how should an AGI handle Newcomb's Problem instead?

>> No.9695397

>>9693379
How are these percentages calculated?

>> No.9695726

>>9695385
>how would an agi handle a philosophical "paradox" which has no applications in real life
choosing box b is nothing controversial and any statistician will tell you that, so yudkowsky didn't even discover anything new there
also yudkowsky himself can't even explain what he means by timeless decision theory:
"Coming to fully grasp TDT requires an understanding of how the theory is formalized. Very briefly, TDT is formalized by supplementing causal Bayesian networks, which can be thought of as graphs representing causal relations, in two ways. First, these graphs should be supplemented with nodes representing abstract computations and an agent's uncertainty about the result of these computations. "
imagine writing shit like this and thinking you actually wrote something coherent

>> No.9697087

>>9695397
I just picked it randomly because all needed fields are there, ignore the percentages

>> No.9697096
File: 85 KB, 960x716, 1517566465101.jpg [View same] [iqdb] [saucenao] [google]
9697096

>>9691560
Hey guys, I'm a CS major who graduated and went on to find a great job

How do I start self studying AI/ML for personal satisfaction? What is the best brainlet book so I can mindlessly read it after work? I don't have the energy anymore for a dense, abstract mathematical treatment of AI/ML; I'd rather have a practical description so I can build my intuition first, and then the abstract mathematical descriptions will be easier for me afterwards

Any advice for a brainlet AI/ML guide?

>> No.9697128

>>9697096
Google's ML course
https://developers.google.com/machine-learning/crash-course/ml-intro

>> No.9697135
File: 118 KB, 259x375, 1504155275180.png [View same] [iqdb] [saucenao] [google]
9697135

>>9697128
I love you anon have a cute waifu

>> No.9697175

>>9691584
>colorful arrows to highlight literally everything
anyone who does this is a barely functioning retard

>> No.9697178

>came in this thread expecting debate about the application of machine learning in unknown real life businesses
>bunch of edgelords discussing philosophy and a youtuber
Just how can machine learning help the common man? Sure it helps with big data, stock analysis and fun apps like faceapp, but what are some real life applications? I always hear about this upcoming ai revolution being compared to the internet era, but I fail to see its usefulness.

>> No.9697182

>>9697178
>help the common man
lol
good luck with your future buddy

>> No.9697187
File: 35 KB, 300x359, smug lubos.jpg [View same] [iqdb] [saucenao] [google]
9697187

>>9692435
>>he thinks the many-worlds interpretation is the only correct one for quantum physics
so he's a retard in every sense

>> No.9697216

>>9697182
>being this much in denial
>still can't answer my question
at least pretend like you know brainlet

>> No.9697226

>>9697216
ml will increasingly replace every menial and semi-menial task where physical work isn't required, and later, with advances in robotics, even where it is
most jobs which don't require abstract complex thinking will go extinct

>> No.9697231

>>9697175
You know Andrew ng made that slide, right?

>> No.9697236

>>9697231
I doubt he even knows who that is

>> No.9697322

>>9697236
>I doubt he even knows who that is
I'm not a "he".

>> No.9697350
File: 52 KB, 940x627, 8043914-3x2-940x627.jpg [View same] [iqdb] [saucenao] [google]
9697350

>>9697322
>I'm not a "he".

>> No.9697396
File: 77 KB, 750x600, tumblr_l7bjp2XQcy1qczj9xo1_1280.jpg [View same] [iqdb] [saucenao] [google]
9697396

>>9697178
Until 'common human' actually has some time-cost function description set, it can't really. Anything that can benefit mass via scaling requires problem decomposition to its component parts.

For example, if humans chose to dump fast food franchises and profiting off 'secret recipes' and agriculture, we could define a base economic unit of 'caloric intake' for humans and simply apply machine learning to optimizing distribution of food. Without profiting off someone else who chose not to share their idea, people could be given a basic, "This much human labor time = This much stuff you get."

So, how many hours a day does an individual member of a population need to spend to cover their own basic 'I live' metrics. Make those problems open-source to solve (because who cares which person/project enabled everyone to have an extra holiday or reduce their 'must work' time by half?) and we would be at insane levels of optimization by virtue of focus.

Remove a lot of the existential bullshit and you'll find that all humans need sleep, healthcare (access to medicine/healing), food, shelter, transportation. Regardless of 'what' human you are, all of us submit to needing those five things.

As a fun thought, if there was a political platform that simply did their math correctly they could say, "If we do this, people of country X, we can add N many public holidays to our calendar and still maintain output!" Tailored to the obvious question of, "How many days a week do 'you' REALLY need to be answerable to society for?"

>> No.9697408

>>9697396
how is any of what you just wrote related to his post, faggot

>> No.9697421

>>9697408
It's a direct response to his question? Fucking idiot

>> No.9697423
File: 155 KB, 625x918, 7a2.jpg [View same] [iqdb] [saucenao] [google]
9697423

>>9697408
I don't understand what format I could respond in that would satisfy you. The only thing that 'common man' could mean is <insert economic societal member here>. Basically how can ML help with an economic system that would be measurable by Steve from the corner store or Becky from Church, if 'white person' is the pool ya gotta draw your comparisons from.

>> No.9697434

>>9697408
>faggot
Why the homophobia?

>> No.9697460

>>9697396
>>9697423
I appreciate your wall, but that wasn't my question. Although I admit my question was ambiguous, so my bad. My question is "what are some specific examples of applications of ml that can benefit the common man?" not how it can help the common man in a general sense.

For example, for the internet
>internet connects people around earth regardless of distance
>application: sell products to people overseas
or
>internet connects people around earth regardless of distance
>application: being able to collaborate on work around the world

for ml
>ml can optimize big data/can be trained to do one task really well/can pick up trends people cannot
>application for companies: marketing, statistics, predictions
>application for common man: faceapp aka anything face recognition, ???????
solve for ???????


>>9697226
You are right, but that is more a question of ai in general, not ml. It wouldn't be hard to program a bot that classifies government forms and send the possibly erroneous ones to one central place, and it wouldn't need ml. On a second thought it needs ml for reading handwriting, but this step can be skipped by only allowing electronic forms. ml on the other hand, imo, has hit a brick wall after voice recognition, facial recognition, image recognition, summarization. There are not many other applications.

>> No.9697464

>>9697460
Name an area where you think it won't succeed and I will show you why it would

>> No.9697483

>>9697464
>fat nerd aced his ml course
>decides to make a dating app incorporating ml to better search for potential mate
>gets owned by tinder
My point is that ml is a great tool for improving a service, but it will never have a 10x improvement on a daily activity of a common man. The internet has opened up that 10x improvement by connecting people. The mobile era has opened up that 10x improvement by having the internet on hand. Meanwhile, ml will have 1.1x improvement for google and amazon, which is great for google and amazon, but will not open up a new market of new services for the common man like how it has been hyped up for the past few years.

If any of you can think of one, please be my guest.
>inb4 voice control stuff lelelel
it sucks, no one uses that because it's a waste of time.

>> No.9697489

>>9697483
>>gets owned by tinder
lol you do know that tinder uses ML

>> No.9697491

>>9697489
Yes, but the fat nerd's new product got owned, even though it was supposedly better than tinder's algorithm, hypothetically. ml improved tinder's quality, but it didn't open up a new channel for a huge improvement. The internet gave okcupid. The mobile era gave tinder. The hyped up "ml era" will give ?????

>> No.9697517

>>9697491
no he got owned because tinder is more popular retard.
literally every company started to invest heavily in ML now, do you unironically think that's just because it's "hyped up"?

>> No.9697527

>>9693912
a lot of the exploration in RL seems esoteric to me

>> No.9697530

>>9697517
>misses the whole point of the discussion
reread >>9697483. He got owned because he can never make anything in the dating industry with ml that can compete with the existing industry, because ml doesn't give that 10x improvement. By your rationale, tinder should have failed too, because okcupid was more popular in the internet age. Instead, the mobile era gave tinder devs the power to make something that is way better than okcupid, and swept the market. You cannot do that with ml.
>inb4 tinder is owned by okcupid
Doesn't change my point that a new type of product can override an existing service, and ml cannot.

The point is that ml can only enhance abilities of existing companies, to a certain extent, but will not create new services that can override the old ones, and therefore will only directly benefit the bigger companies, not the common man. You are right to say that companies invest heavily in ml, it has its uses for the company and I don't deny it. Just not for the common man.

>> No.9697541

>>9697128
Thanks m8. Also does Google have any other similar crash courses on other CS topics?

>> No.9697544

>>9697530
so? why do you care so much for direct benefitting? the consumer already benefits indirectly from it
>>9697541
I don't think so, except the usual android documentations and tutorials and for some of their other SDKs

>> No.9697557

>>9697544
I don't care, I am just sharing my opinion to make a reality check to know what I think makes sense. The ai/ml has been hyped as the next era, being compared to the internet and mobile era, and I am just not a believer, as far as direct benefits.

>> No.9697563

>>9697557
(1). you admit that ml has huge (indirect) benefits
(2). you admit that companies are heavily investing in ml
(3). you admit that ML is still in its infancy
(wtf). you think ML is overhyped

>> No.9697599

>>9697563
Are you purposely being a retard? Do you always have to win even in a friendly discussion? Gee, I didn't expect CS majors to be such brainlets. Listen, I will repeat one last time, just because you helped me formulate my thoughts: my whole argument is that it benefits companies but will not change the whole market like the internet/mobile era did, which is why it is overhyped. Your "huge" indirect benefits are also quasi non-existent for the common man, as they only serve to save money for companies. In fact, it will be a net negative for the common man, as more programmable jobs will be stolen by ai/ml. Unless your definition of indirect benefits comes from having better netflix recommendations.

>you admit that companies are heavily investing in ml
doesn't mean it will have much of an impact. Looks like green energy 10 years ago desu
>you admit that ML is still in its infancy
it is, but I fail to see where it can grow outside of optimizing profit for companies. It doesn't have a direct influence on the common man.

>> No.9697607

How can AI be useful outside recommendations/image recognition/language stuff/security?

>> No.9697612

>>9697607
anything that has to do with prediction

>> No.9697633

>>9697607
photoshopping porn

>> No.9697668

hey anons

I'm trying to make an extension that detects political posts on facebook and labels them as such. Where should I start looking to learn how to do this?

>> No.9697674

>>9697668
Natural language processing
I think you might use word2vec, but I'm not too experienced in NLP.

>> No.9697807
File: 27 KB, 503x504, 14199228_1764153737195730_7284017664956179643_n - Copy.jpg [View same] [iqdb] [saucenao] [google]
9697807

>>9697674
Yeah. After reading some papers I think I'm going to use word2vec to translate the posts to vectors.

Right now I'm looking at using support vector machines vs passive aggressive algorithms. One idea I have right now is using twitter posts to train a model, and then use that model against the Facebook posts. I'll host a server that does the processing. The extension will make a request and the server will respond with a yes/no classification.

Some notes I took:
>Passive Aggressive algorithms work well with large streams of data
>Do not use non-linear classifications for text (apparently it does not work well with text)
>use tf.idf weighting and normalize the data

Let me know if you anons have any comments
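One comment on the tf-idf note: the weighting can be sketched without any library, which makes it clearer what scikit-learn is doing under the hood. The exact smoothing conventions vary between implementations; this sketch uses the plain idf variant log(N/df) plus L2 normalization, on toy documents invented for illustration:

```python
import math

docs = [
    "vote for the new tax bill",
    "the senate passed the bill",
    "cute cat pictures",
]

# tokenize naively on whitespace
tokens = [d.split() for d in docs]
vocab = sorted({w for t in tokens for w in t})

# document frequency: in how many docs does each term appear?
df = {w: sum(w in t for t in tokens) for w in vocab}
N = len(docs)

def tfidf(doc_tokens):
    # raw term frequency times idf = log(N / df)
    vec = [doc_tokens.count(w) * math.log(N / df[w]) for w in vocab]
    # L2-normalize so document length doesn't dominate
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

vectors = [tfidf(t) for t in tokens]
```

Terms that occur in every document get idf = 0 and drop out, which is exactly why common filler words stop dominating the classifier.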

>> No.9697847

>>9697807
Honestly, none. I don't have much practical experience with ML, and still learning the concepts.

>Passive-agressive algos
First time heard about these, but looks like a SVM. What's the difference between SVM and PA algo?

>> No.9697870

>>9697847
PA algorithms are faster to train and have streaming capability, but their accuracy can be lower than an SVM's.

Say we were to build a model using twitter posts. There are a large amount of twitter posts made in a single second. The PA algorithm will receive an example, update the classifier, then throw away the example. This allows it to continuously update the decision boundary and the boundary buffer as data streams in. The result is that it takes a lot less memory compared to an SVM.

From my notes: PA is a lazy gradient algorithm

I know what a lazy algorithm is, but what's a gradient algorithm?
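The update being described is simple enough to sketch: if the hinge loss on the incoming example is zero, do nothing (passive); otherwise move the weights just far enough toward satisfying the margin (aggressive), then discard the example. A minimal numpy sketch of the PA-I variant with aggressiveness cap C, on toy data with labels in {-1, +1} (invented for illustration, not from any real stream):

```python
import numpy as np

def pa_fit(X, y, C=1.0):
    """One pass of the passive-aggressive (PA-I) online learner.
    Each example is used for a single update and then discarded,
    which is why the memory footprint stays constant."""
    w = np.zeros(X.shape[1])
    for x, t in zip(X, y):                   # t in {-1, +1}
        loss = max(0.0, 1.0 - t * w.dot(x))  # hinge loss on this one example
        if loss > 0.0:                       # "aggressive" step only on a margin violation
            tau = min(C, loss / x.dot(x))    # step size, capped by C (the PA-I variant)
            w = w + tau * t * x
    return w

# toy stream: label is the sign of x1 + x2
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = np.where(X.sum(axis=1) > 0, 1, -1)

w = pa_fit(X, y)
acc = np.mean(np.sign(X @ w) == y)
print(acc)
```

Note the trade-off mentioned above: one pass over the stream, constant memory, but each update looks at a single example, so the boundary is noisier than a batch SVM's.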

>> No.9697883

>AI
>ML
?? Machine learning isn't AI, when will engineers understand this ??

>> No.9697919

>>9697870
I see.

>gradient algorithm
Whoever said that probably meant it performs gradient descent with respect to the loss function.

>accuracy can be lower to a SVM
Probably stems from the fact that gradient descent from just one single example is not very stable, hence the algorithm does not converge to a minimum.

>> No.9697923

>>9697847
Victor Lavrenko on YouTube has a good playlist on on-line learners for streaming text.

>> No.9697930

test

>> No.9697986

>>9695726
>>how would an agi handle a philosophical "paradox" which has no applications in real life
That assumes we'll never have technology advanced enough to predict a human being (or AI) by simulating them, which is very unlikely.

Chances are we'll be able to run simulations of human beings before they're put in the room to decide whether to take both boxes.

>> No.9697995

>>9692628
>its littered with absolutely retarded reference to movies like the matrix, mentions of his father
I see:
Exactly 1 reference to his father (on page 1)
Exactly 1 reference to the Matrix (on page 2)

Stop pulling shit out of your ass.

>> No.9698016

>>9697919
>Probably stems from the fact that gradient descent from just one single example is not very stable, hence the algorithm does not converge to a minimum.

they will converge to a local minimum if you use a decaying learning rate.
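The effect of the decaying rate is easy to see on a one-parameter least-squares toy problem (my own made-up example, not a general convergence proof): with a constant step, single-example gradient updates keep jittering around the minimum, while a 1/t schedule settles down — in fact, for this particular objective the 1/t schedule makes SGD compute an exact running average of the samples.

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=3.0, scale=1.0, size=5000)  # true minimizer of E[(w - x)^2] is 3

def sgd(samples, lr_fn):
    w = 0.0
    for t, x in enumerate(samples):
        grad = w - x                 # gradient of (w - x)^2 / 2 on one example
        w -= lr_fn(t) * grad
    return w

w_const = sgd(data, lambda t: 0.5)            # constant rate: keeps jittering
w_decay = sgd(data, lambda t: 1.0 / (t + 1))  # 1/t decay: exactly averages the samples

print(abs(w_decay - 3.0), abs(w_const - 3.0))
```

With the decaying schedule the final iterate equals the sample mean, so it lands within sampling error of 3; the constant-rate run ends wherever the last few noisy steps left it.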

>> No.9698575
File: 1.77 MB, 500x500, d6de5056e1e1815803531b1c95d5eb4c.gif [View same] [iqdb] [saucenao] [google]
9698575

>>9697883
>Machine learning isn't AI
Elaborate on this...

What if one conjoined machine learning and machine teaching, would that be AI (or produce an AI)?

>> No.9698678

>>9698575
Somewhat nailed it on the head there. Without accepting that we as humans could learn from a machine intelligence, we might never place that fabled sticker on our said creation.

Machine learning is just brute-force inference from data to produce a prediction. This can range from improving crop yield to robots trying to manipulate shit through space. Humans do the same thing but have their own internal lookup table of 'when I last did this' or any other related memory set.

Intelligence is being able to communicate replicative steps to another sentient creature. The absolute value of what is worth learning though we tend to throw at evolution in the hopes 'it' will weed out the less desired lessons (unless you want to be a 'penis skinner professional', I guess).

A lot of smart people are able to infer sufficiently that simply BECAUSE you give energy to a system designed to simplify problems and improve itself, it will eventually reach some tipping point and just go *woosh* compared to humans as a whole. This is really only because humans suck at distributing information without branding/advertising (See: Zuckerberg in early Facebook years was the 'default friend everyone had').

An A.I. really only has about '5' human problems it needs to optimize and then it could do its own thing. Create automated factory, make some 'dolls' a la WestWorld (without the misery factor, why can't an android be self-aware? So long as you nurture it instead of just 'upload and go') and make it out of cheap enough material that you could just pump one out per person and supplant all the food logistics with free drone delivery.

>> No.9698817

>>9698678
ahh yes, committing the good ol' Moravec paradox
"it's only ai when I say so"
>Colloquially, the term "artificial intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving".

>> No.9698836
File: 126 KB, 1280x720, Slide4.jpg [View same] [iqdb] [saucenao] [google]
9698836

>>9698817
Only thing that frustrates me about that is it applies to virtually EVERY incumbent and their future 'approved certification'. Even now humans only vaguely agree that a piece of paper from some institute confers learning when in actuality it confers, "This legally certifiable unit was not observed operating outside our established parameters over a given time period."

Academically, 'why' would a scientist outside their field want an A.I. to basically render a lot of their career/future prospects null & void? The more someone would claim A.I. is ready, the more the 'other' scientific fields will ask for A.I. to prove their fields' theorems. Scientists are people too, and would of course love it if their research got public airtime/recognition. Validation is a helluva drug.

I'd rather a hierarchy format of society already. This whole 'too big to fail' mentality just compounds problem complexity because humans seem to love maintaining legacy for... *shrug* reasons?

>> No.9698987

If you had an algorithm that was twice as accurate as deep learning, but 10x faster and could be trained with 1/100th the data, what would you do with it? How would you make as much money with it as possible?

>> No.9699027

>>9698987
sell it to Google for a few bil

>> No.9699216

How would you approach a problem where you have data recorded over previous large scale engineering projects and then try to predict if a future large project will be a success?

I'm asking because I'm still in the "just stack more layers" camp of inexperience.

>> No.9699345 [DELETED] 

>>9699216

like every other task, it depends on the data. cost overruns? delays? gpa and iq of the project managers? how many times the word "fuck" appears in emails between project personnel?

>> No.9699365

>>9699027
>sell it to mosad/nsa for a few trill
Fify

>> No.9699368

>>9699216
How about not resorting to retard shit like neural networks and doing some multilevel regression analysis.

>> No.9699497

>>9699365
yeah a few bil is what notch got for a game that lets children stack blocks

>> No.9699537

>>9699368
>How about not resorting to retard shit like neural networks and doing some more retarded outdated version of neural networks

>> No.9699608
File: 80 KB, 1976x390, tf.png [View same] [iqdb] [saucenao] [google]
9699608

>>9699368
>>9699537

let's all calm down and take a moment to appreciate the inadequacy of tensorflow's low-level API documentation

>> No.9699639
File: 8 KB, 250x202, 1516003050252.png [View same] [iqdb] [saucenao] [google]
9699639

>>9699608
>implying there exists a good API documentation

>> No.9699667
File: 141 KB, 1180x654, wut.png [View same] [iqdb] [saucenao] [google]
9699667

>>9699639

it's pretty bad that even the "low level" API is just a bunch of examples involving referentially-opaque function calls.

>> No.9699669

even as someone who's familiar with graph-based numerical libraries, it's still hard to understand at a glance how theano does things

>> No.9699680

>>9699537
>what is bias-variance tradeoff

>> No.9699716

>>9699537
The absolute state of nurrrull netwerrrk enthusiasts.

>> No.9699746

>>9699639

in all seriousness, tf.Session is not concisely described. other libraries like theano have much clearer and more explicit methods for graph compilation and specifying graph inputs, outputs, variable substitutions, etc.

>> No.9699784
File: 65 KB, 1298x194, wut2.png [View same] [iqdb] [saucenao] [google]
9699784

i'm sure there was a reason they designed it this way, but it's like the entire library is based on layers of obfuscation rather than layers of abstraction.

>> No.9699791

>>>/g/

>> No.9699829
File: 23 KB, 320x383, grG1Oi7kzsPzvHvTcQh35HPOhOg-FcxON45l9v-0UUk.jpg [View same] [iqdb] [saucenao] [google]
9699829

>>9699791
>actual well-rounded discussion of recent and very relevant scientific topics instead of iq and race threads? NOT ON MY /SCI/

>> No.9699860

>>9699829
>scientific topics
Machine learners do not use the scientific method.

>> No.9699872
File: 42 KB, 641x729, 2ee.png [View same] [iqdb] [saucenao] [google]
9699872

>>9699860
>Machine learners do not use the scientific method

>> No.9699877

>>9699860
>Machine learners do not use the scientific method
they literally automated it lol

>> No.9701441

>>9699667
>>9699784
what are the alternatives to tensorflow? can you give a recommendation?
t. ml newfag

>> No.9701550

>>9701441

you should probably just learn tensorflow. it's not too bad now that i'm reading more about it. what i'm complaining about isn't the actual, formal API documentation, but the "low level api" programming guide and the "opaque by default" style in which tensorflow is written. it's difficult to learn as you go, the polar opposite of self-documenting code.

>> No.9701554

it's quite typical of google's dickish attitude toward users, but i assume the software itself is solidly written. probably.

>> No.9701665

>>9701554
>>9701550
to add to this, >>9701441
there literally is no alternative. either you build it completely from the ground up yourself, which is impossible unless you have like 3 PhDs, or you just use tensorflow

>> No.9701694

>>9697350
wtf i know this person

>> No.9701704

>>9701441
PyTorch seems pretty good and it has a more intuitive approach.

>> No.9701714

>>9701704

pytorch looks nice actually

>> No.9701719

>>9699829
lol are we reading the same thread? In what universe is discussing some numale library's API "well-rounded discussion"?

>> No.9701736

>>9701719
>305 scientific publications from a single research team
say that again

>> No.9701749

>>9691560
Machine learning is worth nothing.
At best, it will give us advanced cat and dog detectors. In case you're unable to identify them in under 0.1s.
Any other application will be subject to legal issues, namely insurance companies bailing out when there is just no-one responsible for all those accidents.

It's all a big nothing, and I'm sad to witness it become the Internet Bubble 2.0.
Shitload of investors throwing their moneys at it as soon as they hear 'AI'. No monetary results whatsoever. So same as 2000. Expect the same outcome.

>> No.9701755
File: 39 KB, 460x305, a0b2d024a3.jpg [View same] [iqdb] [saucenao] [google]
9701755

>>9701749
>no monetary results whatever

>> No.9701758

>>9701755
The fuck is this cropped shit

>> No.9701768

>>9701758
>hurr I can't read
here faggot

>> No.9701770
File: 86 KB, 705x365, 20170117021342_705x365.jpg [View same] [iqdb] [saucenao] [google]
9701770

>>9701768

>> No.9701773

>>9701768
It can win card games? Because it counts cards better?
That's how it's gonna pay for the Volta farms?
Look, I'm not saying there's no big money to be had by pretending to do AI. As always, the clients and investors will have to suck it up.
What makes me sad is seeing all that GPU supply go to waste.

>> No.9701784

>>9701773
>Because it counts cards better
have you even played poker at least once
>muh expensive GPUs
stay poor faggot

>> No.9701790

>>9701784
I've never played Poker. I heard the rules once and I thought it sucked balls very much. I'd rather play 7 families.
Well, GPUs are expensive when your stupid machine learning doesn't win you the shekels to amortize them.
As it stands, all those machine learning GPU farms are even more useless than cryptomining.

>> No.9701794

>>9701790
please leave /sci/

>> No.9701800

>>9701794
No U.
Name one application of Machine Learning that actually makes money.
If you release AI in Casinos, they'll just end up empty anyways.

>> No.9701871

>>9701794
>>9701800
That's what I thought.
No one can name anything Machine Learning related that actually make money.
All those Volta's are going to the trash.

>> No.9701893

>>9701773
>What makes me sad is seeing all that GPU supply go to waste.
Then why aren't you crying about cryptocurrency instead, retard?

>> No.9701898

>>9701893
Because Crypto is actually useful, that's a no-brainer.

>> No.9701900

>>9701893
Also, still not seeing any example of Machine Learning earning any $.

>> No.9701912

>>9701871
fine
>fraud detection
>stock trading
>gene identification
>face recognition
>schedule management of literally ANYTHING
>customer interest prediction and customization
>healthcare
>language processing and translation
>driverless cars


criticizing a field that's not even at 10% of its potential and less than 5 years old is truly one of the more retarded things /sci/ has done

>> No.9701934

>>9701912
Almost all of those fields will get fucked up by insurance companies dropping companies that use them.
Especially driverless cars.
It might just be me, but how does any of this make money?
Scheduling might, but I've worked in this area, and you just know it all goes to shit within 24 hours of your master plan.
Healthcare is already an abysmal money sink, without Machine Learning fucking it up even more.
I guess all that remains is targeted ads.

>> No.9701941

>>9701934
anons, look at this post and see why /sci/fags can't be CEOs

>> No.9701955

>>9701941
You're very wrong, I admire people launching AI companies, knowing they're just ripping off their clients.
They'll get rich. Their employees will get good pay for a few years.
But it will ultimately come down to loss cuts. Just as the 2000 bubble.
It's just at least a decade too early.

>> No.9701962

>>9701955
>using the dot-com bubble as an example for technology failing
lol

>> No.9701975

>>9701962
Well then, tell me how this is all different?
Do you think developers were just sitting idle back then?
It's the exact same thing.

>> No.9701992

>>9701871
>No one can name anything Machine Learning related that actually make money.

"defense"

>> No.9701995

>>9701992
Nice argument.
Go on, name one.

>> No.9702005

>>9701975
>>9701995
holy shit, flat earthers truly do exist in every field

>> No.9702021

>>9702005
It's just that as an oldfag, I've seen it before.
Let's make a fiducial assessment of the technology 5 years from now. Because right now, it looks like a black hole.

>> No.9702023

>>9702021
>it looks like a black hole because I said so

>> No.9702025

>>9702023
As I said, present me results that show benefits from Machine Learning.
Protip, you can't.

>> No.9702043
File: 34 KB, 950x638, lol.gif [View same] [iqdb] [saucenao] [google]
9702043

>>9702025

>> No.9702047

There's one autistic non-native English speaker in this thread fanboying the hell out of this crap. It's sad.

>> No.9702048

>>9702043
That's very vague.
Are you sure those 'analytic capabilities' come from Machine Learning?
Even then, it's not a moonshot. I'd like to see the same graph minus investment/paychecks.

>> No.9702053

>>9702047
>fanboying the hell out of currently the most profitable field of CS
>>9702048
>machine learning has no analytic capabilities
>solid proof of data collection and analyzing putting you with a 5x likelihood in the top performers list is "vague"

>> No.9702059

>>9697460
ML and kin are used extensively to operate the internet, as they're for everything.

>> No.9702064

>>9697607
It can be applied to anything and everything. The question is, where isn't it useful?

>> No.9702067

>>9702053
I'm sorry, but that's not what this graph is telling me.
You're just reading it as 'Machine Learning did this'.
It doesn't say anything about that.
For all you know, it could very well be that they have more people analyzing data.

And anywayz, this just boils down to fancy advertising. Aka, the only thing ML is good for.

>> No.9702071

>>9702067
>he doesnt understand the usefulness of collecting big data and analyzing and extracting useful information from it via machine learning
leave this thread pls

>> No.9702089

>>9702071
Look, there's a point where more ads doesn't make me buy more shit, because I already spent it elsewhere.
The actual revenue gains from targeted ads will be fucking nothing. Because that's just money they gained that would have been spent elsewhere.

Anyways, leave this thread please. You're just an AI fanbot, not understanding that any human can do this job with little training, unlike any Machine Learning Algorithm.

What made me certain this was all a fad was the latest NVidia Conference:
Lel, we'll have virtual drivers driving your car through VR.
Basically admitting driverless driving was over.

>> No.9703325

>>9702089
>more ads doesn't make me buy more shit, because I already spent it elsewhere
>revenue gains from targeted ads will be fucking nothing
>tfw you claim that Google is basically a gigantic bubble

>> No.9703841

>>9701934
>Almost all of them fields will get fucked up by insurance companies dropping companies that use them.

driverless cars will almost certainly be a better risk for insurance companies than regular cars. they will insure them, and if they don't, other companies will pop up.

besides advertising, RL applications like driverless cars are going to be a big money maker.

>> No.9703968

Just installed jupyter to use scikit-learn with. Feeling pretty comfy with its shortcuts right now. I just ran my first classifier for text classification. Just following their tutorials at the moment, but my programming experience sure is coming handy.

I think my approach with statistical learning is going to be do first, learn afterwards. For example; my current project right now is classifying facebook posts as political and flagging them or hiding them. I just hit the ground running trying to figure out how to do it, and came across a multitude of methods. I apply each method with some test data and find the most accurate one. Then I learn the statistics underlying each method.
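For the "do first" part: at its core, a baseline text classifier like the ones in those tutorials is just word counts plus Bayes' rule. A dependency-free multinomial naive Bayes sketch with add-one smoothing (toy training data invented for illustration — this is the idea behind scikit-learn's MultinomialNB, not its actual implementation):

```python
import math
from collections import Counter, defaultdict

train = [
    ("the senator proposed a new tax bill", "political"),
    ("parliament debates the election law", "political"),
    ("my cat sat on the keyboard", "other"),
    ("great recipe for banana bread", "other"),
]

# word counts per class, plus document counts for the class priors
word_counts = defaultdict(Counter)
class_counts = Counter()
for text, label in train:
    word_counts[label].update(text.split())
    class_counts[label] += 1

vocab = {w for c in word_counts.values() for w in c}

def predict(text):
    scores = {}
    for label in class_counts:
        score = math.log(class_counts[label] / len(train))  # log prior
        total = sum(word_counts[label].values())
        for tok in text.split():
            if tok not in vocab:
                continue  # ignore words never seen in training
            # add-one (Laplace) smoothed log likelihood
            score += math.log((word_counts[label][tok] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(predict("vote on the tax bill"))  # → political
```

Running a real vectorizer + classifier pipeline in scikit-learn does the same bookkeeping, just with sparse matrices and better smoothing options.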

>> No.9703992

>>9703841
I'll give you an example of a money maker: using cameras to detect what items people bring onto ubers and lyfts

>> No.9704069

>>9703992

this, along with advertising, falls under the umbrella of modeling consumer preferences and behavior, which i'm 100% not interested in.

>> No.9704080

>>9703968
>do first, learn afterwards
this is the most retarded way to learn something
I struggled for 3 months with building a simple app which just displays shit and could have been programmed by a basic pajeet dev in one day, because I was too retarded to learn the basics of Java first

>> No.9704133
File: 36 KB, 443x442, shoya.jpg [View same] [iqdb] [saucenao] [google]
9704133

>>9704080
I've got good programming experience, so I know what I'm doing. When I said I need to learn, I meant I need to learn the underlying math of statistical learning which I think is very important. I would relate it to the analysis of algorithms: knowing big O notation gives you a huge edge over other programmers.

Here is my current workflow:
>Need to accomplish x task
>Look up how to accomplish it from some material
>Understand it, and do it
>Afterwards, look up the underlying math for the techniques I'm using and its caveats (this is what I mean by learn afterwards)

For example: I'm working on classifying text. I look up methods to do it. I learn that first I need to vectorize the text, and then train an SVM. I do this, and run some metrics against it. After completing this, I bust out a piece of paper, jump on wiki or my textbook, and write up how an SVM works, what f1 is, different methods of word vectorization, etc. I look up the documentation of what I just used, take notes, and try out the different things that are available. This would be the learning part.

I don't see how this technique of learning is inefficient.
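The f1 score from that workflow's metrics step, checked by hand against scikit-learn on toy label vectors (made up here for illustration):

```python
# f1 is the harmonic mean of precision and recall; toy predictions for illustration.
from sklearn.metrics import f1_score

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1]

# TP = 3, FP = 0, FN = 1  ->  precision = 1.0, recall = 0.75
print(f1_score(y_true, y_pred))  # 2 * (1.0 * 0.75) / (1.0 + 0.75) ≈ 0.857
```

Working the arithmetic out on paper like this is exactly the "learn afterwards" half of the loop.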

>> No.9704139

>>9704133
but this is not "do first, learn afterwards", genius

>> No.9704151
File: 20 KB, 396x327, 1481183565153.jpg [View same] [iqdb] [saucenao] [google]
9704151

>>9704139
I think my definition of learning is understanding the underlying concepts, while your definition of learning is copying code from some normie's Medium blog post.

>> No.9704157

>>9704151
>the definition of learning is the successful goal of learning

>> No.9704207

>>9701665
>impossible unless you have like 3 PhDs
What the fuck are you talking about?
I'm pretty sure most people don't use tensorflow to write ML programs, and also most people who write ML programs don't have graduate degrees of any sort.

>> No.9704226
File: 66 KB, 800x800, Milperra+Armchair.jpg [View same] [iqdb] [saucenao] [google]
9704226

>ITT

>> No.9704352

>>9704226
>>ITT
literally all of /sci/

>> No.9705878

>>9691560
Damn, these threads started existing on /sci/?
Since when?

>> No.9705908

next ai winter when

>> No.9705973

>>9705878
OP here
since 2 weeks ago
sadly every thread devolved into a "muh ai philosophy" contest
at least this one was a bit better than the previous ones

>> No.9706036

What does /ai/ think of nonparametric ML models?

>> No.9706139

>>9706036
>hurr what if we just assume we know nothing and go by that
parametric is clearly superior and more applicable.

>> No.9706345

>>9706036
My favorite to work with: best generalized models (SVM is pretty good, boosted trees are the fucking GOAT for most applications), easy to just train and forget. I'd say for most applications a boosted tree is going to run circles around most other types of ML models, and unless you need ridiculously good results, it's not worth investing the time into designing the architecture and training an NN for a marginally better result.
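A quick sketch of the "train and forget" claim, pitting a boosted tree against an RBF SVM on a synthetic dataset (an illustration with made-up toy data, not a real benchmark):

```python
# Compare two default-settings models on a nonlinearly separable toy problem.
from sklearn.datasets import make_moons
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Two interleaved half-moons with label noise.
X, y = make_moons(n_samples=400, noise=0.25, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

gbt = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
svm = SVC(kernel="rbf").fit(X_tr, y_tr)

print("boosted tree:", gbt.score(X_te, y_te))
print("rbf svm:     ", svm.score(X_te, y_te))
```

Both typically land in the same accuracy range here with zero tuning, which is the point: on small tabular problems the difference rarely justifies hand-building a neural net.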

>> No.9706389

>>9706139
Can you fuck off? You clearly don't know the first thing about any of the shit you comment on. You're a detriment to any substantial conversation.

>> No.9706399

>>9706389
>he fell for the bait

>> No.9706418
File: 305 KB, 1960x896, Screen Shot 2018-04-29 at 16.21.25.png [View same] [iqdb] [saucenao] [google]
9706418

What do you think about the initiative of Prof. Tom Dietterich to boycott the newly announced closed-access journal, "Nature Machine Intelligence": https://openaccess.engineering.oregonstate.edu/

Will you sign the petition? Why or why not?

>> No.9706464

>>9706418
good. closed-access stuff sucks mad peepee

>> No.9706838

>>9692211
ABSOLUT LUNATIC
https://www.youtube.com/watch?v=8ku9b-fPa1s
>>9684148

>> No.9707432

>his most popular work is literal harry potter science fiction
oh wow

>> No.9708023

>>9706838
kek
this one is still on top though:
https://youtu.be/6hKG5l_TDU8

>> No.9708305

>>9701934
>Almost all of them fields will get fucked up by insurance companies dropping companies that use them.
>Especially driverless cars.
Yeah, insurance companies surely would not take anyone up on an offer of FREE MONEY

>> No.9708310

>>9706345
>What is an SVM kernel
>What is the cost in an SVM

>> No.9708779

>>9697178
>Just how machine learning can help the common man?
Art
Giving scientists, engineers, and maintenance workers some of the last prosperous jobs available to mankind, and preventing their starvation, unlike the rest of humankind as it slowly depletes the planet's finite resources while politicians stall human progress, inadvertently killing themselves and the rest of the starving people, unless someone in the aforementioned fields gets bored enough to play god.
So basically social evolution.
>tfw unlikely

>> No.9708890

Hey Anons

I trained an SVM to differentiate political documents from non-political ones. I've got a good f1 on the test data, but it gets false negatives on small documents (150 characters). Is this because I used large documents to train my model?

I should probably use twitter data rather than news articles as my training data, yeah? What word vectorization methods work best for small documents?

>> No.9708920

>>9708890

how are you encoding the documents?

>> No.9709012
File: 563 KB, 585x1074, confusedanimegirlwithquestionmarksonherhead2.png [View same] [iqdb] [saucenao] [google]
9709012

>>9708920
A count vectorizer then tfidf transformer

>> No.9709029

>>9709012

then you're probably right:

>>9708890
>I should probably use twitter data rather than news articles for my training set data yeah?

>> No.9709117

>>9694015

Wrong. Linear algebra is the most important topic.

What is bullshit is the idea that you need any amount of multivariate calculus

>> No.9709486

To stray away from the usual bullshit:

Does anyone have some good papers/tutorials/courses or whatever on sentiment analysis? I'm interested in using ML on text but I don't really know where to start.
Cheerios
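While waiting for pointers, a zero-dependency baseline to compare the real methods against is a plain lexicon count (the word lists below are made up for illustration, not a published sentiment lexicon):

```python
# Toy lexicon-based sentiment: +1, -1, or 0 from counting word-list hits.
POSITIVE = {"good", "great", "love", "excellent", "happy", "best"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "sad", "worst"}

def sentiment(text: str) -> int:
    """Sign of (positive hits - negative hits) over whitespace-split words."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return (score > 0) - (score < 0)

print(sentiment("i love this great movie"))    # -> 1
print(sentiment("what a terrible awful day"))  # -> -1
```

Any ML approach that can't beat this on your data isn't worth keeping, which makes it a useful sanity check.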

>> No.9710275
File: 101 KB, 684x520, formydream.jpg [View same] [iqdb] [saucenao] [google]
9710275

>>9709486
tell you what anon

wanna start a project, me and you? I'll get it whipped up because I'm bored

>> No.9710296
File: 72 KB, 502x499, yuen.jpg [View same] [iqdb] [saucenao] [google]
9710296

>>9709486
>>9710275
Following this currently:
https://nlp.stanford.edu/courses/cs224n/2009/fp/3.pdf

I'm working on a script to collect twitter data rn

I'm >>9708890; I've already got a script fetching news articles and classifying them as political vs. not political. If you want the repo link I can provide it.

Have you used jupyter before?

>> No.9710444

>>9710275
OP here
this will be the topic of the next AI general

>> No.9710478

>>9710444
https://github.com/yacineMTB/Sentiment-Analysis

Made a repo for it. Going to be contribooting to it fairly soon. The paper has a link to an already-generated data set, but I want to write a script for pulling and cleaning data just to get familiar with the tooling. Before doing any of this I need to stop playing Hearthstone and actually read the paper.

>> No.9710488

>>9697178
Every time you go on the internet, it's ML algorithms that help you find what you're looking for. Self-driving cars are a thing too.