
/sci/ - Science & Math



File: 222 KB, 1000x666, 1*1mpE6fsq5LNxH31xeTWi5w.jpg
No.12730797

This century is fucking retarded.

I cannot fucking believe what these brainlets are doing. They are building production-level systems that can kill people on top of a technology that amounts to pure EMPIRICAL TINKERING.

Holy fucking shit. We in numerics spend fucking ages using all sorts of functional analysis voodoo to painstakingly prove that every numerical method is guaranteed to behave nicely, and these fucking lunatics just grab a bunch of code, glue it together, throw a dataset at it, observe whether the model "works", and if it does, put it in your self-driving car.

Holy fucking wow. What the fuck is this even.
Literally what the fuck, lol.

Just so you know, there is literally ZERO math behind this shit. There aren't even people trying to build a mathematical foundation. It's literally all experiments with datasets.

I just, I don't even.

>> No.12730847

BTW, why does the ML hype seem not to have infected /sci/?

In my university literally every department of literally every faculty offers at least one ML class now.
Even fucking lawyers and dentists are learning ML now.

>> No.12730920

>>12730797
mathfag BTFO by practical application. Go draw lines on a Möbius strip or something, loser

>> No.12730930

>>12730920
Point is, people are putting their lives in the hands of a total black box, and the only guarantee they have that it will not kill them is that it was tested on a dataset.
That's insane to me.

>> No.12730944

>>12730930
Small-minded mathfag that you are, you only see the world as numbers. People put blind faith in anything and everything: QAnon, air travel, religion, even crossing the street, just to name a few.

>> No.12730948

>>12730847
>BTW, why does the ML hype seem not to have infected /sci/?
because people here are either paranoid schizos or larping dropouts. I'm personally very skeptical about this ML hype. I understand that these days you need to process a lot of data and ML can help with that. But at the same time, a lot of this data is absolute trash, created simply because people are no longer efficient about memory space. Programs created in the 70s and 80s were masterpieces of efficient memory allocation. The code was tight, efficient, and thought out. Today, when every normie uses software for every single aspect of life and memory storage is less of an issue, you get shitty pajeet code that hogs resources just because it's intrinsically inefficient. Just think about how old iPhones are pretty much impossible to use with modern iOS software. This is not because of power limitations, but because there's just so much garbage added with every patch.
I think ML is a Pandora's box: with ML becoming more and more prevalent, creating clever and intelligible code will become less and less important. It will devolve into literal 40k mechanicus levels of retardation, where people make a data sacrifice to a computer and pray to Omnissiah for something useful to come out of it.

>> No.12730952
File: 709 KB, 2048x2048, do i look cute in these nodes.jpg

>>12730797
That's why they're hiring experts in AI Ethics. Not because they want to make the system safe or ethical but because they need a Kafka layer to make it appear they're doing something to prevent negative outcomes. When bad things happen, they point to their racially and ethnically diverse Board of Ethics for AI/ML and say they tried really hard to keep this type of thing from happening but it's just the nature of progress. You're not racist enough to disagree with an expert in AI Ethics, are you? You're not going to stand in the way of progress, are you?

>> No.12730955

>>12730930
When you drive a car you are putting your faith in the black box of every other driver's brain.

>> No.12730964

>>12730948
We're going to end up with ML therapists using talk therapy to understand why some net seemingly randomly decided to kill a dozen children and to help the net understand why it shouldn't do that again.

>> No.12730969

>>12730955
That's a good reason to make getting a driver's license more difficult.

>> No.12730971

>>12730955
That's an entirely different situation. We know how humans function on the road. You can do empirical studies of that. You just physically can't interpret what the fuck ML does, only how it does it. A human brain didn't evolve to solve linear algebra in 10^10 dimensions.

>> No.12730980

>>12730952
i love how chicks like that make their w's look like asses/tits

>> No.12730987

>>12730980
DAMMIT! Cannot unsee that now.

>> No.12730989

>>12730980
anon, I... that's an omega...

>> No.12730999

>>12730971
>implying you can't do empirical studies on the behavior of self driving cars
>implying you know how the human brain works

>> No.12731010

>>12730955
Humans are self-conscious so they can actually think and react intelligently.

The self driving model is not doing any high level thinking like that.
If it does not get data that is very similar to whatever it was trained with, it will just fail catastrophically.

>> No.12731016

>>12730999
>human behavior in a set environment with a defined set of rules
>needing to know how the ENTIRETY of the human brain works to figure that out
you should be hanged, drawn, and quartered for wasting these trips on such a low IQ post.

>> No.12731021

>>12730989
you spell "weights" with an omega?

>> No.12731036
File: 51 KB, 832x1000, gigasweater.jpg

>>12731021
why yes

>> No.12731056

>>12730971
>We know how humans function on the road
Yes. Terribly.

>> No.12731068

>>12730980
Her w's are clearly not autophysiological.

>> No.12731081

>>12731056
You sound like a TSLA holder.

>> No.12731096

>>12730797
Explain to me how a human operator doing those tasks is any less empirical. I'll wait.

>> No.12731103

>>12730797
it's the superglue solution of the future. shhh, don't let the normies know. but even if they did they wouldn't care, they would convince themselves that "they know what they're doing", lawl.

>> No.12731111

>>12731096
Humans operate at a much higher level, which makes them more robust, so they fail softly.

>> No.12731115

>>12731111
How do you calculate the failure rate of an individual human (the individual, not in aggregate)? What if they show up to work drunk?

>> No.12731124

>>12730797
MATHcels absolutely, irrefutably, indubitably BTFO by ENGINEER CHADS
Keep seething while we keep solving real problems, chuds

>> No.12731151

>>12731124
It's all hype anyway.
It's easy to throw compute power at a model and solve a toy problem where failure is not an issue.
When failure becomes expensive, then suddenly a model based on magic voodoo is not good enough anymore.

>> No.12731156 [DELETED] 
File: 885 KB, 1080x2340, Screenshot_20201117_085035_com.facebook.orca.jpg

>>12730797
Tiring yourself out versus tiring out the universal audience known as attention

>> No.12731437

>>12730797
Heuristics are stronger than formulas. That's why we have religion and culture. Get over it, faggot.

>> No.12732289

>>12731437
This. Several problems that are "NP-complete" and "hopeless" are actually solvable in 99% of practical cases using heuristics.
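To make that concrete, here's a minimal sketch (my own toy example in plain Python, not from any paper): vertex cover is NP-complete in the worst case, but a dumb greedy pass that grabs both endpoints of any uncovered edge is provably within 2x of optimal, which is plenty in practice.

# Greedy 2-approximation for vertex cover: NP-complete in the worst case,
# but this cheap heuristic is never worse than twice the optimal cover.
def greedy_vertex_cover(edges):
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            # edge not yet covered: take both endpoints
            cover.update((u, v))
    return cover

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
print(greedy_vertex_cover(edges))  # a valid cover, at most 2x optimal

Worst-case hardness says nothing about how well cheap heuristics do on the inputs you actually see.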

>> No.12732323

>>12730980
I hate how grills' default font is worse than Comic Sans

>> No.12732350

>>12730797
>Just so you know, there is literally ZERO math behind this shit
Translation:
>I never bothered to look anything up so I'm fucking upset!
Sore loser.

>> No.12732357

>>12732289
I don't think you appreciate the subtle difference here.
In complexity theory you have a deterministic algorithm and you consider the whole input space.
Then you prove that "hopeless" problems actually behave nicely most of the time.
There is actual math behind this.

With deep neural networks it is very different. You have a black box trained on a dataset that is very, very small compared to the actual input space. And there is no math at all guaranteeing anything.

There is no guarantee that, given a perfect dataset, you get a perfect prediction, or even a good enough prediction. And of course the dataset is very far from perfect.

>> No.12732359

>>12732350
There is no math.

>> No.12732397

>>12730797
It just werks bruh

>> No.12732423

If you think that's bad, just wait until it starts going into autonomous military vehicles. They already released results of ML-piloted airplanes absolutely ruining meatbag-controlled planes in simulations.

>Oh you can't sustain a 9g turn? Lol

>> No.12732428

>>12731151
It may be hype, but ML is here to stay. A lot of technologies we use today depend on ML models, and ML is overall a nice tool. The problems OP is mentioning are well known in ML research and there are already efforts to solve them (see explainable AI or alternatives to classical DNNs).

>> No.12732540

>>12730969
People do train and test these algorithms before shipping them, you know.

>> No.12732609

>>12732540
You cannot really verify that the model works.
All you can do is verify that it behaves nicely when you input your dataset.
But there is not even a way to quantify what that means in real life.

There is not even a way to detect when the model is going to be unreliable. So you cannot even program the model to act only when it recognizes the input, because you cannot tell when it recognizes the input and when it does not.
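To illustrate that last point, a minimal sketch (my own toy example, assuming scikit-learn is installed): a classifier trained on digit images will often keep emitting high-confidence predictions on pure noise, so the model's own confidence is not a usable "I recognize this input" signal.

# Toy illustration: softmax confidence is not a reliability signal.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X, y)

real = model.predict_proba(X[:100]).max(axis=1)             # in-distribution
noise = np.random.default_rng(0).uniform(0, 16, (100, 64))  # pure garbage
fake = model.predict_proba(noise).max(axis=1)               # out-of-distribution

print(f"mean confidence on real digits: {real.mean():.2f}")
print(f"mean confidence on random noise: {fake.mean():.2f}")  # often still high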

>> No.12732623

>>12730944
That's literally not the point he is making. For all intents and purposes, the machine could be making an assumption that because a blue jay flew in front of it, it must always turn left. Granted, specific hardware and dedicated software pieces are properly compartmentalized so that specific jobs run on specific cues... but ONE mistake labeling a compartment and that component will mess up. Better hope it's a blinker.

>> No.12732765

>>12731096
This is a very good point. You have to remember, humans are imperfect drivers. At any given time, a portion of drivers will be drunk, high, tired, or absent-minded in a way that a machine will not be.

>> No.12732791

>>12730930
If they can statistically prove that it is safe I don't really care if we don't understand the inner workings of neural networks.

>> No.12732868

>>12732609
>There is not even a way to detect when the model is going to be unreliable.
Not that dude, but yes there is; applicability domain, for example, is an easy test we include with all our models that determines whether a new input is represented well enough in the training set for the predictions to be accurate (there are a few more metrics that can be computed as well, like decidability domain). We literally make a decision on whether we can use our models reliably on new data before applying them.
Further, not to out myself too much, but we can easily use external validation to see if the models are good; basically, look at long-term verified success. We know in our field that random screens give ~0.1% hit rates, but if "pre-screened" using a model, the enrichment factor goes up 100-1000x; hit rates become anywhere from 10% to 100% (depending on the number screened vs. where the true positives are). So for example: screen a library of 10,000 with a true confirmation test and get maybe 10 hits. Use the model to pick the top 100 from that library of 10,000, however, and when you test for true positives you will get those same 10 hits, and what we've seen is that they are generally among the top 10 best-scoring proposals (so the first 3 are virtually guaranteed to be true positives).
/sci/ can take its CS 101 and pretend it knows anything at all, but there's a reason ML is getting gobs of fucking money: it's saving companies a fuckton of money in return on investment, and the models are getting better (generative models have come into vogue like... this year in certain areas, and we are just now getting real data suggesting they are going to be amazingly good at what they do).
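(Sanity check on that arithmetic: 10 hits out of 10,000 screened is 0.1%; the same 10 hits out of the model's top 100 is 10%, i.e. 100x enrichment, up to 1000x if every pick confirms.)

For anyone curious what an applicability-domain check can look like, here is a minimal sketch of one common flavor (my own simplified version using a nearest-neighbor distance criterion; the actual method used above may differ): fit a distance threshold on the training set, then refuse to trust predictions on inputs that land too far away from it.

# Simplified applicability-domain check: flag inputs that are too far
# from the training data for the model's predictions to be trusted.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def fit_domain(X_train, k=5, percentile=95):
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_train)
    # mean distance of each training point to its k nearest neighbors (skip self)
    d = nn.kneighbors(X_train)[0][:, 1:].mean(axis=1)
    return nn, np.percentile(d, percentile)   # threshold learned from the training set

def in_domain(nn, threshold, X_new, k=5):
    d = nn.kneighbors(X_new, n_neighbors=k)[0].mean(axis=1)
    return d <= threshold                     # True -> prediction considered usable

rng = np.random.default_rng(0)
X_train = rng.normal(0, 1, (500, 10))
nn, thr = fit_domain(X_train)
print(in_domain(nn, thr, rng.normal(0, 1, (3, 10))))  # near training data: True
print(in_domain(nn, thr, rng.normal(8, 1, (3, 10))))  # far away: False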

>> No.12732879

>>12732623
>For all intents and purposes, the machine could be making an assumption that because a blue jay flew in front of it, it must always turn left.
It will not, as nothing that egregious will ever make it through testing. This idea of "black boxness" has somehow come to mean "you literally can't tell anything about its predictions", and I have no idea why people think that. You can easily, EASILY determine what an ML application will do for any range of inputs, and you can tell without even trying that it certainly wouldn't make up a rule like "turn left if blue jay in front of car".
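A minimal sketch of the kind of probing meant here (toy model and data of my own, assuming scikit-learn; not any vendor's actual test suite): sweep one input across its range and log exactly where the decision changes, which is how a "turn left on blue jay" rule would get caught.

# Toy behavioral probe: sweep one input dimension and report where the
# model's decision flips, instead of treating the model as opaque.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

sweep = np.linspace(-3, 3, 121)
probe = np.column_stack([sweep, np.zeros_like(sweep)])  # vary dim 0, hold dim 1
pred = model.predict(probe)
flips = sweep[1:][np.diff(pred) != 0]
print(f"decision flips at: {flips}")  # one clean boundary, no surprise rules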

>> No.12732930

I can contribute to this thread. I currently work part-time at an AI development company focused on cars. We do the manual labor, like basic bounding-boxing of the cars on the roads, and we do this shit again and again to be paid a little above minimum wage in our country.

>> No.12733002

>>12732791
They cannot prove that it is statistically safe though. Mathematically, they cannot prove shit, because there is literally zero math involved.
It's literally just tinkering on a massive scale.

>>12732868
All you are saying is that you can test it in multiple different ways against your data to check that it behaves nicely.
I am not disputing that.
It is still crazy that you would trust a black box model like that with life and death just because it behaved nicely on all the data that was available to you.

>It will not, as nothing that egregious will ever make it through testing
You won't find out until after the fact. Tesla already has multiple fatalities on its record attributed to such incidents.

>> No.12733030

>>12733002
>All you are saying is that you can test it in multiple different ways against your data to check that it behaves nicely.
You are pretending that "behaves nicely" and "the model works" are two different things; they are one and the same. If you can verify your program "behaves nicely", you are verifying that it works, end of story.
>It is still crazy that you would trust a black box model like that with life and death just because it behaved nicely on all the data that was available to you.
Not at all. In fact, nearly every single thing in life today is controlled behind the scenes by ML replacing more expensive-to-compute algorithms.
And if we know the accident/death rate of human-driven cars is X, and we can show that AI-driven cars reduce the number of accidents/deaths by some factor n, we cannot logically justify why humans should be allowed to drive over ML, even if ML isn't perfect, as long as it's better than error-prone humans.
Humans are shit drivers and make FAR more illogical decisions than an ML algorithm ever could (ML driving would never, for example, stop in the middle of the freeway and back up toward an exit ramp 0.5 miles back while traffic is coming at the car at 70 mph; it would never run red lights on purpose; it would never "drive drunk", since that's literally impossible; it would not brake-check the semi behind it because of road rage; etc. etc.).
Humans make far more illogical errors than the current state-of-the-art ML-driven cars.
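The "show that it reduces deaths by some factor n" step is at least well-posed statistics. A minimal sketch with made-up counts (assuming scipy is available; conditional on the total number of crashes, the AI's share is binomial if the per-mile rates were equal, which is the standard exact test for comparing two rates):

# Comparing two crash rates -- illustrative numbers only, not real data.
from scipy.stats import binomtest

human_crashes, human_miles = 400, 1.0e8
ai_crashes, ai_miles = 25, 1.0e7

p_equal = ai_miles / (ai_miles + human_miles)  # AI's expected crash share if rates were equal
test = binomtest(ai_crashes, ai_crashes + human_crashes,
                 p=p_equal, alternative='less')
print(f"human rate: {human_crashes / human_miles:.2e} crashes per mile")
print(f"AI rate:    {ai_crashes / ai_miles:.2e} crashes per mile")
print(f"p-value (AI rate lower): {test.pvalue:.4f}")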

>> No.12733037

>>12730797
haha math fags btfo

>> No.12733060

>>12732879
>It will not, as nothing that egregious will ever make it through testing
If only he knew how bad companies really are

>> No.12733133

>>12733030
But what makes you think that by the time you expand your self-driving model to the whole real world, you won't end up with a model that is just as unreliable as a human driver?
It might not fail in the same ways a human would, but it would fail in other ways.
For example, current machine perception of the real world is very poor; human perception is certainly much better. Then of course humans have a great fail-safe mechanism: they can actually think on the spot and come up with solutions far beyond what a model trained on any dataset could.

>> No.12733212

>>12733133
>But what makes you think that by the time you expand your self-driving model to the whole real world, you won't...
That's why testing is done in stages, anon. It's not "we tested this and got some good numbers, okay now let's just literally release the fleet of self-driving cars all over America at once".
Self-driving starts in the lab, then moves to closed-course roads with as many varieties of challenges as are thought to be encountered IRL. Then it's released in small real-world areas, at safe speeds. Then it's deployed and tested in less-safe, faster scenarios (and in all of these, a human is there to supervise and take control at any point). Rollouts continue in a step-wise fashion, correcting anything that needs to be corrected along the way. The self-driving has to reach acceptable performance with a human aboard before they take the human away.
>For example, current machine perception of the real world is very poor; human perception is certainly much better.
I don't know what this means, but we have exact data on how badly humans perform. And they perform badly.
>Then of course humans have a great fail-safe mechanism: they can actually think on the spot and come up with solutions far beyond what a model trained on any dataset could.
Incorrect. Humans have limited, forward-only vision that can't see in all directions, and our processing speed is extremely slow; it takes a long time to process information and pass it to our limbs, and even then, in split-second decision scenarios, we often underperform terribly.
Self-driving cars, however, have 360-degree vision, all at once, and can "see" far more than humans can (LIDAR, etc.). They can also make split-second decisions logically. In test scenarios they have literally performed correct dodging maneuvers against threats the human hadn't even seen yet.

>> No.12733377

>>12733030
Would never brake-check a semi, but I love brake-checking angry little thotties in their fucking convertibles.

>> No.12733402
File: 8 KB, 474x106, banach FUCK YOU.jpg

>>12730797
>Just so you know, there is literally ZERO math behind this shit.
FUCK YOU.
FUCK YOU.
FUCK YOU.
I absolutely fucking HATE people like you.

You're the piece of shit who fills your entire paper with meaningless fucking equations and never proves that what you do works. Meanwhile you get paid actual money to do this USELESS FUCKING SHIT, solving your little spastic puzzles, and then what happens?
Months or years later some of those "tinkering monkey coders" figure out that all your beautiful math doesn't amount to shit and does not produce a working machine.
Theoretical scientists should be, every single last one of them, absolutely no exceptions, fired and forced to work in construction or some shit. People like you are absolute cancer.

>> No.12733412

The creators of the nuclear bomb were not sure it would not destroy the whole Earth. You have to accept some risk if you want to be successful.

>> No.12733427
File: 269 KB, 2500x1667, stock.jpg

>Doctor says you need surgery because they diagnosed a cancer.
>But doctor your brain is a black box that is not reducible to first principles! How can I trust your decision making?

>> No.12734334

>>12730797
>the virgin rationalist vs the chad empiricist

>> No.12734675
File: 36 KB, 500x647, 1588870289609.jpg

>>12730797
based and math pilled
>NOOOO MATH PEOPLE ARE USELESS
Yes.

>> No.12734917

>>12733002
Statistical learning theory exists, you retard.
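For the anons who want the pointer rather than the insult, here's the entry-level result (textbook Hoeffding plus a union bound, nothing deep-net specific): for a finite hypothesis class [math]\mathcal{H}[/math] and an i.i.d. sample of size [math]n[/math], with probability at least [math]1-\delta[/math],

[eqn]\forall h \in \mathcal{H}:\quad R(h) \le \hat{R}_n(h) + \sqrt{\frac{\ln|\mathcal{H}| + \ln(2/\delta)}{2n}}[/eqn]

where [math]R[/math] is the true risk and [math]\hat{R}_n[/math] the empirical risk; VC dimension and Rademacher complexity extend this to infinite classes. Fair caveat: at deep-net scale these bounds are usually vacuous.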

>> No.12735086

Neural networks are really just differentiable functions that do piecewise approximations. It's like being scared of Fourier series expansions. That being said, they will still probably lead to humanity's downfall.
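The piecewise claim is easy to check directly. A minimal numpy sketch (my own toy construction): a one-hidden-layer ReLU net on a 1-D input is exactly piecewise linear, with at most one kink per hidden unit.

# A one-hidden-layer ReLU net is literally a piecewise-linear function.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=8), rng.normal(size=8)  # 8 hidden units, 1-D input
W2, b2 = rng.normal(size=8), rng.normal()

def relu_net(x):
    h = np.maximum(0.0, np.outer(x, W1) + b1)    # hidden ReLU activations
    return h @ W2 + b2

# each ReLU kinks at x = -b/w; between consecutive kinks the net is a line
kinks = np.sort(-b1 / W1)
for lo, hi in zip(kinks[:-1], kinks[1:]):
    xs = np.linspace(lo, hi, 5)[1:-1]            # points strictly inside one segment
    assert np.allclose(np.diff(relu_net(xs), n=2), 0.0)  # zero second difference = linear
print(f"piecewise linear with {len(kinks)} kinks, as advertised")

Same spirit as a truncated Fourier series: a sum of simple, well-understood pieces. The anxiety belongs to the training procedure, not the function class.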

>> No.12735204
File: 51 KB, 1500x1060, page_1.jpg

>>12733402
>boo hoo, my efforts can be compressed into a smaller timeframe and all my past effort discarded with the click of a button and a manipulation of hyperparameters!

>> No.12735265

>>12730797
bro medicine is literally "it just werks"

>> No.12735298

>>12733427
Now that you mention it, doctors really are often crapshoots who make catastrophic failures in diagnosis all the time. AI will be way better.

>> No.12735323

>>12733212
Computers NEVER make decisions. I don't know why retards don't get this.

>> No.12735348
File: 40 KB, 437x630, images - 2021-02-21T124658.103.jpg

>>12735298
Consistent results.

AI governance based on all internet porn now?

>GAN-GASM! THE HOT NEW NERD PORN FOR MACHINE LEARNING SPECIALISTS!