
/sci/ - Science & Math



File: 375 KB, 720x1046, thetimestheyareachangin.jpg
No.10494788

>"For too long, many scientists’ careers have been built around the pursuit of a single statistic: p<.05."

>"In many scientific disciplines, that’s the threshold beyond which study results can be declared “statistically significant,” which is often interpreted to mean that it’s unlikely the results were a fluke, a result of random chance."

>"Though this isn’t what it actually means in practice. “Statistical significance” is too often misunderstood — and misused. That’s why a trio of scientists writing in Nature this week are calling “for the entire concept of statistical significance to be abandoned.”"

>"Their biggest argument: “Statistically significant” or “not statistically significant” is too often easily misinterpreted to mean either “the study worked” or “the study did not work.” A “true” effect can sometimes yield a p-value of greater than .05. And we know from recent years that science is rife with false-positive studies that achieved values of less than .05 (read my explainer on the replication crisis in social science for more)."

>"The Nature commentary authors argue that the math is not the problem. Instead, it’s human psychology. Bucketing results into “statistically significant” and “statistically non-significant,” they write, leads to a too black-and-white approach to scrutinizing science."

What does /sci/ think about the proposal?

>> No.10494791

>>10494788
>What does /sci/ think about the proposal?
The problem is p-hacking and people who don't actually know their stats

>> No.10494793

>>10494788
>P-values

Lol... pee values XD

>> No.10494794

I think we already have enough problems with replication as it is; we shouldn't further degrade what's considered acceptable testing

>> No.10494797

>>10494791
Not just that, however.
p < 0.05 is a completely arbitrary cutoff, which means roughly 1 in 20 null studies will find a "result" anyway, even without p-hacking. And that gets compounded by publication bias: only the studies that find a result get published, while the other 19 in 20 are thrown away unless there's a financial reason to publish them
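The 1-in-20 figure is easy to check by simulation. A minimal sketch using only the standard library, with hypothetical parameters: draw both groups from the same distribution (a true null effect), so every "significant" result is a false positive by construction.

```python
import random
from math import sqrt, erfc

def fraction_significant(n_studies=10000, n=30, alpha=0.05, seed=0):
    """Simulate `n_studies` comparisons of two groups drawn from the SAME
    normal distribution (no real effect) and count how often p < alpha."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_studies):
        a = [rng.gauss(0, 1) for _ in range(n)]
        b = [rng.gauss(0, 1) for _ in range(n)]
        z = (sum(a) / n - sum(b) / n) / sqrt(2 / n)  # z-test, known sd = 1
        p = erfc(abs(z) / sqrt(2))                   # two-sided p-value
        hits += p < alpha
    return hits / n_studies

# fraction_significant() comes out close to 0.05: about 1 in 20
# null studies clears the bar with no p-hacking at all.
```

With publication bias filtering out the other 19, the published record over-represents exactly these flukes.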

>> No.10494798
File: 36 KB, 752x763, Psychology Science & Engineering.png

>>10494788
The problem here is what data has been measured/collected. That alone can change the outcome of a study.

>> No.10494802

>>10494788
It sounds like rather than report the findings of the study as a binary result, they want to start treating the public like adults and give them some numbers.

>> No.10494803

>>10494794
Replication also helps eradicate issues i mentioned here >>10494797
For me I wish that there was an incentive to go from the "published" stage to the "replicated" stage, i.e. a study is considered replicated if it can be reliably replicated by at least 3 other researches in a different nation with a different data set

>> No.10494806

>>10494802
The public is retarded and will view it as a binary result anyway
People can't even understand that a 30% increase in risk for a certain type of cancer doesn't mean that you have a 30% chance overall of getting it
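The relative-vs-absolute risk confusion in that last sentence is just arithmetic. A sketch with made-up numbers (the 4% baseline is hypothetical):

```python
baseline_risk = 0.04      # hypothetical lifetime risk of the cancer: 4%
relative_increase = 0.30  # the headline "30% increase in risk"

# The 30% scales the baseline; it is not an absolute probability.
new_risk = baseline_risk * (1 + relative_increase)
# new_risk = 0.052: absolute risk moves from 4% to 5.2%,
# nowhere near a "30% chance overall of getting it".
```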

>> No.10494819

The acceptable p-value cutoff is dependent on the field in question. If CERN released a paper with just p < 0.05, it would be the laughing stock of particle physics for decades.

p < 0.05 doesn't mean it should be considered right; it's a flag indicating that there might be something worth further investigation. It's meant to cut out 95% of pointless bullshit from even being considered.
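For scale, the two thresholds can be compared directly: a p-value is just a normal tail area, so converting sigmas to p is one line of standard library code (a sketch; the five-sigma convention is real, the helper name is mine):

```python
from math import erfc, sqrt

def one_sided_p(sigma):
    """One-sided tail probability of a standard normal beyond `sigma`."""
    return 0.5 * erfc(sigma / sqrt(2))

# Particle physics' five-sigma discovery convention is a vastly stricter
# bar than the p < 0.05 used in many other fields:
#   one_sided_p(5)     -> roughly 3e-7 (about 1 in 3.5 million)
#   one_sided_p(1.645) -> roughly 0.05
```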

>> No.10494838

>>10494788
Why the fuck would you alter science based on some nonsense from a nobody?

>> No.10494941

>>10494788
Don't know much about stats and don't really care, but my stats professor did say in our first lesson:
"If you wanna lie about something, then use statistics."

>> No.10494964
File: 57 KB, 800x540, d41586-019-00857-9_16551622.jpg

>>10494788
I thought pic related made a pretty good point
A fuckton of people will erroneously say the two studies contradict one another
A smaller but still fuckton-level of people will try to say the study with the smaller p-value had a better methodology
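The pic's point can be reproduced with toy numbers: two studies with nearly identical effect estimates can land on opposite sides of p = 0.05 just because one is noisier. A sketch (all estimates and standard errors below are hypothetical):

```python
from math import erfc, sqrt

def two_sided_p(est, se):
    """Two-sided p-value for estimate `est` with standard error `se`."""
    return erfc(abs(est) / se / sqrt(2))

def ci95(est, se):
    """95% confidence interval (normal approximation)."""
    return est - 1.96 * se, est + 1.96 * se

# Two hypothetical studies of the same effect; the second is noisier.
p1 = two_sided_p(0.50, 0.20)   # "significant":      p is about 0.01
p2 = two_sided_p(0.45, 0.30)   # "not significant":  p is about 0.13
# Yet the interval estimates overlap heavily, so the studies are
# entirely compatible with each other, not contradictory:
lo1, hi1 = ci95(0.50, 0.20)
lo2, hi2 = ci95(0.45, 0.30)
```

Nor does the smaller p-value say anything about which methodology was better; it mostly reflects sample size and noise.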

>> No.10495021

>>10494788
800 out of 10+ million scientists around the world is not a statistically significant number

>> No.10495120

Just give confidence intervals
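In the spirit of that suggestion, a basic interval takes only a few lines. A sketch assuming the population sd is known (a z-interval; real analyses with estimated sd would use a t-interval, and the data here is made up):

```python
from math import sqrt

def mean_ci95(xs, sigma):
    """95% confidence interval for the mean of `xs`, population sd `sigma` known."""
    n = len(xs)
    m = sum(xs) / n
    half = 1.96 * sigma / sqrt(n)  # 1.96 = two-sided 95% normal quantile
    return m - half, m + half

# Reporting (low, high) conveys both effect size and precision,
# instead of collapsing everything to the single bit "significant or not".
lo, hi = mean_ci95([1.0, 2.0, 3.0, 4.0], sigma=1.0)  # hypothetical data
```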

>> No.10495350

>>10494788
Statistics, and its respective methods, were invented to have useful ways for private testing and public lying. Are these people really complaining about "misunderstandings" in a magazine? Journalism is based around misunderstandings, exaggerations and dishonesty through sentence structure and phrasing.

Also: where's the statistically relevant sample proving it to be "widely misunderstood"?

>> No.10496313

>>10494788
What kind of scientist outside the social sciences even thinks in black-white terms on statistics like that?

>> No.10496327

>>10494788
even based Motl has a post about this

https://motls.blogspot.com/2019/03/a-strange-letter-against-statistical.html

>> No.10496333

>>10494788
They are right we live in a sunshine state. While others do things the traditional way. I heard they have been able to avoid a whole bunch of situation with lab created viruses. Not meant to kill but just control. And use of frequencies to disrupt animal behavior. I see animals with very human behavior.

>> No.10496339

>>10496333
Perhaps Fee Del Castro was right.

>> No.10496548

>>10494788
>Statistical significance is too often misunderstood
>so let's abandon the entire concept
...and race to the bottom of the intellectual barrel.

>> No.10496628

>>10494788
These mongs had better think about why it's a problem of understanding, rather than just abandoning one of the most widely accepted things in science

>> No.10496692

>>10496313
There are still quite a few experimental biologists who put too much faith in p-values unfortunately.

>> No.10496733

>>10494803
Last time I brought up replicability and reproducibility at one of my grad school meetings (students + advisors + invited researchers), I got leered at and told it's not relevant for Computer Science
-_-

>> No.10497011

>>10494819
>it's a flag indicating that there might be something worth further investigation. It's meant to cut out 95% of pointless bullshit from even being considered.
It's really, really not for that, and it would do a terrible job of accomplishing that if it was

>> No.10497019

>>10495021
ladies and gentlemen, anon who doesn't understand p-values or statistics

>> No.10497513

Statistical Significance is an oppressive, patriarchal measure; we need to replace it with Emotional Significance.

>> No.10497646
File: 37 KB, 586x578, 1507428132684.png

>>10496548
>>10497513
great responses, guys.