
/sci/ - Science & Math



File: 318 KB, 447x380, 9bc5mqq6aebz.png
No.10207104

Hey anons, I've been meaning to ask. How can you differentiate a faked, badly done, or inaccurate study from a real one? So far I've just been checking for control groups, placebo groups, etc.

>> No.10207223

>small sample (especially for control groups, e.g. Gilles-Éric Séralini on GMOs)
>poor sample selection (e.g. Kinsey report)
>no blind experiment (e.g. just about any paranormal phenomenon pseudo-scientific experiment)
>no double blind when possible (e.g. all experiments sponsored by homeopathy labs)
>no or bad randomizing
>inferring causality from correlation (e.g. people living near high tension cables dying younger)
>inferring causality from retrospective study (e.g. Andrew Wakefield on vaccines, though he made other, bigger mistakes in that paper)
>[math]\alpha\;>\;1\,\%[/math] (makes p-hacking too easy, e.g. everything in social sciences)
>author lies about conflicts of interest (e.g. Gilles-Éric Séralini and Andrew Wakefield)
>relies on obsolete shit (e.g. citing Darwin or Pasteur in 2018 like here: https://doi.org/10.1016/j.pbiomolbio.2018.03.004)
>assumes unproven facts/theorems without pointing out they're unproven (e.g. pretty much all of cosmology)
>peer-reviewers are irrelevant (e.g. the previous paper on the Cambrian explosion had no geologist among peer-reviewers, hence it said it happened ~500 billion years ago yet still got published)
>anything prescriptive (science is descriptive; prescriptions don't belong in academia)
>asserts something is reliable with only one study (strong evidence requires either huge-sampled studies or metastudies)
>stays qualitative when quantitative measurements could be made (e.g. "it's higher" without calculating the [math]p[/math]-value)

>> No.10207224

too many authors (>3): likelihood of bullshit increases
too few authors (<3): likelihood of bullshit increases
reading studies is pointless anyway in 99.999% of cases unless you have to cite them

>> No.10207247

>>10207104
one quick way to find quack studies: is it from china?

>> No.10207346

>>10207224
>just enough authors (=3): definitely bullshit trying to pass as legitimate by hiding/adding extra names

>> No.10207354

>>10207104
>>10207223
Everything above is good.

It also helps to know people in your field, which only comes with time. You get a pretty decent sense for who's actually doing real work and who's skeezy after a couple of social interactions at conferences or whatever.