
/sci/ - Science & Math



File: coachmguirk.jpg
No.4343901

Hey guys.

When evaluating their results, what if we required scientists to create two alternative conclusions instead of just one?

Here's why I figure this would work:
- Scientists would be motivated to acknowledge discrepancies in their figures. "Bad" numbers for one model could be "Good" numbers for another.
- Creating multiple solutions to a single problem requires a combination of in-depth analysis and creativity. These are both qualities that should be valued in science to begin with.
- Minimization of bias. Two theories are harder to discredit than one. A scientist will only lose face if both models are deemed unworthy, at which point they deserve to be rejected.
- If both models are "successful", then both are explored regardless of the scientist's preference.
- If one model is significantly lacking, it could be viewed as a lack of understanding of the results or a bias in observation, if not a combination of the two.

Granted, the scientific method works fine as it is, but there are still problems with hubris and bias that arise within the community from time to time. I believe these changes would separate the "good" scientists from the "great" scientists and make in-depth analysis a matter of self-interest rather than a "defense" against peers.

Thoughts, /sci/?

>> No.4343904

No, impossible.
The holy book of science says the scientific method is perfect and cannot be improved.

>> No.4343932

>implying you don't usually have a bunch of different hypotheses and the experiments you do aren't meant to find the best among those

Never done any research, huh?

>> No.4343933

>>4343901
Whatever "faults" you can find in the execution of the scientific method could be found just as easily in your proposed alternative. Simply put, its human nature, not the method itself, that is the flaw.

>> No.4343954

>>4343932
>Implying implications
Yeah, k, bro.

>> No.4343969

>>4343933
>There are always going to be problems in the world, so why even bother trying to fix them?
If the scientific method is "too advanced" for people to use properly, then why don't we use a different method better suited to our flaws?

Or are you saying we should stop trying to adapt to human nature altogether on the grounds that it can never be removed entirely?

>> No.4343973

The problem is that you assume all scientists rigorously follow some sort of universal scientific method, when in reality the scientific method is a pretty loose thing that was defined after the fact.

>> No.4343976

Also, I read that whole post in his voice.

>> No.4344003

>>4343973
I'll agree with you as long as you're admitting that the scientific method is not perfect and can be improved.

>> No.4344068

http://www.wired.com/wiredscience/2010/12/the-truth-wears-off/

>> No.4344377
File: bump.jpg

>> No.4345303

BUMP.

C'mon, /sci/.

>> No.4345333
File: stooges_curly.jpg

>When evaluating their results, what if we required scientists to create two alternative conclusions instead of just one?
We already do this.
>Scientists would be motivated to acknowledge discrepancies in their figures. "Bad" numbers for one model could be "Good" numbers for another.
We already do this.
>Creating multiple solutions to a single problem requires a combination of in-depth analysis and creativity. These are both qualities that should be valued in science to begin with.
We already do this.
>Minimization of bias. Two theories are harder to discredit than one. A scientist will only lose face if both models are deemed unworthy, at which point they deserve to be rejected.
We already do this. Also, who gives a fuck if your hypothesis was wrong? Go make a new one. It's alright, nobody minds.
>If both models are "successful", then both are explored regardless of the scientist's preference.
We already do this.
>If one model is significantly lacking, it could be viewed as a lack of understanding of the results or a bias in observation, if not a combination of the two.
We already do this.


tl;dr we already do this. Multiple hypotheses are not uncommon - you can have as many as you want as long as you back them up and test them all.

>> No.4345413
File: girls are weird.jpg

>>4345333
Oh, great. I accidentally got the attention of the most insufferable prick on this entire board.

Go away, valjean. You're not exactly a "discussion" person. You're more of a "my opinions are facts" kind of guy. Sure, you're right about everything you said. But you completely missed the point I was getting at, and understanding other people's perspectives is kind of your weak point.

You're smart. And I like that. But you're a huge fucking asshole, and that ruins any motivation I have to discuss anything with you. Ever.

>> No.4345440

>>4345413
Sorry for the assholery, UNC just lost to Duke by a buzzer-beater 3 and I'm out for blood.

I'm glad we agree about multiple hypotheses, though. Fortunately, that's a common practice, especially in fields like zoology where results are tough to predict.

>> No.4345511

>>4345440
Alright. I'll roll with it.

Could you explain the difference between a conclusion and a hypothesis for me? I want to be sure that this is more than a semantic issue.

>> No.4345522

>>4345511
I'm trying to only be an asshole in shitty threads nowadays.

A hypothesis is formulated before doing any tests; a conclusion is drawn from the test data and may or may not support the hypothesis.

>> No.4345539
File: 1326068494340.jpg

>>4345333

This. The point of an alternative conclusion is to demonstrate potential deviation, not so much to point out flaws in the findings. When analyzing research conducted via the scientific method, it's assumed that human error could introduce problems with the findings. The alternative conclusion is just a self-awareness of that, so a second one would be redundant.

>> No.4345578

>>4345522
>a conclusion is drawn from the test data and may or may not support the hypothesis.
This is the part that I'm focusing on more than anything else. People aren't territorial over hypotheses; however, they are territorial over their conclusions. Hypotheses are based on the work of others, but conclusions are personal achievements of their own hard work.

It may seem like a minor point, but it's the difference between critically analyzing your own work and analyzing the work of others. People tend to go easier on themselves, and that's when bias and hubris slip into the equation. I'm not saying this is always the case; however, it happens often enough for things like >>4344068 to occur.


As I figure, if every conclusion were perfect, we wouldn't need peer review in the first place. Requiring multiple conclusions would detach a person from their experiment just enough to provide a slightly more objective view. This means not just saying "Here's where I could have gone wrong", but also saying "Here's how else my results could have been interpreted".

If it's not already a common practice, it's something I'd like to see more of. Occam's razor cuts away more than just fat sometimes, and the most obvious conclusions aren't always the most accurate.