
/sci/ - Science & Math



File: 4 KB, 362x282, robust-regression.png
2919706 No.2919706 [Reply] [Original]

Answering statistics questions for about 30 minutes. If you aren't in grad school or something, then I should be able to give you a hand.

>> No.2919714

If I put $2,500 in the bank at 7% interest, how much money will I have in 1,000 years?

>> No.2919718

Why do you use n-1 when calculating the standard deviation of a sample, but n when calculating the standard deviation of a population?

>> No.2919731

>>2919714
That has nothing to do with statistics.

>>2919718
You lose a degree of freedom because you are estimating a parameter (the mean) from the same data.
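The n vs. n-1 distinction can be seen directly with the standard library (the data here is just a made-up example):

```python
import statistics

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

# Population SD: divide by n (no parameter is being estimated).
pop_sd = statistics.pstdev(data)

# Sample SD: divide by n-1, because the sample mean was estimated
# from the same data, costing one degree of freedom.
samp_sd = statistics.stdev(data)
```

The sample version is always slightly larger, compensating for the fact that deviations from the estimated mean are smaller on average than deviations from the true mean.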

>> No.2919752

Is there a fire-and-forget value for significance I can access in Mathematica? It gives me things like t and p values, but I can't see those telling me much about significance on their own.
Is there some version of the chi-square or the correlation coefficient that doesn't need additional calculation to be interpretable?

>> No.2919754

>>2919714
2500(1.07)^1000
= $6.04947511 × 10^32
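That figure is just naive annual compounding; a one-liner to check it:

```python
# Compound interest: principal * (1 + rate) ** years
principal, rate, years = 2500, 0.07, 1000
value = principal * (1 + rate) ** years  # on the order of 10**32
```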

>> No.2919756

Why do statisticfags use the standard deviation instead of the average absolute deviation?

>> No.2919763

>>2919731
What is it then? Accounting?

>> No.2919773

>>2919756
From both a computational and an analytical standpoint, it's a lot easier to work with squares than with absolute values. The convention mostly dates back to the days of yore, when the absolute value wreaked too much havoc on computation and derivations (it isn't differentiable at zero), and it has since just sort of become standard. The mean absolute deviation is becoming more popular now that technology has evolved, but it will probably never overtake the squared deviation, just because it is so much more of a pain to work with.

>>2919763
Probably some random pre-cal problem.
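Both measures of spread are easy to compute side by side; a sketch with made-up data containing one outlier, which the squared version weights much more heavily:

```python
import statistics

data = [1.0, 2.0, 2.0, 3.0, 12.0]
mean = statistics.fmean(data)

# Standard deviation: squares each deviation, so the outlier dominates.
sd = statistics.pstdev(data)

# Mean absolute deviation: weights each deviation linearly.
mad = statistics.fmean(abs(x - mean) for x in data)
```

For any dataset the MAD is never larger than the standard deviation, and the gap widens as outliers grow.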

>> No.2919803

Which is better and why:
the pooled-variance method or the Welch method?


>> No.2919827

>>2919803
I've never encountered the Welch method, sorry.

>> No.2919845

>>2919827
welch: se = sqrt(s_1^2/n_1 + s_2^2/n_2)
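For reference, the Welch standard error divides each sample variance by its own sample size, while the pooled version assumes the two variances are equal. A sketch comparing the two (the summary statistics are made up):

```python
import math

# Hypothetical two-sample summary statistics.
s1, n1 = 2.5, 30   # sd and size of sample 1
s2, n2 = 4.0, 12   # sd and size of sample 2

# Welch standard error: tolerates unequal variances and sizes.
se_welch = math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)

# Pooled standard error: averages the variances, weighted by df,
# under the assumption that both populations share one variance.
sp2 = ((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)
se_pooled = math.sqrt(sp2 * (1 / n1 + 1 / n2))
```

When the smaller sample has the larger variance, as here, pooling understates the standard error, which is exactly the situation where Welch is preferred.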

>> No.2919854

How is the Wilcoxon test any better than the t-test or ANOVA?
And how do you guys come up with such tests ... what is the underlying theory in designing a new test?

>> No.2919856

Your linear regression does not amuse me.

>> No.2919921

>>2919845
Try to see if they are both unbiased estimators of the variance, then look at their mean squared error.
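The unbiasedness check suggested here can be done by simulation: average many independent estimates and see whether the average approaches the true value (parameters below are made up):

```python
import random
import statistics

random.seed(0)
TRUE_VAR = 4.0  # variance of N(0, 2^2)

# Average many sample variances; an unbiased estimator's
# long-run average should sit at the true variance.
estimates = []
for _ in range(20000):
    xs = [random.gauss(0.0, 2.0) for _ in range(10)]
    estimates.append(statistics.variance(xs))  # n-1 denominator

mean_est = statistics.fmean(estimates)  # should be close to 4.0
```

Swapping in the n-denominator estimator (`statistics.pvariance`) makes the average settle near (n-1)/n times the true variance instead, exposing its bias.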

>>2919854
>How is the Wilcoxon test any better than the t-test or ANOVA?
The Wilcoxon isn't really any better or worse than the t-test (I know the t-test is UMP unbiased, and I am fairly sure the Wilcoxon is too), but ANOVA is better than both if you are making more than two comparisons.

>And how do you guys come up with such tests ... what is the underlying theory in designing a new test?
There are a few ways this happens. At times it can be very ad hoc, but there are also routes with a lot of rigor to them. The most common rigorous way to get a new test is the generalized likelihood ratio test: you take the ratio of the likelihood maximized under the null hypothesis to the likelihood maximized over the whole parameter space, and reject when that ratio is small. This is how you get the basic t-test, for example, though the methods for multiple t-tests are a little more ad hoc (if I recall correctly).
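The likelihood-ratio construction can be sketched for the one-sample normal case with unknown variance, where the ratio turns out to be a monotone function of the t statistic (the data below is made up; the λ-to-t identity is a standard textbook result):

```python
import math
import statistics

def glrt_lambda(xs, mu0):
    """Generalized likelihood ratio for H0: mu = mu0, normal data,
    variance unknown: lambda = sup_{H0} L / sup L."""
    n = len(xs)
    xbar = statistics.fmean(xs)
    sig2_hat = sum((x - xbar) ** 2 for x in xs) / n  # MLE, full model
    sig2_0 = sum((x - mu0) ** 2 for x in xs) / n     # MLE under H0
    return (sig2_hat / sig2_0) ** (n / 2)

xs = [4.9, 5.3, 4.7, 5.1, 5.6, 4.8]
mu0 = 5.0
lam = glrt_lambda(xs, mu0)

# The same lambda expressed through the t statistic:
# lambda = (1 + t^2/(n-1))^(-n/2), so small lambda <=> large |t|.
n = len(xs)
t = (statistics.fmean(xs) - mu0) / (statistics.stdev(xs) / math.sqrt(n))
lam_from_t = (1 + t ** 2 / (n - 1)) ** (-n / 2)
```

Because λ decreases exactly as t² grows, rejecting for small λ is the same rule as rejecting for large |t|, which is why the GLRT machinery reproduces the familiar t-test.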