
/sci/ - Science & Math



File: 59 KB, 648x575, drpepperthalia.jpg
No.1783601

Hello, /sci/. Anyone know how trigonometric functions are typically implemented in C libraries? I notice my Windows machine and Google both generate an incorrect number for tan(1.570796326794896), which should be 1614905391523416.2330174501911377442006039392666729414845... but yields 1374823386397210.2, an error of over 14%.

>> No.1783610

the hell... tan(1.5) isn't that much, did you fail precalc or something?

>> No.1783609

nice use of jsMath there buddy

>> No.1783613

I'd fuck that Dr. Pepper

>> No.1783615
File: 41 KB, 800x640, hurr-bj.jpg

>>1783610

>> No.1783617

turn calculator from DEG to RAD you...you...NIGGER!

>> No.1783627

>>1783610
actually, he's correct: tan(1.5) = 14.1014
but he's an idiot too, because >>1783601 is also correct

>> No.1783630
File: 47 KB, 419x333, 1282796622837.jpg

>>1783615
tan(1.5) ≠ 1×10^20 either

>> No.1783638

OP how did you get 161...? Mathematica also gives 137...

>> No.1783661

>>1783638
Wolfram Alpha gives 161

>> No.1783739

>>1783661
My library, which doesn't use floating point, agrees with Wolfram Alpha. It was this difference that made me wonder what's going on in floating-point libraries.

This calculation is close to tan(π/2), which is undefined (the limit is +infinity when approaching from values less than π/2). My guess is that the floating-point libraries are using something like a truncated Taylor or Maclaurin series with interpolation. This would yield highly erroneous values near poles.
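To illustrate the guess, here's a toy evaluator (entirely made up, not what any real libm does) that computes tan as a ratio of truncated Maclaurin series. Near the pole the truncation error in the cosine series swamps the true cosine, which is only about 7e-16 there:

#include <math.h>
#include <stdio.h>

/* Toy tan(x) = sin(x)/cos(x) from truncated Maclaurin series.
   Purely illustrative; not how any real libm works. */
static double tan_series(double x, int terms)
{
    double s = 0.0, c = 0.0, term;
    int n;

    term = x;                        /* sin: x - x^3/3! + x^5/5! - ... */
    for (n = 1; n <= terms; n++) {
        s += term;
        term *= -x * x / ((2 * n) * (2 * n + 1));
    }
    term = 1.0;                      /* cos: 1 - x^2/2! + x^4/4! - ... */
    for (n = 1; n <= terms; n++) {
        c += term;
        term *= -x * x / ((2 * n - 1) * (2 * n));
    }
    return s / c;
}

int main(void)
{
    double x = 1.570796326794896;
    printf("10 terms: %.1f\n", tan_series(x, 10));
    printf("20 terms: %.1f\n", tan_series(x, 20));
    printf("libm:     %.1f\n", tan(x));
    return 0;
}

With 10 terms the cosine's truncation error (about 3e-15) is several times larger than the cosine itself, so the quotient is garbage; with 20 terms you're left with double rounding error, which is still comparable to the cosine near the pole.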

>> No.1783759

Python also gives this 1.37×10^15 result.

>> No.1783776

>>1783601
> Anyone know how trigonometric functions are typically implemented in C libraries?

If the CPU has built-in trig instructions, the library will probably just use that. In fact, the "function" is probably a directive to the compiler to just use the CPU instruction.

All Intel Pentium CPUs have a built-in FPU with a "tan" instruction.

One thing to bear in mind: Intel's floating-point is 80-bit by default, but a C "double" is only 64 bits. This means that whether a floating-point value is stored in RAM or kept in a register affects its value. If you're calculating tan(x) where x is close to pi/2, a minuscule difference in x can have a huge impact upon the result.
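You can see that sensitivity directly: the derivative of tan is 1/cos², which near pi/2 is about tan(x)², so one ulp of x (about 2.2e-16 here) moves tan(x) by something on the order of 10^14. A quick sketch using C99's nextafter() to get the adjacent double:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double x = 1.570796326794896;
    double x_up = nextafter(x, 2.0);   /* the next representable double */

    printf("tan(%.17g) = %.1f\n", x, tan(x));
    printf("tan(%.17g) = %.1f\n", x_up, tan(x_up));
    /* the amplification factor d(tan x)/dx = 1/cos^2(x): */
    printf("1/cos^2(x) = %.3g\n", 1.0 / (cos(x) * cos(x)));
    return 0;
}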

>> No.1783785

>>1783776
Ah, interesting. I figured it was a rote library call. I don't know much about modern processor design. Thanks.

>> No.1783817
File: 44 KB, 787x523, 1256113119955.jpg

Try compiling with strict IEEE flags. Sometimes your compiler will have -ffast-math (or whatever the equivalent is) turned on by default, or as part of a -Ox optimization level, because it's very rare that anyone needs that kind of precision. If you use strict IEEE, it might force the machine to evaluate the function more rigorously.
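Something like this makes an easy test (gcc shown; the volatile just stops the compiler from folding the call away at compile time, and whether the two builds actually differ depends on your compiler and libm):

/* Build both ways and compare the output:
     gcc -O2 -ffast-math tantest.c -o tantest -lm
     gcc -O2 tantest.c -o tantest -lm            */
#include <math.h>
#include <stdio.h>

int main(void)
{
    volatile double x = 1.570796326794896;
    printf("tan(%.17g) = %.1f\n", x, tan(x));
    return 0;
}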

>> No.1783837

>>1783817
That might be right; I'll have to experiment. The calculator app that comes with Windows 7 gives:
1614905391523416.2330174502176542 (Windows 7 calc)
1614905391523416.2330174501911377 (correct value)

which is much better.

>> No.1783870

>>1783817
This may help. OP, your problem isn't the trig functions so much as floating-point numbers in general.

A regular "float" variable is 32 bits, of which only 23 are used for the mantissa. That works out to about 7 decimal digits of precision. You're using numbers with ridiculous precision, and the error adds up with every operation you perform.

You could try using "doubles" but that only buys you another 7 digits or so. Your best bet for something like this is some sort of arbitrary precision math library. There are plenty of these for C++.
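For instance, with MPFR (a C arbitrary-precision library, also usable from C++) it's only a few lines; this sketch uses an arbitrary 256 bits of precision:

#include <stdio.h>
#include <mpfr.h>   /* link with -lmpfr -lgmp */

int main(void)
{
    mpfr_t x, t;
    mpfr_init2(x, 256);   /* 256-bit mantissa instead of double's 53 */
    mpfr_init2(t, 256);

    mpfr_set_str(x, "1.570796326794896", 10, MPFR_RNDN);
    mpfr_tan(t, x, MPFR_RNDN);
    mpfr_printf("tan = %.40Rf\n", t);

    mpfr_clear(x);
    mpfr_clear(t);
    return 0;
}

That should reproduce the 1614905391523416.233... value rather than the float-rounded one, since at 256 bits the input decimal is represented far more closely than any error that matters here.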

>> No.1783888
File: 643 KB, 300x203, animated.gif

>>1783870
>doubles

>> No.1783896
File: 269 KB, 531x397, 1284438386513.png

>>1783888
>888

>> No.1783911

>>1783870
Oh, it's more of an issue with testing the library I wrote. I was testing against floating-point calculations to about ten digits (among other tests), but when I moved on to some edge cases, this came up. I'm confident in comparing against Wolfram Alpha, but I was curious as to the source of this error.

I do understand the perils of floating point; I avoid it like the plague if I can, especially in statistical work, where small errors accumulate quickly. This was just a curiosity, as a direct function call has no accumulation of errors: the function just yields the incorrect value directly. (Which, incidentally, I expected.)

>> No.1784626

Perhaps this helps:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double x1 = 1.570796326794896;
    double x0 = x1, x2 = x1;

    /* Step to the adjacent doubles by decrementing/incrementing the
       bit pattern (valid for positive finite doubles). */
    (*(long long *)&x0)--;
    (*(long long *)&x2)++;

    printf("tan(%.17g) = %.1f\n", x0, tan(x0));
    printf("tan(%.17g) = %.1f\n", x1, tan(x1));
    printf("tan(%.17g) = %.1f\n", x2, tan(x2));
    return 0;
}

tan(1.5707963267948957) = 1053287125572247.2
tan(1.5707963267948959) = 1374827208772837.8
tan(1.5707963267948961) = 1978945885716843.0

Those are the results for adjacent "double" (64-bit floating point, 53-bit mantissa) arguments. Note that the value you give isn't exactly representable; it's between the second and third values.

Using "long double" (80-bit, 64-bit mantissa) will give greater precision, but you need to use tanl() (which is C99) rather than tan() to make use of it.

>> No.1784643
File: 8 KB, 251x251, why_I don't even.jpg

>>1783888

>> No.1784645

>>1783888
why.jpg

>> No.1785015

>>1784626
It's really interesting how incorrect those are.

>> No.1785040
File: 7 KB, 300x300, 21iPmIJs-kL._SL500_AA300_.jpg

A dash of Angostura Bitters goes really well with Dr. Pepper on ice.

>> No.1785072
File: 374 KB, 638x878, dr-pepper-kiss.jpg

>>1785040

>> No.1785084

The problem is likely that it is dividing by a number close to zero. That number is probably calculated by taking the difference of two numbers that are close to each other; with a fixed amount of precision, that operation ends up being rather inaccurate.

Solving arctan(x) - 1.570796326794896 = 0 to an appropriate level of accuracy using, say, Newton's method or the bisection method is probably a good plan. You can code up a script to do this in about two minutes in Python.
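Roughly like this bisection sketch (shown in C to match the code above; the bracket [1e15, 2e15] is just an eyeballed starting interval, and it's plain doubles throughout, so atan()'s own rounding limits how well the comparison can resolve the root):

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double target = 1.570796326794896;
    double lo = 1.0e15, hi = 2.0e15;   /* eyeballed bracket around the root */
    int i;

    /* f(x) = atan(x) - target is increasing, so bisect on its sign. */
    for (i = 0; i < 100; i++) {
        double mid = 0.5 * (lo + hi);
        if (atan(mid) < target)
            lo = mid;
        else
            hi = mid;
    }
    printf("tan(%.17g) ~ %.1f\n", target, 0.5 * (lo + hi));
    return 0;
}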

>> No.1785104

>>1785084
aww, crap, I'm failing again by subtracting two very similar numbers and expecting meaningful results; small errors in atan are going to cause big problems.

>> No.1785150

>>1785084
This seems to be accurate: both 1/cos(1.5707963267948961) and tan(1.5707963267948961) give the same answer, because the sine at that level of precision has already been rounded to 1.0.

In fact, the sine of 1.5707963267948961 is
0.9999999999999999999999999999998651994172...

Converting this to binary, you need over 103 bits before you encounter the first 0, hopelessly out of reach of a double's 53-bit mantissa.
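Easy to check in a couple of lines (double precision):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double x = 1.5707963267948961;

    printf("sin(x)   = %.17g\n", sin(x));        /* prints 1: already rounded up */
    printf("tan(x)   = %.1f\n", tan(x));
    printf("1/cos(x) = %.1f\n", 1.0 / cos(x));   /* compare with tan(x) */
    return 0;
}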

>> No.1785174
File: 24 KB, 500x328, fig1.jpg

Fucking floating fucking point.

>> No.1785183

>>1785174
Kind of an unfair representation, because even the computable real numbers are only countably many, but they do fare a bit better than floating point.