Excel's built-in trigonometric functions omit several functions, and the set provided in VBA is more restricted still. Notably, VBA does not provide an ATan2 function, which is required to define an angle over the full 360 degree range from X, Y coordinates.
The download files below provide the full range of functions, including all inverse functions and hyperbolic functions. The VBA version should work in all versions of Excel from 2000 up, but the Fortran version links to a dll that requires a 32 bit operating system, and does not work in 64 bit versions of Windows.
VBA version: http://interactiveds.com.au/software/TrigFuncsVBA.zip
Fortran version (32 bit only): http://interactiveds.com.au/software/Trig%20Funcs%20dll.zip
Further details of the files are given at Trigonometric Functions in VBA and Trigonometric Functions in VBA – Update (for the VBA version), and at Trigonometric Functions – dll (for the Fortran dll version).
Hi Doug, unfortunately the VBA code starts with a flaw that many trig-function libraries published on the web have in common: Const PI As Double = 3.14159265358979. You may want to write a function get_PI that just returns this value. Harnessing xNumbers functions, you can then use a spreadsheet to compare the result of =dbl2hex(get_PI()) to that of =dbl2hex(PI()). You will find that the XL spreadsheet function PI() returns a number that in big-endian hexadecimal notation reads 40 09 21 FB 54 44 2D 18, whereas the hex output of get_PI ends in “11”. So your value for PI is 7 ULPs off; as e.g. xNumbers can be harnessed to prove, PI() returns the correct double. As XL & VBA are known to limit numerical entries to 15 significant digits, one strategy to overcome this design flaw is to use statements like Const PI As Double = 3.14159265358979 + 3.116E-15 (3.141592653589793116 is the value of the double closest to Pi, correctly rounded to 19 digits). This trick does not work reliably, though; one has to check the hex value of every single constant using tools like xNumbers.
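Georg's hex comparison can be reproduced in any IEEE-754 environment. Here is a minimal Python sketch (Python floats are the same 64-bit doubles as VBA's Double; dbl2hex is re-implemented with struct rather than taken from xNumbers):

```python
import math
import struct

def dbl2hex(x):
    """Big-endian hex of an IEEE-754 double, like xNumbers' dbl2hex."""
    return struct.pack('>d', x).hex()

# The correctly rounded double for pi:
print(dbl2hex(math.pi))                       # 400921fb54442d18

# The 15-digit constant lands 7 ULPs short:
print(dbl2hex(3.14159265358979))              # 400921fb54442d11

# Georg's correction recovers the exact bit pattern:
print(dbl2hex(3.14159265358979 + 3.116e-15))  # 400921fb54442d18
```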
Moreover, catastrophic cancellation may occur in all the statements containing code like “PI – something”. If the value of “something” gets close to Pi, the result may become totally inaccurate, in particular if the value of Pi is flawed. A dear Italian friend of mine has a blog entry on this aspect concerning the tangent function near pi/2: http://carolomeetsbarolo.wordpress.com/2012/07/ . The idea is to provide the values of constants in “double double” precision, and then write code like y = (PI_hi - x) + PI_lo, for example.
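The head/tail trick can be sketched in Python (a stand-in for the VBA code; PI_LO below is the well-known tail of pi beyond the nearest double, rounded to a double itself):

```python
import math

# "Double double" representation of pi: head + tail.
PI_HI = 3.141592653589793       # the double closest to pi
PI_LO = 1.2246467991473532e-16  # pi - PI_HI, rounded to a double

x = PI_HI                       # a value very close to pi

naive = math.pi - x             # catastrophic cancellation: 0.0
better = (PI_HI - x) + PI_LO    # keeps the low-order part: 1.2246...e-16

print(naive, better)
```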
Of course, not only XL&VBA are affected by these problems, but they are much more error prone than other software due to the limit of 15 digits for numerical values that are directly entered by users!
Hi Georg – to get the extra precision for Pi in VBA / Excel, you could also try these:
Const PI As Double = “3.1415926535897932”
As another example, using “1.7976931348623157e+308” returns the maximum double value whereas entering the 15 digit displayed version causes an overflow.
AFAIK, VBA uses all of the digits in internal calculations and only displays values rounded to 15 digits (which means that decimal values like 0.2 do not show small rounding errors in the 16th decimal). I seem to remember in a previous post we concluded that VBA uses subnormal numbers and extended precision for intermediate calculations which actually gives it more accuracy than many other languages. Excel formula calculations on the other hand do sometimes truncate the results of final calculations which can reduce rounding errors in some decimal/currency calculations but may be less precise in other calculations. Provided one is aware of these issues, I don’t really think there is any great cause for concern.
Hi Lori (??), John Beyers, the current maintainer of xNumbers, once told me that VBA uses at most 20 digits when parsing strings to doubles. I proved that statement to be true for a lot of numerical strings, for example the one given in this post: http://www.exploringbinary.com/a-bug-in-the-bigcomp-function-of-david-gays-strtod/ . Fortunately, the quantisation error is limited to 1 ULP if at least 17 significant digits are provided. Another “trick” J.B. proposed to me is to write =”12345678901234567E-16″ when =”1.2345678901234567″ is meant (provided there is no underflow), because in string parsing the character used as the decimal separator may differ from a dot.
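The 17-digit rule, and the failure of 15 digits, is easy to demonstrate; a Python sketch, since Python floats are the same IEEE doubles:

```python
import math

# 15 significant digits lose information: the re-parsed value differs.
fifteen = float('%.15g' % math.pi)
print(fifteen == math.pi)    # False

# 17 significant digits always round-trip a double exactly.
seventeen = float('%.17g' % math.pi)
print(seventeen == math.pi)  # True

# J.B.'s scientific-notation trick sidesteps the locale-dependent
# decimal separator and parses to the same double:
print(float('31415926535897932E-16') == float('3.1415926535897932'))  # True
```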
Whether an input error of 7 ULPs is tolerable or not depends on the user’s requirements. One could try to estimate a quantity called the condition number for the problem under consideration, and then estimate how many significant digits can be lost in the calculations in the worst case. From that, the maximum tolerable input error may be estimated.
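This error budget can be sketched numerically. The following Python illustration is my own example, not from the thread: the relative condition number |x·f′(x)/f(x)| of f(x) = sin(x) near x = π indicates roughly how many significant digits the evaluation can lose.

```python
import math

def condition(f, df, x):
    """Relative condition number |x * f'(x) / f(x)|."""
    return abs(x * df(x) / f(x))

x = 3.14159                  # close to pi, so sin(x) is tiny
cond = condition(math.sin, math.cos, x)
digits_lost = math.log10(cond)

print(cond)         # ~1.2e6
print(digits_lost)  # ~6, i.e. roughly six significant digits lost
```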
You can force XL to drop its forced-round-to-integer strategy by bracketing all expressions in a cell formula even if this was not necessary from a mathematical point of view. Unfortunately, I have not yet found a way to prevent XL from flushing subnormal VBA-results to zero.
I bet very few people are aware of the fact that even the results of “simple” intrinsic functions like “SUM” (not only in XL!!!) can be off by a huge number of ULPs in case the mean of the non-integer data being summed up is close to zero. IMHO, these “unknown unknowns” are really dangerous…
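The effect is easy to trigger. A Python illustration (the data are mine; math.fsum plays the role of an exactly rounded sum):

```python
import math

# Data whose mean is essentially zero: naive left-to-right summation
# absorbs the small terms into the huge ones and loses them entirely.
data = [1.0, 1e100, 1.0, -1e100]

print(sum(data))        # 0.0 -- both 1.0s vanish
print(math.fsum(data))  # 2.0 -- correctly rounded result
```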
Georg – Interesting points. A maximum of 20 decimal places for a numeric string makes sense for storing the 64-bit mantissa in an FPU register. In C++ the _control87 command can be used to override the default PC53 setting with PC64 and it looks like this could also override the denormal flag too in excel dll functions but I haven’t tested this, see: http://support.microsoft.com/kb/263213.
Also, I just noticed that you actually need to use cdbl("3.1415926535897932") for the named value command I posted above, but strangely the max double example doesn’t seem to require the cdbl conversion. As you say, enclosing formulas in parentheses can stop the rounding of formula results, so with a bit of care it’s possible to work around Excel’s truncation behavior, but it’s far from straightforward… Lori
Hi Lori, I think there are two fundamentally different problems: 1. a binary number comprising a known number of bits is to be uniquely represented in decimal notation, as opposed to 2. a decimal string is to be parsed to the closest binary number. As you stated, the solutions of the first problem for binary numbers with 53 and 64 mantissa bits are 53/3.322 ≈ 16 → 17 and 64/3.322 ≈ 19 → 20 digits (3.322 = log2(10)), respectively. On the other hand, there is no rule that allows one to calculate the number of decimal digits needed in order to correctly parse a string of decimal digits to the value of the closest binary number of a given data type, because this number depends on the distance between the value of the string and that of the closest binary number. If the string that is to be parsed corresponds to an exact double (53 bits), 17 digits are sufficient, of course, but if it corresponds to a value close to a tie between two binary numbers, several hundred decimal digits may be needed. In this case, 20 digits just ensure that the parsing error is smaller than 1 ULP.
AFAIK, the 80-bit internal FPU registers you mentioned seem to have reached the end of their lifetime, because the modern SSE2 instruction set does not make any use of them, in favour of high-speed parallel data processing (unfortunately, VBA7 is single-threaded; XL14 is not…). At least in my XL14x64/VBA7, any function calls in a Const statement are prohibited. So what I do is to specify (hexadecimal) string constants that are later converted into numbers in an initialisation routine. I do not trust any parser anymore… especially those by MS, because in an AJAX-related project it turned out that many major browsers get the example from exploringbinary.com right, whereas IE7/8/9 do not.
Of course, it depends on the sensitivity of an algorithm to small errors whether these effects matter or not.
Hi Georg and Lori,
Interesting discussion (as usual).
One peculiarity I discovered (but have not thoroughly tested) is that if you enter in the VBE:
Const PI As Double = 3.1415926535897932
then the editor truncates it to 3.14159265358979, but the constant created has the correct value!
So for instance:
RoughPi = 3.14159265358979
BetterPi = 3.14159265358979
[B8] = RoughPi - BetterPi
(where BetterPi is entered as 3.1415926535897932) will return -3.10862446895044E-15 to cell B8!
What I find even more surprising is that the VBA compiler remembers the extra figures, so you get the same result if you shut down Excel, restart, and then run the routine again!
I’m not suggesting that this is good practice by the way; I think the best way is probably either to use the cdbl method, or just to use Worksheetfunction.Pi.
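Doug's subtraction can be cross-checked in Python (same IEEE doubles; here BetterPi is parsed from the full 17-digit string, which is apparently what the VBE stores despite the truncated display):

```python
rough_pi = 3.14159265358979              # the 15-digit displayed value
better_pi = float('3.1415926535897932')  # the value actually typed in

diff = rough_pi - better_pi
# diff is exactly -7 ULPs of pi, i.e. -7 * 2**-51, which Excel displays
# as -3.10862446895044E-15 in cell B8.
print(diff)
```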
You seem to deeply understand Microsoft’s sense of humour! At least, I can confirm the effects you described. In my XL14x64/VBA7, this trick only works if the code is in a module attached to the XL file in which the constant is used. The extra figures neither make it into an exported *.bas file, nor does re-typing them in an external *.xlam file bring up this effect. I guess XL stores the binary values of VBA constants somewhere in the xlsm file (perhaps even the complete compiled code), and does not re-compile the macro code unless changes have been made.
Having in mind the many constants usually needed for Padé approximants of special functions, for example, it seems to be good programming practice to use either the 17 decimal figures corresponding to the closest double (trailing 312 instead of 324 in the case of pi) or the (big-endian) hexadecimal notation, which makes parsing completely obsolete.
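The hex-constant approach can be sketched like this in Python (struct stands in for the VBA initialisation routine Georg describes; the hex string is the big-endian IEEE-754 bit pattern):

```python
import math
import struct

def hex2dbl(h):
    """Rebuild a double from its big-endian hex bit pattern - no decimal parsing."""
    return struct.unpack('>d', bytes.fromhex(h))[0]

PI = hex2dbl('400921fb54442d18')
print(PI == math.pi)  # True: bit-exact, independent of any decimal parser
```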
Doug – quirky find – interesting! just to complete the hierarchy of VBA data types:
BestPi = cdec("3.1415926535897932384626433833")
Georg – I don’t really know anything about 64-bit architecture, as I am always way behind when it comes to technology, but I agree with your comments about numerical errors in sums. Fixed point can be used for mitigating errors with decimals, or Kahan summation for floating point, but of course all methods have limitations…
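For reference, a minimal Kahan (compensated) summation sketch in Python, with data chosen by me so that naive summation drops the small terms:

```python
def kahan_sum(values):
    """Kahan compensated summation: carries the rounding error forward."""
    total = 0.0
    comp = 0.0   # running compensation for lost low-order bits
    for v in values:
        y = v - comp
        t = total + y
        comp = (t - total) - y
        total = t
    return total

data = [1e16, 1.0, 1.0, -1e16]
print(sum(data))        # 0.0 -- the two 1.0s are absorbed by 1e16
print(kahan_sum(data))  # 2.0 -- the compensation recovers them
```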