Finding the angle between two lines in 2D is easy: find the angle each line makes with the x-axis from its slope, then take the difference. In 3D it is not so obvious, but it can be shown (using the Cosine Rule) that the angle θ between two vectors a and b is given by:
cos θ = (a·b) / (|a||b|)
Unfortunately this gives poor accuracy for angles close to zero; for instance, an angle of 1.00E-7 radians evaluates with an error exceeding 1%, and 1.00E-8 radians evaluates as zero. A similar formula using the sine of the angle:
sin θ = |a×b| / (|a||b|)
has similar problems with angles close to 90 degrees, but combining the two gives:
tan θ = |a×b| / (a·b)
which is accurate for all angles, and since the |a||b| values cancel out, the computation time is similar to that of the other expressions.
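A quick numerical check in Python illustrates the difference at tiny angles (the function names and test values here are mine, not from the spreadsheet; atan2 is used so the formula also handles obtuse angles):

```python
import math

def angle_acos(a, b):
    # cos θ = (a·b) / (|a||b|): loses all precision for tiny angles
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def angle_atan(a, b):
    # tan θ = |a×b| / (a·b); the |a||b| factors cancel out
    cx = (a[1] * b[2] - a[2] * b[1],
          a[2] * b[0] - a[0] * b[2],
          a[0] * b[1] - a[1] * b[0])
    num = math.sqrt(sum(c * c for c in cx))
    den = sum(x * y for x, y in zip(a, b))
    return math.atan2(num, den)

theta = 1e-8
a = (1.0, 0.0, 0.0)
b = (math.cos(theta), math.sin(theta), 0.0)
print(angle_acos(a, b))   # collapses to 0.0: cos(1e-8) rounds to exactly 1.0
print(angle_atan(a, b))   # recovers 1e-8
```

At θ = 1E-8 the dot product rounds to 1.0 in double precision, so the acos version returns zero, while the atan version is unaffected.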
I have added a new User Defined Function (UDF) to the VectorFunc.xlsb spreadsheet, which was described in Dots and Crosses. The revised spreadsheet can be downloaded from the link above, including full open-source code.
The input for the new ANG() function is either two or three points, each of which is defined by a row of 2 or 3 values, for 2D or 3D lines respectively. Each point must have the same number of dimensions. If the third point is omitted the function returns the angle at the origin between the lines to Point 1 and Point 2. If the third point is supplied, the function returns the angle at that point. Angles are returned in radians by default, or in degrees if the optional fourth argument is set to 1. The screenshot below shows examples of each of these options.
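For anyone who wants the same behaviour outside Excel, here is a free Python sketch of the documented interface (the name ang and the boolean degrees flag are my own choices; this is not the VBA source):

```python
import math

def ang(p1, p2, p3=None, degrees=False):
    """Angle at p3 (or at the origin if p3 is omitted) between the
    lines to p1 and p2. A sketch of the documented ANG() behaviour,
    not the actual spreadsheet code. Points are 2D or 3D sequences."""
    if p3 is not None:
        a = [x - v for x, v in zip(p1, p3)]
        b = [x - v for x, v in zip(p2, p3)]
    else:
        a, b = list(p1), list(p2)
    if len(a) == 2:                     # embed 2D vectors in 3D
        a, b = a + [0.0], b + [0.0]
    cx = (a[1] * b[2] - a[2] * b[1],
          a[2] * b[0] - a[0] * b[2],
          a[0] * b[1] - a[1] * b[0])
    theta = math.atan2(math.sqrt(sum(c * c for c in cx)),
                       sum(x * y for x, y in zip(a, b)))
    return math.degrees(theta) if degrees else theta
```

For example, ang((1, 0), (0, 1)) returns π/2, and supplying a vertex as the third point measures the angle there instead of at the origin.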
Unfortunately, I disagree with your conclusion that the ratio of the two different products of vectors yields accurate results. Imho, just the opposite is true: it fails for angles close to zero and close to pi/2 as well. The reason for the inaccurate values is that catastrophic cancellation occurs when summing up the terms in the dot product near 0 deg and in the vector product near 90 deg, respectively. The inaccuracy grows with both the dimension and the length of the vectors involved. The norms of the individual vectors, as well as their product, are much less error prone.
The only means to circumvent these problems is to use higher precision for the intermediate calculations, see e.g. http://www.ti3.tuhh.de/paper/rump/OgRuOi05.pdf (Siegfried Rump has published some other very interesting papers, so you might want to browse his home page. According to my experience, the algorithms published in his freely available papers do not suffer from any “typos”.).
Calculations done in VBA (even VBA 7 on an x64 machine still uses the 64 significand bits provided by the extended-double registers of the x87 FPU!) are usually more accurate than those done in an external DLL (on an x64 machine, VS2010 uses the 53 significand bits provided by the SSE FPU, and there is no way to change that!). By contrast, the FTN95 compiler uses the x87 FPU by default, but it is available in 32-bit form only, and thus cannot be linked to from an x64 Excel VBA UDF.
On the other hand, linking to an external x64 double-double library (DLL) that uses the SSE FPU from within an x64 VBA UDF is very efficient, because the SSE FPU guarantees that all intermediate results are rounded directly to a precision of 53 significand bits. Only the extended exponent range (15 instead of 11 bits) of the x87 FPU is not available when using SSE. Thus, a programmer must take care not to cause an intermediate underflow or overflow.
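The double-double arithmetic Georg describes is built from "error-free transformations" such as Knuth's TwoSum, which depend on exactly the strict round-to-nearest binary64 behaviour he mentions. A minimal sketch in Python (whose floats are IEEE binary64; the function names are mine, and a real library needs more than this):

```python
def two_sum(a, b):
    # Knuth's TwoSum: returns (s, e) with s + e == a + b exactly,
    # assuming round-to-nearest IEEE double arithmetic with no
    # extra intermediate precision
    s = a + b
    v = s - a
    e = (a - (s - v)) + (b - v)
    return s, e

def dd_add(x, y):
    # add two double-double values, each a (hi, lo) pair; a bare
    # sketch only, omitting the renormalisation a full library needs
    s, e = two_sum(x[0], y[0])
    e += x[1] + y[1]
    return two_sum(s, e)

# 1e-17 is far below one ULP of 1.0, yet the (hi, lo) pair keeps it
print(dd_add((1.0, 0.0), (1e-17, 0.0)))  # (1.0, 1e-17)
```

The point is that the low-order term e is computed exactly, so roughly 106 significand bits are carried through the sum.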
Hi Georg, thanks for the comment and links, but I can’t agree that the method I have used is as bad as (or worse than) using the dot product. Doing some checks with angles of the order of 1E-8 radians, the UDF seems to give results correct to 15 significant figures, whereas using ACOS(dot product) has errors of 10%-100%. For angles close to 90 degrees both methods seem to be accurate to 15 significant figures.
Or am I missing something?
Hi Doug, unfortunately, I cannot answer your question in general. It is all a question of how many significant digits of your input data have to make it into your result. For example, if you are dealing with dimensions less than 100 m and need a final accuracy of 1 mm, then you might use your atan solution, because a double can easily hold 10 significant digits. Conversely, if you had to plan a bridge over some Strait-of-Nowhere, and your input data was in UTM format, then you would need to transform these data to a local coordinate system before starting your calculations. This is necessary because the square of a distance that is larger than 1000 km and needs to be specified with an accuracy of 1 mm exceeds the accuracy of the double data type. Obviously, dealing with raw GPS data concerned with distances of 20,000 km or larger is even worse.
On the other hand, as there is neither an ASIN nor an ACOS function even in VBA7, using ATN instead is not too bad an idea, because calculating an expression like 1/sqrt(1-x*x), which occurs when ASIN/ACOS are expressed in terms of ATN, poses its own threats if x is fairly close to 1. I think I’ll set up a spreadsheet on this subject and mail it to you, because I want to try out the latest version of xNumbers anyway…
Imho, the situation here seems to be very similar to that of finding the roots of a quadratic equation x*x+p*x+q=0. The effects of cancellation that occur if p is much larger than q can be avoided in double precision by choosing an appropriate formula, but in case (p/2)^2 nearly equals q, using extended precision for the calculation of q, (p/2)^2, and the discriminant as well is the only means by which accurate results can be obtained.
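Georg's quadratic analogy is easy to demonstrate. A sketch in Python (the example values are mine) of the cancellation-free rearrangement he alludes to, which recovers the small root from the product of the roots instead of subtracting nearly equal quantities:

```python
import math

def roots_naive(p, q):
    # textbook formula for x^2 + p*x + q = 0; when p*p >> q the
    # small root suffers catastrophic cancellation in (-p + d)
    d = math.sqrt(p * p - 4.0 * q)
    return (-p + d) / 2.0, (-p - d) / 2.0

def roots_stable(p, q):
    # compute the large-magnitude root first, then recover the
    # small one from the product of the roots (which equals q)
    d = math.sqrt(p * p - 4.0 * q)
    big = -(p + math.copysign(d, p)) / 2.0
    return q / big, big

p, q = 1e8, 1.0            # true roots: about -1e-8 and -1e8
small_naive, _ = roots_naive(p, q)
small_stable, _ = roots_stable(p, q)
print(small_naive)          # off by roughly 25%
print(small_stable)         # -1e-08
```

As Georg says, this rearrangement only cures the p >> q case; when (p/2)^2 nearly equals q the cancellation moves into the discriminant itself, and no reordering in double precision can fix that.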
Concluding, I think an adaptive strategy is needed: at first, the angle might be calculated using your ATN formula. If it turns out to be well separated from both 0 deg and 90 deg, this value may be used; otherwise, some special treatment is to be applied that accounts for other parameters as well, such as the given input precision and the required output precision. If I remember correctly, Rump mentions XBLAS, which is available from netlib, but I have not yet checked whether this problem is solved therein.
Here is just a brief summary; I will mail the details later this week.
As expected, the quotient given by the norm of the vector product in the numerator and the scalar product in the denominator suffers from cancellation errors for both very small values (cross product close to zero -> vectors nearly (anti-)parallel) and very large values (scalar product close to zero -> vectors nearly perpendicular to each other). Due to the vanishing slope of the arctan function in the regime of large arguments (angles near odd multiples of pi/2), only the cancellation errors occurring in the vector product are propagated into errors of the calculated angle (and linearly at that, because the slope of arctan is 1 near zero). That means needle-like triangles pose a numerical problem. I remember a paper by Kahan on that subject… So extended precision is definitely needed in the calculation of the vector product if the estimated angle is small. The limit depends on the accuracy that is desired in the final value of the angle.
Moreover, when checking the accuracy of the two different kinds of products, it turned out that XL14x64 ARCTAN is a bit more accurate than VBA7x64 ATN, and both seem to limit the error to +/-1 ULP. XL14x64 ARCSIN is a bit worse, and seems to limit the error to +/-2 ULPs. XL14x64 ARCCOS is severely flawed: the error may reach several thousand ULPs, so it should not be used to calculate an angle from a scalar product. Checking the precision of my test data revealed that XL14x64 RAND() is flawed as well; you need to use something like RAND()*(1-1/2^37) in order to randomise the trailing bits.
Because even VBA7x64 is single-threaded, all calculations in VBA feel very slow. Therefore, I will create a spreadsheet that provides all the double-double functions needed to perform basic XBLAS calculations. This will be much more efficient than using VBA, because cell values are definitely of type double.
Fortunately, I found a paper by Kahan addressing cancellation errors in the cross product: http://www.cs.berkeley.edu/~wkahan/MathH110/Cross.pdf . He proposes (page 15ff) to use the vector identity cross(a;b) = cross(a;(b-factor*a)), which is true algebraically regardless of the value of the scalar quantity factor because cross(a;a)==0, in order to “rectangularise” the needle-like parallelogram that arises in the case of nearly (anti-)parallel vectors a and b. Kahan claims that in case norm(cross(a;b)) < norm(a)*norm(b)/10, setting factor = dot(a;b)/(norm(b)^2) does the job. Before introducing this trick, he writes: “Understand that the preconditioning process described hereunder is an Act of Desperation justified only when extra-precise arithmetic is impracticable”. So I’d better continue to work on my double-double spreadsheet…
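That preconditioning is easy to sketch in Python. This is my own translation, not Kahan's code, and I use factor = dot(a,b)/|a|^2, which makes b - factor*a orthogonal to a; since cross(a,a) == 0 the identity holds for any choice of factor:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def norm(v):
    return math.sqrt(dot(v, v))

def preconditioned_cross(a, b):
    # cross(a, b) == cross(a, b - t*a) algebraically for any scalar t.
    # When a and b are nearly (anti-)parallel, subtracting the
    # projection of b onto a "rectangularises" the needle-like
    # parallelogram before the cross product is formed.
    if norm(cross(a, b)) < norm(a) * norm(b) / 10.0:
        t = dot(a, b) / dot(a, a)
        b = [bi - t * ai for ai, bi in zip(a, b)]
    return cross(a, b)
```

As Kahan warns, this is only a palliative: the subtraction b - t*a is itself done in working precision, so it helps but does not replace extra-precise arithmetic.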
Thanks for all your work on this Georg. As soon as I get my current project (3D frame analysis) to a publishable state I’ll work through this and see how it affects my angle finding routine. For my purposes 15 significant figures is more than enough, but if we can improve the precision without too much extra computation, that’s worth doing.
Georg – see my latest post on this topic: https://newtonexcelbach.wordpress.com/2013/01/28/precise-angles-and-xnumbers/
The default calculation method (using ATan(|a×b| / (a·b))) seems to give the same precision as the method given in your link, at least for the examples I have looked at.
There is also a link for the latest version of XNumbers, that you might be interested in.
Pingback: Precise Angles and XNumbers | Newton Excel Bach, not (just) an Excel Blog
Pingback: The angle between two vectors, Python version | Newton Excel Bach, not (just) an Excel Blog