Microsoft.DirectX.Vector3.Normalize() mismatch

There are two ways to normalize a Vector3 object: by calling Vector3.Normalize(), and by normalizing from scratch:

using System.Diagnostics;   // Debug
using Microsoft.DirectX;    // Vector3

class Tester {
    // Normalize "from scratch": divide each component by the vector's length.
    static Vector3 NormalizeVector(Vector3 v)
    {
        float l = v.Length();
        return new Vector3(v.X / l, v.Y / l, v.Z / l);
    }

    public static void Main(string[] args)
    {
        Vector3 v = new Vector3(0.0f, 0.0f, 7.0f);
        Vector3 v2 = NormalizeVector(v);
        Debug.WriteLine(v2.ToString());   // manual normalize: Z = 1
        v.Normalize();                    // in-place library normalize
        Debug.WriteLine(v.ToString());    // library normalize: Z = 0.9999999
    }
}

      

The above code produces the following output:

X: 0
Y: 0
Z: 1

X: 0
Y: 0
Z: 0.9999999

      

Why?

(Bonus points: Why me?)

0




5 answers


Look at how they implemented it (disassemble it, for example).

They probably wanted it to be faster and wrote something like:

 float l = 1 / v.Length();                        // one division...
 return new Vector3(v.X * l, v.Y * l, v.Z * l);   // ...then three multiplications

      



That trades three divisions for one division plus three multiplications, on the theory that multiplications are faster than divisions (they generally are, even on modern FPUs, though the gap is smaller than people assume). It also adds an extra intermediate rounding step, and therefore loses a little precision.

This would be the oft-cited "premature optimization".
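The precision cost of this reciprocal-multiply trick is easy to demonstrate. A minimal sketch in Python, whose native float is 64-bit IEEE 754 (the mechanism is identical for the 32-bit floats Vector3 uses; only the digit at which it bites differs):

```python
# Dividing by the length directly involves a single rounding, and for
# (0, 0, 7) it is exact: 7.0 / 7.0 == 1.0.
assert 7.0 / 7.0 == 1.0

# Multiplying by a rounded reciprocal performs two roundings.
# Sometimes the two errors happen to cancel...
assert (1.0 / 7.0) * 7.0 == 1.0

# ...and sometimes they do not, leaving a result one ulp below 1.
assert (1.0 / 49.0) * 49.0 != 1.0
assert (1.0 / 49.0) * 49.0 == 0.9999999999999999
```

Whether a given divisor hits the lucky or the unlucky case depends on how its reciprocal rounds, which is exactly why the library's result can differ from the straightforward division.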

+2




Do not worry about it; there is always some error when using floats. If you're curious, change the values to double and see whether it still happens.
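You can see how much error each width carries without recompiling anything. A quick sketch in Python, rounding values through 32-bit floats with the standard struct module and measuring the error exactly with Decimal:

```python
import struct
from decimal import Decimal

def f32(x):
    """Round a Python float (64-bit) through a 32-bit float."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Error of storing 0.1 at each width, measured exactly:
err32 = abs(Decimal(f32(0.1)) - Decimal('0.1'))   # float
err64 = abs(Decimal(0.1) - Decimal('0.1'))        # double

assert err32 < Decimal('1e-8')    # float: wrong from about the 9th digit
assert err64 < Decimal('1e-16')   # double: wrong from about the 17th digit
assert err64 < err32              # double is far closer...
assert err64 > 0                  # ...but still not exact
```

Switching to double shrinks the error by roughly eight orders of magnitude but never eliminates it, so code should tolerate it rather than depend on its absence.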



0




There is an interesting discussion to be had here about how numbers are formatted as strings.

For reference only:

Your number requires 24 bits, which means you are using the entire float mantissa (23 bits plus 1 implied bit).
Single.ToString() ends up calling a native function, so I can't tell exactly what's going on, but my guess is that it rounds the full mantissa at the last printed digit.
This is probably because decimal numbers often cannot be represented exactly in binary, so you end up with a long mantissa; for example, 0.01 is represented internally as 0.00999..., as you can see by writing:

float f = 0.01f;
Console.WriteLine("{0:G}", f);           // 0.01  (rounded for display)
Console.WriteLine("{0:G}", (double)f);   // the stored value, 0.00999999977648258...

      

Rounding to the seventh significant digit gets you back to "0.01", as expected.

As you can see from the above, numbers that fit in 7 significant digits will not show this issue.

Just to be clear: the rounding only happens when the number is converted to a string; any computation you do uses all the available bits.

A float guarantees about 7 significant decimal digits when printed (up to 9 are needed to round-trip the internal value), so beyond that, rounding, with its occasional quirks, happens automatically.
If you stay within 7 significant digits (e.g. 1 before the decimal point and 6 after), both the value and its string conversion will behave as expected.
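The round-trip behavior described above can be reproduced outside .NET as well. A sketch in Python, pushing the value through a 32-bit float with the standard struct module and using printf-style %g formatting (which, like .NET's "G", limits significant digits):

```python
import struct

def f32(x):
    """Round a Python float (64-bit) through a 32-bit float."""
    return struct.unpack('f', struct.pack('f', x))[0]

x = f32(0.01)        # nearest 32-bit float to 0.01
assert x != 0.01     # the stored value is 0.00999999977648...

# 7 significant digits round back to the decimal you typed...
assert '%.7g' % x == '0.01'
# ...while 9 digits expose the stored binary value.
assert '%.9g' % x == '0.00999999978'
```

This is exactly the "7 digits externally, 9 internally" split: 7 digits hide the binary representation error, 9 digits reveal it.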

As for bonus points:

Why you? Because this code "wanted to fool you".
(Lamest. Pun. Ever.)

0




You should expect this when using float: the computer works in binary, and binary fractions do not exactly match decimal ones.

For an intuitive example of the problems between different bases, consider the fraction 1/3. It cannot be represented exactly in decimal (it is 0.333333...), but it can be represented exactly in ternary (as 0.1).
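The same base mismatch is behind the classic decimal-fraction surprise, a one-line check in Python (true of any IEEE 754 binary arithmetic, including C# float and double):

```python
# 0.1, 0.2 and 0.3 all fall between exactly representable binary
# fractions, so each is stored slightly off, and the errors in 0.1
# and 0.2 do not cancel the error in 0.3.
assert 0.1 + 0.2 != 0.3
assert 0.1 + 0.2 == 0.30000000000000004
```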

Typically these problems are much less visible with doubles, at the cost of more computation (twice as many bits to handle). But considering that float-level accuracy was enough to get men to the Moon, you really shouldn't obsess over it :-)

These issues are Computer Science 101 material (as opposed to Programming 101, which you are clearly far beyond). If you're heading into DirectX code, where things like this happen regularly, it might be a good idea to pick up a basic book on computer architecture and give it a quick read.

0




If your code breaks over minute floating-point rounding errors, then I'm afraid you need to fix your code, because such errors are simply a fact of life.

0








