"Linear" package truncates values close to 0 when using `normalize`

I spent a few minutes debugging an issue where the "Linear" package was truncating values close to zero when using `Linear.normalize`. Specifically, I was taking the cross product of very small triangles and normalizing the result, which misbehaved in surprising ways until I noticed what was wrong and multiplied the cross product by 10,000 first.

Why is this necessary? How can I get rid of this behavior?

Edit: just for fun, here's a video of the error. Notice how the sphere loses color when the number of triangles close to it is large enough? Yes, good luck debugging that!



1 answer


Looking at the source for `normalize`, you will see that it is defined as

-- | Normalize a 'Metric' functor to have unit 'norm'. This function
-- does not change the functor if its 'norm' is 0 or 1.
normalize :: (Floating a, Metric f, Epsilon a) => f a -> f a
normalize v = if nearZero l || nearZero (1-l) then v else fmap (/sqrt l) v
  where l = quadrance v
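To see the effect of the `nearZero` guard concretely, here is a self-contained sketch that mirrors that definition on a plain 3-tuple (the primed names are made up for this example; the real `normalize` works on any `Metric` functor):

```haskell
-- Mirror of Linear's normalize on a plain 3-tuple, so the near-zero
-- guard can be observed without the library.
type V3' = (Double, Double, Double)

quadrance' :: V3' -> Double
quadrance' (x, y, z) = x*x + y*y + z*z

-- Linear's Epsilon instance for Double uses a cutoff of 1e-12.
nearZero' :: Double -> Bool
nearZero' a = abs a <= 1e-12

normalizeLike :: V3' -> V3'
normalizeLike v@(x, y, z)
  | nearZero' l || nearZero' (1 - l) = v   -- tiny vector returned unchanged!
  | otherwise = (x / s, y / s, z / s)
  where l = quadrance' v
        s = sqrt l

main :: IO ()
main = do
  -- quadrance of (1e-7, 2e-7, 3e-7) is 1.4e-13, below the 1e-12 cutoff,
  -- so the input comes back untouched instead of being normalized
  print (normalizeLike (1e-7, 2e-7, 3e-7))
  -- a vector of ordinary magnitude is normalized as expected
  print (normalizeLike (1, 2, 3))
```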

This means that if your vector's quadrance is really close to 0, `normalize` returns it unchanged instead of producing a unit vector. To avoid this, you can write your own `normalize'` without the check:

normalize' :: (Floating a, Metric f) => f a -> f a
normalize' v = fmap (/ sqrt l) v where l = quadrance v

With any luck, that should solve your problem.
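For intuition, here is the unchecked version mirrored on a plain 3-tuple (again a sketch with invented names; the real `normalize'` above works on any `Metric` functor):

```haskell
-- Unchecked normalization on a plain 3-tuple: no epsilon guard, so even
-- tiny vectors are scaled to unit length (at the cost of dividing by a
-- possibly very small number).
normalizeUnchecked :: (Double, Double, Double) -> (Double, Double, Double)
normalizeUnchecked (x, y, z) = (x / s, y / s, z / s)
  where s = sqrt (x*x + y*y + z*z)

main :: IO ()
main = do
  let (x, y, z) = normalizeUnchecked (1e-10, 2e-10, 3e-10)
  -- the result has (approximately) unit length even for a tiny input
  print (x*x + y*y + z*z)
```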

Another way could be to scale your values up so the quadrance is no longer near zero, then normalize; uniform scaling does not change the direction, so no rescaling back is needed afterwards. Something like

normalize'' factor = normalize . (/ factor)

so you can call

normalize'' 10e-10 (V3 1e-10 2e-10 3e-10)

but the extra division can introduce rounding errors due to how IEEE floating point numbers are stored.
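The rescaling idea can likewise be sketched on a plain 3-tuple (`rescaleNormalize` is a hypothetical name, not part of the library):

```haskell
-- Scale the vector up by 1/factor before normalizing; uniform scaling
-- preserves direction, so the normalized result is the same unit vector,
-- but the intermediate quadrance is no longer near zero.
rescaleNormalize :: Double -> (Double, Double, Double) -> (Double, Double, Double)
rescaleNormalize factor (x, y, z) = (x' / s, y' / s, z' / s)
  where (x', y', z') = (x / factor, y / factor, z / factor)
        s = sqrt (x'*x' + y'*y' + z'*z')

main :: IO ()
main = print (rescaleNormalize 1e-10 (1e-10, 2e-10, 3e-10))
```

Calling `rescaleNormalize 1e-10 (1e-10, 2e-10, 3e-10)` gives the same unit vector as normalizing `(1, 2, 3)` directly.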

EDIT: As cchalmers points out, this is already implemented as `signorm` in `Linear.Metric`, so use that function.
