Choosing between map and unordered_map for keys consisting of computed double values.

To quickly find coplanar entities in a collection of planar entities in 3D space, I want to create a mapping from 3D planes to the set of objects lying in each plane (estimated maximum of about 1000 planes and ~100,000 objects).

I can create my own custom class to represent 3D planes, but mathematically this class needs four doubles to uniquely identify the plane: 3 coordinates for the normal vector and one coordinate to specify a point on the plane. So we have:

struct Plane3d
{
    double n_x, n_y, n_z;
    double u;   // (u*n_x, u*n_y, u*n_z) is a point on the plane

    // some constructors etc.
};

      

These four doubles are calculated each time from the object in question, so rounding errors and floating point comparison issues must be considered. Suppose I have calculated the appropriate (relative) error:

const double EPSILON;

      

However, I don't want to compare all pairs of objects for coplanarity one by one (categorization in O(n^2)); instead I want to create a map to categorize my objects.

The ideal would be an unordered_map (creation in O(n) time):

unordered_map<Plane3d, vector<Entity>, PlaneHash, PlaneEquality> mapping;

      

This requires writing two functors: PlaneEquality is not a problem (a tolerance-based sketch follows the questions below), but ...

  • Is it possible to write a hash function for four doubles (or even just a single double) that accounts for the comparison tolerance?
  • In the spec I read that the equality functor is only used to distinguish keys inside the same bucket. Is there a way to ensure that equal keys actually end up in the same bucket?
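
For reference, a tolerance-based PlaneEquality along the lines described above might look like the following sketch (it assumes the Plane3d struct and EPSILON defined earlier, and uses an absolute tolerance purely for illustration):

#include <cmath>

struct PlaneEquality
{
    // Two planes are considered equal when every component differs
    // by less than EPSILON (absolute tolerance, for illustration only).
    bool operator()(const Plane3d& a, const Plane3d& b) const
    {
        return std::fabs(a.n_x - b.n_x) < EPSILON
            && std::fabs(a.n_y - b.n_y) < EPSILON
            && std::fabs(a.n_z - b.n_z) < EPSILON
            && std::fabs(a.u   - b.u)   < EPSILON;
    }
};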

The other option is to use a regular map (creation in O(n log n)):

map<Plane3d, vector<Entity>, PlaneCompare> mapping; 

      

The PlaneCompare functor seems doable: I could use a lexicographic ordering of the four doubles and check each "less than" using EPSILON (a sketch of such a comparator follows the questions below). But I still have a few questions:

  • Will this really work? Are there pitfalls?
  • The spec states that equality of two keys p1 and p2 is determined by the test !PlaneCompare(p1,p2) && !PlaneCompare(p2,p1). With lexicographic ordering this should be equivalent to a straightforward equality test with an error margin, but isn't that slower?
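
A sketch of the comparator described above, again assuming the Plane3d struct and EPSILON from earlier; whether such a tolerance-based "less than" satisfies std::map's strict-weak-ordering requirement is part of what is being asked here:

struct PlaneCompare
{
    // Lexicographic ordering: a component is treated as "equal" when it
    // differs by less than EPSILON, and only then does the next component decide.
    bool operator()(const Plane3d& a, const Plane3d& b) const
    {
        if (a.n_x < b.n_x - EPSILON) return true;
        if (b.n_x < a.n_x - EPSILON) return false;
        if (a.n_y < b.n_y - EPSILON) return true;
        if (b.n_y < a.n_y - EPSILON) return false;
        if (a.n_z < b.n_z - EPSILON) return true;
        if (b.n_z < a.n_z - EPSILON) return false;
        return a.u < b.u - EPSILON;
    }
};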

2 answers


"Is it possible to write a hash function for four doubles (or even just a single double) that accounts for the comparison tolerance?"

No, it is not.

That sounds like a very definite statement; how can I be so sure?

Let's say you want a tolerance of 0.00001. The exact value doesn't matter; we'll just use it as an example. It would mean that for this hash function:

  • hash(1.0) must return the same value as hash(1.00001)

so that they can be considered equal. But it also means:

  • hash(1.00001) should return the same value as hash(1.00002)

for the same reason, and

  • hash(1.00002) should return the same value as hash(1.00003)

... and so on, all the way up to the largest possible double, effectively ad infinitum. The same is true for values below 1.

So any hash function that allows for a tolerance would have to return the same hash for all values, making it useless.

P.S. To actually recommend an approach that works: a 4D analogue of a quadtree (technically something like a sedecimtree, with 16 children per node) is probably best.


You can round your doubles, so that, for example, all values in the range [n - epsilon, n + epsilon) are rounded to n, where n mod epsilon == 0, and then hash them. That way, close values will get the same hash value.



How best to hash the four doubles depends on your case, but even summing them can be good enough.
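
A sketch of such a quantizing hash, assuming the Plane3d struct and EPSILON from the question: each component is snapped to its EPSILON-sized grid cell before hashing, so values in the same cell hash identically (values that fall on opposite sides of a cell boundary still hash differently, which is the caveat from the other answer):

#include <cmath>
#include <cstddef>
#include <functional>

struct PlaneHash
{
    std::size_t operator()(const Plane3d& p) const
    {
        // Map a double to the index of its EPSILON-sized grid cell.
        auto cell = [](double v) {
            return static_cast<long long>(std::floor(v / EPSILON));
        };
        // Combine the four cell indices into one hash value.
        std::size_t h = std::hash<long long>()(cell(p.n_x));
        h = h * 31 + std::hash<long long>()(cell(p.n_y));
        h = h * 31 + std::hash<long long>()(cell(p.n_z));
        h = h * 31 + std::hash<long long>()(cell(p.u));
        return h;
    }
};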
