Averaging a set of points on a Google map into a smaller set

I am displaying a small google map on a web page using the Google Maps Static API.

I have a set of 15 coordinates that I would like to represent as points on a map.

Because the map is quite small (184 x 90 pixels) and the Google Maps URL has an upper limit of 2000 characters, I cannot represent every point on the map.

So instead I want to create a small list of coordinates representing the average of the large list.

So, instead of 15 points, I would end up with about 5 points that approximate the positions of the original 15. Say there are 3 points that are closer to each other than to any other point on the map; those 3 points would be collapsed into 1 point.

So, I think I am looking for an algorithm that can do this.

I'm not asking anyone to explain every step, but could you perhaps point me in the direction of a mathematical principle or a general-purpose function for this kind of thing?

I'm sure a similar function is used in, say, graphics software when pixelating an image.

(If I work this out, I will definitely post my results.)
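To make the length problem concrete, this is roughly how I build the URL. A minimal sketch in TypeScript, assuming the current Static Maps endpoint and its size, markers and key parameters; the staticMapUrl helper name is just for illustration:

```typescript
interface LatLng {
  lat: number;
  lng: number;
}

// Build a Static Maps URL with one markers parameter; multiple locations
// in a markers parameter are separated by a pipe character.
function staticMapUrl(points: LatLng[], apiKey: string): string {
  const markers = points
    .map(p => `${p.lat.toFixed(5)},${p.lng.toFixed(5)}`)
    .join("|");
  return (
    "https://maps.googleapis.com/maps/api/staticmap" +
    `?size=184x90&markers=${encodeURIComponent(markers)}&key=${apiKey}`
  );
}

// With 15 points at roughly 13 characters each plus separators, the URL
// quickly grows, so I would rather pass ~5 representative points.
```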

+2




3 answers


I recommend k-means clustering when you need to group N objects into a known number K < N of clusters, which seems to be your case. Note that one cluster may end up with a single outlier point and another with 5 points that are very close to each other: that is fine, and it will look closer to your original set than if you forced exactly 3 points into each cluster. :-)
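As an illustration only (not any particular library's API), a minimal k-means sketch might look like this; it assumes plain Euclidean distance on lat/lng, which is a rough approximation but fine for a thumbnail-sized map:

```typescript
interface LatLng {
  lat: number;
  lng: number;
}

function kMeans(points: LatLng[], k: number, maxIter = 50): LatLng[] {
  // Start from k of the input points as the initial cluster centres.
  let centres = points.slice(0, k).map(p => ({ ...p }));

  for (let iter = 0; iter < maxIter; iter++) {
    // Assignment step: index of the nearest centre for each point.
    const assignment = points.map(p => {
      let best = 0;
      let bestDist = Infinity;
      centres.forEach((c, i) => {
        const d = (p.lat - c.lat) ** 2 + (p.lng - c.lng) ** 2;
        if (d < bestDist) {
          bestDist = d;
          best = i;
        }
      });
      return best;
    });

    // Update step: move each centre to the mean of its assigned points.
    const next = centres.map((c, i) => {
      const members = points.filter((_, j) => assignment[j] === i);
      if (members.length === 0) return c; // leave an empty cluster where it is
      return {
        lat: members.reduce((s, p) => s + p.lat, 0) / members.length,
        lng: members.reduce((s, p) => s + p.lng, 0) / members.length,
      };
    });

    // Stop once the centres no longer move.
    const moved = next.some(
      (c, i) => c.lat !== centres[i].lat || c.lng !== centres[i].lng
    );
    centres = next;
    if (!moved) break;
  }
  return centres;
}

// e.g. const markerPoints = kMeans(allFifteenPoints, 5);
```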



+3




If you are searching for such functions / classes, have a look at the MarkerClusterer and MarkerManager utility classes. MarkerClusterer matches the described functionality exactly, as shown in the library's demo.
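Roughly how it is wired up, assuming the classic open-source MarkerClusterer utility whose constructor takes (map, markers, options); exact names and options vary between library versions, and it targets the interactive Maps JavaScript API rather than the Static Maps API, so treat this as a sketch:

```typescript
declare const google: any;                // stands in for the Maps JavaScript API
declare class MarkerClusterer {           // stands in for the utility class
  constructor(map: any, markers: any[], options?: object);
}
declare const coordinates: { lat: number; lng: number }[]; // the 15 points

const map = new google.maps.Map(document.getElementById("map"), {
  center: { lat: 0, lng: 0 },             // placeholder centre
  zoom: 10,
});

const markers = coordinates.map(c => new google.maps.Marker({ position: c }));

// The clusterer collapses markers that sit close together at the current zoom.
new MarkerClusterer(map, markers, { gridSize: 50 });
```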



+1




In general, I think the area to look at is "Vector Quantization". I have an old book, Vector Quantization and Signal Compression by Allen Gersho and Robert M. Gray, which contains many examples.

From memory, the Lloyd iteration (Lloyd's algorithm) is a good fit for this kind of thing. It takes the input set and reduces it to a fixed number of output points. Basically: distribute your output points evenly or randomly around the space. Assign each of your inputs to the nearest output point. Then calculate the error (for example, the sum of the distances or the root-mean-square error). Then, for each output point, move it to the centroid of the set of inputs assigned to it. This moves the point and may change which inputs get assigned to it. Repeat until no further changes occur from one iteration to the next.
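As a small sketch of the error measure I mean (my own illustration, assuming Euclidean distance and made-up names):

```typescript
interface Point {
  x: number;
  y: number;
}

// Root-mean-square distance from each input to its nearest output point.
function rmsError(inputs: Point[], outputs: Point[]): number {
  const sumSq = inputs.reduce((sum, p) => {
    const nearestSq = Math.min(
      ...outputs.map(o => (p.x - o.x) ** 2 + (p.y - o.y) ** 2)
    );
    return sum + nearestSq;
  }, 0);
  return Math.sqrt(sumSq / inputs.length);
}

// Stop iterating when the error no longer improves meaningfully, e.g.
// if (Math.abs(prevError - rmsError(inputs, outputs)) < 1e-9) break;
```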

Hope it helps.

0








