# Using math in Apple's pARk sample code

I looked into the pARk example project (http://developer.apple.com/library/IOS/#samplecode/pARk/Introduction/Intro.html#//apple_ref/doc/uid/DTS40011083) so that I could apply some of its basics in the app I'm working on. I understand almost everything except:

• The way it calculates whether a point of interest should be displayed or not. It gets the device attitude, multiplies it by the projection matrix (to get the rotation in GL coordinates?), then multiplies that matrix by the coordinates of the point of interest, and finally looks at the last coordinate of the resulting vector to decide whether the point of interest should be shown. What are the mathematical underpinnings of this?

Thank you so much!


I assume you mean the following method:

```
- (void)drawRect:(CGRect)rect
{
    if (placesOfInterestCoordinates == nil) {
        return;
    }

    mat4f_t projectionCameraTransform;
    multiplyMatrixAndMatrix(projectionCameraTransform, projectionTransform, cameraTransform);

    int i = 0;
    for (PlaceOfInterest *poi in [placesOfInterest objectEnumerator]) {
        vec4f_t v;
        multiplyMatrixAndVector(v, projectionCameraTransform, placesOfInterestCoordinates[i]);

        float x = (v[0] / v[3] + 1.0f) * 0.5f;
        float y = (v[1] / v[3] + 1.0f) * 0.5f;
        if (v[2] < 0.0f) {
            poi.view.center = CGPointMake(x * self.bounds.size.width, self.bounds.size.height - y * self.bounds.size.height);
            poi.view.hidden = NO;
        } else {
            poi.view.hidden = YES;
        }
        i++;
    }
}
```
This performs an OpenGL-style vertex transformation on each point of interest to check whether it lies within the view frustum. The frustum is created on the following line:

```
createProjectionMatrix(projectionTransform, 60.0f*DEGREES_TO_RADIANS, self.bounds.size.width*1.0f / self.bounds.size.height, 0.25f, 1000.0f);
```

This sets up a frustum with a 60 degree field of view, a near clipping plane at 0.25, and a far clipping plane at 1000. Any point of interest further away than 1000 units will therefore not be visible.

Going through the code: first the projection matrix, which defines the frustum, is multiplied by the camera view matrix, which rotates the world so that it is oriented correctly relative to the camera. Then, for each POI, its location is multiplied by this combined view-projection matrix. This projects the POI's location into clip space, applying both the rotation and the perspective.
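The per-POI step relies on the sample's `multiplyMatrixAndVector` helper, whose implementation isn't reproduced in the answer. Assuming the usual column-major layout, it presumably amounts to this:

```c
typedef float vec4f_t[4];
typedef float mat4f_t[16]; /* column-major: column c starts at index 4*c */

/* Sketch of a column-major 4x4 matrix-by-vector multiply, as
   multiplyMatrixAndVector is assumed to work in the sample. */
static void matVec(vec4f_t out, const mat4f_t m, const vec4f_t v)
{
    for (int row = 0; row < 4; row++) {
        out[row] = m[row + 0]  * v[0]
                 + m[row + 4]  * v[1]
                 + m[row + 8]  * v[2]
                 + m[row + 12] * v[3];
    }
}
```

With the POI position stored as a homogeneous vector (x, y, z, 1), one such multiply by the combined view-projection matrix yields the clip-space coordinates used in `drawRect:`.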

The next two lines then convert the transformed POI location into what are known as normalized device coordinates. The 4-component vector must be collapsed into three-dimensional space; this is achieved by projecting it onto the plane w == 1, i.e. dividing the vector by its w component, v[3]. It would then be possible to determine whether the point lies within the projection frustum by checking whether its coordinates fall inside a cube of side length 2 centered at the origin [0, 0, 0]. Here, the x and y coordinates are remapped from the range [-1, 1] to [0, 1] to match the `UIKit` coordinate system, by adding 1 and dividing by 2.
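Written out as standalone helpers, the perspective divide plus the [-1, 1] to [0, 1] remap from `drawRect:` look like this (a sketch; UIKit's origin is the top-left corner with y growing downward, hence the vertical flip the sample applies when setting the view center):

```c
/* Perspective divide followed by the NDC -> UIKit screen mapping. */
static float clipToScreenX(float clipX, float clipW, float viewWidth)
{
    float ndcX = clipX / clipW;               /* perspective divide: [-1, 1] */
    return (ndcX + 1.0f) * 0.5f * viewWidth;  /* remap to [0, width] */
}

static float clipToScreenY(float clipY, float clipW, float viewHeight)
{
    float ndcY = clipY / clipW;
    return viewHeight - (ndcY + 1.0f) * 0.5f * viewHeight; /* flip for UIKit */
}
```

For example, a point at the center of the frustum (clip x = 0) lands in the middle of the view, and the top of the frustum (NDC y = 1) lands at UIKit y = 0.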

The v[2] component, z, is then checked to see whether it is greater than 0 (in which case the POI is hidden). This is actually not quite right: since the value has not been biased, it should really be compared against -1. Either way, the test determines whether the POI lies in the front half of the projection frustum, in which case the object is considered visible and is rendered.
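For comparison, the full clip-space visibility test (the standard OpenGL clipping rule, not what the sample does) checks every component against w before the divide: a point is inside the frustum iff -w <= c <= w for each of x, y, z, with w > 0. The sample's `v[2] < 0.0f` check is only a coarse front/back approximation of this:

```c
#include <stdbool.h>

/* Full clip-space frustum test on a homogeneous clip-space point v.
   Standard OpenGL rule; the pARk sample uses only a rough z check. */
static bool insideFrustum(const float v[4])
{
    if (v[3] <= 0.0f) {
        return false; /* at or behind the camera plane */
    }
    for (int i = 0; i < 3; i++) {
        if (v[i] < -v[3] || v[i] > v[3]) {
            return false; /* outside one of the six clip planes */
        }
    }
    return true;
}
```

Using this instead of the bare z test would also cull POIs that are in front of the camera but off to the side of the screen, rather than relying on the view simply being positioned off-screen.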

If you're not familiar with vertex projection and coordinate systems, this is a huge topic with a pretty steep learning curve, but there are many online resources that can help you get started.

Good luck!
