Extract 32-bit RGBA value from NSColor

I have an NSColor, and I really need the 32-bit RGBA value it represents. Is there any easy way to get this, beyond grabbing the float components, then multiplying and ORing and generally doing ugly, error-prone things by hand?

Edit: Thanks for the help. What I was really hoping for was a Cocoa function that already did this, but I'm fine with doing it myself.

+1




3 answers


Another brute-force approach is to create a temporary CGBitmapContext and fill it with the color.

NSColor *someColor = ...; // whatever color you have, already in an RGB color space
uint8_t data[4];

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

// Note: 8-bit RGB bitmap contexts only support premultiplied alpha,
// so the stored R/G/B bytes come back premultiplied by the alpha value.
CGContextRef ctx = CGBitmapContextCreate((void *)data, 1, 1, 8, 4, colorSpace, kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Big);

CGContextSetRGBFillColor(ctx, [someColor redComponent], [someColor greenComponent], [someColor blueComponent], [someColor alphaComponent]);

CGContextFillRect(ctx, CGRectMake(0, 0, 1, 1));

CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);
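If you then want a single packed value, here is a minimal sketch, assuming the A, R, G, B byte layout the context above produces and that you want RGBA bit order:

uint32_t rgba = ((uint32_t)data[1] << 24) |  // R
                ((uint32_t)data[2] << 16) |  // G
                ((uint32_t)data[3] << 8)  |  // B
                 (uint32_t)data[0];          // A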

      

FWIW, there is no endianness problem with 8-bit-per-component color values. Endianness only matters for integers of 16 bits or more. You can lay out the memory however you want, but the individual 8-bit values are the same whether it's a big-endian or a little-endian machine. (ARGB is the default 8-bit format for Core Graphics and Core Image, I believe.)

Why not just this?



uint32_t r = (uint32_t)(MIN(1.0f, MAX(0.0f, [someColor redComponent])) * 255.0f);
uint32_t g = (uint32_t)(MIN(1.0f, MAX(0.0f, [someColor greenComponent])) * 255.0f);
uint32_t b = (uint32_t)(MIN(1.0f, MAX(0.0f, [someColor blueComponent])) * 255.0f);
uint32_t a = (uint32_t)(MIN(1.0f, MAX(0.0f, [someColor alphaComponent])) * 255.0f);
uint32_t value = (r << 24) | (g << 16) | (b << 8) | a;

      

Then you know exactly how it is laid out in memory.

Or, if this is clearer to you:

uint8_t r = (uint8_t)(MIN(1.0f, MAX(0.0f, [someColor redComponent])) * 255.0f);
uint8_t g = (uint8_t)(MIN(1.0f, MAX(0.0f, [someColor greenComponent])) * 255.0f);
uint8_t b = (uint8_t)(MIN(1.0f, MAX(0.0f, [someColor blueComponent])) * 255.0f);
uint8_t a = (uint8_t)(MIN(1.0f, MAX(0.0f, [someColor alphaComponent])) * 255.0f);

uint8_t data[4];
data[0] = r;
data[1] = g;
data[2] = b;
data[3] = a;
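If you later need those four bytes as one uint32_t, a minimal sketch (the byte layout stays R, G, B, A, but the resulting number then does depend on the host's endianness):

uint32_t value;
memcpy(&value, data, sizeof value); // same bytes in memory, but the numeric value differs between big- and little-endian machines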

      

+5




Not all colors are RGBA colors. They may have an RGBA approximation, but that approximation may or may not be accurate. In addition, there are "colors" that Core Graphics draws as patterns (for example, the window background color on some versions of Mac OS X).
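If you need to guard against that, here is a minimal sketch, reusing someColor from the answer above; colorUsingColorSpaceName: returns nil when the color (such as a pattern color) cannot be converted:

NSColor *rgb = [someColor colorUsingColorSpaceName:NSCalibratedRGBColorSpace];
if (rgb == nil) {
    // e.g. a pattern color with no usable RGBA representation
} else {
    CGFloat r, g, b, a;
    [rgb getRed:&r green:&g blue:&b alpha:&a]; // safe now that rgb is in an RGB color space
    // pack r/g/b/a into a uint32_t as shown in the answer above
}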



+3




Converting the 4 floats to their integer representation, however you choose to do it, is the only way.

+1








