Converting an image to grayscale in iOS

Recently I was working on an iOS image-processing app. Among the many features I am planning to spin out, most of them involve converting an image to grayscale before applying any effects.

So in this post, I am going to describe what I did to take a color image as input and convert it to grayscale.

The reason I did it: I was planning to add multiple colored layers on top of the image, and those layers look much better when the image is first converted to grayscale.

This code was not written by me; I found it on this link. I am just putting it forward for anyone who wants to implement this functionality.

Since we are going to process an instance of UIImage, we will just make a simple category on UIImage, say UIImage+ToGrayScale.

// Byte layout per pixel for kCGBitmapByteOrder32Little with
// kCGImageAlphaPremultipliedLast: memory order is A, B, G, R.
#define ALPHA 0
#define BLUE  1
#define GREEN 2
#define RED   3
#define BITS_PER_SAMPLE 8
#define PIXEL(x, y) pixels[(y) * width + (x)]

- (UIImage *)toGrayscale {
    // Create image rectangle with the current image width/height, in pixels.
    CGRect imageRect = CGRectMake(0, 0, self.size.width * self.scale, self.size.height * self.scale);
    NSInteger width  = imageRect.size.width;
    NSInteger height = imageRect.size.height;

    // The pixels will be painted into this array.
    uint32_t *pixels = (uint32_t *)malloc(width * height * sizeof(uint32_t));

    // Clear the pixels so any transparency is preserved.
    memset(pixels, 0, width * height * sizeof(uint32_t));

    // Create a context with RGBA pixels.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixels, width, height, BITS_PER_SAMPLE,
                                                 width * sizeof(uint32_t), colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);

    // Paint the bitmap into our context, which fills in the pixels array.
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), [self CGImage]);

    for (NSInteger y = 0; y < height; y++) {
        for (NSInteger x = 0; x < width; x++) {
            uint8_t *rgbaPixel = (uint8_t *)&PIXEL(x, y);
            // Convert to grayscale using the recommended luminance weights.
            uint8_t gray = (uint8_t)((30 * rgbaPixel[RED] + 59 * rgbaPixel[GREEN] + 11 * rgbaPixel[BLUE]) / 100);
            // Set the red, green, and blue channels to the gray value.
            rgbaPixel[RED]   = gray;
            rgbaPixel[GREEN] = gray;
            rgbaPixel[BLUE]  = gray;
        }
    }

    // Create a new CGImageRef from our context with the modified pixels.
    CGImageRef image = CGBitmapContextCreateImage(context);

    // We're done with the context, color space, and pixels.
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(pixels);

    // Make a new UIImage to return.
    UIImage *resultUIImage = [UIImage imageWithCGImage:image
                                                 scale:self.scale
                                           orientation:UIImageOrientationUp];

    // We're done with the image now too.
    CGImageRelease(image);
    return resultUIImage;
}

And that's it. It was simpler than I had thought. Again, thanks to this StackOverflow post for the valuable guidance and direction.