CGContextDrawLayerInRect from CGBitmapContextCreate failing on iPhone 7 Plus

Vote count: 0

I’ve got an Objective-C project which at one point takes a photo from either your photo library or the camera, chops it up into little squares (tiles), and eventually creates new UIImage objects from the chopped-up squares.

The code works perfectly on my older iPhone 5 and on my iPad Mini 2.

On my iPhone 7 Plus, however, it produces blank output (instead of chopped up parts of my input). The iPad Mini 2 and the iPhone 7 Plus are both running iOS 10.3.3, and are both 64-bit devices.

The complete function is here:

    -(void) createTilesFromImage:(UIImage*)img inRect:(CGRect)rect inContext:(CGContextRef)context {
        if (tilesDefined) {
            return;
        }
        tilesDefined = YES;

        // Tileboard layer
        CGSize tileboardsize = rect.size;
        tileboardrect = rect;

        // Chop image
        CGLayerRef imagetochop = CGLayerCreateWithContext(context, tileboardsize, NULL);
        CGContextRef ctx2 = CGLayerGetContext(imagetochop);
        UIGraphicsPushContext(ctx2);
        [img drawInRect:CGRectMake(0, 0, tileboardsize.width, tileboardsize.height)];
        UIGraphicsPopContext();

        // Tile layer size (CG & CA)
        tilesize = CGSizeMake(ceilf(tileboardsize.width / dims.x), ceilf(tileboardsize.height / dims.y));

        // Bitmap context
        CGContextRef bmpContext = [[QPDrawing shared] newCGBitmapContextWithSize:tilesize withData:YES];

        // Create tile layers
        tiles = [[NSMutableArray alloc] initWithCapacity:dims.x * dims.y];
        int i = 0;
        for (int y = 0; y < dims.y; y++) {
            for (int x = 0; x < dims.x; x++) {
                tileLayers[i] = CGLayerCreateWithContext(context, tilesize, NULL);
                CGContextRef squarecontext = CGLayerGetContext(tileLayers[i]);
                CGPoint offset = CGPointMake(x * tileboardsize.width / dims.x * -1, y * tileboardsize.height / dims.y * -1);

                // Invert the layer prior to drawing
                CGContextTranslateCTM(squarecontext, 0, tilesize.height);
                CGContextScaleCTM(squarecontext, 1.0, -1.0);
                CGContextDrawLayerAtPoint(squarecontext, offset, imagetochop);

                CGContextDrawLayerInRect(bmpContext, (CGRect){0, 0, tilesize.width, tilesize.height}, tileLayers[i]);
                CGImageRef imageRef = CGBitmapContextCreateImage(bmpContext);

                [tiles addObject:[UIImage imageWithCGImage:imageRef]];

                CGImageRelease(imageRef);

                i++;
            }
        }

        // Cleanup
        CGLayerRelease(imagetochop);
        CGContextRelease(bmpContext);

        rect = CGRectInset(rect, 2, 2);
        CGContextAddRect(context, rect);
        CGContextSetStrokeColorWithColor(context, [UIColor colorWithRed:1 green:0 blue:0 alpha:1].CGColor);
        CGContextSetLineWidth(context, 2);
        CGContextStrokePath(context);
    }
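The translate-then-scale pair in the loop above ("Invert the layer prior to drawing") flips the y axis, mapping between UIKit's top-left origin and Core Graphics' bottom-left origin. As a sanity check, here is the arithmetic that transform applies to a point's y coordinate (a sketch, not code from the question):

    #include <assert.h>

    /* Translating by height and then scaling y by -1 composes to
     * y' = height + (-1 * y), i.e. y' = height - y. */
    static double flipped_y(double y, double height) {
        return height - y;
    }

So a point at the top of the layer (y = 0) lands at the bottom (y' = height), and vice versa.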

This calls one outside helper when creating the CGContextRef, bmpContext — newCGBitmapContextWithSize:withData:. The code for that method is below:

    - (CGContextRef) newCGBitmapContextWithSize:(CGSize)size withData:(BOOL)includeData {
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        int width  = (int)ceilf(size.width);
        int height = (int)ceilf(size.height);
        size_t bitsPerComponent = 8;
        size_t bytesPerPixel    = 4;
        size_t bytesPerRow      = (width * bitsPerComponent * bytesPerPixel + 7) / 8;
        size_t dataSize         = bytesPerRow * height;

        unsigned char *data = NULL;

        if (includeData) {
            data = malloc(dataSize);
            memset(data, 0, dataSize);
        }

        CGContextRef context = CGBitmapContextCreate(data, size.width, size.height,
            bitsPerComponent, bytesPerRow, colorSpace,
            kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);

        CGColorSpaceRelease(colorSpace);
        return context;
    }
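A note on the bytesPerRow expression in that method: the variable named bytesPerPixel is actually being used as a components-per-pixel count (8 bits per component × 4 components = 32 bits per pixel), so the round-up-to-whole-bytes formula reduces to width * 4 for RGBA8888. A sketch of the same arithmetic, with the variable renamed for clarity (this renaming is mine, not from the question):

    #include <assert.h>
    #include <stddef.h>

    /* Bytes per row = (bits per pixel * width), rounded up to whole bytes.
     * For 8-bit RGBA this is simply width * 4. */
    static size_t bytes_per_row(size_t width, size_t bitsPerComponent,
                                size_t componentsPerPixel) {
        return (width * bitsPerComponent * componentsPerPixel + 7) / 8;
    }
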

At first I thought it was the “drawInRect” call that somehow wasn’t working, but now I’m convinced that isn’t the case. I think it’s somehow related to the CGBitmapContextCreate call.

Thanks!

asked 3 hours ago
drewster

1 Answer

Vote count: 0

It seems to me that you’re doing much more work than you need to; perhaps you are overthinking the problem of how to divide an image into tiles. You might be interested in a much simpler way. This is what I do in my Diabelli’s Theme app. It’s Swift, but I don’t think you’ll have any trouble reading it (I do have the old Objective-C code sitting around if you really need it).

Obviously, I loop through the “tiles” into which the image is to be divided. I have already worked out the number of rows and columns, and the size of a tile, w by h. The original image is im. This is all I have to do:

    for row in 0..<ROWS {
        for col in 0..<COLS {
            UIGraphicsBeginImageContextWithOptions(CGSize(
                width: CGFloat(w), height: CGFloat(h)), true, 0)
            im.draw(at: CGPoint(x: CGFloat(-w*col), y: CGFloat(-h*row)))
            let tileImage = UIGraphicsGetImageFromCurrentImageContext()
            UIGraphicsEndImageContext()
            // do something with tileImage
        }
    }
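The trick in the loop above is that drawing the whole image at a negative offset inside a w-by-h context leaves exactly one tile visible: everything outside the context is clipped away. As a sketch of just that offset arithmetic (the helper name is mine, not from the answer):

    #include <assert.h>

    typedef struct { int x; int y; } Origin;

    /* Draw origin that positions tile (col, row) of the full image
     * over a w-by-h drawing context anchored at (0, 0). */
    static Origin tile_draw_origin(int col, int row, int w, int h) {
        Origin o = { -w * col, -h * row };
        return o;
    }

For tile (0, 0) the image is drawn at the origin; for tile (2, 1) with 50×40 tiles it is drawn at (-100, -40), so the third column and second row of the image land in the visible w-by-h window.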
edited 1 hour ago
answered 1 hour ago
matt
