This is a two-part question. I have the following code working; it grabs the current display surface and creates a video out of those surfaces (everything happens in the background).
for (int i = 0; i < 100; i++) {
    IOMobileFramebufferConnection connect;
    kern_return_t result;
    IOSurfaceRef screenSurface = NULL;

    // Locate the framebuffer service (the class name differs per device generation).
    io_service_t framebufferService = IOServiceGetMatchingService(kIOMasterPortDefault, IOServiceMatching("AppleH1CLCD"));
    if (!framebufferService)
        framebufferService = IOServiceGetMatchingService(kIOMasterPortDefault, IOServiceMatching("AppleM2CLCD"));
    if (!framebufferService)
        framebufferService = IOServiceGetMatchingService(kIOMasterPortDefault, IOServiceMatching("AppleCLCD"));

    result = IOMobileFramebufferOpen(framebufferService, mach_task_self(), 0, &connect);
    result = IOMobileFramebufferGetLayerDefaultSurface(connect, 0, &screenSurface);

    uint32_t aseed;
    IOSurfaceLock(screenSurface, kIOSurfaceLockReadOnly, &aseed);
    uint32_t width = IOSurfaceGetWidth(screenSurface);
    uint32_t height = IOSurfaceGetHeight(screenSurface);
    m_width = width;
    m_height = height;

    // Describe a destination surface matching the screen surface.
    CFMutableDictionaryRef dict;
    int pitch = width * 4, size = width * height * 4;
    int bPE = 4;
    char pixelFormat[4] = {'A', 'R', 'G', 'B'};
    dict = CFDictionaryCreateMutable(kCFAllocatorDefault, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    CFDictionarySetValue(dict, kIOSurfaceIsGlobal, kCFBooleanTrue);
    CFDictionarySetValue(dict, kIOSurfaceBytesPerRow, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &pitch));
    CFDictionarySetValue(dict, kIOSurfaceBytesPerElement, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &bPE));
    CFDictionarySetValue(dict, kIOSurfaceWidth, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &width));
    CFDictionarySetValue(dict, kIOSurfaceHeight, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &height));
    CFDictionarySetValue(dict, kIOSurfacePixelFormat, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, pixelFormat));
    CFDictionarySetValue(dict, kIOSurfaceAllocSize, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &size));
    IOSurfaceRef destSurf = IOSurfaceCreate(dict);

    // Blit the screen surface into the destination surface.
    IOSurfaceAcceleratorRef outAcc;
    IOSurfaceAcceleratorCreate(NULL, 0, &outAcc);
    IOSurfaceAcceleratorTransferSurface(outAcc, screenSurface, destSurf, dict, NULL);
    IOSurfaceUnlock(screenSurface, kIOSurfaceLockReadOnly, &aseed);
    CFRelease(outAcc);

    // MOST RELEVANT PART OF CODE
    CVPixelBufferCreateWithBytes(NULL, width, height, kCVPixelFormatType_32BGRA, IOSurfaceGetBaseAddress(destSurf), IOSurfaceGetBytesPerRow(destSurf), NULL, NULL, NULL, &sampleBuffer);
    CMTime frameTime = CMTimeMake(frameCount, (int32_t)5);
    [adaptor appendPixelBuffer:sampleBuffer withPresentationTime:frameTime];
    CFRelease(sampleBuffer);
    CFRelease(destSurf);
    frameCount++;
}
PS: the last 4-5 lines of the code are the most relevant part (in case you want to filter).
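(The loop also refers to adaptor, sampleBuffer, and frameCount that are created elsewhere. For context, the writer side is set up roughly like the sketch below; the output path, codec settings, and dimensions here are placeholders, not my exact values.)

// Rough sketch of the assumed writer/adaptor setup (illustrative values only).
#import <AVFoundation/AVFoundation.h>

CVPixelBufferRef sampleBuffer = NULL;
int frameCount = 0;

NSError *error = nil;
NSURL *outputURL = [NSURL fileURLWithPath:@"/var/mobile/Documents/capture.mov"]; // placeholder path
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:outputURL
                                                       fileType:AVFileTypeQuickTimeMovie
                                                          error:&error];

NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                               AVVideoCodecH264, AVVideoCodecKey,
                               [NSNumber numberWithInt:640], AVVideoWidthKey,   // assumed size
                               [NSNumber numberWithInt:960], AVVideoHeightKey,  // assumed size
                               nil];
AVAssetWriterInput *writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                                     outputSettings:videoSettings];
writerInput.expectsMediaDataInRealTime = YES;

NSDictionary *bufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
                                  [NSNumber numberWithInt:kCVPixelFormatType_32BGRA],
                                  (id)kCVPixelBufferPixelFormatTypeKey, nil];
AVAssetWriterInputPixelBufferAdaptor *adaptor =
    [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                                     sourcePixelBufferAttributes:bufferAttributes];

[videoWriter addInput:writerInput];
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];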
1) The resulting video has artifacts. I have worked with video before and have run into issues like this; I can think of two possible reasons:

i. The pixel buffer passed to the adaptor is modified or released before its processing (encoding + writing) has finished. This could be caused by the asynchronous calls, but I am not sure whether this is really the problem here or how to fix it (one idea for ruling it out is sketched right after the questions below).

ii. The timestamps being passed are inaccurate (e.g., two frames with the same timestamp, or a frame with a lower timestamp than the previous one). I logged the timestamp values, and this does not seem to be the problem.

2) The code above cannot grab the surface while a video is playing or a game is running; all I get in the output is a blank screen. This may be due to the hardware-accelerated decoding that happens in those cases.

Any input on either part of the question would be very helpful. Also, if you have any good links for reading up on IOSurfaces, please post them here.
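To rule out cause i., one thing I am considering is to stop wrapping the IOSurface memory directly with CVPixelBufferCreateWithBytes and instead copy the bytes into a pixel buffer taken from the adaptor's pool, so the encoder owns its own copy of the data. A rough, untested sketch of what would replace the "most relevant" lines of the loop (assuming destSurf stays BGRA with dimensions matching the adaptor):

// Sketch: copy the blitted bytes into a pooled CVPixelBuffer so the encoder
// never references the IOSurface memory directly. Untested idea.
CVPixelBufferRef pooledBuffer = NULL;
if (CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault,
                                       adaptor.pixelBufferPool,
                                       &pooledBuffer) == kCVReturnSuccess) {
    CVPixelBufferLockBaseAddress(pooledBuffer, 0);
    uint8_t *dst = (uint8_t *)CVPixelBufferGetBaseAddress(pooledBuffer);
    uint8_t *src = (uint8_t *)IOSurfaceGetBaseAddress(destSurf);
    size_t dstStride = CVPixelBufferGetBytesPerRow(pooledBuffer);
    size_t srcStride = IOSurfaceGetBytesPerRow(destSurf);
    size_t rowBytes = (dstStride < srcStride) ? dstStride : srcStride;
    size_t rows = IOSurfaceGetHeight(destSurf);
    for (size_t r = 0; r < rows; r++) {
        memcpy(dst + r * dstStride, src + r * srcStride, rowBytes);
    }
    CVPixelBufferUnlockBaseAddress(pooledBuffer, 0);

    CMTime frameTime = CMTimeMake(frameCount, (int32_t)5);
    [adaptor appendPixelBuffer:pooledBuffer withPresentationTime:frameTime];
    CVPixelBufferRelease(pooledBuffer);
}
CFRelease(destSurf);
frameCount++;

The extra memcpy per frame obviously costs something, but it would at least tell me whether the artifacts come from the surface being reused while the encoder still reads it.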
I did some experimenting and concluded that the screen surface being copied changes even before the content transfer (the IOSurfaceAcceleratorTransferSurface() call) has finished. I am using a lock (tried both async and read-only), but it gets overridden by iOS. I changed the code between the lock/unlock calls to the following minimum:
IOSurfaceLock(screenSurface, kIOSurfaceLockReadOnly, &aseed);
uint32_t aseed1 = IOSurfaceGetSeed(screenSurface);
IOSurfaceAcceleratorTransferSurface(outAcc, screenSurface, destSurf, dict, NULL);
uint32_t aseed2 = IOSurfaceGetSeed(screenSurface);
IOSurfaceUnlock(screenSurface, kIOSurfaceLockReadOnly, &aseed);
The GetSeed function tells you whether the contents of a surface have changed. I also logged a count of how many frames had their seed change between those two reads, and that count was non-zero. So the following code fixed the problem:
if (aseed1 != aseed2) {
    // Release the created surface
    continue; // Do not use this surface/frame since it has artifacts
}
However, this does hurt performance, since many frames/surfaces are rejected because of artifacts. Any additions or corrections here would be helpful.
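One variation I still want to try, to avoid throwing away so many frames: keep the surface locked and retry the transfer a few times until the seed stays stable, and only skip the frame if it never settles. A sketch of that idea (the retry cap of 3 is an arbitrary assumption):

// Sketch: retry the blit while the surface is still locked instead of
// rejecting the frame immediately. The retry cap is arbitrary.
BOOL frameIsClean = NO;
for (int attempt = 0; attempt < 3 && !frameIsClean; attempt++) {
    uint32_t seedBefore = IOSurfaceGetSeed(screenSurface);
    IOSurfaceAcceleratorTransferSurface(outAcc, screenSurface, destSurf, dict, NULL);
    uint32_t seedAfter = IOSurfaceGetSeed(screenSurface);
    frameIsClean = (seedBefore == seedAfter);
}
if (!frameIsClean) {
    // Still torn after the retries; unlock, release, and skip this frame.
    IOSurfaceUnlock(screenSurface, kIOSurfaceLockReadOnly, &aseed);
    CFRelease(outAcc);
    CFRelease(destSurf);
    continue;
}

Whether retrying under the read-only lock actually helps depends on how aggressively iOS keeps rendering into the surface, so this is only a guess on my part.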