What I intend to do is the following:
Fix on a colored object and track it across video frames using histogram backprojection with CamShift. I am using the code below, but it always ends up detecting skin. I know I am making some very simple mistake; it would be helpful if someone could point it out.
//I have included only the integral parts of code. There are no compilation errors.
int lowerH = 80, upperH = 100, lowerS = 80, upperS = 255, lowerV = 80, upperV = 255;
CvScalar output_min = cvScalar(lowerH, lowerS, lowerV, 0); //Color Track
CvScalar output_max = cvScalar(upperH, upperS, upperV, 0);
CvScalar output_min2 = cvScalar(0, lowerS, lowerV, 0); //Color Track
CvScalar output_max2 = cvScalar(180, upperS, upperV, 0);
while(true){
    frame = cvQueryFrame(capture);
    cvCvtColor(frame, output, CV_BGR2HSV);
    cvInRangeS(output, output_min, output_max, output_mask);
    blobs = CBlobResult(output_mask, NULL, 0);
    blobs.Filter(blobs, B_EXCLUDE, CBlobGetArea(), B_LESS, 35);
    int num_blobs = blobs.GetNumBlobs();
    for(int …

This question is an extension of the following:
Link-1: Create an image from an IOSurface and save it
Link-2: Taking a screenshot from within an iOS application - emulating Display Recorder (internal query)
(Referring to Link-1) I have code that takes a screenshot while the app is working in the background. But as mentioned there, it needs a 2-second sleep to work without interruption; otherwise the OS suspends the app. I figured the reason might be that I am not explicitly releasing the IOSurface I create.
Reason - using the link Victor Ronin gave, http://pastie.org/pastes/3734430, the capture works fine even without the sleep. I tried releasing destSurf (the destination surface I create) with CFRelease after each image write, but that did not work.
Any help on when, how, and whether to release the IOSurface I create would be very useful. Thanks.
Update
So, here is exactly what happens (referring to Link-1):
IOSurfaceRef destSurf = IOSurfaceCreate(dict);
IOSurfaceAcceleratorRef outAcc;
IOSurfaceAcceleratorCreate(NULL, 0, &outAcc);
CFDictionaryRef ed = (__bridge CFDictionaryRef)[NSDictionary dictionaryWithObjectsAndKeys: nil];
IOSurfaceAcceleratorTransferSurface(outAcc, screenSurface, destSurf, ed, NULL);
IOSurfaceUnlock(screenSurface, kIOSurfaceLockReadOnly, &aseed);
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, IOSurfaceGetBaseAddress(destSurf), (width*height*4), NULL);
CGImageRef cgImage=CGImageCreate(width, height, 8, 8*4, IOSurfaceGetBytesPerRow(destSurf), CGColorSpaceCreateDeviceRGB(), kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little, provider, NULL, YES, kCGRenderingIntentDefault);
UIImage *image = [UIImage imageWithCGImage: cgImage];
CGImageRelease(cgImage);
UIImageWriteToSavedPhotosAlbum(image, self, nil, nil);
CFRelease(destSurf);
I do some processing when the app enters the background state; I put it under beginBackgroundTaskWithExpirationHandler(), so it should be allotted ten minutes. However, I believe the app gets suspended before that. The task I am performing is memory- and CPU-intensive, so could the OS be putting my app into the suspended state because of that?
If so, is there any way around these restrictions? (I am willing to use private APIs.)
Here is the code that starts the background task:
if([[UIDevice currentDevice] respondsToSelector:@selector(isMultitaskingSupported)]){
    if([[UIDevice currentDevice] isMultitaskingSupported]){
        __block UIBackgroundTaskIdentifier bgTask;
        UIApplication *application = [UIApplication sharedApplication];
        bgTask = [application beginBackgroundTaskWithExpirationHandler:^{
            [application endBackgroundTask:bgTask];
            bgTask = UIBackgroundTaskInvalid;
        }];
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
            [self captureImage];
            [application endBackgroundTask:bgTask];
            bgTask = UIBackgroundTaskInvalid;
        });
    }
}
I am experimenting with ways to capture the screen on Windows, in order to decide on the fastest method. The most common is the GDI approach, and its performance is decent: depending on system load and static/non-static screen content, the capture rate ranges from 27-47 fps (Windows 7, Intel i5 @ 2.6 GHz, 8 GB RAM).
Now, with the DirectX front-buffer method (using the GetFrontBufferData() API), performance is comparable but slightly slower (I could not reach 48 fps).
I went through this article: Fastest method of screen capturing, and tried using GetRenderTarget() and GetRenderTargetData() as suggested in the accepted answer, but as warned in the comments, all I get is a black image. Here is my complete code, including the initial device configuration:
IDirect3DSurface9* pRenderTarget=NULL;
IDirect3DSurface9* pDestTarget=NULL;
// sanity checks.
if (g_pd3dDevice == NULL){
return;
}
HRESULT hr;
// get the render target surface.
hr = g_pd3dDevice->GetRenderTarget(0, &pRenderTarget);
// get the current adapter display mode.
//hr = pDirect3D->GetAdapterDisplayMode(D3DADAPTER_DEFAULT,&d3ddisplaymode);
// create a destination surface.
hr = g_pd3dDevice->CreateOffscreenPlainSurface(1600, 900, D3DFMT_A8R8G8B8, D3DPOOL_SYSTEMMEM, &pDestTarget, NULL);
//copy the render target to the destination surface.
hr = g_pd3dDevice->GetRenderTargetData(pRenderTarget, pDestTarget);
D3DLOCKED_RECT lockedRect;
hr = pDestTarget->LockRect(&lockedRect, NULL, D3DLOCK_NO_DIRTY_UPDATE|D3DLOCK_NOSYSLOCK|D3DLOCK_READONLY);
for( int …

This is a two-part question. I have the following code working: it grabs the current display surface and creates a video from that surface (everything happens in the background).
for(int i=0; i<100; i++){
    IOMobileFramebufferConnection connect;
    kern_return_t result;
    IOSurfaceRef screenSurface = NULL;
    io_service_t framebufferService = IOServiceGetMatchingService(kIOMasterPortDefault, IOServiceMatching("AppleH1CLCD"));
    if(!framebufferService)
        framebufferService = IOServiceGetMatchingService(kIOMasterPortDefault, IOServiceMatching("AppleM2CLCD"));
    if(!framebufferService)
        framebufferService = IOServiceGetMatchingService(kIOMasterPortDefault, IOServiceMatching("AppleCLCD"));
    result = IOMobileFramebufferOpen(framebufferService, mach_task_self(), 0, &connect);
    result = IOMobileFramebufferGetLayerDefaultSurface(connect, 0, &screenSurface);
    uint32_t aseed;
    IOSurfaceLock(screenSurface, kIOSurfaceLockReadOnly, &aseed);
    uint32_t width = IOSurfaceGetWidth(screenSurface);
    uint32_t height = IOSurfaceGetHeight(screenSurface);
    m_width = width;
    m_height = height;
    CFMutableDictionaryRef dict;
    int pitch = width*4, size = width*height*4;
    int bPE = 4;
    char pixelFormat[4] = {'A','R','G','B'};
    dict = CFDictionaryCreateMutable(kCFAllocatorDefault, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    CFDictionarySetValue(dict, kIOSurfaceIsGlobal, kCFBooleanTrue); …

I have a video file saved in a local directory under my app's Documents folder. I want to play that file when the user taps an item in an embedded table view I created. My playback code is as follows:
NSString* documentPath = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
NSString* path = [documentPath stringByAppendingPathComponent:@"DemoRecording.mp4"];
NSURL* movieURL = [NSURL fileURLWithPath: path];
MPMoviePlayerViewController* moviePlayer = [[MPMoviePlayerViewController alloc] initWithContentURL: movieURL];
MPMoviePlayerController* player = moviePlayer.moviePlayer;
[player.view setFrame: self.view.bounds];
[self.view addSubview: player.view];
[player play];
[self presentMoviePlayerViewControllerAnimated: moviePlayer];
The video does not play. The path is correct and the file exists.
I know this is a duplicate question. I have already referred to these links:
Playing a downloaded video from the Documents directory using Cocoa-Touch
MPMoviePlayer - load and play a movie saved in the app's Documents folder
but could not find a clear answer.
Please help me solve this. I am using iOS 5.1.1, in case that changes anything.
Edit:
It does work now. I had forgotten to pass the URL to the movie player.