Why doesn't this work in Swift 3? At runtime it crashes with:
'-[my_app_name.displayOtherAppsCtrl tap:]: unrecognized selector sent to instance 0x17eceb70'
override func viewDidLoad() {
    super.viewDidLoad()

    // Uncomment the following line to preserve selection between presentations
    // self.clearsSelectionOnViewWillAppear = false

    // Register cell classes
    //self.collectionView!.register(ImageCell.self, forCellWithReuseIdentifier: reuseIdentifier)

    // Do any additional setup after loading the view.
    let lpgr = UITapGestureRecognizer(target: self, action: Selector("tap:"))
    lpgr.delegate = self
    collectionView?.addGestureRecognizer(lpgr)
}
func tap(gestureReconizer: UITapGestureRecognizer) {
    if gestureReconizer.state != UIGestureRecognizerState.ended {
        return
    }

    let p = gestureReconizer.location(in: self.collectionView)
    let indexPath = self.collectionView?.indexPathForItem(at: p)
    if let index = indexPath { …
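The snippet is cut off here in the original. As for the crash: in Swift 3, Selector("tap:") no longer matches func tap(gestureReconizer:), whose Objective-C name is tapWithGestureReconizer:. A minimal sketch of the compiler-checked alternative (the class name is illustrative, not from the question):

import UIKit

class DisplayOtherAppsCtrl: UICollectionViewController {   // illustrative name
    override func viewDidLoad() {
        super.viewDidLoad()
        // #selector lets the compiler derive and verify the Objective-C selector,
        // unlike the stringly-typed Selector("tap:").
        let recognizer = UITapGestureRecognizer(target: self, action: #selector(tap(_:)))
        collectionView?.addGestureRecognizer(recognizer)
    }

    // @objc exposes the method to the Objective-C runtime that dispatches the selector.
    @objc func tap(_ gestureRecognizer: UITapGestureRecognizer) {
        // handle the tap here
    }
}

With #selector, a selector that doesn't resolve to a visible method is a compile-time error rather than a runtime crash.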
I'm trying to perform continuous speech recognition using AVCapture on the iOS 10 beta. I have set up captureOutput(...) to continuously get CMSampleBuffers. I put these buffers directly into the SFSpeechAudioBufferRecognitionRequest I set up previously, like this:

... do some setup
SFSpeechRecognizer.requestAuthorization { authStatus in
    if authStatus == SFSpeechRecognizerAuthorizationStatus.authorized {
        self.m_recognizer = SFSpeechRecognizer()
        self.m_recognRequest = SFSpeechAudioBufferRecognitionRequest()
        self.m_recognRequest?.shouldReportPartialResults = false
        self.m_isRecording = true
    } else {
        print("not authorized")
    }
}
.... do further setup
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    if (!m_AV_initialized) {
        print("captureOutput(...): not initialized!")
        return
    }
    if (!m_isRecording) {
        return
    }
    let formatDesc = CMSampleBufferGetFormatDescription(sampleBuffer)
    let mediaType = CMFormatDescriptionGetMediaType(formatDesc!)
    if (mediaType …
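The snippet is cut off here in the original. For context, the usual continuation of this pattern (only a sketch; m_recognTask is an assumed property alongside the m_recognizer and m_recognRequest members above) is to append every audio buffer to the request and let a single recognition task deliver the results:

// Inside captureOutput, after checking mediaType == kCMMediaType_Audio:
self.m_recognRequest?.appendAudioSampleBuffer(sampleBuffer)

// Started once, e.g. right after authorization succeeds:
self.m_recognTask = self.m_recognizer?.recognitionTask(with: self.m_recognRequest!) { result, error in
    if let result = result {
        print(result.bestTranscription.formattedString)
    } else if let error = error {
        print("recognition error: \(error)")
    }
}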
I'm trying to port audio input to Mac Catalyst. As of now I'm on Xcode 11 GM, macOS 10.15 Beta 8 (19A558d), and iOS 13 beta 8. The following code does not return any input ports.

let audioSession = AVAudioSession.sharedInstance()
try audioSession.setActive(true, options: .notifyOthersOnDeactivation)

var mic: AVAudioSessionPortDescription? = nil
for input in audioSession.availableInputs! {
    if input.portType == AVAudioSession.Port.builtInMic {
        mic = input
    } else {
        print("Not internal mic")
    }
}
// here: 'mic' is nil
I have granted the "Hardened Runtime - Audio Input" entitlement, and the app asks for and is granted microphone permission.

When accessing audioSession.availableInputs, the following error shows up in the console:

[avas] AVAudioSession_MacOS.mm:258:-[AVAudioSession getChannelsFromAU:PortName:PortID:]: error getting channel layout for auScope 1768845428 element 1

Is this a bug caused by the beta state of the whole stack, or am I missing something?

Thanks
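No answer to the beta question, but a more defensive variant of the probe above (a sketch using only documented AVFoundation API) avoids the force-unwrap and makes an empty result explicit instead of crashing:

import AVFoundation

func findBuiltInMic() -> AVAudioSessionPortDescription? {
    let session = AVAudioSession.sharedInstance()
    // availableInputs is optional and may legitimately be nil or empty.
    guard let inputs = session.availableInputs, !inputs.isEmpty else {
        print("session reports no available inputs")
        return nil
    }
    // Log everything the session reports before filtering.
    for input in inputs {
        print("port: \(input.portName), type: \(input.portType.rawValue)")
    }
    return inputs.first { $0.portType == .builtInMic }
}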
I have a CGImage that was built from a CVPixelBuffer (ARGB). I want to convert the CGImage into an MTLTexture. I use:
let texture: MTLTexture = try m_textureLoader.newTexture(with: cgImage, options: [MTKTextureLoaderOptionSRGB: NSNumber(value: true)])
Later on I want to use the texture in an MPSImage with 3 channels:
let sid = MPSImageDescriptor(channelFormat: MPSImageFeatureChannelFormat.float16, width: 40, height: 40, featureChannels: 3)
preImage = MPSTemporaryImage(commandBuffer: commandBuffer, imageDescriptor: sid)
lanczos.encode(commandBuffer: commandBuffer, sourceTexture: texture!, destinationTexture: preImage.texture)
scale.encode(commandBuffer: commandBuffer, sourceImage: preImage, destinationImage: srcImage)
Now my questions: How does textureLoader.newTexture(...) map the four ARGB channels onto the 3 channels specified in the MPSImageDescriptor? How can I make sure that the RGB components are used and not ARG? Is there a way to specify the channel mapping?
Thanks, Chris
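One way to sidestep the ARGB-to-RGB question entirely (not from the question, just a common alternative) is to skip the CGImage round-trip and wrap the original CVPixelBuffer as a Metal texture through CVMetalTextureCache; the texture then has exactly the buffer's own channel order. A sketch, where .bgra8Unorm is an assumption that must match the buffer's actual pixel format:

import CoreVideo
import Metal

// Wraps a CVPixelBuffer directly as an MTLTexture (zero-copy via the cache).
func makeTexture(from pixelBuffer: CVPixelBuffer, device: MTLDevice) -> MTLTexture? {
    var cache: CVMetalTextureCache?
    guard CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &cache) == kCVReturnSuccess,
          let textureCache = cache else { return nil }

    var cvTexture: CVMetalTexture?
    let width = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)
    // .bgra8Unorm is an assumption; it must match the buffer's pixel format.
    let status = CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache,
                                                           pixelBuffer, nil, .bgra8Unorm,
                                                           width, height, 0, &cvTexture)
    guard status == kCVReturnSuccess, let unwrapped = cvTexture else { return nil }
    return CVMetalTextureGetTexture(unwrapped)
}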
I have an (animated) UIView hierarchy and I want to periodically render the UIView contents into an MTLTexture for further processing.
What I tried is to subclass my parent UIView and
override public class var layerClass: Swift.AnyClass {
    return CAMetalLayer.self
}
But the texture from nextDrawable() is black and does not show the view contents.
Any ideas how to get an MTLTexture that contains the view contents?
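A common workaround (a sketch, not a confirmed solution for this case): snapshot the layer tree into a bitmap with layer.render(in:) and copy the pixels into a texture with replace(region:...). The .bgra8Unorm pixel format is an assumption; a robust version would inspect cgImage.bitmapInfo:

import UIKit
import Metal

// Must run on the main thread: layer.render(in:) walks the live layer tree.
func renderToTexture(view: UIView, device: MTLDevice) -> MTLTexture? {
    let renderer = UIGraphicsImageRenderer(bounds: view.bounds)
    let image = renderer.image { ctx in
        view.layer.render(in: ctx.cgContext)   // snapshots the whole hierarchy
    }
    guard let cgImage = image.cgImage,
          let data = cgImage.dataProvider?.data,
          let bytes = CFDataGetBytePtr(data) else { return nil }

    // Assumption: a 32-bit BGRA bitmap; check cgImage.bitmapInfo in real code.
    let descriptor = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: .bgra8Unorm, width: cgImage.width, height: cgImage.height, mipmapped: false)
    guard let texture = device.makeTexture(descriptor: descriptor) else { return nil }
    texture.replace(region: MTLRegionMake2D(0, 0, cgImage.width, cgImage.height),
                    mipmapLevel: 0,
                    withBytes: bytes,
                    bytesPerRow: cgImage.bytesPerRow)
    return texture
}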