I am using Xcode 8.0 beta 4.
In previous versions, UIViewController had a method for setting the status bar style:
public func preferredStatusBarStyle() -> UIStatusBarStyle
However, I found that in Swift 3 it changed to a get-only variable:
public var preferredStatusBarStyle: UIStatusBarStyle { get }
How can I provide the style to use in my UIViewController?
How can I do face detection in real time, just as the Camera app does?

I noticed that AVCaptureStillImageOutput is deprecated after 10.0, so I use AVCapturePhotoOutput instead. However, I found that the image I save for face detection is not that satisfactory. Any ideas?
UPDATE
After trying what @Shravya Boggarapu mentioned, I am currently using AVCaptureMetadataOutput to detect faces, without CIFaceDetector. It works as expected. However, when I try to draw the bounds of a face, they seem misplaced. Any ideas?
let metaDataOutput = AVCaptureMetadataOutput()

captureSession.sessionPreset = AVCaptureSessionPresetPhoto
let backCamera = AVCaptureDevice.defaultDevice(withDeviceType: .builtInWideAngleCamera, mediaType: AVMediaTypeVideo, position: .back)
do {
    let input = try AVCaptureDeviceInput(device: backCamera)
    if (captureSession.canAddInput(input)) {
        captureSession.addInput(input)

        // MetadataOutput instead
        if (captureSession.canAddOutput(metaDataOutput)) {
            captureSession.addOutput(metaDataOutput)
            metaDataOutput.setMetadataObjectsDelegate(self, queue: DispatchQueue.main)
            metaDataOutput.metadataObjectTypes = [AVMetadataObjectTypeFace]

            previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
            previewLayer?.frame = cameraView.bounds
            previewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
            cameraView.layer.addSublayer(previewLayer!)

            captureSession.startRunning()
        }
    }
} catch {
    print(error.localizedDescription)
}
and
extension CameraViewController: AVCaptureMetadataOutputObjectsDelegate {
    func captureOutput(_ captureOutput: …

I am new to React. I was following the tutorial steps after installing Node. However, when I run npm start, the error keeps showing up:
Failed to compile.
./src/index.js
Module build failed: Error: Failed to load plugin import: Cannot find module 'eslint-plugin-import'
Referenced from:
at Array.forEach (native)
at Array.reduceRight (native)
Also, I tried to install 'eslint-plugin-import', but in vain:
npm install eslint-plugin-import -g
npm WARN eslint-plugin-import@2.7.0 requires a peer of eslint@2.x - 4.x but none was installed.
+ eslint-plugin-import@2.7.0
System information
eslint --version
v4.2.0
npm -v
5.2.0
npm list -g | grep eslint
├── eslint@4.2.0
│   └── eslint-scope@3.7.1
├── eslint-plugin-import@2.7.0
│   ├── eslint-import-resolver-node@0.3.1
│   └── eslint-module-utils@2.1.1
└── eslint-plugin-react@7.1.0
Does the autoresizing mask work on iPhone X?
When Apple introduced the new Auto Layout features last year, everything worked fine without any constraints.
However, when I try Auto Layout on the iPhone X simulator, it does not work with the safe area.
(✓ Use Safe Area Layout Guides)
Auto Layout (without constraint)
When I use a storyboard segue, presenting another view controller on the screen is very smooth. However, when I don't use the storyboard and just add one simple line of code, navigationController?.pushViewController(UIViewController(), animated: true), the transition is a bit laggy.
I also read "Delay when pushing view controller (iOS)". But even when I push a brand-new view controller (with no extra code inside), the transition is still a bit laggy. Any idea?
How can I make an ARNode point at an ARAnchor?
I want to display art.scnassets/ship.scn at the centre of the screen and have it point at the object I just placed in the scene.
/// ViewController class
func placeObject() {
    let screenCentre: CGPoint = CGPoint(x: self.sceneView.bounds.midX, y: self.sceneView.bounds.midY)
    guard let hitTestResult = sceneView.hitTest(screenCentre, types: [.featurePoint]).first else { return }

    // Place an anchor for a virtual character.
    let anchor = ARAnchor(name: identifierString, transform: hitTestResult.worldTransform)
    sceneView.session.add(anchor: anchor)

    // Add to the item model
    ItemModel.shared.anchors.append((identifierString, anchor))
}

func showDirection(of object: ARAnchor) { // object: saved anchor
    if !Guide.shared.isExist {
        let startPoint = SCNVector3(0, 0, -1)
        let targetPoint …

I have implemented a slightly changed version of Apple's SwiftUI tutorial using MapKit.
When I run it and resize the corresponding view, I get the following error message:
[SwiftUI] NSHostingView is being laid out reentrantly while rendering its SwiftUI content. This is not supported and the current layout pass will be skipped.
The code that calls the view:
MapView(coordinate: location?.coordinates)
    .frame(minHeight: 200)
    .overlay(
        GeometryReader { proxy in
            Button("Open in Maps") {
                if (self.location?.coordinates != nil) {
                    let destination = MKMapItem(placemark: MKPlacemark(coordinate: self.location!.coordinates))
                    destination.name = "the car"
                    destination.openInMaps()
                }
            }
            .frame(width: proxy.size.width, height: proxy.size.height, alignment: .bottomTrailing)
            .offset(x: -10, y: -10)
        }
    )
The map view struct:
import SwiftUI
import MapKit
/// a …

I am facing a memory-related issue: whenever I go to another view and then go back (dismiss it), the memory keeps piling up.
I have the following code in my second viewController. However, it does not deallocate the memory.
override func viewWillDisappear() {
    super.viewWillDisappear()
    self.dismissController(self)
    self.removeFromParentViewController()
}
Thanks in advance.
I am currently working on a BLE device using CoreBluetooth. I can find my device through CBCentralManagerDelegate and connect to it.
When I discover the characteristics of a service, I get the correct UUIDs, but the characteristics' values are nil. Any ideas?
func peripheral(_ peripheral: CBPeripheral, didDiscoverCharacteristicsFor service: CBService, error: Error?) {
    if error != nil {
        print("ERROR DISCOVERING CHARACTERISTICS: \(error?.localizedDescription)")
        return
    }
    if let characteristics = service.characteristics {
        for characteristic in characteristics {
            print("--------------------------------------------")
            print("Characteristic UUID: \(characteristic.uuid)")
            print("Characteristic isNotifying: \(characteristic.isNotifying)")
            print("Characteristic properties: \(characteristic.properties)")
            print("Characteristic descriptors: \(characteristic.descriptors)")
            print("Characteristic value: \(characteristic.value)")
        }
    }
}
-------------------------------------------------------------------
Characteristic UUID: FA01
Characteristic isNotifying: false
Characteristic properties: CBCharacteristicProperties(rawValue: 26)
Characteristic descriptors: nil
Characteristic value: nil
According to the Bluetooth …
I want to compute the SVM loss without loops, but I cannot get it right. I need some enlightenment.
and
def svm_loss_vectorized(W, X, y, reg):
    loss = 0.0
    scores = np.dot(X, W)
    correct_scores = scores[y]
    deltas = np.ones(scores.shape)
    margins = scores - correct_scores + deltas
    margins[margins < 0] = 0  # max -> Boolean array indexing
    margins[np.arange(scores.shape[0]), y] = 0  # Don't count j = yi
    loss = np.sum(margins)

    # Average
    num_train = X.shape[0]
    loss /= num_train

    # Regularization
    loss += 0.5 * reg * np.sum(W * W)
    return loss
It should output the same loss as the following function:
def svm_loss_naive(W, X, y, reg):
    num_classes = …
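For what it's worth, here is a sketch of one common fully vectorized formulation, paired with a loop-based reference to check against. This is my own illustration, not the asker's intended code: the function names `svm_loss_vectorized_fixed` and `svm_loss_naive_check` are made up, and it assumes the usual shapes W: (D, C), X: (N, D), y: (N,) with delta = 1 (matching `deltas = np.ones(...)` above). Note that `scores[y]` selects whole rows indexed by the labels; picking each sample's correct-class score needs paired integer indexing instead.

```python
import numpy as np

def svm_loss_vectorized_fixed(W, X, y, reg):
    """Multiclass hinge (SVM) loss with no explicit loops (sketch)."""
    num_train = X.shape[0]
    scores = X.dot(W)  # (N, C)
    # scores[y] would select whole ROWS by label; pair row indices
    # with labels to get one correct-class score per sample:
    correct = scores[np.arange(num_train), y][:, np.newaxis]  # (N, 1)
    margins = np.maximum(0.0, scores - correct + 1.0)  # delta = 1
    margins[np.arange(num_train), y] = 0.0  # don't count j == y_i
    loss = np.sum(margins) / num_train
    loss += 0.5 * reg * np.sum(W * W)
    return loss

def svm_loss_naive_check(W, X, y, reg):
    """Loop-based reference implementation for comparison."""
    num_train, num_classes = X.shape[0], W.shape[1]
    loss = 0.0
    for i in range(num_train):
        scores = X[i].dot(W)
        for j in range(num_classes):
            if j == y[i]:
                continue  # skip the correct class
            margin = scores[j] - scores[y[i]] + 1.0
            if margin > 0:
                loss += margin
    loss = loss / num_train + 0.5 * reg * np.sum(W * W)
    return loss
```

On random data the two versions agree to floating-point precision, which is the usual way to sanity-check a vectorized loss against its naive counterpart.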