I ran into this problem: I save some content to iCloud using CloudKit, but fetching immediately afterwards does not return the newly inserted record.
For example:
let todoRecord = CKRecord(recordType: "Todos")
todoRecord.setValue(todo, forKey: "todotext")
publicDB.saveRecord(todoRecord, completionHandler: { (record, error) -> Void in
    NSLog("Saved in cloudkit")
    let predicate = NSPredicate(value: true)
    let query = CKQuery(recordType: "Todos", predicate: predicate)
    self.publicDB.performQuery(query, inZoneWithID: nil) { results, error in
        if error != nil {
            dispatch_async(dispatch_get_main_queue()) {
                self.delegate?.errorUpdating(error)
                return
            }
        } else {
            NSLog("###### fetch after save : \(results.count)")
            dispatch_async(dispatch_get_main_queue()) {
                self.delegate?.modelUpdated()
                return
            }
        }
    }
})
Result:
Before saving in cloud kit : 3
CloudKit[22799:882643] Saved in cloudkit
CloudKit[22799:882643] ###### …
My use case:
I want results ordered by timestamp DESC, but I don't want the timestamp to be the second column in the primary key, because that would constrain which queries I can run.
For example:
create table demo(oid int,cid int,ts timeuuid,PRIMARY KEY (oid,cid,ts)) WITH CLUSTERING ORDER BY (ts DESC);
Queries needed:
I want the results of all the queries below to be in DESC order of timestamp:
select * from demo where oid = 100;
select * from demo where oid = 100 and cid = 10;
select * from demo where oid = 100 and cid = 100 and ts > minTimeuuid('something');
I am trying to create this table with CLUSTERING ORDER in CQL and I get this error:
cqlsh:v> create table demo(oid int,cid int,ts …
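The error is likely because CLUSTERING ORDER BY has to cover the clustering columns in their declared order (e.g. `(cid ASC, ts DESC)`); ordering by ts alone is rejected. And even with `(cid ASC, ts DESC)`, rows inside a partition are sorted by the full clustering key, cid first, so the first query cannot come back in global ts DESC order. A plain-Python sort sketch (hypothetical (cid, ts) sample rows for partition oid=100, with ts as plain ints) illustrates the conflict:

```python
# Hypothetical rows for one partition (oid = 100): (cid, ts) pairs.
rows = [(10, 5), (10, 9), (20, 7), (20, 1)]

# On-disk order under CLUSTERING ORDER BY (cid ASC, ts DESC):
clustered = sorted(rows, key=lambda r: (r[0], -r[1]))
print(clustered)  # [(10, 9), (10, 5), (20, 7), (20, 1)]

# Order the first query (oid = 100 only) actually wants: global ts DESC.
by_ts = sorted(rows, key=lambda r: -r[1])
print(by_ts)      # [(10, 9), (20, 7), (10, 5), (20, 1)]

# The two orders differ, so one on-disk layout cannot serve both queries.
print(clustered == by_ts)  # False
```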
Any idea why Spark's StandardScaler does this? According to the definition of StandardScaler:

StandardScaler standardizes a set of features to have zero mean and a standard deviation of 1. The flag withStd scales the data to unit standard deviation, while the flag withMean (false by default) centers the data before scaling.
>>> tmpdf.show(4)
+----+----+----+------------+
|int1|int2|int3|temp_feature|
+----+----+----+------------+
| 1| 2| 3| [2.0]|
| 7| 8| 9| [8.0]|
| 4| 5| 6| [5.0]|
+----+----+----+------------+
>>> sScaler = StandardScaler(withMean=True, withStd=True).setInputCol("temp_feature")
>>> sScaler.fit(tmpdf).transform(tmpdf).show()
+----+----+----+------------+-------------------------------------------+
|int1|int2|int3|temp_feature|StandardScaler_4fe08ca180ab163e4120__output|
+----+----+----+------------+-------------------------------------------+
| 1| 2| 3| [2.0]| [-1.0]|
| 7| 8| 9| [8.0]| [1.0]|
| 4| 5| 6| [5.0]| [0.0]|
+----+----+----+------------+-------------------------------------------+
In the numpy world:
>>> x
array([2., 8., 5.])
>>> (x - x.mean())/x.std()
array([-1.22474487, 1.22474487, 0. ])
In the sklearn world:
>>> scaler = StandardScaler(with_mean=True, with_std=True)
>>> data
[[2.0], [8.0], …
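The discrepancy is most likely the denominator: Spark ML's StandardScaler divides by the corrected sample standard deviation (ddof=1), whereas numpy's x.std() and sklearn's StandardScaler use the population standard deviation (ddof=0). A stdlib-only sketch of the two conventions on the same column [2.0, 8.0, 5.0]:

```python
import statistics

x = [2.0, 8.0, 5.0]
mean = statistics.mean(x)        # 5.0

sample_sd = statistics.stdev(x)  # ddof=1: sqrt(((-3)**2 + 3**2 + 0**2) / 2) = 3.0
pop_sd = statistics.pstdev(x)    # ddof=0: sqrt(18 / 3) = sqrt(6) ~= 2.449

print([(v - mean) / sample_sd for v in x])  # [-1.0, 1.0, 0.0]  <- Spark's result
print([(v - mean) / pop_sd for v in x])     # ~[-1.2247, 1.2247, 0.0]  <- numpy/sklearn
```

With only three values the two denominators differ by a factor of sqrt(3/2), which is exactly the 1.2247 showing up in the numpy output.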
I am indexing some content using the CoreSpotlight API. For some reason I cannot find the data when searching in Spotlight.
let atset:CSSearchableItemAttributeSet = CSSearchableItemAttributeSet()
atset.title = "Simple title"
atset.contentDescription = "Simple twitter search"
let item = CSSearchableItem(uniqueIdentifier: "id1", domainIdentifier: "com.shrikar.twitter.search", attributeSet: atset)
CSSearchableIndex.defaultSearchableIndex().indexSearchableItems([item]) { (error) -> Void in
print("Indexed")
}
When I run the app I see that the data is indexed and the error is nil. I have also added CoreSpotlight and MobileCoreServices to the build phases.
I started playing with Spark locally and found this weird issue:
1) pip install pyspark==2.3.1
2) pyspark

>>> import pandas as pd
>>> from pyspark.sql.functions import pandas_udf, PandasUDFType, udf
>>> df = pd.DataFrame({'x': [1, 2, 3], 'y': [1.0, 2.0, 3.0]})
>>> sp_df = spark.createDataFrame(df)
>>> @pandas_udf('long', PandasUDFType.SCALAR)
... def pandas_plus_one(v):
...     return v + 1
>>> sp_df.withColumn('v2', pandas_plus_one(sp_df.x)).show()
This follows the example from https://databricks.com/blog/2017/10/30/introducing-vectorized-udfs-for-pyspark.html
Any idea why I keep getting this error?
py4j.protocol.Py4JJavaError: An error occurred while calling o108.showString.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 3.0 failed 1 times, most recent failure: Lost task 0.0 in stage 3.0 (TID 8, localhost, executor driver): org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)
    at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:333)
    at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:322)
    at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
    at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.read(ArrowPythonRunner.scala:177)
    at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.read(ArrowPythonRunner.scala:121)
    at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:252)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
    at org.apache.spark.sql.execution.python.ArrowEvalPythonExec$$anon$2.(ArrowEvalPythonExec.scala:90)
    at org.apache.spark.sql.execution.python.ArrowEvalPythonExec.evaluate(ArrowEvalPythonExec.scala:88)
    at org.apache.spark.sql.execution.python.EvalPythonExec$$anonfun$doExecute$1.apply(EvalPythonExec.scala:131)
    at org.apache.spark.sql.execution.python.EvalPythonExec$$anonfun$doExecute$1.apply(EvalPythonExec.scala:93)
    at org.apache.spark.rdd.RDD$$anonfun …
I am using the prepend operation on a ListBuffer and observing some strange behavior. The prepend operation returns a new list, which is fine, but shouldn't it also modify the ListBuffer? After prepending I still see that the length of the ListBuffer has not changed. Am I missing something here?
scala> val buf = new ListBuffer[Int]
buf: scala.collection.mutable.ListBuffer[Int] = ListBuffer()
scala> buf += 1
res47: buf.type = ListBuffer(1)
scala> buf += 2
res48: buf.type = ListBuffer(1, 2)
scala> 3 +: buf
res49: scala.collection.mutable.ListBuffer[Int] = ListBuffer(3, 1, 2)
scala> buf.toList
res50: List[Int] = List(1, 2)
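For comparison, the same returns-a-new-collection vs. mutates-in-place split exists for Python lists, which may make the Scala behavior above less surprising: `3 +: buf` behaves like list concatenation (builds a new collection, receiver untouched), while an in-place prepend behaves like `insert(0, ...)`. A small sketch of the analogy:

```python
buf = [1, 2]

# Analogue of `3 +: buf`: builds and returns a NEW list; buf is untouched.
new_list = [3] + buf
print(new_list)  # [3, 1, 2]
print(buf)       # [1, 2]

# Analogue of an in-place prepend: mutates the receiver.
buf.insert(0, 3)
print(buf)       # [3, 1, 2]
```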
I recently started adding some data to Cassandra for performance testing, and looking at nodetool cfstats the SSTable count is still 0 even after inserting a lot of data. Even the live and total space used are still 0. Am I missing something?
Keyspace: perftest
Read Count: 0
Read Latency: NaN ms.
Write Count: 126056
Write Latency: 0.028907025449006793 ms.
Pending Tasks: 0
Column Family: items
SSTable count: 0
Space used (live): 0
Space used (total): 0
SSTable Compression Ratio: 0.0
Number of Keys (estimate): 0
Memtable Columns Count: 252112
Memtable Data Size: 214612059
Memtable Switch Count: 0
Read Count: 0
Read Latency: NaN ms.
Write Count: 126056
Write Latency: …
Write Latency: …