I'm developing a small web application to study Apache Cassandra and Java EE 6. The Cassandra version is 1.1.6.
There's a problem that is driving me crazy... I created a table with a counter (using cqlsh v3.0.0):
CREATE TABLE test (
  author varchar PRIMARY KEY,
  tot counter
)
and put in some values this way:
update test set tot = tot +1 where author = 'myAuthor';
The column family is updated correctly:
 author   | tot
----------+-----
 myAuthor |   1
But if you try to delete this row and then update it again (with the same key), nothing happens! The table is not updated, and I can't understand why: it looks as if, once a key has been used, it can never be used again. I looked for clues in the DataStax documentation (http://www.datastax.com/docs/1.1/references/cql/cql_lexicon) but didn't manage to find a solution.
Can anyone help me? Thanks in advance.
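For context, a minimal sketch of the sequence described above (my reconstruction, not from the original post; run() is just a stand-in for typing each statement into cqlsh):

// Hypothetical repro script: run() only echoes the CQL here; in practice
// each statement was executed in cqlsh v3.0.0 against the table above.
def run(cql: String): Unit = println(cql)

run("UPDATE test SET tot = tot + 1 WHERE author = 'myAuthor'") // tot becomes 1
run("DELETE FROM test WHERE author = 'myAuthor'")              // row disappears
run("UPDATE test SET tot = tot + 1 WHERE author = 'myAuthor'") // no visible effect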
I'm having some problems trying to read from Kafka with Spark Streaming.
My code is:
val sparkConf = new SparkConf().setMaster("local[2]").setAppName("KafkaIngestor")
val ssc = new StreamingContext(sparkConf, Seconds(2))

val kafkaParams = Map[String, String](
  "zookeeper.connect" -> "localhost:2181",
  "group.id" -> "consumergroup",
  "metadata.broker.list" -> "localhost:9092",
  "zookeeper.connection.timeout.ms" -> "10000"
  //"kafka.auto.offset.reset" -> "smallest"
)

val topics = Set("test")
val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](ssc, kafkaParams, topics)
I had previously started ZooKeeper on port 2181 and a Kafka 0.9.0.0 server on port 9092. But I get the following error in the Spark driver:
Exception in thread "main" java.lang.ClassCastException: kafka.cluster.BrokerEndPoint cannot be cast to kafka.cluster.Broker
    at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$2$$anonfun$3$$anonfun$apply$6$$anonfun$apply$7.apply(KafkaCluster.scala:90)
    at scala.Option.map(Option.scala:145)
    at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$2$$anonfun$3$$anonfun$apply$6.apply(KafkaCluster.scala:90)
    at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$2$$anonfun$3$$anonfun$apply$6.apply(KafkaCluster.scala:87)
ZooKeeper log:
[2015-12-08 00:32:08,226] INFO Got user-level KeeperException when processing sessionid:0x1517ec89dfd0000 type:create cxid:0x34 zxid:0x1d3 …
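For reference, this particular ClassCastException typically points at a Kafka-client version mix: kafka.cluster.BrokerEndPoint exists in the Kafka 0.9 client, while spark-streaming-kafka for Spark 1.x is compiled against the 0.8.2.x client and expects kafka.cluster.Broker. A minimal build.sbt sketch of the alignment (the Spark version here is an assumption; only the relative versions matter):

// build.sbt (sketch): let spark-streaming-kafka pull in its own Kafka client
// (org.apache.kafka:kafka_2.10:0.8.2.1) instead of adding a 0.9.x kafka jar.
scalaVersion := "2.10.6"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-streaming"       % "1.5.2",
  "org.apache.spark" %% "spark-streaming-kafka" % "1.5.2"
)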
I'm new to the Spring framework. I just started learning by following the various guides (http://spring.io/guides), and I'm trying to work through the complete tutorial on web services (http://spring.io/guides/tutorials/bookmarks/).
I'm completely stuck on the JPA data source definition, because I get the following error:
Exception in thread "main" org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'init' defined in main.Application: Unsatisfied dependency expressed through constructor argument with index 0 of type [bookmarks.AccountRepository]: : No qualifying bean of type [bookmarks.AccountRepository] found for dependency: expected at least 1 bean which qualifies as autowire candidate for this dependency. Dependency annotations: {}; nested exception is org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type [bookmarks.AccountRepository] found for dependency: expected at least 1 bean which qualifies as autowire …Run Code Online (Sandbox Code Playgroud) 我的scala解释器/编译器有一个非常奇怪的行为.
My Scala interpreter/compiler is behaving very strangely.
Welcome to Scala version 2.10.3 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_45).
Type in expressions to have them evaluated.
Type :help for more information.
scala> class Foo {
| def bar = {
| println("Foo is bar!")
| }
| }
defined class Foo
scala> var f = Foo()
<console>:7: error: not found: value Foo
var f = Foo()
^
scala>
I also tried putting it in a file, main.scala:
class Foo {
  def bar = {
    println("foo is bar!")
  }
}

object Main {
  def main(args: …
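The file version above is cut off, but the REPL transcript already shows the cause: Foo is a plain class with no companion object, so Foo() has nothing to resolve to (a case class would get an apply method for free). A minimal sketch of the two usual spellings (my illustration, not from the original post):

// Either construct with `new`, or provide a companion apply so Foo() works.
class Foo {
  def bar = println("Foo is bar!")
}

object Foo {
  def apply(): Foo = new Foo // makes Foo() legal
}

object Main {
  def main(args: Array[String]): Unit = {
    val f1 = new Foo // always works for a plain class
    val f2 = Foo()   // works because of the companion apply
    f1.bar
    f2.bar
  }
}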
I'm looking into Aparapi (https://code.google.com/p/aparapi/), and one of the examples it ships with behaves strangely. The example is the first one, 'add'. Built and run, it works fine. I also added the following code to test whether the GPU was really being used:

if (!kernel.getExecutionMode().equals(Kernel.EXECUTION_MODE.GPU)) {
    System.out.println("Kernel did not execute on the GPU!");
}
And it works fine. However, if I try to change the size of the array from 512 to a number larger than 999 (for example 1000), I get the following output:
!!!!!!! clEnqueueNDRangeKernel() failed invalid work group size
after clEnqueueNDRangeKernel, globalSize[0] = 1000, localSize[0] = 128
Apr 18, 2013 1:31:01 PM com.amd.aparapi.KernelRunner executeOpenCL
WARNING: ### CL exec seems to have failed. Trying to revert to Java ###
JTP
Kernel did not execute on the GPU!
Here is my code:

final int size = 1000;
final float[] a = new float[size];
final float[] b = …
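The code above is cut off, but the log already gives the relevant numbers: localSize[0] = 128 and globalSize[0] = 1000. OpenCL requires the global work size to be a multiple of the work-group size, which 512 satisfies (4 × 128) and 1000 does not. A minimal Scala sketch of the usual rounding-up arithmetic (illustration only, not Aparapi API):

// Round a global work size up to the next multiple of the work-group size.
// With the sizes from the log: 1000 -> 1024, while 512 is already a multiple.
def padGlobalSize(size: Int, localSize: Int): Int =
  ((size + localSize - 1) / localSize) * localSize

println(padGlobalSize(512, 128))  // 512
println(padGlobalSize(1000, 128)) // 1024 (kernel must then ignore ids >= 1000)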
I'm working through the Google Python exercises and don't understand the behavior of the built-in min() function, which doesn't seem to produce the expected results. The exercise is 'babynames', and I'm testing the code with the 'baby1990.html' file (https://developers.google.com/edu/python/exercises/baby-names).

def extract_names(filename):
  f = open(filename, 'r').read()
  res = []
  d = {}
  match = re.search(r'<h3(.*?)in (\d+)</h3>', f)
  if match:
    res.append(match.group(2))
  vals = re.findall(r'<td>(\d+)</td><td>(\w+)</td><td>(\w+)</td>', f)
  for n, m, f in vals:
    if m=='Adrian' or f=='Adrian':
      if m not in d:
        d[m] = n
      else:
        d[m] = min(n, d[m])
      if f not in d:
        d[f] = n
      else:
        print "min( "+str(n)+", "+str(d[f])+") = "+str( min(n, d[f]) )
        d[f] = min( [n, d[f]] )
  for name,rank in sorted(d.items()):
    res.append(name+" …
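A hedged aside on the min() calls above: the values captured by the regex are strings, and Python 2 compares strings lexicographically, so min('9', '10') is '10', not '9'. The same pitfall, sketched in Scala for illustration:

// Rank values read out of HTML are strings; strings order lexicographically.
val ranks = List("9", "10", "2")
println(ranks.min)              // "10" — lexicographic: "10" < "2" < "9"
println(ranks.map(_.toInt).min) // 2    — numeric, what the exercise expects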
I started playing with Spark 2.0.1. The new Dataset API is very clean, but I ran into problems with a very simple operation.
Maybe I'm missing something, and I hope someone can help.
These statements:
SparkConf conf = new SparkConf().setAppName("myapp").setMaster("local[*]");

SparkSession spark = SparkSession
    .builder()
    .config(conf)
    .getOrCreate();

Dataset<Info> infos = spark.read().json("data.json").as(Encoders.bean(Info.class));

System.out.println(infos.rdd().count());
produce a
java.lang.NegativeArraySizeException
and a fatal error detected by the JVM (1.8).
Processing the data through the Dataset API (i.e. select and count on the Info objects) works fine.
How can I switch between a Dataset and an RDD?
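For reference, a minimal Scala sketch of the Dataset-to-RDD hop (the Java code above does the equivalent via infos.rdd(); the shape of Info here is an assumption):

import org.apache.spark.sql.SparkSession

case class Info(id: Long, name: String) // assumed shape of the data.json records

val spark = SparkSession.builder().appName("myapp").master("local[*]").getOrCreate()
import spark.implicits._

val infos = spark.read.json("data.json").as[Info] // Dataset[Info]
val rdd   = infos.rdd                             // plain RDD[Info]
println(rdd.count())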