I am trying to build a Kafka Streams application that uses lambda expressions.
My Maven build is mvn clean install.
When I run it via Run As > Maven build, I get the following error:
[ERROR] COMPILATION ERROR :
[INFO] -------------------------------------------------------------
[ERROR] /home/junaid/eMumba/StreamsExample/streams.examples/src/main/java/myapps/Pipe.java:[53,38] lambda expressions are not supported in -source 1.5
(use -source 8 or higher to enable lambda expressions)
[INFO] 1 error
[INFO] -------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 2.265 s
[INFO] Finished at: 2018-02-24T14:50:04+05:00
[INFO] Final Memory: 11M/150M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on project streams.examples: Compilation failure
[ERROR] /home/junaid/eMumba/StreamsExample/streams.examples/src/main/java/myapps/Pipe.java:[53,38] lambda expressions are not supported in …

I am grouping a stream by key and trying to aggregate the values per grouped key. I am following the streams-developer-guide.
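Back to the -source 1.5 failure in the first question: maven-compiler-plugin 3.1 defaults to an old source level that predates lambdas. A sketch of the usual fix, assuming the project targets Java 8 — add these properties to pom.xml:

```xml
<!-- Pin the compiler to Java 8 so lambda expressions compile -->
<properties>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
</properties>
```

The same effect can be had by configuring maven-compiler-plugin's source/target directly in the build section.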
I am running into an error at withValueSerde. It says:
The method withValueSerde(Serde<Object>) in the type Materialized<Object,Object,StateStore> is not applicable for the arguments (Serde<Long>)
Code:
KStream<String, String> inputStream = builder.stream("input_topic");
KStream<String, Integer> transformedStream = inputStream.map(
(key, value) -> KeyValue.pair(getKey(value), getValue(value)));
KGroupedStream<String, Integer> groupedStream = transformedStream.groupByKey();
KTable<String, Long> aggregatedStream = groupedStream.aggregate(() -> 0L,
(aggKey, newValue, aggValue) -> aggValue + newValue,
Materialized.as("aggregated-stream-store").withValueSerde(Serdes.Long()));
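The error occurs because Materialized.as("aggregated-stream-store") on its own is inferred as Materialized<Object, Object, StateStore>, so withValueSerde will not accept a Serde<Long>. A sketch of the usual fix, supplying the type parameters explicitly (same store name as above, a KeyValueStore-backed materialization assumed):

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;

// Explicit generics pin the Materialized type so withValueSerde(Serde<Long>) applies
KTable<String, Long> aggregatedStream = groupedStream.aggregate(
        () -> 0L,
        (aggKey, newValue, aggValue) -> aggValue + newValue,
        Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("aggregated-stream-store")
                .withValueSerde(Serdes.Long()));
```

This is a drop-in replacement for the aggregate call above; the rest of the topology is unchanged.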
I am getting a SQL exception in my query:
[SqlException (0x80131904): Incorrect syntax near '?'. Incorrect syntax near the keyword 'User'.]
What am I doing wrong? What does this exception mean?
protected void Submit_Click(object sender, EventArgs e)
{
string myConnection = @"Data Source=REDDEVIL;Initial..."
SqlConnection conn = new SqlConnection(myConnection);
HttpPostedFile postedFile = FileUpload1.PostedFile;
string fileName = Path.GetFileName(postedFile.FileName);
string fileExtension = Path.GetExtension(fileName);
if (fileExtension.ToLower() == ".jpg" || fileExtension.ToLower() == ".bmp" ||
fileExtension.ToLower() == ".gif" || fileExtension.ToLower() == ".png")
{
Stream stream = postedFile.InputStream;
BinaryReader binaryReader = new BinaryReader(stream);
byte[] bytes = binaryReader.ReadBytes((int)stream.Length);
string query2 = "INSERT INTO Manager (ID,Name,Address,Phone,Cell,Email,DOB,Commission,Comments,Photo,User ID,IsActive) VALUES (?ID,?Name,?Address,?Phone,?Cell,?Email,?DOB,?Commission,?Comments,?Photo,?User_ID,?IsActive)";
SqlCommand cmd2 = new SqlCommand(query2, conn);
cmd2.Parameters.AddWithValue("?ID", mgrID.Text); …

I am using Bootstrap.css, which has a class called "form-control". I am using this class to style my aspx controls. The problem is that the textbox font color is gray, and I want it to be black. I know nothing about CSS. Is there a way to change it from gray to black? Thanks.
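About the SqlException above: SqlCommand for SQL Server does not understand ?-style placeholders (that syntax belongs to OleDb/ODBC); named parameters use an @ prefix. The second message comes from the unbracketed space in the User ID column name. A sketch of the corrected command, using the same columns and textbox as in the question:

```csharp
// @-prefixed parameters instead of ?, and [User ID] bracketed because of the space
string query2 = "INSERT INTO Manager (ID, Name, Address, Phone, Cell, Email, DOB, " +
                "Commission, Comments, Photo, [User ID], IsActive) " +
                "VALUES (@ID, @Name, @Address, @Phone, @Cell, @Email, @DOB, " +
                "@Commission, @Comments, @Photo, @User_ID, @IsActive)";
SqlCommand cmd2 = new SqlCommand(query2, conn);
cmd2.Parameters.AddWithValue("@ID", mgrID.Text);
// ... add the remaining parameters (@Name, @Address, ...) the same way ...
```
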
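For the Bootstrap question: .form-control is what sets the gray input text color, so a later rule with equal or higher specificity overrides it. A minimal override, assuming your own stylesheet is linked after bootstrap.css:

```css
/* Make text typed into .form-control inputs black instead of gray */
.form-control {
    color: #000;
}
```
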
I want to extract the timestamp embedded in each message and send it to my database as part of a JSON payload.
I want to get the following three timestamps.
Event time: The point in time when an event or data record occurred, i.e. was originally created “by the source”.
Processing time: The point in time when the event or data record happens to be processed by the stream processing application, i.e. when the record is being consumed.
Ingestion time: The point in time when an event or data record is stored in a topic partition by a Kafka broker.
Here is my streams application code:
Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-pipe");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, BROKER_URL + …

I am trying to use Kafka with a Postgres sink via the JDBC sink connector.
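Back to the three-timestamps question above: a Kafka record carries only one stored timestamp, and whether that is event time or ingestion time depends on the topic's message.timestamp.type setting (CreateTime vs LogAppendTime), so a single consumed record cannot yield both. Inside Kafka Streams that stored timestamp is exposed via the ProcessorContext; processing time is simply the wall clock when the record is handled. A sketch, assuming a Kafka Streams 2.x API and a placeholder output_topic:

```java
// Sketch: forward each record's stored timestamp and the processing-time wall clock
// as a JSON string value; topic names are placeholders.
builder.<String, String>stream("input_topic")
        .transform(() -> new Transformer<String, String, KeyValue<String, String>>() {
            private ProcessorContext context;

            @Override
            public void init(ProcessorContext context) {
                this.context = context;
            }

            @Override
            public KeyValue<String, String> transform(String key, String value) {
                long recordTs = context.timestamp();            // event or ingestion time,
                                                                // per message.timestamp.type
                long processingTs = System.currentTimeMillis(); // processing time
                String json = "{\"record_ts\":" + recordTs
                        + ",\"processing_ts\":" + processingTs
                        + ",\"value\":\"" + value + "\"}";
                return KeyValue.pair(key, json);
            }

            @Override
            public void close() { }
        })
        .to("output_topic");
```
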
Exception:
INFO Unable to connect to database on attempt 1/3. Will retry in 10000 ms. (io.confluent.connect.jdbc.util.CachedConnectionProvider:91)
java.sql.SQLException: No suitable driver found for jdbc:postgresql://localhost:5432/casb
at java.sql.DriverManager.getConnection(DriverManager.java:689)
at java.sql.DriverManager.getConnection(DriverManager.java:247)
at io.confluent.connect.jdbc.util.CachedConnectionProvider.newConnection(CachedConnectionProvider.java:85)
at io.confluent.connect.jdbc.util.CachedConnectionProvider.getValidConnection(CachedConnectionProvider.java:68)
at io.confluent.connect.jdbc.sink.JdbcDbWriter.write(JdbcDbWriter.java:56)
at io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:69)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:495)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:288)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:198)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:166)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:170)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:214)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Sink.properties:
name=test-sink
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=1
topics=fp_test
connection.url=jdbc:postgresql://localhost:5432/casb
connection.user=admin
connection.password=***
auto.create=true
I have set plugin.path=/usr/share/java/kafka-connect-jdbc
In /usr/share/java/kafka-connect-jdbc I have the following files:
kafka-connect-jdbc-4.0.0.jar …
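The "No suitable driver found" error above usually means the PostgreSQL JDBC driver jar is not on the Connect worker's classpath alongside the connector. A sketch of the fix; the source path and driver version here are placeholders:

```shell
# Copy the PostgreSQL JDBC driver (downloadable from jdbc.postgresql.org)
# next to kafka-connect-jdbc-4.0.0.jar, then restart the Connect worker
cp /path/to/postgresql-42.2.2.jar /usr/share/java/kafka-connect-jdbc/
```
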
I am trying to execute a command inside a postgres container from a shell script. This is what I have so far:
kubectl exec -it <postgres_pod> -n <deployment> -- bash -c "psql -U postgres -d database -c 'select count from table where name='FOO';'"
I get the following error:
ERROR: column "foo" does not exist
LINE 1: select count from table where name=FOO;
^
The query runs fine inside the container, so there must be something wrong with how I am passing the command. I did try another query:
kubectl exec -it <postgres_pod> -n <deployment> -- bash -c "psql -U postgres -d database -c 'select * from table;'"
This runs fine. So I am guessing it has something to do with how I pass the where clause, where name='FOO'. How can I make it work? Please help.
Update:
I tried escaping using the following approaches:
1: Double quotes
kubectl exec -it <postgres_pod> -n <deployment> -- bash -c …
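The likely culprit in the commands above is quote nesting: inside bash -c "… 'select … where name='FOO';'" the inner single quotes terminate the outer ones, so FOO reaches Postgres unquoted and is parsed as a column name, hence ERROR: column "foo" does not exist. A sketch of a working form, double-quoting the SQL for psql so the quotes around FOO survive (bash -c is not needed; pod and namespace names stay placeholders as in the question):

```shell
# Double quotes around the SQL let the single-quoted 'FOO' literal survive
kubectl exec -it <postgres_pod> -n <deployment> -- \
  psql -U postgres -d database -c "select count from table where name = 'FOO';"
```
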
I want to convert days since the epoch to a date. Say I have the following Timestamp in days: 17749; it should convert to Monday Aug 06 2018.
I am trying the following code:
Date date = new SimpleDateFormat("D").parse(String.valueOf("17749"));
System.out.println(date);
But I am getting Sun Aug 05 00:00:00 PKT 2018. The date is one day earlier than it should be. How can I convert it to yyyy-mm-dd?
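The off-by-one comes from new SimpleDateFormat("D"): it parses 17749 as a day-of-year, and day-of-year counting starts at 1 (Jan 1 1970 is day 1, not day 0), so the result lands one day early; it also applies the local time zone (PKT). A sketch using java.time, which models days since the epoch directly and involves no time zone:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class EpochDays {
    public static void main(String[] args) {
        // 17749 whole days after 1970-01-01
        LocalDate date = LocalDate.ofEpochDay(17749);

        System.out.println(date);                // 2018-08-06
        System.out.println(date.getDayOfWeek()); // MONDAY
        // Explicit yyyy-MM-dd formatting (note: MM, not mm, for the month)
        System.out.println(date.format(DateTimeFormatter.ofPattern("yyyy-MM-dd")));
    }
}
```
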
I want to remove every 3rd item from a list. For example:
list1 = list(['a','b','c','d','e','f','g','h','i','j'])
After removing the items at indices that are multiples of three, the list would be:
['a','b','d','e','g','h','j']
How can I do this?
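A sketch of two ways to do this, keeping every element whose 1-based position is not a multiple of three (list1 as in the question):

```python
list1 = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']

# Comprehension: keep items whose 1-based position is not a multiple of 3
result = [x for i, x in enumerate(list1) if (i + 1) % 3 != 0]
print(result)  # ['a', 'b', 'd', 'e', 'g', 'h', 'j']

# In-place alternative: delete indices 2, 5, 8, ... with an extended slice
del list1[2::3]
print(list1)   # ['a', 'b', 'd', 'e', 'g', 'h', 'j']
```

The del form mutates the original list, while the comprehension builds a new one; pick whichever fits the surrounding code.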