I am trying to read messages from a Kafka topic, but I cannot read any. After a while the process is killed without having read a single message.
This is the rebalance error I get:
[2014-03-21 10:10:53,215] ERROR Error processing message, stopping consumer: (kafka.consumer.ConsoleConsumer$)
kafka.common.ConsumerRebalanceFailedException: topic-1395414642817-47bb4df2 can't rebalance after 4 retries
at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener.syncedRebalance(ZookeeperConsumerConnector.scala:428)
at kafka.consumer.ZookeeperConsumerConnector.kafka$consumer$ZookeeperConsumerConnector$$reinitializeConsumer(ZookeeperConsumerConnector.scala:718)
at kafka.consumer.ZookeeperConsumerConnector$WildcardStreamsHandler.<init>(ZookeeperConsumerConnector.scala:752)
at kafka.consumer.ZookeeperConsumerConnector.createMessageStreamsByFilter(ZookeeperConsumerConnector.scala:142)
at kafka.consumer.ConsoleConsumer$.main(ConsoleConsumer.scala:196)
at kafka.consumer.ConsoleConsumer.main(ConsoleConsumer.scala)
Consumed 0 messages
I tried running ConsumerOffsetChecker, and below is the error I get. I don't know how to fix this. Does anyone have any ideas?
./kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --zkconnect localhost:9092 --topic mytopic --group topic_group
Group Topic Pid Offset logSize Lag Owner
Exception in thread "main" org.I0Itec.zkclient.exception.ZkNoNodeException: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /consumers/
at org.I0Itec.zkclient.exception.ZkException.create(ZkException.java:47)
at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:685)
at org.I0Itec.zkclient.ZkClient.readData(ZkClient.java:766)
at org.I0Itec.zkclient.ZkClient.readData(ZkClient.java:761)
at kafka.utils.ZkUtils$.readData(ZkUtils.scala:459)
at kafka.tools.ConsumerOffsetChecker$.kafka$tools$ConsumerOffsetChecker$$processPartition(ConsumerOffsetChecker.scala:59)
at kafka.tools.ConsumerOffsetChecker$$anonfun$kafka$tools$ConsumerOffsetChecker$$processTopic$1.apply$mcVI$sp(ConsumerOffsetChecker.scala:89)
at kafka.tools.ConsumerOffsetChecker$$anonfun$kafka$tools$ConsumerOffsetChecker$$processTopic$1.apply(ConsumerOffsetChecker.scala:89)
at kafka.tools.ConsumerOffsetChecker$$anonfun$kafka$tools$ConsumerOffsetChecker$$processTopic$1.apply(ConsumerOffsetChecker.scala:89) …
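A side note, not from the original post: with the old ZooKeeper-based consumer, the --zkconnect option of ConsumerOffsetChecker expects the ZooKeeper address (default port 2181), whereas 9092 is normally the Kafka broker port, and the "can't rebalance after 4 retries" message matches the old consumer's default rebalance.max.retries of 4. A minimal sketch of the two things worth checking, assuming ZooKeeper runs locally on its default port:
# Point the offset checker at ZooKeeper (default port 2181), not at the broker port
./kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --zkconnect localhost:2181 --topic mytopic --group topic_group
# Old-consumer settings (consumer.properties) that give rebalancing more headroom
rebalance.max.retries=10
rebalance.backoff.ms=5000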
I am using the following code block to generate an MD5 hash:
public static String encode(String data) throws Exception {
    /* Check the validity of data */
    if (data == null || data.isEmpty()) {
        throw new IllegalArgumentException("Null value provided for "
                + "MD5 Encoding");
    }
    /* Get the instances for a given digest scheme MD5 or SHA */
    MessageDigest m = MessageDigest.getInstance("MD5");
    /* Generate the digest. Pass in the text as bytes, length to the
     * bytes(offset) to be hashed; for full string pass 0 to text.length()
     */
    m.update(data.getBytes(), 0, data.length()); …
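For reference, a self-contained sketch of a complete MD5 helper along the same lines; the hex-encoding step and the use of the byte-array length instead of data.length() are additions for illustration, not part of the original snippet:
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public final class Md5Util {

    /** Returns the MD5 digest of the given text as a lowercase hex string. */
    public static String encode(String data) throws Exception {
        /* Check the validity of data */
        if (data == null || data.isEmpty()) {
            throw new IllegalArgumentException("Null or empty value provided for MD5 encoding");
        }
        /* Get the digest instance for the MD5 scheme */
        MessageDigest m = MessageDigest.getInstance("MD5");
        /* Hash the bytes of the string; use the byte-array length, because
         * data.length() counts characters and differs for multi-byte text. */
        byte[] bytes = data.getBytes(StandardCharsets.UTF_8);
        m.update(bytes, 0, bytes.length);
        /* Convert the 16-byte digest to a 32-character hex string */
        byte[] digest = m.digest();
        StringBuilder hex = new StringBuilder(digest.length * 2);
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }
}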
I have a very basic Spring Boot application that expects an argument from the command line and does not work without it. Here is the code.
@SpringBootApplication
public class Application implements CommandLineRunner {

    private static final Logger log = LoggerFactory.getLogger(Application.class);

    @Autowired
    private Reader reader;

    @Autowired
    private Writer writer;

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    @Override
    public void run(String... args) throws Exception {
        Assert.notEmpty(args);
        List<> cities = reader.get("Berlin");
        writer.write(cities);
    }
}
Here is my JUnit test class.
@RunWith(SpringRunner.class)
@SpringBootTest
public class CityApplicationTests {

    @Test
    public void contextLoads() {
    }
}
Now, Assert.notEmpty() requires that an argument be passed in. At the moment, though, I am writing JUnit tests, and the Assert raises the following exception.
2016-08-25 16:59:38.714 ERROR 9734 --- [ main] o.s.boot.SpringApplication : Application startup failed …
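A side note, not from the original post: the CommandLineRunner is executed while the application context starts, so during the test the empty args array fails Assert.notEmpty before contextLoads() ever runs. On newer Spring Boot versions (2.2 and later) a test can supply arguments through the args attribute of @SpringBootTest; a minimal sketch, assuming that attribute is available on the version in use:
@RunWith(SpringRunner.class)
@SpringBootTest(args = "Berlin")   // forwarded to CommandLineRunner.run(...)
public class CityApplicationTests {

    @Test
    public void contextLoads() {
        // The context starts because run(...) now receives a non-empty array.
    }
}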
Why does the same JSON object produce rendered output with the ul element, but not with the table markup?
I have my Mustache template like this:
<div id="template-ul">
<h3>{{name}}</h3>
<ul>
{{#students}}
<li>{{name}} - {{age}}</li>
{{/students}}
</ul>
</div>
<div id="template-table">
<table>
<thead>
<th>Name</th>
<th>Age</th>
</thead>
<tbody>
{{#students}}
<tr>
<td>{{name}}</td>
<td>{{age}}</td>
</tr>
{{/students}}
</tbody>
</table>
</div>
Here is the JavaScript code:
var testing = {
    "name" : "student-collection",
    "students" : [
        {
            "name" : "John",
            "age" : 23
        },
        {
            "name" : "Mary",
            "age" : 21
        }
    ]
};

var divUl = document.getElementById("template-ul");
var divTable = document.getElementById("template-table");

divUl.innerHTML = Mustache.render(divUl.innerHTML, testing);
divTable.innerHTML = Mustache.render(divTable.innerHTML, testing);
Here is the code on jsFiddle …
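A note on the likely cause, not from the original post: because the table template is stored inside a div, the browser parses it as HTML before Mustache ever reads it, and bare text such as {{#students}} is not a valid child of table or tbody, so the parser hoists it out of the table; reading the div back through innerHTML then hands Mustache a mangled template, while the ul version survives because stray text is kept inside a ul. A common workaround is to keep the template in a script tag the browser does not parse as markup; a minimal sketch, where the type value and the output element id are illustrative placeholders:
<script id="table-template" type="text/x-mustache">
    <table>
        <tbody>
            {{#students}}
            <tr><td>{{name}}</td><td>{{age}}</td></tr>
            {{/students}}
        </tbody>
    </table>
</script>
<div id="table-output"></div>

var source = document.getElementById("table-template").innerHTML;
document.getElementById("table-output").innerHTML = Mustache.render(source, testing);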
I am trying to start ActiveMQ 5.11, and I see a WARNING like the following:
WARN | Transport Connection to: tcp://127.0.0.1:40890 failed: java.io.EOFException
My activemq.xml is as follows:
<transportConnectors>
    <transportConnector name="openwire" uri="tcp://0.0.0.0:${JMS_PORT}" />
    <transportConnector name="stomp" uri="stomp://0.0.0.0:${JMS_STOMP_PORT}"/>
    <transportConnector name="ssl" uri="ssl://0.0.0.0:${JMS_SSL_PORT}"/>
</transportConnectors>

<sslContext>
    <sslContext
        keyStore="file:${JMS_KEY_STORE}"
        keyStorePassword="${JMS_KEY_STORE_PASSWORD}"
        trustStore="file:${JMS_TRUST_STORE}"
        trustStorePassword="${JMS_TRUST_STORE_PASSWORD}"
    />
</sslContext>

<networkConnectors>
    <networkConnector
        name="host1 and host2"
        uri="static://(${JMS_X_SITE_CSV_URL})?wireFormat=ssl&amp;wireFormat.maxInactivityDuration=30000"
        dynamicOnly="true"
        suppressDuplicateQueueSubscriptions="true"
        networkTTL="1"
    />
</networkConnectors>
Here is the full console log.
Java Runtime: Oracle Corporation 1.7.0_05 /usr/java/jdk1.7.0_05/jre
Heap sizes: current=1004928k free=994439k max=1004928k
JVM args: -Xmx1G -Dorg.apache.activemq.UseDedicatedTaskRunner=true -Djava.util.logging.config.file=logging.properties -Djava.security.auth.login.config=/home/dragon/activemq/conf/login.config -Dcom.sun.management.jmxremote -Djava.io.tmpdir=/home/dragon/activemq/tmp -Dactivemq.classpath=/home/dragon/activemq/conf; -Dactivemq.home=/home/dragon/activemq -Dactivemq.base=/home/dragon/activemq -Dactivemq.conf=/home/dragon/activemq/conf -Dactivemq.data=/home/dragon/activemq/data
Extensions classpath:
[/home/dragon/activemq/lib,/home/dragon/activemq/lib/camel,/home/dragon/activemq/lib/optional,/home/dragon/activemq/lib/web,/home/dragon/activemq/lib/extra]
ACTIVEMQ_HOME: /home/dragon/activemq
ACTIVEMQ_BASE: /home/dragon/activemq …
Consider a Hadoop cluster where the default block size in hdfs-site.xml is 64 MB. Later, however, the team decided to change this to 128 MB. Here are my questions about this scenario:
I want to compute something like a map in which the key is a value from a Hive table column and the corresponding value is its count. For example, for the following table:
+-------+-------+
| Col 1 | Col 2 |
+-------+-------+
| Key1 | Val1 |
| Key1 | Val2 |
| Key2 | Val1 |
+-------+-------+
So the Hive query should return something like:
Key1=2
Key2=1
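A sketch of the kind of query that yields such per-key counts, where the table name my_table and the column name col1 are placeholders rather than names from the original post:
-- Count the rows for each distinct value of Col 1
SELECT col1, COUNT(*) AS cnt
FROM my_table
GROUP BY col1;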