Has anyone run into a similar problem with Kafka? I'm getting this error: Too many open files, and I don't know why. Here are some logs:
[2018-08-27 10:07:26,268] ERROR Error while deleting the clean shutdown file in dir /home/weihu/kafka/kafka/logs (kafka.server.LogD)
java.nio.file.FileSystemException: /home/weihu/kafka/kafka/logs/BC_20180821_1_LOCATION-87/leader-epoch-checkpoint: Too many open files
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
at java.nio.file.Files.newByteChannel(Files.java:361)
at java.nio.file.Files.createFile(Files.java:632)
at kafka.server.checkpoints.CheckpointFile.<init>(CheckpointFile.scala:45)
at kafka.server.checkpoints.LeaderEpochCheckpointFile.<init>(LeaderEpochCheckpointFile.scala:62)
at kafka.log.Log.initializeLeaderEpochCache(Log.scala:278)
at kafka.log.Log.<init>(Log.scala:211)
at kafka.log.Log$.apply(Log.scala:1748)
at kafka.log.LogManager.loadLog(LogManager.scala:265)
at kafka.log.LogManager.$anonfun$loadLogs$12(LogManager.scala:335)
at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:62)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[2018-08-27 10:07:26,268] ERROR Error while deleting the clean shutdown file in dir /home/weihu/kafka/kafka/logs (kafka.server.LogD)
java.nio.file.FileSystemException: /home/weihu/kafka/kafka/logs/BC_20180822_PARSE-136/leader-epoch-checkpoint: Too many open files
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
at java.nio.file.Files.newByteChannel(Files.java:361)
at java.nio.file.Files.createFile(Files.java:632)
at kafka.server.checkpoints.CheckpointFile.<init>(CheckpointFile.scala:45)
at kafka.server.checkpoints.LeaderEpochCheckpointFile.<init>(LeaderEpochCheckpointFile.scala:62)
at kafka.log.Log.initializeLeaderEpochCache(Log.scala:278)
at kafka.log.Log.<init>(Log.scala:211)
at kafka.log.Log$.apply(Log.scala:1748)
at kafka.log.LogManager.loadLog(LogManager.scala:265)
at kafka.log.LogManager.$anonfun$loadLogs$12(LogManager.scala:335)
at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:62)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[2018-08-27 10:07:26,269] ERROR Error while deleting the clean shutdown file in dir /home/weihu/kafka/kafka/logs (kafka.server.LogD)
java.nio.file.FileSystemException: /home/weihu/kafka/kafka/logs/BC_20180813_1_STATISTICS-402/leader-epoch-checkpoint: Too many open files
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
at java.nio.file.Files.newByteChannel(Files.java:361)
at java.nio.file.Files.createFile(Files.java:632)
at kafka.server.checkpoints.CheckpointFile.<init>(CheckpointFile.scala:45)
at kafka.server.checkpoints.LeaderEpochCheckpointFile.<init>(LeaderEpochCheckpointFile.scala:62)
at kafka.log.Log.initializeLeaderEpochCache(Log.scala:278)
at kafka.log.Log.<init>(Log.scala:211)
at kafka.log.Log$.apply(Log.scala:1748)
at kafka.log.LogManager.loadLog(LogManager.scala:265)
at kafka.log.LogManager.$anonfun$loadLogs$12(LogManager.scala:335)
at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:62)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
In Kafka, every topic is (optionally) split into many partitions. For each partition, the broker maintains several files (for the index and the actual data). Running
kafka-topics --zookeeper localhost:2181 --describe --topic topic_name
will give you the number of partitions of topic topic_name. The default number of partitions per topic, num.partitions, is defined in /etc/kafka/server.properties.
If a broker hosts many partitions and a particular partition has many log segment files, the total number of open files can be very large.
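To get a rough sense of how many files the broker needs open, you can count the files under its log directory, since each partition directory holds a handful of files per segment (.log, .index, .timeindex, plus checkpoints). A minimal sketch, assuming the log.dirs path from the question above (the LOG_DIR default is taken from the logs and is specific to that setup):

```shell
# Estimate the broker's open-file demand by counting every file under
# its log directory. Falls back to 0 if the directory does not exist.
LOG_DIR=${LOG_DIR:-/home/weihu/kafka/kafka/logs}
OPEN_ESTIMATE=$(find "$LOG_DIR" -type f 2>/dev/null | wc -l)
echo "files under $LOG_DIR: $OPEN_ESTIMATE"
```

If this estimate is close to or above your `ulimit -n` value, hitting "Too many open files" on startup is expected.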
You can see the current file descriptor limit by running:
ulimit -n
You can also check the number of open files using lsof:
lsof | wc -l
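The two numbers are easiest to interpret side by side. A minimal sketch comparing them (note that `lsof` without filters counts open files across all processes, so this overstates what the Kafka broker itself holds; filtering by the broker's PID with `lsof -p` would be more precise):

```shell
# Compare the soft file-descriptor limit with a system-wide count of
# open files. lsof output is suppressed on errors so the count degrades
# to 0 when lsof is unavailable.
LIMIT=$(ulimit -n)
OPEN=$(lsof 2>/dev/null | wc -l)
echo "open files (all processes): $OPEN, soft limit per process: $LIMIT"
```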
To fix the problem, you need to raise the open file descriptor limit:
ulimit -n <noOfFiles>
or otherwise reduce the number of open files (for example, by reducing the number of partitions per topic).
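Keep in mind that `ulimit -n <noOfFiles>` only affects the current shell session. To make the change persistent across reboots, the limit is usually set in `/etc/security/limits.conf` (or via a systemd override if the broker runs as a systemd service). The user name `kafka` and the value 100000 below are placeholders, adjust them to your setup:

```
# /etc/security/limits.conf (user name and value are placeholders)
kafka  soft  nofile  100000
kafka  hard  nofile  100000

# Or, for a systemd-managed broker, in a drop-in override unit:
# [Service]
# LimitNOFILE=100000
```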