I've recently been getting a lot of traffic to a site that runs on Node.js. As the traffic has grown, it has started crashing regularly, which never happened before. I'm getting the following error in my logs:
{ [Error: connect EMFILE] code: 'EMFILE', errno: 'EMFILE', syscall: 'connect' }
Error: connect EMFILE
at errnoException (net.js:670:11)
at connect (net.js:548:19)
at net.js:607:9
at Array.0 (dns.js:88:18)
at EventEmitter._tickCallback (node.js:192:40)
Does anyone know why it's crashing, and have any ideas how to fix it?

I'm using Express.js and Socket.io, running on Ubuntu.
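For reference, EMFILE from connect() means the process has exhausted its per-process file-descriptor limit. The limit is easy to inspect from any language; here is a small diagnostic sketch in Python (the app itself is Node, so treat this as illustration, not a fix):

```python
import resource

# Current per-process limit on open file descriptors.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft limit:", soft, "hard limit:", hard)

# The soft limit may be raised up to the hard limit without root;
# raising the hard limit itself needs ulimit -n / limits.conf (or root).
new_soft = max(soft, min(4096, hard))
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))
```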
I'm learning GraphQL and using prisma-binding for GraphQL operations. I ran into the error below when starting the Node server with nodemon; it points at the path of a schema file that is auto-generated by graphql-cli. Can anyone tell me what this error means?

The error:
Internal watch failed: ENOSPC: System limit for number of file watchers reached, watch '/media/rehan-sattar/Development/All projects/GrpahQl/graph-ql-course/graphql-prisma/src/generated
Thanks, everyone!
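ENOSPC here is not about disk space: it means the kernel's inotify watch budget is exhausted, since nodemon tries to watch every file under the project. A hedged sketch for reading the current budget (assumes the Linux /proc layout; the limit itself is raised with the fs.inotify.max_user_watches sysctl):

```python
from pathlib import Path

# Linux exposes the system-wide inotify watch limit here.
LIMIT_FILE = Path("/proc/sys/fs/inotify/max_user_watches")

def max_user_watches(default=8192):
    """Return the inotify watch limit, or a default where /proc is absent."""
    try:
        return int(LIMIT_FILE.read_text())
    except (OSError, ValueError):
        return default

print(max_user_watches())
```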
I'm trying to debug a file-descriptor leak in a Java webapp running in Jetty 7.0.1 on Linux.

The app had been running happily for a month or so when requests started to fail due to too many open files, and Jetty had to be restarted:
java.io.IOException: Cannot run program [external program]: java.io.IOException: error=24, Too many open files
at java.lang.ProcessBuilder.start(ProcessBuilder.java:459)
at java.lang.Runtime.exec(Runtime.java:593)
at org.apache.commons.exec.launcher.Java13CommandLauncher.exec(Java13CommandLauncher.java:58)
at org.apache.commons.exec.DefaultExecutor.launch(DefaultExecutor.java:246)
At first I thought the problem was in the code that launches the external program, but it uses commons-exec and I don't see anything wrong with it:
CommandLine command = new CommandLine("/path/to/command")
        .addArgument("...");
ByteArrayOutputStream errorBuffer = new ByteArrayOutputStream();
Executor executor = new DefaultExecutor();
executor.setWatchdog(new ExecuteWatchdog(PROCESS_TIMEOUT));
executor.setStreamHandler(new PumpStreamHandler(null, errorBuffer));

try {
    executor.execute(command);
} catch (ExecuteException executeException) {
    if (executeException.getExitValue() == EXIT_CODE_TIMEOUT) {
        throw new MyCommandException("timeout");
    } else {
        throw new MyCommandException(errorBuffer.toString("UTF-8"));
    }
}
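Whatever is leaking, it helps to watch the process's descriptor count over time rather than waiting for the error. A small counter, sketched in Python (the /proc path is a Linux assumption; the fallback probes descriptor numbers directly):

```python
import os

def open_fd_count():
    """Count this process's open file descriptors."""
    try:
        # Fast path: each entry under /proc/self/fd is one open descriptor.
        return len(os.listdir("/proc/self/fd"))
    except OSError:
        # Portable fallback: probe a fixed range of descriptor numbers.
        count = 0
        for fd in range(1024):
            try:
                os.fstat(fd)
                count += 1
            except OSError:
                pass
        return count

print(open_fd_count())
```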
Listing the open files on the server, I can see a huge number of FIFOs:
# lsof -u jetty
...

I have a script that runs 1000 cURL requests using the curl_multi_* functions in PHP.
What is the bottleneck behind the timeouts?

Is it CPU usage? Is there some more efficient way, in terms of the number of outbound connections the server handles, to do this?

I can't change the functionality, and the requests themselves are simple calls to a remote API. I'm just wondering what the limit is. Do I need to increase memory on the server, Apache connections, or CPU? (Or something else I've missed?)
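The hard ceiling is usually not CPU but the per-process file-descriptor limit: 1000 simultaneous cURL handles means roughly 1000 open sockets. The usual remedy is to cap the number of in-flight requests. A sketch of that pattern in Python (the script itself is PHP; `fetch` is a hypothetical stand-in for one API call):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # Hypothetical stand-in for one remote API call.
    return ("ok", url)

def run_all(urls, max_in_flight=100):
    """Issue every request, but keep at most max_in_flight sockets open
    at once so the process stays under its descriptor limit."""
    with ThreadPoolExecutor(max_workers=max_in_flight) as pool:
        return list(pool.map(fetch, urls))

results = run_all(["https://api.example.com/%d" % i for i in range(1000)], 50)
print(len(results))  # 1000 results, never more than 50 connections at a time
```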
1167:M 26 Apr 13:00:34.666 # You requested maxclients of 10000 requiring at least 10032 max file descriptors.
1167:M 26 Apr 13:00:34.667 # Redis can't set maximum open files to 10032 because of OS error: Operation not permitted.
1167:M 26 Apr 13:00:34.667 # Current maximum open files is 4096. maxclients has been reduced to 4064 to compensate for low ulimit. If you need higher maxclients increase 'ulimit -n'.
1167:M 26 Apr 13:00:34.685 # Creating Server TCP listening socket 192.34.62.56??:6379: Name or …
I've been load-testing my REST API with JMeter.

When it is hit by 1000 concurrent users, I get the following error:
Too many open files. Stacktrace follows:
java.net.SocketException: Too many open files
at java.net.Socket.createImpl(Socket.java:397)
at java.net.Socket.getImpl(Socket.java:460)
at java.net.Socket.setSoTimeout(Socket.java:1017)
at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:126)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:640)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:479)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at groovyx.net.http.HTTPBuilder.doRequest(HTTPBuilder.java:476)
at groovyx.net.http.HTTPBuilder.doRequest(HTTPBuilder.java:441)
at groovyx.net.http.HTTPBuilder.request(HTTPBuilder.java:390)
My server tries to hit another REST API to fetch data, processes it, and finally returns a JSON response.

How do I increase the number of open files in Linux?
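Raising the limit (ulimit -n, /etc/security/limits.conf) buys headroom, but leaked sockets will still catch up eventually. The pattern that keeps the count flat is closing every connection on all code paths; sketched here with Python's http.client (the app itself uses Groovy's HTTPBuilder, so this is illustrative only):

```python
import http.client

def get_status(host, port, path="/"):
    """One request whose socket is always released, even on error paths."""
    conn = http.client.HTTPConnection(host, port, timeout=10)
    try:
        conn.request("GET", path)
        resp = conn.getresponse()
        resp.read()      # drain the body so the connection finishes cleanly
        return resp.status
    finally:
        conn.close()     # release the descriptor no matter what happened
```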
Here is my call to the other server:
Map getResponse(Map data, String url){
    HTTPBuilder httpBuilder = new HTTPBuilder(url);
    httpBuilder.request(Method.POST, JSON) {
        headers.'Authorization' = AppConfig.config.appKey;
        headers.'Content-type' = 'application/json'
        body = data
        response.success = { resp, reader ->
            return reader as Map;
        }
        response.failure = { …

I'm running code that fails, sometimes after a few hours and sometimes after a few minutes, with the error
OSError: [Errno 24] Too many open files
I'm having a lot of trouble debugging this. The error itself is always triggered by the marked line in the snippet below:
try:
    with open(filename, 'rb') as f:
        contents = f.read()    # <----- error triggered here
except OSError as e:
    print("e = ", e)
    raise
else:
    # other stuff happens
However, I can't see anything wrong with this part of the code (right?), so I guess some other part of the code isn't closing files properly. Yet while I do open files frequently, I always open them with a with statement, and my understanding is that the file is closed even if an error occurs (right?). So another part of my code looks like this:
try:
    with tarfile.open(filename + '.tar') as tar:
        tar.extractall(path=target_folder)
except tarfile.ReadError as e:
    print("e = ", e)
except OSError as e:
    print("e = ", e)
else:
    # If everything worked, we are done
    return
The code above does run into a ReadError fairly often, but even when that happens the file should be closed, right? So I just don't understand how I can end up with too many open files. Sorry that this isn't reproducible for you; I haven't been able to debug it far enough, and I'm just looking for hints here because I'm lost. Any help is appreciated...
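The understanding stated above is correct and easy to verify: a with block closes the file even when its body raises. A self-contained check (the file name here is hypothetical):

```python
import tarfile

# Write a file that is deliberately not a tar archive.
with open("not_a_tar.txt", "w") as f:
    f.write("plain text, not a tar archive\n")

saved = None
try:
    with open("not_a_tar.txt", "rb") as fileobj:
        saved = fileobj
        tarfile.open(fileobj=fileobj)   # raises tarfile.ReadError
except tarfile.ReadError:
    pass

print(saved.closed)   # True: the descriptor was released despite the exception
```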
Edit: I'm on a MacBook. Here is the output of ulimit -a:
core file …

I have a Hibernate, Spring, Debian, Tomcat, MySQL stack on a Linode server for a few clients. It's a Spring multi-tenant application that hosts web pages for about 30 clients.
The application starts fine, and after a while I get this error:
java.net.SocketException: Too many open files
at java.net.PlainSocketImpl.socketAccept(Native Method)
at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:390)
at java.net.ServerSocket.implAccept(ServerSocket.java:453)
at java.net.ServerSocket.accept(ServerSocket.java:421)
at org.apache.tomcat.util.net.DefaultServerSocketFactory.acceptSocket(DefaultServerSocketFactory.java:60)
at org.apache.tomcat.util.net.JIoEndpoint$Acceptor.run(JIoEndpoint.java:216)
at java.lang.Thread.run(Thread.java:662)
However, before this error is thrown, nagios alerts me that pings to the server have stopped responding.

Previously I had nginx as a proxy, got this nginx error on every request instead, and had to restart Tomcat:
2014/04/21 12:31:28 [error] 2259#0: *2441630 no live upstreams while connecting to upstream, client: 66.249.64.115, server: abril, request: "GET /catalog.do?op=requestPage&selectedPage=-195&category=2&offSet=-197&page=-193&searchBox= HTTP/1.1", upstream: "http://appcluster/catalog.do?op=requestPage&selectedPage=-195&category=2&offSet=-197&page=-193&searchBox=", host: "www.anabocafe.com"
2014/04/21 12:31:40 [error] 2259#0: *2441641 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 200.74.195.61, server: abril, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "www.oli-med.com"
Here is my server.xml connector configuration:
<Connector …

I have a Hive query that runs fine on small datasets, but when I run it against 250 million records I get these errors in the logs:
FATAL org.apache.hadoop.mapred.Child: Error running child : java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:640)
at org.apache.hadoop.mapred.Task$TaskReporter.startCommunicationThread(Task.java:725)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:362)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1136)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
2013-03-18 14:12:58,907 WARN org.apache.hadoop.mapred.Child: Error running child
java.io.IOException: Cannot run program "ln": java.io.IOException: error=11, Resource temporarily unavailable
at java.lang.ProcessBuilder.start(ProcessBuilder.java:460)
at java.lang.Runtime.exec(Runtime.java:593)
at java.lang.Runtime.exec(Runtime.java:431)
at java.lang.Runtime.exec(Runtime.java:369)
at org.apache.hadoop.fs.FileUtil.symLink(FileUtil.java:567)
at org.apache.hadoop.mapred.TaskRunner.symlink(TaskRunner.java:787)
at org.apache.hadoop.mapred.TaskRunner.setupWorkDir(TaskRunner.java:752)
at org.apache.hadoop.mapred.Child.main(Child.java:225)
Caused by: java.io.IOException: java.io.IOException: error=11, Resource temporarily unavailable
at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
at java.lang.ProcessImpl.start(ProcessImpl.java:65)
at …

I have a Python script that runs a multiprocessing.Pool to process many files separately, usually with a limit of 8 CPUs. My problem is that after running for a while it always gets "IOError: [Errno 24] Too many open files". Each child process opens a few files, for reading only, with file.open(). The file handlers are then passed to several functions to retrieve data, and at the end of each child process the files are closed with file.close(). I tried the with statement as well, but it did not solve the problem. Does anyone have an idea what's wrong? I googled around but couldn't find an answer. I am closing the files and the functions are returning properly, so what keeps the file handlers around?
My setup is a Mac running OS X 10.5 with Python 2.6.

Thanks,
Ogan
from custom import func1, func2
# func1 and func2 only seek, read and return values from the file;
# they do not close the file
import multiprocessing

def Worker(*args):
    f1 = open("db1.txt")
    f2 = open("db2.txt")
    for each in args[0][1]:   # each mapped item is a (key, values) tuple
        # do many stuff
        X = func1(f1)
        Y = func2(f2)
    f1.close()
    f2.close()
    return

Data = {1: [2], 2: [3]}
jobP = multiprocessing.Pool(8)
jobP.map_async(Worker, Data.items())
jobP.close()
jobP.join()
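For contrast, here is a variant of the worker above in which the handles cannot leak even if the processing raises mid-loop, since the with block closes them on every exit path (func1/func2 are stand-ins here, as the real custom module isn't shown):

```python
import multiprocessing

# Tiny stand-in data files so the sketch runs anywhere.
for name in ("db1.txt", "db2.txt"):
    with open(name, "w") as f:
        f.write("hello\n")

def func1(f):
    # Stand-in for the real custom.func1: seek and read a value.
    f.seek(0)
    return f.readline()

def func2(f):
    # Stand-in for the real custom.func2.
    f.seek(0)
    return f.readline()

def Worker(item):
    key, values = item
    # Both files are guaranteed closed when this block exits, even on error.
    with open("db1.txt") as f1, open("db2.txt") as f2:
        return [(func1(f1), func2(f2)) for _ in values]

if __name__ == "__main__":
    Data = {1: [2], 2: [3]}
    with multiprocessing.Pool(8) as pool:
        print(pool.map(Worker, Data.items()))
```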