On Linux, how many files does a user have open, and how many are open system-wide?


Sorry, this question has several parts, but they all deal with the number of open files.

I am getting a "Too many open files" message in the application log of an application I am developing. Someone suggested that I:

  1. Find the number of open files currently in use, both system-wide and per user
  2. Find out what the system-wide and per-user limits on open files are.

I ran ulimit -n and it returned 1024. I also looked at /etc/security/limits.conf, and there is nothing special in that file. /etc/sysctl.conf has not been modified either. I will list the contents of both files below. I also ran lsof | wc -l, which returned 5000+ lines (if I am using it correctly).

So, my main questions are:

  1. How do I find the number of open files each user is allowed? Is the soft limit the nofile setting found/defined in /etc/security/limits.conf? Since I have not touched /etc/security/limits.conf, what is the default?
  2. How do I find the system-wide limit on open files? Is it the hard limit in limits.conf? What is the default number if limits.conf has not been modified?
  3. What is the number that ulimit returns for open files? It says 1024, but when I run lsof and count the lines it is 5000+, so that does not add up for me. Are there other commands I should run or files I should look at to get these limits? Thanks in advance for the help.

Contents of /etc/security/limits.conf

# /etc/security/limits.conf
#
#Each line describes a limit for a user in the form:
#
#<domain>        <type>  <item>  <value>
#
#Where:
#<domain> can be:
#        - a user name
#        - a group name, with @group syntax
#        - the wildcard *, for default entry
#        - the wildcard %, can be also used with %group syntax,
#                 for maxlogin limit
#
#<type> can have the two values:
#        - "soft" for enforcing the soft limits
#        - "hard" for enforcing hard limits
#
#<item> can be one of the following:
#        - core - limits the core file size (KB)
#        - data - max data size (KB)
#        - fsize - maximum filesize (KB)
#        - memlock - max locked-in-memory address space (KB)
#        - nofile - max number of open files
#        - rss - max resident set size (KB)
#        - stack - max stack size (KB)
#        - cpu - max CPU time (MIN)
#        - nproc - max number of processes
#        - as - address space limit (KB)
#        - maxlogins - max number of logins for this user
#        - maxsyslogins - max number of logins on the system
#        - priority - the priority to run user process with
#        - locks - max number of file locks the user can hold
#        - sigpending - max number of pending signals
#        - msgqueue - max memory used by POSIX message queues (bytes)
#        - nice - max nice priority allowed to raise to values: [-20, 19]
#        - rtprio - max realtime priority
#
#<domain>      <type>  <item>         <value>
#

#*               soft    core            0
#*               hard    rss             10000
#@student        hard    nproc           20
#@faculty        soft    nproc           20
#@faculty        hard    nproc           50
#ftp             hard    nproc           0
#@student        -       maxlogins       4

# End of file

Contents of /etc/sysctl.conf

# Controls IP packet forwarding
net.ipv4.ip_forward = 0

# Controls source route verification
net.ipv4.conf.default.rp_filter = 1

# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0

# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0

# Controls whether core dumps will append the PID to the core filename
# Useful for debugging multi-threaded applications
kernel.core_uses_pid = 1

# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1

# Controls the maximum size of a message, in bytes
kernel.msgmnb = 65536

# Controls the default maximum size of a message queue
kernel.msgmax = 65536

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296

# the interval between the last data packet sent and the first keepalive probe
net.ipv4.tcp_keepalive_time = 600

# the interval between subsequential keepalive probes
net.ipv4.tcp_keepalive_intvl = 60

# the number of unacknowledged probes to send before considering the connection dead and notifying the application layer
net.ipv4.tcp_keepalive_probes = 10

# try as hard as possible not to swap, as safely as possible
vm.swappiness = 1
fs.aio-max-nr = 1048576
#fs.file-max = 4096


There is no per-user limit on the number of open files. What you need to pay attention to are the system-wide and per-process limits. The per-process file limit multiplied by the per-user process limit would in theory give a per-user file limit, but with normal values the product is so large that it is effectively unlimited.
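As a rough illustration (the numbers below are examples, not necessarily your system's actual values):

# per-process open file limit (soft) for the current shell
ulimit -n    # e.g. 1024
# per-user process limit (soft)
ulimit -u    # e.g. 4096
# in theory 1024 * 4096 = 4194304 descriptors per user, far more than you will ever use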

Also, lsof was originally intended to list open files, but it has since grown to list other things as well, such as the cwd and mmap regions, which is another reason it outputs more lines than you might expect.

The error message "Too many open files" corresponds to the errno value EMFILE, the per-process limit, which in your case appears to be 1024. If you can find the right options to make lsof show only the actual file descriptors of a single process, you will probably find there are 1024 of them, or very close to that.
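One way to count only the real file descriptors of a single process (PID below is a placeholder for your application's process ID) is to look under /proc, or to restrict lsof to numeric descriptors if your version supports descriptor ranges with -d:

# count the file descriptors a process actually has open
ls /proc/PID/fd | wc -l

# roughly the same with lsof: one process, numeric descriptors only
# (excludes cwd, txt, mem, etc.; the output still includes one header line)
lsof -a -p PID -d '0-65535' | wc -l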

These days the system-wide file descriptor limit rarely needs manual tuning, because its default value is proportional to memory. If you need it, you can find it in /proc/sys/fs/file-max, along with information about current usage in /proc/sys/fs/file-nr. Your sysctl file has a value of 4096 for file-max, but it is commented out, so you do not have to take it seriously.
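For example, you can read both directly:

cat /proc/sys/fs/file-max    # system-wide maximum number of file handles
cat /proc/sys/fs/file-nr     # three fields: allocated, allocated but unused, maximum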

If you do manage to hit the system-wide limit, you will get errno ENFILE, which translates to the error message "File table overflow" or "Too many open files in system".