Ara*_*rav 6 perl solaris file ulimit
I need to open more than 10,000 files in a Perl script, so I asked the system administrator to change the limit on my account to 14,000. ulimit -a now shows these settings:
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
open files (-n) 14000
pipe size (512 bytes, -p) 10
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 29995
virtual memory (kbytes, -v) unlimited
After the change, I ran a test Perl program that opens/creates 256 files and closes all 256 file handles at the end of the script. When it has created 253 files, the program dies saying "Too many open files". I don't understand why I am getting this error.
I am using the Solaris 10 platform. Here is my code:
my @list;
my $filename = "test";
for ($i = 256; $i >= 0; $i--) {
    print "$i " . "\n";
    $filename = "test" . "$i";
    if (open my $in, ">", $filename) {
        push @list, $in;
        print $in $filename . "\n";
    }
    else {
        warn "Could not open file '$filename'. $!";
        die;
    }
}
for ($i = 256; $i >= 0; $i--) {
    my $retVal = pop @list;
    print $retVal . "\n";
    close($retVal);
}
Sch*_*ern 16
According to this article, this is the default limit for 32-bit Solaris. A program is normally limited to using only the first 256 file numbers. STDIN, STDOUT and STDERR take 0, 1 and 2, leaving 253. It is not a simple matter to work around; ulimit will not do it, and I don't know whether Perl would honor a higher limit anyway.
Here is a discussion about it on Perlmonks with a few suggested workarounds, such as FileCache.
While the Solaris limit is inexcusable, in general having hundreds of open filehandles indicates that your program could be designed better.
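One thing worth checking before redesigning anything: the number the administrator raised may be the hard limit, while processes start with the (often much lower) soft limit. A minimal bash sketch for inspecting both (the 14000 figure is the hard limit from the question; note that on 32-bit Solaris the stdio FILE structure is widely reported to store the descriptor in a single byte, so streams past fd 255 stay unusable no matter what ulimit says):

```shell
# Soft limit: the value a process actually starts with.
ulimit -Sn
# Hard limit: the ceiling the administrator configured (14000 in the question).
ulimit -Hn
# A non-root user may raise the soft limit up to the hard limit for the
# current shell, e.g.:  ulimit -Sn 14000
# ...but on 32-bit Solaris stdio this does not unlock streams beyond fd 255.
```

If the two values differ, raising the soft limit in the shell that launches the script is worth trying before anything more invasive.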
You can work around the limit (keeping more files open than the system allows) with the FileCache core module.
Using cacheout instead of open, I was able to open 100,334 files on Linux:
#david@:~/Test$ ulimit -n
1024
#david@:~/Test$ perl plimit.pl | head
100333
100332
100331
100330
100329
#david@:~/Test$ perl plimit.pl | tail
test100330
test100331
test100332
test100333
#david@:~/Test$ ls test* | wc -l
100334
Modified version of the script (plimit.pl):
my @list;
use FileCache;
$mfile = 100333;
my $filename = "test";
for ($i = $mfile; $i >= 0; $i--) {
    print "$i " . "\n";
    $filename = "test" . "$i";
    #if (open my $in, ">", $filename) {
    if ($in = cacheout(">", $filename)) {
        push @list, $in;
        print $in $filename . "\n";
    } else {
        warn "Could not open file '$filename'. $!";
        die;
    }
}
for ($i = $mfile; $i >= 0; $i--) {
    my $retVal = pop @list;
    print $retVal . "\n";
    close($retVal);
}
Update
FileCache automatically closes and re-opens files if you exceed your system's maximum number of file descriptors, or the suggested maximum maxopen (NOFILE, defined in sys/param.h).
In my case, on a Linux machine, it is 256:
#david@:~/Test$ grep -B 3 NOFILE /usr/include/sys/param.h
/* The following are not really correct but it is a value
we used for a long time and which seems to be usable.
People should not use NOFILE and NCARGS anyway. */
#define NOFILE 256
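The compile-time NOFILE constant need not match what the kernel actually enforces at run time; a quick way to compare the two on the shell (a sketch, assuming a system where getconf knows OPEN_MAX):

```shell
# The kernel's actual per-process limit on open descriptors, as seen at run time:
getconf OPEN_MAX
# The shell's current soft limit, usually the same number:
ulimit -n
# Either of these can differ from the NOFILE constant baked into sys/param.h,
# which the header itself warns against relying on.
```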
Watching the script with the lsof (list open files) command shows that the modified version had at most 260 of the 100,334 files open at any one time:
#david@:~/Test$ bash count_of_plimit.sh
20:41:27 18
new max is 18
20:41:28 196
new max is 196
20:41:29 260
new max is 260
20:41:30 218
20:41:31 258
20:41:32 248
20:41:33 193
max count was 260
count_of_plimit.sh
#!/bin/bash
# count open files with lsof
#
# latest revision:
# ftp://lsof.itap.purdue.edu/pub/tools/unix/lsof/
# latest FAQ:
# ftp://lsof.itap.purdue.edu/pub/tools/unix/lsof/FAQ
perl plimit.pl > out.txt &
pid=$!
##adapted from http://stackoverflow.com/a/1661498
HOW_MANY=0
MAX=0
while [ -r "/proc/${pid}" ];
do
    HOW_MANY=`lsof -p ${pid} | wc -l`
    # output for live monitoring
    echo `date +%H:%M:%S` $HOW_MANY
    # look for max value
    if [ $MAX -lt $HOW_MANY ]; then
        let MAX=$HOW_MANY
        echo new max is $MAX
    fi
    # test every second
    sleep 1
done
echo max count was $MAX