I need to randomly shuffle a BED file 10,000 times and take the first 1,000 lines each time. Currently I am using the following code:
for i in {1..100}; do
    for j in {1..100}; do
        sort -R myfile.bed_sorted | tail -n 1000 > myfile.bed.$i.$j.bed
    done
done
Doing this for each file takes nearly 6 hours, and I have about 150 files to work through. Is there a faster solution?
Here is a sample of the data (myfile.bed_sorted):
chr1 111763899 111766405 peak1424 1000 . 3224.030 -1 -1
chr1 144533459 144534584 peak1537 998 . 3219.260 -1 -1
chr8 42149384 42151246 peak30658 998 . 3217.620 -1 -1
chr2 70369299 70370655 peak16886 996 . 3211.600 -1 -1
chr8 11348914 11352994 peak30334 990 . 3194.180 -1 -1
chr21 26828820 26830352 peak19503 988 . 3187.820 -1 -1
chr16 68789901 68791150 peak11894 988 . 3187.360 -1 -1
chr6 11458964 11462245 peak26362 983 . 3169.750 -1 -1
chr1 235113793 235117308 peak2894 982 . 3166.000 -1 -1
chr6 16419968 16422194 peak26522 979 . 3158.520 -1 -1
chr6 315344 321339 peak26159 978 . 3156.320 -1 -1
chr1 111756584 111759633 peak1421 964 . 3110.520 -1 -1
chrX 12995098 12997685 peak33121 961 . 3100.000 -1 -1
chr9 37408601 37410262 peak32066 961 . 3100.000 -1 -1
chr9 132648603 132651523 peak32810 961 . 3100.000 -1 -1
chr8 146103178 146104943 peak31706 961 . 3100.000 -1 -1
chr8 135611963 135614649 peak31592 961 . 3100.000 -1 -1
chr8 128312253 128315935 peak31469 961 . 3100.000 -1 -1
chr8 128221486 128223644 peak31465 961 . 3100.000 -1 -1
chr8 101510621 101514237 peak31185 961 . 3100.000 -1 -1
chr8 101504210 101508005 peak31184 961 . 3100.000 -1 -1
chr7 8173062 8174642 peak28743 961 . 3100.000 -1 -1
chr7 5563424 5570618 peak28669 961 . 3100.000 -1 -1
chr7 55600455 55603724 peak29192 961 . 3100.000 -1 -1
chr7 35767878 35770820 peak28976 961 . 3100.000 -1 -1
chr7 28518260 28519837 peak28923 961 . 3100.000 -1 -1
chr7 104652502 104654747 peak29684 961 . 3100.000 -1 -1
chr6 6586316 6590136 peak26279 961 . 3100.000 -1 -1
chr6 52362185 52364270 peak27366 961 . 3100.000 -1 -1
chr6 407805 413348 peak26180 961 . 3100.000 -1 -1
chr6 32936987 32941352 peak26978 961 . 3100.000 -1 -1
chr6 226477 229964 peak26144 961 . 3100.000 -1 -1
chr6 157017923 157020836 peak28371 961 . 3100.000 -1 -1
chr6 137422769 137425128 peak28064 961 . 3100.000 -1 -1
chr5 149789084 149793727 peak25705 961 . 3100.000 -1 -1
chr5 149778033 149783125 peak25702 961 . 3100.000 -1 -1
chr5 149183766 149185906 peak25695 961 . 3100.000 -1 -1
Answer by ter*_*don (score 14):
Assuming you have enough memory to slurp the file, you could try:
perl -e 'use List::Util 'shuffle'; @k=shuffle(<>); print @k[0..999]' file.bed
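As an aside, the nested single quotes around shuffle are stripped by the shell, so Perl ends up seeing a bareword import; that happens to work, but if you prefer unambiguous quoting, loading the module with Perl's -M switch should give the same result:
perl -MList::Util=shuffle -e '@k=shuffle(<>); print @k[0..999]' file.bed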
Since you want to do this 10,000 times, I would recommend integrating the repetition into the script and shuffling the indices instead of the array itself to speed things up:
$ time perl -e 'use List::Util 'shuffle';
@l=<>; for $i (1..10000){
open(my $fh, ">","file.$i.bed");
@r=shuffle(0..$#l);
print $fh @l[@r[0..999]]
}' file.bed
real 1m12.444s
user 1m8.536s
sys 0m3.244s
The above created 10,000 files of 1,000 lines each from a file containing 37,000 lines (your example file repeated 1,000 times). As you can see, it took a little over a minute on my system.
- use List::Util 'shuffle'; : imports a Perl module that provides the shuffle() function for randomizing arrays.
- @l=<>; : loads the input file (<>) into the array @l.
- for $i (1..10000){} : runs the whole thing 10,000 times.
- @r=shuffle(0..$#l); : $#l is the index of the last element of @l, so @r is now a shuffled list of the index numbers of @l, i.e. of the input file's lines.
- open(my $fh, ">","file.$i.bed"); : opens a file called file.$i.bed for writing; $i takes the values 1 through 10000.
- print $fh @l[@r[0..999]] : takes the first 1,000 indices of the shuffled list and prints the corresponding lines (elements of @l); a small standalone illustration of this slice follows below.
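To make that last slice concrete, here is a minimal standalone sketch (using toy stand-in data rather than your BED lines) of shuffling the indices and then slicing the original array with them:
use List::Util 'shuffle';
my @l = ("line1\n", "line2\n", "line3\n", "line4\n", "line5\n");   # stands in for the slurped file
my @r = shuffle(0 .. $#l);   # a random permutation of the indices 0..4, e.g. (3, 0, 4, 1, 2)
print @l[ @r[0..2] ];        # prints the three lines sitting at the first three shuffled indices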
Another way would be to use shuf (thanks to @frostschutz):
$ time for i in {1..10000}; do shuf -n 1000 file.bed > file.$i.abed; done
real 1m9.743s
user 0m23.732s
sys 0m31.764s
If you want a benchmark of how fast this can be done, copy and paste the following into 10kshuffle.cpp and compile it with g++ 10kshuffle.cpp -o 10kshuffle. Then you can run it:
10kshuffle filename < inputfile
where filename is the base path to use for the output files; they will be named filename.0, filename.1, and so on, and each one contains the first 1,000 lines of a shuffle. The program prints the name of each file as it writes it.
#include <cerrno>
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <fcntl.h>
#include <fstream>
#include <iostream>
#include <string>
#include <sstream>
#include <unistd.h>
#include <vector>
using namespace std;
// Seed the PRNG from /dev/urandom.
unsigned int randomSeed () {
    int in = open("/dev/urandom", O_RDONLY);
    if (in < 0) {                       // open() returns -1 on failure, not 0
        cerr << strerror(errno);
        exit(1);
    }
    unsigned int x;
    if (read(in, &x, sizeof(x)) != sizeof(x)) {
        cerr << "could not read from /dev/urandom";
        exit(1);
    }
    close(in);
    return x;
}

int main (int argc, const char *argv[]) {
    if (argc < 2) {
        cerr << "usage: " << argv[0] << " <output base path>" << endl;
        return 1;
    }
    char basepath[1024];
    strcpy(basepath, argv[1]);
    char *pathend = &basepath[strlen(basepath)];
    // Read stdin into memory, one heap-allocated buffer per line.
    vector<char*> data;
    data.reserve(1<<16);
    for (;;) {
        char *buf = new char[1024];
        if (!cin.getline(buf, 1024)) { delete[] buf; break; }
        data.push_back(buf);
    }
    srand(randomSeed());
    for (int n = 0; n < 10000; n++) {
        vector<char*> copy(data);
        // Fisher-Yates shuffle.
        int last = copy.size() - 1;
        for (int i = last; i > 0; i--) {
            int r = rand() % (i + 1);   // random index in [0, i]
            if (r == i) continue;
            char *t = copy[i];
            copy[i] = copy[r];
            copy[r] = t;
        }
        // Write out the first 1000 lines of this shuffle.
        sprintf(pathend, ".%d", n);
        ofstream file(basepath);
        for (int j = 0; j < 1000; j++) file << copy[j] << endl;
        cout << basepath << endl;
        file.close();
    }
    return 0;
}
On a single 3.5 GHz core, this runs in about 20 seconds:
time ./10kshuffle tmp/test < data.txt
tmp/test.0
[...]
tmp/test.9999
real 19.95, user 9.46, sys 9.86, RSS 39408
data.txt contained the 37,000 lines copied from the question. If you want each output file to contain the entire shuffle rather than just the first 1,000 lines, change the output loop near the end of main to:
for (int j = 0; j < copy.size(); j++) file << copy[j] << endl;