Tags: python, queue, machine-learning, pytorch, tensor
I need to create a fixed-length Tensor in PyTorch that acts like a FIFO queue.
I have this function to do it:
def push_to_tensor(tensor, x):
    # Shift every element one position to the left in place,
    # then write the new value into the last slot.
    tensor[:-1] = tensor[1:]
    tensor[-1] = x
    return tensor
For example, I have:
tensor = Tensor([1,2,3,4])
>> tensor([ 1., 2., 3., 4.])
Then using the function gives:
push_to_tensor(tensor, 5)
>> tensor([ 2., 3., 4., 5.])
However, I was wondering: is there a better way to do this?
I implemented another FIFO queue:
def push_to_tensor_alternative(tensor, x):
    # Drop the oldest element and append the new one, building a new tensor with torch.cat.
    return torch.cat((tensor[1:], Tensor([x])))
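As a quick sanity check (my own snippet, not part of the original post), the alternative can be exercised the same way as push_to_tensor above and should produce the same result:

import torch
from torch import Tensor

def push_to_tensor_alternative(tensor, x):
    # Drop the oldest element and append the new one at the end.
    return torch.cat((tensor[1:], Tensor([x])))

tensor = Tensor([1, 2, 3, 4])
print(push_to_tensor_alternative(tensor, 5))
# tensor([2., 3., 4., 5.])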
The functionality is the same, but then I checked their speed:
# Small Tensor
tensor = Tensor([1,2,3,4])
%timeit push_to_tensor(tensor, 5)
>> 30.9 µs ± 1.26 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit push_to_tensor_alternative(tensor, 5)
>> 22.1 µs ± 2.25 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
# Larger Tensor
tensor = torch.arange(10000)
%timeit push_to_tensor(tensor, 5)
>> 57.7 µs ± 4.88 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit push_to_tensor_alternative(tensor, 5)
>> 28.9 µs ± 570 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
It seems that push_to_tensor_alternative, which uses torch.cat instead of shifting all the items to the left, is faster.
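If pushes are very frequent, one way to avoid copying the whole tensor on every push is a ring buffer: keep a preallocated tensor plus a write index, overwrite the oldest slot on each push, and only materialize the ordered view when it is actually needed. This is a sketch of my own, not something from the original post, and the class name TensorFIFO is made up:

import torch

class TensorFIFO:
    # Fixed-length FIFO backed by a preallocated tensor (ring-buffer sketch).
    def __init__(self, init):
        self.buf = init.clone()
        self.head = 0  # index of the oldest element

    def push(self, x):
        # Overwrite the oldest slot instead of shifting every element.
        self.buf[self.head] = x
        self.head = (self.head + 1) % self.buf.numel()

    def ordered(self):
        # Materialize the elements in FIFO order (oldest first); this copy
        # only happens when the ordered view is requested, not on every push.
        return torch.cat((self.buf[self.head:], self.buf[:self.head]))

fifo = TensorFIFO(torch.tensor([1., 2., 3., 4.]))
fifo.push(5)
print(fifo.ordered())
# tensor([2., 3., 4., 5.])

Whether this wins depends on the access pattern: each push is O(1), but ordered() still pays the torch.cat cost, so it only helps when pushes are much more frequent than ordered reads.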