Yos*_*shi asks (tags: python, signal-processing, numba):
To do a Hilbert transform on a 1D array, one must:

1. FFT the array
2. Double the positive frequencies and zero out the negative frequencies (leaving the DC and Nyquist bins untouched)
3. IFFT and take the magnitude to get the envelope
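On the CPU the same recipe is easy to write down with plain NumPy; here is a minimal sketch of what I am trying to reproduce on the GPU (the function name is my own):

import numpy as np

def hilbert_envelope_cpu(data):
    # CPU sketch of the recipe above
    N = data.shape[0]
    spectrum = np.fft.fft(data)          # 1. FFT the array
    spectrum[1:(N + 1)//2] *= 2.0        # 2. double the positive frequencies
    spectrum[N//2 + 1:] = 0.0            #    ... and zero out the negative ones
    analytic = np.fft.ifft(spectrum)     # 3. IFFT gives the analytic signal
    return np.abs(analytic)              #    its magnitude is the envelope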
I'm using PyCuLib for the FFTs. My code so far:
import numba as nb
import numpy as np
import pyculib

def htransforms(data):
    N = data.shape[0]
    transforms = nb.cuda.device_array_like(data)   # allocate GPU memory with the signal's size/dimensions
    transforms.dtype = np.complex64                # change the GPU array type to complex for the FFT
    pyculib.fft.fft(data.astype(np.complex64), transforms)  # do the FFT on the GPU
    transforms[1:N//2] *= 2.0                      # THIS STEP DOESN'T WORK
    transforms[N//2 + 1:N] = 0 + 0j                # NEITHER DOES THIS ONE
    pyculib.fft.ifft_inplace(transforms)           # do the IFFT on the GPU, in place (same memory)
    envelope_function = transforms.copy_to_host()  # copy the result back to host (computer) memory
    return abs(envelope_function)
I have a feeling it may be related to Numba's CUDA interface itself: does it allow modifying individual elements of an array (or an array slice) like this? I thought it might, since the variable transforms is a numba.cuda.cudadrv.devicearray.DeviceNDArray, so I assumed it would support some of the same operations as a numpy ndarray.
In short: with Numba device_arrays, what is the simplest way to do element-wise operations on slices? The error I get is:

unsupported operand type(s) for *=: 'DeviceNDArray' and 'float'
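One workaround I can imagine is a tiny custom CUDA kernel instead of slice arithmetic; a sketch, assuming a 1D complex device array (the kernel name scale_slice is mine):

from numba import cuda

@cuda.jit
def scale_slice(arr, start, stop, factor):
    # multiply arr[start:stop] by factor, one thread per element
    i = cuda.grid(1)
    if start + i < stop:
        arr[start + i] *= factor

# e.g. doubling the positive frequencies of a device array `transforms`:
# scale_slice.forall(N//2 - 1)(transforms, 1, N//2, 2.0)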
I would use pytorch.
Your function using pytorch (I removed the abs so that it returns the complex values):
import torch

def htransforms(data):
    N = data.shape[-1]
    # allocate memory on the GPU with the signal's size/dimensions
    transforms = torch.tensor(data).cuda()
    transforms = torch.fft.fft(transforms, dim=-1)  # do the FFT on the GPU
    transforms[:, 1:N//2] *= 2.0                    # double the positive frequencies
    transforms[:, N//2 + 1:N] = 0 + 0j              # zero the negative frequencies
    # do the IFFT on the GPU, then bring the result back to the CPU
    return torch.fft.ifft(transforms).cpu()
But your transform is actually different from the one I found on Wikipedia: your version doubles the positive frequencies and zeroes the negative ones, which gives the analytic signal x + 1j*H(x), while the Wikipedia definition multiplies positive frequencies by -1j and negative ones by +1j, which gives the Hilbert transform H(x) itself.

The Wikipedia version:
def htransforms_wikipedia(data):
    N = data.shape[-1]
    # allocate memory on the GPU with the signal's size/dimensions
    transforms = torch.tensor(data).cuda()
    transforms = torch.fft.fft(transforms, dim=-1)
    transforms[:, 1:(N + 1)//2] *= -1j   # positive frequencies
    transforms[:, (N + 2)//2:] *= +1j    # negative frequencies
    transforms[:, 0] = 0                 # DC bin
    if N % 2 == 0:
        transforms[:, N//2] = 0          # Nyquist bin (the (-1)**n term)
    # do the IFFT on the GPU, then bring the result back to the CPU
    return torch.fft.ifft(transforms).cpu()
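To sanity-check this, one can compare against scipy.signal.hilbert on the CPU: scipy returns the analytic signal x + 1j*H(x), so its imaginary part should match the Wikipedia version. A sketch, assuming scipy is installed and a CUDA device is available:

import numpy as np
import torch
from scipy.signal import hilbert

x = np.random.randn(1, 1024).astype(np.float32)
analytic_ref = hilbert(x, axis=-1)                  # x + 1j*H(x) from scipy

ht = htransforms_wikipedia(torch.tensor(x)).numpy()
print(np.allclose(ht.real, analytic_ref.imag, atol=1e-4))  # expect True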
import matplotlib.pyplot as plt

data = torch.zeros((1, 2**10))
data[:, 2**9] = 1
tdata = htransforms(data).data
plt.plot(tdata.real.T, '-')
plt.plot(tdata.imag.T, '-')
plt.xlim([500, 525])
plt.legend(['real', 'imaginary'])
plt.title('impulse response of your version')
The impulse response of your version is 1 + 1j*h[k], where h[k] is the impulse response of the Wikipedia version. If you are working with real data, the Wikipedia version is nice because you can use rfft and irfft to write a leaner version.
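A sketch of what such an rfft-based variant could look like (my own code, not part of the answer above): since the spectrum of real data is conjugate-symmetric, it is enough to rotate the positive-frequency bins by -1j and zero the DC and Nyquist bins; irfft then reconstructs the negative half, with its implied +1j rotation, automatically.

def htransforms_rfft(data):
    # Hilbert transform of real data via rfft/irfft (function name is mine)
    N = data.shape[-1]
    spec = torch.fft.rfft(data, dim=-1)   # bins 0 .. N//2 only
    spec[:, 1:(N + 1)//2] *= -1j          # rotate the positive frequencies
    spec[:, 0] = 0                        # DC bin
    if N % 2 == 0:
        spec[:, N//2] = 0                 # Nyquist bin
    # irfft rebuilds the conjugate-symmetric negative half for us
    return torch.fft.irfft(spec, n=N, dim=-1)

This works on half-size spectra and returns a real tensor, which also avoids the small imaginary residue that a full ifft leaves behind.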
data = torch.zeros((1, 2**10))
data[:, 2**9] = 1
tdata = htransforms_wikipedia(data).data
plt.plot(tdata.real.T, '-')
plt.plot(tdata.imag.T, '-')
plt.xlim([500, 525])
plt.legend(['real', 'imaginary'])
plt.title('impulse response of Wikipedia version')