I have a MelSpectrogram generated from:

eval_seq_specgram = torchaudio.transforms.MelSpectrogram(sample_rate=sample_rate, n_fft=256)(eval_audio_data).transpose(1, 2)
So eval_seq_specgram now has a size of torch.Size([1, 128, 499]), where 499 is the number of timesteps and 128 is n_mels.
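As a sanity check on those dimensions, here is how I believe the frame count arises. The sample length below is an assumption chosen to reproduce 499 frames; hop_length is torchaudio's default of n_fft // 2, and 128 is the default n_mels:

```python
n_fft = 256
hop_length = n_fft // 2      # torchaudio's default hop length for MelSpectrogram
num_samples = 63744          # hypothetical audio length that yields 499 frames

# with center padding (the default), torchaudio produces this many frames:
n_frames = 1 + num_samples // hop_length

n_mels = 128                 # torchaudio's default, matching the 128 axis above
```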
I'm trying to invert it, so I'm trying to use GriffinLim, but before that I think I need to invert the mel scale, so I have:

inverse_mel_pred = torchaudio.transforms.InverseMelScale(sample_rate=sample_rate, n_stft=256)(eval_seq_specgram)
inverse_mel_pred has a size of torch.Size([1, 256, 499]).
Then I try to use GriffinLim:

pred_audio = torchaudio.transforms.GriffinLim(n_fft=256)(inverse_mel_pred)
But I get an error:
Traceback (most recent call last):
  File "evaluate_spect.py", line 63, in <module>
    main()
  File "evaluate_spect.py", line 51, in main
    pred_audio = torchaudio.transforms.GriffinLim(n_fft=256)(inverse_mel_pred)
  File "/home/shamoon/.local/share/virtualenvs/speech-reconstruction-7HMT9fTW/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/shamoon/.local/share/virtualenvs/speech-reconstruction-7HMT9fTW/lib/python3.8/site-packages/torchaudio/transforms.py", line 169, …
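For what it's worth, my understanding (which may be wrong) is that GriffinLim(n_fft=256) expects a linear spectrogram with n_fft // 2 + 1 frequency bins, so a quick arithmetic check suggests my 256-bin input doesn't match what it wants:

```python
n_fft = 256

# frequency bins GriffinLim(n_fft=256) expects on its input spectrogram
expected_freq_bins = n_fft // 2 + 1   # 129

# frequency bins my InverseMelScale(n_stft=256) call actually produced
produced_freq_bins = 256

mismatch = expected_freq_bins != produced_freq_bins
```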