How do I write a wav byte array to the Response using the Microsoft Speech Object Library?

Al *_*ndo · 7 · c#, asp.net, text-to-speech

I am trying to convert text to audio in C# using the Microsoft Speech Object Library. I have this working when saving the audio directly to a wav file, but my main goal is to get the audio into a byte array that I can then write to the Response in ASP.NET (so the end user can download it to their machine).

When I try to open the downloaded wav file that was written to the Response, nothing plays and Windows Media Player reports an error saying it cannot open the file.

The code below shows what works and what does not.

Does anyone have any idea what might be missing in the second part, where I try to write the byte array to the Response as a wav?

        ////////////////////////////////////////////////
        // THIS WORKS
        //SpVoice my_Voice = new SpVoice();                   //declaring and initializing SpVoice Class
        //SpeechVoiceSpeakFlags my_Spflag = SpeechVoiceSpeakFlags.SVSFlagsAsync; // declaring and initializing Speech Voice Flags

        //SpFileStream spFileStream = new SpFileStream();     //declaring and Initializing fileStream obj
        //SpeechStreamFileMode spFileMode = SpeechStreamFileMode.SSFMCreateForWrite;  //declaring fileStreamMode as to Create or Write
        //spFileStream.Open("C:\\temp\\hellosample.wav", spFileMode, false);
        //my_Voice.AudioOutputStream = spFileStream;
        //my_Voice.Speak("test text to audio in asp.net", my_Spflag);
        //my_Voice.WaitUntilDone(-1);
        //spFileStream.Close();
        ////////////////////////////////////////////////

        ////////////////////////////////////////////////
        // THIS DOES NOT WORK
        SpVoice my_Voice = new SpVoice();                   //declaring and initializing SpVoice Class
        SpeechVoiceSpeakFlags my_Spflag = SpeechVoiceSpeakFlags.SVSFlagsAsync; // declaring and initializing Speech Voice Flags

        SpMemoryStream spMemStream = new SpMemoryStream();
        spMemStream.Format.Type = SpeechAudioFormatType.SAFT11kHz8BitMono;
        object buf = new object();
        my_Voice.AudioOutputStream = spMemStream;
        my_Voice.Speak("test text to audio!", my_Spflag);
        my_Voice.WaitUntilDone(-1);
        spMemStream.Seek(0, SpeechStreamSeekPositionType.SSSPTRelativeToStart);
        buf = spMemStream.GetData();
        byte[] byteArray = (byte[])buf;
        Response.Clear();
        Response.ContentType = "audio/wav";
        Response.AppendHeader("Content-Disposition", "attachment; filename=mergedoutput.wav");
        Response.BinaryWrite(byteArray);
        Response.Flush();
        ////////////////////////////////////////////////
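One possible cause, offered here only as a hedged guess, is that SpMemoryStream.GetData() returns the raw audio samples without the RIFF/WAVE header a .wav file needs, so the downloaded bytes are not a valid wav. A minimal workaround sketch, reusing the SpFileStream approach from the working block above (the temp path and file name below are assumptions for illustration, and the same SpeechLib interop reference plus System.IO are assumed):

// Workaround sketch: synthesize to a temp file with SpFileStream, which is known
// (from the working block above) to produce a playable wav, then send those bytes.
SpVoice voice = new SpVoice();
SpFileStream fileStream = new SpFileStream();
string tempPath = Path.Combine(Path.GetTempPath(), "tts_output.wav");   // hypothetical temp location
fileStream.Open(tempPath, SpeechStreamFileMode.SSFMCreateForWrite, false);
voice.AudioOutputStream = fileStream;
voice.Speak("test text to audio!", SpeechVoiceSpeakFlags.SVSFlagsAsync);
voice.WaitUntilDone(-1);
fileStream.Close();

byte[] wavBytes = File.ReadAllBytes(tempPath);   // file bytes include the RIFF/WAVE header
Response.Clear();
Response.ContentType = "audio/wav";
Response.AppendHeader("Content-Disposition", "attachment; filename=mergedoutput.wav");
Response.BinaryWrite(wavBytes);
Response.Flush();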

Jin*_*ung · 3

I suggest you use the SpeechSynthesizer class from the System.Speech assembly instead of the Microsoft Speech Object Library, since that assembly ships with the .NET Framework.

I have posted a sample below that shows how to use the SpeechSynthesizer class in an ASP.NET MVC application to solve your problem. I hope this helps.

using System.IO;
using System.Speech.Synthesis;
using System.Threading.Tasks;
using System.Web.Mvc;

public class HomeController : Controller
{
    public async Task<ActionResult> Index()
    {
        // Run the synthesis on a background thread so the request thread is not blocked.
        Task<FileContentResult> task = Task.Run(() =>
        {
            using (var synth = new SpeechSynthesizer())
            using (var stream = new MemoryStream())
            {
                // SetOutputToWaveStream writes a complete wav (RIFF header + audio data) into the stream.
                synth.SetOutputToWaveStream(stream);
                synth.Speak("test text to audio!");

                // ToArray() returns only the bytes actually written
                // (GetBuffer() can include unused trailing buffer bytes).
                byte[] bytes = stream.ToArray();
                return File(bytes, "audio/x-wav");
            }
        });

        return await task;
    }
}
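For reference, the same idea written in the Web Forms style used in the question (writing the bytes directly to the Response) might look roughly like the sketch below. This adaptation is illustrative, with the output file name carried over from the question; it assumes a reference to System.Speech plus the System.IO and System.Speech.Synthesis namespaces.

// Illustrative Web Forms adaptation of the answer above.
using (var synth = new SpeechSynthesizer())
using (var stream = new MemoryStream())
{
    synth.SetOutputToWaveStream(stream);   // writes a complete wav (header + data) into the stream
    synth.Speak("test text to audio!");

    byte[] wavBytes = stream.ToArray();    // only the bytes actually written

    Response.Clear();
    Response.ContentType = "audio/wav";
    Response.AppendHeader("Content-Disposition", "attachment; filename=mergedoutput.wav");
    Response.BinaryWrite(wavBytes);
    Response.Flush();
}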