

I am using the solution from "Save to Wav file the audio recorded with .NetCore C# on Raspberry pi" to record audio with OpenTK on .NET Core, and it works on both Windows and Linux with the Mono16 format. However, if I try to use the Stereo16 format, the audio is choppy - half of the data is lost.

I read through the WAVE format spec summary at http://soundfile.sapp.org/doc/WaveFormat/ and managed to create a second channel in the wave file manually (still capturing with Mono16, just writing each sample twice). This works because the per-channel data is interleaved in the wave file (left sample, right sample, left sample, right sample, ...). The code from the earlier post can lay down that second channel once you change numChannels = 2 and write the second channel's data (a sketch of that trick follows below). However, it does not work when I change

ALFormat alFormat = ALFormat.Mono16;

to

ALFormat alFormat = ALFormat.Stereo16;
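
For reference, here is a minimal sketch of that manual interleaving trick; WavInterleave and WriteMonoAsStereo are names made up for illustration, not part of the original code:

using System.IO;

static class WavInterleave
{
    // Writes each Mono16 sample twice so the data chunk becomes
    // left, right, left, right, ... (both channels carry the same signal).
    public static void WriteMonoAsStereo(BinaryWriter sw, short[] monoSamples, int count)
    {
        for (var i = 0; i < count; i++)
        {
            sw.Write(monoSamples[i]); // left channel
            sw.Write(monoSamples[i]); // right channel (duplicate of left)
        }
    }
}

Both channels end up identical, but it does confirm the interleaved layout described in the spec.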

Looking at the stream, it still contains the same amount of data from the device (not twice as much for the second channel). At a 44100 Hz sample rate it still only returns 44100 samples per second. I increased the delay from 1000 ms to 5000 ms, and on playback you get 2.5 seconds of the recording and then lose the next 2.5 seconds (the recording is half the length you would expect).

I tried reading twice the number of available samples, but that just produces an error about requesting more data than is available. I tried looking through the OpenTK.OpenAL spec to see whether it expects a two-dimensional array or something similar, but the wrapper class only accepts a one-dimensional array.
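
To make the question concrete, here is a minimal sketch of what I would expect the read/write step to look like if AvailableSamples counts sample frames rather than individual shorts (so one Stereo16 frame is two shorts and a read fills samplesAvailable * numChannels buffer entries). CaptureHelper and DrainToWriter are hypothetical names, and the frame-counting behaviour is an assumption on my part, not something I have confirmed from the OpenTK docs:

using System.IO;
using OpenTK.Audio;

static class CaptureHelper
{
    // Assumes AvailableSamples reports frames (one frame = numChannels shorts),
    // so the write loop has to cover frames * numChannels buffer entries.
    public static int DrainToWriter(AudioCapture capture, BinaryWriter sw,
                                    short[] buffer, ushort numChannels)
    {
        var frames = capture.AvailableSamples;   // frames available (assumption)
        capture.ReadSamples(buffer, frames);     // fills frames * numChannels shorts
        for (var x = 0; x < frames * numChannels; ++x)
        {
            sw.Write(buffer[x]);                 // interleaved: left, right, left, right, ...
        }
        return frames;                           // caller adds this to its samplesWrote count
    }
}

If that assumption is right, it would also explain why asking for twice the available samples fails: the count argument is already in frames.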

Any idea what I am doing wrong? Here is the (very slightly) modified code:

using OpenTK.Audio;
using OpenTK.Audio.OpenAL;
using System;
using System.IO;
using System.Threading;

namespace audiotest
{

    class Program
    {
        static void Main(string[] args)
        {
            var recorders = AudioCapture.AvailableDevices;
            for (int i = 0; i < recorders.Count; i++)
            {
                Console.WriteLine(recorders[i]);
            }
            Console.WriteLine("-----");

            const int samplingRate = 44100;     // Samples per second

            //const ALFormat alFormat = ALFormat.Mono16;
            //const ushort numChannels = 1;       // Mono16 has 1 channel

            const ALFormat alFormat = ALFormat.Stereo16;
            const ushort numChannels = 2;       // Stereo16 has 2 channels

            const ushort bitsPerSample = 16;    // Mono16 and Stereo16 have 16 bits per sample

            using (var f = File.OpenWrite(@"C:\sarbar\test.wav"))
            using (var sw = new BinaryWriter(f))
            {
                // Read This: http://soundfile.sapp.org/doc/WaveFormat/

                sw.Write(new char[] { 'R', 'I', 'F', 'F' });
                sw.Write(0); // will fill in later
                sw.Write(new char[] { 'W', 'A', 'V', 'E' });
                // "fmt " chunk (Google: WAVEFORMATEX structure)
                sw.Write(new char[] { 'f', 'm', 't', ' ' });
                sw.Write(16); // chunkSize (in bytes)
                sw.Write((ushort)1); // wFormatTag (PCM = 1)
                sw.Write(numChannels); // wChannels
                sw.Write(samplingRate); // dwSamplesPerSec
                sw.Write(samplingRate * numChannels * (bitsPerSample / 8)); // dwAvgBytesPerSec
                sw.Write((ushort)(numChannels * (bitsPerSample / 8))); // wBlockAlign
                sw.Write(bitsPerSample); // wBitsPerSample
                // "data" chunk
                sw.Write(new char[] { 'd', 'a', 't', 'a' });
                sw.Write(0); // will fill in later

                // 50 seconds of data. Overblown, but it gets the job done
                const int bufferLength = samplingRate * 50;
                int samplesWrote = 0;

                Console.WriteLine($"Recording from: {recorders[0]}");

                using (var audioCapture = new AudioCapture(
                    recorders[0], samplingRate, alFormat, bufferLength))
                {
                    var buffer = new short[bufferLength];

                    audioCapture.Start();
                    for (int i = 0; i < 10; ++i)
                    {
                        Thread.Sleep(5000); // give it some time to collect samples

                        var samplesAvailable = audioCapture.AvailableSamples;
                        audioCapture.ReadSamples(buffer, samplesAvailable);
                        for (var x = 0; x < samplesAvailable; ++x)
                        {
                            sw.Write(buffer[x]);
                            //sw.Write(buffer[x]); // writing each Mono16 sample twice fakes a second channel
                        }

                        samplesWrote += samplesAvailable;

                        Console.WriteLine($"Wrote {samplesAvailable}/{samplesWrote} samples...");
                    }
                    audioCapture.Stop();
                }

                sw.Seek(4, SeekOrigin.Begin); // seek to overall size
                sw.Write(36 + samplesWrote * (bitsPerSample / 8) * numChannels);
                sw.Seek(40, SeekOrigin.Begin); // seek to data size position
                sw.Write(samplesWrote * (bitsPerSample / 8) * numChannels);
            }
        }
    }
}
