
Saving audio recorded with .NET Core C# on a Raspberry Pi to a WAV file

I have found it hard to find a way to store audio captured with OpenTk.NetStandard into a proper .WAV file from .NET Core C#.

I'm looking for a solution that can run on a Raspberry Pi, so NAudio or any other Windows-specific approach won't solve my problem.

I found two other SO answers that show how to capture audio with OpenTK, but neither says anything about how to store it in a WAV file.

Here is the relevant part of the code, taken from one of those questions, that should read data from the microphone; the AudioCapture class appears to be the one to use:

  const byte SampleToByte = 2;      // bytes per Mono16 sample
  short[] _buffer = new short[512];
  int _sampling_rate = 16000;
  double _buffer_length_ms = 5000;
  var _recorders = AudioCapture.AvailableDevices;
  int buffer_length_samples = (int)(_buffer_length_ms * _sampling_rate * 0.001 / BlittableValueType.StrideOf(_buffer));

  using (var audioCapture = new AudioCapture(_recorders.First(), _sampling_rate, ALFormat.Mono16, buffer_length_samples))
  {
      audioCapture.Start();
      int available_samples = audioCapture.AvailableSamples;

      _buffer = new short[MathHelper.NextPowerOfTwo((int)(available_samples * SampleToByte / (double)BlittableValueType.StrideOf(_buffer) + 0.5))];

      if (available_samples > 0)
      {
          audioCapture.ReadSamples(_buffer, available_samples);

          int buf = AL.GenBuffer();
          AL.BufferData(buf, ALFormat.Mono16, _buffer, available_samples * BlittableValueType.StrideOf(_buffer), audioCapture.SampleFrequency);
          AL.SourceQueueBuffer(src, buf); // src: an OpenAL source id created elsewhere in the original snippet

          // Todo: I assume this is where the save-to-WAV-file logic should be placed...
      }
  }

Any help would be greatly appreciated!

Solution

Here is a .NET Core console program that writes Mono 16-bit data to a WAV file. You should read through the links in the source to understand the meaning of the values being written.

It records 10 seconds of data and saves it to a file in WAV format:

using OpenTK.Audio;
using OpenTK.Audio.OpenAL;
using System;
using System.IO;
using System.Threading;

class Program
{
    static void Main(string[] args)
    {
        var recorders = AudioCapture.AvailableDevices;
        for (int i = 0; i < recorders.Count; i++)
        {
            Console.WriteLine(recorders[i]);
        }
        Console.WriteLine("-----");

        const int samplingRate = 44100;     // Samples per second

        const ALFormat alFormat = ALFormat.Mono16;
        const ushort bitsPerSample = 16;    // Mono16 has 16 bits per sample
        const ushort numChannels = 1;       // Mono16 has 1 channel

        using (var f = File.OpenWrite("out.wav")) // relative path, so it also works on the Raspberry Pi
        using (var sw = new BinaryWriter(f))
        {
            // Read This: http://soundfile.sapp.org/doc/WaveFormat/

            sw.Write(new char[] { 'R','I','F','F' });
            sw.Write(0); // will fill in later
            sw.Write(new char[] { 'W','A','V','E' });
            // "fmt " chunk (Google: WAVEFORMATEX structure)
            sw.Write(new char[] { 'f','m','t',' ' });
            sw.Write(16); // chunkSize (in bytes)
            sw.Write((ushort)1); // wFormatTag (PCM = 1)
            sw.Write(numChannels); // wChannels
            sw.Write(samplingRate); // dwSamplesPerSec
            sw.Write(samplingRate * numChannels * (bitsPerSample / 8)); // dwAvgBytesPerSec
            sw.Write((ushort)(numChannels * (bitsPerSample / 8))); // wBlockAlign
            sw.Write(bitsPerSample); // wBitsPerSample
            // "data" chunk
            sw.Write(new char[] { 'd','a','t','a' });
            sw.Write(0); // will fill in later

            // 10 seconds of data. Overblown, but it gets the job done
            const int bufferLength = samplingRate * 10;
            int samplesWrote = 0;

            Console.WriteLine($"Recording from: {recorders[0]}");

            using (var audioCapture = new AudioCapture(
                recorders[0], samplingRate, alFormat, bufferLength))
            {
                var buffer = new short[bufferLength];

                audioCapture.Start();
                for (int i = 0; i < 10; ++i)
                {
                    Thread.Sleep(1000); // give it some time to collect samples

                    var samplesAvailable = audioCapture.AvailableSamples;
                    audioCapture.ReadSamples(buffer, samplesAvailable);
                    for (var x = 0; x < samplesAvailable; ++x)
                    {
                        sw.Write(buffer[x]);
                    }

                    samplesWrote += samplesAvailable;

                    Console.WriteLine($"Wrote {samplesAvailable}/{samplesWrote} samples...");
                }
                audioCapture.Stop();
            }

            sw.Seek(4, SeekOrigin.Begin); // seek to overall RIFF size
            sw.Write(36 + samplesWrote * (bitsPerSample / 8) * numChannels);
            sw.Seek(40, SeekOrigin.Begin); // seek to data chunk size
            sw.Write(samplesWrote * (bitsPerSample / 8) * numChannels);
        }
    }
}
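The two size fields the program patches in at the end (byte offsets 4 and 40) follow directly from the number of samples written. As a quick sanity check of that arithmetic, here is a minimal sketch; the `HeaderSizes` helper is my own illustration, not part of OpenTK or the program above:

```csharp
using System;

class WavSizes
{
    // Computes the two size fields the recorder patches in afterwards:
    // the RIFF chunk size at byte offset 4 and the data chunk size at offset 40.
    // 36 = 4 ("WAVE" tag) + 8 + 16 ("fmt " chunk header + body) + 8 ("data" chunk header).
    static (int overall, int data) HeaderSizes(int samples, int channels, int bitsPerSample)
    {
        int data = samples * channels * (bitsPerSample / 8);
        return (36 + data, data);
    }

    static void Main()
    {
        // e.g. 10 seconds of 44100 Hz mono 16-bit audio, as in the program above
        var (overall, data) = HeaderSizes(44100 * 10, 1, 16);
        Console.WriteLine($"overall={overall} data={data}"); // overall=882036 data=882000
    }
}
```

For a valid file, the overall size should also equal the file length minus 8, which makes an easy check when debugging a player that refuses the output.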
