Android Wear Audio Recorder using ChannelAPI

I am trying to create an audio recording app for Android Wear. Right now, I can capture the sound on the watch, transfer it to my phone, and save it to a file. However, there are gaps or cut-offs in the audio file.

I found these related questions ( link1 , link2 ) about my problem, but they couldn't help me.


Here is my code:

Firstly, on the watch side, I open a channel using the ChannelApi and successfully send the audio recorded on the watch to the smartphone.

// Here are the variable values that I used.

// 44100 Hz is currently the only rate that is guaranteed to work on all devices,
// but other rates such as 22050, 16000, and 11025 may work on some devices.

private static final int RECORDER_SAMPLE_RATE = 44100; 
private static final int RECORDER_CHANNELS = AudioFormat.CHANNEL_IN_MONO;
private static final int RECORDER_AUDIO_ENCODING = AudioFormat.ENCODING_PCM_16BIT;
int BufferElements2Rec = 1024; 
int BytesPerElement = 2; 

//start the process of recording audio
private void startRecording() {

    recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
            RECORDER_SAMPLE_RATE, RECORDER_CHANNELS,
            RECORDER_AUDIO_ENCODING, BufferElements2Rec * BytesPerElement);

    recorder.startRecording();
    isRecording = true;
    recordingThread = new Thread(new Runnable() {
        public void run() {
            writeAudioDataToPhone();
        }
    }, "AudioRecorder Thread");
    recordingThread.start();
}

private void writeAudioDataToPhone(){

    short sData[] = new short[BufferElements2Rec];
    ChannelApi.OpenChannelResult result = Wearable.ChannelApi.openChannel(googleClient, nodeId, "/mypath").await();
    channel = result.getChannel();

    Channel.GetOutputStreamResult getOutputStreamResult = channel.getOutputStream(googleClient).await();
    OutputStream outputStream = getOutputStreamResult.getOutputStream();

    while (isRecording) {
        // read microphone samples into the short[] buffer
        recorder.read(sData, 0, BufferElements2Rec);
        try {
            byte bData[] = short2byte(sData);
            outputStream.write(bData);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    try {
        outputStream.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
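
The short2byte() helper called above is not shown in the question. A minimal sketch of what it presumably does (my own reconstruction, assuming the little-endian byte order that Android's ENCODING_PCM_16BIT uses) could look like this:

```java
// Hypothetical reconstruction of the question's short2byte() helper:
// packs each 16-bit sample as two bytes, low byte first (little-endian),
// matching Android's ENCODING_PCM_16BIT sample layout.
public class Short2Byte {
    public static byte[] short2byte(short[] sData) {
        byte[] bytes = new byte[sData.length * 2];  // two bytes per sample
        for (int i = 0; i < sData.length; i++) {
            bytes[i * 2] = (byte) (sData[i] & 0xFF);    // low byte
            bytes[i * 2 + 1] = (byte) (sData[i] >> 8);  // high byte
        }
        return bytes;
    }
}
```

Note that this allocates a fresh array on every call, which is exactly the per-iteration garbage the answer below warns about.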

Then, from the smartphone side, I get the audio data from the channel and write it to a PCM file.

public void onChannelOpened(Channel channel) {
    if (channel.getPath().equals("/mypath")) {
        Channel.GetInputStreamResult getInputStreamResult = channel.getInputStream(mGoogleApiClient).await();
        inputStream = getInputStreamResult.getInputStream();

        writePCMToFile(inputStream);

        MainActivity.this.runOnUiThread(new Runnable() {
            public void run() {
                Toast.makeText(MainActivity.this, "Audio file received!", Toast.LENGTH_SHORT).show();
            }
        });
    }
}

public void writePCMToFile(InputStream inputStream) {
    OutputStream outputStream = null;

    try {
        // write the inputStream to a FileOutputStream
        outputStream = new FileOutputStream(new File("/sdcard/wearRecord.pcm"));

        int read = 0;
        byte[] bytes = new byte[1024];

        while ((read = inputStream.read(bytes)) != -1) {
            outputStream.write(bytes, 0, read);
        }

        System.out.println("Done writing PCM to file!");

    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        if (inputStream != null) {
            try {
                inputStream.close();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
        if (outputStream != null) {
            try {
                // outputStream.flush();
                outputStream.close();
            } catch (Exception e) {
                e.printStackTrace();
            }

        }
    }
}
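
As a side note (not part of the question's code): most players cannot open a raw .pcm file directly. If the goal is a playable file, a hypothetical helper could prepend the standard 44-byte WAV header, assuming 16-bit PCM as recorded above:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.charset.StandardCharsets;

// Hypothetical helper: writes a standard 44-byte WAV header for 16-bit PCM,
// to be emitted before the raw PCM bytes so ordinary players can open the file.
public class WavHeader {
    public static void writeHeader(OutputStream out, int sampleRate,
                                   int channels, int dataSize) throws IOException {
        ByteBuffer b = ByteBuffer.allocate(44).order(ByteOrder.LITTLE_ENDIAN);
        b.put("RIFF".getBytes(StandardCharsets.US_ASCII));
        b.putInt(36 + dataSize);                 // RIFF chunk size
        b.put("WAVE".getBytes(StandardCharsets.US_ASCII));
        b.put("fmt ".getBytes(StandardCharsets.US_ASCII));
        b.putInt(16);                            // fmt sub-chunk size for PCM
        b.putShort((short) 1);                   // audio format 1 = PCM
        b.putShort((short) channels);
        b.putInt(sampleRate);
        b.putInt(sampleRate * channels * 2);     // byte rate (16-bit samples)
        b.putShort((short) (channels * 2));      // block align
        b.putShort((short) 16);                  // bits per sample
        b.put("data".getBytes(StandardCharsets.US_ASCII));
        b.putInt(dataSize);                      // size of the PCM payload
        out.write(b.array());
    }
}
```

For the recording above you would call it with sampleRate 44100 and channels 1 before streaming the PCM bytes into the same file.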

What am I doing wrong, or what would you suggest for producing a seamless, gap-free audio file on the smartphone? Thanks in advance.



1 answer


I noticed in your code that you are reading into a short[] array and then converting it to a byte[] array before handing it to the Channel API. Your code also creates a new byte[] array on each iteration of the loop, which creates a lot of work for the garbage collector. In general, you want to avoid allocations inside loops.

I would allocate a single byte[] array up front and let the AudioRecord class read directly into it (just make sure you allocate twice as many bytes as you had shorts, since each 16-bit sample occupies two bytes), with code like this:

// allocate once, outside the loop; bufferSize is in bytes
mAudioTemp = new byte[bufferSize];

int result;
// read() returns the number of bytes actually read (or a negative error code)
while ((result = mAudioRecord.read(mAudioTemp, 0, mAudioTemp.length)) > 0) {
  try {
    // forward only the bytes that were actually read
    mAudioStream.write(mAudioTemp, 0, result);
  } catch (IOException e) {
    Log.e(Const.TAG, "Write to audio channel failed: " + e);
  }
}

I also tested this with a one-second audio buffer, using code like the following, and it worked well. I don't know what the minimum buffer size is before problems start to appear:

int bufferSize = Math.max(
  AudioRecord.getMinBufferSize(44100, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT),
  44100 * 2); // 88200 bytes = one second of 16-bit mono audio at 44100 Hz
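
For a sense of scale (my own arithmetic, not from the answer): the question's 1024-sample buffer holds only about 23 ms of audio at 44100 Hz, while an 88200-byte buffer holds a full second, leaving far more slack before samples are dropped:

```java
public class BufferScale {
    // duration in milliseconds of a buffer holding `samples` 16-bit mono samples
    public static double bufferMillis(int samples, int sampleRate) {
        return samples * 1000.0 / sampleRate;
    }

    public static void main(String[] args) {
        // the question's buffer: 1024 samples at 44100 Hz (~23 ms)
        System.out.printf("question buffer: %.1f ms%n", bufferMillis(1024, 44100));
        // the answer's buffer: 88200 bytes = 44100 samples at 44100 Hz (1 second)
        System.out.printf("answer buffer: %.1f ms%n", bufferMillis(88200 / 2, 44100));
    }
}
```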
