I currently have two ways of getting a WAV file to the server: users can either upload a file directly or record one with their microphone. In both cases the file goes through the same processing workflow: it is uploaded to S3 and can be played back later by clicking a link that triggers playback via
new Audio('https://S3.url').play()
When dealing with a file recorded from the microphone:
- The audio.play() call appears to work, but no sound comes out of the speakers; for an uploaded file, the sound plays as expected (a small instrumentation sketch follows this list).
- Visiting the URLs directly opens the sound player in Chrome, or starts a download of the WAV file in Firefox. The player correctly plays both sounds, and the downloaded WAV files contain the respective audio and can be played by other programs.
- If the microphone-recorded sound is downloaded and then manually uploaded back to the server, everything works as intended; the re-uploaded WAV file behaves like any other uploaded file.
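For what it's worth, here is a minimal sketch of how the playback check can be instrumented (the URL is a placeholder for the presigned S3 URL; the extra listeners are only there to surface a decode or playback error if one occurs):

var audio = new Audio('https://S3.url/mic_recording.wav');  // placeholder URL
audio.addEventListener('loadedmetadata', function() {
    console.log('duration:', audio.duration);               // decoded length in seconds
});
audio.addEventListener('error', function() {
    console.log('media error code:', audio.error && audio.error.code);
});
audio.play().catch(function(err) {
    console.log('play() rejected:', err);                   // e.g. autoplay policy
});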
To capture sound from the user's microphone, I display a record button on my webpage using WebAudioTrack. After the user stops their recording and hits the submit button, the following code is executed:
saveRecButton.addEventListener("click", function() {
    save_recording(audioTrack.audioData.getChannelData(0));
});
In this snippet, audioTrack.audioData is an AudioBuffer containing the recorded sound. getChannelData(0) returns a Float32Array representing the samples, which is then sent to the server (Django) via AJAX:
function save_recording(channelData) {
    var uploadFormData = new FormData();
    uploadFormData.append('data', $('#some_field').val());
    ...
    uploadFormData.append('audio', channelData);
    $.ajax({
        'method': 'POST',
        'url': '/soundtests/save_recording/',
        'data': uploadFormData,
        'cache': false,
        'contentType': false,
        'processData': false,
        'success': function(dataReturned) {
            if (dataReturned != "success") {
                // [- Do Some Stuff -]
            }
        }
    });
}
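As far as I can tell, appending a Float32Array to FormData is not special-cased: it gets stringified, so what actually reaches the Django view should be comma-separated text rather than binary data. A small sketch of that assumption (variable names match the snippets above):

// channelData is the Float32Array from getChannelData(0)
var channelData = audioTrack.audioData.getChannelData(0);

// FormData.append() coerces non-Blob/File values to strings, so the server
// should receive something like "0.0012,-0.0024,..." for the 'audio' field.
var asString = String(channelData);            // same result as channelData.join(',')
console.log(asString.slice(0, 80));

// The buffer's own sample rate, to compare with the 48000 hard-coded server-side.
console.log(audioTrack.audioData.sampleRate);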
On the server, a WAV file is then generated from that array using wavio:
import wavio
import tempfile
from numpy import array
from django.http import HttpResponse


def save_recording(request):
    # SoundForm, s3_bucket and some_key are defined elsewhere in the app.
    if request.is_ajax() and request.method == 'POST':
        form = SoundForm(request.POST)
        if form.is_valid():
            with tempfile.NamedTemporaryFile() as sound_recording:
                # The Float32Array arrives as a comma-separated string.
                sound_array_string = request.POST.get('audio')
                sound_array = array([float(x) for x in sound_array_string.split(',')])
                # Write a 48 kHz WAV with 4 bytes per sample into the temp file.
                wavio.write(sound_recording, sound_array, 48000, sampwidth=4)
                sound_recording.seek(0)
                s3_bucket.put_object(Key=some_key, Body=sound_recording, ContentType='audio/x-wav')
    return HttpResponse('success')
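In case it's relevant, a quick read-back with the standard-library wave module could be dropped in right after the seek(0) to double-check what wavio produced before it goes to S3 (sketch only, reusing the variables from the view above):

import wave

# Sketch: verify the temp file's header before put_object() is called.
sound_recording.seek(0)
check = wave.open(sound_recording, 'rb')
print(check.getframerate())    # expected: 48000
print(check.getsampwidth())    # expected: 4 (bytes per sample)
print(check.getnframes())      # expected: len(sound_array)
check.close()
sound_recording.seek(0)        # rewind again so the S3 upload still sees the whole file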
When it's time to listen to the sound:
In Python:
import boto3

session = boto3.Session(aws_access_key_id='key', aws_secret_access_key='s_key')
bucket = session.resource('s3').Bucket(name='bucket_name')
url = session.client('s3').generate_presigned_url(
    'get_object',
    Params={'Bucket': bucket.name, 'Key': 'appropriate_sound_key'}
)
And in JavaScript:
var audio = new Audio('url_given_by_above_python');
audio.play();
The audio plays correctly for an uploaded file, but not at all for one recorded from the user's microphone. Is there something specific about WAV files that I might be overlooking when I upload microphone-recorded sound to S3 and then re-download it? I can't find any difference between the two files; even the Audio objects created from the mic-recording URL and from a re-downloaded, manually uploaded file look identical (apart from the URL itself, and both URLs play the sound when visited or downloaded).
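One more comparison I can think of is the raw S3 metadata for the two objects, in case their sizes or content types differ (a sketch reusing the session from above; the key names are placeholders):

client = session.client('s3')

# Placeholder keys for the manually uploaded file and the mic recording.
for key in ['uploaded_sound_key', 'mic_recorded_sound_key']:
    head = client.head_object(Bucket='bucket_name', Key=key)
    print(key, head['ContentLength'], head['ContentType'])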
There seems to be a discrepancy here, but I'm unsure what it could be and have been grappling with this issue for a few days now. :(