
AudioTag backend. #192

Closed
wants to merge 1 commit into from

Conversation

jlegoff

@jlegoff jlegoff commented May 14, 2014

Hi,

First, thanks for your project! It works really well and the code is clear and easy to modify.

So I've been tinkering with it in order to be able to load large audio files (>1 hour), which didn't work for me. I'm aware of the streaming mode, but I want the wave to be drawn straight away. The only solution I've found is to have my server compute the peaks and send them to wavesurfer.
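For context, the server-side peak computation described here can be sketched roughly like this (a hypothetical helper, not code from this PR): downsample the decoded PCM samples into one peak per output pixel by taking the maximum absolute value in each bucket.

```javascript
// Hypothetical sketch of peak computation (illustrative, not part of this PR):
// reduce an array of PCM samples to `width` peaks, one per output pixel.
function computePeaks(samples, width) {
    var bucketSize = Math.ceil(samples.length / width);
    var peaks = [];
    for (var i = 0; i < width; i++) {
        var max = 0;
        var end = Math.min((i + 1) * bucketSize, samples.length);
        for (var j = i * bucketSize; j < end; j++) {
            var value = Math.abs(samples[j]);
            if (value > max) { max = value; }
        }
        peaks.push(max);
    }
    return peaks;
}
```

The resulting array could then be serialized to JSON and handed to the client, so the waveform can be drawn without decoding the audio in the browser.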

So I did two things:

  1. added a new parameter, 'getPeaks': a function that gets called instead of backend.getPeaks if provided
  2. added a new backend which does not use the Web Audio API. It just adds an audio tag, much like loadStream does. This makes the waveform compatible with browsers that don't support the Web Audio API.
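The wiring for point 1 might look roughly like this (names are illustrative, not the actual wavesurfer.js internals): when the option is a function, it supplies the peaks; otherwise the backend computes them as before.

```javascript
// Illustrative sketch of the proposed 'getPeaks' option (hypothetical names):
// prefer a caller-supplied peaks function over the backend's own computation.
function resolvePeaks(params, backend, length) {
    if (typeof params.getPeaks === 'function') {
        // caller supplies peaks, e.g. pre-computed on a server
        return params.getPeaks(length);
    }
    // default: let the backend compute peaks via Web Audio decoding
    return backend.getPeaks(length);
}
```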

This is a work in progress and not very well tested. I wanted to let you know about it in case you think this might interest other people.

A couple of comments:

At some point I started moving the downloadArrayBuffer and loadArrayBuffer into the WebAudio backend. My goal was to be able to do something like:

var params = {
    backend: "AudioTag",
    getPeaks: getPeaks
};

wave.load(url);  // don't download the file via AJAX and don't decode it

var params = {
    backend: "WebAudio"
};

wave.load(url);  // use the WebAudio backend just like now

but then I realized I'd broken the loadBlob function. I could have fixed it, but I preferred to drop you a word first to hear your take on this. Right now my feeling is that wavesurfer.js is too aware of the stream handling, and that this could be delegated to the backend (or something else).

I also don't like passing in a function in the parameters, but this was the easiest way of getting it to work for me.

So there you go, sorry for the long comment :-)

@katspaugh
Owner

Thank you, @jlegoff!

Indeed, there is demand for both pure audio tag (for fallback) and audio-tag-in-Web-Audio (for filters and such).

However, I wouldn't like to support a pure audio tag solution. There's waveform.js that does pretty much that. JSON + audio tag (and no Web Audio).

What I would like to support is an optional parameter of loadStream, which would draw from JSON instead of drawing as it plays.

So, one would call it like this:

wavesurfer.ajax({
    url: '/get-peaks.json'
}, function (json) {
    wavesurfer.loadStream('/audio.ogg', json.peaks);
});

@michaelwills

Yes, there is interest. :) The thing about waveform.js is that it doesn't handle audio playback and interactivity. It only lets you display the waveform, unless you are streaming a SoundCloud-specific file.

@katspaugh
Owner

Ah, I didn't know. Well, then let's go for it.

I think the audio tag backend might share some methods with the MediaElementSource backend to avoid code duplication. The peaks as the second parameter to loadStream would be nice too (there are too many global options already).

@jlegoff
Author

jlegoff commented May 14, 2014

So do you think that loadStream should always use the AudioTag backend, then either call drawAsItPlays or drawBuffer depending on whether the peaks are passed in? In this case, shouldn't the backend be initialized in the load method instead of in init?

@katspaugh
Owner

What I mean is, we shouldn't make getPeaks a global option. It should be a parameter of the loadStream function, as an array of peaks, not a callback. The backend should still be initialized on init according to a global option.

The thing is, the media backend also needs this second parameter.

In the case of the audiotag backend, this second parameter should be required (throw an error if it's not present). In the case of the media backend, if pre-loaded peaks aren't present, draw as it plays (the current default).
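The behaviour being agreed on here can be sketched as a small dispatch (hypothetical names; the real logic lives inside wavesurfer.js):

```javascript
// Illustrative sketch of the loadStream(url, peaks) behaviour discussed above.
// Returns which drawing strategy would be used for a given backend and peaks.
function planLoadStream(backendName, peaks) {
    if (backendName === 'AudioTag') {
        // the audiotag backend can't decode audio, so peaks are mandatory
        if (!peaks) {
            throw new Error('AudioTag backend requires pre-computed peaks');
        }
        return 'drawFromPeaks';
    }
    // media backend: use peaks when given, else fall back to draw-as-it-plays
    return peaks ? 'drawFromPeaks' : 'drawAsItPlays';
}
```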

@katspaugh
Owner

Otherwise, create a separate load function, like loadAudioWithPeaks(url, peaks), that would initialize the audiotag backend. That way, we will avoid adding the global backend option.

@jlegoff
Author

jlegoff commented May 15, 2014

Ok, I updated the code. It got a little more complicated than I had planned. This is what I did:

  1. I extracted the audio tag functions into a new object, WaveSurfer.AudioTag.Media. It handles all the interaction with the audio tag element: setting the current time, getting the duration, etc.
  2. I created a new backend, AudioTag, which uses the aforementioned object. The webaudio backend also uses it in streaming mode.
  3. I created a base backend object that gathers the code common to the audiotag and webaudio backends.
  4. The loadStream function gets an optional parameter, "peaks". If it is provided, the wave is drawn straight away from it; if not, it is drawn as the file plays.
  5. The "backend" parameter is still there. I thought it made sense to be able to configure it from the global parameters, the same way it is possible with the renderer.
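The shared-base-object approach described in point 3 might look roughly like this (illustrative names only, not the PR's actual code): copy the common methods onto each backend, in the style of the extend-style mixins wavesurfer.js used at the time.

```javascript
// Illustrative sketch of sharing methods between backends via a base object.
// Copy every property of `src` onto `dest` (a minimal extend helper).
function extend(dest, src) {
    Object.keys(src).forEach(function (key) { dest[key] = src[key]; });
    return dest;
}

// Methods common to the audiotag and webaudio backends (hypothetical subset).
var BaseBackend = {
    isPaused: function () { return !this.playing; },
    getDuration: function () { return this.media ? this.media.duration : 0; }
};

// A backend picks up the shared methods on top of its own state.
var AudioTagBackend = extend({ playing: false, media: null }, BaseBackend);
```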

P.S. I hope you didn't get notified each time I pushed a commit!

@katspaugh
Owner

Great, thanks! A separate Backend object seems like overkill, but I'll take a closer look this weekend.

@jlegoff
Author

jlegoff commented May 15, 2014

Yeah, I wasn't sure about the Backend object, but it's the only way I found to avoid duplicating those functions. Let me know if you find a better solution.

@katspaugh
Owner

Well, maybe those three methods in Backend are no problem to duplicate.

@jlegoff
Author

jlegoff commented May 19, 2014

Right, I can remove that file and duplicate the functions, no problem. Any other comments? I'll be travelling in the next few days so I won't be able to work on it until the end of the week.

This backend makes it possible to play back an audio file without the Web Audio API.
The peaks then need to be provided as an argument in the parameters.
@katspaugh
Owner

Seems like it won't merge automatically now.

@katspaugh
Owner

@jlegoff I had to reimplement the audio tag backend in terms of the Web Audio media backend. Please check the master branch. Thanks for driving this idea and enlightening discussions in this PR.

@katspaugh katspaugh closed this Jun 14, 2014