https://developer.apple.com/documentation/audiounit
Apple provides a number of audio frameworks we could use. This
implementation uses the Audio Unit framework, as it gives us the most
control over the rendering of audio frames (such as being able to
quickly pause or discard buffers). From some reading, we could also
implement niceties such as fading playback in and out over a short
(10 ms) period while seeking. This patch does not implement such
features, though.
Co-Authored-By: Andrew Kaster <akaster@serenityos.org>
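For reference, a minimal sketch of the pull-model setup this approach
relies on: CoreAudio invokes a render callback whenever the output
device needs more frames. This is only an illustrative outline, not the
code in this patch; the callback here emits silence where a real
implementation would copy decoded samples.

```cpp
#include <AudioToolbox/AudioToolbox.h>
#include <chrono>
#include <cstring>
#include <thread>

// Runs on CoreAudio's render thread whenever the device needs frames.
// We just zero the buffers (silence); a real implementation would copy
// decoded samples here, and could apply a short gain ramp to fade
// playback in and out around seeks.
static OSStatus render_callback(void*, AudioUnitRenderActionFlags*,
    AudioTimeStamp const*, UInt32, UInt32, AudioBufferList* io_data)
{
    for (UInt32 i = 0; i < io_data->mNumberBuffers; ++i)
        memset(io_data->mBuffers[i].mData, 0, io_data->mBuffers[i].mDataByteSize);
    return noErr;
}

int main()
{
    // Locate the default output Audio Unit.
    AudioComponentDescription description {};
    description.componentType = kAudioUnitType_Output;
    description.componentSubType = kAudioUnitSubType_DefaultOutput;
    description.componentManufacturer = kAudioUnitManufacturer_Apple;
    AudioComponent component = AudioComponentFindNext(nullptr, &description);

    AudioUnit unit = nullptr;
    AudioComponentInstanceNew(component, &unit);

    // Attach the pull-style render callback to the input scope of bus 0.
    AURenderCallbackStruct callback { render_callback, nullptr };
    AudioUnitSetProperty(unit, kAudioUnitProperty_SetRenderCallback,
        kAudioUnitScope_Input, 0, &callback, sizeof(callback));

    AudioUnitInitialize(unit);
    AudioOutputUnitStart(unit);

    // Render for a few seconds; pausing quickly is just a matter of
    // calling AudioOutputUnitStop(), which is the kind of control
    // mentioned above.
    std::this_thread::sleep_for(std::chrono::seconds(5));

    AudioOutputUnitStop(unit);
    AudioUnitUninitialize(unit);
    AudioComponentInstanceDispose(unit);
}
```

On recent macOS this should build with `clang++ -framework
AudioToolbox`, as the Audio Unit APIs live under the AudioToolbox
umbrella framework.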
This implementation is very naive compared to the PulseAudio one.
Instead of using a callback invoked by the audio server connection
whenever it needs more audio, we have to poll on a timer to check when
the next buffers should be pushed. Adding cross-process condition
variables to the audio queue class would let us avoid polling, which
may prove beneficial to CPU usage.
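A sketch of the polling pattern described above, using an in-process
stand-in for the shared queue; `can_enqueue()` and `enqueue()` are
illustrative names, not the actual audio queue API:

```cpp
#include <atomic>
#include <chrono>
#include <functional>
#include <thread>
#include <vector>

// Stand-in for the real cross-process audio queue (names illustrative).
struct FakeSharedQueue {
    static constexpr size_t capacity = 4;
    std::atomic<size_t> used { 0 };

    bool can_enqueue() const { return used.load() < capacity; }
    void enqueue(std::vector<float> const&) { used.fetch_add(1); }
};

static void poll_loop(FakeSharedQueue& queue, std::atomic<bool>& running)
{
    using namespace std::chrono_literals;
    std::vector<float> buffer(4096, 0.0f); // one chunk of samples

    while (running.load()) {
        // Push as many buffers as currently fit...
        while (queue.can_enqueue())
            queue.enqueue(buffer);
        // ...then sleep and re-check. A cross-process condition variable
        // in the queue would let the server wake us exactly when space
        // frees up, instead of burning CPU on periodic wakeups.
        std::this_thread::sleep_for(20ms);
    }
}

int main()
{
    FakeSharedQueue queue;
    std::atomic<bool> running { true };
    std::thread poller(poll_loop, std::ref(queue), std::ref(running));
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    running = false;
    poller.join();
}
```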
Audio timestamps will be accurate to the number of samples written, but
will advance in increments of roughly 100 ms and run ahead of the audio
actually being pushed to the device by the server.
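To illustrate the granularity (the buffer size and sample rate here are
assumed values, not taken from the patch): the reported position is
derived from how many samples have been handed to the server, so it
advances one buffer at a time and leads what has actually played.

```cpp
#include <cstddef>
#include <cstdio>

// Position derived purely from samples handed to the server.
static double position_seconds(size_t samples_written, unsigned sample_rate)
{
    return static_cast<double>(samples_written) / sample_rate;
}

int main()
{
    unsigned const sample_rate = 44100;
    size_t const buffer_size = 4410; // ~100 ms per buffer (assumed)

    // Prints 100.0 ms, 200.0 ms, 300.0 ms: the timestamp jumps by a
    // whole buffer each time one is enqueued, regardless of how much
    // of it has actually reached the device.
    for (size_t written = buffer_size; written <= 3 * buffer_size; written += buffer_size)
        std::printf("reported position: %.1f ms\n",
            position_seconds(written, sample_rate) * 1000.0);
}
```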
Buffer underruns are ignored entirely for now as well, since the
`AudioServer` has no way to know how many samples were actually written
to a single audio buffer.
This adds an abstract `Audio::PlaybackStream` class that lets
applications in both Serenity and Lagom perform audio playback in an
opaque, cross-platform manner.
Currently, the only supported audio API is PulseAudio, but a Serenity
implementation should be added shortly as well.
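A rough sketch of the shape such an abstraction could take; the method
names and types below are illustrative, not the actual class
definition:

```cpp
#include <cstddef>
#include <cstdint>
#include <memory>

namespace Audio {

// Illustrative sample type; the real class may use different types.
struct Sample {
    float left;
    float right;
};

class PlaybackStream {
public:
    virtual ~PlaybackStream() = default;

    // Factory choosing the platform backend: a PulseAudio-backed stream
    // on Lagom today, an AudioServer-backed one on Serenity later.
    static std::unique_ptr<PlaybackStream> create(uint32_t sample_rate, uint8_t channels);

    // Applications program against this opaque interface and never see
    // the backend directly.
    virtual void resume() = 0;
    virtual void pause() = 0;
    virtual size_t write(Sample const* samples, size_t count) = 0;
};

}
```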