When upsampling the audio by a significant factor, it may be necessary
to process more than one buffer at a time, rather than lose input.
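
A minimal sketch of the idea, assuming hypothetical helpers: InputChunk,
resampleFrames, and emitOutputBuffer are illustrative names, not
functions from this codebase. The point is to keep converting until the
whole input chunk is consumed, emitting as many output buffers as the
ratio demands:

    #import <Foundation/Foundation.h>

    typedef struct {
        const float *samples;
        size_t frameCount;
    } InputChunk;

    // Assumed helpers: a resampler that reports how much input it
    // consumed, and a sink that accepts finished output buffers.
    extern size_t resampleFrames(const float *input, size_t inputFrames,
                                 size_t *framesConsumed,
                                 float *output, size_t maxOutputFrames);
    extern void emitOutputBuffer(const float *output, size_t frames);

    // At a high upsampling ratio, one input chunk can produce more
    // output than a single buffer holds; loop until the whole chunk is
    // consumed instead of dropping the remainder.
    static void processChunk(const InputChunk *chunk) {
        size_t consumed = 0;
        while(consumed < chunk->frameCount) {
            float output[4096];
            size_t used = 0;
            size_t produced = resampleFrames(chunk->samples + consumed,
                                             chunk->frameCount - consumed,
                                             &used, output, 4096);
            if(used == 0 && produced == 0) break; // no progress; bail out
            consumed += used;
            emitOutputBuffer(output, produced);
        }
    }
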
Signed-off-by: Christopher Snowhill <kode54@gmail.com>
The visualization buffer now holds up to 45 seconds of audio in its
loop. The latency measurement code now caps this at 30 seconds, and
restarts the output if latency exceeds 30 seconds, such as when a sound
output is reset.
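
A minimal sketch of the cap-and-restart check, assuming hypothetical
currentOutputLatency and restartOutput helpers rather than the actual
names in this output:

    #import <Foundation/Foundation.h>

    extern double currentOutputLatency(void); // assumed measurement helper
    extern void restartOutput(void);          // assumed teardown and reopen

    static const double kMaxLatencySeconds = 30.0;

    static void checkOutputLatency(void) {
        // Latency beyond the cap means the measurement is stale, for
        // example after the sound output was reset; restart the output
        // rather than trust the number.
        if(currentOutputLatency() > kMaxLatencySeconds) {
            restartOutput();
        }
    }
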
Signed-off-by: Christopher Snowhill <kode54@gmail.com>
For one thing, the example code I followed was Swift, which handled
releasing handles automatically in the background, while Objective-C
requires manual handle reference management.
For another, there was no autoreleasepool around the block handling the
input audio chunks, which need to be released as they are pulled out and
disposed of. This also contributed to the memory leak.
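
A minimal sketch of the pool placement, assuming hypothetical readChunk
and processChunk: methods, and a stopping flag, standing in for the real
names:

    - (void)drainInputChunks {
        while(!self.stopping) {          // assumed flag
            @autoreleasepool {
                id chunk = [self readChunk]; // assumed accessor
                if(!chunk) break;
                [self processChunk:chunk];   // assumed consumer
            }
            // Everything autoreleased while handling the chunk dies at
            // the end of the pool block, instead of accumulating for
            // the lifetime of the rendering thread.
        }
    }
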
Signed-off-by: Christopher Snowhill <kode54@gmail.com>
Correctly configure AVFoundation with the channel layouts supported by
WAVEFORMATEXTENSIBLE speaker position flags, covering the varied formats
supported by the FFmpeg and Core Audio inputs.
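
A minimal sketch of the mapping; the function name is mine, but the
Core Audio API is real, and AudioChannelBitmap's bit values line up with
WAVEFORMATEXTENSIBLE's dwChannelMask speaker position flags:

    #import <AVFoundation/AVFoundation.h>

    // The WAVE speaker position mask can be handed to Core Audio
    // directly, since kAudioChannelLayoutTag_UseChannelBitmap interprets
    // mChannelBitmap with the same bit assignments (SPEAKER_FRONT_LEFT
    // is bit 0, SPEAKER_FRONT_RIGHT is bit 1, and so on).
    static AVAudioChannelLayout *layoutFromChannelMask(UInt32 channelMask) {
        AudioChannelLayout acl = { 0 };
        acl.mChannelLayoutTag = kAudioChannelLayoutTag_UseChannelBitmap;
        acl.mChannelBitmap = channelMask;
        return [[AVAudioChannelLayout alloc] initWithLayout:&acl];
    }

The resulting layout can then be passed to AVAudioFormat's
initStandardFormatWithSampleRate:channelLayout: when building the format
handed to the output.
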
Signed-off-by: Christopher Snowhill <kode54@gmail.com>
Stop the output when requested, except on natural completion of the
last track in the play queue. Also fix deadlocks when stopping and
restarting.
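
A minimal sketch of a stop path for this renderer stack; the calls are
real AVFoundation API, but whether this mirrors the actual fix here is
an assumption. The relevant property is that none of these calls waits
on the media-data callback queue, which is one way such deadlocks arise:

    #import <AVFoundation/AVFoundation.h>

    static void stopOutput(AVSampleBufferAudioRenderer *renderer,
                           AVSampleBufferRenderSynchronizer *synchronizer) {
        [renderer stopRequestingMediaData]; // cancel the ready-for-data block
        synchronizer.rate = 0;              // halt the playback timeline
        [renderer flush];                   // discard queued sample buffers
    }
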
Signed-off-by: Christopher Snowhill <kode54@gmail.com>
The output now uses AVSampleBufferAudioRenderer to play all formats,
and relies on it to resample. It also supports Spatial Audio on macOS
12.0 or newer. Note that there are some outstanding bugs with Spatial
Audio support: it appears to be limited to only 192 kHz at mono or
stereo configurations, or 352.8 kHz at surround configurations. This
breaks DSD64 playback at stereo formats, and possibly other things as
well. This is entirely an Apple bug. I have reported it to Apple under
reference code FB10441301, in case anyone else wants to complain that it
isn't fixed.
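
A minimal sketch of the renderer setup; the renderer, synchronizer, and
spatialization property are real AVFoundation API, while nextSampleBuffer
is an assumed stand-in for whatever produces CMSampleBuffers from the
decoded audio:

    #import <AVFoundation/AVFoundation.h>

    extern CMSampleBufferRef nextSampleBuffer(void); // assumed source

    static AVSampleBufferRenderSynchronizer *startOutput(void) {
        AVSampleBufferAudioRenderer *renderer =
            [[AVSampleBufferAudioRenderer alloc] init];
        AVSampleBufferRenderSynchronizer *synchronizer =
            [[AVSampleBufferRenderSynchronizer alloc] init];
        [synchronizer addRenderer:renderer];

        // Spatial Audio is only available on macOS 12.0 or newer.
        if(@available(macOS 12.0, *)) {
            renderer.allowedAudioSpatializationFormats =
                AVAudioSpatializationFormatMonoStereoAndMultichannel;
        }

        dispatch_queue_t feedQueue =
            dispatch_queue_create("feeder", DISPATCH_QUEUE_SERIAL);
        [renderer requestMediaDataWhenReadyOnQueue:feedQueue usingBlock:^{
            while(renderer.readyForMoreMediaData) {
                CMSampleBufferRef buffer = nextSampleBuffer();
                if(!buffer) {
                    [renderer stopRequestingMediaData];
                    return;
                }
                [renderer enqueueSampleBuffer:buffer];
                CFRelease(buffer);
            }
        }];

        [synchronizer setRate:1.0 time:kCMTimeZero];
        return synchronizer;
    }
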
Signed-off-by: Christopher Snowhill <kode54@gmail.com>