The output implementation has a block size of 4096, so the class
implementation should also use that.
Signed-off-by: Christopher Snowhill <kode54@gmail.com>
This is the code that actually needs to be added to make it process
audio, but inserting it makes the whole app crash as soon as any audio
is processed. Weirdly, simply reverting these two files makes the audio
code work again; I can't explain it. Commenting out the
CMAudioFormatDescriptionCreate call also makes it work, so something
strange is going on with that function.
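For context, the call in question is roughly of this shape (a sketch;
the helper and variable names are illustrative, not the exact code from
the output chain):

    #import <CoreMedia/CoreMedia.h>

    // Sketch only: building a CMAudioFormatDescription from the stream's
    // AudioStreamBasicDescription, as the output code attempts to do.
    static CMAudioFormatDescriptionRef createFormatDescription(const AudioStreamBasicDescription *asbd) {
        CMAudioFormatDescriptionRef desc = NULL;
        OSStatus status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault,
                                                         asbd,
                                                         0, NULL,  // no channel layout
                                                         0, NULL,  // no magic cookie
                                                         NULL,     // no extensions
                                                         &desc);
        if (status != noErr) return NULL;
        return desc; // caller owns the reference and must CFRelease() it
    }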
Signed-off-by: Christopher Snowhill <kode54@gmail.com>
This is a working implementation of FreeSurround, but I can't get it to
work in the Cog code base, as the whole project crashes head over heels
if this code is inserted into the output chain.
Signed-off-by: Christopher Snowhill <kode54@gmail.com>
Move the Float32 converter to a different location, in anticipation of
future plans to decode audio files to a common format for other purposes.
Signed-off-by: Christopher Snowhill <kode54@gmail.com>
This should also close a potential hole where problems could occur if
the audio format changes while no audio is buffered.
Signed-off-by: Christopher Snowhill <kode54@gmail.com>
The input chain could hang indefinitely, and the MAD decoder didn't
indicate end of file properly.
Signed-off-by: Christopher Snowhill <kode54@gmail.com>
readAudio now returns an AudioChunk object directly, and all inputs have
been changed to accommodate this. Input and converter processing have
also been altered to work better with it.
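A rough sketch of the shape of the change (the exact protocol and method
names are assumptions based on the description above):

    #import <Foundation/Foundation.h>

    @class AudioChunk;

    // Sketch of the interface change; names are assumptions.
    @protocol CogDecoder <NSObject>
    // Old shape: fill a caller-provided interleaved buffer.
    //   - (int)readAudio:(void *)buffer frames:(UInt32)frames;

    // New shape: hand back a self-describing chunk, so the sample format
    // and channel information travel with the data through the input and
    // converter nodes.
    - (AudioChunk *)readAudio;
    @end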
Signed-off-by: Christopher Snowhill <kode54@gmail.com>
Most projects needed to be changed to enable C or Objective-C modules.
Hopefully, this improves debugging.
Signed-off-by: Christopher Snowhill <kode54@gmail.com>
Apparently, Apple's Spatial Audio processor doesn't really support
unusual channel configurations like this, so we need to filter them down
to stereo.
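As a sketch of the idea only; which channel counts the spatializer
actually accepts is an assumption here, not something confirmed by
Apple's documentation:

    #import <Foundation/Foundation.h>

    // Assumed set of spatializer-friendly layouts; anything else gets
    // downmixed to stereo before output.
    static BOOL layoutNeedsStereoFallback(UInt32 channelCount) {
        switch (channelCount) {
            case 1:  // mono
            case 2:  // stereo
            case 6:  // 5.1
            case 8:  // 7.1
                return NO;
            default:
                return YES; // "weird" configurations fall back to stereo
        }
    }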
Signed-off-by: Christopher Snowhill <kode54@gmail.com>
This filter replaces the old one and uses OpenAL Soft presets. Since
there aren't that many of those, I've left out configuration for now,
aside from an option to turn it on or off.
Signed-off-by: Christopher Snowhill <kode54@gmail.com>
Prevent the player from locking up in certain circumstances by not
holding the chainQueue lock for the entire time this function is
processing.
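The pattern is roughly the following (a sketch; the processing helper
name is illustrative): take what we need from chainQueue while holding
its lock, then do the long-running work without it, so other threads
touching chainQueue can't deadlock against this one.

    BufferChain *chain = nil;
    @synchronized(chainQueue) {
        if ([chainQueue count]) {
            chain = [chainQueue objectAtIndex:0];
            [chainQueue removeObjectAtIndex:0];
        }
    }
    if (chain) {
        [self processChain:chain]; // hypothetical helper; slow work runs unlocked
    }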
Signed-off-by: Christopher Snowhill <kode54@gmail.com>
Remove a single .inc include from the CogAudio build phase, as it's only
included by other sources, not compiled as Pascal the way Xcode assumes.
Also stop a bunch of files from being copied into the resulting
.framework and .bundle products during the link stage, as we don't need
to distribute them.
Signed-off-by: Christopher Snowhill <kode54@gmail.com>
The deinterleaved format was being specified incorrectly. Now the
correct deinterleaved format is requested, with the bytes per frame and
bytes per packet sized relative to a single channel's buffer rather than
all buffers combined. Oops, that could have been clearer in the
documentation.
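Concretely, for deinterleaved float audio the per-channel sizing looks
roughly like this:

    #import <CoreAudio/CoreAudioTypes.h>

    // Deinterleaved (planar) float: with kAudioFormatFlagIsNonInterleaved
    // set, mBytesPerFrame and mBytesPerPacket describe ONE channel's
    // buffer, not the sum of all channels.
    static AudioStreamBasicDescription deinterleavedFloatFormat(Float64 sampleRate, UInt32 channels) {
        AudioStreamBasicDescription asbd = {0};
        asbd.mSampleRate = sampleRate;
        asbd.mFormatID = kAudioFormatLinearPCM;
        asbd.mFormatFlags = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked | kAudioFormatFlagIsNonInterleaved;
        asbd.mChannelsPerFrame = channels;
        asbd.mBitsPerChannel = 32;
        asbd.mFramesPerPacket = 1;
        asbd.mBytesPerFrame = (UInt32)sizeof(float); // per channel, not channels * sizeof(float)
        asbd.mBytesPerPacket = asbd.mBytesPerFrame * asbd.mFramesPerPacket;
        return asbd;
    }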
Signed-off-by: Christopher Snowhill <kode54@gmail.com>
Moved the external cover art reader to a place where it can be used for
any format, even formats unsupported by the Metadata Reader interfaces.
Signed-off-by: Christopher Snowhill <kode54@gmail.com>
This callback should be unregistered when plugin loading completes,
otherwise we could end up processing bundles loaded by external code,
such as Audio Units loaded for MIDI playback.
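Assuming the callback in question is a bundle-load notification observer
(an assumption, and the helper methods below are hypothetical), the
pattern is roughly:

    #import <Foundation/Foundation.h>

    // Sketch: observe bundle loads only while our own plugin scan runs,
    // then unregister so bundles loaded later (for example, Audio Units
    // pulled in for MIDI playback) don't hit the plugin setup path.
    id token = [[NSNotificationCenter defaultCenter]
        addObserverForName:NSBundleDidLoadNotification
                    object:nil
                     queue:[NSOperationQueue mainQueue]
                usingBlock:^(NSNotification *note) {
                    [self setupPluginsFromBundle:note.object]; // hypothetical
                }];

    [self loadAllPluginBundles]; // hypothetical: scans and loads plugin bundles

    [[NSNotificationCenter defaultCenter] removeObserver:token];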
Signed-off-by: Christopher Snowhill <kode54@gmail.com>
The equalizer was repeatedly copying its output to the first output
channel instead of copying each channel to its own buffer. As a result,
only the left channel of stereo output carried the adjusted audio, and
the stream could end up sounding strange.
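Roughly, the broken pattern and the fix (with illustrative buffer
names):

    #include <string.h>

    // Before (the bug): every channel's equalized output landed on channel 0.
    //   for (int ch = 0; ch < channels; ch++)
    //       memcpy(outBuffers[0], eqOut[ch], frames * sizeof(float));

    // After: each channel is copied to its own output buffer.
    for (int ch = 0; ch < channels; ch++) {
        memcpy(outBuffers[ch], eqOut[ch], frames * sizeof(float));
    }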
Signed-off-by: Christopher Snowhill <kode54@gmail.com>
If a plugin somehow doesn't load, the cuesheet skip logic should skip it
anyway, as we don't want any recursive loops.
Signed-off-by: Christopher Snowhill <kode54@gmail.com>
When restarting playback on the current track, restart the correct
track, even when the restart happens near the end of it.
Signed-off-by: Christopher Snowhill <kode54@gmail.com>
Track play counts for the correct track, even on short tracks. Also
correctly count the last played item in the play queue: playback stops
there with bufferChain set to nil, so the previous implementation wasn't
tracking it.
Signed-off-by: Christopher Snowhill <kode54@gmail.com>
Previously, the cleanup thread was not being run. Also, only reset the
metadata deduplication store when the cache is first emptied.
Signed-off-by: Christopher Snowhill <kode54@gmail.com>
Wait for the equalizer to be shut down properly by the main thread
before destroying it. Otherwise, the main thread could crash on stop,
due to accessing the equalizer handle while it's being torn down in the
output thread.
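One way to express that handshake is a semaphore (a sketch; the actual
primitive and teardown call used may differ, and the helper name is
hypothetical):

    #import <Foundation/Foundation.h>

    // Output thread waits until the main thread has finished shutting the
    // equalizer down before disposing of the handle.
    dispatch_semaphore_t eqShutdownDone = dispatch_semaphore_create(0);

    // Main thread, during stop:
    //   [self shutDownEqualizer];                  // hypothetical helper
    //   dispatch_semaphore_signal(eqShutdownDone);

    // Output thread, during teardown:
    //   dispatch_semaphore_wait(eqShutdownDone, DISPATCH_TIME_FOREVER);
    //   AudioComponentInstanceDispose(_eq);        // now safe to destroy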
Signed-off-by: Christopher Snowhill <kode54@gmail.com>
Now the API makes both PCM and FFT data optional, and will do nothing if
neither is requested. It also supports a latency offset in seconds with
floating point precision; the two built-in visualizations currently
request zero latency. Increasing the latency asks for older samples,
while a negative offset requests samples from the "future" relative to
what the listener is hearing.
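A sketch of the resulting call shape; the exact class and selector names
are assumptions based on the description above:

    // Either output pointer may be NULL; if both are NULL the call does
    // nothing. latencyOffset is in seconds: a positive value asks for
    // older samples, a negative value asks for samples "ahead" of what
    // the listener is currently hearing.
    - (void)copyVisPCM:(float *_Nullable)outPCM
                visFFT:(float *_Nullable)outFFT
         latencyOffset:(double)latencyOffset;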
Signed-off-by: Christopher Snowhill <kode54@gmail.com>
Cuesheets can now expose which URLs they contain, which may help with
sandbox path configuration, provided the cuesheets themselves are
already readable.
Signed-off-by: Christopher Snowhill <kode54@gmail.com>
If upsampling the audio by a significant factor, it may be necessary to
process more than one buffer at a time, rather than lose input.
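In loop form, the idea is roughly this (helper names are hypothetical):

    // Keep feeding the converter until this input buffer is fully
    // consumed. With a large upsampling ratio a single pass can fill the
    // output long before the input runs out, and bailing out early would
    // drop the remainder.
    size_t consumedTotal = 0;
    while (consumedTotal < inputFrames) {
        size_t consumed = [self convertFrames:inputFrames - consumedTotal
                                   fromOffset:consumedTotal]; // hypothetical
        if (consumed == 0) break; // output side is full; resume on the next pass
        consumedTotal += consumed;
    }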
Signed-off-by: Christopher Snowhill <kode54@gmail.com>
The visualization buffer now holds up to 45 seconds of audio in a loop.
The latency measurement code caps this at 30 seconds and restarts the
output if the latency exceeds 30 seconds, such as when a sound output is
reset.
Signed-off-by: Christopher Snowhill <kode54@gmail.com>
For one thing, the example code I followed was Swift, which handled
releasing handles automatically in the background, while Objective-C
requires manual handle reference management. For another, there was no
autoreleasepool around the block handling the input audio chunks, which
need to be released as they are pulled out and disposed of. This also
contributed to memory leakage.
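The missing pool looks roughly like this in the chunk-handling loop
(helper names are hypothetical):

    // Each pass over the input chunks gets its own autorelease pool so
    // chunks are freed as they are consumed instead of accumulating.
    // CoreMedia handles created along the way (CMSampleBufferRef,
    // CMBlockBufferRef) still need explicit CFRelease() calls, since
    // Objective-C does not manage those automatically.
    while ([self shouldContinue]) {
        @autoreleasepool {
            AudioChunk *chunk = [self readChunk]; // hypothetical accessor
            if (!chunk) break;
            [self renderChunk:chunk];             // hypothetical
        } // chunk and any temporaries are released here
    }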
Signed-off-by: Christopher Snowhill <kode54@gmail.com>
Correctly configure AVFoundation with the channel layouts supported by
WAVEFORMATEXTENSIBLE speaker position flags, which cover the varied
formats supported by the FFmpeg and Core Audio inputs.
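One way to hand those speaker flags to AVFoundation is via the channel
bitmap, roughly as follows (a sketch; it relies on the WAVE dwChannelMask
bits lining up with Core Audio's AudioChannelBitmap bit assignments):

    #import <AVFoundation/AVFoundation.h>

    // Sketch: pass the WAVEFORMATEXTENSIBLE channel mask through as a
    // Core Audio channel bitmap layout.
    static AVAudioChannelLayout *layoutFromChannelMask(UInt32 channelMask) {
        AudioChannelLayout acl = {0};
        acl.mChannelLayoutTag = kAudioChannelLayoutTag_UseChannelBitmap;
        acl.mChannelBitmap = channelMask;
        return [[AVAudioChannelLayout alloc] initWithLayout:&acl];
    }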
Signed-off-by: Christopher Snowhill <kode54@gmail.com>
Stop output when requested, except on natural completion of the last
track in the play queue. Also fix deadlocks with stopping and
restarting.
Signed-off-by: Christopher Snowhill <kode54@gmail.com>
The output now uses AVSampleBufferAudioRenderer to play all formats,
relying on it for resampling as well. It also supports Spatial Audio on
macOS 12.0 or newer. Note that there are some outstanding bugs with
Spatial Audio support: it appears to be limited to 192 kHz in mono or
stereo, or 352800 Hz in surround configurations. This breaks DSD64
playback in stereo, and possibly other things. This is entirely an Apple
bug; I have reported it to Apple under reference code FB10441301, in
case anyone else wants to complain that it isn't fixed.
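In outline, the new output path looks something like this (a simplified
sketch; error handling and the real enqueue path are omitted, and
nextSampleBuffer() is a hypothetical helper that wraps decoded chunks in
CMSampleBuffers):

    #import <AVFoundation/AVFoundation.h>

    AVSampleBufferAudioRenderer *renderer = [[AVSampleBufferAudioRenderer alloc] init];
    AVSampleBufferRenderSynchronizer *synchronizer = [[AVSampleBufferRenderSynchronizer alloc] init];
    [synchronizer addRenderer:renderer];

    if (@available(macOS 12.0, *)) {
        // Opt in to Spatial Audio for mono, stereo and multichannel content.
        renderer.allowedAudioSpatializationFormats = AVAudioSpatializationFormatMonoStereoAndMultichannel;
    }

    dispatch_queue_t outputQueue = dispatch_queue_create("output.queue", DISPATCH_QUEUE_SERIAL);
    [renderer requestMediaDataWhenReadyOnQueue:outputQueue usingBlock:^{
        while (renderer.readyForMoreMediaData) {
            CMSampleBufferRef buffer = nextSampleBuffer(); // hypothetical
            if (!buffer) break;
            [renderer enqueueSampleBuffer:buffer];
            CFRelease(buffer);
        }
    }];
    [synchronizer setRate:1.0 time:kCMTimeZero];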
Signed-off-by: Christopher Snowhill <kode54@gmail.com>
All optional fallback code for older versions has also been removed, and
everything now assumes macOS 10.13.0 or newer. Some checks are still
included for point releases, such as 10.13.2.
Signed-off-by: Christopher Snowhill <kode54@gmail.com>