ECSGuy: Hey!
Actually, we normalize all of the frequency AND time-based data to a 64-bit floating-point value. From there, we use that data to generate a LOT of data that our game needs; there are all kinds of data structures, etc. We compress this data and THAT is what stays in memory. Also, it cannot be streamed as it needs to index values randomly (we have to determine "cool" moments, compare, etc.)
So the gating factor is really the total time... not the bit-depth, sample-rate or number of channels (well, ok, we don't support multi-channel just yet... it's on our feature list :) )...
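A quick sketch of why total time is the gating factor: once every sample is normalized to a 64-bit float, the raw buffer size depends only on duration (and sample rate), not on the source file's bit depth. The function name and the mono/44.1 kHz defaults here are my assumptions, not anything from the game's actual code:

```python
BYTES_PER_SAMPLE = 8  # each normalized sample is a 64-bit float

def analysis_buffer_bytes(duration_seconds: float,
                          sample_rate: int = 44100,
                          channels: int = 1) -> int:
    """Rough size of the normalized sample buffer, before any compression."""
    return int(duration_seconds * sample_rate * channels * BYTES_PER_SAMPLE)

# A 4-minute mono track at 44.1 kHz:
print(analysis_buffer_bytes(240))  # 84672000 bytes, i.e. ~80.7 MiB
```

A 16-bit and a 24-bit source of the same length end up the same size after normalization, which is why only length matters.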
Does that make sense?
Matt
OK. That makes it understandable.
I already got the random access thing, because you need to analyze the song as a whole and temp files on the hard drive would be very slow. After that, if you convert it all to the same data format, then length would be the only thing that matters.
I'd assume that means there's no way to reduce the memory footprint without redesigning the whole analyzer, which would be way too much work even if it didn't compromise quality.
An option that set maximum length by available memory should still work, though, right? Worst case, it looks at the track's length, tries to allocate the necessary memory, and then tells you "sorry, pick a different song" if you don't have enough.
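That worst-case check could look something like this. This is just a sketch of the idea, assuming the same normalized-float64 buffer described earlier; the function names and the way "available memory" is passed in are my inventions:

```python
BYTES_PER_SAMPLE = 8  # 64-bit float per normalized sample

def can_load_track(duration_seconds: float,
                   available_bytes: int,
                   sample_rate: int = 44100) -> bool:
    """Return False if the normalized mono buffer would not fit in memory."""
    needed = int(duration_seconds * sample_rate * BYTES_PER_SAMPLE)
    return needed <= available_bytes

# With ~64 MiB free, a 4-minute track (~81 MiB of samples) gets rejected:
print(can_load_track(240, 64 * 1024 * 1024))  # False
```

In practice the game would also need headroom for the derived data structures on top of the raw samples, so a real check would use a multiplier rather than the exact buffer size.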
Plan B would be an "offline" utility that generated the level cache overnight or something, via great big temp files and lots and lots of drive flogging, but I don't know if it would be too hard to implement.