On 10/05/2011 20:15, Alain wrote:
> Maybe the original recording was done in a lossy format, or even a
> non-lossy format but with a sample rate set too low and a sample
> resolution also too low... Like 4000 kHz (or even less), 4 bits...
> (I had a single CD that contained the whole Beatles discography encoded
> as .wav at about that level...)
A normal CD is 44.1 kHz, so 4000 kHz would be roughly 100x *higher*
resolution than normal. And 4 bits per sample would be almost unrecognisable.
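To illustrate the bit-depth point, here's a quick Python sketch (my own, just for demonstration, not anything from the recording in question): it quantizes a sine wave to 16 bits and to 4 bits and measures the resulting signal-to-noise ratio.

```python
import math

def quantize(samples, bits):
    """Round samples in [-1, 1] to the nearest level at the given bit depth."""
    levels = 2 ** (bits - 1)  # signed PCM: half the levels per polarity
    return [round(s * (levels - 1)) / (levels - 1) for s in samples]

# 1 kHz sine tone sampled at 44.1 kHz (CD rate), 0.1 s long
rate = 44100
sine = [math.sin(2 * math.pi * 1000 * n / rate) for n in range(rate // 10)]

for bits in (16, 4):
    q = quantize(sine, bits)
    noise = sum((a - b) ** 2 for a, b in zip(sine, q))
    signal = sum(a ** 2 for a in sine)
    print(f"{bits}-bit SNR: {10 * math.log10(signal / noise):.1f} dB")
```

The 16-bit version is audibly transparent; the 4-bit version buries the tone in heavy quantization noise, which is why 4 bits per sample sounds so bad.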
>> 2. If I can tell that it's compressed, despite not having the
>> uncompressed original to compare to, doesn't that mean that there's more
>> redundancy in the signal than the codec is taking advantage of?
>
> It's just that you have reasons to expect a higher chromatic range than
> the one you have.
Chromatic range? I think perhaps you meant dynamic range.
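For what it's worth, dynamic range and sample resolution are directly linked: each bit of linear PCM buys about 6 dB of dynamic range (20*log10(2) is roughly 6.02 dB). A quick back-of-the-envelope check:

```python
import math

def dynamic_range_db(bits):
    """Approximate dynamic range of linear PCM at a given bit depth."""
    return 20 * math.log10(2 ** bits)

print(f"16-bit: {dynamic_range_db(16):.1f} dB")  # CD quality, ~96 dB
print(f" 4-bit: {dynamic_range_db(4):.1f} dB")   # only ~24 dB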
> Even the best codec set at the highest quality can't do miracles if the
> source is bad...
In this case, that's unlikely to be the problem.