> On 10/05/2011 20:15, Alain wrote:
>
>> Maybe the original recording was done in a lossy format, or even a
>> non-lossy format but with the sample rate set too low and the sample
>> resolution also too low... Like 4000 kHz (or even less), 4 bits...
>> (I had a single CD that contained the whole Beatles discography encoded
>> as .wav at that level or about...)
>
> A normal CD is 44.1 kHz, so 4000 kHz would be roughly 100x *higher*
> resolution than normal. And 4 bits per sample would be almost unrecognisable.
I missed the decimal point.
When I got that CD, I was really surprised that they managed to cram so
many tracks onto it. I looked attentively at the files and their
formatting. I was rather incredulous at the 4 bits, but after checking
with another program, the result was the same.
And the sound was surprisingly good.
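The cramming is plausible on a back-of-the-envelope basis. A quick sketch of the arithmetic (the 650 MB capacity and the 4 kHz / 4-bit / mono parameters are assumptions for illustration, not measured from the actual disc):

```python
# Back-of-the-envelope comparison: standard CD audio vs. the
# low-fidelity .wav format described above (4 kHz, 4-bit, mono).

def data_rate(sample_rate_hz, bits_per_sample, channels):
    """Return the uncompressed PCM data rate in bytes per second."""
    return sample_rate_hz * bits_per_sample * channels // 8

cd_rate = data_rate(44_100, 16, 2)   # Red Book CD audio
lofi_rate = data_rate(4_000, 4, 1)   # the hypothetical low-res .wav

cd_capacity_bytes = 650 * 1_000_000  # a nominal 650 MB data CD

print(f"CD audio:  {cd_rate} B/s")                    # 176400 B/s
print(f"Lo-fi wav: {lofi_rate} B/s")                  # 2000 B/s
print(f"Ratio:     {cd_rate // lofi_rate}x smaller")  # 88x
hours = cd_capacity_bytes / lofi_rate / 3600
print(f"~{hours:.0f} hours of lo-fi audio on one CD")
```

At roughly 88 times less data per second than CD audio, about 90 hours of such audio fits on one disc, which is far more than an entire discography needs.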
>
>>> 2. If I can tell that it's compressed, despite not having the
>>> uncompressed original to compare to, doesn't that mean that there's more
>>> redundancy in the signal than the codec is taking advantage of?
>>
>> It's just that you have reasons to expect a higher chromatic range than
>> the one you have.
>
> Chromatic range? I think perhaps you meant dynamic range.
Dynamic range is the range from low volume to high; chromatic range is
the range from low to high frequency, in particular the fidelity of
the harmonics.
At least, according to the texts that I have read over 40 years...
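In standard PCM terms, whatever the terminology, the two ranges are set by different parameters: bit depth determines the theoretical dynamic range (about 6.02 dB per bit), while the sample rate determines the highest representable frequency (the Nyquist limit, half the sample rate). A quick sketch:

```python
# How PCM parameters map onto the two "ranges" discussed above:
# bit depth -> dynamic range, sample rate -> frequency range.

def dynamic_range_db(bits):
    """Theoretical quantisation dynamic range, ~6.02 dB per bit."""
    return 6.02 * bits

def nyquist_hz(sample_rate_hz):
    """Highest frequency a given sample rate can represent."""
    return sample_rate_hz / 2

print(dynamic_range_db(16))  # ~96.3 dB (CD audio)
print(dynamic_range_db(4))   # ~24.1 dB (the 4-bit files)
print(nyquist_hz(44_100))    # 22050 Hz
print(nyquist_hz(4_000))     # 2000 Hz: upper harmonics are lost
```

A 4 kHz sample rate cuts everything above 2 kHz, so the loss of harmonic fidelity is a sample-rate effect, while the 4-bit depth mainly limits the volume range.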
>
>> Even the best codec set at the highest quality can't do miracles if the
>> source is bad...
>
> In this case, that's unlikely to be the problem.