DJ Wiza wrote:
> John VanSickle wrote:
>
>> * Don't use full red against black; the color compression is not kind
>> to red-against-black edges. A yellow roller coaster would work just
>> as well for test purposes.
>
> Why is that? I don't know much about the technical details of lossy
> compression. Why would yellow on black work better than red on black?
MPEG converts each RGB pixel into a YUV pixel according to these formulae:

Y = 0.299R + 0.587G + 0.114B
U = 0.492(B − Y) = −0.147R − 0.289G + 0.436B
V = 0.877(R − Y) = 0.615R − 0.515G − 0.100B
Yellow (and white) against black will have transitions that are most
marked in the Y value (max value = .886), with smaller magnitudes in U
and V (−.436 and .100, respectively), while red's transitions max out at
.299 in Y, −.147 in U, and .615 in V.
The Y values are sampled at their original resolution, but U and V are
cut in half both vertically and horizontally. In other words, when
the MPEG coder goes to work on one of my IRTC frames, it turns the
320x240 R, G, and B frames into a 320x240 Y frame, a 160x120 U frame,
and a 160x120 V frame. When the results are decompressed in your MPEG
player, the U and V frames are expanded back to 320x240, resulting in
both U and V being blocky-looking (or blurred) on decompression.
When the R, G, and B values are grabbed back out of the Y, U, and V, the
higher resolution of the Y data will result in higher-resolution
yellow-black edges, with a bit of funny coloring along them, while the
subsampled V data will result in blocky-looking red-black edges.
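A minimal one-row simulation of that pipeline (plain Python; averaging pairs stands in for the encoder's chroma filter, which is an assumption on my part) shows the red bleeding across the edge:

```python
def rgb_to_yuv(r, g, b):
    y = 0.299*r + 0.587*g + 0.114*b
    return y, 0.492*(b - y), 0.877*(r - y)

def yuv_to_rgb(y, u, v):
    r = y + v/0.877
    b = y + u/0.492
    g = (y - 0.299*r - 0.114*b)/0.587
    return r, g, b

def round_trip(row):
    """Convert a row of RGB pixels to YUV, halve U and V horizontally
    (averaging pairs), expand them back, and reconstruct RGB."""
    yuv = [rgb_to_yuv(*px) for px in row]
    us = [(yuv[i][1] + yuv[i+1][1])/2 for i in range(0, len(yuv), 2)]
    vs = [(yuv[i][2] + yuv[i+1][2])/2 for i in range(0, len(yuv), 2)]
    return [yuv_to_rgb(y, us[i//2], vs[i//2])
            for i, (y, _, _) in enumerate(yuv)]

# The black/red edge falls *inside* a chroma pair, where the damage shows.
red_row    = [(0, 0, 0)]*3 + [(1, 0, 0)]*5
yellow_row = [(0, 0, 0)]*3 + [(1, 1, 0)]*5
print(round_trip(red_row)[2])     # the "black" pixel picks up a lot of red
print(round_trip(yellow_row)[2])  # much less bleed; the big jump stays in full-res Y
```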
You can see this on TV; whenever there is a bright red object on the
screen, you'll see that the edge of the redness tends to stray from the
edge of the object.
Note that the MPEG standard does not require subsampling of the U and V
data, but since subsampling rarely affects the viewer's perception of
the results, many MPEG encoders don't give you the option of shutting it
off.
Chroma subsampling is how MPEG gets the first 50% of its compression, so
it's fairly important.
If you look at my January 1999 IRTC entry ("Death to Rusty!"), you'll
see this effect on the opening title lettering, which is red on a black
background. Scout's honor, I rendered it at 320x240, but after MPEG got
through with it, it looked like a 160x120 render scaled by 200%.
> One of my biggest problems is the track ties. I don't know an efficient
> method of spacing the track ties out an equal distance from each other,
> since the track is defined by beziers that will almost always be varying
> length.
>
> Anyways, it currently takes 26 seconds to parse each frame because it
> calculates the locations of all the track ties every time. I need to
> change that so it only does it on the first frame, saves the locations
> to a text file, then reads it back on all the other frames.
If your Beziers are quadratic, and you're up for some heavy mathematical
lifting, there is a formula that will tell you exactly how long a given
section of the spline is. Shockwave Flash files use strictly quadratic
splines, with good results; you need twice as many splines, but the math
is so much easier that it pays off.
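For the record, here's a sketch of that closed form in plain Python (2-D points; the degenerate-case guards are my own additions): the speed of a quadratic Bezier is 2·sqrt(At² + Bt + C), which integrates exactly.

```python
import math

def quad_bezier_length(p0, p1, p2):
    # B(t) = (1-t)^2 P0 + 2t(1-t) P1 + t^2 P2, so B'(t) = 2(a t + b)
    # with a = P0 - 2 P1 + P2 and b = P1 - P0.
    ax, ay = p0[0] - 2*p1[0] + p2[0], p0[1] - 2*p1[1] + p2[1]
    bx, by = p1[0] - p0[0], p1[1] - p0[1]
    A = ax*ax + ay*ay              # |B'(t)| = 2*sqrt(A t^2 + B t + C)
    B = 2*(ax*bx + ay*by)
    C = bx*bx + by*by
    if A < 1e-12:                  # control point bisects the chord: straight line
        return 2*math.sqrt(C)
    disc = 4*A*C - B*B
    if disc < 1e-12:               # degenerate: curve lies along a straight line
        h = B / (2*A)
        G = lambda t: 0.5*(t + h)*abs(t + h)   # antiderivative of |t + h|
        return 2*math.sqrt(A)*(G(1) - G(0))
    def F(t):                      # standard antiderivative of sqrt(A t^2 + B t + C)
        s = math.sqrt(A*t*t + B*t + C)
        return ((2*A*t + B)*s/(4*A)
                + disc/(8*A**1.5)*math.log(2*math.sqrt(A)*s + 2*A*t + B))
    return 2*(F(1) - F(0))
```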
For cubic splines, my method was to move the spline's sampling point
along by an educated-guess distance, check the actual distance covered,
adjust the sampling point by the discrepancy between the actual and the
desired distance, and repeat until I got close enough not to make a
difference to the viewer (within .1 mm is good enough for the ties on a
roller coaster, methinks).
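That guess-and-correct loop might look something like this (a plain Python sketch; the probe-step speed estimate, the tolerance, and the iteration cap are my own choices):

```python
import math

def bezier(ctrl, t):
    """Evaluate a cubic Bezier (4 control points, any dimension) at t."""
    s = 1.0 - t
    return tuple(s*s*s*a + 3*s*s*t*b + 3*s*t*t*c + t*t*t*d
                 for a, b, c, d in zip(*ctrl))

def place_ties(ctrl, spacing, tol=1e-4, max_iter=50):
    """Place ties so each is `spacing` (measured as a straight chord) from the last."""
    ties = [bezier(ctrl, 0.0)]
    t = 0.0
    while True:
        probe = min(t + 1e-3, 1.0)           # estimate local speed with a tiny step
        speed = math.dist(bezier(ctrl, probe), bezier(ctrl, t)) / (probe - t)
        guess = t + spacing / speed          # educated first guess
        for _ in range(max_iter):
            if guess >= 1.0:
                break
            d = math.dist(bezier(ctrl, guess), ties[-1])
            if abs(d - spacing) < tol:
                break
            guess += (spacing - d) / speed   # adjust by the discrepancy
        if guess >= 1.0 or guess <= t:       # ran off the end, or failed to advance
            break
        ties.append(bezier(ctrl, guess))
        t = guess
    return ties
```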
As for saving the locations to a text file, you could have your POV-Ray
SDL write all of the data to an .INC file (complete with the #declare
and other statements) on the first frame, and then #include that file on
the later frames.
Regards,
John