(For those that can view DivX 6 files, get it at
http://www.sohcahtoa.net/videos/bezier.avi (506K))
Same test track as last time, but now it's an actual track instead of a
bunch of spheres. The track is generated using 3 sphere_sweeps and many
instances of a mesh of triangles for the track ties.
Unfortunately, due to a bug in my code that I haven't figured out, the
track ties on the first segment are upside down and their geometry is
just totally wrong. The last segment doesn't even HAVE track ties; I
know what is causing that, I'm just not sure of the easiest way to fix it.
My method of finding where to put the track ties is rather brute force.
I slowly step through each bezier until I've traveled a certain
distance in space, then a track tie is placed. Repeat until I've
reached the end. The problem is that, the way the code is written, if I
let it use the last segment, it tries to step past the last bezier and
hits an index-out-of-bounds error. Adding a couple of #ifs could fix
some of this, though (see the sketch below).
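(A minimal SDL sketch of this kind of brute-force spacing loop, not DJ's
actual code: CP is assumed to be a 2D array of control points,
CP[segment][0..3], NumSegs the number of bezier segments, and TrackTie a
predeclared tie object. The two #ifs keep the loop from stepping past
the last bezier.)

#macro BezPoint(Seg, T)
  #local S = 1 - T;
  (S*S*S*CP[Seg][0] + 3*S*S*T*CP[Seg][1] + 3*S*T*T*CP[Seg][2] + T*T*T*CP[Seg][3])
#end

#declare TieSpacing = 0.5;    // desired distance between ties
#declare DT         = 0.001;  // small parameter step
#declare I    = 0;
#declare T    = 0;
#declare Last = BezPoint(0, 0);
#while (I < NumSegs)
  #declare T = T + DT;
  #if (T > 1)                 // hop to the next segment instead of overrunning
    #declare T = T - 1;
    #declare I = I + 1;
  #end
  #if (I < NumSegs)
    #declare Here = BezPoint(I, T);
    #if (vlength(Here - Last) >= TieSpacing)
      object { TrackTie translate Here }  // orientation along the track omitted
      #declare Last = Here;
    #end
  #end
#end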
My next version will most likely be something that could actually be a
real roller coaster: one of my own favorite designs, Frozen Ray. A
screenshot of that coaster as seen in No Limits can be seen at
http://www.sohcahtoa.net/images/FrozenRay.png (145K). My version won't
yet have the wooden supports nor the wooden style track. Supports will
come later, with the track style even farther out.
-DJ
Attachment: 'bezier.m1v.mpg' (408 KB)
From: Zeger Knaepen
Subject: Re: Roller Coaster WIP 2 (MPEG1, DivX link)
Date: 21 Apr 2006 04:22:51
Message: <4448965b@news.povray.org>
looking good! needs a background though :)
cu!
--
#macro G(b,e)b+(e-b)*C/50#end#macro _(b,e,k,l)#local C=0;#while(C<50)
sphere{G(b,e)+3*z.1pigment{rgb G(k,l)}finish{ambient 1}}#local C=C+1;
#end#end _(y-x,y,x,x+y)_(y,-x-y,x+y,y)_(-x-y,-y,y,y+z)_(-y,y,y+z,x+y)
_(0x+y.5+y/2x)_(0x-y.5+y/2x) // ZK http://www.povplace.com
DJ Wiza wrote:
> (For those that can view DivX 6 files, get it at
> http://www.sohcahtoa.net/videos/bezier.avi (506K))
To make this look better in MPEG:
* Don't use full red against black; the color compression is not kind to
red-against-black edges. A yellow roller coaster would work just as
well for test purposes.
* Don't use a checkered plane, especially black and white. A granite
pattern, scaled large, with softer colors, will compress just as much
with far less artifacting.
* If you're not using anti-aliasing, do so. Smooth edges compress far
better than jagged ones.
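(As an aside, not from John's post: anti-aliasing can be switched on from
an INI file with settings along these lines; the threshold value is just
an example.)

; example INI fragment for turning on anti-aliasing
Antialias=on
Antialias_Threshold=0.3
Sampling_Method=2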
Regards,
John
John VanSickle wrote:
> * Don't use full red against black; the color compression is not kind to
> red-against-black edges. A yellow roller coaster would work just as
> well for test purposes.
Why is that? I don't know much about the technical details of lossy
compression. Why would yellow on black work better than red on black?
> * Don't use a checkered plane, especially black and white. A granite
> pattern, scaled large, with softer colors, will compress just as much
> with far less artifacting.
I knew from the start that the checkered plane wasn't a good idea. I'm
now using this to create some nice grass:
plane {
  y, -1.0
  texture {
    pigment {
      granite
      color_map { [0 color <0,.2,0>] [1 color <0,.4,0>] }
    }
  }
  texture {
    pigment {
      bumps
      color_map { [0 color rgbt <0,1,0,0.6>] [1 color rgbt <0,.6,0,0.6>] }
      scale 25
    }
  }
}
Though I suppose I should get rid of the granite. While it looks very
nice, it creates a level of detail that would require too much data to
show in a test render that I'm posting here.
> * If you're not using anti-aliasing, do so. Smooth edges compress far
> better than jagged ones.
All my current renders that I've posted use AA, but I'm currently
rendering a longer section of track taken from a coaster I've actually
designed, and since this is a test render, I want it to be somewhat
fast. It's been going for 16 hours now and is on frame 369, and it's
going to be 30 fps.
One of my biggest problems is the track ties. I don't know an efficient
method of spacing the track ties out an equal distance from each other,
since the track is defined by beziers that will almost always be of
varying length. I mentioned this in the original post. The test I'm
rendering right now only has the first drop (which turns right 225
degrees while going down, then swerves left 45 degrees), the first hill,
which has a double-down (a drop that straightens out partway down, then
drops again), and then a rising turn to the left.
At least, that's what it's SUPPOSED to do... No Limits has its Z axis
flipped compared to POV-Ray's, so the coaster comes out as a mirror image.
Anyways, it currently takes 26 seconds to parse each frame because it
calculates the locations of all the track ties every time. I need to
change that so it only does it on the first frame, saves the locations
to a text file, and then reads them back on all the other frames.
-DJ
DJ Wiza wrote:
> John VanSickle wrote:
>
>> * Don't use full red against black; the color compression is not kind
>> to red-against-black edges. A yellow roller coaster would work just
>> as well for test purposes.
>
> Why is that? I don't know much about the technical details of lossy
> compression. Why would yellow on black work better than red on black?
MPEG converts each RGB pixel into a YUV pixel, according to these formulae:
Y = 0.299 R + 0.587 G + 0.114 B
U = 0.492 (B - Y) = -0.147 R - 0.289 G + 0.436 B
V = 0.877 (R - Y) =  0.615 R - 0.515 G - 0.100 B
Yellow (and white) against black will have transitions that are most
marked in the Y value (max value = .886) with lesser values in U and V
(-.436 and .1, respectively), while red's transitions max out at .299 in
Y, -.147 in U, and .615 in V.
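(A small illustrative macro, my own addition and not part of John's post,
that plugs colors into the formulae above so you can compare the
transitions yourself:)

#macro RGBtoYUV(C)
  #local Y = 0.299*C.red + 0.587*C.green + 0.114*C.blue;
  #local U = 0.492*(C.blue - Y);
  #local V = 0.877*(C.red  - Y);
  <Y, U, V>
#end

// yellow vs. red against black:
#debug concat("yellow: <", vstr(3, RGBtoYUV(rgb <1,1,0>), ", ", 0, 3), ">\n")  // ~<0.886, -0.436, 0.100>
#debug concat("red:    <", vstr(3, RGBtoYUV(rgb <1,0,0>), ", ", 0, 3), ">\n")  // ~<0.299, -0.147, 0.615>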
The Y values are sampled at their original resolution, but U and V are
hacked in two both vertically and horizontally. In other words, when
the MPEG coder goes to work on one of my IRTC frames, it turns the
320x240 R, G, and B frames into a 320x240 Y frame, a 160x120 U frame,
and a 160x120 V frame. When the results are decompressed in your MPEG
player, the U and V frames are expanded back to 320x240, resulting in
both U and V being blocky-looking (or blurred) on decompression.
When the R, G, and B values are grabbed back out of the Y, U, and V, the
higher resolution of the Y data will result in higher-resolution
yellow-black edges, with a bit of funny coloring along them, while the
subsampled V data will result in blocky-looking red-black edges.
You can see this on TV; whenever there is a bright red object on the
screen, you'll see that the edge of the redness tends to stray from the
edge of the object.
Note that the MPEG standard does not require subsampling of the U and V
data, but subsampling doesn't affect the viewer's perception of the
results very often, so many MPEG encoders don't give you the option of
shutting it off.
It's how MPEG gets the first 50% of its compression, so it's fairly
important.
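(To put numbers on that 50%: a 320x240 frame stored as full-resolution R,
G, and B planes is 3 x 320 x 240 = 230,400 samples, while the subsampled
version is one 320x240 Y plane plus two 160x120 planes, 76,800 + 2 x
19,200 = 115,200 samples: exactly half, before any of the rest of the
compression even starts.)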
If you look at my January 1999 IRTC entry ("Death to Rusty!"), you'll
see this effect on the opening title lettering, which is red on a black
background. Scout's honor, I rendered it at 320x240, but after MPEG got
through with it, it looked like a 160x120 render scaled by 200%.
> One of my biggest problems is the track ties. I don't know an efficient
> method of spacing the track ties out an equal distance from each other,
> since the track is defined by beziers that will almost always be varying
> length.
>
> Anyways, it currently takes 26 seconds to parse each frame because it
> calculates the locations of all the track ties every time. I need to
> change that so it only does it on the first frame, saves the locations
> to a text file, then reads it back on all the other frames.
If your Beziers are quadratic, and you go for heavy math lifting, there
is a formula that will tell you exactly how long a given section of the
spline is. Shockwave Flash files use strictly quadratic splines, with
good results; you need twice as many splines, but the math is so much
easier that it pays off.
For cubic splines, my method was to move the spline's sampling point an
educated-guess distance along, check the actual distance, and adjust the
sampling point by the discrepancy between the actual distance and the
desired distance, and repeat until I got close enough not to make a
difference to the viewer (within .1 mm is good enough for the ties on a
roller coaster, methinks).
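(A sketch of that idea, not John's actual code; it uses a simple
bisection on the chord length instead of his proportional correction,
assumes the next tie still falls on the same segment, and reuses the
hypothetical BezPoint macro from the earlier sketch. Tol here is a
tolerance on the spline parameter rather than on the distance itself.)

#macro NextTieT(Seg, T0, Step, Tol)
  #local Here = BezPoint(Seg, T0);
  #local Lo = T0;
  #local Hi = 1;
  #local T  = (Lo + Hi) / 2;
  #while (Hi - Lo > Tol)
    #if (vlength(BezPoint(Seg, T) - Here) < Step)
      #local Lo = T;   // not far enough along yet
    #else
      #local Hi = T;   // overshot the desired spacing
    #end
    #local T = (Lo + Hi) / 2;
  #end
  T
#end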
As for saving the locations to a text file, you could have the POV-Ray
SDL write all of the data to an .INC file in SDL (with all the #declare
and other statements), and then invoke that .INC file.
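(A hedged sketch of that, using SDL's file I/O directives; TiePos and
NumTies are placeholder names, not anything from DJ's scene:)

#if (frame_number = 1)
  // ...compute TiePos[0..NumTies-1] here, then dump it to an include file
  #fopen TieFile "ties.inc" write
  #write (TieFile, "#declare NumTies = ", str(NumTies,0,0), ";\n")
  #write (TieFile, "#declare TiePos = array[", str(NumTies,0,0), "];\n")
  #declare I = 0;
  #while (I < NumTies)
    #write (TieFile, "#declare TiePos[", str(I,0,0), "] = <",
            TiePos[I].x, ",", TiePos[I].y, ",", TiePos[I].z, ">;\n")
    #declare I = I + 1;
  #end
  #fclose TieFile
#end
#include "ties.inc"  // every frame after the first just parses the saved positions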
Regards,
John
John VanSickle wrote:
> (Stuff on MPEG compression)
What do Y, U, and V stand for, anyways?
> You can see this on TV; whenever there is a bright red object on the
> screen, you'll see that the edge of the redness tends to stray from the
> edge of the object.
You especially see it on older screens with the color, contrast, or
brightness knob turned up too high.
> If your Beziers are quadratic, and you go for heavy math lifting, there
> is a formula that will tell you exactly how long a given section of the
> spline is. Shockwave Flash files use strictly quadratic splines, with
> good results; you need twice as many splines, but the math is so much
> easier that it pays off.
>
> For cubic splines, my method was to move the spline's sampling point an
> educated-guess distance along, check the actual distance, and adjust the
> sampling point by the discrepancy between the actual distance and the
> desired distance, and repeat until I got close enough not to make a
> difference to the viewer (within .1 mm is good enough for the ties on a
> roller coaster, methinks).
This page has the formulas used for calculating the beziers:
http://www.moshplant.com/direct-or/bezier/math.html
> As for saving the locations to a text file, you could have the POV-Ray
> SDL write all of the data to an .INC file in SDL (with all the #declare
> and other statements), and then invoke that .INC file.
I always forget about .INC files and the fact that I can write my own
using SDL.
-DJ
>> (Stuff on MPEG compression)
>
> What do Y, U, and V stand for, anyways?
http://en.wikipedia.org/wiki/YUV
From: John VanSickle
Subject: Re: Roller Coaster WIP 2 (MPEG1, DivX link)
Date: 22 Apr 2006 18:17:44
Message: <444aab88@news.povray.org>
DJ Wiza wrote:
> John VanSickle wrote:
>
>> (Stuff on MPEG compression)
>
> What do Y, U, and V stand for, anyways?
Y stands for luminance, which is the brightness of the colors. U and V
are chrominance values. U maxes out at full blue (0,0,1), and V maxes
out at full red (1,0,0).
Regards,
John
I have been playing with some RGB <-> YUV conversion (for OCR).
It's very strange to see how little information there actually is in the
U and V. The top-right image is composed from just the U and V data
(Y fixed at 50%). The bottom image is just the Y data.
Also note that green has a lot of effect on Y, because our eyes are more
sensitive to green.
I have seen the red blockiness very often in JPEGs as well. There was a
post about it in binary-images; I think it was an image of red pencils.
It is possible to disable chroma sub-sampling in JPEGs (at least with
the GIMP); I don't know if it's possible with MPEG.
It can often be solved by using a less saturated red, like <0.8,0.1,0.1>.
Attachment: 'col_split_1.jpg' (204 KB)
> It's very strange to see how little information there actually is in
> the U and V. The top-right image is composed from just the U and V
> data (Y fixed at 50%). The bottom image is just the Y data.
Another good way to see how little is in U and V is to split the image
up as you have done, blur one of the planes, and then recombine them
into a single image. Blurring the U and V planes has very
little effect compared to blurring the Y plane by the same amount.
Another fun demonstration is to combine the Y plane from one image with
the U and V planes from an unrelated image. The result looks much more
like the Y image.
There are lots of other sets of data you can do the same sort of demos
on, for example: the phase of the Fourier transform of an image conveys
much more information than the amplitude.
--
I can blink the lights and cat you files full of sad things, We can play
XPilot just for two, I can serenade and gently play all your ogg files,
Be your 'pache server just for you. (with apologies to Freddie Mercury)
even more disdain for popular culture at http://surreal.istic.org/songs/