In article <Xns### [at] 204213191226>,
"Rafal 'Raf256' Maj" <raf### [at] raf256com> wrote:
> How to implement it ? quite simple.
Oh, yes, very simple...why don't you explain how to project it onto all
the different camera types? Taking camera normals into account?
> 4. reverse-project it - find point more close to A, that projected on
> screen gives A'. i.e. P'=<101,245,380>
What is the point of this? What do you do with this reverse-projected
point? Treat it as an intersection? Or some kind of atmospheric effect,
like a glow?
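To make the ambiguity concrete, here is a minimal sketch (not POV-Ray internals; all names are made up) of projection and reverse projection for a simple perspective camera at the origin looking down +z, with the screen plane at z = 1. Note that reverse projection alone gives a whole ray, not a point; a depth has to be supplied from somewhere:

```python
def project(p):
    """Project a 3D point onto the z = 1 screen plane (perspective divide)."""
    x, y, z = p
    if z <= 0:
        raise ValueError("point is behind the camera")
    return (x / z, y / z)

def unproject(screen, depth):
    """Reverse projection: a screen point maps to an entire ray, so a
    depth must be chosen to recover a unique 3D point."""
    sx, sy = screen
    return (sx * depth, sy * depth, depth)

p = (2.0, 3.0, 4.0)
s = project(p)           # the "A'" of the quoted recipe
q = unproject(s, 4.0)    # recovers p only because we supplied its depth
```

And this only covers one camera type; orthographic, fisheye, and the other POV camera models each need their own pair of mappings.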
> The intersection test of 'pixel' is same as sphere with radius==epsilon (or
> some tolerance value)
Why even bother? The chances of a ray hitting the sphere are vanishingly
small, so it might as well not intersect with anything at all. It would
just slow things down.
I really don't see how this is a good idea. POV is a raytracer, and this
isn't a raytracing primitive. And why just pixels? Why not lines,
circles, polygons, etc.? And what about antialiasing? If everything is
unantialiased, it won't be very useful.
It might make sense as a post-process filter working in image space,
since it is really 2D drawing on top of the image.
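As a rough sketch of that post-process idea (purely illustrative, not an actual POV-Ray feature), drawing a star is just an additive splat into the finished 2D image buffer at the star's projected pixel position:

```python
def splat_star(image, px, py, color):
    """Add a star's color to one pixel of the rendered image.
    Additive, so it brightens whatever the render already put there."""
    r, g, b = image[py][px]
    cr, cg, cb = color
    image[py][px] = (r + cr, g + cg, b + cb)

# 4x4 black image as a list of rows of RGB tuples
img = [[(0.0, 0.0, 0.0)] * 4 for _ in range(4)]
splat_star(img, 2, 1, (1.0, 1.0, 1.0))
```

Lines, circles, and polygons would fit the same model, since it is all 2D drawing on top of the image.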
Really, the best thing I can think of would be:

- Somehow tell POV to use a higher minimum antialiasing sampling on
parts of the image that hit a certain object (the sky object).
- Use "super-bright" colors (with components > 1) for the stars, and
make the stars about the size of a pixel.
- Clip colors after doing the antialiasing filter, as some kind of flag
or render option if you prefer it that way.

The minimum supersampling makes it more likely a pixel gets hit, and
the clipping change allows the star to contribute the proper amount to
the pixel.
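A quick numeric sketch of why the clipping has to happen after the antialiasing filter (illustrative only; POV-Ray's real sampling is adaptive, and the brightness values here are made up):

```python
def pixel_value(samples, clip_before):
    """Average a pixel's supersamples, clipping to [0, 1] either
    before or after the average."""
    if clip_before:
        samples = [min(s, 1.0) for s in samples]
    avg = sum(samples) / len(samples)
    return min(avg, 1.0)

# 16 supersamples: one hits a super-bright star (brightness 8),
# the other 15 hit black sky.
samples = [8.0] + [0.0] * 15

pixel_value(samples, clip_before=True)   # 1/16 = 0.0625, star nearly lost
pixel_value(samples, clip_before=False)  # 8/16 = 0.5, proper contribution
```

With early clipping the star can never contribute more than 1/N of a pixel, no matter how bright it is; clipping after the filter preserves its energy.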
--
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tagpovrayorg
http://tag.povray.org/