> The only use I can think of would be hanging wires.
Or other such thin lines, like in the bricks pattern, or when you're trying
to simulate fur and hair.
On Wed, 13 Feb 2002 12:01:20 -0800, Ben Chambers wrote:
> Simplified version (for the final viewing plane):
> Check the object's bounding hierarchy for the maximum and minimum elevations
> above the horizon, as well as the rotations from center. Plot as a box.
> AFAIK, this is the method used for the vista buffer.
Actually, the vista buffer is somewhat more complex, in that it actually
projects the bounding box onto the camera plane as a generalized hexagon.
> The current sampling method is color based, and works to ensure all portions
> of the screen look nice. This method, being intersection based, deals with
> edges / small objects very nicely, but will not address aliased textures.
Correction: it deals with edges that are within a pixel of the edge of the
bounding box. This is a far cry from all edges, even for simple boxes and
cylinders (consider rotation.)
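The projection step described above can be sketched roughly as follows. This is a simplified variant that takes a screen-space bounding rectangle rather than POV-Ray's generalized hexagon, and all names (`project_box`, the camera parameters) are illustrative, not POV-Ray source:

```python
# Sketch: project a bounding box's 8 corners onto the image plane and
# take their 2D extent. POV-Ray's vista buffer builds a generalized
# hexagon from the same corners; the bounding rectangle below is the
# simpler variant described in the quoted post.

def project_box(corners, focal=1.0, width=320, height=240):
    """Return the pixel-space bounding rectangle of a box's corners.

    corners: iterable of (x, y, z) points in camera space, z > 0.
    """
    xs, ys = [], []
    for (x, y, z) in corners:
        # Pinhole projection onto the image plane at distance `focal`.
        sx = focal * x / z
        sy = focal * y / z
        # Map from normalized plane coordinates to pixel coordinates.
        xs.append((sx + 0.5) * width)
        ys.append((0.5 - sy) * height)
    return (min(xs), min(ys)), (max(xs), max(ys))

# A unit cube centered 5 units in front of the camera:
cube = [(x, y, z)
        for x in (-0.5, 0.5)
        for y in (-0.5, 0.5)
        for z in (4.5, 5.5)]
(x0, y0), (x1, y1) = project_box(cube)
```

The rectangle is conservative: any primary ray that can hit the object falls inside it, which is what makes it usable as a quick rejection test.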
--
#macro R(L P)sphere{L F}cylinder{L P F}#end#macro P(V)merge{R(z+a z)R(-z a-z)R(a
-z-z-z a+z)torus{1F clipped_by{plane{a 0}}}translate V}#end#macro Z(a F T)merge{
P(z+a)P(z-a)R(-z-z-x a)pigment{rgbf 1}hollow interior{media{emission 3-T}}}#end
Z(-x-x.2x)camera{location z*-10rotate x*90normal{bumps.02scale.05}}
Ben Chambers <bdc### [at] yahoocom> wrote:
:> Does it work for reflections/refractions as well?-)
: Does the current method? I was under the impression that supersampling was
: done only on the rays shot from the view plane, and not on reflection /
: refracted rays.
Of course it works for reflections/refractions. If the pixel color
difference threshold is reached, then another ray is shot from the camera.
This ray will (probably) also reflect/refract from the same object, thus
sampling whatever is reflected/refracted.
However, the problem here was: if an object or detail is so small that no
ray hits it, and it thus goes undetected, how could it be detected anyway
so that rays are shot at it? This might be possible only if the small
object/detail is viewed directly from the camera, but not if it's seen
only in a reflection or refraction.
--
#macro M(A,N,D,L)plane{-z,-9pigment{mandel L*9translate N color_map{[0rgb x]
[1rgb 9]}scale<D,D*3D>*1e3}rotate y*A*8}#end M(-3<1.206434.28623>70,7)M(
-1<.7438.1795>1,20)M(1<.77595.13699>30,20)M(3<.75923.07145>80,99)// - Warp -
In article <3c6ad4df@news.povray.org>,
"Ben Chambers" <bdc### [at] yahoocom> wrote:
> Such a setting belongs in the INI file, not in the scene.
The .ini file is no place for an object-specific attribute.
> The only use I can think of would be hanging wires.
Hairs, any kind of cabling, ropes or strings, screens or gratings, chain
link fences, stars, cracks (like in floors or walls), wood, small
text...I could come up with more. You can usually get good results
with the existing adaptive method, but not always.
--
Christopher James Huff <chr### [at] maccom>
POV-Ray TAG e-mail: chr### [at] tagpovrayorg
TAG web site: http://tag.povray.org/
Ron Parker wrote:
> Not in every case. Some things have to be approximated, which
> is a messy and expensive task. Examples include the poly object
> for higher-order polynomials, the isosurface object, and lots of
> others.
So because it can't be done for everything, it isn't done at all.
Gotcha.
--
Tim Cook
http://empyrean.scifi-fantasy.com
-----BEGIN GEEK CODE BLOCK-----
Version: 3.12
GFA dpu- s: a?-- C++(++++) U P? L E--- W++(+++)>$
N++ o? K- w(+) O? M-(--) V? PS+(+++) PE(--) Y(--)
PGP-(--) t* 5++>+++++ X+ R* tv+ b++(+++) DI
D++(---) G(++) e*>++ h+ !r--- !y--
------END GEEK CODE BLOCK------
In article <3c6ac710@news.povray.org>,
"Ben Chambers" <bdc### [at] yahoocom> wrote:
> Does the current method? I was under the impression that supersampling was
> done only on the rays shot from the view plane, and not on reflection /
> refracted rays.
The current method uses the resulting color of the pixel, so anything
that causes a change in the color gets antialiased. The pixels are what
get supersampled, not the rays.
--
Christopher James Huff <chr### [at] maccom>
POV-Ray TAG e-mail: chr### [at] tagpovrayorg
TAG web site: http://tag.povray.org/
From: Thorsten Froehlich
Subject: Re: More methods? was Re: anti-aliasing
Date: 13 Feb 2002 17:12:17
Message: <3c6ae4c1@news.povray.org>
In article <chr### [at] netplexaussieorg> , Christopher
James Huff <chr### [at] maccom> wrote:
> So, if a pixel hit an object but an adjacent one didn't, supersample the
> surrounding ones to make sure they didn't hit it, and supersample
> outwards until you run out of pixels that hit the object?
Basically, yes. It will always succeed, given that the object meets the
requirements I mentioned. Your method could still fail, but it is far
simpler to implement, of course.
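A minimal sketch of this "supersample outwards" idea, assuming a simple one-ring flood from the pixels whose primary rays hit the object (the names and the stopping rule are illustrative, not what either poster implemented):

```python
# Sketch: starting from pixels whose rays hit the object, grow
# outwards breadth-first and mark each newly reached pixel for
# supersampling, so the object's silhouette is covered even where
# adjacent primary rays missed it.

from collections import deque

def grow_supersampling(hits, width, height):
    """hits: set of (x, y) pixels whose primary ray hit the object.
    Returns the set of pixels selected for supersampling: every hit
    pixel plus the one-pixel border reached by flooding outwards."""
    selected = set(hits)
    queue = deque(hits)
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < width and 0 <= ny < height \
                    and (nx, ny) not in selected:
                selected.add((nx, ny))  # supersample this neighbor
                # A real tracer would keep flooding only while the
                # extra samples here still hit the object; this sketch
                # stops after one ring.
    return selected
```

The guarantee Thorsten alludes to comes from the flood: as long as the object's screen-space footprint is connected and at least one primary ray hits it, growing outwards eventually reaches every boundary pixel.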
Thorsten
____________________________________________________
Thorsten Froehlich, Duisburg, Germany
e-mail: tho### [at] trfde
Visit POV-Ray on the web: http://mac.povray.org
In article <3C6AE046.5D095B2C@scifi-fantasy.com>,
"Timothy R. Cook" <tim### [at] scifi-fantasycom> wrote:
> So because it can't be done for everything, it isn't done at all.
It would be very difficult, probably requiring a rewrite, and about the
only thing it would work for would be triangles and polygons. Even
spheres would probably be too difficult...where's the edge of an
unevenly scaled and rotated sphere? And then you have things like the
different camera types, camera normals, etc., which would make it
unusable for any object...
It is simply not worth implementing for the few cases it would actually
be any help.
POV is a raytracer: it only knows where the surface of an object is by
intersecting rays with it; it is not aware of the "edges". The only way
to do it for most objects is to take samples, which is exactly what is
done now.
--
Christopher James Huff <chr### [at] maccom>
POV-Ray TAG e-mail: chr### [at] tagpovrayorg
TAG web site: http://tag.povray.org/
On Wed, 13 Feb 2002 16:22:33 -0500, "Timothy R. Cook"
<tim### [at] scifi-fantasycom> wrote:
>But they're defined by mathematical formulae. You know (or can find
>out) exactly where the surface is by solving for the formula.
You're always welcome to solve for the formula defining the
intersection of a complex isosurface with noise and an atanh Julia
fractal :)
Peter Popov ICQ : 15002700
Personal e-mail : pet### [at] vipbg
TAG e-mail : pet### [at] tagpovrayorg
One can always render at 10 times the final resolution and then use any
resampling method available in image-processing packages to produce the
final image. As I understand it, POV-Ray simply averages the sub-pixels.
Has anyone tried cosine, cone, or some other weighting distribution?
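A sketch of the idea, not POV-Ray's actual code: downsample a supersampled row with a chosen weighting function. `box` reproduces plain averaging; `cone` (a triangle filter) weights the pixel center more heavily. All names are illustrative:

```python
# Sketch: collapse a supersampled signal to final resolution with a
# selectable reconstruction filter instead of a plain average.

def downsample(samples, factor, weight):
    """Collapse `factor` consecutive samples into one output value,
    weighting each sample by its position inside the footprint."""
    out = []
    for i in range(0, len(samples) - factor + 1, factor):
        ws = [weight((j + 0.5) / factor) for j in range(factor)]
        total = sum(w * samples[i + j] for j, w in enumerate(ws))
        out.append(total / sum(ws))
    return out

def box(t):
    return 1.0                       # plain average of the sub-pixels

def cone(t):
    return 1.0 - abs(2.0 * t - 1.0)  # peaks at the pixel center

row = [0.0] * 3 + [1.0] * 7          # one pixel, supersampled 10x
print(downsample(row, 10, box))      # -> [0.7]
print(downsample(row, 10, cone))     # ~0.82: center samples count more
```

With a symmetric edge the two filters agree; the asymmetric edge above shows the cone filter pulling the result toward whatever covers the pixel center, which is the visible difference a non-box weighting makes.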
_____________
Kari Kivisalo