On Thu, 12 Oct 2000 06:45:26 -0700, ian wrote:
>:D
>Very nice!!
>
>#declare Ultimate_Povray_Programmer =
> merge {
> object {
> Chris_Huff
> }
> object {
> Nathan_Kopp
> }
> object {
> Pov_Team
> }
>}
But then you wouldn't be able to see parts of Nathan, because you'd get a
coincident surface problem with the two copies of him.
--
Ron Parker http://www2.fwi.com/~parkerr/traces.html
My opinions. Mine. Not anyone else's.
Chris Huff wrote:
>
> I have been working on a new transparency patch. Eventually it will
> allow transparency to be controlled independently of the pigment
> color (which will make use of transparent image_maps easier), but
> currently only blurring is implemented. I have added two blur
> algorithms: one is based on the one used in the blurred reflection
> patch, which shoots a number of randomly jittered rays and averages
> their results, and a new one which attempts to solve the graininess
> problem of the original but is less accurate and has other aliasing
> problems. Basically, it shoots evenly spaced rays in two planes parallel
> to and intersecting along the ray, and perpendicular to each other.
> I plan to update the blurred reflection patch as well...my blurring code
> is at a deeper level, so the same code can be used for both reflection
> and transparency.
>
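The jittered-ray averaging described in the first algorithm above can be sketched as follows. This is a minimal Python illustration, not the actual patch code; the names (`blurred_trace`, the `trace` callback standing in for POV-Ray's internal Trace() function) are hypothetical.

```python
import random

def blurred_trace(origin, direction, amount, samples, trace):
    """Fake a rough surface by averaging several jittered rays.

    trace(origin, direction) stands in for the renderer's ray function
    and returns an (r, g, b) color; `amount` controls how far each
    sample direction is randomly perturbed."""
    total = [0.0, 0.0, 0.0]
    for _ in range(samples):
        # Jitter each component of the ray direction randomly.
        jittered = [d + random.uniform(-amount, amount) for d in direction]
        color = trace(origin, jittered)
        total = [t + c for t, c in zip(total, color)]
    # The average of the jittered samples is the blurred result.
    return [t / samples for t in total]
```

With few samples this is exactly where the graininess Chris mentions comes from: each pixel averages a different random set of directions.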
That looks promising, but if I understand things right, both blurred reflection
and transparency are only fakes for rough surfaces (of course that doesn't mean
they are obsolete). The structures in the second version seem to have some
similarities with shadows from area lights.
I estimate both algorithms take quite a long time to render compared to normal
transparency, depending much on what's behind the transparent object.
BTW, I remember you working on some pattern blurring function, did you have any
results in that direction?
Christoph
--
Christoph Hormann <chr### [at] gmx de>
Homepage: http://www.schunter.etc.tu-bs.de/~chris/
In article <39E60BF7.A7CE2A8D@schunter.etc.tu-bs.de>,
chr### [at] gmx de wrote:
> That looks promising, but if I understand things right, both blurred
> reflection and transparency are only fakes for rough surfaces (of
> course that doesn't mean they are obsolete).
True, they are intended to simulate evenly rough surfaces. You can get
more precise and controllable results using a normal, but you need
antialiasing to get the effects to be visible, and that affects the
whole scene and is slow.
If I can figure out a way to supersample specific textures or objects, I
will try to implement it...it could use a method similar to the blur
algorithms I am working on now. If I can work out how to get an estimate
of the ray footprint, a sort of imitation differential ray-tracing might
be possible.
> The structures in the second version seem to have some similarities
> with shadows from area lights.
Yes, because the rays "fan out" and are evenly spaced, the spacing gets
larger with distance, similar to the way area shadows spread out.
These artifacts should be less visible with more random backgrounds (the
checker pigment used here really shows the problem because of its
repetitive nature).
I suppose if the original algorithm is like media method 1, the current
one is like media method 2. I have a couple of ideas for anti-aliasing
the blur:
1) Sample along 2 perpendicular directions, like the existing method 2,
but super-sample between two samples when their difference in color
exceeds a threshold. This would be sort of like media method 3...
2) Send rays out in a triangular pattern, dividing into sub-triangles
when necessary. This would have the advantage of covering an area of
space instead of sampling along two directions...the recursive triangle
pattern might make the aliasing less noticeable, too.
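The first idea, threshold-driven supersampling in the spirit of media method 3, might look like this reduced to one dimension. This is a hypothetical Python sketch (the name `adaptive_samples` and the scalar function `f` are illustration only; in the patch the samples would be colors along a sampling axis).

```python
def adaptive_samples(f, a, b, threshold, depth=3):
    """Adaptively supersample f on [a, b]: subdivide wherever two
    neighboring samples differ by more than `threshold`, up to a
    maximum recursion depth."""
    fa, fb = f(a), f(b)
    if depth == 0 or abs(fa - fb) <= threshold:
        # Smooth enough (or out of depth): use the plain average.
        return (fa + fb) / 2.0
    m = (a + b) / 2.0
    # Large difference: split the interval and recurse on both halves.
    return (adaptive_samples(f, a, m, threshold, depth - 1)
            + adaptive_samples(f, m, b, threshold, depth - 1)) / 2.0
```

Extra samples are spent only near discontinuities, which is why this helps with repetitive backgrounds like the checker pigment.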
> I estimate both algorithms take quite long to render compared to normal
> transparency depending much on what's behind the transparent object.
Compared to ordinary transparency: yes, it is much slower. However, the
second version can sometimes produce smoother results than the first
version with the same number of samples, so the slowdown isn't too bad,
especially if it only covers a small area of the image.
These images both had 12 samples, and rendered in about the same
time (around 2 minutes, though I haven't performed a real test of
rendering speed yet).
> BTW, I remember you working on some pattern blurring function, did
> you have any results in that direction?
I am planning to make another try; the first one wasn't very successful
or easy to use (it used a 3D convolution matrix, which allowed effects
other than blur, but was a pain to use and too slow with matrices large
enough to get decently smooth blur; I plan to allow this and another,
easier to use and faster method in my next try). I am debating whether
to blur patterns or pigments...probably both.
--
Christopher James Huff
Personal: chr### [at] mac com, http://homepage.mac.com/chrishuff/
TAG: chr### [at] tag povray org, http://tag.povray.org/
<><
Chris Huff wrote:
>
[...]
> Yes, because the rays "fan out" and are evenly spaced, the spacing gets
> larger with distance, similar to the way area shadows spread out.
> These artifacts should be less visible with more random backgrounds (the
> checker pigment used here really shows the problem because of its
> repetitive nature).
> I suppose if the original algorithm is like media method 1, the current
> one is like media method 2. I have a couple ideas for anti-aliasing the
> blur:
> 1) Sample along 2 perpendicular directions, like the existing method 2,
> but super-sample between two samples when their difference in color
> exceeds a threshold. This would be sort of like media method 3...
> 2) Send rays out in a triangular pattern, dividing into sub-triangles
> when necessary. This would have the advantage of covering an area of
> space instead of sampling along two directions...the recursive triangle
> pattern might make the aliasing less noticeable, too.
How about combining geometric and random techniques? That would help in
adjusting graininess, but I'm not sure if that's easy to implement.
>
> > BTW, I remember you working on some pattern blurring function, did
> > you have any results in that direction?
>
> I am planning to make another try; the first one wasn't very successful
> or easy to use (it used a 3D convolution matrix, which allowed effects
> other than blur, but was a pain to use and too slow with matrices large
> enough to get decently smooth blur; I plan to allow this and another,
> easier to use and faster method in my next try). I am debating whether
> to blur patterns or pigments...probably both.
>
I always thought a convolution matrix could only be applied to a rastered
pattern; at least you would have to specify some kind of scale factor for the
matrix IMO.
If I understand things right, it's worth trying adaptive calculations (like
those used with antialiasing) when working with a larger matrix.
Christoph
--
Christoph Hormann <chr### [at] gmx de>
Homepage: http://www.schunter.etc.tu-bs.de/~chris/
In article <39E62E83.2104786E@schunter.etc.tu-bs.de>,
chr### [at] gmx de wrote:
> How about combining geometric and random techniques? That would help in
> adjusting graininess, but I'm not sure if that's easy to implement.
I plan to add jittering to my patch eventually; it should be quite easy.
> I always thought a convolution matrix can only be applied to a
> rastered pattern,
Why?
> at least you would have to specify some kind of scale factor for the
> matrix IMO.
What? I don't know what you mean by "scale factor"...
My patch allowed you to specify the size of the matrix separate from the
number of elements, is that what you meant?
> If I understand things right, it's worth trying adaptive calculations
> (like those used with antialiasing) when working with a larger matrix.
That is the other method I was talking about. :-)
Basically, sample along each axis, possibly with anti-aliasing and/or
jitter.
--
Christopher James Huff
Personal: chr### [at] mac com, http://homepage.mac.com/chrishuff/
TAG: chr### [at] tag povray org, http://tag.povray.org/
<><
Chris Huff wrote:
>
> I plan to add jittering to my patch eventually; it should be quite easy.
>
Sounds like a good idea to me.
> > I always thought a convolution matrix can only be applied to a
> > rastered pattern,
>
> Why?
>
> > at least you would have to specify some kind of scale factor for the
> > matrix IMO.
>
> What? I don't know what you mean by "scale factor"...
> My patch allowed you to specify the size of the matrix separate from the
> number of elements, is that what you meant?
>
I mean the convolution matrix has some scale in relation to the pattern; the
distance between two neighboring elements in the matrix must have some
representation in POV units. Of course that's independent of the number of
elements...
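That scale-factor point can be sketched with a hypothetical 1D pattern-blur helper: the kernel gets a physical `size` in scene units, and the spacing between neighboring matrix elements follows from it, independent of the element count. Names here are illustration only, not the patch's syntax.

```python
def blur_pattern(pattern, x, size, weights):
    """Evaluate a pattern blurred by a 1D convolution kernel.

    `size` is the kernel's physical extent in scene (POV) units, so
    the spacing between neighboring matrix elements is
    size / (len(weights) - 1), regardless of how many elements the
    matrix has -- the "scale factor" discussed above."""
    n = len(weights)
    spacing = size / (n - 1)
    start = x - size / 2.0  # center the kernel on the sample point
    total = sum(w * pattern(start + i * spacing)
                for i, w in enumerate(weights))
    return total / sum(weights)  # normalize so flat patterns stay flat
```

The same size can be reused with a denser matrix for a smoother blur, at proportionally higher cost per evaluation.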
> > If I understand things right, it's worth trying adaptive calculations
> > (like used with antialiasing) when working with a larger matrix.
>
> That is the other method I was talking about. :-)
> Basically, sample along each axis, possibly with anti-aliasing and/or
> jitter.
>
That's not exactly what I was thinking of, although it also sounds interesting.
I meant a regular convolution matrix, but leaving out elements in the
calculation that are not necessary. Imagine, for example, that blurring a
checker pattern leads to a lot of identical elements in the matrix.
After thinking a bit further about this idea, it's probably not that feasible;
maybe just forget about it :-)
Christoph
--
Christoph Hormann <chr### [at] gmx de>
Homepage: http://www.schunter.etc.tu-bs.de/~chris/
Hello!
I'm really glad whenever I see people want to improve POV, and are able
to do it! ;o) Now, pardon me for maybe being dumb, but I still wonder
why you are not using a simpler method for blurring.
I can imagine my idea isn't working with transparent surfaces, but... In
my opinion it wouldn't produce a wrong result if POV rendered the
reflection in the "ordinary" old way, but into a separate buffer instead
of the screen, then used a simple and fast gaussian blur, and then UV mapped
it onto the object again, as a pigment.
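The "simple and fast gaussian blur" step could look like this: a hypothetical separable-kernel sketch on a plain float buffer, not MegaPOV code. A separable kernel lets the 2D blur be done as two cheap 1D passes.

```python
def gaussian_blur(image, kernel):
    """Separable blur: convolve every row, then every column, with a
    1D kernel (e.g. [1, 2, 1]). `image` is a list of rows of floats."""
    k = [w / sum(kernel) for w in kernel]  # normalize the kernel
    r = len(kernel) // 2                   # kernel radius

    def blur_row(row):
        out = []
        for i in range(len(row)):
            acc = 0.0
            for j, w in enumerate(k):
                # Clamp indices so edge pixels reuse the border value.
                idx = min(max(i + j - r, 0), len(row) - 1)
                acc += w * row[idx]
            out.append(acc)
        return out

    rows = [blur_row(row) for row in image]          # horizontal pass
    cols = [blur_row(col) for col in zip(*rows)]     # vertical pass
    return [list(row) for row in zip(*cols)]         # transpose back
```

A real gaussian kernel would have more taps, but the structure is the same; note this blurs every pixel by the same amount, which is exactly the depth problem discussed next.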
I don't know if nearby objects are supposed to blur less, according to
physics, but if so, then we could use the same method as post-processing
camera depth blur.
I know there's a tricky thing regarding the edges of the object, if it's
blurred my way. But my idea is that the buffer should contain a
reflection of the object that "follows the object's shape".
As you can see, I'm not used to explaining this.. But I can try further,
if necessary.. :o)
Hugo
Hugo wrote:
>
> Hello!
>
> I'm really glad whenever I see people want to improve POV, and are able
> to do it! ;o) Now, pardon me for being dumb maybe but I still wonder
> why you are not using a simpler method for blurring.
>
> I can imagine my idea isn't working with transparent surfaces, but.. In
> my opinion it wouldn't produce a wrong result if: POV renders the
> reflection in the "ordinary" old way, but into a separate buffer instead
> of the screen. Then use simple and fast gaussian blur, and then UV map
> it onto the object again, as a pigment.
>
> I don't know if nearby objects are supposed to blur less, according to
> physics, but if so, then we could use the same method as post-processing
> camera depth blur.
>
Your idea has strong similarities with megapov's post-processed focal blur and
has the same pros and cons (like being strongly resolution dependent). As you
said, near objects should have less blur than those far away, so you would use
the depth information like with focal blur. Right now post processing supports
neither transparency nor reflection, so things would not work correctly in many
cases. Furthermore, results with semitransparent objects would be very bad
anyway.
Your mapping idea does not seem that good to me, because it would only lead to
interference and the need for interpolation or high resolution, and therefore
slower rendering times.
> I know there's a tricky thing regarding the edges of the object, if it's
> blurred my way. But my idea is that the buffer should contain a
> reflection of the object that "follows the object's shape".
>
The edges are handled quite well with focal blur, so that would not be the
problem. There are quite a lot of problems with post-processed blur, like the
enormous memory use when working with large images and the things mentioned
above, so it probably could not replace other methods. Anyway, it could be a
nice addition to the depth post processing (which is used by focal blur IIRC)
to support transparency and reflection, although I don't think it's that easy
to implement.
Christoph
--
Christoph Hormann <chr### [at] gmx de>
Homepage: http://www.schunter.etc.tu-bs.de/~chris/
In article <39E### [at] mailme dk>, Hugo <hug### [at] mailme dk>
wrote:
> I'm really glad whenever I see people want to improve POV, and are able
> to do it! ;o) Now, pardon me for being dumb maybe but I still wonder
> why you are not using a simpler method for blurring.
>
> I can imagine my idea isn't working with transparent surfaces, but.. In
> my opinion it wouldn't produce a wrong result if: POV renders the
> reflection in the "ordinary" old way, but into a separate buffer instead
> of the screen. Then use simple and fast gaussian blur, and then UV map
> it onto the object again, as a pigment.
This sounds like "environment mapping": rendering an image from the
position of the object and using that for reflection/transparency
information. The disadvantages of this are a big loss in the accuracy of
the reflections, the finite resolution of the environment map, memory
use, time spent generating the maps, etc. Blurred reflections and
transparency are easy to fake when using environment mapping, but adding
this to POV would not be so easy. And to get even slightly accurate blur
effects, you would have to store depth information too...and reflections
of objects close to or on the surface would look really bad.
The advantage of this method is speed, and it can be used easily in
scan-line renderers (most scan-line renderers use this method, and some
can resort to the ray-tracing method POV uses now for higher quality
reflections).
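A minimal sketch of such an environment-map lookup, with hypothetical names; real renderers interpolate between texels and often use cube maps rather than the latitude/longitude map shown here.

```python
import math

def envmap_lookup(envmap, direction):
    """Look up a reflection color in a latitude/longitude environment map.

    `envmap` is a list of rows of colors, prerendered from the object's
    position; `direction` is a unit reflection vector (x, y, z). This is
    the fast-but-approximate lookup used instead of tracing a real
    reflected ray."""
    x, y, z = direction
    # Longitude of the direction, mapped to [0, 1).
    u = (math.atan2(x, z) / (2.0 * math.pi)) % 1.0
    # Latitude (angle from +y), mapped to [0, 1]; clamp for safety.
    v = math.acos(max(-1.0, min(1.0, y))) / math.pi
    rows, cols = len(envmap), len(envmap[0])
    row = min(int(v * rows), rows - 1)
    col = min(int(u * cols), cols - 1)
    return envmap[row][col]
```

The accuracy loss Chris mentions is visible right in the math: every lookup assumes the reflected ray starts from the single point the map was rendered from.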
Or are you talking about some kind of post process, marking the area of
the image and saving the necessary data for later processing? How would
this handle reflections of reflecting objects, etc? Also, it would still
not be as accurate, and wouldn't exactly be easier to implement.
My patch can blur anything that uses the Trace() function, which could
make it useful for a couple other things I have in mind besides
reflection and transparence...your idea sounds like it would require
special handling for this.
> I don't know if nearby objects are supposed to blur less, according to
> physics, but if so, then we could use the same method as post-processing
> camera depth blur.
If you think about it, they would have to blur less...
Get a piece of some kind of blurry transparent plastic (or sandpaper a
piece of clear plastic). Get a book and press the plastic against a
page, and slowly pull it away.
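In other words, a point at distance d behind a surface that scatters rays into a cone of half-angle theta gets smeared over a disc of radius roughly d * tan(theta). A tiny sketch of that relation (names hypothetical):

```python
import math

def blur_radius(distance, half_angle_deg):
    """Radius of the disc a point at `distance` behind a rough surface
    is smeared over, given the scatter cone's half-angle in degrees.
    Nearby objects blur less: the radius grows linearly with distance."""
    return distance * math.tan(math.radians(half_angle_deg))
```

At distance zero (plastic pressed flat against the page) the blur vanishes, matching the experiment above.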
And the camera blur post_process isn't totally accurate either; it has
problems with transparent and reflecting objects. These problems aren't
bugs, but a result of the shortcuts taken to allow it to be done later
as a blur filter. The original kind is more accurate (and uses a method
similar to my patch, shooting a bunch of sample rays).
--
Christopher James Huff
Personal: chr### [at] mac com, http://homepage.mac.com/chrishuff/
TAG: chr### [at] tag povray org, http://tag.povray.org/
<><
We could try intersection, or blob...but those might look rather gross. :p
ian
Ron Parker wrote in message ...
>But then you wouldn't be able to see parts of Nathan, because you'd get a
>coincident surface problem with the two copies of him.