Warp wrote:
> it seemed clear to me that you were talking about the lightmap resolution.
Given your definition of lightmap, yes, I think that's what I was
talking about. When I learned it, it was with the term "chips", which
were basically 3D polygons in space, each of which had a particular
reflectivity spectrum and emission spectrum (for glowing places like
lightbulbs).
> The basic radiosity algorithm is basically calculating the illumination
> of surfaces into lightmaps (which can be applied to those surfaces).
Yes. Good so far.
> A lightmap is basically just an image map, but instead of telling the
> color of the surface at that point (which is something a texture map does)
> it tells the lighting (ie. brightness and coloration) of the surface at
> that point. A rendering engine filters the texture map with the light map
> in order to get the final surface color.
The version I learned had each chip having a particular color. There
weren't any surfaces big enough to apply a texture to. You applied the
texture to the wall, and the resolution of the texture gave you the
resolution of the chips, basically.
> Radiosity is an algorithm for calculating such lightmaps. For each pixel
> in the lightmap, the "camera" is put onto the surface of the object
> corresponding to that lightmap pixel, facing outwards, and the half-world
> seen from that point of view is averaged into that lightmap pixel. This is
> done multiple times in order to get diffuse inter-reflection of light
> between surfaces.
Hmmmm... What you describe might be isomorphic to what I learned. What I
remember is this:
You start with your 3D surface and break it down into "chips":
triangles, for convenience, each with a normal and an
emissive/reflective color.
Then, for each chip, you calculate how much of each other chip it can
see, and at what angle, and add the reflection of that into the chip
under consideration.
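In rough code, the "how much can it see, and at what angle" part is the
form factor between two chips. An untested sketch (occlusion between
chips ignored, and the point-to-point approximation only holds when the
chips are small compared to the distance between them):

    import numpy as np

    def form_factor(ci, ni, cj, nj, area_j):
        # ci, cj: chip centroids; ni, nj: unit normals (numpy arrays).
        d = cj - ci                # direction from chip i to chip j
        r2 = d @ d                 # squared distance between the chips
        d = d / np.sqrt(r2)
        cos_i = max(ni @ d, 0.0)   # how squarely chip i faces chip j
        cos_j = max(nj @ -d, 0.0)  # how squarely chip j faces chip i
        return cos_i * cos_j * area_j / (np.pi * r2)

F[i][j] is then the fraction of light leaving chip i that lands on
chip j.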
The nice thing was that you could do this with one giant matrix multiply
(one row and one column for each chip), where everything but the
diagonal starts out zero (IIRC). And once you've taken it to the
precision you want, you have the color of each chip, and you can redraw
from different angles without recalculating the lighting.
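In equally rough, untested code (names made up; it assumes the
form-factor matrix F has already been filled in):

    import numpy as np

    def solve_radiosity(E, rho, F, bounces=50):
        # E: emission per chip; rho: diffuse reflectivity per chip;
        # F: form-factor matrix between chips.
        B = E.copy()               # to start with, only emitters glow
        for _ in range(bounces):   # each multiply adds one more bounce
            B = E + rho * (F @ B)
        return B                   # view-independent color per chip

Each pass through the loop is one "bounce" of light; stopping early just
means fewer bounces, which is the precision knob I mentioned.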
The bad thing, of course, is that you don't get specular (mirror)
reflection, because each chip's contribution is 100% diffuse over the
surface of that chip.
And, having written that, yes, I think what you described and what I
described are describing the same thing.
I think maybe your "lightmap bitmap" concept is assigning one pixel of a
bitmap to each of the "chips" in the algorithm I know.
I was thinking it could be adaptive because if you made some chips
larger (like, a smooth plain wall with inch-square chips) and some chips
smaller (like the surfaces of the paintings on the walls) your matrix
would be smaller. If you're already assuming you're calculating a bitmap
to be layered over a surface, it would be difficult to have
variable-sized pixels in it.
> The great thing about radiosity is that calculating the lightmaps can
> be done with 3D hardware, making it quite fast (although still not
> real-time).
Yeah, it seemed the kind of algorithm that one could put in hardware
rather easily.
--
Darren New / San Diego, CA, USA (PST)
Remember the good old days, when we
used to complain about cryptography
being export-restricted?
Darren New <dne### [at] sanrrcom> wrote:
> You start with your 3D surface and break it down into "chips":
> triangles, for convenience, each with a normal and an
> emissive/reflective color.
That indeed reminds me of an alternative (but much less popular) method
for calculating radiosity.
Instead of calculating the lighting into lightmaps, it is calculated at
and stored in the vertices of each polygon (basically in the same way as
you would do it for each individual pixel in the lightmap). The polygon
itself is then Gouraud- or Phong-shaded using these vertex lighting
values.
In order to get more accuracy polygons can be subdivided into smaller
polygons and radiosity lighting calculated at each of the new vertices.
This would indeed allow subdividing more at places where there is more
variation in lighting.
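A quick sketch of how that adaptive subdivision might look (hypothetical
helper names; per-vertex lighting reduced to a single brightness value
for simplicity):

    def midpoint(p, q):
        return tuple((a + b) / 2 for a, b in zip(p, q))

    def subdivide(tri, radiosity_at, threshold, depth=0, max_depth=4):
        # tri: three vertex positions; radiosity_at stands in for a
        # full radiosity gather at a vertex. Returns (triangle,
        # per-vertex radiosity) pairs ready for Gouraud shading.
        a, b, c = tri
        ra, rb, rc = radiosity_at(a), radiosity_at(b), radiosity_at(c)
        spread = max(ra, rb, rc) - min(ra, rb, rc)
        if depth >= max_depth or spread < threshold:
            return [(tri, (ra, rb, rc))]   # smooth enough; stop here
        # Too much variation: split at the edge midpoints and recurse.
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        parts = [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
        out = []
        for t in parts:
            out += subdivide(t, radiosity_at, threshold,
                             depth + 1, max_depth)
        return out

Flat, evenly-lit walls stay as a few big triangles while shadow
boundaries get finely subdivided.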
AFAIK this method is not very popular because rendering lightmaps onto
polygons is much faster and more efficient than rendering several orders
of magnitude more polygons (which still need Gouraud/Phong shading,
itself not significantly faster than lightmapping, and possibly even
slower).
This is especially true in real-time rendering using hardware, as polygon
counts should be minimized, possibly being replaced with more detailed
textures.
--
- Warp
Warp wrote:
> In order to get more accuracy polygons can be subdivided into smaller
> polygons and radiosity lighting calculated at each of the new vertices.
I think that's what I was talking about. But without the
multiple-normals-per-polygon, even. It was a pretty theoretical class.
> This is especially true in real-time rendering using hardware, as polygon
> counts should be minimized, possibly being replaced with more detailed
> textures.
This class was long before anyone was making special-purpose graphics
chips. Heck, this was probably a decade before software JPEG compression
was feasible. :-) I think a pen plotter and a Tektronix tube were
cutting-edge graphics.
--
Darren New / San Diego, CA, USA (PST)
Remember the good old days, when we
used to complain about cryptography
being export-restricted?
Tom York wrote:
> No, being a biased method it definitely isn't guaranteed to (even ignoring
> limits on quality settings that others have mentioned).
Well, not being a true simulation of quantum-dynamical behaviour (such
as wave-particle duality and superposition), an unbiased renderer isn't
guaranteed to produce scientifically correct images either. But who
cares? Computer graphics is all about finding something close enough
without actually simulating the entire Real World. ;-)
>> Your point?
>
> The nice thing (or one of the nice things) about the unbiased
> methods is that you can wind them up and let them go and the quality will
> definitely increase over time - minimising tweaking/re-rendering to avoid
> stubborn artefacts.
That does indeed sound nice.
There's probably a way to add this to POV-Ray without drastically
altering the algorithm though. ;-)
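As a toy illustration of that wind-it-up-and-let-it-go property (nothing
POV-Ray-specific; per pixel, the unbiased methods boil down to a running
average of independent samples, which converges as more samples arrive):

    import random

    def progressive_estimate(sample, passes):
        mean = 0.0
        for n in range(1, passes + 1):
            mean += (sample() - mean) / n   # incremental running mean
            yield mean                      # the "image" after n passes

    # The true value here is 0.5; the estimate keeps creeping toward
    # it, with no fixed quality ceiling, only diminishing noise.
    print(list(progressive_estimate(random.random, 10000))[-1])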