> At present, height fields normally basically copy the underlying image
> data into a local data container within the height field data structure,
> only changing the binary representation of the values from whatever the
> source provides into 16-bit values.
OK, I wasn't sure whether the parser freed the image data after the height-field
object was created, but your method would be much better overall for all
types of height_field.
> That said, normals could then be precomputed from the data in the image
> container (rather than the local 16-bit copy of that data); 3x16-bit
> storage for the normals should still be enough though, I see no need for
> change there.
Are you sure that 3x16-bit storage will be enough under all conditions to give
a perfect result? The worst case I can think of is a convex curved surface
that has been scaled heavily along one axis (e.g. <1,100,1>) and then given a
mirror finish. Any tiny error in the surface normals will be easily visible.
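To get a rough feel for the numbers, here is a quick back-of-the-envelope sketch (plain Python, not POV-Ray code; the component range and rounding scheme are my assumptions, not necessarily what POV-Ray's height field actually stores). It quantizes a normal to three signed 16-bit components and measures the angular error, using a normal from a surface scaled by <1,100,1> — remember that normals transform by the inverse of a scale, so the y component shrinks by the scale factor:

```python
import math

def quantize16(n):
    # Round each component from [-1, 1] to a signed 16-bit integer,
    # reconstruct, and renormalize -- mimicking 3x16-bit normal storage.
    q = [round(c * 32767) for c in n]
    r = [c / 32767 for c in q]
    length = math.sqrt(sum(c * c for c in r))
    return [c / length for c in r]

def angle_deg(a, b):
    # Angle between two unit vectors, clamped against rounding.
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
    return math.degrees(math.acos(dot))

# Take a 45-degree slope normal and apply the inverse of a <1,100,1>
# scale: the y component is divided by 100, then renormalize.
n = [1.0, 1.0 / 100.0, 0.0]
length = math.sqrt(sum(c * c for c in n))
n = [c / length for c in n]

err = angle_deg(n, quantize16(n))
print(f"angular error after 16-bit quantization: {err:.6f} degrees")
```

With 16 bits per component the per-component step is about 3e-5, so the angular error stays in the thousandths-of-a-degree range even for this flattened normal — whether that is invisible on a mirror finish presumably depends on how far the reflected geometry is from the surface.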