Please see my recent p.b.i thread for background, plus the excellent
analysis by clipka.
I propose to change some of the storage types used that are related to
height_field data. It seems to me these were written with minimum storage
space in mind rather than accuracy (and maybe when integer processing was
faster than float?). The changes should help when non-integer images are
used for height_fields, and also when functions are used within SDL to
directly generate the height_field.
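
To illustrate the kind of accuracy loss I mean, here is a tiny standalone
test that just mimics the round trip through a 16-bit value (toy code only,
not the actual POV-Ray code):

  // Toy demonstration of the precision lost by 16-bit height storage.
  #include <cstdio>
  #include <cmath>

  int main()
  {
      // A height value as it might come from a float image or an SDL function.
      double h = 0.123456789;

      // Roughly what happens now: quantize to an unsigned 16-bit integer...
      unsigned short stored = static_cast<unsigned short>(h * 65535.0);

      // ...and convert back to floating point when the height field is built.
      double recovered = stored / 65535.0;

      std::printf("original:  %.9f\n", h);
      std::printf("recovered: %.9f\n", recovered);
      std::printf("error:     %.2e\n", std::fabs(h - recovered));
      // The error is on the order of 1/65535 of the full height range,
      // which is what shows up as visible stepping in the surface.
      return 0;
  }
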
There are some minor changes needed to hfield.h and hfield.cpp to avoid
casting and storing heights/normals as integers.
Then Parser::Make_Pattern_Image (in parstxtr.cpp) should create a
float-based image to store the height values, rather than an integer-based one.
My very basic tests show that these changes solve the issue that is visible,
but I have not investigated any side effects on other parts of the code.
Is it worth investigating such a change further? The accuracy of
height_fields will be improved, but at the cost of increased memory usage.
scott wrote:
> I propose to change some of the storage types used that are related to
> height_field data. It seems to me these were written with minimum
> storage space in mind rather than accuracy (and maybe when integer
> processing was faster than float?). The changes should help when
> non-integer images are used for height_fields, and also when functions
> are used within SDL to directly generate the height_field.
As memory footprint is still a concern, I recommend treading carefully on
this issue and taking a different approach:
> There are some minor changes needed to hfield.h and hfield.cpp to avoid
> casting and storing heights/normals as integers.
>
> Then Parser::Make_Pattern_Image (in parstxtr.cpp) should create a
> float-based image to store the height values, rather than an integer-based one.
>
> My very basic tests show that these changes solve the issue that is
> visible, but I have not investigated any side effects on other parts of
> the code.
>
> Is it worth investigating such a change further? The accuracy of
> height_fields will be improved, but at the cost of increased memory usage.
The proposed change is simple and straightforward, and should indeed
solve the observed issue without side effects. However, for the sake of
memory footprint, I propose a different approach:
At present, height fields essentially copy the underlying image data into
a local data container within the height field data structure, changing
only the binary representation of the values from whatever the source
provides into 16-bit values.
This (a) often unnecessarily wastes memory, (b) limits the precision of
the data, and (c) may even add a bit of (minor) processing overhead when
the original image data comes in floating-point format (or is converted
to floating-point format by the image file loader), as ultimately the data
must be converted back to floating point anyway.
This could be avoided by refraining from copying the data into a local
container altogether, and instead simply referencing the original image
data container; as image data containers make heavy use of polymorphism
(for instance there are int8-, int16- and float32-based image
containers), this allows for a highly flexible trade-off between
precision and memory efficiency.
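
A very rough sketch of what I have in mind, with made-up class and method
names rather than the actual POV-Ray interfaces:

  // Sketch only: the height field references the (polymorphic) image
  // container instead of owning a 16-bit copy of the data.
  #include <memory>

  // Abstract greyscale image; concrete subclasses may store int8, int16 or
  // float32 data, so precision vs. memory is decided by the container.
  struct GreyImage
  {
      virtual ~GreyImage() {}
      virtual float GetGreyValue(int x, int y) const = 0;  // normalized 0..1
      virtual int Width() const = 0;
      virtual int Height() const = 0;
  };

  // The height field keeps a shared reference to whatever container the
  // parser produced, rather than converting the data to 16-bit values.
  class HeightField
  {
  public:
      explicit HeightField(std::shared_ptr<const GreyImage> img) : image(img) {}

      float HeightAt(int x, int z) const { return image->GetGreyValue(x, z); }

  private:
      std::shared_ptr<const GreyImage> image;
  };
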
In cases where parse-time preprocessing is desired (maybe for added
smoothing or what-have-you), this could still be provided for by
creating a secondary greyscale image data container (matching the
precision of the original container), which the height field would then
reference instead of the original image data container.
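
Just to sketch that too (made-up types, and a trivial smoothing pass as the
stand-in for whatever preprocessing one might want):

  // Sketch only: optional parse-time preprocessing creates a secondary
  // greyscale buffer (same precision as the source) which the height
  // field then references instead of the original image.
  #include <vector>
  #include <cstddef>

  // Stand-in for a float32-based greyscale container; an int16-based one
  // would work the same way, just with a different element type.
  struct GreyBuffer
  {
      int width, height;
      std::vector<float> data;  // row-major, normalized 0..1

      float at(int x, int y) const { return data[std::size_t(y) * width + x]; }
  };

  // Example preprocessing pass: a simple 3x3 box smoothing (borders skipped
  // for brevity). The result is what the height field would reference.
  GreyBuffer Smooth(const GreyBuffer& src)
  {
      GreyBuffer dst = src;
      for (int y = 1; y < src.height - 1; ++y)
          for (int x = 1; x < src.width - 1; ++x)
          {
              float sum = 0.0f;
              for (int dy = -1; dy <= 1; ++dy)
                  for (int dx = -1; dx <= 1; ++dx)
                      sum += src.at(x + dx, y + dy);
              dst.data[std::size_t(y) * src.width + x] = sum / 9.0f;
          }
      return dst;
  }
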
It could make sense to provide the function image syntax with another
parameter to choose between int16- and float32-based data container format.
That said, normals could then be precomputed from the data in the image
container (rather than the local 16-bit copy of that data); 3x16-bit
storage for the normals should still be enough though; I see no need for
a change there.
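
Something along these lines, again with made-up names, and ignoring border
handling and the height field's own scaling:

  // Sketch only: precompute a normal from neighbouring height samples of
  // the referenced image container and pack it into 3 x 16 bits.
  #include <cstdint>
  #include <cmath>

  struct PackedNormal { int16_t x, y, z; };

  // heightAt(x,z) stands in for a lookup in the referenced image container.
  PackedNormal ComputeNormal(float (*heightAt)(int, int), int x, int z)
  {
      // Central differences of the height data give the surface gradient.
      float dx = heightAt(x + 1, z) - heightAt(x - 1, z);
      float dz = heightAt(x, z + 1) - heightAt(x, z - 1);

      // Un-normalized normal of the surface y = h(x,z) on a unit grid.
      float nx = -dx, ny = 2.0f, nz = -dz;
      float len = std::sqrt(nx * nx + ny * ny + nz * nz);

      // Quantize each unit-normal component (range -1..1) to 16 bits.
      PackedNormal n;
      n.x = static_cast<int16_t>(std::lround(nx / len * 32767.0f));
      n.y = static_cast<int16_t>(std::lround(ny / len * 32767.0f));
      n.z = static_cast<int16_t>(std::lround(nz / len * 32767.0f));
      return n;
  }
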
> At present, height fields essentially copy the underlying image data into
> a local data container within the height field data structure, changing
> only the binary representation of the values from whatever the source
> provides into 16-bit values.
OK, I wasn't sure if the parser freed the image data after the height field
object was created, but your method would be much better overall for all
types of height_field.
> That said, normals could then be precomputed from the data in the image
> container (rather than the local 16-bit copy of that data); 3x16-bit
> storage for the normals should still be enough though; I see no need for
> a change there.
Are you sure that 3x16-bit will be enough under all conditions to give a
perfect result? The worst case I am thinking of is some convex curved surface
that has been scaled a lot in one axis (e.g. <1,100,1>) and then has a mirror
finish. Any tiny error in the surface normals will be easily visible.
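
To put a rough number on it, here is a quick standalone test of the effect
I am worried about (toy code, not POV-Ray code; it assumes normals are
transformed by the inverse transpose of the scaling):

  // Toy estimate: quantize a near-vertical unit normal to 3 x 16 bits,
  // then apply the <1,100,1> scaling (normals pick up the inverse
  // transpose, i.e. diag(1, 1/100, 1)) and compare directions.
  #include <cstdio>
  #include <cstdint>
  #include <cmath>

  static void normalize(double v[3])
  {
      double len = std::sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
      for (int i = 0; i < 3; ++i) v[i] /= len;
  }

  int main()
  {
      // Nearly flat terrain: the normal points almost straight up.
      double n[3] = { 0.001, 1.0, 0.001 };
      normalize(n);

      // Quantize each component to a signed 16-bit value and back.
      double q[3];
      for (int i = 0; i < 3; ++i)
          q[i] = static_cast<int16_t>(std::lround(n[i] * 32767.0)) / 32767.0;

      // Scale the object by <1,100,1>: normals get diag(1, 1/100, 1).
      n[1] /= 100.0;  q[1] /= 100.0;
      normalize(n);
      normalize(q);

      // Angle between the exact and the quantized-then-scaled normal.
      double dot = n[0]*q[0] + n[1]*q[1] + n[2]*q[2];
      double angle = std::acos(std::fmin(1.0, dot));
      std::printf("angular error after scaling: %.4f degrees\n",
                  angle * 180.0 / 3.14159265358979323846);
      return 0;
  }

For this input the error works out to a few hundredths of a degree.
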
scott wrote:
> Are you sure that 3x16-bit will be enough under all conditions to give a
> perfect result? The worst case I am thinking of is some convex curved
> surface that has been scaled a lot in one axis (e.g. <1,100,1>) and then
> has a mirror finish. Any tiny error in the surface normals will be
> easily visible.
I'd consider that a pathological case anyway. If you do it to compensate
for bad scaling of the original data then you're doing something wrong
somewhere else. Other than that, such scaling will also stretch the
triangles pathologically, so I'd not expect any high-precision results
from it regardless of normals precision.