POV-Ray : Newsgroups : povray.binaries.images : POVEarth: Greater Caucasus (n42e044 quadrangle)... or Didi Kavkasioni, as we Kartwelophiles prefer to say ;-) : Re: POVEarth: Greater Caucasus (n42e044 quadrangle)... or Didi Kavkasioni : Server Time: 27 Oct 2020 12:28:30 GMT
  Re: POVEarth: Greater Caucasus (n42e044 quadrangle)... or Didi Kavkasioni
From: Bald Eagle
Date: 15 Aug 2020 13:20:06
Jörg "Yadgar" Bleimann <yaz### [at] gmxde> wrote:

> Contrary to Bald Eagle's recommendations, precalculations and
> concatenations of #write operations did not help much in increasing
> parsing speed of the mesh2 generating script - calculating a mesh2 from
> the n42e044 heightfield still took 2 hours, 32 minutes. I need a faster
> computer! 16-core Ryzen Threadripper fitted with 128 GiB of RAM or
> something like that...

> generating a 3601 by 3601 mesh took some cool 200 days! My version does it in
> two and a half hours...

Trying to help here with some constructive criticism.

Yadgar, I'm sorry, but something is _wrong_.
3601 x 3601 is only 12,967,201 pixels.
In its simplest terms, you're taking those pixels, and instead of x, y, RGB
you're writing some data to disk as x, y, (altitude).
That should take _minutes_.
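
For a rough sense of scale, here is a minimal parse-time sketch of that idea:
sample a heightfield pigment with eval_pigment() and stream one vertex per
pixel to disk.  The file name and the 3601 resolution are my assumptions from
this thread - substitute your own tile:

#include "functions.inc"   // for the eval_pigment() macro

#declare HFPig = pigment {image_map {png "n42e044.png" map_type 0 interpolate 2 once}}
#declare N = 3601;         // assumed tile resolution

#fopen Out "n42e044_vertices.inc" write
#for (Y, 0, N-1)
  #for (X, 0, N-1)
    // image_map covers the unit square, so normalize the pixel coordinates
    #local Col = eval_pigment (HFPig, <X/(N-1), Y/(N-1), 0>);
    #local Alt = Col.gray;
    #write (Out, "<", X, ", ", Alt, ", ", Y, ">, ")
  #end
#end
#fclose Out

Two nested loops, one sample and one #write per pixel, nothing else - that's
the whole job, and it's the baseline any fancier script should be compared
against.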

I'm currently using a 21600 x 10800 image as a heightfield to sample my altitude
values, and whether I'm rendering that as an isosurface - which HAS to sample
every single pixel, do all the isosurface geometry, and then render the WHOLE
PLANET,

http://news.povray.org/povray.binaries.images/thread/%3Cweb.5f0a6d9a6f747b6fb0b41570%40news.povray.org%3E/

or if I'm sampling the data and then writing 16 control points per bezier patch,
plus the rest of the data for [currently a mere] 1500 separate include files,

http://news.povray.org/povray.binaries.images/thread/%3Cweb.5f3095315d69181b1f9dae300%40news.povray.org%3E/

it doesn't take anywhere near that long.


In my last analysis, I think I counted 7 separate loops, at least one of which
was nested.  There is a forest of calculations that you do, which, although it
may not be the biggest processing bottleneck, is sure to be a debugging
nightmare later on down the road.

I would suggest that you get a coffee, take a deep breath, enter a state of
deep relaxation, and promise yourself that you refuse to accept ANY
assumptions, no matter how small.
Then write out a small, simple, GOAL-oriented description of what the scene
should do.   Not HOW it should be done, but WHAT needs to be accomplished.

Then take a tiny 640x480 noise image - you can make it in 5 seconds with the
bozo pattern - and, using nothing but spheres, map out the vertices that you
need in a small test scene.  When you have that working, try it with your tile
and see how long it takes.

Moving back to the 640x480 image, you then add some connecting cylinders to
make a wireframe.  Then run your tile again.
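
As a sketch of that kind of test scene - the grid size and the stand-in
altitude function here are made up, since the point is the structure, not the
numbers:

#declare N = 5;   // tiny grid, just for the test scene
#declare Alt = function (gx, gz) {0.2*sin(gx)*cos(gz)}   // stand-in for sampled image data

union {
  #for (Z, 0, N-1)
    #for (X, 0, N-1)
      #local P = <X, Alt (X, Z), Z>;
      sphere {P, 0.05}                                             // one sphere per vertex
      #if (X < N-1) cylinder {P, <X+1, Alt (X+1, Z), Z>, 0.02} #end  // connect to x-neighbour
      #if (Z < N-1) cylinder {P, <X, Alt (X, Z+1), Z+1>, 0.02} #end  // connect to z-neighbour
    #end
  #end
  pigment {rgb <1, 0.7, 0>}
}

Swap the stand-in function for your image sampler once the wireframe looks
right, and you have a visual check on the vertex grid before any mesh2 is ever
written.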

Do this all from scratch.
You shouldn't need a lot of loops or complex hard-to-follow expressions.
If you have very similar lines of code, get all of the operators and
subexpressions to line up so that you can tell at a glance if something is
off.

#local thisVector = <x * 123.0/ 7 + oneValue, y *  11.2/25, z * 0.04/7>;
#local thatVector = <x *-286.5/24 + twoValue, y * 364.0/ 3, z * 1.00/7>;

If you have a complex expression that gets evaluated over and over again, then
declare that as a variable or write a macro or use a function.   It will make
the above expression shorter, easier to read, easier to debug, easier to spot
errors, and you won't have 3 lines or 5 lines, or 40 lines that you need to edit
if there's a change in a fundamental calculation.
Think about it like a spreadsheet.  ONE change should be able to affect 4000
cells.  You shouldn't have to edit the equations in 4000 cells because one value
changed.  And simplifying one complex expression that gets used 4000 times - or
_12,967,201 times_ HAS to help.  You know that this HAS to be true.
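
For instance - illustrative numbers only, not taken from your script - factor
the repeated expression into one function, and a change to the fundamental
calculation then happens in exactly one place:

#declare ScaleX = function (v) {v * 123.0/7}   // the ONE place this formula lives

#local thisVector = <ScaleX (x) + oneValue, y * 11.2/25, z * 0.04/7>;
#local someOther  = <ScaleX (x) - oneValue, y * 11.2/25, z * 0.04/7>;

If 123.0/7 ever changes, that's a single edit, no matter how many thousand
lines use ScaleX.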

It doesn't make any difference HOW many resources you have if you're not using
them efficiently.  Instead of _wasting_ all of your time trying to debug code
that doesn't work (this early on in the project) (and will be a recurring
theme), _spend_ your time writing code that doesn't NEED to be debugged.
And if you do the same thing with the logic and flow of the code, it will surely
be faster and nimbler.

I'm simply not accepting that taking ONE single image and converting it into a
mesh2 object takes THAT long.   Surely there are software packages that could do
that with an entire directory of such images in a FAR shorter amount of time.
And POV-Ray SDL and your computer cannot possibly be THAT slow.

I was trying to stitch together bezier patches, and overall that took me YEARS
to learn how to do properly.   De Casteljau was a _terrible_ way to go about
that, and it was only after TOK suggested that I use Bernstein polynomials that
things got set right.   It still took me MONTHS, with TOK's patient guidance and
clipka's merciless percussive disciplining and divine intervention for me to
work everything out and get the math and the code straight.

And that was just for a teensy tiny little torus.   It was painful.   It was a
daily struggle.   But I rewrote a LOT of "working" code and I _learned_ a lot.

So maybe just try to take a step back, experiment with different methods,
simpler model scenes, and alternative ways to assemble the data, and you may
have an epiphany or two about what you actually need to do and what is going on
in your code.   And you will be MUCH happier in the long run.



Copyright 2003-2008 Persistence of Vision Raytracer Pty. Ltd.