POV-Ray : Newsgroups : povray.newusers : Should Parsing really take that long? How to speed it up? (Message 12 to 21 of 21)
From: Stephen
Subject: Re: Should Parsing really take that long? How to speed it up?
Date: 4 Dec 2015 16:04:39
Message: <5661ffe7$1@news.povray.org>
On 12/4/2015 8:46 PM, ILM wrote:
>>> Warning_Level=n  Allows you to turn off classes of warnings.
>>> +WLn    Same as Warning_Level=n
>
> I really do not know why, but +WL0 does still show the parsing error.
>
>
>>> Regarding the file
>>> 1) It is a nearly 10 MB .pov file.
>>
>> That's not a trivial scene, but I've seen .pov files four times that
>> size parse in less than a minute. My current guess would be that your
>> .pov file loads an excessive number of PNG files -- or, possibly more to
>> the point, that it loads a reasonable number of PNG files an excessive
>> number of times.
>
> I went a step further and looked into the file.
>
> #declare cube=pigment { image_map { png "povincludes/cube.png"} }
> object { M_0
>    matrix <
>    0.7071472,  -5.718298E-05,  -0.7070668,
>    4.944397E-05,  1,  -3.142396E-05,
>    0.7070668,  -1.273882E-05,  0.7071472,
>    94.1652,  8.476583,  -71.2338>
>
> this Object repeats 30.000 times (!) with different numbers.
>
> So back to the point that I want to speed things up.
>
Are you a developer or a user of the logistics program?
The POV code needs to be changed so that it defines a macro with 
different variables.
I am not a programmer, but I have been using programs that export POV 
code for about 15 years. This is a common problem with machine-written code. :-(

> Question 1: How should the file (scene?) look like when it contains a object
> that is repeated 30.000 times at different locations to minimize parsing time.
>

It might use a macro or a loop. I am no coder, so others might be of 
more help here.
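
For illustration, a minimal sketch of the loop idea (the array layout, the 
identifier names and the second dummy placement are my own assumptions, not 
taken from the exported file; whether this would actually parse faster than 
plain object blocks is another question):

// Sketch: the exporter would fill one array with all the transforms
// and a single loop would place the copies of M_0.
#declare Placements = array[2] {
  transform { matrix < 0.7071472,    -5.718298E-05, -0.7070668,
                       4.944397E-05,  1,            -3.142396E-05,
                       0.7070668,    -1.273882E-05,  0.7071472,
                       94.1652,       8.476583,     -71.2338 > },
  transform { translate <10, 0, 0> }  // dummy second placement
}

#for (Idx, 0, dimension_size(Placements, 1) - 1)
  object { M_0 transform { Placements[Idx] } }
#end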


> Question 2: Rendering directly scales with my cpu cores. Parsing unfortunately
> does not. If there is is no chance to change the file format and no chance to
> switch to ssd. Is there anything that might help to improve the parsing
> performance? (I tried running multiple instances of pov ray. That worked pretty
> well!)
>
>

There is functionality in WinPov to use a file queue; if your 
software exports each frame as a .pov file, you could use that.

See the Help, section 1.5.7.6: File Queue dialog.


But then, as you are using fewer cores for each render, it is a trade-off 
between parsing and rendering.


-- 

Regards
     Stephen



From: clipka
Subject: Re: Should Parsing really take that long? How to speed it up?
Date: 4 Dec 2015 17:25:32
Message: <566212dc$1@news.povray.org>
Am 04.12.2015 um 21:46 schrieb ILM:

> I went a step further and looked into the file.
> 
> #declare cube=pigment { image_map { png "povincludes/cube.png"} }
> object { M_0
>   matrix <
>   0.7071472,  -5.718298E-05,  -0.7070668,
>   4.944397E-05,  1,  -3.142396E-05,
>   0.7070668,  -1.273882E-05,  0.7071472,
>   94.1652,  8.476583,  -71.2338>
> 
> this Object repeats 30.000 times (!) with different numbers.

Does the "#declare cube=pigment ..." statement repeat as well?
That would be bad, and explain a lot.


> Question 1: How should the file (scene?) look like when it contains a object
> that is repeated 30.000 times at different locations to minimize parsing time.

As for the object itself, that looks about as good as it gets: The
object description actually references a "template" presumably defined
earlier (named M_0 in this case), with the matrix<...> statement
specifying the location and orientation of the respective copy by means
of a coordinate transformation matrix.
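
For reference, POV-Ray reads the twelve values as a 3x3 rotation/scale block
followed by a translation row. A minimal illustration (only the translation
row below is taken from the quoted object; the identity 3x3 block and the
identifiers T1/T2 are just for demonstration):

// With an identity 3x3 block, this matrix statement...
#declare T1 = transform { matrix < 1, 0, 0,
                                   0, 1, 0,
                                   0, 0, 1,
                                   94.1652, 8.476583, -71.2338 > }
// ...is equivalent to a plain translation:
#declare T2 = transform { translate <94.1652, 8.476583, -71.2338> }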

The "#declare cube=pigment ..." statement should be present only once
(for each distinct filename, that is); if that is indeed the case, then
there's not much more to optimize there either. If the statement is
repeated all over the place, then it's an indication that the author
of the generating software /tried/ to optimize the generated .pov file
but made some stupid blunder along the way. (As a matter of fact, the
very purpose of the "#declare" statement is to define templates for
various things to be referenced later, in this case some texture
property. You'll also find some "#declare M_0=..." somewhere, hopefully
in just one single place.)



From: William F Pokorny
Subject: Re: Should Parsing really take that long? How to speed it up?
Date: 4 Dec 2015 21:41:56
Message: <56624ef4$1@news.povray.org>
On 12/04/2015 03:46 PM, ILM wrote:
>>> Warning_Level=n  Allows you to turn off classes of warnings.
>>> +WLn    Same as Warning_Level=n
>
> I really do not know why, but +WL0 does still show the parsing error.
>
>

I'm not a developer, but it looks from the source code like this 
particular message calls the parser's PossibleError() 
function instead of the Warning() function, and so the message is not 
controllable via the warning-level mechanism.

Likely this is why Christoph said in an earlier post you'd have to 
modify the input pov file to get rid of the message.

The parser seems to be set up with three message functions: Warning(), 
PossibleError() and Error().

Bill P.



From: ILM
Subject: Re: Should Parsing really take that long? How to speed it up?
Date: 5 Dec 2015 02:25:01
Message: <web.566290d4fb84f5bafcb4e70@news.povray.org>
>Are you a developer or a user of the logistics program?
>The Pov code need to be changed so that it defines a macro with
>different variables.
>I am not a programmer but I have been using programs that export Pov
>code, for about 15 years. It is a problem with machine written code. :-(

I am a user. Do you have any source on how to use/write macros in POV-Ray?

> > Question 1: How should the file (scene?) look like when it contains a object
> > that is repeated 30.000 times at different locations to minimize parsing time.
>
> As for the object itself, that looks about as good as it gets: The
> object description actually references a "template" presumably defined
> earlier (named M_0 in this case), with the matrix<...> statement
> specifying the location and orientation of the respective copy by means
> of a coordinate transformation matrix.
>
> The "#declare cube=pigment ..." statement should be present only once
> (for each distinct filename, that is); if that is indeed the case, then
> there's not much more to optimize there either. If the statement is
> repeated all around the place, then it's an indication that the author
> of the generating software /tried/ to optimize the generated .pov file
> but made some stupid blunder along the way. (As a matter of fact, the
> very purpose of the "#declare" statement is to define templates for
> various things to be referenced later, in this case some texture
> property. You'll also find some "#declare M_0=..." somewhere, hopefully
> in just one single place.)


The complete part that I posted repeats 30,000 times, including the #declare...
line.

In addition to this, at the beginning of the document I can find the more general
definition you mentioned.

#declare cube=pigment { image_map { png "povincludes/cube.png"} }
#declare M_0=
  mesh2 {
    #include "povincludes/M0c27c969_b190_4ad2_9fb3_0f1f068dc56b_VV.inc"
    #include "povincludes/M0c27c969_b190_4ad2_9fb3_0f1f068dc56b_NV.inc"
    #include "povincludes/M0c27c969_b190_4ad2_9fb3_0f1f068dc56b_UV.inc"
    texture_list { 1,
      texture { cube finish { ambient 1 diffuse 1 reflection 0 } }
    }
    #include "povincludes/M0c27c969_b190_4ad2_9fb3_0f1f068dc56b_FI.inc"
    uv_mapping
  }
object { M_0
  matrix <
  -9.091367E-06,  -3.256779E-05,  0.9999999,
  -3.532349E-06,  1,  3.256775E-05,
  -0.9999999,  -3.532056E-06,  -9.091482E-06,
  -12.64854,  10.48967,  -33.32481>
}

I edited the .pov file in a text editor and removed the extra 30,000 declare
lines. Unfortunately that did not reduce the parsing time.

Furthermore I tried the following:
- Changed the PNG file to a 1x1 px file. Parsing now only takes 40 seconds.
- Changed the PNG file to a 1x1 px file AND removed the 30,000 declare lines.
Parsing now takes about 15 seconds.

So we found the spot where we have to apply the tweaking.

Something I can do by myself: use smaller PNG files (even though they are not
really big, actually).

The big question is: how can we (or rather the developer) optimize the
loading of PNG files? Removing the declare lines alone has not changed much.



From: William F Pokorny
Subject: Re: Should Parsing really take that long? How to speed it up?
Date: 5 Dec 2015 09:21:22
Message: <5662f2e2$1@news.povray.org>
On 12/05/2015 02:23 AM, ILM wrote:
>...
>
> I edited the pov file in a text editor and removed the extra 30.000 declare
> lines. Unfortunately that did not reduce the parsing time.
>
> Furthermore I tried the following:
> - Changed the png file to a 1x1 px file. Parsing now only takes 40 seconds.
> - Changed the png file to a 1x1 px file AND removed the 30.000 declare lines.
> parsing now takes about 15 seconds.
>
> So we found the spot where we have to apply the tweaking.
>
> Something I can do by myself: Use smaller PNG file (even they are not really big
> actually).
>
> The big question is: How can we (or better said the developer) optimize the
> loading of png files. Only removing the declare line has not changed a lot.
>
>
I've not spent much time in the internals of the mesh2 object as these 
are usually program generated and I just use them, but one thing which 
seems odd is the finish statement being inside the texture_list -> 
texture block. I'd expect code more like:

#declare Tcube=texture {
  pigment { image_map { png "povincludes/cube.png" } }
  finish { ambient 1 diffuse 1 reflection 0 }
}

#declare M_0=
   mesh2 {
     #include "povincludes/M0c27c969_b190_4ad2_9fb3_0f1f068dc56b_VV.inc"
     #include "povincludes/M0c27c969_b190_4ad2_9fb3_0f1f068dc56b_NV.inc"
     #include "povincludes/M0c27c969_b190_4ad2_9fb3_0f1f068dc56b_UV.inc"
     texture_list { 1,
       texture { Tcube }
     }
     #include "povincludes/M0c27c969_b190_4ad2_9fb3_0f1f068dc56b_FI.inc"
     uv_mapping
   }

// Then 30 thousand positioned copies of the object
object { M_0 matrix <...> }
object { M_0 matrix <...> }
object { M_0 matrix <...> }
object { M_0 matrix <...> }
....

A guess, but I am wondering whether, because the finish block is inside the 
texture block of the texture_list, POV-Ray isn't creating many internal 
textures for each face/vertex of the mesh instead of referencing just one.

As for macros, there is documentation at: 
http://www.povray.org/documentation/3.7.0/r3_3.html#r3_3_2_8 for a start.
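
For a flavour of the syntax, a minimal sketch (the macro name and the vector
parameters are hypothetical, not from ILM's file; the numbers are the ones
from the object block posted earlier):

// One short call per placement instead of a full object block;
// M_0 is the mesh declared earlier in the scene.
#macro PlaceM0(RowX, RowY, RowZ, Offset)
  object { M_0
    matrix < RowX.x,   RowX.y,   RowX.z,
             RowY.x,   RowY.y,   RowY.z,
             RowZ.x,   RowZ.y,   RowZ.z,
             Offset.x, Offset.y, Offset.z >
  }
#end

PlaceM0(<0.7071472, -5.718298E-05, -0.7070668>,
        <4.944397E-05, 1, -3.142396E-05>,
        <0.7070668, -1.273882E-05, 0.7071472>,
        <94.1652, 8.476583, -71.2338>)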

Bill P.



From: Alain
Subject: Re: Should Parsing really take that long? How to speed it up?
Date: 5 Dec 2015 11:12:38
Message: <56630cf6$1@news.povray.org>

> On 12/05/2015 02:23 AM, ILM wrote:
>> ...
>>
>> I edited the pov file in a text editor and removed the extra 30.000
>> declare
>> lines. Unfortunately that did not reduce the parsing time.
>>
>> Furthermore I tried the following:
>> - Changed the png file to a 1x1 px file. Parsing now only takes 40
>> seconds.
>> - Changed the png file to a 1x1 px file AND removed the 30.000 declare
>> lines.
>> parsing now takes about 15 seconds.
>>
>> So we found the spot where we have to apply the tweaking.
>>
>> Something I can do by myself: Use smaller PNG file (even they are not
>> really big
>> actually).
>>
>> The big question is: How can we (or better said the developer)
>> optimize the
>> loading of png files. Only removing the declare line has not changed a
>> lot.
>>
>>

> ....
>
> A guess, but wondering if due the finish block being inside the texture
> block of the texture list, whether povray isn't creating many internal
> textures for each face/vertice of the mesh instead of referencing just one.
>
> As for macros, there is documentation at:
> http://www.povray.org/documentation/3.7.0/r3_3.html#r3_3_2_8 for a start.
>
> Bill P.
>
>

There is absolutely nothing wrong with the finish block as it is. In 
fact, it could have been included in the #declare as follows, for the 
exact same end result:
#declare cube=texture { pigment { image_map { png "povincludes/cube.png" } }
   finish { ambient 1 diffuse 1 reflection 0 } }
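
With a declaration like that, each texture_list entry would then just
reference it, e.g. (a sketch only, matching the structure of the posted mesh2):

texture_list { 1,
  texture { cube }
}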

The real problem is that #declare cube=pigment { image_map { png 
"povincludes/cube.png"} } is repeated 30,000 times. It's probably the 
case for all the image_maps that are used.

As for the parsing time not scaling with multiple cores: parsing is 
strictly sequential and only supports a single thread. In 
contrast, the rendering is multi-threaded and can use all the available 
cores, up to 256 if I remember correctly.


Alain



From: clipka
Subject: Re: Should Parsing really take that long? How to speed it up?
Date: 6 Dec 2015 07:11:30
Message: <566425f2$1@news.povray.org>
Am 05.12.2015 um 08:23 schrieb ILM:

> I edited the pov file in a text editor and removed the extra 30.000 declare
> lines. Unfortunately that did not reduce the parsing time.
> 
> Furthermore I tried the following:
> - Changed the png file to a 1x1 px file. Parsing now only takes 40 seconds.
> - Changed the png file to a 1x1 px file AND removed the 30.000 declare lines.
> parsing now takes about 15 seconds.

Are you absolutely, positively sure you removed all the 30,000 "#declare
cube=pigment { ... }" lines (except the first one, of course)?

That would be a very odd finding.


Obviously, what bogs down the parsing of your scene has something to do
with the handling of PNG -- more specifically, the handling of the
image's actual data.

One of the most time-consuming operations in handling image file data
would typically be the process of getting it from disk into memory. This
is something POV-Ray does at the very moment it encounters the
"image_map { png ... }" statement. Having a lot fewer of those
statements /should/ have /some/ effect on parsing time.

If that is not the case /at all/, we'd have to conclude that some other
"expensive" stuff happens with the image data further down the line; the
only thing that would make sense there would be that the 30,000-fold
duplication of the declared object might also duplicate the texture list
each time, which in turn might duplicate the texture, which in turn
might duplicate the pigment, which in turn might duplicate the
image_map, which in turn might duplicate the image.

In fact the duplication of a mesh2 object /does/ duplicate the texture
list, which in turn /does/ duplicate the texture, which in turn /does/
duplicate the pigment, which in turn /does/ duplicate the image_map.

But here's where it stops: When an image_map is duplicated, this does
/not/ copy the image data; it simply copies a pointer to the actual
image data.

Thus, the overhead of all the duplication should be constant with
regards to the image size; whether the image is 1x1 pixels or
10,000x10,000 pixels in size should have no effect whatsoever on the
time it takes to duplicate those 30,000 mesh2 objects.
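
(For anyone who wants to try this at home, a minimal A/B sketch of the two
cases; the file name, the sphere geometry and the count are placeholders,
not taken from ILM's scene:)

// Case A re-executes the image_map statement (and thus re-reads the PNG
// from disk) for every copy; case B declares the pigment once, so later
// copies only duplicate the pigment structure and share the image data.
#declare UseSingleDeclare = yes;   // yes = case B, no = case A

#if (UseSingleDeclare)
  #declare cube = pigment { image_map { png "test.png" } }
#end

#for (I, 1, 30000)
  #if (!UseSingleDeclare)
    #declare cube = pigment { image_map { png "test.png" } }
  #end
  sphere { <I, 0, 0>, 0.5 pigment { cube } }
#end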


I'll do a few tests to see whether I'm overlooking something about the
mechanism behind image_map duplication, but at the moment the hypothesis
that makes most sense to me is that you made some trivial mistake when
testing with the 30,000 "#declare cube=..." lines removed, such as using
an external editor to make the change and forgetting to save before
rendering.



From: William F Pokorny
Subject: Re: Should Parsing really take that long? How to speed it up?
Date: 6 Dec 2015 07:54:44
Message: <56643014$1@news.povray.org>
On 12/05/2015 11:13 AM, Alain wrote:

>> On 12/05/2015 02:23 AM, ILM wrote:
>>> ...
>>>
>>> I edited the pov file in a text editor and removed the extra 30.000
>>> declare
>>> lines. Unfortunately that did not reduce the parsing time.
>>>
>>> Furthermore I tried the following:
>>> - Changed the png file to a 1x1 px file. Parsing now only takes 40
>>> seconds.
>>> - Changed the png file to a 1x1 px file AND removed the 30.000 declare
>>> lines.
>>> parsing now takes about 15 seconds.
>>>
>>> So we found the spot where we have to apply the tweaking.
>>>
>>> Something I can do by myself: Use smaller PNG file (even they are not
>>> really big
>>> actually).
>>>
>>> The big question is: How can we (or better said the developer)
>>> optimize the
>>> loading of png files. Only removing the declare line has not changed a
>>> lot.
>>>
>>>
>
>> ....
>>
>> A guess, but wondering if due the finish block being inside the texture
>> block of the texture list, whether povray isn't creating many internal
>> textures for each face/vertice of the mesh instead of referencing just
>> one.
>>
>> As for macros, there is documentation at:
>> http://www.povray.org/documentation/3.7.0/r3_3.html#r3_3_2_8 for a start.
>>
>> Bill P.
>>
>>
>
> There is absolutely nothing wrong with the finish block as it is. In
> fact, it could have been included in the #declare as follow for the
> exact same end result:
> #declare cube=texture { image_map { png "povincludes/cube.png"} finish {
> ambient 1 diffuse 1 reflection 0 } }
>
> The real problem is that #declare cube=pigment { image_map { png
> "povincludes/cube.png"} } is repeated 30000 times. It's probably the
> case for all image_map that are used.
>
> As for the parsing time not scalling with multiple cores, it's that the
> parsing is strictly linear and only support a single thread. In
> contrast, the rendering is multi threaded and can use all the available
> cores, up to 256 if I remember correctly.
>
>
> Alain
Thanks Alain,

I didn't have a large mesh2 handy with which to test. I again think you 
and Christoph are correct in that:

 > The real problem is that #declare cube=pigment { image_map { png
 > "povincludes/cube.png"} } is repeated 30000 times.

Before posting yesterday I'd created a test scene in my own space with 
30K+ lines of declares, as ILM posted, using a PNG file of about 30K bytes 
that I just grabbed. My parse time increased by about 25 seconds, which put 
it more or less in line with the 35-second decrease ILM reported when 
deleting the duplicate declare lines in his 1x1-pixel test. 
This test led me to believe ILM's result that removing the duplicate 
lines changed little even with his original PNGs.

This morning I looked at the PNG I grabbed for my 30K-declare experiment 
and it turned out to be only 160x120 pixels... I scaled this PNG up to 
1600x1600, re-ran, and the parse time jumped to 14 minutes even using 
a memory disk. So the thousands of duplicate declares absolutely can 
drive up parse times if the PNG files are larger, as you and Christoph 
have suggested.

Let's see... 160x120 is 19,200 pixels, so 30K declares at 25 sec gives 
about 0.0013 sec/pixel. 1600x1600 is 2,560,000 pixels, so 30K declares at 
820 sec gives about 0.0003 sec/pixel. So nothing wrong with the direction 
in which it scales with larger PNGs either.

Bill P.



From: clipka
Subject: Re: Should Parsing really take that long? How to speed it up?
Date: 6 Dec 2015 08:38:12
Message: <56643a44$1@news.povray.org>
Am 06.12.2015 um 13:11 schrieb clipka:
> Am 05.12.2015 um 08:23 schrieb ILM:
> 
>> I edited the pov file in a text editor and removed the extra 30.000 declare
>> lines. Unfortunately that did not reduce the parsing time.
>>
>> Furthermore I tried the following:
>> - Changed the png file to a 1x1 px file. Parsing now only takes 40 seconds.
>> - Changed the png file to a 1x1 px file AND removed the 30.000 declare lines.
>> parsing now takes about 15 seconds.
> 
> Are you absolutely, positively sure you removed all the 30.000 "#declare
> cube=pigment { ... }" lines (except the first one of course)?
> 
> That would be a very odd finding.

BTW, did you still get spammed with the PNG-related warning message in
the test where you only (thought to have) removed the 30,000 declare
lines and left the image size unchanged?

Because that would indeed indicate that the lines had not been removed.
That message absolutely, positively shows up /only/ when an "image_map
{...}" statement is encountered, /not/ when an image_map is duplicated
internally.



From: ILM
Subject: Re: Should Parsing really take that long? How to speed it up?
Date: 6 Dec 2015 08:55:01
Message: <web.56643de1fb84f5bafcb4e70@news.povray.org>
> I'll do a few tests to see whether I'm overlooking something about the
> mechanism behind image_map duplication, but at the moment the hypothesis
> that makes most sense to me is that you made some trivial mistake when
> testing with the 30,000 "#declare cube=..." lines removed, such as using
> an external editor to make the change and forgetting to save before
> rendering.


I checked this again to be sure. Unfortunately, this seems to be true :(. I
removed those lines again, and now parsing took only 15 seconds.


> This morning I looked at the PNG I grabbed for my 30K declare experiment
> and it turned out to be only 160x120 pixels... I scaled this PNG up to
> 1600x1600 and re-ran and the parse time jumped to 14 minutes even using
> a memory disk. So, the thousands of duplicate declares absolutely can
> drive large parse times if the PNG files are larger as you and Christoph
> have suggested.
>
> Let's see... 160x120 & 30K declares at 25sec we get 0.013 sec/pixel.
> 1600x1600 & 30K declares at 820sec we get 0.0003sec/pixel so nothing
> wrong in the direction that is scaling with larger PNGs either.

That's something I realized too. I tried to re-save the PNG files with better
compression. This saved about 30-50% of the file size, but had more or less no
effect on the parsing time.

So the pixel size of the PNGs seems to be a critical factor for parsing time.

But as stated above, it seems like something went wrong when I removed the declare
lines the first time. I am sorry for that.

I will contact the developers of the software tomorrow. Maybe they have a quick
fix for that. Thank you all for your help! I will keep you updated.



