  Re: A povr fork test image for new dens_object pattern.  
From: William F Pokorny
Date: 7 Sep 2023 06:15:43
Message: <64f9a2cf$1@news.povray.org>
On 9/6/23 19:30, Samuel B. wrote:
>> Of course quite a bit longer with dens_object. No getting around there
>> being a lot more work getting done. Let's see. A render of just the R -
>> no normal{} at all on my old 2 core i3:
>>     1.50user 0.03system 0:01.26elapsed
>> and with a dens_object normal 'similar' to that posted prior.
>>     69.96user 0.11system 0:18.82elapsed
> I don't know what any of that means 🙁 Are you using Linux or something? I'm
> guessing those values are in seconds? And that the object with a normal took a
> bit over 1 minute? (POV-Ray has always showed me things in terms of seconds,
> minutes, hours, PPS etc.)

Yes, Ubuntu. I trust the unix/linux timers over POV-Ray's internal 
measures because they wrap POV-Ray's processes and sub-processes.

The elapsed time is likely the practical measure for users: it's how 
long someone waits for the render to finish. So, 1.5 seconds became 19 
seconds on my i3.
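As a rough check on how well a render parallelized, the ratio of user time (total CPU seconds across all threads) to elapsed time (wall clock) approximates the average number of busy hardware threads. A minimal sketch using the times quoted above (the helper name is mine, not anything in POV-Ray):

```python
# Rough parallel-utilization estimate from `time` output.
# user time = CPU seconds summed over all threads; elapsed = wall clock.
def avg_busy_threads(user_s, elapsed_s):
    return user_s / elapsed_s

# dens_object render: 69.96s user, 18.82s elapsed on a 2-core / 4-thread i3
ratio = avg_busy_threads(69.96, 18.82)
print(f"~{ratio:.1f} hardware threads busy on average")  # ~3.7
```

So the dens_object render kept roughly 3.7 of the i3's 4 hardware threads busy, which is why user time is so much larger than elapsed time.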

Multiple cores, multiple threads, variable clock rates - and now 
high-power and low-power cores. There are also too many configure and 
compile options, differing compilers and so on, plus other load on the 
machine...

What the reported times mean these days is a good deal harder to 
understand than it used to be.


... and facets ... I've never spent time with that normal{} block 
modifier or its code. From a quick look at the code, it does appear 
there is a mode for complete replacement in addition to one which 
perturbs. Thanks.


Aside: The povr fork has a normal{micro} perturbation method which 
really only works as a final result with AA. This sounds a little like 
the clipka code experiment you'd like to do.


Shading... This is something about which I don't know very much. I know 
there are shaders available in GPU hardware which can fake effects and 
do certain limited programming-like things, but that's about it.

As for mapping to arbitrary shapes - a tough problem. It is probably 
easier to create a mesh and work with that. Aside: I've played a little 
with df3 environments into which arbitrary shapes can be placed for 
pigment / normal mapping of a sort. However, that approach has the 
non-trivial, practical issue of very large df3 sizes accompanying those 
now mapped / painted shapes.
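To put "very large df3 sizes" in rough numbers (my arithmetic, not figures from the post): a df3 file stores one scalar per voxel - 1, 2, or 4 bytes each - after a 6-byte header of three 16-bit dimensions, so size grows with the cube of the resolution. A sketch:

```python
# Hypothetical df3 size estimate: 6-byte header (three 16-bit big-endian
# dimensions) plus width * height * depth voxels at 1, 2, or 4 bytes each.
def df3_bytes(w, h, d, bytes_per_voxel=1):
    return 6 + w * h * d * bytes_per_voxel

for n in (256, 512, 1024):
    mib = df3_bytes(n, n, n) / 2**20
    print(f"{n}^3 at 1 byte/voxel ≈ {mib:.0f} MiB")
```

Even at one byte per voxel, 512^3 is already 128 MiB and 1024^3 a full GiB, which is why painting shapes into a df3 environment gets expensive quickly.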

Bill P.

