POV-Ray : Newsgroups : povray.newusers : Emitting media
  Emitting media (Message 17 to 26 of 26)
From: Alain
Subject: Re: Emitting media
Date: 2 Sep 2017 09:13:32
Message: <59aaae7c@news.povray.org>
Le 17-09-02 à 03:03, Thomas de Groot a écrit :
> On 2-9-2017 6:31, omniverse wrote:
>> Alain <kua### [at] videotronca> wrote:
>>> Le 17-08-31 à 18:58, Loren a écrit :
>>>>          media{ emission Red intervals 30 samples 100,100 }
>>> Yuck! That WAS ok in version 3.5 and older, which used sampling method 2
>>> as the default. As of version 3.6, the sampling method is method 3 and
>>> it must use intervals 1 (the default value).
>>> With method 3, using more intervals only slows you down dramatically. It
>>> can also cause some artefacts.
>>> Also :
>>> 1) It only uses a single value for samples. If a second value is given,
>>> it's silently ignored.
>>> 2) confidence and variance are also silently ignored, as they are
>>> meaningless when you have only a single interval.
>>>
>>> Defaults for media in version 3.6+
>>> method 3
>>> samples 10
>>> confidence N/A
>>> variance N/A
>>> intervals 1
>>> jitter 1
>>>
>>> Alain
>>
>> Well I'm learning something, again, because I had thought
>>
>>   samples LesserInteger, GreaterInteger
>>
>> was valid for method 3. And I also thought samples 1,1 was the default.
>> Reading the 3.7 doc I don't find it saying the above you tell of, not 
>> that I
>> don't believe you Alain, but I refer to the docs a lot and try to 
>> believe what I
>> read there. :)
>>
> 
> Alain is absolutely right. He is the one person warning us, again and 
> again, for the method/intervals/samples misconception cropping up 
> regularly in these ng's. Hail to the chief! ;-)
> 
> As for the docs, samples LesserInteger, GreaterInteger, is only valid 
> for method 1 and 2. I agree that there is an ambiguity where method 3 is 
> concerned: paragraph 2.7.2.3 Sampling Parameters & Methods in the wiki ( 
> http://wiki.povray.org/content/Reference:Sampling_Parameters_%26_Methods 
> ) does not state clearly that the second term is ignored when using 
> method 3. This should be changed.
> 
> 

I found out about the massive slowdown when using intervals > 1 through my 
own testing.
The possibility of artefacts is written in the docs.
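Putting Alain's points together, the offending statement rewritten to follow the v3.6+ defaults might look like this (a sketch only; the container object and values are invented for illustration):

```
// Assumes POV-Ray 3.6+ defaults: method 3, intervals 1, jitter 1.
// samples takes a single value here; a second one would be silently ignored.
box {
  -1, 1
  pigment { color rgbt 1 }    // fully transparent container surface
  hollow
  interior {
    media {
      emission rgb <1, 0, 0>  // Red
      samples 100             // one value only; the default is 10
      // intervals 1          // the default -- raising it only slows method 3
    }
  }
}
```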



From: Stephen
Subject: Re: Emitting media
Date: 2 Sep 2017 13:53:42
Message: <59aaf026@news.povray.org>
On 02/09/2017 13:14, omniverse wrote:
> Note to everyone: Clicking a link like that one, the long jump down the page
> might not make it all the way to the target section. If that happens you should
> be able to locate the section by name via the left side list, or maybe a
> refresh.

Yes, my browser hiccupped loading it.

-- 

Regards
     Stephen



From: Kenneth
Subject: Re: Emitting media
Date: 2 Sep 2017 17:50:00
Message: <web.59ab250fa4b127e9883fb31c0@news.povray.org>
"Kenneth" <kdw### [at] gmailcom> wrote:
> "omniverse" <omn### [at] charternet> wrote:
> >
> > Where I get most confused is that factoring in of background colors, which I
> > think always remain additive (emitting) or subtractive (absorbing)
> > regardless of the media itself.
>
> I think that's true, when using a SINGLE media. But when two types of media are
> used (like emission + absorption), it gets a bit trickier-- and seems to depend
> on their own respective colors.

Actually, I'm starting to come around to your idea ;-)-- that at least emission
and absorption media effects depend (solely??) on what the background colors
are, for their final 'filtered' media-color. The use of a pure-color
media-- with one or more zeros in the color vector-- seems to confirm this. And,
that using multiple medias (well, emission + absorption) of COMPLEMENTARY pure
colors serves to filter *all* of the background color to some extent-- because
there are no longer any zeros in the 'combined' color-filtering vector-- with
the result *looking like* actual opacity.

This is a paradigm shift for me: I used to think that volumetric media was a
'thing unto itself', more or less, with its effects only modifying the colors
of objects WITHIN the media object-- and having nothing at all to do with
filtering the background and *its* colors. I guess I never really noticed the
background-color effects, because I've only lately tried creating a PURE-color
media (where there's a zero in one or more of the components, showing the
obvious filtering that's going on-- and showing NO so-called 'opacity' for those
colors.)  My previous uses of a single media never had a zero in the color
vector-- so I took the resulting 'all-color filtering' to mean 'opacity.'

This is my latest theory, anyway ;-)

HOWEVER... Scattering media might be a different animal (or not?) The current
way that I think about scattering (and its 'extinction' value) is that it's
basically emission and absorption media combined (while also showing effects
from lighting, of course.) That's probably a too-simplistic description, but it
will do for now.

But PURE-color scattering used as a SINGLE media also shows
the background filtering (no 'opacity' or filtering for certain colors), even
with a very high extinction value. For example,
        scattering{1, <1,0,0> extinction 300}
Using this in your laser code, it completely extinguishes the red background
hexagons (i.e., makes them black)-- but leaves the green and blue hexagons
unaltered.  So extinction 300 is actually   300*<1,0,0>   in this case (or can
be thought of that way) -- the SAME color as the media color itself-- but
'extinguishes' that red color. So scattering media-- when used with extinction--
is a 'complementary-color' filter for the background, and for the impinging
light source. (Interestingly, scattering with extinction 0-- and no light
source--shows NO media effect at all, as if the media wasn't there.)

Currently, scattering's extinction allows just a single float value. I have a
dim and fuzzy memory, from v3.6xx days, that extinction could actually take a
color vector (but I might be confusing that with an added absorption media.) It
would be a nice feature addition to allow a color there-- so that a
'complementary' color could be used for the color extinction. For example,
        scattering{1, <.2,1,.2>, extinction 1}
produces a green-ish cloud-- but the 'complementary' color is filtered out of
the incoming light, resulting in purple self-shadowing. With extinction
<.8,0,.8>, the self-shadowing color would match the main green media, and the
cloud would look nice and green throughout. That's not physically accurate, of
course, but it would be more visually appealing ;-)
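A minimal scene for testing the complementary-colour idea above might look like this sketch (the geometry, camera, and all values are my own illustration, not from Kenneth's actual test scenes):

```
// Red emission plus cyan absorption: the 'combined' filter vector has
// no zero components, so some of the background is filtered everywhere,
// which can *look like* actual opacity (per the theory above).
background { color rgb <0.2, 0.6, 1.0> }
camera { location <0, 0, -4> look_at 0 }

sphere {
  0, 1
  pigment { color rgbt 1 }     // transparent container
  hollow
  interior {
    media { emission   rgb <1, 0, 0> }   // pure red: zeros in green/blue
    media { absorption rgb <0, 1, 1> }   // complementary cyan
  }
}
```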



From: Kenneth
Subject: Re: Emitting media
Date: 2 Sep 2017 18:00:01
Message: <web.59ab293aa4b127e9883fb31c0@news.povray.org>
"Kenneth" <kdw### [at] gmailcom> wrote:
>
>(Interestingly, scattering with extinction 0-- and no light
> source--shows NO media effect at all, as if the media wasn't there.)
>
Correction:
PURE-color scattering media, with extinction 0-- and no light source--shows NO
media effect at all, as if the media wasn't there.

Something like scattering {1,<.2,1,.2> extinction 0} does show some effects.



From: Bald Eagle
Subject: Re: Emitting media
Date: 3 Sep 2017 10:40:01
Message: <web.59ac1371a4b127e95cafe28e0@news.povray.org>
Thomas de Groot <tho### [at] degrootorg> wrote:

> Alain is absolutely right. He is the one person warning us, again and
> again, for the method/intervals/samples misconception cropping up
> regularly in these ng's. Hail to the chief! ;-)
>
> As for the docs, samples LesserInteger, GreaterInteger, is only valid
> for method 1 and 2. I agree that there is an ambiguity where method 3 is
> concerned: paragraph 2.7.2.3 Sampling Parameters & Methods in the wiki (
> http://wiki.povray.org/content/Reference:Sampling_Parameters_%26_Methods
> ) does not state clearly that the second term is ignored when using
> method 3. This should be changed.

Well, this is something (the type of thing) that I think ought to be addressed
in a more explanatory, demonstrable way - especially for new users or people who
have never used media, or (like myself) who never use media - because it's
complicated enough to wind up turning into a huge debugging time-sink.
A POV-Ray feature ought to be a _tool_ not a stumbling block or independent
research project.

No one's fault - not casting aspersions or assigning blame - it's just the
present state of the current version. (and with a little more work, I could get
that to sound downright poetic...)

I think that with something like POV-Ray, which is driven primarily by
user-written SDL (as opposed to GUI / modeling interface) there needs to be a
mechanism by which directives, or at least the desired ones, can be unsilenced.
If a "local" flag could be added as part of something like a media statement,
like:
media {
....
....
show_messages
}

then it would tell you everything about what was going on under the hood that it
_could_ tell.

Stream output might look like:
"Media (method 3): using a samples value greater than 1 [default] may
dramatically increase render times and result in artefacts"

"Ignoring second value for samples"

"media set to interval 1: ignoring confidence and variance values"

If a GLOBAL flag could be added - say as part of the global_settings block of
code, then it would trigger those messages for all such message-providing
directives in the current scene.  It would probably be prudent to be able to
selectively disable and re-enable that feature so that all of the directives in
#include files could be excluded, and then successfully debugged sections of
code could be excluded, thereby facilitating progressive debugging by
process-of-elimination.

Until such a time as that happens, I'd say that rather than answering the same
type of questions over and over and over again (and I for one, am very grateful
to ALL of the knowledgeable and patient folks here who do that)
perhaps a sample scene could be constructed that allows such a thing to take
place, using #declare[d] flag variables and macros, much as is done in
the guts of the screen.inc file for camera and screen positions.
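A rough sketch of that flag-and-macro approach in plain SDL, using the parser's existing #warning and #debug directives (the macro name and its parameters are invented for illustration, not part of any distribution):

```
// Hypothetical user-side media checker, toggled by a #declared flag.
#declare Show_Messages = yes;

#macro Check_Media(Method, Intervals, Samples)
  #if (Show_Messages)
    #if ((Method = 3) & (Intervals > 1))
      #warning "Media (method 3): intervals > 1 will dramatically increase render time and may cause artefacts.\n"
    #end
    #if ((Method = 3) & (Samples > 10))
      #debug concat("Media samples = ", str(Samples, 0, 0), " (default is 10).\n")
    #end
  #end
#end

Check_Media(3, 30, 100)   // emits the warning above at parse time
```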

There are, I think, a lot of instances where the checking of some attribute or
value, and the resulting message sent to an output stream would save hours of
frustration, and that adds up to a lot when multiplied by multiple scene files
and multiple users writing that SDL.
It also makes it MUCH more NEW-User-friendly.
I understand that such checks, if embedded into the code, would slow down render
times, but that's why they should be activated and deactivated by such flags -
they'd only be used when desired or needed.

Because as much as I like to pursue my own ideas and experiments, I believe that
I ought to give something back, in terms of [trying to] help, and making POV-Ray
better - more accessible, easier to use, and more responsive to the user.  And I
believe that the best way to do that is with an integral, automated feedback to
the user from within the software, or the scene.  Why should the onus be on the
[new] user to go look something up (if they even know what to look up or if an
issue exists that needs to be looked up) and decipher the explanation, when the
code already can calculate the presence of a potential issue and display a
[comprehensible] and meaningful message?


I could see this type of philosophy being useful for issues that continually
crop up such as:

No light source
no camera
coincident surfaces - "one or more bounding box faces are coincident"
near-coincident surfaces - "one or more bounding box faces are within 1e-6
POV-units...."
difference - "one or more bounding box faces of differenced object is equal to
or smaller than parent object"
#declared object definitions that are never instantiated with object {}
Objects that are outside the camera view frustum (presumably this would equate
to ray-object intersections = 0)

I'm sure I could go on, and much of this will probably never find its way into
official POV code, but an alternate version whose function is primarily
to build and debug a scene, rather than render it with the image quality of
the official code, could be a huge time-saving tool.
Then the resulting SDL could be rendered at full quality with POV-Ray-proper.

Also, based on my reading of the Graphics Gems series, maybe there could be some
options to choose lower-accuracy but much higher-speed calculations, so that
rough scene-building could be rendered faster (different from the Quality
settings). This would be used for slow calculations like sqrt(), certain
isosurface functions, root solving, trig functions, etc.
Some of these rely on pre-calculated look-up tables, etc.


Kenneth - I think you're doing a great job with your methodical media
experiments, and it would be great to see some scenes that show the results of
your comparative studies.   Grids of media cubes against different backgrounds,
and overlapping media, with captions.  I would strongly suggest that if you do
the work to make such scenes available, they be included as part of the
example scenes provided with the next distribution.

:)



From: Alain
Subject: Re: Emitting media
Date: 3 Sep 2017 21:23:26
Message: <59acab0e@news.povray.org>
Le 17-09-02 à 17:46, Kenneth a écrit :
> "Kenneth" <kdw### [at] gmailcom> wrote:
>> "omniverse" <omn### [at] charternet> wrote:
>>>
>>> Where I get most confused is that factoring in of background colors, which I
>>> think always remain additive (emitting) or subtractive (absorbing)
>>> regardless of the media itself.
>>
>> I think that's true, when using a SINGLE media. But when two types of media are
>> used (like emission + absorption), it gets a bit trickier-- and seems to depend
>> on their own respective colors.
> 
> Actually, I'm starting to come around to your idea ;-)-- that at least emission
> and absorption media effects depend (solely??) on what the background colors
> are, for their final 'filtered' media-color. The use of a pure-color
> media-- with one or more zeros in the color vector-- seems to confirm this. And,
> that using multiple medias (well, emission + absorption) of COMPLEMENTARY pure
> colors serves to filter *all* of the background color to some extent-- because
> there are no longer any zeros in the 'combined' color-filtering vector-- with
> the result *looking like* actual opacity.
> 
> This is a paradigm shift for me: I used to think that volumetric media was a
> 'thing unto itself', more or less, with its effects only modifying the colors
> of objects WITHIN the media object-- and having nothing at all to do with
> filtering the background and *its* colors. I guess I never really noticed the
> background-color effects, because I've only lately tried creating a PURE-color
> media (where there's a zero in one or more of the components, showing the
> obvious filtering that's going on-- and showing NO so-called 'opacity' for those
> colors.)  My previous uses of a single media never had a zero in the color
> vector-- so I took the resulting 'all-color filtering' to mean 'opacity.'
> 
> This is my latest theory, anyway ;-)
> 
> HOWEVER... Scattering media might be a different animal (or not?) The current
> way that I think about scattering (and its 'extinction' value) is that it's
> basically emission and absorption media combined (while also showing effects
> from lighting, of course.) That's probably a too-simplistic description, but it
> will do for now.
> 
> But PURE-color scattering used as a SINGLE media also shows
> the background filtering (no 'opacity' or filtering for certain colors) , even
> with a very high extinction value. For example,
>          scattering{1, <1,0,0> extinction 300}
> Using this in your laser code, it completely extinguishes the red background
> hexagons (i.e., makes them black)-- but leaves the green and blue hexagons
> unaltered.  So extinction 300 is actually   300*<1,0,0>   in this case (or can
> be thought of that way) -- the SAME color as the media color itself-- but
> 'extinguishes' that red color. So scattering media-- when used with extinction--
> is a 'complementary-color' filter for the background, and for the impinging
> light source. (Interestingly, scattering with extinction 0-- and no light
> source--shows NO media effect at all, as if the media wasn't there.)

Something is fishy here. Extinction is supposed to only affect the 
absorption of incoming light and the shadowing effect. In a way, 
extinction 0 should be similar to no_shadow.
Maybe the media is intercepting your light.

> 
> Currently, scattering's extinction allows just a single float value. I have a
> dim and fuzzy memory, from v3.6xx days, that extinction could actually take a
> color vector (but I might be confusing that with an added absorption media.) It
> would be a nice feature addition to allow a color there-- so that a
> 'complementary' color could be used for the color extinction. For example,
>          scattering{1, <.2,1,.2>, extinction 1}

In versions 3.5 and 3.6, that's exactly how it worked. It still works that 
way now.

> produces a green-ish cloud-- but the 'complementary' color is filtered out of
> the incoming light, resulting in purple self-shadowing. With extinction
> <.8,0,.8>, the self-shadowing color would match the main green media, and the
> cloud would look nice and green throughout. That's not physically accurate, of
> course, but it would be more visually appealing ;-)

For that effect, you need to use extinction 0 for the scattering media 
and an absorbing media of the complementary colour to get a colour-matching 
shadow.
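In SDL, that combination might look like this sketch (densities and colours are illustrative, not from anyone's actual scene):

```
// Green scattering with extinction 0 (no self-shadowing from the
// scattering itself), plus an absorbing media of the complementary
// colour, so the shadowed regions stay green rather than purple.
interior {
  media {
    scattering { 1, rgb <0.2, 1.0, 0.2> extinction 0 }
  }
  media {
    absorption rgb <0.8, 0.0, 0.8>   // complement of the scattering colour
  }
}
```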

> 
> 

Emissive and absorbing media are not filters.
Emissive media ADDS to whatever is behind it. In a radiosity scene, with 
media on, it also illuminates its surroundings. It can't be seen against 
a white background. Well, if you save as a high dynamic range image (HDR 
or EXR), you can see it as brighter than white if you reduce the 
exposure in your viewing application.
Absorbing media SUBTRACTS from whatever is behind it, clipping any 
negative results to zero. It also casts shadows. It can't be seen against a 
black background.

If you have a red emissive media and a cyan absorbing media, you have this:
against a white background, the emissive media is invisible and the 
absorbing media removes the green and blue, leaving only the red.

Against a black background, the absorbing media is invisible and you 
only see the red one.



From: Alain
Subject: Re: Emitting media
Date: 3 Sep 2017 21:55:21
Message: <59acb289@news.povray.org>
Le 17-09-03 à 10:36, Bald Eagle a écrit :

> 
> I think that with something like POV-Ray, which is driven primarily by
> user-written SDL (as opposed to GUI / modeling interface) there needs to be a
> mechanism by which directives, or at least the desired ones, can be unsilenced.
> If a "local" flag could be added as part of something like a media statement,
> like:
> media {
> ....
> ....
> show_messages
> }
The global_settings would probably be a better place for such a switch.

> 
> then it would tell you everything about what was going on under the hood that it
> _could_ tell.
> 
> Stream output might look like:
> "Media (method 3): using a intervals value greater than 1 [default] -may-
> dramatically increase render times and result in artefacts"
Not «may» but «will»

> 
> "Ignoring second value for samples"
> 
> "media set to interval 1: ignoring confidence and variance values"
> 
> If a GLOBAL flag could be added - say as part of the global_settings block of
> code, then it would trigger those messages for all such message-providing
> directives in the current scene.  It would probably be prudent to be able to
> selectively disable and re-enable that feature so that all of the directives in
> #include files could be excluded, and then successfully debugged sections of
> code could be excluded, thereby facilitating progressive debugging by
> process-of-elimination.
> 
> Until such a time as that happens, I'd say that rather than answering the same
> type of questions over and over and over again (and I for one, am very grateful
> to ALL of the knowledgeable and patient folks here who do that)
> perhaps a sample scene could be constructed that allows such a thing to take
> place, using #declare[d] flag variables and macros, very much like is done in
> the guts of the screen.inc file for camera and screen positions.
> 
> There are, I think, a lot of instances where the checking of some attribute or
> value, and the resulting message sent to an output stream would save hours of
> frustration, and that adds up to a lot when multiplied by multiple scene files
> and multiple users writing that SDL.
> It also makes it MUCH more NEW-User-friendly.
> I understand that such checks, if embedded into the code, would slow down render
> times, but that's why they should be activated and deactivated by such flags -
> they'd only be used when desired or needed.
Most of those can be found at parse time. Those can't affect render 
speed, and only very slightly affect parse time.

> 
> Because as much as I like to pursue my own ideas and experiments, I believe that
> I ought to give something back, in terms of [trying to] help, and making POV-Ray
> better - more accessible, easier to use, and more responsive to the user.  And I
> believe that the best way to do that is with an integral, automated feedback to
> the user from within the software, or the scene.  Why should the onus be on the
> [new] user to go look something up (if they even know what to look up or if an
> issue exists that needs to be looked up) and decipher the explanation, when the
> code already can calculate the presence of a potential issue and display a
> [comprehensible] and meaningful message?
> 
> 
> I could see this type of philosophy being useful for issues that continually
> crop up such as:
> 
> No light source
May be intentional. Common in radiosity scenes.
> no camera
In this case, the default camera is used, but yes, a warning could say 
"default camera in use".
> coincident surfaces - "one or more bounding box faces are coincident"
> near-coincident surfaces - "one or more bounding box faces are within 1e-6
> POV-units...."
Those two are exceedingly hard to detect. Even when the bounding boxes 
test positive by your criteria, there may be no coincident surfaces, and 
you may have some in other cases when the bounding boxes are farther apart.

> difference - "one or more bounding box faces of differenced object is equal to
> or smaller than parent object"
That case is so common that the message would be meaningless. You can 
easily have several big objects chopping bits off a small object, like 
planes cutting off parts of something, or many small objects carving 
dimples into a larger object...

> #declared object definitions that are never instantiated with object {}
Good one, but it may be intentional, as in the case of an intermediate 
object, or an object used as a building block for another, more complex object.

> Objects that are outside the camera view frustum (presumably this would equate
> to ray-object intersections = 0)
What about objects visible only through reflection or refraction?
What about out-of-view objects in a radiosity scene? You can't see them, 
but they can have an important effect.

> 
> I'm sure I could go on, and much of this will probably never find its way into
> official POV code, but having an alternate version whose function is primarily
> to build and debug a scene, rather than render it with the image quality that
> the official code does could be a huge time-saving tool.
> Then the resulting SDL could be rendered at full quality with POV-Ray-proper.
> 
> Also, based on my reading of the Graphics Gems series, maybe there could be some
> options to choose lower-accuracy but much higher speed calculations so that
> rough scene-building could be rendered faster (different than Quality settings)
> this would be used for slow calculations like sqrt(), certain isosurface
> functions, root solving, trig functions, etc.
> Some of these rely on pre-calculated look-up tables, etc.
In modern CPUs, the FPU's native accuracy is double precision. Using 
single precision probably won't be faster. I don't think that there is a 
way to split the FPU so that it can perform two single-precision 
calculations at the same time.
sqrt and the trig functions are native operations of your FPU and quite 
fast. Using a look-up table could be counterproductive due to the space 
needed to store them all in memory. If your look-up table ever gets 
pushed to the page file, its retrieval will take far more time than 
doing the calculations.



From: Thomas de Groot
Subject: Re: Emitting media
Date: 4 Sep 2017 02:47:29
Message: <59acf701@news.povray.org>
On 3-9-2017 16:36, Bald Eagle wrote:
> Well, this is something (the type of thing) that I think ought to be addressed
> in a more explanatory, demonstrable way - especially for new users or people who
> have never used media, or (like myself) who never use media - because it's
> complicated enough to wind up turning into a huge debugging time-sink.
> A POV-Ray feature ought to be a _tool_ not a stumbling block or independent
> research project.
> 
> [snip]

I need to ponder this. I fully agree with you by the way, but I think it 
is not a really easy matter to accomplish correctly. I have in my 
ancient times written little demo files for new users of some programs 
at work. It was a hellish job to get it so that even the most "stupid" 
(not supposed to be negative) user could navigate through without 
errors. One gets illuminating insights into the functioning of the human 
mind ;-)

-- 
Thomas



From: Bald Eagle
Subject: Re: Emitting media
Date: 4 Sep 2017 12:50:01
Message: <web.59ad83a9a4b127e95cafe28e0@news.povray.org>
Alain <kua### [at] videotronca> wrote:

> > If a "local" flag could be added as part of something like a media statement,
> > like:
> > media {
> > show_messages
> > }
> The global_settings would probably be a better place for such a switch.

Depends on how many media statements you have, and where they're located.
One can have an include file (which presumably works as it's 'supposed to').
One can have a media statement that you're currently debugging.
One can have a freshly written SDL file with multiple media statements that you
want to be checked, and then progressively exclude them as they are verified as
working correctly.
Then the global switch could be flicked off to have the parser skip all of the
individual checks.


> > "Media (method 3): using a intervals value greater than 1 [default] -may-
> > dramatically increase render times and result in artefacts"
> Not «may» but «will»

So then maybe put that in all caps with asterisks:  *** WILL ***   ;)


> > No light source
> May be intentional. Common in radiosity scenes.

Indeed, but when a message is issued, it's easy to look at it and know that it's
superfluous, whereas in the absence of a message, it's hard to tell that
something you're assuming exists actually doesn't.

> > no camera
> In this case, the default camera is used, but yes, a warning may tell
> "default camera in use"
Yes.

> > coincident surfaces - "one or more bounding box faces are coincident"
> > near-coincident surfaces - "one or more bounding box faces are within 1e-6
> > POV-units...."
> Those two are exedingly hard to detect. Even when the bounding boxes
> test positive to your critera, there may be no coincident surfaces, and
> you may have some in other cases when the bounding boxes are farther away.

Well, this is just a first approximation of how it could work; it's just
meant as a helper / reminder, rather than an omniscient automated debugger.

> > difference - "one or more bounding box faces of differenced object is equal to
> > or smaller than parent object"
> That case is so common that the message will be meaningless. You can
> easily have several big objects chopping bits of a small object, like
> planes cutting off parts of something, or many small objects carving
> dimples into a larger object...

Well, yes.  I knew that as I was writing it, but it might be meaningful for the
very new users who are doing simple test scenes.  Something that is easily
triggered by false positives would be an excellent place to have a default of
off, so that it would only be used if intentionally enabled.

> > #declared object definitions that are never instantiated with object {}
> Good one, but may be intentional, like the case of an intermediate
> object, or an object used as a building block for another complexe object.

Yes, many things may be intentional - but the point is not to declare the
situation as a bona fide error - but pick it up and point it out just in case
it's something that's missed through haste or inattention, etc.

> > Objects that are outside the camera view frustum (presumably this would equate
> > to ray-object intersections = 0)
> What about objects visible only through reflection or refractions ?
> What about out of view objects in a radiosity scene. You can't see them,
> but they can have an important effect.

Of course, I had thought of that as well, but those are peripheral cases.
The point here is to address a case where a user may be programming a scene and
be very frustrated about why they can't see it.  And of course, if it's not in
the view frustum, then that's something worth knowing, rather than trying to
debug an isosurface statement because you thought it was something in the math
or the settings, when it's just in the wrong location.



> In modern CPUs, the FPU's native accuracy is double precision. Using
> single precision probably won't be faster. I don't think that there is a
> way to split the FPU so that it can perform two single-precision
> calculations at the same time.
> sqrt and the trig functions are native operations of your FPU and quite
> fast. Using a look-up table could be counterproductive due to the space
> needed to store them all in memory. If your look-up table ever gets
> pushed to the page file, its retrieval will take far more time than
> doing the calculations.

I'm sure I may be using outdated information - but I've seen numerous people
point out that it's better to compare two "squared" values, than compare two
square-roots, because it's faster, especially if there's a LOT of those checks
being done in the code.
"Quite fast" can always be faster, especially if it helps one to write better
code, and if such calculations are being performed millions, or in the case of
nested or recursive code - billions of times.
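In SDL terms, the squared-comparison trick reads like this sketch (the values are invented; vlength() computes the sqrt that vdot() avoids):

```
// Distance check done two ways; both print "inside" for this P and R.
#declare P = <3, 4, 0>;
#declare R = 6;

// Slower: vlength(P) is sqrt(vdot(P, P))
#if (vlength(P) < R)
  #debug "inside (vlength)\n"
#end

// Faster and equivalent for non-negative R: compare squared values
#if (vdot(P, P) < R*R)
  #debug "inside (squared)\n"
#end
```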

Perhaps some things are good ideas, perhaps some are appropriate only for new
users, but I could see them being useful, since I've seen - and done myself -
simple errors that wasted hours or days of debugging until the issue was solved.
 It's always easy once you know what's wrong, and it's always obvious in
hindsight.

But I thought I'd get the ball rolling - and then perhaps others might start
thinking along those lines, and come up with more and better suggestions, and it
would get their minds into the habit of looking for those instances in their own
code, and they'd be much better debuggers for it.



From: Bald Eagle
Subject: Re: Emitting media
Date: 4 Sep 2017 13:30:01
Message: <web.59ad8cfca4b127e95cafe28e0@news.povray.org>
Thomas de Groot <tho### [at] degrootorg> wrote:

> I need to ponder this. I fully agree with you by the way, but I think it
> is not a really easy matter to accomplish correctly. I have in my
> ancient times written little demo files for new users of some programs
> at work. It was a hellish job to get it so that even the most "stupid"
> (not supposed to be negative) user could navigate through without
> errors. One gets illuminating insights into the functioning of the human
> mind ;-)

Yes, and the code as well.
There are simple things that ought to be easy to check in a variety of instances
that would make debugging much, much easier.

The easy and bleeding-obvious cases ought to be done first, and then when the
general manner of implementation is established and debugged, the more
complicated and less generally useful cases can be experimented with.

I know that there is a logical and preferred order of statements in a camera
directive, with some overriding others.  It's not obvious without reading the
docs and groups and working it out - but the parser would "know" that it's
overriding something - so just SAY SO.  Then the user could go "Whoops!" and
change the order - or not.
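The ordering point can be illustrated with a sketch like this (values invented; my understanding is that camera items apply in the order written, so a transformation placed after look_at moves the already-aimed camera):

```
// The look_at aim is computed when the keyword is parsed; the later
// rotate then transforms the whole camera, so it may no longer point
// at <0, 1, 0>. A parser message here could save some head-scratching.
camera {
  location <0, 2, -5>
  look_at  <0, 1,  0>
  rotate   90*y        // applied after aiming; changes what the camera sees
}
```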

There are obviously a LOT of things that could be done to make writing and
debugging a scene faster, easier, and more reliable.
"Rendering" a scene by some super-fast non-raytracing method would be a huge
benefit to just check the position of objects, especially with animations
consisting of hundreds or thousands of frames.

Some as-of yet unspecified method of quickly checking certain scene objects or
variable values (without having to run the WHOLE scene through the parser) would
be super-useful (with obvious impossibilities).
Perhaps just writing some simple, clean, well-commented code snippets in a fully
working scene for the insert menu would be best for that.

I'm just trying to smooth out the learning curve a bit, and help make it a bit
less arduous.

Removing the need to rewrite some of the labor-intensive "common
computer-science" algorithms and calculations ought to make POV-Ray more
accessible, easier to make better scenes, and more fun to code a scene -
rather than running into "oh crap- how do I do THAT???"

"That's _supposed to work_!  What did I do wrong (this time)!!!"




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.