Subject: Re: PovRay Google Trends
From: Bald Eagle
Date: 17 Nov 2016 10:15:00
Message: <web.582dc9365604f23b488d9aa0@news.povray.org>
I was thinking about this on my drive into work, and I think a good way to
make POV-Ray more usable would be to compare and contrast how people use
hand-coding vs. graphical modeling, and WHY.

With graphical modeling, one can see a place or a thing, and move something to
that place, or move that thing to a new place or orientation.

I understand that there may be issues with "exactness", but there are certainly
ways to bridge the gap: snap & glue, alignment grids, etc.

From the other side, I think a lot of people have tried to achieve some of the
ease of use of modeling in hand-coding by writing macros that perform certain
tasks a modeler does; indeed, it's likely that very similar code operates in
the back end of a modeler.

Tasks such as the following (a sketch of the first appears after the list):
"center THIS object at THAT point"
"Align the left side of THIS with the right side of THAT"
"Align the edge of THIS with THAT guideline (axis)"

Although I haven't had time to pursue the idea, I think that something
along the lines of Visio's SmartShapes would be very useful.
It would be nice to have a pool of object metadata to query and modify,
accessible from inside the scene SDL.
This would likely be a list of axes, edges, endpoints, etc. that would be
available to the scene writer, so that other objects could be aligned with
that object, the object could be rotated around a given point, and so on.

Again, there are likely macros and whatnot that already do much of this, but
the idea here is that every object is addressable and has some basic list of
variables to work with, rather than having to hand-code it all, every time.
I'm not sure how to implement this. It seems to me that, the way POV SDL
works, the parser would have to scan through the SDL, populate the object
attribute lists, and then parse the scene again; on the second pass it would
recognize and interpret variables that were undefined during the first
parsing.
I guess it would work sort of like the two-stage radiosity method:
run a Stage 1 scene through the parser and renderer, store the data,
and then use that stored data in a more complex scene.
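
That two-pass, store-and-reuse workflow can actually be prototyped today with
the #fopen/#write/#read directives; a minimal sketch, where the file name and
objects are placeholders:

// Stage-1 scene: measure an object and store its metadata.
#declare Thing = cylinder { <0, 0, 0>, <0, 2, 0>, 0.5 }
#fopen MetaFile "thing_meta.txt" write
#write (MetaFile, min_extent (Thing), ",", max_extent (Thing), "\n")
#fclose MetaFile

// Stage-2 scene: read the stored metadata back and build on it.
#fopen MetaFile2 "thing_meta.txt" read
#read (MetaFile2, ThingMin, ThingMax)
#fclose MetaFile2
// e.g. draw a translucent box where Thing was:
box { ThingMin, ThingMax pigment { rgbt <1, 1, 1, 0.7> } }

A native version would just do the store/read step invisibly between the two
parsing passes.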

Still just thinking out loud here, and brainstorming things that might be
worth looking at as options.

