(substantially the same post was made Dec. 4 to newusers & advancedusers
groups - my apologies to those seeing it multiple times)
Hello;
In my lab, we have been working on an application of a middleware layer
(called "cogito") to create a new way of interacting with POV-Ray - particularly
in exploring and refining images.
I've posted a YouTube video that illustrates some of the key benefits that we
see with this approach. Here's the URL:
http://www.youtube.com/watch?v=KOJFM-FOJVg
Do you think that this could be useful to you in your use of POV-Ray?
I'd appreciate knowing your answer to that question, and also receiving any
other feedback, questions, comments, etc. - either here or to my e-mail.
Best regards,
Daryl
"Daryl" <dar### [at] ureginaca> wrote in message
news:web.475eb7e87791dd63d9500170@news.povray.org...
> (substantially the same post was made Dec. 4 to newusers & advancedusers
> groups - my apologies to those seeing it multiple times)
>
> Hello;
>
> In my lab, we have been working on an application of a middleware layer
> (called "cogito") to create a new way of interacting with POV-Ray -
> particularly
> in exploring and refining images.
> I've posted a youtube video that illustrates some of the key benefits that
> we
> see with this approach. Here's the URL:
>
> http://www.youtube.com/watch?v=KOJFM-FOJVg
>
> Do you think that this could be useful to you in your use of POV-Ray?
> I'd appreciate knowing your answer to that question, and also receiving
> any
> other feedback, questions, comments, etc. - either here or to my e-mail.
>
> Best regards,
> Daryl
I'm somewhat dubious as to the utility of this approach.
With anything except simple scenes, the actual render times are
fairly long, and rendering multiple variants of a scene would
seem to take up a lot of processing power.
However, I can think of many times when I've re-rendered
a scene involving objects placed randomly, or with a random
shape parameter, changing only the seed value.
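That "change only the seed" re-render loop can be sketched as a small driver script. This is only a sketch: the Declare=Identifier=value switch is a real POV-Ray command-line option for overriding a float in the scene, but the scene file name and the Seed identifier are assumptions (the scene would need something like #declare Seed = ...; and rand(seed(Seed)) inside it).

```python
# Sketch only: build one POV-Ray command line per seed value.
# "scene.pov" and the "Seed" identifier are assumed names; the
# Declare= override itself is a real POV-Ray command-line feature.
def render_variants(scene="scene.pov", seeds=range(4)):
    """Return a list of POV-Ray invocations, one per seed
    (pass each list to subprocess.run to actually render)."""
    commands = []
    for s in seeds:
        commands.append([
            "povray", f"+I{scene}", f"+Oout_{s}.png",
            "+W320", "+H240",           # small preview renders
            f"Declare=Seed={s}",        # override the scene's random seed
        ])
    return commands
```

Running the previews at a small size keeps the cost of exploring many seed values manageable before committing to a full-size render.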
Tim Attwood wrote:
> I'm somewhat dubious as to the utility of this approach.
> With anything except simple scenes the actual render times are
> somewhat long, and rendering multiples of those scenes would
> seem to take up a lot of processing power.
That's where render farms come in useful!
Nicolas Alvarez <nic### [at] gmailisthebestcom> wrote:
> Tim Attwood wrote:
> > I'm somewhat dubious as to the utility of this approach.
> > With anything except simple scenes the actual render times are
> > somewhat long, and rendering multiples of those scenes would
> > seem to take up a lot of processing power.
>
> That's where renderfarms come useful!
I think that is true. Having several processors to handle the rendering is
certainly a nice thing. I also think that there are different ways to manage
the render time so that it doesn't become a huge issue. For example, a person
might explore and refine the layout of a scene while using a fairly fast
rendering method. After the non-rendering parameters have been selected, the
high-quality rendering parameters could be explored and refined on the basis
of smaller image sizes, etc. Even if this is not at interactive rates, if the
system maintains a record of what explorations/refinements have been
attempted, it is then easier to continue on.
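The coarse-to-fine workflow described above can be sketched as a short list of render stages. The switches are real POV-Ray command-line options (+I input file, +O output file, +W/+H image size, +Q render quality, +A antialiasing threshold), but the stage names, sizes, and quality choices here are illustrative assumptions, not settings from the video.

```python
# Sketch of a coarse-to-fine exploration, assuming illustrative
# stage names and sizes; the switches themselves are real POV-Ray options.
STAGES = [
    # Explore scene layout quickly: tiny image, reduced quality.
    {"name": "layout", "opts": ["+W160", "+H120", "+Q3"]},
    # Refine rendering parameters: moderate size, full quality.
    {"name": "refine", "opts": ["+W320", "+H240", "+Q9"]},
    # Final pass: full size, full quality, with antialiasing.
    {"name": "final",  "opts": ["+W800", "+H600", "+Q9", "+A0.3"]},
]

def command_for(stage, scene="scene.pov"):
    """Build the POV-Ray invocation for one stage (not executed here)."""
    return ["povray", f"+I{scene}", f"+O{stage['name']}.png"] + stage["opts"]
```

Keeping the early stages small and low-quality is what makes repeated exploration affordable; only the final stage pays the full render cost.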
Setting aside these (admittedly important) practical considerations, I hope
you will consider whether this style of interaction would be useful to you.
Some might be familiar with Design Galleries (by Marks et al., presented
at SIGGRAPH 1997), which computes many small sample images based on
initial user input, evaluates those images automatically, and then presents
the user with choices. As I recall, in that case the computational work is
done mostly at the beginning, before any interaction. In the case
demonstrated in the video, the user is involved much earlier and in the
evaluation of the sample images (allowing "interactive articulation" of the
problem or goals) - so it is more along the lines of "computational
steering."
Best regards,
Daryl