POV-Ray : Newsgroups : povray.beta-test : Re: POV-Ray v3.7.beta.10 available.
From: Chris Cason
Date: 19 Oct 2005 17:50:24
Message: <4356bfa0@news.povray.org>
Mike Raiford wrote:
> So, is this to say that POV-Ray will "officially" support clusters? Or 
> are you saying this opens the door for the developer community to create 
> a patch to do so, such as a form of POV-Ray that would work seamlessly 
> with IMP, for example?

At the moment we are concentrating on making sure the infrastructure exists
internally to allow us to separate the concepts of user interface and renderer,
which originally had a one-to-one relationship. That is, for each scene file
being parsed there was exactly one user interface (even if only a command
line), and for each successfully-parsed scene file exactly one render was
started, after which the parsed data was deleted from memory.

Since implementing the SMP support required us to basically rip the code
apart and bolt it back together again without the use of the literally
hundreds of global variables and other bits of writable shared data which
wouldn't work with multiple threads, we took the opportunity to implement
what is known as the 'document/view' model (this is what is used by many MDI
programs based on MFC, though of course our code has nothing to do with that
other than sharing the concept).

With respect to the doc/view concept, a parsed scene is a 'document' and a
render is a 'view'. The 'document' is everything in the scene except the
camera settings, and the 'view' is the camera settings, output resolution,
output file type, gamma correction, and so forth. Each document and view has
its own internal identifier, and the POV-Ray code uses these IDs when
communicating between the front and back ends of the codebase. When a scene
is successfully parsed the frontend is told about it; the frontend then uses
that ID to create a view and subsequently asks the backend to render that view.

The parse and render stages are therefore now quite distinct; in particular,
the render requires an explicit command from the frontend, referencing the
parsed scene's ID, to kick off the actual tracing.
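As a rough sketch of the document/view split described above (all names here are invented for illustration, not taken from the actual POV-Ray 3.7 source), the 'document' carries the parsed scene minus the camera, while the 'view' carries everything that may vary per render of that same scene:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical illustration only. The 'document' is the parsed scene
// without the camera; the 'view' holds the per-render settings.
struct Document {
    int id;                            // internal document identifier
    std::vector<std::string> objects;  // parsed geometry, textures, lights...
};

struct View {
    int id;                            // internal view identifier
    int documentId;                    // which parsed scene to render
    std::string camera;                // camera settings live in the view
    int width, height;                 // output resolution
    std::string outputFileType;        // e.g. "png"
    double gamma;                      // gamma correction
};
```

Two View objects can reference the same documentId with different camera settings, which is exactly why a render needs an explicit command naming the view: the backend cannot infer from the parsed scene alone which camera or resolution to use.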

What this also implies should now be clear: a parsed scene ('document') is a
separate object that has its own lifetime. It does not automatically vanish
once a render succeeds; we have to delete it explicitly. It also means you
can have more than one render ('view') of a document from different camera
positions without re-parsing the scene, and furthermore that you can have
more than one document in memory at once and be rendering one or several
views of those documents at any given time (presuming you have the resources
and CPU horsepower to make it practical).
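A minimal sketch of those lifetime rules (again with invented names; this is not the real backend API): a parsed scene is kept alive under its own ID, any number of views can be rendered from it, and it only disappears when explicitly deleted.

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical backend: parse once, render many views, delete explicitly.
struct Backend {
    std::map<int, std::string> documents;  // parsed scenes, keyed by ID
    std::map<int, int> views;              // view ID -> owning document ID
    int nextDoc = 1, nextView = 1;

    int Parse(const std::string& scene) {  // parse once...
        documents[nextDoc] = scene;
        return nextDoc++;
    }
    int CreateView(int docId, const std::string& /*camera*/) {
        views[nextView] = docId;           // ...then view from many cameras
        return nextView++;
    }
    bool Render(int viewId) {              // rendering does NOT delete the scene
        return documents.count(views.at(viewId)) != 0;
    }
    void DeleteDocument(int docId) {       // lifetime ends only on request
        documents.erase(docId);
    }
};
```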

There are still a few restrictions in the backend code making a simultaneous
multi-scene render impractical but the code as a whole already works exactly
as I have described.

You may note therefore - since the frontend and backend have the concept of
"document IDs" - that this meshes nicely with the ability of a frontend
to be in a different *place* than the backend. We don't pass pointers around
between the user interface and renderer any more; we refer to resources in a
more abstract way. Coupled with the fact that we use a message-passing
interface that is not sensitive to the local machine's word size or
endianness, the ability to have remote renders just comes naturally.

However (and yes there is a however), we currently have no user-interface
support for this and don't plan on adding it anytime in the near future as
we're busy with the core code. Plus there's no transport (e.g. TCP) or
authentication support written yet.

Getting back to your original question: clearly, network support could be
implemented by someone else. However I feel that
it is best done by us because (a) I don't want a raft of incompatible network
implementations out there, and (b) properly thought-out authentication and
security is essential if we want this to take off. We don't want to see POV
get a bad security reputation because someone somewhere does a quick network
port that ends up compromising someone's hardware.

If a group of sufficiently dedicated users wanted to get together and help
hammer out implementation details for networked rendering then I'd be willing
to listen.

Now that's a long reply to a simple question but in the process at least I've
publicly put into hardcopy some of the logic behind the rather large changes
that have been made to POV-Ray 3.7 (and perhaps now some will understand why
each beta takes so long - there's really a huge amount of infrastructure work
in there that we have to deal with and finish, totally apart from the actual
rendering bugs).

-- Chris
