POV-Ray : Newsgroups : povray.programming : Modify source code of program POVRAY + MPI (please Help)
  Modify source code of program POVRAY + MPI (please Help) (Message 11 to 15 of 15)  
From: gaprie
Subject: Re: Modify source code of program POVRAY + MPI (please Help)
Date: 5 Jun 2014 13:20:00
Message: <web.5390a63c393ecfd02ebcdc450@news.povray.org>
Thanks to Mr. Lipka and Mr. Thorsten Froehlich for the help.

I'm still a bit confused, but I have tried hard to follow the code and the hints
from you guys.

So far I have found the variables in "source/backend/scene/view.cpp", i.e. "DBL
ratop" and "DBL rabottom". I tried changing them to "DBL ratop = 1" and
"DBL rabottom = 200", rebuilt my POV-Ray, and ran the command "povray pawns.pov";
it produced an image pawns.png that was rendered only up to line 200.

What I actually want to find is the file whose job is to read the command line
(e.g. "povray +SR1 +ER200 +Ipawns.pov") and pass those options into the main
POV-Ray rendering process.

If that file is written in C, then I want to add a bit of MPI code to it, something like:

#include <mpi.h>

int sum_of_nodes, number_of_node;

MPI_Init(&argc, &argv);
MPI_Comm_size(MPI_COMM_WORLD, &sum_of_nodes);
MPI_Comm_rank(MPI_COMM_WORLD, &number_of_node);

if (number_of_node == 0) {
  /* first node: render its share of the image, e.g. rows 1-100 */
  ......
}
else { /* with 2 nodes this is the second node */
  /* second node: render the remaining rows, e.g. 101-200 */
  ......
}

MPI_Finalize();

So I expect to build this MPI-modified POV-Ray on each node.
Each node would then do its own part of the job; for example, the first node
renders rows 1-100 and the second node renders rows 101-200.

Is my explanation understandable?
Sorry, I am still new to POV-Ray.

Thanks for the help.
Regards,
Galih Pribadi



From: clipka
Subject: Re: Modify source code of program POVRAY + MPI (please Help)
Date: 5 Jun 2014 15:36:33
Message: <5390c6c1$1@news.povray.org>
On 05.06.2014 19:17, gaprie wrote:

> What I actually want to find is the file whose job is to read the command
> line (e.g. "povray +SR1 +ER200 +Ipawns.pov") and pass those options into the
> main POV-Ray rendering process.

There is no such file, because POV-Ray /already/ uses some MPI-ish 
interface between the part we call the front-end and the part we call 
the back-end (with the design goal to ultimately support multi-node 
operation in some future version).

The front-end is a single thread responsible for reading the command 
line, and later assembling and writing the output image.

The back-end is actually multiple threads: One parser thread responsible 
for parsing the scene file (in a multi-node environment there would have 
to be one such thread per node), and a number of threads rendering the 
image in chunks of 32x32 pixels at a time. A thread finished with its 
chunk would send the results to the front-end, then go for the next 
yet-unrendered chunk.
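
Reduced to a toy MPI program (made-up numbers, not POV-Ray/POVMS code - just
the general farming pattern), the chunk handling would look roughly like this:

#include <mpi.h>
#include <stdio.h>

#define CHUNKS 64                       /* made-up number of 32x32 blocks */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {                    /* "front-end": hand out work, collect results */
        int next = 0, finished = 0, chunk;
        MPI_Status st;
        while (finished < size - 1) {
            /* any worker reporting in gets the next chunk, or -1 to stop */
            MPI_Recv(&chunk, 1, MPI_INT, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &st);
            if (chunk >= 0)
                printf("chunk %d rendered by rank %d\n", chunk, st.MPI_SOURCE);
            int assign = (next < CHUNKS) ? next++ : -1;
            if (assign < 0)
                finished++;
            MPI_Send(&assign, 1, MPI_INT, st.MPI_SOURCE, 0, MPI_COMM_WORLD);
        }
    } else {                            /* "back-end" worker: render chunk after chunk */
        int chunk = -1;                 /* first message only asks for work */
        for (;;) {
            MPI_Send(&chunk, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
            MPI_Recv(&chunk, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            if (chunk < 0)
                break;
            /* ...render the 32x32 pixel block for this chunk here... */
        }
    }

    MPI_Finalize();
    return 0;
}

In the real thing the worker would of course send back the rendered 32x32
pixel block (as a POVMS message) rather than just the chunk number.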

The interface used for message passing does not conform to the MPI 
standard (the one behind implementations such as Argonne National 
Laboratory's MPICH), but uses a proprietary API: the thing called POVMS, 
which - as Thorsten has already noted - comes in both a C and a C++ 
version (the latter being implemented as a wrapper around the C API).


If your goal is not to just throw together a quick hack to run POV-Ray 
on multiple nodes, but to create some decent solution that future 
development can build on, maybe you should actually work on the POVMS 
implementation itself to get it flying across node boundaries. (Or, as a 
potential alternative, rip out POVMS entirely and replace it with some 
widespread and well-maintained MPI library.)

If your goal /is/ to just throw together a quick hack, then maybe you'd 
actually be better off with some shell scripts that just run multiple 
POV-Ray instances with suitable parameters, followed by an ImageMagick 
call to paste the results together.
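
Spelled out as a tiny C driver (a plain shell script would do exactly the
same; this assumes each partial render writes only its own strip of rows to
its output file):

/* Quick-hack sketch: render the two halves in separate POV-Ray runs, then
 * paste them together with ImageMagick. +SR/+ER select the rows, +FN forces
 * PNG output, +O names the output file. */
#include <stdlib.h>

int main(void)
{
    system("povray +Ipawns.pov +FN +SR1   +ER100 +Otop.png");
    system("povray +Ipawns.pov +FN +SR101 +ER200 +Obottom.png");
    system("convert top.png bottom.png -append pawns.png"); /* stack strips vertically */
    return 0;
}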



From: Le Forgeron
Subject: Re: Modify source code of program POVRAY + MPI (please Help)
Date: 5 Jun 2014 15:37:08
Message: <5390c6e4$1@news.povray.org>
On 05/06/2014 19:17, gaprie wrote:
> What I actually want to find is the file whose job is to read the command
> line (e.g. "povray +SR1 +ER200 +Ipawns.pov") and pass those options into the
> main POV-Ray rendering process.

source/base/processoptions.cpp

but you might also need to look at the vfe (virtual front end),
implemented for your system (unix, windows ...); that's where main() is.

The vfe/ hierarchy is complex on its own. (There is a subpart for unix
and another for windows.)

Parsing the options fills the memory... and later, getters provide a
default if the memory has not been filled for a property.
Properties are encoded as a 4-byte id; that's the part POVMS transfers
around. You will find them in source/backend/povmsgid.h :-)
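
For illustration only (the identifiers below are invented; the real ones are
in povmsgid.h), the encoding looks like this:

/* Illustration only - these names are made up. Each property id is a 4-byte
 * value, usually written as a four-character constant, so it is cheap to
 * pass around in a message while still being recognizable in a debugger or
 * a hex dump. */
enum
{
    kExampleAttrib_StartRow = 'SRow',   /* hypothetical id: four ASCII bytes in one int */
    kExampleAttrib_EndRow   = 'ERow'    /* hypothetical id */
};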



-- 
IQ of crossposters with FU: 100 / (number of groups)
IQ of crossposters without FU: 100 / (1 + number of groups)
IQ of multiposters: 100 / ( (number of groups) * (number of groups))



From: Thorsten Froehlich
Subject: Re: Modify source code of program POVRAY + MPI (please Help)
Date: 6 Jun 2014 13:45:01
Message: <web.5391fd92393ecfd078c5ba40@news.povray.org>
Le_Forgeron <jgr### [at] freefr> wrote:
> On 05/06/2014 19:17, gaprie wrote:
> > What I actually want to find is the file whose job is to read the command
> > line (e.g. "povray +SR1 +ER200 +Ipawns.pov") and pass those options into the
> > main POV-Ray rendering process.
>
> source/base/processoptions.cpp
>
> but you might also need to look at the vfe (virtual front end),
> implemented for your system (unix, windows ...); that's where main() is.
>
> The vfe/ hierarchy is complex on its own. (There is a subpart for unix
> and another for windows.)
>
> Parsing the options fills the memory... and later, getters provide a
> default if the memory has not been filled for a property.
> Properties are encoded as a 4-byte id; that's the part POVMS transfers
> around. You will find them in source/backend/povmsgid.h :-)

My branch in Perforce still contains a modified late 3.7 beta that has POVMS
properly separating frontend and backend so they work as independent processes.
It is a much better start for an MPI version because it has many dependencies
cleaned up, and the POVMS that comes with it can handle "addresses" of various
kinds, ranging from pipes to TCP/IP ports. Extending it with MPI addresses
should be fairly little work. I will see if I can zip up a version that compiles
one day. No promises when that will be, though...

Thorsten



From: Thorsten Froehlich
Subject: Re: Modify source code of program POVRAY + MPI (please Help)
Date: 7 Jun 2014 03:35:00
Message: <web.5392c06e393ecfd078c5ba40@news.povray.org>
clipka <ano### [at] anonymousorg> wrote:
> The interface used for message passing does not conform to the MPI
> standard (the one behind implementations such as Argonne National
> Laboratory's MPICH), but uses a proprietary API: the thing called POVMS,
> which - as Thorsten has already noted - comes in both a C and a C++
> version (the latter being implemented as a wrapper around the C API).
>
> If your goal is not to just throw together a quick hack to run POV-Ray
> on multiple nodes, but to create some decent solution that future
> development can build on, maybe you should actually work on the POVMS
> implementation itself to get it flying across node boundaries. (Or, as a
> potential alternative, rip out POVMS entirely and replace it with some
> widespread and well-maintained MPI library.)

One reason to create POVMS was that MPI at the time (1997/1998) was not really
good at handling multiple threads (especially cooperative threading like Macs
had at the time) and implementations existed mostly for Unix, yet multithreading
is a huge benefit for ray-tracing. Seems like this has not changed a lot since
then, i.e.
http://www.open-mpi.org/~jsquyres/www.open-mpi.org/doc/v1.8/man3/MPI_Init_thread.3.php
still says it is no good for heavily multithreaded applications :-(

So effectively, to use MPI one would need three layers of communication instead
of the two layers POV 3.7 has now:
- MPI processes of POV, with a central server process controlling the rendering
  and collecting the results;
- POV processes, each with a single MPI communication thread that bundles the
  local POVMS messages;
- POV threads that use POVMS for in-process communication.

As for the POVMS message data exchange, it would work by simply putting
serialized POVMS message streams into MPI byte arrays. POVMS streams already
handle byte ordering, float type conversion etc., so it should be straightforward.
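
A rough sketch of what that MPI communication thread's side could look like
(the helper name and framing are invented here, not code from any POV-Ray
branch):

#include <mpi.h>

/* Hypothetical helper: a serialized POVMS message is just a byte array as
 * far as MPI is concerned - byte order and float conversion were already
 * handled when the POVMS stream was written. */
void send_povms_stream(unsigned char *buf, int len, int dest)
{
    MPI_Send(buf, len, MPI_BYTE, dest, 1 /* tag */, MPI_COMM_WORLD);
}

int main(int argc, char **argv)
{
    int provided = 0;
    /* FUNNELED: the process may be multithreaded, but only the one
     * designated communication thread makes MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    if (provided < MPI_THREAD_FUNNELED) {
        /* this MPI cannot even funnel - fall back or give up */
    }

    /* ...the communication thread would collect the local POVMS messages
       and hand them to send_povms_stream() here... */

    MPI_Finalize();
    return 0;
}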



