How about changing the main render loop (which, AFAIK, goes through each pixel
of each line) into some kind of server-client model?
It would open up great possibilities for comfortable distributed rendering, as
well as other things.
If you are changing, for example, the finish of some isosurface, you could
re-render only part of the image. An even better solution would be to re-render
only the pixels that "touch" this isosurface.
The user would select the exact area to re-render (e.g. in some GUI or other
add-on), and this add-on would call the POV-Ray engine.
The engine would parse the scene and load all textures (keeping them in memory,
etc.). Then it would receive commands like
RENDER LINE 76 PIXELS 64-158
RENDER LINE 77 PIXELS 60-140
via a named pipe or some other inter-process communication mechanism.
And - importantly - if we want to render another part of the image (in the same
scene) immediately afterwards, we just tell the POV server to render some more
pixels, without waiting for the scene to be parsed and its data loaded again.
When we are done we send a
SCENE DONE
command to dispose of the loaded scene, and then maybe SCENE PARSE c.pov
to load another version of the scene.
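A minimal sketch of what such a command loop could look like on the server side
(the pipe path, the command names and renderPixelSpan() are assumptions for
illustration, not an existing POV-Ray interface):

// Hypothetical sketch of a "POV server" command loop reading text commands
// such as "RENDER LINE 76 PIXELS 64-158" from a named pipe.
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

// Stand-in for the real renderer; the scene is assumed to be parsed already
// and all textures / .df3 data kept in memory between commands.
void renderPixelSpan(int line, int firstPixel, int lastPixel)
{
    std::cout << "rendering line " << line << ", pixels "
              << firstPixel << "-" << lastPixel << "\n";
}

int main()
{
    std::ifstream pipe("/tmp/pov-server");  // named pipe created beforehand with mkfifo
    std::string cmd;
    while (std::getline(pipe, cmd)) {
        std::istringstream in(cmd);
        std::string keyword;
        in >> keyword;
        if (keyword == "RENDER") {
            std::string lineWord, pixelsWord;
            int line = 0, first = 0, last = 0;
            char dash = 0;
            in >> lineWord >> line >> pixelsWord >> first >> dash >> last;
            renderPixelSpan(line, first, last);
        } else if (keyword == "SCENE") {
            std::string arg;
            in >> arg;
            if (arg == "DONE")
                break;                      // dispose of the loaded scene and stop
            // "SCENE PARSE file.pov" would (re)parse another scene version here
        }
    }
}

The point is that the loop runs against a scene that is already parsed and
resident in memory, so each RENDER command costs only the tracing of the
requested pixels.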
Second subject: distributed rendering. We just run several POV servers and give
them some way of communicating with a master client (via LAN, via the internet,
or even via e-mail for very long renders). The master program sends the scene
files to each render server and assigns them parts of the image to render. Each
server reports back to the master client when it has finished its part, so that
the master can assign new parts to servers as they become free.
Just a quick thought, but is there any chance it is worth doing?
--
http://www.raf256.com/3d/
Rafal Maj 'Raf256', home page - http://www.raf256.com/me/
Computer Graphics
I think that, to minimize computation, tiles would be preferable.
I am thinking of radiosity and photons, which are not local at all in
nature: they use large parts of the scene.
For example, with a bunch of render-slave computers, one
could be the master and dispatch tasks as the CPUs become
available: once a CPU is done with its tile, it requests another
region of the picture (without flushing its own data; it would be
a shame to start from scratch with the photons and radiosity). The
master collects the tiles and builds the final picture.
Radiosity is built in a hierarchical way; maybe there would be
some way of exploiting this to speed things up?
I would much rather see that master/slave scheme implemented
in POV itself than rely on a script-based dispatcher.
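A rough sketch of what such a master-side dispatcher could look like (the types
and helper functions are invented for illustration, not existing POV code):

// Hypothetical master-side dispatcher: splits the picture into tiles, hands
// a tile to whichever slave asks for work next, and would paste the returned
// tiles into the final image. Networking is reduced to stubs here.
#include <algorithm>
#include <iostream>
#include <queue>

struct Tile { int x0, y0, x1, y1; };

// Stand-ins: a real implementation would talk to the slaves, which keep
// their parsed scene, photon and radiosity data between tiles.
int waitForIdleSlave() { return 0; }
void renderTileOnSlave(int slave, const Tile& t)
{
    std::cout << "slave " << slave << " renders tile (" << t.x0 << "," << t.y0
              << ")-(" << t.x1 << "," << t.y1 << ")\n";
}

void renderDistributed(int width, int height, int tileSize)
{
    std::queue<Tile> work;
    for (int y = 0; y < height; y += tileSize)
        for (int x = 0; x < width; x += tileSize)
            work.push({ x, y, std::min(x + tileSize, width) - 1,
                              std::min(y + tileSize, height) - 1 });

    while (!work.empty()) {               // assign tiles as slaves become free
        Tile t = work.front(); work.pop();
        renderTileOnSlave(waitForIdleSlave(), t);
    }
}

int main() { renderDistributed(320, 240, 64); }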
Best,
S.
"Rafal 'Raf256' Maj" <spa### [at] raf256com> wrote in message
news:Xns9504C0FBDA478raf256com@203.29.75.35...
> How about changing main render loop (that goes through each pixel of each
> line AFAIK) into some kind of server-client model.
>
> It would give greate possibiliteis of comfortable distributed render as
> well other possibilities.
>
> If You are changing i.e. finish of some isosurface You could rerender part
> of image. But better solution - to rerender only pixels that "touch" this
> isosurface.
>
> User will select exact are to re-render (i.e. in some GUI or other addon)
> and this add-on will call PovRay engine.
>
> Engine will parse scene, load all textures (and store them in memory etc).
> Then it will recive i.e. commands like
> RENDER LINE 76 PIXELS 64-158
> RENDER LINE 77 PIXELS 60-140
> via some named pipe or other programs communication protocol.
>
> And - importand - if we want to render another part of image (in same
> scene) immediately after, we just send info to Pov-Server to render some
> more pixels, without waiting for parse/load data.
>
> After all we send some
> SCENE DONE
> commend to dispose of loaded scene, and then maybe SCENE PARSE c.pov
> to load another version of scene.
>
>
>
> Second subject - distributed render. We just run several pov-servers, give
> them some way of coomunicating with master client (via LAN, via internet,
> or even via email for very long renders), master program sends scene files
> to each render-server, and assign them parts of image to render. Each
> server reports main client when he done it's part so that client-master
can
> assign new parts to new servers.
>
> Just a qucik thought, but any chance it is worth it?
>
>
> --
> http://www.raf256.com/3d/
> Rafal Maj 'Raf256', home page - http://www.raf256.com/me/
> Computer Graphics
Steven Pigeon wrote:
> I think that, to minimize computation, tiles would be preferable.
> I am thinking of radiosity and photons, which are not local at all in
> nature: they use large parts of the scene.
>
>
> For example, with a bunch of render-slave computers, one
> could be the master and dispatch tasks as the CPUs become
> available: once a CPU is done with its tile, it requests another
> region of the picture (without flushing its own data; it would be
> a shame to start from scratch with the photons and radiosity). The
> master collects the tiles and builds the final picture.
>
> I would much rather see that master/slave scheme implemented
> in POV itself than rely on a script-based dispatcher.
>
> Best,
>
> S.
Wow, what timing :) I've been working feverishly over the past two
weeks on MegapovXRS, which is similar to what you describe.
I've essentially retrofitted MegaPOV 1.0 with an XML-RPC server. This
provides a multi-platform way for a remote client (the master) to send
work requests over the internet or a LAN using HTTP (and possibly HTTPS).
Ideally, the authors of SMPOV, POV-Anywhere, Rendview and others could
add a module that supports rendering with a MegapovXRS slave.
The way it works is that the master opens a new session, delivering
"command-line" arguments over RPC. Then the MegapovXRS slave parses the
scene, shoots any photons and waits for tile requests. When tile requests
come in, the tiles are rendered *without* having to parse the scene or shoot
photons again. This should provide a substantial boost (compared to a pure
script-based approach) when it comes to network-rendering scenes with long
parse times.
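To make that flow concrete, the slave side could be organised roughly like the
sketch below; the class and method names are invented for illustration and are
not necessarily what MegapovXRS actually exposes over XML-RPC:

// Hypothetical slave-side session object: the expensive steps (scene parsing,
// photon shooting) run once when the master opens the session, and every
// later tile request reuses that in-memory state.
#include <string>
#include <vector>

class RenderSession
{
public:
    // Master opens a session, delivering "command-line" style options over RPC.
    void open(const std::vector<std::string>& options)
    {
        parseScene(options);   // parse the .pov scene once
        shootPhotons();        // shoot photons once, if the scene uses them
        ready = true;
    }

    // Master requests one tile; nothing is re-parsed and no photons are re-shot.
    std::vector<unsigned char> renderTile(int x0, int y0, int x1, int y1)
    {
        std::vector<unsigned char> pixels;
        if (!ready)
            return pixels;     // session was never opened
        // ... trace the rectangle (x0,y0)-(x1,y1) and fill 'pixels' ...
        return pixels;
    }

private:
    void parseScene(const std::vector<std::string>&) { /* stand-in */ }
    void shootPhotons()                              { /* stand-in */ }
    bool ready = false;
};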
I have a highly functional beta here and I'm working as we speak on
preparing it for release to my beloved POV-Ray community for review.
Best Regards,
George Pantazopoulos
http://www.gammaburst.net
On 10 Jun 2004 13:00:30 -0400, "Rafal 'Raf256' Maj" <spa### [at] raf256com> wrote:
> RENDER LINE 76 PIXELS 64-158
> RENDER LINE 77 PIXELS 60-140
povray +SR76 +ER76 +SC64 +EC158
povray +SR77 +ER77 +SC60 +EC140
http://www.cit.gu.edu.au/~anthony/graphics/imagick5/mosaics/#composite
ABX
abx### [at] abxartpl news:7glic0h9hssmch7sq5rh4ne9jj2tofmesv@4ax.com
> povray +SR76 +ER76 +SC64 +EC158
> povray +SR77 +ER77 +SC60 +EC140
And re-parse 10,000 lines of .pov, and load 10 .df3 files and 30 textures, for
each line? No, thanks ;)
--
http://www.raf256.com/3d/
Rafal Maj 'Raf256', home page - http://www.raf256.com/me/
Computer Graphics
On 11 Jun 2004 06:51:00 -0400, "Rafal 'Raf256' Maj" <spa### [at] raf256com> wrote:
> > povray +SR76 +ER76 +SC64 +EC158
> > povray +SR77 +ER77 +SC60 +EC140
>
> And re-parse 10,000 lines of .pov, and load 10 .df3 files and 30 textures,
> for each line?
You already have to do that after changing the finish of an isosurface, as in
the example you gave. Or do you have at hand a ready algorithm that makes it
easy to find the pixels in the image where the changed finish is visible
(including through reflections and transparency of other objects)?
And if you already know that you can recreate those few pixels because no
mirrors or transparent objects are involved, then I do not believe that all
10,000 lines, 10 .df3 files and 30 textures are *all* visible in those two
short rows of pixels. Conditional parsing is a popular solution.
ABX
abx### [at] abxartpl news:3n3jc09gk9e7bvt26knb90sk50b62ug0i6@4ax.com
> Or do you have at hand a ready algorithm that makes it easy to find the
> pixels in the image where the changed finish is visible (including
> through reflections and transparency of other objects)?
Of course we have. The raytracing procedure does *exactly* that; we only need
to add an extra value to what it returns (not only R, G, B but also information
like bool was_object_XXX_hit).
> And if you already know that you can recreate those few pixels because no
> mirrors or transparent objects are involved, then I do not believe that all
> 10,000 lines, 10 .df3 files and 30 textures are *all* visible in those two
> short rows of pixels. Conditional parsing is a popular solution.
That is also a good idea.
--
http://www.raf256.com/3d/
Rafal Maj 'Raf256', home page - http://www.raf256.com/me/
Computer Graphics
On 11 Jun 2004 07:23:17 -0400, "Rafal 'Raf256' Maj" <spa### [at] raf256com> wrote:
> abx### [at] abxartpl news:3n3jc09gk9e7bvt26knb90sk50b62ug0i6@4ax.com
> > Or do you have at hand a ready algorithm that makes it easy to find the
> > pixels in the image where the changed finish is visible (including
> > through reflections and transparency of other objects)?
>
> Of course we have.
Of course we haven't. We could have it if somebody implemented it. IIRC you
were already talking about this in the past, and AFAIR you promised to deliver
a patch with something like this. But you didn't, IIRC. Did you?
> The raytracing procedure does *exactly* that; we only need to add
> an extra value to what it returns (not only R, G, B but also information
> like bool was_object_XXX_hit).
At the risk of repeating myself: are you aware of how many objects are hit for
each pixel in radiosity scenes, with every intersection? Do you know the memory
cost behind that? Do you know that antialiasing increases it? Please do not
theorize, Rafal. Take the compiler you already have and finally prove something
to us! I would be happy to see it.
ABX
abx### [at] abxartpl news:ek6jc0pqg1313gv43dqo485du7mag1h7re@4ax.com
> At the risk of repeating myself: are you aware of how many objects are hit
> for each pixel in radiosity scenes, with every intersection? Do you know
> the memory cost behind that?
Who said that each ray should carry the full history of which objects it
depends on (which objects this ray or its children hit, etc. ;) )? It is the
other way around: you have one object, e.g. the 27th isosurface in the scene,
you store its pointer globally, and each ray only carries one boolean piece of
information - did it hit that object, YES or NO.
> Do you know that antialiasing increases it?
It doesn't, in the case described above.
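A sketch of that bookkeeping (names are invented for illustration, not existing
POV-Ray code): one globally watched object, a single boolean set while tracing a
camera ray and all of its children, and a per-pixel flag recording which pixels
would need a re-render.

// Hypothetical bookkeeping for re-rendering only the pixels that "touch" a
// changed object. The watched object is stored globally; every intersection
// found for the current camera ray (or its child rays) may set one boolean,
// and that boolean then marks the pixel as dirty.
#include <cstddef>
#include <vector>

struct Object;                                // whatever the renderer uses internally

const Object* watchedObject = nullptr;        // e.g. the 27th isosurface in the scene
bool rayTouchedWatched = false;               // reset before tracing each pixel

// Called from the intersection code for every object a ray (or child ray) hits.
void noteHit(const Object* hit)
{
    if (hit == watchedObject)
        rayTouchedWatched = true;             // one flag only, so extra antialiasing
}                                             // samples do not increase the memory cost

std::vector<bool> pixelDirty;                 // sized to width * height elsewhere

// Called after all samples for pixel (x, y) have been traced.
void markPixel(int x, int y, int imageWidth)
{
    if (rayTouchedWatched)
        pixelDirty[static_cast<std::size_t>(y) * imageWidth + x] = true;
    rayTouchedWatched = false;                // ready for the next pixel
}

A later pass would then re-trace only the pixels whose flag is set, on top of
the previously rendered image.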
> Please do not theorize, Rafal. Take the compiler you already have and
> finally prove something to us! I would be happy to see it.
I would also be happy to, if only I had some free time...
--
http://www.raf256.com/3d/
Rafal Maj 'Raf256', home page - http://www.raf256.com/me/
Computer Graphics
On 11 Jun 2004 11:07:16 -0400, "Rafal 'Raf256' Maj" <spa### [at] raf256com> wrote:
> > At the risk of repeating myself: are you aware of how many objects are
> > hit for each pixel in radiosity scenes, with every intersection? Do you
> > know the memory cost behind that?
>
> Who said that each ray should carry the full history of which objects it
> depends on (...) you have one object, e.g. the 27th isosurface in the scene,
> you store its pointer globally, and each ray only carries one boolean piece
> of information - did it hit that object, YES or NO.
So your "proposition" is supposed to work only as long as you are modifying one
object?
(Note that the object pattern does not use the Trace routine.)
ABX