I have been using POV-Ray for roughly 48 hours now, having tried a couple of the
tutorials. POV-Ray 3.6.1 has been successfully installed under SuSE 11.2 (64-bit)
running on an Intel Celsius with 4 x Xeon 2.66GHz.
I am trying to raytrace a number of anatomical structures (mesh2 objects). I
have successfully raytraced a single object with a simple camera setup:
location <0,1000,0>
look_at <0,0,0>
The up vector is <0,0,1>, i.e. along the positive z-axis.
I now want to orbit the camera around the structure in the xy plane - it is
simple enough to work out the necessary location coordinates of the camera. In
the first instance only the camera position is altered for the orbit, so the up
vector remains unaltered(?)
Secondly, I want to be able to rotate the camera about the direction vector
-(look_at - location), which means I need to update the up vector accordingly.
My questions are:
i) Is it better to rotate the camera about the direction vector, or to apply the
inverse camera matrix to the structure (rotating and translating it) and keep
the camera fixed, thereby generating the same image?
ii) How can I automate the orbit? Do I need to generate a .pov file for each
camera position? Can I read in one file containing the mesh2 object and then
pass a number of simple "camera" files to POV-Ray containing the necessary
camera setup? Or should I simply define all camera positions in the .pov file
and let POV-Ray serially raytrace each camera (60 < number of cameras <= 180)?
iii) Is it possible to write a .pbm file directly with POV-Ray, since I am really
only interested in whether an image pixel is inside (1) or outside (0) the mesh?
As far as I can tell POV-Ray outputs .png, .tga or .ppm. Since speed is critical
I want to avoid having to convert the images into .pbm files (which are used for
further processing).
iv) Is there a way to batch run the raytracing of multiple structures, either in
series or preferably in parallel on different CPUs?
Sorry, this post is rather long.
Thanks
On 01.09.2010 11:04, bobsta wrote:
> i) Is it better to rotate the camera about the direction vector, or to apply the
> inverse camera matrix to the structure (rotating and translating it) and keep
> the camera fixed, thereby generating the same image?
As you are /thinking/ about the animation in terms of a camera rotation,
that's what you should go for. It will also prevent you from having to
rotate the light source(s), sky sphere, and whatever else you have in
the scene.
> ii) How can I automate the orbit? Do I need to generate a .pov file for each
> camera position? Can I read in one file containing the mesh2 object and then
> pass a number of simple "camera" files to POV-Ray containing the necessary
> camera setup? Or should I simply define all camera positions in the .pov file
> and let POV-Ray serially raytrace each camera (60 < number of cameras <= 180)?
POV-Ray has some built-in features to help create animations. By
specifying the command line parameter (e.g.) "+KFF100" (or INI-file
parameter "Final_Frame=100") you tell POV-Ray to render the scene file
100 times, varying a variable named "frame_number" from 1 to 100 and
appending the same number to the output file name. There's also a
"clock" variable, varying from 0.0 to 1.0 by default.
In your scene file, you would then use just a single camera, but compute
its position from "frame_number" or "clock", e.g.
camera {
  rotate y*360*clock
}
which would have the camera position rotate around the Y axis in steps
of 3.6 degrees per frame (presuming Final_Frame=100).
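For instance, an orbit camera along the lines of the original post might look
like this (a sketch only, untested; the position and up vector are taken from
the first message, with the orbit about the z axis):
camera {
  location <0, 1000, 0>  // starting position from the original post
  up <0, 0, 1>           // +z is up
  look_at <0, 0, 0>
  rotate z*360*clock     // one full orbit in the xy plane over the animation
}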
If you absolutely /need/ to go through the pain of manually defining the
individual camera positions, you'd typically use a "#switch" construct.
See also sections 2.3.8 "Making Animations" and 3.2.1.3.6 "Built-in
Variables" of the documentation.
> iii) Is it possible to write a .pbm file directly with POV-Ray, since I am really
> only interested in whether an image pixel is inside (1) or outside (0) the mesh?
> As far as I can tell POV-Ray outputs .png, .tga or .ppm. Since speed is critical
> I want to avoid having to convert the images into .pbm files (which are used for
> further processing).
No - POV-Ray being raytracing software, file formats with a bit depth
of 1 are a rarely requested feature. The current v3.7 beta does
support .pgm output as well, but that's as close as you'll get.
> iv) Is there a way to batch run the raytracing of multiple structures, either in
> series or preferably in parallel on different CPUs?
I know of two "out-of-the-box" ways of rendering a set of images in a batch:
(A) Use an external script, such as a Unix shell script.
(B) Use the built-in animation mechanism. You can not only #switch
between different cameras or objects, but even whole scenes if you so wish.
As for making use of multiple CPUs or CPU cores, there are also
different approaches:
(A) Simply run multiple instances of POV-Ray in parallel, each rendering
a different frame or scene of your batch. (For best performance, make
sure not to run more instances than you have (virtual) CPU cores.)
(B) Go for the POV-Ray 3.7 beta, which is pretty stable to use by now,
and does symmetric multiprocessing out-of-the-box, taking all cores it
can get by default.
For your particular situation, solution (A) may be better suited though,
as during scene file parsing even the POV-Ray beta uses only a single
core, and from your description I guess your scenes may take more time
to parse than to actually render.
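Incidentally, a single animation can itself be split across parallel instances
with the frame-subset options +SF (Subset_Start_Frame) and +EF
(Subset_End_Frame); for example, on a two-core box (file name hypothetical):
povray +Iscene.pov +KFF180 +SF1 +EF90 &
povray +Iscene.pov +KFF180 +SF91 +EF180 &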
Many thanks to clipka for the detailed response. In the end I found it easier to
declare a global object for each mesh2 in an appropriate .inc file and set up
as follows:
#include"Structure.inc"
camera {
perspective
location <0 0 -100>
up <0,-1,0>
right <0.762,0,0>
angle 11.99
look_at<0 0 0>
rotate -z*clock*360
}
light_source { <0, 10, 0> color rgb 1 }
object{Structure translate<-0.34173,-19.6812,-25.5002>}
Rendering 180 scenes (+Q0) for a single mesh2 object took approximately 15
seconds, of which 8 seconds were taken to parse the file:
Parse Time:   0 hours 0 minutes 8 seconds (8 seconds)
Photon Time:  0 hours 0 minutes 0 seconds (0 seconds)
Render Time:  0 hours 0 minutes 7 seconds (7 seconds)
Total Time:   0 hours 0 minutes 15 seconds (15 seconds)
My questions are:
i) Does the fact that over 50% of the total time is spent parsing the file mean
that I should rotate the camera instead, presumably reducing the time taken to
parse the file?
ii) I set the position of the camera at <0,0,-100>, i.e. 1 m from the coordinate
origin. The viewport should cover a region of 210 x 160 mm in the plane z=0,
where all pixels are 1x1 mm2 (210x160 pixels).
I can use the half-angle to set the width of the viewport, i.e.
2*arctan(10.5/100). But how do I set the height of the viewport? When rendering
is completed, all rendered .png files have a resolution of 320x200 (which is
presumably the default).
Many thanks
> Many thanks to clipka for the detailed response. In the end I found it easier to
> declare a global object for each mesh2 in an appropriate .inc file and set up
> as follows:
>
> #include "Structure.inc"
>
> camera {
>   perspective
>   location <0, 0, -100>
>   up <0, -1, 0>
>   right <0.762, 0, 0>
>   angle 11.99
>   look_at <0, 0, 0>
>   rotate -z*clock*360
> }
>
> light_source { <0, 10, 0> color rgb 1 }
>
> object { Structure translate <-0.34173, -19.6812, -25.5002> }
>
> Rendering 180 scenes (+Q0) for a single mesh2 object took approximately 15
> seconds, of which 8 seconds were taken to parse the file:
>
> Parse Time:   0 hours 0 minutes 8 seconds (8 seconds)
> Photon Time:  0 hours 0 minutes 0 seconds (0 seconds)
> Render Time:  0 hours 0 minutes 7 seconds (7 seconds)
> Total Time:   0 hours 0 minutes 15 seconds (15 seconds)
>
>
> My questions are:
>
> i) Does the fact that over 50% of the total time is spent parsing the file mean
> that I should rotate the camera instead, presumably reducing the time taken to
> parse the file?
That won't change anything. The whole scene gets parsed for each frame of
an animation.
If you use the "real time animation" feature, which effectively skips
reparsing, you won't get any file output.
>
> ii) I set the position of the camera at <0,0,-100>, i.e. 1 m from the coordinate
> origin. The viewport should cover a region of 210 x 160 mm in the plane z=0,
> where all pixels are 1x1 mm2 (210x160 pixels).
>
> I can use the half-angle to set the width of the viewport, i.e.
> 2*arctan(10.5/100). But how do I set the height of the viewport? When rendering
> is completed, all rendered .png files have a resolution of 320x200 (which is
> presumably the default).
You can always set the resolution: add +W800 +H600 to the command line or
the INI file to get 800 by 600 images. With the Windows version, there
is a drop-down list that enables you to select a variety of resolutions. You
can add more to this list if you want; just edit the quickres.ini file.
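For the 210x160-pixel images described in the question, that would presumably
be something like (file name hypothetical):
povray +Iscene.pov +W210 +H160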
>
> Many thanks
Alain
Hi
I now need to modify my camera orbit slightly so that I start at a particular
angle and stop at a particular angle, rendering a number of scenes in between
(the number of frames and the start and stop angles are determined at run time).
I also need to enable both clockwise and anticlockwise rotations.
Thus I need something like
rotate -z*(startAngle+(clock*stopAngle))
or rotate +z*(startAngle+(clock*stopAngle))
depending on the orientation ((anti)clockwise) of the orbit.
i) How do I pass the parameters startAngle and stopAngle correctly into a
generic script?
ii) It may happen during the camera orbit that the term in brackets is < 0
or > 360 degrees. Does POV-Ray do bounds checking to ensure 0 < angle < 360?
Or can I safely rely on basic trigonometry to give the correct mapping,
e.g. 365 degrees => +5 degrees and -27 degrees => +333 degrees?
iii) To be more general, I would like to allow the camera to rotate around the
look_at vector and then orbit the now-rotated camera about the scene. This means
that I need to update the up vector accordingly. Given that I am already
translating the scene to be centred at <0,0,0>, what is the easiest way to
accomplish this: should I rotate the scene in the opposite direction by the
appropriate angle, or the camera in the desired direction?
Thanks
> Hi
>
> I now need to modify my camera orbit slightly so that I start at a particular
> angle and stop at a particular angle, rendering a number of scenes in between
> (the number of frames and the start and stop angles are determined at run time).
> I also need to enable both clockwise and anticlockwise rotations.
>
> Thus I need something like
>
> rotate -z*(startAngle+(clock*stopAngle))
>
> or rotate +z*(startAngle+(clock*stopAngle))
>
> depending on the orientation ((anti)clockwise) of the orbit.
>
>
> i) How do I pass the parameters startAngle and stopAngle correctly into a
> generic script?
>
> ii) It may happen during the camera orbit that the term in brackets is < 0
> or > 360 degrees. Does POV-Ray do bounds checking to ensure 0 < angle < 360?
> Or can I safely rely on basic trigonometry to give the correct mapping,
> e.g. 365 degrees => +5 degrees and -27 degrees => +333 degrees?
>
> iii) To be more general, I would like to allow the camera to rotate around the
> look_at vector and then orbit the now-rotated camera about the scene. This means
> that I need to update the up vector accordingly. Given that I am already
> translating the scene to be centred at <0,0,0>, what is the easiest way to
> accomplish this: should I rotate the scene in the opposite direction by the
> appropriate angle, or the camera in the desired direction?
>
> Thanks
The rotation in a rotate statement is not limited to 360 degrees; rotate
<1000, -557, 10> is perfectly legal. rotate 720 is the same as rotate
360 and rotate 0.
There is no single easiest way to do your rotations. It all depends on how you
perceive/conceive it.
If, in your mind, it's easier to represent the rotation as a rotation of
the camera, that's THE way you should do it. BUT, if your perception of
the rotation involves a rotation of the environment, then you should
leave the camera stationary and rotate everything else.
If you bind your whole scene into a big union, there is nothing
preventing you from rotating the complete scene as one entity. Once
that's done, rotating the camera +67 degrees around the Y axis is
exactly the same as rotating the scene -67 degrees around the same axis. Just
be sure that you also rotate your light source(s) with the rest of the scene.
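A minimal sketch of that idea, reusing the Structure object from earlier in the
thread (untested; the light source sits inside the union so that it rotates
with the scene):
#declare WholeScene = union {
  object { Structure }
  light_source { <0, 10, 0> color rgb 1 }
}
object { WholeScene rotate -y*67 }  // same view as rotating the camera +67 about y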
Alain
Alain <aze### [at] qwertyorg> wrote:
>
> The rotation in a rotate statement is not limited to 360 degrees. rotate
> <1000, -557, 10> is perfectly legal.
Does this statement mean rotate 1000 (i.e. +280) degrees about x, -557 (i.e.
-197 or +163) degrees about y and 10 degrees about z? Or does it define a
rotation axis <1000,-557,10>? Could you point me to the relevant part of the
documentation?
I think the easiest solution to my particular requirements is to write a C++
wrapper program to generate the necessary code for the desired rotation in the
.pov file.
In my previous post I was trying to ask if one could pass command-line arguments
for the start and stop angle variables, which would be used to correctly
initialise the corresponding variables at run time,
e.g. povray +Q0 +I<fileName>... <param1=startAngle> <param2=stopAngle>
But since I am already automatically generating the .inc file and a Unix script
to execute the .pov file(s) in parallel, I might as well set the start and stop
parameters by generating the .pov correctly at run time, i.e. generate a bespoke
.pov file based on user input to my program, and then execute the Unix script to
do the raytracing.
Is it possible to define an isotropic light source on the (shell) surface of a
sphere and direct all light rays at the origin? Basically I am only using POV-Ray
to do very crude raycasting to determine the projection of an anatomical
structure.
Mark
> rotate 720 is the same as rotate 360 and rotate 0.
>
> There is no single easiest way to do your rotations. It all depends on how you
> perceive/conceive it.
> If, in your mind, it's easier to represent the rotation as a rotation of
> the camera, that's THE way you should do it. BUT, if your perception of
> the rotation involves a rotation of the environment, then you should
> leave the camera stationary and rotate everything else.
>
> If you bind your whole scene into a big union, there is nothing
> preventing you from rotating the complete scene as one entity. Once
> that's done, rotating the camera +67 degrees around the Y axis is
> exactly the same as rotating the scene -67 degrees around the same axis. Just
> be sure that you also rotate your light source(s) with the rest of the scene.
> Alain
> Alain <aze### [at] qwertyorg> wrote:
>
>>
>> The rotation in a rotate statement is not limited to 360 degrees. rotate
>> <1000, -557, 10> is perfectly legal.
>
> Does this statement mean rotate 1000 (i.e. +280) degrees about x, -557 (i.e.
> -197 or +163) degrees about y and 10 degrees about z? Or does it define a
> rotation axis <1000,-557,10>? Could you point me to the relevant part of the
> documentation?
It rotates around the X axis, then it rotates around the Y
axis, then around the Z axis.
Rotations are always applied in that order.
http://wiki.povray.org/content/Documentation:Tutorial_Section_2.2#Rotate
>
> I think the easiest solution to my particular requirements is to write a C++
> wrapper program to generate the necessary code for the desired rotation in the
> .pov file.
>
> In my previous post I was trying to ask if one could pass command line arguments
> for the start and stop angle variables, which would be used to correctly
> initialise the corresponding variables at run-time.
>
> e.g. povray +Q0 +I<fileName>... <param1=startAngle> <param2=stopAngle>
Perfectly possible.
You can declare variables on the command line with the following construct:
declare=IDENTIFIER=FLOAT (yes, there are 2 "=" signs)
So, you can have: declare=Param1=12 declare=Param2=79
to set Param1 to 12 and Param2 to 79.
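In the scene file you would then typically guard such variables with defaults,
along these lines (a sketch; the camera values are borrowed from earlier posts):
#ifndef (Param1) #declare Param1 = 0; #end    // start angle, if not set on the command line
#ifndef (Param2) #declare Param2 = 360; #end  // sweep, if not set on the command line
camera {
  location <0, 0, -100>
  look_at <0, 0, 0>
  rotate -z*(Param1 + clock*Param2)
}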
>
>
> But since I am already automatically generating the .inc file and a Unix script
> to execute the .pov file(s) in parallel, I might as well set the start and stop
> parameters by generating the .pov correctly at run time, i.e. generate a bespoke
> .pov file based on user input to my program, and then execute the Unix script to
> do the raytracing.
If you need several images from various locations, you can use the
animation feature. Use the internal "clock" variable to control the
movement.
Just passing +KFFn on the command line will start the animation loop and
render n successive images.
>
> Is it possible to define an isotropic light source on the (shell) surface of a
> sphere and direct all light rays at the origin? Basically I am only using POV-Ray
> to do very crude raycasting to determine the projection of an anatomical
> structure.
No.
You can use other options (from fastest and crudest to slowest and nicest):
Using +Q0 ignores any lights and only uses full ambient illumination and
simple textures/pigments. Any transparency and reflection is ignored.
Fast and crude.
Use 6 or 8 shadowless lights placed as a cube or octahedron. Slightly
slower. You can use coloured lights to better show shapes.
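A sketch of the six-light variant, one shadowless light on each coordinate
half-axis (distances arbitrary):
light_source { <10, 0, 0> color rgb 1 shadowless }
light_source { <-10, 0, 0> color rgb 1 shadowless }
light_source { <0, 10, 0> color rgb 1 shadowless }
light_source { <0, -10, 0> color rgb 1 shadowless }
light_source { <0, 0, 10> color rgb 1 shadowless }
light_source { <0, 0, -10> color rgb 1 shadowless }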
Use an area_light instead of a regular point light to illuminate the scene
from several directions and hide the shadows. Be sure to use adaptive
sampling, starting with adaptive 0.
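For example (dimensions invented; "adaptive 0" starts the adaptive sampling at
its coarsest level):
light_source {
  <0, 100, 0> color rgb 1
  area_light <50, 0, 0>, <0, 0, 50>, 5, 5
  adaptive 0
  jitter
}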
Use radiosity with a white background and no actual light. It gives an
all-encompassing illumination. It takes more time to render and may not
be what you need for "crude" renders.
>
> Mark
Alain
Thanks for the reply. I wrote a C++ program to write the .pov file based on a
parameter file which specifies the number of orbits and the start, stop and step
angles.
This works fine. Executing the jobs in parallel takes approximately 30 s on a
4-core Intel Xeon 2.66GHz Linux box for two distinct structures and two arcs,
viz. four orbits (one for each structure and arc combination).
> No.
> You can use other options (from fastest and crudest to slowest and nicest):
>
Speed is of the utmost importance: quick and dirty is okay! However, with the
+Q0 switch set and a simple pigment, i.e.
texture {
  pigment {
    color rgb <1.00, 0.0, 0.0>
  }
}
for each mesh2 object, shadows are still rendered!
Sample .pov file:
#include "/home/mark/2step/PTV.inc"
camera {
  perspective
  location <0, 100, 0>
  up <0, 0, -1>
  right <-1.33, 0, 0>
  angle 11.99
  look_at <0, 0, 0>
  rotate -z*(178+(clock*-356))
}
light_source { <0, 10, 0> color rgb 1 }
object { PTV translate <-0.34173, -19.6812, -25.5002> }
Below is the script I generate, which uses the parallel program to execute the
jobs on different CPUs:
rayTracing.sh
#!/bin/bash
parallel 8 povray -D mlc[Synergy]+Q0 +KFF178 +I/home/mark/2step/PTV0.pov
povray -D mlc[Synergy]+Q0 +KFF178 +I/home/mark/2step/Rektum0.pov
povray -D mlc[Synergy]+Q0 +KFF178 +I/home/mark/2step/PTV1.pov
povray -D mlc[Synergy]+Q0 +KFF178 +I/home/mark/2step/Rektum1.pov
I experimented by setting up six light_sources at <10,0,0>, <-10,0,0>, <0,10,0>,
<0,-10,0>, <0,0,10> and <0,0,-10>. This didn't improve matters and increased the
raytracing time from 30 to 45 seconds! Clearly I am missing something here. Any
suggestions?
> Using +Q0 ignores any lights and only uses full ambient illumination and
> simple textures/pigments. Any transparency and reflection is ignored.
> Fast and crude.
>
> Use 6 or 8 shadowless lights placed as a cube or octahedron. Slightly
> slower. You can use coloured lights to better show shapes.
>
> Use an area_light instead of a regular point light to illuminate the scene
> from several directions and hide the shadows. Be sure to use adaptive
> sampling, starting with adaptive 0.
>
> Use radiosity with a white background and no actual light. It gives an
> all-encompassing illumination. It takes more time to render and may not
> be what you need for "crude" renders.
> > Mark
>
> Alain