Somewhere in the distant past I had a setup where I could render an
image in tiles by rotating the camera towards the region to be rendered
and shearing the camera to compensate for the 'distortions'. It required
a 'long lens', so it wasn't extremely useful. That code is lost though,
and the sad part is that I can't write it again as I don't remember how.
Anyway, I thought to do it with the user_defined camera. No rotations or
shearing (I hope), just select a small section of the image and render
it at the proper resolution. But I fail.
In the scene below, two cameras: the user_defined one gives an identical
image to the commented-out perspective one, pixel perfect says
ImageMagick (without AA). With the commented-out part of the
user_defined camera I intend to select the top left quadrant and render
that to a square image. I get an image of a quarter sphere, not in the
position I want and stretched in strange ways. It looks as if the
"canvas" needs to be moved. But how?
ingo
---%<------%<------%<---
#version 3.8;
global_settings {assumed_gamma 1}
// +w400 +h400 +a0.001 +am3
camera {
user_defined
location 0
direction{
function{u*tan(radians(90)/2)}
function{v*tan(radians(90)/2)}
//function{select((u>-0.5 & u<0 ),0,0,u*radians(90)/2)}
//function{select((v> 0 & v<0.5),0,0,v*radians(90)/2)}
function{.5}
}
}
//camera{
// perspective
// location 0
// look_at <0,0,1>
// direction z
// right x
// up y
// angle 90
//}
background{rgb 1}
light_source {<100,100,-100> rgb 1}
sphere{<0,0 , 1>,0.5 texture{pigment{checker}} scale 5}
sphere{<0,0.5, 1>,0.2 pigment{rgb z} scale 5}
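For anyone who wants to check the numbers: the direction functions above
can be transcribed to a few lines of Python (my own sketch, assuming the
screen coordinates u and v each run from -0.5 to +0.5 across the image).
The edge ray comes out 45 degrees off-axis, i.e. a 90 degree horizontal
field of view, matching the commented-out perspective camera:

```python
import math

def ray_dir(u, v, half_angle_deg=45.0):
    """Direction the user_defined camera assigns to screen point (u, v),
    with u and v assumed to run from -0.5 to +0.5 across the image."""
    t = math.tan(math.radians(half_angle_deg))
    return (u * t, v * t, 0.5)

# The centre ray looks straight down +z ...
assert ray_dir(0, 0) == (0.0, 0.0, 0.5)

# ... and at the right image edge (u = 0.5) the ray is 45 degrees
# off-axis, so the full horizontal field of view is 90 degrees.
dx, _, dz = ray_dir(0.5, 0)
assert abs(math.degrees(math.atan2(dx, dz)) - 45.0) < 1e-9
```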
On 21.01.2019 at 17:24, ingo wrote:
> In the scene below, two cameras: the user_defined one gives an identical
> image to the commented-out perspective one, pixel perfect says
> ImageMagick (without AA). With the commented-out part of the
> user_defined camera I intend to select the top left quadrant and render
> that to a square image. I get an image of a quarter sphere, not in the
> position I want and stretched in strange ways. It looks as if the
> "canvas" needs to be moved. But how?
I'm not exactly sure what you want to achieve.
(A) The easiest way to render an image in tiles is to tell POV-Ray to
create a full-sized image but only render a subset of it, leaving the
remaining pixels blank (transparent if the image file format supports
it, or black otherwise). Multiple such images can later be merged with
an appropriate operation (addition or overlay, depending on whether
unrendered pixels are black or transparent).
If this is what you want, it can be easily achieved without even using a
user-defined camera; simply specify the full image resolution, and use
the partial output options (`+SCn +ECn +SRn +ERn`) to have POV-Ray
render only the specified subset of the image.
Note that most file formats will compress uniform regions pretty well,
so the unrendered portions won't add much to the on-disk size of the
individual image files.
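As a sketch of the bookkeeping this involves (plain Python, function
name my own invention), the values to feed the partial-output options
for tile (col, row) of an n x n grid over the full image would look
something like this, on the assumption that the integer forms of
+SC/+EC/+SR/+ER are 1-based, inclusive pixel indices:

```python
def tile_region(width, height, n, col, row):
    """Pixel region covered by tile (col, row), counted from the top
    left, in an n x n grid over a width x height image; returned as
    (start_col, end_col, start_row, end_row), 1-based and inclusive,
    for use with POV-Ray's +SCn +ECn +SRn +ERn options."""
    tw, th = width // n, height // n
    return (col * tw + 1, (col + 1) * tw,
            row * th + 1, (row + 1) * th)

# Top-left and bottom-right quadrants of a 400x400 render:
assert tile_region(400, 400, 2, 0, 0) == (1, 200, 1, 200)
assert tile_region(400, 400, 2, 1, 1) == (201, 400, 201, 400)
```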
(B) If for some reason this is not an option, and you need to render to
files that encompass just the regions actually rendered, you could
probably accomplish this using a shearing transformation on the camera;
but if you already have a user-defined camera set up, it should be a
piece of cake to use that instead.
> camera {
> user_defined
> location 0
> direction{
> function{u*tan(radians(90)/2)}
> function{v*tan(radians(90)/2)}
> //function{select((u>-0.5 & u<0 ),0,0,u*radians(90)/2)}
> //function{select((v> 0 & v<0.5),0,0,v*radians(90)/2)}
> function{.5}
> }
> }
This looks like you were trying to aim for (A).
If (B) is what you are after, all you need to do is scale and offset u
and v, like so:
direction{
//function{u*tan(radians(90)/2)}
//function{v*tan(radians(90)/2)}
function{(u*2-1)*tan(radians(90)/2)}
function{(v*2+1)*tan(radians(90)/2)}
function{.5}
}
This should render an image that shows - in higher resolution - one
quadrant of the image. (Not quite sure if it's the top left or bottom
left quadrant, but it should be the one you were trying for.)
(The "quarter sphere" and "not in the position I want" are because you
got the approach wrong, and the "stretched in strange ways" is because
you forgot the `tan()` in the commented-out functions.)
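(Whatever the exact constants, the general recipe is the same: squeeze
the screen coordinate into the sub-interval covered by the tile before
applying the tan(). A Python sketch of that remap, again assuming u runs
from -0.5 to +0.5:)

```python
def remap(u, lo, hi):
    """Map a screen coordinate u in [-0.5, 0.5] linearly onto the
    sub-interval [lo, hi] of the full image's coordinate range."""
    return lo + (u + 0.5) * (hi - lo)

# The whole image as one "tile" is the identity map ...
assert remap(-0.5, -0.5, 0.5) == -0.5
assert remap(0.5, -0.5, 0.5) == 0.5
# ... while the left half of the image fills the whole output:
assert remap(-0.5, -0.5, 0.0) == -0.5
assert remap(0.5, -0.5, 0.0) == 0.0
```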
in news:5c45feaf$1@news.povray.org clipka wrote:
> If (B) is what you are after, all you need to do is scale and offset u
> and v, like so:
>
I'll try that later, thanks!! I think I see where my train of thought
derailed. Time for some experimentation.
I'm aware of the command-line option to render regions. It requires some
post-processing, and it seems more complex to integrate rendering a
whole tile set into an animation while adapting textures to the
"zoom level".
Goal is to render tile sets for a viewer like Leaflet.js or Marzipano:
http://www.marzipano.net/demos/cube-generated/index.html
http://www.marzipano.net/demos/cube-multi-res/index.html
ingo
in news:5c45feaf$1@news.povray.org clipka wrote:
> all you need to do is scale and offset u
> and v,
It seems to work very well. The macro below is a bit rough and
unchecked; I still have to take a closer look at the "zoom factors" etc.
Thanks again!
ingo
---%<------%<------%<---
#version 3.8;
global_settings {assumed_gamma 1}
#macro TileCam(Loc,Face,Zoom,Col,Row) //Face not implemented yet
/*
Renders a single square tile for the given tile position
Set image resolution with aspect ratio 1:1, most tile renders
use 256x256
//+w256 +h256 +a0.001 +am3
Number of tiles = 4^zoomlevel
Tiles:Side = sqrt(4^zoomlevel)
Zoomlevel 0
one square 256x256
Z C R
0 (0,0,0)
Zoomlevel 1
4 squares
Z C R
1 (1,0,0) (1,1,0)
(1,0,1) (1,1,1)
Zoomlevel 2
16 squares
Z C R
2 (2,0,0) (2,1,0) (2,2,0) (2,3,0)
(2,0,1) (2,1,1) (2,2,1) (2,3,1)
(2,0,2) (2,1,2) (2,2,2) (2,3,2)
(2,0,3) (2,1,3) (2,2,3) (2,3,3)
Loc (vec): Camera location
Zoom (int): Zoomlevel
resulting amount of tiles per zoomlevel = pow(4,zoom)
max zoom level = ...?
Face (str): Letter describing the direction to look into
"F": front look_at +z
"B": back look_at -z
"U": up look_at +y
"D": down look_at -y
"R": right look_at +x
"L": left look_at -x
Col (int): Column coordinate to render, top,left = 0
Row (int): Row coordinate to render, top,left = 0
*/
#local Map05 = function(x,y,z){((x-y)/(z-y))-0.5};
#local TileTotal = pow(4,Zoom);
#local TileSide = sqrt(TileTotal);
#local TileLength = 1/TileSide;
//TODO:
//check col and row index fit inside zoomlevel else error
#local TileUStart = Map05(Col*TileLength,0,1);
#local TileUC = TileUStart+(TileLength/2);
#local TileVStart = Map05(Row*TileLength,1,0);
#local TileVC = TileVStart-(TileLength/2);
#debug concat("Tiles Total : ",str(TileTotal , 0,0 ),"\n")
#debug concat("Tiles Side : ",str(TileSide , 0,0 ),"\n")
#debug concat("Tiles Length : ",str(TileLength, 0,4 ),"\n\n")
#debug concat("TileUStart : ",str(TileUStart,10,10),"\n")
#debug concat("TileVStart : ",str(TileVStart,10,10),"\n\n")
#debug concat("TileUC : ",str(TileUC, 10,10),"\n")
#debug concat("TileVC : ",str(TileVC, 10,10),"\n\n")
camera {
user_defined
location Loc
direction{
function{((1/TileSide*u)+TileUC)*tan(radians(90)/2)}
function{((1/TileSide*v)+TileVC)*tan(radians(90)/2)}
function{.5}
}
}
#end
TileCam(<0,0,0>,"F",1,0,0)
background{rgb 1}
light_source {<100,100,-100> rgb 1}
sphere{<0,0 , 2>,0.5 texture{pigment{checker}}}
sphere{<0,0.5, 2>,0.2 pigment{rgb z}}
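The tile-centre arithmetic in the macro can be sanity-checked outside
POV-Ray; a Python transcription of the same formulas (my own
transcription, not part of the scene):

```python
def map05(x, a, b):
    """The macro's Map05: map x from the range [a, b] onto [-0.5, 0.5]."""
    return (x - a) / (b - a) - 0.5

def tile_centre(zoom, col, row):
    """Centre (uc, vc) of tile (col, row) at the given zoom level,
    mirroring the TileUC/TileVC computation above (v axis inverted,
    so row 0 is the top row)."""
    side = 2 ** zoom          # sqrt(4**zoom) tiles per side
    length = 1 / side
    uc = map05(col * length, 0, 1) + length / 2
    vc = map05(row * length, 1, 0) - length / 2
    return uc, vc

# Zoom 0: the single tile is centred on the image.
assert tile_centre(0, 0, 0) == (0.0, 0.0)
# Zoom 1, tile (0,0): centre of the top-left quadrant.
assert tile_centre(1, 0, 0) == (-0.25, 0.25)
```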
ingo <ing### [at] tagpovrayorg> wrote:
> Goal is to render tile sets for a viewer like Leaflet.js or Marzipano:
> http://www.marzipano.net/demos/cube-generated/index.html
> http://www.marzipano.net/demos/cube-multi-res/index.html
So, I'm not sure what the _exact_ parameter requirements for your tiles
are, but check out these and see if they help any:
http://paulbourke.net/miscellaneous/cubemaps/
http://paulbourke.net/stereographics/stereopanoramic/
http://www.f-lohmueller.de/pov_tut/backgrnd/p_sky9.htm
https://ru-ru.facebook.com/notes/panoramic-photographers-on-facebook/reprojecting-panoramas-using-pov-ray/797642956946484/
If you need 96 tiles, maybe you can make an animation that uses arrays of camera
location and look_at vectors to take snapshots of a rendered scene from frame 1
to frame 96.
Likely you can get away with 1/6 of the calculations and just use a macro to
rotate a set of basis vectors 90 degrees.
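That basis-rotation idea can be sketched in a few lines of Python (my
own sketch; the axis convention is just one possible choice). Rotating
the forward vector 90 degrees about y walks through the four side faces
of a cube map; an analogous rotation about x would give the up and down
faces:

```python
def rot_y90(v):
    """Rotate a vector 90 degrees about the y axis (+z maps to +x)."""
    x, y, z = v
    return (z, y, -x)

# Repeatedly rotating the front-facing view direction gives the
# four side faces of a cube map:
faces = [(0, 0, 1)]                    # start looking along +z
for _ in range(3):
    faces.append(rot_y90(faces[-1]))
assert faces == [(0, 0, 1), (1, 0, 0), (0, 0, -1), (-1, 0, 0)]
```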
in news:web.5c46236e3d2afe54765e06870@news.povray.org Bald Eagle wrote:
> So, I'm not sure what the _exact_ parameter requirements for your
> tiles are
In high-res photography they shoot thousands of images that are then
stitched together into one big image. Then they crop it into 256x256
tiles, scale the image down and again crop it into tiles, and so on,
until there is a single small tile that holds the full image. These
pyramid-stacked tiles are then put in a fitting directory structure so
the web server can present the right image for the right tile when
zooming in or moving the view.
This only works well with long lenses, 600 mm or more, or you have to
use shift-and-tilt lenses / field cameras for objects close to the
camera (the equivalent of POV-Ray's camera shear).
Now, with POV-Ray, and I guess many other renderers, you can do the
opposite: trace the first low-res image, chop it into tiles and magnify
those under your copier, without losing resolution. This is kind of what
I do with clipka's camera, as in the macro I posted.
Of course we could render the ultra-high-res image right away, but going
from low to high res may have some advantages.
You can use resolution-dependent textures: the earth looks like a shiny
marble from space, but when your nose hits the pavement it looks rather
rough. You can add a finer layer of texture at every zoom level, or add
an extra function layer to your isosurface. When rendering a detail you
won't have to parse the whole scene (beware of mirrors). You can replace
objects outside the field of view with simpler ones without having too
big an influence on radiosity data, etc.
When rendering 360 cubes you could reuse the lower-res image from the
previous round as an image map behind the camera, so you have even less
of the scene to parse. Maybe you can even reuse radiosity data from that
run.
But all in all it won't be faster, only more flexible, compared to one
big image.
So I'm creating a set of macros to control such a tileable camera. The
basics work, but I got stuck again on recreating a general perspective
camera based on location, look_at and angle :(
#declare Loc = <0,1,0>;
#declare Look = <0,1,1>;
#declare Angle = 69;
#declare AR = image_width/image_height;
#declare DirLen = 0.5*vlength(Right)/tan((radians(Angle))/2);
camera {
user_defined
location Loc
direction{
function{(u)*tan(pi/4)*AR} ....
function{(v)*tan(pi/4)} ....
function{DirLen}
}
}
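For what it's worth, the DirLen line relies on the standard POV-Ray
relation |direction| = 0.5 * |right| / tan(angle/2) (note that Right
isn't declared in the snippet above; presumably AR*x was intended). A
quick Python check of the 90-degree case:

```python
import math

def direction_length(angle_deg, right_len=1.0):
    """Length of the camera direction vector giving a horizontal field
    of view of angle_deg for a right vector of the given length
    (the standard POV-Ray relation 0.5*|right|/tan(angle/2))."""
    return 0.5 * right_len / math.tan(math.radians(angle_deg) / 2)

# A 90-degree camera with "right x" has |direction| = 0.5, which is
# exactly the constant used in the user_defined cameras earlier in
# this thread.
assert abs(direction_length(90) - 0.5) < 1e-9
```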
ingo
in news:5c45feaf$1@news.povray.org clipka wrote:
> I'm not exactly sure what you want to achieve.
Thinking about rendering tiles a bit more, I intend to render them in an
animation where each frame number results in a specific tile location.
This can be parallelised relatively easily: the master just sends out
the frame number to be rendered, and the logic in the .pov / .inc file
makes sure the proper tile is rendered.
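That frame-to-tile logic could be sketched like this (Python; the
numbering scheme, zoom level z occupying 4^z consecutive frames, is my
own assumption):

```python
def frame_to_tile(frame):
    """Decode a 1-based animation frame number into (zoom, col, row),
    with zoom level z occupying 4**z consecutive frames: frame 1 is
    zoom 0, frames 2..5 are zoom 1, frames 6..21 are zoom 2, etc."""
    index = frame - 1
    zoom = 0
    while index >= 4 ** zoom:
        index -= 4 ** zoom
        zoom += 1
    side = 2 ** zoom               # tiles per side at this zoom level
    return zoom, index % side, index // side

assert frame_to_tile(1) == (0, 0, 0)
assert frame_to_tile(2) == (1, 0, 0)
assert frame_to_tile(5) == (1, 1, 1)
assert frame_to_tile(21) == (2, 3, 3)
```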
A quick search turns up a master-worker configuration and logic in
"Parallel image computation in clusters with task-distributor" by
Christian Baun, at
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4870533/ . The stitching
process would be easier than described there, as there is nothing to be
removed from the resulting images.
ingo