I'm using POV-Ray to render graphical elements to be used in a computer game.
It's a board game, so I have a camera looking at a board (at a nice perspective
angle), and I have game pieces on the board.
For the game board, it's easy... I render the full image with no pieces. Done.
To generate the game piece graphics, I generate full-board images with subsets
of pieces in situ on the board, but with the board set as no_image, and the
background pixels transparent. I can then load several of these full images
into my game program, and manually select slices of them for a given piece. I
can have several game pieces in each image (as long as they have
non-overlapping rectangular bounding boxes).
I also considered generating individual bounded images for the pieces (rendered
at all the various positions). This essentially amounts to crops of the images
in option 1. That option seems to only complicate everything below... so I
think the first option is better, but I'm open to suggestions.
******* The problem: *******
How do I know where (in output-image pixels) to make my image slices for each
game piece?
I.e. inside my game program, I load the full image with pieces rendered in
several places. How can I find the (x, y, width, height) of a slice from the
rendered image to overlay the board so that I show that one game piece?
I could manually go in and figure out where all these slices need to be made,
but that's a huge pain, and if I decide I need to change my camera position or
something, I would have to do this again. I want a programmatic solution.
******* A guess at a solution: *******
(which I don't know how to do)
If I could find the output-image pixel position for a given 3D coordinate in my
scene, I could pretty easily mark locations in the scene and export a text file
that has the pixel coordinates.
Better yet would be to find the pixel positions of the corners of a camera-view
bounding box for each piece. That way I wouldn't have to manually figure out
where those 3D dots should go to fully bound the object.
Is there a trick for doing this without writing a reverse ray-tracer in SDL? So
far, my camera is simple, so maybe this is not that hard...
camera {
  location <44, 27.5, 0>
  look_at <0.5, 0, 0>
  angle 39.5
  right <720/340, 0, 0>  // 720 x 340 output image
}
Thanks,
Malcolm
Wasn't it Slagheap who wrote:
>I'm using Povray to render graphical elements to be used in a computer game.
>It's a board game, so I have a camera looking at a board (at a nice perspective
>angle), and I have game pieces on the board.
>
>For the game board, it's easy... I render the full image with no pieces. Done.
>
>To generate the game piece graphics, I generate full-board images with subsets
>of pieces in situ on the board, but with the board set as no_image, and the
>background pixels transparent. I can then load several of these full images
>into my game program, and manually select slices of them for a given piece. I
>can have several game pieces in each image (as long as they have
>non-overlapping rectangular bounding boxes).
>
>I also considered generating individual bounded images for the pieces (rendered
>at all the various positions). This essentially amounts to crops of the images
>in option 1. That option seems to only complicate everything below... so I
>think the first option is better, but I'm open to suggestions.
You could use "screen.inc" to place the object wherever you want on the
screen. You know where it is because you put it there.
See the scenes/incdemo/screen.pov example file that comes with the
POV-Ray distribution.
Doing it that way, the second option is probably easier because there's
no danger of overlap.
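The trick behind screen.inc is that an object placed on the camera's image
plane shows up at a known pixel position. A rough Python sketch of that
mapping (not code taken from screen.inc itself), assuming a canonical camera
with location 0, direction z, right A*x, up B*y; the function name is made up:

```python
def screen_to_scene(px, py, W, H, A, B, depth=1.0):
    """Map an output-image pixel (px, py) to a 3D point at the given
    depth in front of a canonical camera (location 0, direction z,
    right A*x, up B*y).  Assumed math, not lifted from screen.inc."""
    x = (px / W - 0.5) * A * depth
    y = (0.5 - py / H) * B * depth
    return (x, y, depth)

# For a 720 x 340 render with right <720/340, 0, 0> and up y:
W, H = 720, 340
A, B = 720 / 340, 1.0
print(screen_to_scene(W / 2, H / 2, W, H, A, B))  # image centre -> (0.0, 0.0, 1.0)
```

An object translated to that point (and scaled by the same depth) lands at
pixel (px, py), which is why you "know where it is because you put it there".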
--
Mike Williams
Gentleman of Leisure
Slagheap wrote:
> How do I know where (in output-image pixels) to make my image slices for each
> game piece?
Why not render the positioned pieces into an image file with a
background color that none of the pieces have (black, transparent,
bright green, etc) and then write a program to find the pixels farthest
north, south, east, and west, and that will give you the 2D bounding
box? Or did I not understand the question?
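A minimal sketch of that scan in Python, with plain values standing in for
real pixel data (a real version would read RGBA tuples from the rendered PNG):

```python
def bounding_box(pixels, background):
    """Return (x, y, width, height) of the non-background region of an
    image given as a list of rows, or None if every pixel is background."""
    xs = [x for row in pixels for x, p in enumerate(row) if p != background]
    ys = [y for y, row in enumerate(pixels) if any(p != background for p in row)]
    if not xs:
        return None  # nothing rendered in this image
    return (min(xs), min(ys), max(xs) - min(xs) + 1, max(ys) - min(ys) + 1)

img = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]
print(bounding_box(img, 0))  # (1, 1, 2, 2)
```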
--
Darren New / San Diego, CA, USA (PST)
> To generate the game piece graphics, I generate full-board images with subsets
> of pieces in situ on the board, but with the board set as no_image, and the
> background pixels transparent. I can then load several of these full images
> into my game program, and manually select slices of them for a given piece. I
> can have several game pieces in each image (as long as they have
> non-overlapping rectangular bounding boxes).
Let me mention shadows.
When doing similar work, I often had the problem that, translated to your
situation, different game pieces cast shadows onto each other. This looks
bad when, in actual play, only the shadowed piece is present and not the
one casting the shadow. The same goes for reflections of pieces in shiny
parts of other pieces. So in any case I would recommend rendering each
piece separately. Of course, pieces failing to cast shadows onto each other
when they should is also unrealistic, but much less so.
Another issue is the shadows that pieces cast onto the board. You might
want those, in which case, unfortunately, everything below becomes more
complicated. So for the moment I assume you do not need them.
Also, let me mention antialiasing.
If you want some, then you should not just render your pieces with the
board set to no_image. Instead, you should render twice: first the piece
and the board normally, then just the piece with all textures replaced
by white on a black background. Afterwards, you use the non-black parts
of the second image (due to AA, they will not be pure white, mind you)
as a mask for the first image.
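In Python, the masking step could look like this rough sketch (pixel tuples
stand in for a real image library; the function name is made up):

```python
def apply_mask(color_rows, mask_rows):
    """Combine a normally rendered image with a white-on-black coverage
    render: the mask's grey level (0-255) becomes the alpha of the
    output pixel, so antialiased edges stay smooth."""
    out = []
    for crow, mrow in zip(color_rows, mask_rows):
        out.append([(r, g, b, m) for (r, g, b), m in zip(crow, mrow)])
    return out

color = [[(200, 10, 10), (200, 10, 10)]]
mask = [[255, 96]]  # fully covered pixel, partially covered edge pixel
print(apply_mask(color, mask))  # [[(200, 10, 10, 255), (200, 10, 10, 96)]]
```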
> I also considered generating individual bounded images for the pieces (rendered
> at all the various positions). This essentially amounts to crops of the images
> in option 1. That option seems to only complicate everything below... so I
> think the first option is better, but I'm open to suggestions.
What does it complicate? This is what I would suggest. Of course, it wastes
a bit of time, but very little: the time to parse the scene more often and
for POV-Ray to find out that some pixels are indeed transparent.
So now you have, for each piece at each position, an image which consists
mainly of transparent pixels. Then use some postprocessing tool to crop
it to the minimal rectangle. Write this tool yourself, unless somebody
else suggests an existing one.
The important point is that the tool should output, separately, the crop
coordinates it used. Plug these into your game source code. That's all.
Mark Weyer
"Mark Weyer" <nomail@nomail> wrote:
> Let me mention shadows.
> [snip]
I didn't mention shadows in my original message, but I have thought about them.
I basically just set up the lighting so that pieces will only ever cast shadows
onto the board, and not onto other pieces. (At least for the key light... the
fill lights would cast shadows, but those would hopefully be subtle enough
that their absence isn't noticed.)
For the board shadows, I rendered the board entirely without shadows, and then
again with shadows (from a typical-enough piece) at all locations. As with the
piece images, I was going to take crops from this shadow image to overlay the
non-shadowed board at the locations where the pieces are.
> Also, let me mention antialiasing.
> If you want some, then you should not just render your pieces with the
> board set to noimage. Instead, you should render twice, first the piece
> and the board normally, then just the piece with all textures replaced
> by white and a black background. Afterwards, you use the non-black parts
> of the second image (due to AA, they will not be full-white, mind you)
> as a mask for the first image.
Ok... I'll keep that in mind. I'd thought about the AA, but was hoping it
wouldn't cause problems.
> > I also considered generating individual bounded images for the pieces (rendered
> > at all the various positions). This essentially amounts to crops of the images
> > in option 1. That option seems to only complicate everything below... so I
> > think the first option is better, but I'm open to suggestions.
>
> What does it complicate?
I was thinking I would need to directly render the individual piece images
bounded to the size of the piece... How do I render that? Do I change the +W,
+H and camera{ right<???> } for each one? It seemed like that just pushed the
cropping-coordinates problem further up the chain to POV-Ray, and in such a
way that I would have to tell POV those coordinates before it rendered.
It makes sense as you say below to do full image renders, with one piece...
that's more workable.
> This is what I would suggest. Of course, it wastes
> a bit of time. But very little: It wastes the time to parse a scene more
> often and for povray to find out that some pixel is indeed transparent.
>
> So, now you have for each piece at each position an image which consists
> mainly of transparent pixels. Then use some postprocessing tool to crop
> this to the minimal rectangle. Write this tool yourself, unless somebody
> else suggests an existing one.
> The important point is, that the tool should output, separately, the
> coordinates of cropping it has used. Plug these into your game source
> code. That's all.
Good idea... I used ImageMagick:
convert -deconstruct \( all_transparent_frame.png BoardPieces_0_0_0_0.png \) out.png
"-deconstruct" diffs the two images and gives a crop of only the parts that
changed, i.e. the non-transparent pixels in the BoardPieces image.
I modified the ImageMagick source to print the coordinates, in
magick/layer.c : CompareImageLayers(), line 818:

  bounds[i]=CompareImageBounds(image_b,image_a,method,exception);
+ printf("Bounds[%d]: (%lu x %lu) @ (%ld , %ld)\n", i,
+        bounds[i].width, bounds[i].height, bounds[i].x, bounds[i].y);
I will probably still go with my original plan of loading the big images with
several pieces into my program... I think it's a little more memory efficient
than having thousands of images for every piece at every location. But I will
use the individual-piece images to figure out the coordinates as I described
above.
So... That solves my problem nicely. I'm still curious though if there is a
neat way to do it all within povray...
Given a +W, +H, camera{} and an object{}, is it possible to print out the
output-image pixel coordinates of the bounding box of the object?
Anyway, thanks Mark and others who responded.
Malcolm
> Ok... I'll keep that in mind. I'd thought about the AA, but was hoping it
> wouldn't cause problems.
As you are using image diffs, it won't, provided your pieces never overlap
in any game position (and you do not use jitter or such). I did not think
of diffs. If pieces are going to overlap, you will need to use an alpha
channel (with more than one bit).
My other comment on AA was based on my experience from a project where we
could assume that sprites overlap rarely enough to be content with a
one-bit alpha channel, a.k.a. a bit mask.
> So... That solves my problem nicely. I'm still curious though if there is a
> neat way to do it all within povray...
>
> Given a +W, +H, camera{} and an object{}, is it possible to print out the
> output-image pixel coordinates of the bounding box of the object?
First, assume your camera is
camera {
  location 0
  direction z
  right A*x
  up B*y
}
for some numbers A and B. Then the 3D coordinate <X,Y,Z> will be rendered
at the pixel ( (X/(A*Z)-1/2)*W , (1/2-Y/(B*Z))*H ). Your actual camera is
equivalent to one of this form with some rotates and translates applied.
Then, apply the inverses of these rotates and translates, in reverse order,
to the 3D coordinate beforehand.
How your camera is equivalent to one of the above form I cannot tell, at
least not off the top of my head, because I never use angle or look_at.
> Anyway, Thanks Mark and others who responded.
Welcome.
Mark Weyer
> at the pixel ( (X/(A*Z)-1/2)*W , (1/2-Y/(B*Z))*H ). Your actual camera is
^^^^^^^^^^^^^^^
Oh, bugger. This should have been (1/2+X/(A*Z))*W. Beware of other mistakes
I might have made.
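With that correction, the mapping can be sketched in Python (a sketch under
the canonical-camera assumption above; the function name is made up):

```python
def project(point, W, H, A, B):
    """Project scene coordinate <X, Y, Z> to output-image pixel
    coordinates for a camera with location 0, direction z,
    right A*x, up B*y.  Uses the corrected formula:
    ( (1/2 + X/(A*Z))*W , (1/2 - Y/(B*Z))*H )."""
    X, Y, Z = point
    px = (0.5 + X / (A * Z)) * W
    py = (0.5 - Y / (B * Z)) * H
    return (px, py)

# A point on the view axis lands at the image centre:
W, H = 720, 340
A, B = 720 / 340, 1.0
print(project((0, 0, 10), W, H, A, B))  # (360.0, 170.0)
```

Running this over the eight corners of an object's 3D bounding box, then
taking the min/max of the resulting pixel coordinates, gives the 2D crop
rectangle asked about earlier in the thread.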
Mark Weyer
"Mark Weyer" <nomail@nomail> wrote:
> First, assume your camera is
>
> camera {
> location 0
> direction z
> right A*x
> up B*y
> }
>
> for some numbers A and B. Then the 3D coordinate <X,Y,Z> will be rendered
> at the pixel ( (X/(A*Z)-1/2)*W , (1/2-Y/(B*Z))*H ). Your actual camera is
> equivalent to one of this form with some rotates and translates applied.
> Then, apply the inverses of these rotates and translates, in reverse order,
> to the 3D coordinate beforehand.
> How your camera is equivalent to one of the above form, I cannot tell, at
> least not from the top of my head, because I never use angle or look_at.
>
> > Anyway, Thanks Mark and others who responded.
>
> Welcome.
>
> Mark Weyer
What if your camera's right, up and direction vectors aren't x, y and z (lowercase)?
Mike
On 11/14/2009 11:06 PM, SharkD wrote:
> What if your camera's right, up and directions aren't x, y and z (lowercase).
>
> Mike
Duh. Maybe I should read the entire post/thread first?
;)
Mike