POV-Ray : Newsgroups : povray.advanced-users : Reverse engineering scene settings from rendered images
  Reverse engineering scene settings from rendered images (Message 1 to 8 of 8)  
From: SharkD
Subject: Reverse engineering scene settings from rendered images
Date: 7 Jul 2008 21:25:00
Message: <web.4872c105646cdb1df7bf1c280@news.povray.org>
Can anyone suggest any methods of reverse engineering basic scene settings
(light color/intensity, texture diffuse/ambient, etc.) from existing, rendered
images?

The source code to a commercial video game (Jagged Alliance 2) was released
several years ago, and as part of the overhaul of the game, some members of the
community would like to add new sprites. However, the company that made the game
didn't release any of the assets used to create the artwork. I am wondering if
there are ways of determining basic scene information from the existing images?

For some things, such as the physical proportions of objects, camera and
lighting angles, this is trivial as the information can be calculated
geometrically. However, the more subtle aspects prove to be a more difficult
challenge.

I'd like to mention that there are certain characteristics of the game sprites
(images) that should make this task easier. Namely, the base sprites are made
up of primary colors only. E.g., they are made up only of varying shades of
red, green, cyan, magenta, etc. Any other colors besides these primaries are
substituted later by the game's own rendering engine. I have a hunch (but have
no idea how to test it) that some of the missing information can be determined
by comparing the range of light to dark shades of these primaries. Something
along the lines of analyzing the image's histogram and determining where the
maximum and minimum shades of each color occur, I think.
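
To make that a bit more concrete, here is the sort of rough sketch I have in
mind (Python with Pillow and NumPy; the file name and the transparent-background
assumption are just placeholders, since I haven't actually tried this):

# Rough sketch: per-channel histogram extremes of a sprite's visible pixels.
# Assumptions (unverified): the sprite is "sprite.png" and its background
# is keyed out via the alpha channel.
from PIL import Image
import numpy as np

img = np.asarray(Image.open("sprite.png").convert("RGBA"), dtype=np.float32) / 255.0
rgb, alpha = img[..., :3], img[..., 3]
mask = alpha > 0.5                       # ignore the transparent background

for name, channel in zip("RGB", np.moveaxis(rgb, -1, 0)):
    values = channel[mask]
    print(name, "min:", values.min(), "max:", values.max())

# If a surface of known pigment shows up both fully shadowed and fully lit,
# the darkest value suggests pigment * ambient and the brightest value
# pigment * (ambient + diffuse), so their ratio hints at the ambient/diffuse
# balance used in the original renders.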

That's about as far as my hunch has taken me. Can anyone think of any strategies
that are more complete?

Thanks very much!

-Mike



From: Alain
Subject: Re: Reverse engineering scene settings from rendered images
Date: 10 Jul 2008 17:48:03
Message: <48768393$1@news.povray.org>
SharkD enlightened us on 2008-07-07 21:21 -->
> Can anyone suggest any methods of reverse engineering basic scene settings
> (light color/intensity, texture diffuse/ambient, etc.) from existing, rendered
> images?
> 
> The source code to a commercial video game (Jagged Alliance 2) was released
> several years ago, and as part of the overhaul of the game, some members of the
> community would like to add new sprites. However, the company that made the game
> didn't release any of the assets used to create the artwork. I am wondering if
> there are ways of determining basic scene information from the existing images?
> 
> For some things, such as the physical proportions of objects, camera and
> lighting angles, this is trivial as the information can be calculated
> geometrically. However, the more subtle aspects prove to be a more difficult
> challenge.
> 
> I'd like to mention that there are certain characteristics of the game sprites
> (images) that should make this task easier. Namely, the base sprites are made
> up of primary colors only. E.g., they are made up only of varying shades of
> red, green, cyan, magenta, etc. Any other colors besides these primaries are
> substituted later by the game's own rendering engine. I have a hunch (but have
> no idea how to test it) that some of the missing information can be determined
> by comparing the range of light to dark shades of these primaries. Something
> along the lines of analyzing the image's histogram and determining where the
> maximum and minimum shades of each color occur, I think.
> 
> That's about as far as my hunch has taken me. Can anyone think of any strategies
> that are more complete?
> 
> Thanks very much!
> 
> -Mike
> 
> 
It may be much easier to recreate the scenes from scratch to match the 
original(s) as much as possible.
Recreate the objects, props, walls, terrains and characters on their own. Adjust 
the scales so that they match. Assemble the various elements to remake the 
desired image.

-- 
Alain
-------------------------------------------------
You know you've been raytracing too long when you're starting to find these 
quotes more unsettling than funny.
     -- Alex McLeod a.k.a. Giant Robot Messiah



From: SharkD
Subject: Re: Reverse engineering scene settings from rendered images
Date: 10 Jul 2008 23:15:01
Message: <web.4876cf8d9aa045fde116e5c40@news.povray.org>
Alain <ele### [at] netscapenet> wrote:
> It may be much easier to recreate the scenes from scratch to match the
> original(s) as much as possible.
> Recreate the objects, props, walls, terrains and characters on their own. Adjust
> the scales so that they match. Assemble the various elements to remake the
> desired image.

Thank you for taking interest in the subject! I would like to stress that I do
not wish to recreate the objects themselves. There are several people (or a few
people anyway) who have dedicated themselves to supplementing the existing
objects with new ones. What is missing, however, is the original scene
information so that new images don't look out of whack when compared next to
existing ones. Lighting, camera and object geometry should be trivial to
determine geometrically. Things like ambience and light intensity are what I
wish to figure out (luckily radiosity is not an issue). Also, I have an
uncommon bent towards exactitude. I would like to learn of computational means
of determining this. I don't like experimenting and fiddling with trial and
error unless what I'm creating is entirely new.
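
As an example of the kind of computation I mean: with POV-Ray's basic finish
model, a rendered surface value is roughly pigment * (ambient + diffuse * light
* cos(theta)), so one sample from a fully shadowed face and one from a face lit
head-on by the same light give two equations in the two unknowns. A minimal
sketch in Python, where every sampled value below is a made-up placeholder:

# Rough sketch: back out ambient and diffuse from two samples of the same
# pigment.  All numbers are placeholders, not actual measurements.
pigment = 1.0          # assume a pure, full-strength primary channel
shadowed = 0.20        # sampled from a face fully in shadow
lit_head_on = 0.95     # sampled from a face facing the light (cos(theta) ~ 1)
light = 1.0            # assume a white light of intensity 1

# shadowed    ~ pigment * ambient
# lit_head_on ~ pigment * (ambient + diffuse * light)
ambient = shadowed / pigment
diffuse = (lit_head_on / pigment - ambient) / light
print("ambient ~", ambient, "diffuse ~", diffuse)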

-Mike



From: Alain
Subject: Re: Reverse engineering scene settings from rendered images
Date: 11 Jul 2008 22:54:34
Message: <48781cea$1@news.povray.org>
SharkD enlightened us on 2008-07-10 23:12 -->
> Alain <ele### [at] netscapenet> wrote:
>> It may be much easier to recreate the scenes from scratch to match the
>> original(s) as much as possible.
>> Recreate the objects, props, walls, terrains and characters on their own. Adjust
>> the scales so that they match. Assemble the various elements to remake the
>> desired image.
> 
> Thank you for taking interest in the subject! I would like to stress that I do
> not wish to recreate the objects themselves. There are several people (or a few
> people anyway) who have dedicated themselves to supplementing the existing
> objects with new ones. What is missing, however, is the original scene
> information so that new images don't look out of whack when compared next to
> existing ones. Lighting, camera and object geometry should be trivial to
> determine geometrically. Things like ambience and light intensity are what I
> wish to figure out (luckily radiosity is not an issue). Also, I have an
> uncommon bent towards exactitude. I would like to learn of computational means
> of determining this. I don't like experimenting and fiddling with trial and
> error unless what I'm creating is entirely new.
> 
> -Mike
> 
> 
The original images probably used environment mapping (for reflections) and light 
mapping (for light intensity and shadows). Many games never use actual lighting, 
but rely on light mapping. It works great for scan-line rendering, but not for 
ray-tracing.
If you don't have access to those maps, then your only choice is emulation and 
trial and error.

-- 
Alain
-------------------------------------------------
You know you've been raytracing too long when your personal correspondence to 
friends starts out with #Dear Linda =
     -- Ken Tyler



From: SharkD
Subject: Re: Reverse engineering scene settings from rendered images
Date: 13 Jul 2008 22:10:01
Message: <web.487ab4599aa045fd302c26d00@news.povray.org>
Alain <ele### [at] netscapenet> wrote:
> The original images probably used environment mapping (for reflections) and light
> mapping (for light intensity and shadows). Many games never use actual lighting,
> but rely on light mapping. It works great for scan-line rendering, but not for
> ray-tracing.
> If you don't have access to those maps, then your only choice is emulation and
> trial and error.

I am pretty sure actual lighting was used, as the images come complete with
shadows.

Anyway, I started a similar thread over at CGTalk in which I provided examples.
You can find it here:

http://forums.cgsociety.org/showthread.php?f=2&t=651372

-Mike



From: scott
Subject: Re: Reverse engineering scene settings from rendered images
Date: 14 Jul 2008 06:33:14
Message: <487b2b6a$1@news.povray.org>
> I am pretty sure actual lighting was used, as the images come complete with
> shadows.
>
> Anyway, I started a similar thread over at CGTalk in which I provided
> examples. You can find it here:
>
> http://forums.cgsociety.org/showthread.php?f=2&t=651372

With that cuboid image you have, you could take the average pixel intensity 
over each face (in the x, y and z directions), then assuming the texture was 
the same on each face you'd have an approximation of the light intensity on 
each face.  You should then be able to determine the direction the light is 
coming from, and a guess at the light intensity (you can't know for sure, 
since you don't have the original texture used on the surface).
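
Roughly along these lines, assuming plain Lambertian shading, that the three
visible faces point along the +x, +y and +z axes, and that the ambient level
has been estimated from a shadowed region (a Python sketch; all the numbers
are made up):

# Rough sketch: estimate the light direction from the average intensities of
# three mutually perpendicular faces.  All values are placeholders.
import numpy as np

ambient = 0.20                            # e.g. estimated from a shadowed region
avg = {"x": 0.75, "y": 0.95, "z": 0.40}   # average intensity of each visible face

# With Lambertian shading, face intensity ~ ambient + diffuse * light * (N . L),
# so after subtracting the ambient term each value is proportional to the
# component of the light direction along that face's normal.
l = np.array([max(avg[axis] - ambient, 0.0) for axis in "xyz"])
light_dir = l / np.linalg.norm(l)
print("estimated light direction:", light_dir)
print("estimated diffuse * light level:", np.linalg.norm(l))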



From: SharkD
Subject: Re: Reverse engineering scene settings from rendered images
Date: 15 Jul 2008 07:05:00
Message: <web.487c83b89aa045fd60cc59ad0@news.povray.org>
"scott" <sco### [at] scottcom> wrote:
> With that cuboid image you have, you could take the average pixel intensity
> over each face (in the x, y and z directions), then assuming the texture was
> the same on each face you'd have an approximation of the light intensity on
> each face.

Yes, that was my backup plan. I am a bit wary, however, as I have done this once
before and things didn't turn out so well (though that may have been because the
original author assigned the shades arbitrarily).

> You should then be able to determine the direction the light is
> coming from, and a guess at the light intensity (you can't know for sure,
> since you don't have the original texture used on the surface).

I can also determine the lighting direction from the angle and length of the
shadows. I was hoping to find an object whose original color I knew for certain,
such as a perfectly white object; I was guessing that such a pigment would be
less likely to have been altered. Alas, I couldn't find such an object.
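
For what it's worth, the shadow part reduces to simple trigonometry once the
measurements have been corrected for the game's projection. A quick Python
sketch, where every number is a placeholder rather than a value taken from the
actual sprites:

# Rough sketch: light elevation and azimuth from a cast shadow.
# Placeholder measurements; real values would first need to be corrected
# for the projection used in the sprites.
import math

object_height = 40.0                 # height of the object casting the shadow
shadow_length = 25.0                 # length of its shadow on the ground plane
shadow_dx, shadow_dy = -20.0, 15.0   # direction of the shadow on the ground

elevation = math.degrees(math.atan2(object_height, shadow_length))
# The light comes from the direction opposite the shadow.
azimuth = (math.degrees(math.atan2(shadow_dy, shadow_dx)) + 180.0) % 360.0
print("light elevation ~", elevation, "deg; azimuth ~", azimuth, "deg")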

-Mike



From: Alain
Subject: Re: Reverse engineering scene settings from rendered images
Date: 15 Jul 2008 22:34:12
Message: <487d5e24$1@news.povray.org>
SharkD enlightened us on 2008-07-13 22:05 -->
> Alain <ele### [at] netscapenet> wrote:
>> The original images probably used environment mapping (for reflections) and light
>> mapping (for light intensity and shadows). Many games never use actual lighting,
>> but rely on light mapping. It works great for scan-line rendering, but not for
>> ray-tracing.
>> If you don't have access to those maps, then your only choice is emulation and
>> trial and error.
> 
> I am pretty sure actual lighting was used, as the images come complete with
> shadows.
> 
> Anyway, I started a similar thread over at CGTalk in which I provided examples.
> You can find it here:
> 
> http://forums.cgsociety.org/showthread.php?f=2&t=651372
> 
> -Mike
> 
> 
Don't be so sure. Light maps can carry directional information that is used to 
generate the shadows and the shading of the objects. That doesn't even take into 
account the preshading of the objects, or the fact that sprites can include their 
shadows, as is evident in the one showing the walking character.
In some cases, a light map provides the ambient lighting and the light for the 
shadows, and an environment map is added on top to complement the shading and 
highlights as well as to simulate reflections.

-- 
Alain
-------------------------------------------------
You know you've been raytracing too long when you downloaded and printed the 
Renderman Interface documentation, so you'd have a little light reading to take 
on holiday.
     -- Alex McLeod a.k.a. Giant Robot Messiah


