On 6/27/2021 8:19 AM, Bald Eagle wrote:
> There are plenty of examples of people creating POV-Ray renderings based on real
> things - things they often only have photographs of.
>
> But then I was thinking that there are probably actual POV-Ray renderings, but
> somehow the source to generate those renderings has been lost due to the code
> never being posted, an HDD crash, or just getting lost. People might want to
> recreate a scene, and having some basic information about the size and placement
> of the objects in the scene would greatly speed up the writing of a new scene
> file.
>
> I know that I have done some work to recreate some of the documentation images
> for things like isosurfaces, and didn't have the code for those images. I had
> to make educated guesses. I "knew" the probable size of the object, or at least
> its relative scale, and then I just needed to place the camera and light source
> in the right place to get the image to look the same. But determining where
> the camera and light source are seems to me to be something that could be
> calculated using "photogrammetric image cues" and well-established equations.
>
> Let's take a photograph of a room. It likely has tables and chairs and windows,
> and these all have right angles and typical sizes. It seems to me that there
> might be a way to use photogrammetry to compute the 3D points of the corners and
> rapidly generate a basic set of vectors for the proper sizing and placement of
> everything in a basic rendered version of that same room.
>
> I know that they have cell phone apps that can generate a floor plan of your
> house just by snapping a few photos of the rooms from different angles.
>
> Also, the "augmented reality" apps for cell phones.
>
Can (most likely) already be done using general-purpose external tools.
At what stage is POV-Ray useful/necessary?
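
For what it's worth, below is a minimal sketch of the kind of calculation
Bald Eagle describes: given the image positions of two vanishing points of
perpendicular room edges, the pinhole model gives the focal length, which
maps directly onto POV-Ray's camera angle. The image size, pixel
coordinates, and the helper focal_from_vanishing_points() are made-up
placeholders for illustration, not output from any real tool.

import math

def focal_from_vanishing_points(v1, v2, principal_point):
    # Pinhole model, square pixels, principal point at the image centre.
    # Orthogonality of the two scene directions gives
    #   (v1 - c) . (v2 - c) + f^2 = 0,  so  f = sqrt(-(v1 - c) . (v2 - c))
    cx, cy = principal_point
    dot = (v1[0] - cx) * (v2[0] - cx) + (v1[1] - cy) * (v2[1] - cy)
    if dot >= 0:
        raise ValueError("vanishing points inconsistent with orthogonal directions")
    return math.sqrt(-dot)

# Hypothetical measurements clicked off a 1600x1200 photo of a room:
width, height = 1600, 1200
c = (width / 2.0, height / 2.0)
vp_x = (2450.0, 620.0)   # vanishing point of one wall direction
vp_z = (-300.0, 640.0)   # vanishing point of the perpendicular wall direction

f = focal_from_vanishing_points(vp_x, vp_z, c)

# POV-Ray's camera 'angle' is the horizontal field of view in degrees.
angle = 2.0 * math.degrees(math.atan((width / 2.0) / f))

print("focal length (pixels): %.1f" % f)
print("camera { perspective angle %.2f ... }" % angle)

A full photogrammetry package would also recover the camera's rotation and
position; the POV-Ray-specific part is then just writing those numbers back
out as location, look_at and angle in a camera block, which may be where
POV-Ray (or a small SDL-emitting script) comes in.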
Mike