A lot of work has already been done on generating 3D geometry from
multiple 2D images, using techniques such as structured light:
http://www.prip.tuwien.ac.at/Research/3DVision/struct.html
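The core of a structured-light scanner is plain triangulation: a projector casts a known stripe, the camera observes where it falls, and the two viewing angles plus the baseline fix the depth. A minimal sketch of that step (hypothetical helper name, camera at the origin and projector offset along x, both facing +z; not taken from the linked page):

```python
import math

def stripe_depth(baseline_m, proj_angle_rad, cam_angle_rad):
    """Triangulate one lit point in a structured-light setup.

    The camera sits at the origin and the projector at `baseline_m`
    along the x-axis, both looking down +z.  Each angle is measured
    from that device's optical axis, rotated toward the other device.
    Returns (x, z) of the intersection of the two rays.
    """
    # Projector ray: x = baseline_m - z * tan(proj_angle)
    # Camera ray:    x = z * tan(cam_angle)
    denom = math.tan(proj_angle_rad) + math.tan(cam_angle_rad)
    if abs(denom) < 1e-12:
        raise ValueError("rays are parallel; no intersection")
    z = baseline_m / denom
    x = z * math.tan(cam_angle_rad)
    return x, z

# With a 0.5 m baseline and both rays at 45 degrees, the point lies
# midway between the devices, 0.25 m out:
x, z = stripe_depth(0.5, math.pi / 4, math.pi / 4)
```

Sweeping the stripe across the object and repeating this per camera pixel yields a depth map, which is why even a cheap projector-plus-camera rig can act as a digitiser.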
Luke Church wrote:
>> Given almost infinite time, what you'll probably end up with is a
>> large amount of small objects, each taking a pixel (or several), which
>> will only make sense when looking from a specific point in 3D space.
>> You can do that with a single iteration so why bother? :)
>
> Certainly true if only one perspective is used. Sorry, I wasn't clear... I
> was considering using at least 3 photos of the same object under similar
> lighting conditions taken ideally at equal intervals around the object and
> comparing all of them to the model...
>
> The original idea for this was effectively a poor man's digitiser, using an
> algorithm to make up for not being able to afford to build proper sensing
> equipment, in the hope of arriving at a more 'real looking' model of a
> complex object than would otherwise be practical...
>
> Thanks for the input, :)
>
> Luke