First off, I hate Photoshop as a solution for making textures. I hate it
even more when trying to work out how to do it for a 3D object, instead
of a flat surface. So.. I had a crazy idea, which it seems someone else
has had too. lol
Now, there is a trick someone used for making sculpties for Second Life,
which involves a mirrored sphere, a spherical camera, a texture that
generates colors based on the angle of the light ray hitting it, and an
object with no_image set on it. It works, to a point, but it has a
massive flaw: complex detail gets "lost", because the resulting
displacement map doesn't quite produce a clean result.
Now, apparently we have a "mesh camera", based on some googling. I
haven't really looked hard at the documentation on the beta, so I had no
clue. And there is some trick to "bake" texture onto a mesh, based on
this. This is brilliant. But.. I would also kind of prefer, if possible,
to avoid the whole mesh thing, to a point. So.. is there an "object camera"?
See, my thinking here is that you:
1. Create your object.
2. Make a "merge" copy of it, so it's only got outer surfaces.
3. Place that, with no_image, over the same place as the camera.
4. Resize it, or, if needed, build a slightly bigger version of the same
thing, as a mirror.
Then you do two passes. The first pass produces a displacement map, as
per the sculpty trick, which could be converted to a mesh object fairly
trivially. The second pass includes textures, and maybe add-on bits of
things (screws, panels, etc.) tacked onto the surface of your no_image
copy (themselves also no_image), which add finer detail that doesn't
need to be "real" geometry, in a way a pure texture can't. In principle,
the result should be both a mesh object, once converted from the
displacement map, which you can adjust a bit as needed, and a texture
which exactly matches the contours of the object you are applying it to.
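For the first pass, a minimal sketch of the "colour encodes position" texture (the usual sculpty trick) might look like this. This is hedged: it assumes the merged copy sits inside the unit cube <0,0,0>..<1,1,1>, and `MyMergedCopy` is a made-up placeholder name, not anything from the original post:

```
// Hedged sketch: RGB encodes XYZ for the displacement pass.
// Assumes the object fits inside the unit cube <0,0,0>..<1,1,1>;
// "MyMergedCopy" is a placeholder for the merge copy from step 2.
#declare PositionPigment = pigment {
  average
  pigment_map {
    // average divides by 3, so each ramp peaks at 3 to restore full range
    [1 gradient x colour_map { [0 rgb 0] [1 rgb <3,0,0>] }]
    [1 gradient y colour_map { [0 rgb 0] [1 rgb <0,3,0>] }]
    [1 gradient z colour_map { [0 rgb 0] [1 rgb <0,0,3>] }]
  }
}
object { MyMergedCopy
  texture {
    pigment { PositionPigment }
    finish { ambient 1 diffuse 0 }  // self-lit: scene lighting can't skew the colours
  }
}
```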
Am I off my rocker thinking about doing such a thing, or not?
On 02/12/2011 00:06, Patrick Elliott wrote:
> See, my thinking here is that you:
>
> 1. Create your object.
> 2. Make a "merge" copy of it, so its only got outer surfaces.
> 3. Place that, with "no image" over the same place as the camera.
> 4. Resize, or, if needed, build a slightly bigger version of the same
> thing, as a mirror.
>
> Then you do two passes. First pass produces a displacement map, as per
> the scuplty, which could be converted to a mesh object, fairly
> trivially. Second pass includes textures, and maybe addon bits of
> things, screws, panels, etc., which add finer detail, but don't need to
> be "real", that a pure texture can't, tacked onto the surface of your
> "no image" copy (also no-image). In principle, the result should be both
> a mesh object, once converted from the displacement map, which you can
> adjust a bit, as needed, and a texture, which exactly matches the
> contours of the object you are applying it to.
>
> Am I off my rocker thinking about doing such a thing, or not?
I get lost at the first pass and the displacement map (which is a known
concept, but not part of POV-Ray's concepts, IIRC).
Might I assume that the "object" in step 1 is globally convex?
Would a Menger sponge still qualify?
As I understand it so far, you want to image-map a texture onto your object O.
To generate such an image map, I would do something like:
1. Spherical camera inside object O. Object O is hollow, no_image,
textured with a basic pattern and using only ambient light, or so.
2. Put a mirror sphere at the camera, with a radius large enough to
avoid hitting object O. The sphere is hollow and no_reflection (to save
on rendering time).
3. Render the image P.
4. New scene: object O is textured with image P (map_type 1). Place
camera and lights as needed, as well as the rest of the environment.
Iterate on step 4, with image P going into GIMP/Photoshop for details
of the texture/pigment.
You could replace the spherical mirror with a vertical cylindrical
mirror, using a cylindrical camera (type 3) and map_type 2, as long as
you do not have any surface parallel to the horizontal plane in object O.
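Steps 1-3 above might look something like this (a rough sketch: every value here is illustrative, and the unit checker sphere merely stands in for a real object O and its basic pattern):

```
// Spherical camera at the origin, inside object O
camera {
  spherical
  angle 360 180        // full panorama, matches map_type 1 later
  location <0, 0, 0>
}
// Stand-in for object O: hollow, no_image, self-lit basic pattern.
// Primary rays pass through it; it only shows up via the mirror's reflection.
sphere { 0, 1
  hollow no_image
  texture {
    pigment { checker rgb 1, rgb <1, 0, 0> }
    finish { ambient 1 diffuse 0 }
  }
}
// Mirror sphere centred on the camera, large enough to clear O entirely
sphere { 0, 10
  hollow no_reflection   // keep it out of reflections, saves render time
  texture {
    pigment { rgb 0 }
    finish { reflection 1 ambient 0 diffuse 0 }
  }
}
```

Since the camera sits at the mirror sphere's centre, each camera ray hits the mirror head-on and reflects straight back, so the rendered panorama ends up sampling O's outer surface along every direction from the centre.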
On 12/2/2011 1:44 AM, Le_Forgeron wrote:
> On 02/12/2011 00:06, Patrick Elliott wrote:
>
>> See, my thinking here is that you:
>>
>> 1. Create your object.
>> 2. Make a "merge" copy of it, so its only got outer surfaces.
>> 3. Place that, with "no image" over the same place as the camera.
>> 4. Resize, or, if needed, build a slightly bigger version of the same
>> thing, as a mirror.
>>
>> Then you do two passes. First pass produces a displacement map, as per
>> the scuplty, which could be converted to a mesh object, fairly
>> trivially. Second pass includes textures, and maybe addon bits of
>> things, screws, panels, etc., which add finer detail, but don't need to
>> be "real", that a pure texture can't, tacked onto the surface of your
>> "no image" copy (also no-image). In principle, the result should be both
>> a mesh object, once converted from the displacement map, which you can
>> adjust a bit, as needed, and a texture, which exactly matches the
>> contours of the object you are applying it to.
>>
>> Am I off my rocker thinking about doing such a thing, or not?
>
> I get lost at the first pass and the displacement map. (which is a known
> concept, but is not part of povray's concept, IIRC)
>
> Might I assume that the "object" in 1 is globally convex ?
> Would a menger's sponge still qualify ?
>
> As I understand so far, you want to image-map a texture on your object O.
> To generate such image-map, I would do something like:
>
> 1. spherical Camera inside object O. Object O is hollow, no_image,
> textured with a basic pattern and using only ambient light, or so.
> 2. Put a sphere mirror at the camera, with a radius large enough to
> avoid hitting object O. sphere is hollow, no_reflection (save on
> rendering time).
> 3. Render the image P
>
> 4. new scene: Object O is textured with image P (map_type 1). place
> camera and lights as needed, as well as other environments.
>
> Iteration on step 4 with image P going into gimp/photoshop for details
> of texture/pigment.
>
> You could replace the spherical mirror with a vertical cylindrical
> mirror if using a cylindrical camera (type 3) and map_type 2 (as long as
> you do not have any surface parallel to the horizontal plane in object O)
>
Uh, well, I've already used things like a spherical camera to do bits of
it, but let's try this again..
The idea is to take a baseline object, say a chair, and use three
copies of it. One copy is hollow and reflective, but scaled larger. The
second is the object, with a texture applied. The third is a camera
with the same shape. The idea being to do something that combines these
two concepts:
http://www.ignorancia.org/en/index.php?page=mesh-camera
(the section on "texture baking", with the pre-pass), doing this:
http://johannahyacinth.blogspot.com/2007/05/sculpted-prims-with-pov-ray.html
I had to force-stop loading on that page. Seems Google's system is
misdirecting services you are logged into through their mess right now,
resulting in blogger.com loading YouTube instead, almost immediately
after the page loads. Maybe turning off scripting would stop it, or
something...
In any case, the first pass is to create a larger, more detailed "color
map", such as what the sculpties use. Since a sculpty is just a fixed
set of points that are "displaced" based on the colors in the image, it
follows that you could, just as easily, use that data to generate a mesh.
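The "use that data to generate a mesh" part could, in principle, be done right in SDL, along these lines. This is a hedged sketch: the file names and resolution are made up, `eval_pigment()` comes from the standard `functions.inc`, the `#for` loop needs the 3.7 beta, and the face_indices block is left out for brevity:

```
#include "functions.inc"   // provides the eval_pigment() macro

#declare DMap = pigment {
  image_map { png "displace.png" map_type 0 interpolate 2 }
}
#declare Res = 32;         // grid resolution of the sculpt map (made up)

#fopen Out "baked_mesh.inc" write
#write (Out, "mesh2 { vertex_vectors { ", (Res+1)*(Res+1), ",\n")
#for (V, 0, Res)
  #for (U, 0, Res)
    // each pixel's colour is a surface point, remapped from [0,1] to [-1,1]
    #local C = eval_pigment(DMap, <U/Res, V/Res, 0>);
    #local P = 2*<C.red, C.green, C.blue> - <1, 1, 1>;
    #write (Out, P, ",\n")
  #end
#end
#write (Out, "} }\n")      // face_indices over the (U,V) grid omitted here
#fclose Out
```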
The second pass would include any extra objects you want to add to
"flesh out" the texture, since they would become part of the final
"flat" image which you are going to apply to the mesh, as well as any
textures you want to apply (mind, not including reflection and the like,
since that would mess with the mirror we are using to get the result;
though.. would setting no_image on the mirror mean you don't get the
reflected result either? I think so, but I am not sure.)
The idea being to use POV-Ray to produce "both" the displacement map,
which is to be converted to a mesh, and the "UV texture", which will be
applied to it. I know you could do it with a mesh_camera, as per the
first site, but that still means building the original mesh in something
else and importing it, instead of doing the whole thing in SDL.
Hopefully that is clearer.
On 03.12.2011 05:14, Patrick Elliott wrote:
> The idea being to use POV-Ray to produce "both" the displacement map,
> which is to be converted to mesh, and the "UV texture", which will be
> applied to it. I know you could do it with a Mesh_Camera, as per the
> first site, but that still means building the original mesh in something
> else, and importing it, instead of doing the whole thing in SDL.
The problem is that the mesh camera concept only works for objects with
a well-defined UV mapping. It could theoretically be extended to simple
shapes such as spheres, boxes and such (as these do have a defined UV
mapping), but certainly not for complex CSG stuff.
Even if it was possible to auto-generate a mesh from CSG, automatically
finding a suitable UV mapping would be yet another challenge.
On 12/2/2011 9:23 PM, clipka wrote:
> On 03.12.2011 05:14, Patrick Elliott wrote:
>
>> The idea being to use POV-Ray to produce "both" the displacement map,
>> which is to be converted to mesh, and the "UV texture", which will be
>> applied to it. I know you could do it with a Mesh_Camera, as per the
>> first site, but that still means building the original mesh in something
>> else, and importing it, instead of doing the whole thing in SDL.
>
> The problem is that the mesh camera concept only works for objects with
> a well-defined UV mapping. It could theoretically be extended to simple
> shapes such as spheres, boxes and such (as these do have a defined UV
> mapping), but certainly not for complex CSG stuff.
>
> Even if it was possible to auto-generate a mesh from CSG, automatically
> finding a suitable UV mapping would be yet another challenge.
Maybe. I would think that would depend on how the mesh was generated.
Since the displacement would be aligned identically to the image you
intend to apply later, they should match up the same way. If you convert
the mesh wrong, then of course you are going to have a problem. Mind, I
could be dead wrong, of course. Could be interesting to try. My issue is
whether the only means of generating such a camera is to use a
mesh_camera, or whether it's possible to do something like an
object_camera? I suspect it's not a feature, but then I don't know the
full list of things planned/implemented. Mind, I have no idea why a UV
map *should* be different than this.
Anyway, it's an idea. I just kind of wondered whether you had to start
with a mesh, or whether you could produce both it and the texture. Or,
well, it might *still* work if you did something like:
1. Render the displacement map.
2. Generate the mesh from it.
2a. Maybe produce a UV map from this?
3. Use the resulting mesh (and new map, if needed) to generate the
texture via the mesh_camera method.