POV-Ray : Newsgroups : povray.binaries.images : Baking dirt maps with the mesh camera
  Baking dirt maps with the mesh camera (Message 11 to 20 of 21)  
From: Jaime Vives Piqueres
Subject: Re: Baking dirt maps with the mesh camera
Date: 26 Jul 2012 04:43:20
Message: <50110328$1@news.povray.org>
On 26/07/12 10:29, Thomas de Groot wrote:
> Does the AO map correspond to the uv maps? That looks quite complex.

   Yes, distribution type 3 uses the uv map as the camera, or
something like that... and no, it's not complex at all: the baking scene
is just a few lines, and using the resulting map is just a matter of
setting up a pigment_pattern. An example will follow ASAP...
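
   As a preview, here is a minimal sketch of the two steps (the names
M_object and "baked_ao.png" are placeholders, not my actual scene --
adjust to your own mesh and output file):

// bake pass: a mesh_camera with distribution 3 maps each output pixel
// to the corresponding uv coordinate of the mesh
camera{
  mesh_camera{
    1 3              // 1 ray per pixel, distribution type 3 (uv map)
    mesh{ M_object } // the uv-mapped mesh to bake
  }
  location <0,0,0.001> // tiny offset along each face normal
}

// use pass: apply the baked map back onto the mesh as a pigment_pattern
#declare P_dirt=
pigment{
  uv_mapping
  pigment_pattern{ image_map{ png "baked_ao.png" interpolate 2 } }
  pigment_map{
    [0 rgb 0.2] // occluded areas: dark
    [1 rgb 1.0] // open areas: white
  }
}
object{ M_object texture{ pigment{ P_dirt } } }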

--
Jaime



From: Ive
Subject: Re: Baking dirt maps with the mesh camera
Date: 5 Aug 2012 12:38:00
Message: <501ea168$1@news.povray.org>
Am 24.07.2012 09:07, schrieb Jaime Vives Piqueres:
> Hi All:
>
>    I've resumed my mesh_camera experiments, this time trying to bake two
> kind of dirt maps. One is a classical AO map, using radiosity from a
> white sphere, and the other is a sort of "dust" map using a slope y
> pattern. Here are attached some examples of their typical usage back
> into the original mesh...
>
Perfect! I'm ATM trying to add some dirt and desert dust to a brand-new 
car model of mine - and after some quick experiments with this method it 
already looks very promising.

-Ive



From: Jaime Vives Piqueres
Subject: Re: Baking dirt maps with the mesh camera
Date: 5 Aug 2012 13:41:39
Message: <501eb053@news.povray.org>
On 05/08/12 18:37, Ive wrote:
> Perfect! I'm ATM trying to add some dirt and desert dust to a brand-new
> car model of mine - and after some quick experiments with this method it
> already looks very promising.

   Given the description, I'm already anticipating an amazing scene!

--
Jaime



From: MichaelJF
Subject: Re: Baking dirt maps with the mesh camera
Date: 2 Sep 2012 14:05:00
Message: <web.50439f4e75313461c4d3cefc0@news.povray.org>
Thomas de Groot <tho### [at] degrootorg> wrote:
> On 26-7-2012 10:26, Jaime Vives Piqueres wrote:
> > On 25/07/12 13:03, Thomas de Groot wrote:
> >> goes far beyond any proximity pattern coding I believe.
> >
> > Well, at least is faster and can be reused with no cost, but it only
> > works for uv-mapped meshes...
> >
>
> Right. That would not be a problem for me. Does the AO map correspond to
> the uv maps? That looks quite complex.
>
> Thomas

As I understand it so far, POV scans the uv map, determines whether there
is a corresponding point on the mesh and, if there is, shoots a ray from that
point (here from slightly above the mesh, in the negative direction of the
normal) back at the mesh, storing the result at the point taken from the
uv map. Maybe I'm completely wrong about that, but I think that is what
distribution 3 does.

Best regards,
Michael



From: MichaelJF
Subject: Re: Baking dirt maps with the mesh camera
Date: 6 Sep 2012 14:05:01
Message: <web.5048e5b175313461febde7330@news.povray.org>
I'm just playing around with the macros and got one or two ideas about
them. My model is my first more complex work with Wings, which collected some
dust on my hard drive during the last two years. It's a kind of dragon and seems
very suited for Jaime's occlusion map macros. Some years ago I searched
Wikipedia for unusual clocks and found a picture of a Chinese incense alarm
clock which was heavily overexposed by a flashlight. Unfortunately, the
overexposure hit one of the most interesting parts, the head of the dragon.
Google for "incense alarm clock" and you will find the picture very soon.

One of my thoughts was about using the GIMP. I may be wrong here, but as I
understand Gaussian blur, it takes the surrounding pixels and averages them with
a two-dimensional Gaussian distribution. It should be possible to achieve this
with nearly the same approach as Jaime used in his repair_seams.pov, only
adjusted to the pixel level by using an appropriate fraction of image_width for
his bake_padding value. Maybe one has to put the central point more than once
into a texture_map and average them. Has anyone tried this so far? This could
make the GIMP step superfluous.

The next idea stems from the fact that the occlusion maps are used as
pigment_patterns. Can a gain be expected from using HDR images in place of the
PNG?

Best regards,
Michael



From: Thomas de Groot
Subject: Re: Baking dirt maps with the mesh camera
Date: 7 Sep 2012 03:02:42
Message: <50499c12$1@news.povray.org>
I have started to look into the baking process and am wondering about 
the following.

Consider a building. The walls consist of bricks, covered by a cracked 
plaster layer; the whole is covered with dirt (proximity-pattern-like). 
Traditionally, I would tackle that with a three-layered texture (in the 
simplest case).

If I use the mesh camera, I could bake the brick layer and then layer 
plaster and dirt on it. Or should I bake the three layers separately and 
then superpose them? Or successively bake one on top of the other? How 
would you go about this?

Thomas



From: Jaime Vives Piqueres
Subject: Re: Baking dirt maps with the mesh camera
Date: 7 Sep 2012 03:24:05
Message: <5049a115$1@news.povray.org>
On 06/09/12 20:04, MichaelJF wrote:
> I'm just playing around with the macros and got one or two ideas
> about them. My model is my first more complex work with Wings, which
> collected some dust on my hard drive during the last two years. It's
> a kind of dragon [...]

   Wow! ...that's a clock? :O

> One of my thoughts was about using the GIMP. I may be wrong here,
> but as I understand Gaussian blur, it takes the surrounding pixels
> and averages them with a two-dimensional Gaussian distribution. It
> should be possible to achieve this with nearly the same approach as
> Jaime used in his repair_seams.pov, only adjusted to the pixel level
> by using an appropriate fraction of image_width for his bake_padding
> value. Maybe one has to put the central point more than once into a
> texture_map and average them. Has anyone tried this so far? This
> could make the GIMP step superfluous.

   Yes, my repair_seams method was a quick hack... indeed what you
suggest could deliver a more accurate result, if I'm understanding
correctly.

> The next idea stems from the fact that the occlusion maps are used
> as pigment_patterns. Can a gain be expected from using HDR images in
> place of the PNG?

   For texture baking, I don't think they would make much of a
difference, but having a wider range wouldn't hurt, I suppose.

--
Jaime



From: Jaime Vives Piqueres
Subject: Re: Baking dirt maps with the mesh camera
Date: 7 Sep 2012 03:41:24
Message: <5049a524$1@news.povray.org>
On 07/09/12 09:02, Thomas de Groot wrote:
> Consider a building. Walls consist of bricks, covered by a cracked
> plaster layer; the whole is covered with dirt (proximity pattern
> like). Traditionally, I would tackle that with a three-layered
> texture (in the simplest case).
>
> If I use the mesh camera, I could also bake the brick layer and then
> layer plaster and dirt, or could I bake separately the three layers
> and then superpose them? or successively bake one on top of the
> other? How would you go about this?

   First, let's see if I understand what you're trying to do...

   So, you have an actual mesh model of a brick wall, properly uv-mapped,
and you want to bake some maps out of it? Does it already have the
bricks and plaster layer as actual geometry?

   In general, you can mix several maps in one, or use separate maps,
depending on the usage. For example, if the dirt on the bricks and the
plaster is of the same color/texture, I would bake the occlusion map for
both the bricks and the plaster on a single map. But if they should have
different textures, I would bake them to separate maps, then layer one
on top of the other.
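
   For instance, with two separately baked maps it could look like this
(a rough sketch; the map names and colours are made up, and note that the
upper layer needs transparency for the layering to show through):

object{ M_wall
  // bottom layer: bricks darkened by their own occlusion map
  texture{ uv_mapping
    pigment{
      pigment_pattern{ image_map{ png "ao_bricks.png" } }
      pigment_map{
        [0 rgb <0.30,0.15,0.10>] // occluded: dark brick
        [1 rgb <0.60,0.35,0.25>] // open: clean brick
      }
    }
  }
  // top layer: dirt, opaque only where the plaster map is dark
  texture{ uv_mapping
    pigment{
      pigment_pattern{ image_map{ png "ao_plaster.png" } }
      pigment_map{
        [0.0 rgb <0.20,0.18,0.15>] // occluded: dirt colour
        [0.5 rgbt 1]               // open areas: fully transparent
      }
    }
  }
}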

--
Jaime



From: Thomas de Groot
Subject: Re: Baking dirt maps with the mesh camera
Date: 7 Sep 2012 07:27:05
Message: <5049da09$1@news.povray.org>
On 7-9-2012 9:41, Jaime Vives Piqueres wrote:
>    First, let's see if I understand what you're trying to do...
>
>    So, you have an actual mesh model of a brick wall, properly uv mapped,
> and you want to bake some maps out of it? Does it have already the
> bricks and plaster layer as actual geometry?

The uv-mapped wall would be a simple, box-like geometry. Bricks and 
plaster would consist of two different textures, the bricks with an 
image_map and the plaster as a pigment pattern. More elaborately, both 
could of course be made of two different geometries (more naturalistic).

>
>    In general, you can mix several maps in one, or use separate maps,
> depending on the usage. For example, if the dirt on the bricks and the
> plaster is of the same color/texture, I would bake the occlusion map for
> both the bricks and the plaster on a single map. But if they should have
> different textures, I would bake them to separate maps, then layer one
> on top of the other.

I was thinking of the dirt as identical for bricks and plaster. I shall 
have to experiment with your suggestions.

Thanks, Jaime! Food for thought and experiment.

Thomas



From: MichaelJF
Subject: Re: Baking dirt maps with the mesh camera
Date: 8 Sep 2012 14:45:01
Message: <web.504b91307531346132e700c20@news.povray.org>
Jaime Vives Piqueres <jai### [at] ignoranciaorg> wrote:
>
>    Wow! ...that's a clock? :O
Yes indeed, that's one of the wonderful things about raytracing: one learns a
little bit more every day.

>
>    Yes, my repair_seams method was a quick hack... indeed what you
> suggest could deliver a more accurate result, if I'm understanding
> correctly.

I tried the following with good results:

#declare t_output_image=
   // four copies shifted by the padding amount fill the transparent
   // seam areas, as in repair_seams.pov; the unshifted map goes on top
   texture{t_base_image translate  bake_padding*x}
   texture{t_base_image translate -bake_padding*x}
   texture{t_base_image translate  bake_padding*y}
   texture{t_base_image translate -bake_padding*y}
   texture{t_base_image}

#declare Pixel=1/image_width; // size of one pixel in uv space
#declare t_blur_image = texture {
   average
   texture_map {
      // 3x3 Gaussian kernel: weights 1-2-1 / 2-4-2 / 1-2-1
      [ 1 t_output_image translate <-Pixel,-Pixel,0> ]
      [ 2 t_output_image translate <0,-Pixel,0> ]
      [ 1 t_output_image translate <Pixel,-Pixel,0> ]
      [ 2 t_output_image translate <-Pixel,0,0> ]
      [ 4 t_output_image ]
      [ 2 t_output_image translate <Pixel,0,0> ]
      [ 1 t_output_image translate <-Pixel,Pixel,0> ]
      [ 2 t_output_image translate <0,Pixel,0> ]
      [ 1 t_output_image translate <Pixel,Pixel,0> ]
   }
}

>
> > The next idea stems from the fact that the occlusion maps are used
> > as pigment_patterns. Can a gain be expected from using HDR images
> > in place of the PNG?
>
>    For texture baking, I don't think they would make much of a
> difference, but having a wider range wouldn't hurt, I suppose.

Yes, it was only an idea; maybe a higher resolution of the occlusion map will
yield more.

Best regards,
Michael




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.