In article <ur5dgsgctbpg7mva7203jsljrr70icukhu@4ax.com>, Peter Popov
<pet### [at] usa net> wrote:
> It's just a constant you add to the weighted sum before you divide it
> by the divisor.
What is that supposed to do, other than just lightening or (with
negative values) darkening the image? Maybe I misunderstood you...but it
sounds like adding in a gray "pixel" with an intensity equal to the
levelling value.
It is implemented anyway...the final syntax is:
camera or global_settings {
    post_process {
        blur_matrix { xDim, yDim, Divisor, Levelling, < DATA > }
    }
}
--
Christopher James Huff - Personal e-mail: chr### [at] yahoo com
TAG(Technical Assistance Group) e-mail: chr### [at] tag povray org
Personal Web page: http://chrishuff.dhs.org/
TAG Web page: http://tag.povray.org/
On Wed, 26 Apr 2000 14:44:46 -0500, Chris Huff
<chr### [at] yahoo com> wrote:
>What is that supposed to do, other than just lightening or (with
>negative values) darkening the image? Maybe I misunderstood you...but it
>sounds like adding in a gray "pixel" with an intensity equal to the
>levelling value.
Glad you implemented it. Its purpose is to control contrast while the
divisor is used to control brightness (generally speaking). It is
especially useful if the divisor differs from the sum of the cells of
the convolution matrix. For example this:
<0,1,0,
1,5,1,
0,1,0>
will make a slight blur with a divisor of 9. If you lower the divisor,
the image gets brighter. Adding an offset of about 0.8 puts the
brightness back to about where it was but with a much increased
contrast. It works in the reverse direction, too, with a higher
divisor and negative offset.
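Mechanically, each output pixel is (weighted neighbourhood sum + levelling) / divisor, so the ratio of kernel sum to divisor sets the contrast slope while the levelling term shifts overall brightness. A minimal Python sketch of that mechanism (the image, function name, and parameter values here are purely illustrative, not part of the patch):

```python
# Sketch of the blur_matrix post-process as described in the thread:
# result = (weighted neighbourhood sum + levelling) / divisor.

def convolve_pixel(img, x, y, kernel, divisor, levelling):
    """Apply a 3x3 convolution at (x, y); edge pixels are clamped."""
    h, w = len(img), len(img[0])
    acc = 0.0
    for ky in range(3):
        for kx in range(3):
            sx = min(max(x + kx - 1, 0), w - 1)
            sy = min(max(y + ky - 1, 0), h - 1)
            acc += img[sy][sx] * kernel[ky][kx]
    return (acc + levelling) / divisor

# The slight-blur kernel from the example; its cells sum to 9.
kernel = [[0, 1, 0],
          [1, 5, 1],
          [0, 1, 0]]

flat = [[0.5] * 3 for _ in range(3)]  # uniform mid-grey test image

# divisor equal to the kernel sum: brightness preserved
print(convolve_pixel(flat, 1, 1, kernel, 9, 0.0))   # 0.5
# lower divisor: brighter, and the slope (contrast) increases
print(convolve_pixel(flat, 1, 1, kernel, 6, 0.0))   # 0.75
# a negative levelling value pulls the mid-grey back down
print(convolve_pixel(flat, 1, 1, kernel, 6, -1.5))  # 0.5
```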
Peter Popov ICQ : 15002700
Personal e-mail : pet### [at] usa net
TAG e-mail : pet### [at] tag povray org
In article <t0iegs0o3sm0ebto2qcv84npae607v7mfc@4ax.com>, Peter Popov
<pet### [at] usa net> wrote:
> Glad you implemented it. Its purpose is to control contrast while the
> divisor is used to control brightness (generally speaking). It is
> especially useful if the divisor differs from the sum of the cells of
> the convolution matrix. For example this:
>
> <0,1,0,
> 1,5,1,
> 0,1,0>
>
> will make a slight blur with a divisor of 9. If you lower the divisor,
> the image gets brighter. Adding an offset of about 0.8 puts the
> brightness back to about where it was but with a much increased
> contrast. It works in the reverse direction, too, with a higher
> divisor and negative offset.
Ah, I see now! I hadn't thought of using it along with the divisor...
Thanks for the suggestion.
--
Christopher James Huff - Personal e-mail: chr### [at] yahoo com
TAG(Technical Assistance Group) e-mail: chr### [at] tag povray org
Personal Web page: http://chrishuff.dhs.org/
TAG Web page: http://tag.povray.org/
In article <njUGOUFjcnSZ3qj34lbk51m8BluE@4ax.com>, Glen Berry
<7no### [at] ezwv com> wrote:
> How hard would it be to do something like the following?
>
> resultRed = ( ((R+G+B)/3)^0.95)*1.05
> resultGreen = (R+G+B)/3
> resultBlue = ((R+G+B)/3)^0.9
>
> This gives a fairly close approximation of Selenium Toning.
Hmm, now that I think of it...if I add "exponent" and "multiply"
filters...that could probably be written using multiple filters:
post_process {
    color_matrix < 0.3, 0.3, 0.3,
                   0.3, 0.3, 0.3,
                   0.3, 0.3, 0.3 >
    exponent < 0.95, 1, 0.9 >
    multiply < 1.05, 1, 1 >
}
> Or maybe this?
>
> resultRed = ((R+G+B)/3)^0.67
> resultGreen = ((R+G+B)/3)^0.85
> resultBlue = ((R+G+B)/3)
>
> This gives a fairly close approximation of Sepia Toning
post_process {
    color_matrix < 0.3, 0.3, 0.3,
                   0.3, 0.3, 0.3,
                   0.3, 0.3, 0.3 >
    exponent < 0.67, 0.85, 1 >
}
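Both toning recipes reduce to the same per-pixel shape: desaturate to (R+G+B)/3, then raise each channel to its own power and scale it. A small Python sketch of that pipeline (the function name and the sample pixel are illustrative, not part of the patch):

```python
# Per-pixel toning as discussed above: grey = (R+G+B)/3, then a
# per-channel exponent and multiplier.

def tone(rgb, exponents, multipliers):
    r, g, b = rgb
    grey = (r + g + b) / 3.0
    return tuple(grey ** e * m for e, m in zip(exponents, multipliers))

pixel = (0.2, 0.5, 0.8)  # arbitrary test pixel

# Selenium-like toning from the quoted formulas
selenium = tone(pixel, (0.95, 1.0, 0.9), (1.05, 1.0, 1.0))

# Sepia-like toning: red lifted most, blue left linear
sepia = tone(pixel, (0.67, 0.85, 1.0), (1.0, 1.0, 1.0))
print(selenium, sepia)
```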
> Or maybe even this?
>
> output_Red = 100*sin(R*pi*3)
> output_Green = 100*cos(G*pi*3)
> output_Blue = -100*sin(B*pi*3)
>
> This gives a very wild posterization effect
post_process {
    color_function < function { 100*sin(r(h,v)*pi*3) },
                     function { 100*cos(g(h,v)*pi*3) },
                     function { -100*sin(b(h,v)*pi*3) } >
}
I don't know enough about the inner workings of isosurface functions to
do this last one, though.
One other thing that might be possible: some kind of hierarchical linking
of post_process filters. Instead of a simple list, you could have
several "branches" that are combined at the end. I am not sure if I am
up to coding this, though. :-)
An example would be:
post_process {
    average {
        find_edges {...}
        find_edges {...}
        blur {...}
    }
}
This would do each filter individually, and then average their results.
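The combination step could be sketched like this, with each branch as a plain function on a pixel buffer (the branch filters and names here are stand-ins, not the real post_process code):

```python
# Sketch of hierarchical post_process branches: each branch filters the
# original image independently, then the results are averaged per pixel.

def average_branches(image, branches):
    results = [branch(image) for branch in branches]
    n = len(results)
    return [sum(px) / n for px in zip(*results)]

# Two toy "filters" standing in for find_edges, blur, etc.
invert = lambda img: [1.0 - p for p in img]
darken = lambda img: [p * 0.5 for p in img]

print(average_branches([0.0, 0.5, 1.0], [invert, darken]))
# [0.5, 0.375, 0.25] -- each output pixel is the mean of the branches
```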
--
Christopher James Huff - Personal e-mail: chr### [at] yahoo com
TAG(Technical Assistance Group) e-mail: chr### [at] tag povray org
Personal Web page: http://chrishuff.dhs.org/
TAG Web page: http://tag.povray.org/
Chris Huff <chr### [at] yahoo com> wrote:
: r, g, b - unclipped RGB colors.
: x, y, z - intersection point.
: u, v - UV coordinates.
: depth - distance to intersection point.
: inorm_x, inorm_y, inorm_z - surface normal at intersection point.
: pnorm_x, pnorm_y, pnorm_z - perturbed surface normal.
: dir_x, dir_y, dir_z - ray direction(if I succeed in adding it to
: the available information).
Don't tell me all this is stored into memory in order to be able to apply
the post processing?
Hmm... Let's count.
If we make a 1024x768 image that would be 786432 pixels.
I suppose that the floating point numbers are of 'double' type, which means
8 bytes (on PCs and most other systems).
This means that for each pixel the rgb info will take 3*8 bytes, the
intersection point 3*8 bytes, the uv-coordinates 2*8 bytes, the depth 8
bytes, surface normals and perturbed normals 6*8 bytes and finally the ray
direction 3*8 bytes.
Summing all this up we get:
786432*(3*8+3*8+2*8+8+6*8+3*8) = 113246208 bytes = 108 Megabytes.
Taking into account that the average computer has 128 Megabytes of memory,
that would eat it up pretty efficiently.
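The arithmetic can be double-checked in a couple of lines:

```python
# Reproducing the estimate above, assuming every field is an 8-byte double.
pixels = 1024 * 768                       # 786432
doubles_per_pixel = 3 + 3 + 2 + 1 + 3 + 3 + 3  # rgb, xyz, uv, depth,
                                               # two normals, direction
total = pixels * doubles_per_pixel * 8
print(total, total // 2**20)  # 113246208 bytes, 108 Megabytes
```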
If all this is not stored into memory, then forget this :)
--
main(i,_){for(_?--i,main(i+2,"FhhQHFIJD|FQTITFN]zRFHhhTBFHhhTBFysdB"[i]
):5;i&&_>1;printf("%s",_-70?_&1?"[]":" ":(_=0,"\n")),_/=2);} /*- Warp -*/
Btw, I think this is a good idea. Keep up the good work :)
--
main(i,_){for(_?--i,main(i+2,"FhhQHFIJD|FQTITFN]zRFHhhTBFHhhTBFysdB"[i]
):5;i&&_>1;printf("%s",_-70?_&1?"[]":" ":(_=0,"\n")),_/=2);} /*- Warp -*/
In article <390835eb@news.povray.org>, Warp <war### [at] tag povray org>
wrote:
> Don't tell me all this is stored into memory in order to be able to
> apply the post processing?
>
> Hmm... Let's count.
> If we make a 1024x768 image that would be 786432 pixels.
> I suppose that the floating point numers are of 'double' type, which
> means 8 bytes (in PC and most other systems).
True for most of these, but the COLOUR type is an array of 3 floats, not
doubles.
> This means that for each pixel the rgb info will take 3*8 bytes, the
> intersection point 3*8 bytes, the uv-coordinates 2*8 bytes, the depth
> 8 bytes, surface normals and perturbed normals 6*8 bytes and finally
> the ray direction 3*8 bytes.
> Summing all this up we get:
> 786432*(3*8+3*8+2*8+8+6*8+3*8) = 113246208 bytes = 108 Megabytes.
>
> Taking into account that the average computer has 128 Megabytes of
> memory, that would eat it up pretty efficiently.
>
> If all this is not stored into memory, then forget this :)
Currently, only the needed data is saved. Each post_process has a set of
flags indicating which data it needs, and only that data is
saved/loaded. This would be harder to do with the color_function filter,
since you would have to detect which functions and/or variables are
used, but it should still be possible.
Also, I don't know if this is the way it already is done, but it should
be possible to save to a file and only read in the data as it is used.
This would slow down some post_processes though...it would probably be
best as an option that people could turn on for lower-memory systems.
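The flag scheme described here might look roughly like this (flag names are illustrative; the per-channel sizes follow the thread's figures, with COLOUR stored as three floats):

```python
# Sketch of per-filter data flags: each filter declares which per-pixel
# channels it needs, and only the union of those channels is stored.

NEED_RGB, NEED_POINT, NEED_UV, NEED_DEPTH, NEED_NORMALS, NEED_DIR = (
    1 << i for i in range(6))

CHANNEL_BYTES = {
    NEED_RGB:     3 * 4,  # COLOUR: three floats, not doubles
    NEED_POINT:   3 * 8,  # intersection point, doubles
    NEED_UV:      2 * 8,
    NEED_DEPTH:   8,
    NEED_NORMALS: 6 * 8,  # raw + perturbed normal
    NEED_DIR:     3 * 8,
}

def bytes_per_pixel(filters):
    """Bytes stored per pixel for the union of all filters' needs."""
    needed = 0
    for f in filters:
        needed |= f
    return sum(size for flag, size in CHANNEL_BYTES.items() if needed & flag)

# a blur only needs colors; an edge filter might also want depth
print(bytes_per_pixel([NEED_RGB]))              # 12
print(bytes_per_pixel([NEED_RGB, NEED_DEPTH]))  # 20
```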
--
Christopher James Huff - Personal e-mail: chr### [at] yahoo com
TAG(Technical Assistance Group) e-mail: chr### [at] tag povray org
Personal Web page: http://chrishuff.dhs.org/
TAG Web page: http://tag.povray.org/
Chris Huff <chr### [at] yahoo com> writes:
> In article <390835eb@news.povray.org>, Warp <war### [at] tag povray org>
> wrote:
>
> > Don't tell me all this is stored into memory in order to be able to
> > apply the post processing?
[...]
> > Summing all this up we get:
> > 786432*(3*8+3*8+2*8+8+6*8+3*8) = 113246208 bytes = 108 Megabytes.
> >
> > Taking into account that the average computer has 128 Megabytes of
> > memory, that would eat it up pretty efficiently.
> >
> > If all this is not stored into memory, then forget this :)
>
> Currently, only the needed data is saved. Each post_process has a set of
> flags indicating which data it needs, and only that data is
> saved/loaded. This would be harder to do with the color_function filter,
> since you would have to detect which functions and/or variables are
> used, but it should still be possible.
> Also, I don't know if this is the way it already is done, but it should
> be possible to save to a file and only read in the data as it is used.
> This would slow down some post_processes though...it would probably be
> best as an option that people could turn on for lower-memory systems.
Mmmh, I don't like the idea of saving a temporary file that is several
times the size of the final image. Perhaps you should consider applying
the filter on the fly. I know that several of the existing and proposed
filters need more than one pixel at a time. Perhaps it is possible
to determine which ones you need and forget the others.
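On-the-fly filtering amounts to a rolling window of scanlines: a convolution with a 3-row kernel only ever needs three rows in memory at once. A rough sketch (vertical box filter stands in for a real kernel; names are illustrative):

```python
# Streaming filter sketch: hold only the scanlines the kernel can reach.
from collections import deque

def filter_streaming(rows, kernel_rows=3):
    """Yield vertically box-filtered rows, keeping kernel_rows in memory."""
    window = deque(maxlen=kernel_rows)  # old rows fall out automatically
    for row in rows:
        window.append(row)
        if len(window) == kernel_rows:
            yield [sum(col) / kernel_rows for col in zip(*window)]

rows = [[0.0, 0.0],
        [0.3, 0.6],
        [0.6, 0.6]]
print(list(filter_streaming(rows)))  # one output row: column means
```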
Thomas
--
http://thomas.willhalm.de/ (includes pgp key)
Thomas Willhalm wrote:
> Mmmh, I don't like the idea of saving a temporary file that has several
> times the size of the final image. Perhaps you should consider to apply
> the filter on the fly. I know that you need more than one pixel a time for
> several of the exististing and proposed filters. Perhaps it is possible
> to determine which one you need and forget the others.
>
What about the possibility of aborting/continuing a trace? It's already a
problem with the current post-processing implementation (focal blur behaves
strangely when a trace that has been aborted is resumed).
G.
In article <390835eb@news.povray.org> , Warp <war### [at] tag povray org> wrote:
> Don't tell me all this is stored into memory in order to be able to apply
> the post processing?
If you look into the MegaPOV 0.4 source code you will find that at most
160 bytes per pixel are kept in memory (using the defaults: COLOUR
components are float, 4 bytes, and DBL is double, 8 bytes). However, there
are several flags in there which, I guess, reduce the amount of data stored
per pixel.
Thorsten