Christopher James Huff <cja### [at] earthlinknet> wrote:
> Roll "sky_sphere" into "background". With the present name, too many
> people mistake it for an object.
Another handy feature for "background" would be support for a pigment
(instead of the current support for a single color).
The pigment could be evaluated, for example, at <0,0,0> in the lower
left pixel of the image and <1,1,0> at the upper right pixel. The
functionality would be pretty similar to using the alpha channel (+ua),
but instead of blending to transparent, the image would blend to the
given pigment.
One of the most useful applications for this would be to specify a bitmap
for the background (which is often requested by people).
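For example, the syntax could look something like this (purely hypothetical sketch; neither a pigment inside background nor its screen-space evaluation exists in current POV-Ray):

```pov
// Hypothetical syntax, not valid in any current POV-Ray:
// the pigment would be evaluated in screen space, at <0,0,0> for
// the lower left pixel and <1,1,0> for the upper right, and pixels
// that +ua would leave transparent would blend to it instead.
background {
  pigment {
    gradient y
    color_map { [0 rgb <0.1,0.2,0.5>] [1 rgb <0.8,0.9,1.0>] }
  }
}
```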
--
#macro N(D)#if(D>99)cylinder{M()#local D=div(D,104);M().5,2pigment{rgb M()}}
N(D)#end#end#macro M()<mod(D,13)-6mod(div(D,13)8)-3,10>#end blob{
N(11117333955)N(4254934330)N(3900569407)N(7382340)N(3358)N(970)}// - Warp -
Warp wrote:
>
> Christopher James Huff <cja### [at] earthlinknet> wrote:
> > Roll "sky_sphere" into "background". With the present name, too many
> > people mistake it for an object.
>
> Another handy feature for "background" would be support for a pigment
> (instead of the current support for a single color).
> The pigment could be evaluated, for example, at <0,0,0> in the lower
> left pixel of the image and <1,1,0> at the upper right pixel. The
> functionality would be pretty similar to using the alpha channel (+ua),
> but instead of blending to transparent, the image would blend to the
> given pigment.
>
> One of the most useful applications for this would be to specify a bitmap
> for the background (which is often requested by people).
How would you constrain it to the viewpoint of the camera?
--
Ken Tyler
In article <3e89d9a7@news.povray.org>, Warp <war### [at] tagpovrayorg>
wrote:
> One of the most useful applications for this would be to specify a bitmap
> for the background (which is often requested by people).
This would be handled by something else. Your definition only works for
camera rays; it is meaningless for reflected or refracted rays. I can
think of two (non-exclusive) possibilities: a post-process feature, and
a programmable camera feature which lets you specify every detail of the
camera... basically coding the pixel-level tracing in POV code. Some
built-in functions would be made available to make new cameras based on
existing ones. In this case, you would use one of these functions and a
pigment function to determine the final pixel color.
--
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tagpovrayorg
http://tag.povray.org/
On Tue, 01 Apr 2003 21:58:55 -0500, Christopher James Huff
<cja### [at] earthlinknet> wrote:
> I can think of two (non-exclusive) possibilities: a post-process feature
already done for the future MegaPOV
> and a programmable camera feature which lets you specify every detail of the
> camera...basically coding the pixel level tracing in POV code.
already done for the future MegaPOV, if the user_defined camera type is enough:
http://news.povray.org/search/?s=user_defined
> pigment function to determine the final pixel color.
If you mean something like the camera_view pigment, then such a thing is
also already done for the future MegaPOV.
ABX
Christopher James Huff <cja### [at] earthlinknet> wrote:
> Your definition only works for
> camera rays, it is meaningless for reflected or refracted rays.
If you read my article, I said that it would work in the same way
as the alpha channel (+ua) works.
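In other words (my restatement of the +ua-style blend described above, with `a` standing for the pixel's alpha):

```pov
// For each pixel, with c = traced color and a = its alpha:
//   final = c + (1 - a) * pigment(u, v)
// wherever +ua would make the pixel (partly) transparent, the
// evaluated background pigment shows through instead.
```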
--
#macro M(A,N,D,L)plane{-z,-9pigment{mandel L*9translate N color_map{[0rgb x]
[1rgb 9]}scale<D,D*3D>*1e3}rotate y*A*8}#end M(-3<1.206434.28623>70,7)M(
-1<.7438.1795>1,20)M(1<.77595.13699>30,20)M(3<.75923.07145>80,99)// - Warp -
In article <3e8b35c1@news.povray.org>, Warp <war### [at] tagpovrayorg>
wrote:
> Christopher James Huff <cja### [at] earthlinknet> wrote:
> > Your definition only works for
> > camera rays, it is meaningless for reflected or refracted rays.
>
> If you read my article, I said that it would work in the same way
> as the alpha channel (+ua) works.
Ok, but that doesn't really help. It's still nothing like the
"background" or "sky_sphere" features.
Would a post_process filter that used the alpha channel to overlay the
image over another do what you want?
--
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tagpovrayorg
http://tag.povray.org/
In article <pf6l8vgj0hf1j09dqfhee960ogrekdi7r7@4ax.com>,
ABX <abx### [at] abxartpl> wrote:
> > and a programmable camera feature which lets you specify every detail of
> > the
> > camera...basically coding the pixel level tracing in POV code.
>
> already done for future MegaPOV in case user_defined camera type is enough
> http://news.povray.org/search/?s=user_defined
I doubt it, unless somebody else has worked up a more capable function
language. I'm talking about something more like (roughly):
define BasicCamera = camera {...usual camera stuff...}

special_camera {
    function trace_pixel(x, y) {
        define pixelColor = BasicCamera.trace_pixel(x, y);
        return pixelColor + (1 - pixelColor.alpha)
            * bkgndPigment(x/image_width, y/image_height);
    }
}
> > pigment function to determine the final pixel color.
>
> In case you mean something like camera_view pigment, then such a thing is
> already done for future MegaPOV
Again, highly doubtful. What I'm talking about would require something
like my G project (now called Amber), which hasn't made it into a really
usable state yet.
--
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tagpovrayorg
http://tag.povray.org/
Christopher James Huff <cja### [at] earthlinknet> wrote:
> Would a post_process filter that used the alpha channel to overlay the
> image over another do what you want?
Yes, but adding a post-process engine for this purpose is overkill
(since it can be done on a pixel-by-pixel basis while rendering).
--
#macro M(A,N,D,L)plane{-z,-9pigment{mandel L*9translate N color_map{[0rgb x]
[1rgb 9]}scale<D,D*3D>*1e3}rotate y*A*8}#end M(-3<1.206434.28623>70,7)M(
-1<.7438.1795>1,20)M(1<.77595.13699>30,20)M(3<.75923.07145>80,99)// - Warp -
In article <3e8b7c4e@news.povray.org>, Warp <war### [at] tagpovrayorg>
wrote:
> Yes, but adding a post-process engine for this purpose is overkill
> (since it can be done on a pixel-by-pixel basis while rendering)
It would be if it were added only for this one filter. If a
post-processing engine already exists, I think a post_process filter is a
better choice than a separate special-purpose feature.
And if one doesn't exist, I think adding one would still be a better
idea: it solves more problems with less work than adding tons of little
one-use features.
--
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tagpovrayorg
http://tag.povray.org/
On Wed, 02 Apr 2003 17:07:06 -0500, Christopher James Huff
<cja### [at] earthlinknet> wrote:
> I doubt it, unless somebody else has worked up a more capable function
> language. I'm talking about something more like (roughly):
>
> define BasicCamera = camera {...usual camera stuff...}
>
> special_camera {
> function trace_pixel(x, y) {
> define pixelColor = BasicCamera.trace_pixel(x, y);
> return pixelColor + (1 -
> pixelColor.alpha)*bkgndPigment(x/image_width, y/image_height);
> }
> }
> > > pigment function to determine the final pixel color.
(Are you sure you used the correct math for this effect? I used weighting.)
That's possible, and already tested with my post_process implementation (if
you wish, I can send a sample image to p.b.i, but its content seems obvious).
It can be done with the following script:
#version unofficial megapov 1.1;
#include "pprocess.inc"
Use_PP_Color_Output() // this macro turns on caching of color output and
// defines internal functions f_output_red,
// f_output_green, f_output_blue and f_output_alpha
// for further usage
#declare back_Pig=function{pigment{agate}}; // pigment for background
#declare f_avg=function(v1,v2,w){(1-w)*v1+w*v2}; // average/weight
sphere{0 1 translate z*4 pigment{rgb 1}}
light_source{-99 1}
camera{}
global_settings{
post_process{
// functions for four channels of output
function{f_avg(f_output_red(x,y) ,back_Pig(x,y,0).x,f_output_alpha(x,y))}
function{f_avg(f_output_green(x,y),back_Pig(x,y,0).y,f_output_alpha(x,y))}
function{f_avg(f_output_blue(x,y) ,back_Pig(x,y,0).z,f_output_alpha(x,y))}
function{0}
save_file "with_background.png"
}
}
Please note that there will be two passes, one for rendering and one for
post_process. But it is also possible to do it in one pass: instead of
using the internal functions with the rendering output, you could use the
new camera_view pigment type. The engine recognizes when the rendering
output is unused (it simply checks whether the output internal functions
are defined in the script), so rendering is skipped (or rather does not
use trace) and the post_process loop starts directly. This is possible
and already tested with the following script:
#version unofficial megapov 1.1;
#declare back_Pig=function{pigment{agate}}; // pigment for background
#declare camb_Pig=function{pigment{camera_view{} // rendering output
scale -y translate y}}; // has to be transformed
// like all image maps
#declare f_avg=function(v1,v2,w){(1-w)*v1+w*v2}; // average/weight
sphere{0 1 translate z*4 pigment{rgb 1}}
light_source{-99 1}
global_settings{
post_process{
// functions for four channels of output
function{f_avg(camb_Pig(x,y,0).x,back_Pig(x,y,0).x,camb_Pig(x,y,0).transmit)}
function{f_avg(camb_Pig(x,y,0).y,back_Pig(x,y,0).y,camb_Pig(x,y,0).transmit)}
function{f_avg(camb_Pig(x,y,0).z,back_Pig(x,y,0).z,camb_Pig(x,y,0).transmit)}
function{0}
save_file "with_background.png"
}
}
Please note that you have more control in this implementation than in your
script, because you can define different behaviour for each channel. To
avoid such long syntax in every scene, you can simply collect your favourite
post_process effects in macros (as is done for the reimplementations of the
post_processing effects from previous MegaPOVs) as well as in #declare
statements. Imagine:
#macro PP_Clip_Colors(Color_Min,Color_Max)
#local cminr=Color_Min.red;
#local cmaxr=Color_Max.red;
#local cming=Color_Min.green;
#local cmaxg=Color_Max.green;
#local cminb=Color_Min.blue;
#local cmaxb=Color_Max.blue;
#local cmina=Color_Min.transmit;
#local cmaxa=Color_Max.transmit;
function{clip(f_pp_red(u,v,-1) ,cminr,cmaxr)}
function{clip(f_pp_green(u,v,-1),cming,cmaxg)}
function{clip(f_pp_blue(u,v,-1) ,cminb,cmaxb)}
function{clip(f_pp_alpha(u,v,-1),cmina,cmaxa)}
#end
global_settings{
post_process{ PP_Clip_Colors(rgb.1,rgb.9) }
}
Moreover, instead of referring to the rendering output, you can refer to
one of the previous post_process effects. In the script below, the first
effect reorders the channels, the second inverts them, and the third
mirrors them along y:
#version unofficial megapov 1.1;
#include "pprocess.inc"
Use_PP_Effects_Output() // defines internal functions f_pp_red, f_pp_green,
// f_pp_blue and f_pp_alpha for further usage
global_settings{
post_process{
function{f_output_green(x,y)} // green instead of red
function{f_output_blue(x,y)} // blue instead of green
function{f_output_red(x,y)} // red instead of blue
function{f_output_alpha(x,y)} // transparency not changed
}
post_process{
function{1-f_pp_green(x,y,1)} // invert green
function{1-f_pp_blue(x,y,1)} // invert blue
function{1-f_pp_red(x,y,1)} // invert red
function{f_pp_alpha(x,y,1)} // transparency not changed
save_file "inversed.png"
}
post_process{
function{f_pp_green(x,1-y,2)} // mirror along y
function{f_pp_blue(x,1-y,2)} // mirror along y
function{f_pp_red(x,1-y,2)} // mirror along y
function{f_pp_alpha(x,1-y,2)} // mirror along y
}
}
In the above example there will be only two passes: one for rendering and
one for the second effect. The first effect is calculated inline. The third
effect is not calculated at all, because it has no save_file parameter and
is not used by any other effect.
Please note the third parameter of the f_pp_* functions. If it is positive,
it is the index of the previous effect whose value should be calculated to
get the value; index 0 means the original rendering output. A negative
index is relative to the current effect (simpler when a long list of
effects is used).
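As a minimal sketch of that indexing (only one channel function is shown per effect for brevity; a real post_process block takes four, as in the scripts above):

```pov
global_settings{
  post_process{ function{f_output_red(x,y)} }  // effect 1: rendering output (index 0)
  post_process{ function{f_pp_red(x,y,1)} }    // effect 2: reads effect 1 by absolute index
  post_process{ function{f_pp_red(x,y,-1)} }   // effect 3: index -1 = the previous
}                                              // effect, here effect 2
```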
I hope you like this new post_processing.
> > In case you mean something like camera_view pigment, then such a thing is
> > already done for future MegaPOV
>
> Again, highly doubtful.
If you could deliver an example script with a description of the expected
image, we could clarify it.
ABX