From: Kenneth
Subject: generating an image_map name: string syntax problem
I've come up with an interesting little camera motion-blur idea for animation
(which I'll post when it's done), and the code is *almost* there--but I've hit
a brick wall: a single problem that I haven't yet been able to work around is
keeping me from implementing the whole idea. It concerns an image_map and its
syntax.
Here's a simplified explanation:
I have an image--named "an animation image001.png"--(one of 100 such numbered
images) which I want to apply as an image_map onto a box. But instead of using
that name directly in an image_map, I'm trying to use string commands to
generate the name (to make some 'generic' code that will easily work with any
and all image names and numbers; the details are unimportant for now.) I.e.,
rather than directly doing this--
image_map{png "an animation image001.png"}
I want to do this:
#declare image_name = "an animation image" // the BASIC name of the image; no
// numbers here. A string.
#declare image_type = "png" // also a string
#declare total_images = "100" // the number of images I've pre-rendered; also
// made into a string, for the next step.
#declare zeros = strlen(total_images); // returns the number of digits in
// 'total_images'--3 in this case
#declare counter = 1;
#declare my_image = concat(image_name,str(counter,-zeros,0),".",image_type)
When I #debug my_image, it shows up exactly as...
an animation image001.png
which is the exact name of the actual image that I want to use. But when I try
to plug this into an image_map...
image_map{png "my_image"}
it *should* decode to "an animation image001.png"--but it doesn't work; it
generates a fatal error, "Cannot open PNG file."
My initial reaction was that I had made a string syntax error; but I've tried
lots of other ideas that all fail as well...
#declare my_image = concat("\"",image_name,str(frame_number,-zeros,0),".",image_type,"\"")
which #debugs to "an animation image001.png" (WITH the quotes)
then...
image_map{png my_image}
Or this...
#declare my_image = concat(image_name,str(frame_number,-zeros,0))
which #debugs to an animation image001
then...
image_map{png "my_image.png"}
There must be something fundamental that I'm doing wrong, or else I really don't
understand how POV-Ray reads strings in image_maps. I've looked through the
docs for an explanation, but haven't come across a solution. I seem to recall
a previous post that dealt with a similar issue, but I don't know where to
look. Any help would be GREATLY appreciated; I'm *so close* to working out my
animation blurring scheme that I can taste it!
Ken W.
From: Chris B
Subject: Re: generating an image_map name: string syntax problem
Date: 18 Feb 2009 18:17:19
Message: <499c96ff$1@news.povray.org>
"Kenneth" <kdw### [at] earthlinknet> wrote in message
news:web.499c91fb755b086ff50167bc0@news.povray.org...
> I have an image--named "an animation image001.png"--(one of 100 such
> numbered
> images) which I want to apply as an image_map onto a box. But instead of
> using
> that name directly in an image_map, I'm trying to use string commands to
> generate the name ...
>
You've got too many quotes. Try:
#declare my_image =
concat(image_name,str(counter,-zeros,0),".",image_type)
with
image_map{png my_image}
Regards,
Chris B.
From: Zeger Knaepen
Subject: Re: generating an image_map name: string syntax problem
Date: 18 Feb 2009 18:18:24
Message: <499c9740$1@news.povray.org>
You're declaring and initialising the variable my_image, but then using the
literal string "my_image".
Just use image_map {png my_image} instead of image_map {png "my_image"}.
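For reference, here's the whole thing put together (just a minimal sketch; the
box and its pigment/finish are only placeholders to have something to map the
image onto):
--- START CODE ---
// build the file name as a string, then hand the string *variable*
// (no quotes) to image_map
#declare image_name = "an animation image"  // base name, no number
#declare image_type = "png"
#declare total_images = "100"
#declare zeros = strlen(total_images);      // 3 digits in "100"
#declare counter = 1;                       // or frame_number in an animation loop
#declare my_image = concat(image_name,str(counter,-zeros,0),".",image_type)

box {
    <0,0,0>, <1,1,0.01>
    pigment {
        image_map { png my_image }          // the variable, not "my_image"
    }
    finish { ambient 1 }
}
--- END CODE ---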
(btw, creating camera motion-blur by averaging 24-bit images won't give you
correct results when there are very bright colors in the scene. I believe
it's better to use MegaPOV's camera_view pigment instead: no need for
temporary images, and it results in accurate motion blur)
cu!
--
#macro G(b,e)b+(e-b)*C/50#end#macro _(b,e,k,l)#local C=0;#while(C<50)
sphere{G(b,e)+3*z.1pigment{rgb G(k,l)}finish{ambient 1}}#local C=C+1;
#end#end _(y-x,y,x,x+y)_(y,-x-y,x+y,y)_(-x-y,-y,y,y+z)_(-y,y,y+z,x+y)
_(0x+y.5+y/2x)_(0x-y.5+y/2x) // ZK http://www.povplace.com
From: Kenneth
Subject: Re: generating an image_map name: string syntax problem
Wonderful news: The idea worked! Thanks to both of you for the QUICK responses.
(Geez, I thought I already tried this idea, and got negative results; apparently
I skipped your particular permutation.) I've been fiddling with this problem for
two days; what a welcome relief!
KW
From: Kenneth
Subject: Re: generating an image_map name: string syntax problem
"Zeger Knaepen" <zeg### [at] povplacecom> wrote:
> (btw, creating camera motion-blur by averaging 24-bit images won't give you
> correct results when there are very bright colors in the scene. I believe
> it's better to use MegaPOV's camera_view pigment instead: no need for
> temporary images, and it results in accurate motion blur)
>
Yes, you've guessed my technique. In experiments I've done so far--using POV's
'average' pigment pattern as a post-processing '2nd render' step--my 10-frame
composite blur actually looks very nice (the colors and shades seem to
reproduce well); but I didn't look closely enough to notice the problem you
mentioned. (I was too excited just to see the scheme work!) I'll take a closer
look at my composite image.
Does MegaPOV actually blur the motion *during* a single-frame render?
Ken W.
From: Zeger Knaepen
Subject: Re: generating an image_map name: string syntax problem
Date: 19 Feb 2009 04:33:14
Message: <499d275a$1@news.povray.org>
|
"Kenneth" <kdw### [at] earthlinknet> wrote in message
news:web.499cf56640230f63f50167bc0@news.povray.org...
> Does MegaPOV actually blur the motion *during* a single-frame render?
if you ask it to :)
in MegaPOV you have a camera_view pigment, which gives you, with some
scripting, the possibility to have multiple camera's in your scene and
average the result of all of them. There are two ways of doing this. The
first is simply using an average pattern:
--- START CODE ---
#local Samples=50;
#local Clock=0;
#local Clock_Delta=.1;

#macro CameraLocation(Clock)
    #local R=<1,6,-6+Clock*10>;
    R
#end

#macro CameraLookAt(Clock)
    #local R=<0,1.5,Clock*10>;
    R
#end

#local CameraAngle=80;

#declare Camera_motion_blur=
    texture {
        pigment {
            average
            pigment_map {
                #declare I=0;
                #while (I<Samples)
                    #declare CurrentClock=Clock+Clock_Delta*I/Samples;
                    #declare Location=CameraLocation(CurrentClock);
                    #declare Look_at=CameraLookAt(CurrentClock);
                    [1 camera_view { location Location angle CameraAngle look_at Look_at }]
                    #declare I=I+1;
                #end
            }
        }
        finish { ambient 1 diffuse 0 }
    }

#include "screen.inc"

// make sure your real camera is out of the way of the scene
Set_Camera_Location(<500,5000,500>)
Set_Camera_Look_At(y*10000)
Screen_Plane(Camera_motion_blur, 1, 0, 1)
--- END CODE ---
And the second way is by using MegaPOV's noise_pigment. It renders faster,
but I use this method more for testing-purposes only as it doesn't give
really accurate results:
--- START CODE ---
#declare Camera_motion_blur=
    texture {
        pigment {
            pigment_pattern { noise_pigment {1 rgb 0 rgb 1} }
            pigment_map {
                #declare I=0;
                #while (I<Samples)
                    #declare CurrentClock=Clock+Clock_Delta*I/Samples;
                    #declare Location=CameraLocation(CurrentClock);
                    #declare Look_at=CameraLookAt(CurrentClock);
                    [I/Samples camera_view { location Location angle CameraAngle look_at Look_at }]
                    #declare I=I+1;
                #end
            }
        }
        finish { ambient 1 diffuse 0 }
    }
--- END CODE ---
hope this helps :)
cu!
--
#macro G(b,e)b+(e-b)*C/50#end#macro _(b,e,k,l)#local C=0;#while(C<50)
sphere{G(b,e)+3*z.1pigment{rgb G(k,l)}finish{ambient 1}}#local C=C+1;
#end#end _(y-x,y,x,x+y)_(y,-x-y,x+y,y)_(-x-y,-y,y,y+z)_(-y,y,y+z,x+y)
_(0x+y.5+y/2x)_(0x-y.5+y/2x) // ZK http://www.povplace.com
From: Kenneth
Subject: Re: generating an image_map name: string syntax problem
"Zeger Knaepen" <zeg### [at] povplacecom> wrote:
> "Kenneth" <kdw### [at] earthlinknet> wrote in message
> news:web.499cf56640230f63f50167bc0@news.povray.org...
> > Does MegaPOV actually blur the motion *during* a single-frame render?
>
> if you ask it to :)
> in MegaPOV you have a camera_view pigment, which gives you, with some
> scripting, the possibility to have multiple cameras in your scene and
> average the result of all of them. There are two ways of doing this. The
> first is simply using an average pattern:
> --- START CODE ---
> .......
> #include "screen.inc"
>
> //make sure your real camera is out of the way of the scene
> Set_Camera_Location(<500,5000,500>)
> Set_Camera_Look_At(y*10000)
> Screen_Plane (Camera_motion_blur, 1, 0, 1)
> --- END CODE ---
Fascinating. It's actually very close to a paradigm I had come up with (but only
as a thought experiment) concerning a feature that could be added to POV-Ray
itself--*internally* rendering multiple 'snapshots' of a scene over time, then
averaging them internally, then spitting out the final blurred frame. So
MegaPOV already has that; cool.
You had mentioned earlier that this MegaPOV averaging method produces a more
natural blur than using POV the way I had worked it out (i.e., just averaging
multiple pre-rendered 24-bit frames during a 2nd render.) Is that solely
because you've given MegaPOV fifty 'camera views' to average, vs. my smaller
number of ten? Or is there something about MegaPOV's internal method that
inherently produces a more accurate blur? I'm most curious about the
difference.
BTW, there *is*, at present, an inherent problem with my own blurring scheme: It
currently applies the multiple averaged images onto a flat box, positioned in
front of my orthographic camera for the 2nd render. And it's quite tricky to
scale the box, to get an exact 1:1 correlation between the pre-rendered images'
pixels and the 'new' camera's camera rays. (I.e., the 2nd camera's rays should
exactly intersect each pixel in the averaged composite image, to get a truly
accurate 2nd render.) I looked through 'screen.inc' to see what I could use
there instead of my box idea, but I couldn't discern if it produces this
*exact* 1:1 correspondence. I'm thinking that it does, but I haven't tried yet.
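(For reference, my averaging step currently looks something like this--a
simplified sketch, with the frame count and file names as placeholders:)
--- START CODE ---
// 2nd render: average the pre-rendered frames on a flat box placed in
// front of an orthographic camera (frame count and base name are placeholders)
#declare n_frames = 10;

box {
    <0,0,0>, <1,1,0.01>
    pigment {
        average
        pigment_map {
            #declare i = 1;
            #while (i <= n_frames)
                #declare frame_file = concat("an animation image", str(i,-3,0), ".png")
                [1 image_map { png frame_file once }]
                #declare i = i + 1;
            #end
        }
    }
    finish { ambient 1 diffuse 0 }
    // the tricky part: scaling/positioning this box so each of the 2nd
    // camera's rays hits exactly one pixel of the source images
}
--- END CODE ---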
>
> And the second way is by using MegaPOV's noise_pigment. It renders faster,
> but I use this method more for testing-purposes only as it doesn't give
> really accurate results:
This is a weird one to understand. I need to read the MegaPOV documentation to
get a mental picture of what happens here. I'm wondering if it has anything to
do with the idea of 'frameless rendering'?
http://www.acm.org/crossroads/xrds3-4/ellen.html
That introduced me to a new term: 'temporal antialiasing.' The method seems to
be more applicable to real-time rendering, though (of game graphics, for
example.)
Ken W.
From: Zeger Knaepen
Subject: Re: generating an image_map name: string syntax problem
Date: 20 Feb 2009 17:11:14
Message: <499f2a82@news.povray.org>
|
"Kenneth" <kdw### [at] earthlinknet> wrote in message
news:web.499f1e4d40230f63f50167bc0@news.povray.org...
> Fascinating. It's actually very close to a paradigm I had come up with
> (but only
> as a thought experiment) concerning a feature that could be added to
> POV-Ray
> itself--*internally* rendering multiple 'snapshots' of a scene over time,
> then
> averaging them internally, then spitting out the final blurred frame. So
> MegaPOV already has that; cool.
>
> You had mentioned earlier that this MegaPOV averaging method produces a
> more
> natural blur than using POV the way I had worked it out (i.e., just
> averaging
> multiple pre-rendered 24-bit frames during a 2nd render.) Is that solely
> because you've given MegaPOV fifty 'camera views' to average, vs. my
> smaller
> number of ten? Or is there something about MegaPOV's internal method that
> inherently produces a more accurate blur? I'm most curious about the
> difference.
The main difference is where you have very bright spots. Normal 24-bit
images have no way of storing those colors: in POV-Ray terms, all
color components >1 are clipped to 1, so averaging those images will not
give accurate results.
Example: let's put it in 1D and black&white. Let's say you have the
following frames (every line is a frame)
.1 .1 8 .1 .1
.1 .1 .1 8 .1
.1 .1 .1 .1 8
(so, that's like a very bright spot moving to the right :))
Your 24-bit prerendered images will have them stored like:
.1 .1 1 .1 .1
.1 .1 .1 1 .1
.1 .1 .1 .1 1
and averaging those frames will result in the following:
.1 .1 .4 .4 .4
while the actual result should be:
.1 .1 2.73 2.73 2.73
(which will be stored in a 24bit image as .1 .1 1 1 1)
This makes all the difference between a realistic animation and a "there's
just something synthetic about this!" animation.
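(a quick SDL sanity-check of that bright pixel, if you want to see the numbers
for yourself:)
--- START CODE ---
// average of the clipped values vs. the true average for the bright pixel
#declare Clipped_Avg = (1 + .1 + .1)/3;  // 0.4
#declare True_Avg    = (8 + .1 + .1)/3;  // 2.733...
#debug concat("clipped: ", str(Clipped_Avg,0,3), "  true: ", str(True_Avg,0,3), "\n")
--- END CODE ---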
> BTW, there *is*, at present, an inherent problem with my own blurring
> scheme: It
> currently applies the multiple averaged images onto a flat box, positioned
> in
> front of my orthographic camera for the 2nd render. And it's quite tricky
> to
> scale the box, to get an exact 1:1 correlation between the pre-rendered
> images'
> pixels and the 'new' camera's camera rays. (I.e., the 2nd camera's rays
> should
> exactly intersect each pixel in the averaged composite image, to get a
> truly
> accurate 2nd render.) I looked through 'screen.inc' to see what I could
> use
> there instead of my box idea, but I couldn't discern if it produces this
> *exact* 1:1 correspondence. I'm thinking that it does, but I haven't tried
> yet.
I suppose screen.inc gives a 1:1 correlation, as long as your output-image
is the same size as the input-image. Be sure though not to use
anti-aliasing and/or jitter.
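If it helps, something along these lines should give you the exact mapping (a
rough sketch; the image_map file name is just a placeholder for whatever
texture you currently put on the box):
--- START CODE ---
#include "screen.inc"

// whatever you currently map onto the box -- a single image_map shown
// here as a placeholder (it could just as well be your 10-frame average)
#declare Blur_Texture =
    texture {
        pigment { image_map { png "an animation image001.png" once } }
        finish { ambient 1 diffuse 0 }
    }

// camera position doesn't matter much, as long as nothing else sits in view
Set_Camera_Location(<0,0,-1>)
Set_Camera_Look_At(<0,0,0>)

// stretch the texture over the full view, from <0,0> to <1,1>; render at
// the same resolution as the source image, without anti-aliasing or jitter
Screen_Plane(Blur_Texture, 1, <0,0>, <1,1>)
--- END CODE ---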
>> And the second way is by using MegaPOV's noise_pigment. It renders
>> faster,
>> but I use this method more for testing-purposes only as it doesn't give
>> really accurate results:
>
> This is a weird one to understand. I need to read the MegaPOV
> documentation to
> get a mental picture of what happens here. I'm wondering if it has
> anything to
> do with the idea of 'frameless rendering'?
I guess there's some similarity, but it certainly isn't frameless rendering.
With frameless rendering, every ray (or every pixel when not using
anti-aliasing) is shot at a random point in time in the given time-interval.
With perfect anti-aliasing, this will produce perfect accuracy. My second
method also uses a random time per ray, but only from a predefined (quite
small) subset of all possible points in time. Even with perfect
anti-aliasing the best you'll get is exactly the same as the first method.
The only advantage of this second method, is that without anti-aliasing it
renders much faster than the first method, while still giving you a fairly
good idea of what's going on in the animation, making it ideal for
test-renders :)
> http://www.acm.org/crossroads/xrds3-4/ellen.html
That site seems to use a slightly different definition of frameless
rendering than I used, but the idea remains the same.
> That introduced me to a new term: 'temporal antialiasing.' The method
> seems to
> be more applicable to real-time rendering, though (of game graphics, for
> example.)
as is the definition of frameless rendering that site uses :)
Interesting stuff though!
cu!
--
#macro G(b,e)b+(e-b)*C/50#end#macro _(b,e,k,l)#local C=0;#while(C<50)
sphere{G(b,e)+3*z.1pigment{rgb G(k,l)}finish{ambient 1}}#local C=C+1;
#end#end _(y-x,y,x,x+y)_(y,-x-y,x+y,y)_(-x-y,-y,y,y+z)_(-y,y,y+z,x+y)
_(0x+y.5+y/2x)_(0x-y.5+y/2x) // ZK http://www.povplace.com
From: Kenneth
Subject: Re: generating an image_map name: string syntax problem
"Zeger Knaepen" <zeg### [at] povplacecom> wrote:
>
> The main difference is where you have very bright spots. Normal 24bit
> images have no way of storing those colors, in POV-Ray-terms, all
> color-components >1 are clipped to 1, hence averaging those images will not
> give accurate results.
>
> Example: let's put it in 1D and black&white. Let's say you have the
> following frames (every line is a frame)...
>
> This makes all the difference between a realistic animation and a "there's
> just something synthetic about this!"-animation
Oh! I'm seeing the problem now. Thanks for walking me through it with your good
example. Good food for thought. I guess my own blur method will be a
*temporary* one now, until I get to know MegaPOV. :-(
> I suppose screen.inc gives a 1:1 correlation, as long as your output-image
> is the same size as the input-image. Be sure though not to use
> anti-aliasing and/or jitter.
Right, right and right. (I actually tried antialiasing while generating some
blurred composite frames--the original images had no AA--as a lazy man's way of
getting AA 'on the cheap.' And it just didn't look very good. Not that I was
really expecting it to...)
BTW, my overall methodology of turning my final POV 'blur' renders into MPEG
animation is to use monkeyjam to gather all the frames together (and to
temporarily see the animation), then use that along with the nice xvid MPEG
codec to generate the animation file, which I then view in Windows Media
Player. But *somewhere* in this chain, something isn't exactly 'right'--the
final animation viewed in WMP has what looks like slightly increased contrast.
(Or a gamma shift, I'm not sure which.) I'm betting it's solely the fault of
WMP. (Or else MPEG encoding itself introduces this as a by-product--but I don't
really believe that.) The animation pre-viewed in monkeyjam looks exactly right
(that is, discounting the 'averaging' problem you described.) Right now, I'm
just living with this 'shift', but it's irritating. I've been all through the
xvid app, to make sure I haven't set something wrong there. (It's actually a
menu-driven app where you set up and tweak the codec, which is nice.) Yet I
can't say that I'm an expert with it; I'm surely not!
KW
From: "Jérôme M. Berger"
Subject: Re: generating an image_map name: string syntax problem
Date: 21 Feb 2009 02:21:17
Message: <499fab6d@news.povray.org>
Zeger Knaepen wrote:
> "Kenneth" <kdw### [at] earthlinknet> wrote in message
> news:web.499cf56640230f63f50167bc0@news.povray.org...
>> Does MegaPOV actually blur the motion *during* a single-frame render?
>
> if you ask it to :)
> in MegaPOV you have a camera_view pigment, which gives you, with some
> scripting, the possibility to have multiple cameras in your scene and
> average the result of all of them. There are two ways of doing this. The
> first is simply using an average pattern:
> --- START CODE ---
> snip...
> --- END CODE ---
>
> hope this helps :)
>
By the way, you *do* know about the "motion_blur" keyword, don't you?
It's not quite as flexible as averaging multiple images, but it can be
much faster if only a few objects move...
Jerome
- --
mailto:jeb### [at] freefr
http://jeberger.free.fr
Jabber: jeb### [at] jabberfr