in news:414dac97$1@news.povray.org Rune wrote:
> I don't get the logic behind your described system. There seems to be
> only one sound attached to each source.
One can add as many sounds as one wants by adding more track macros to
the object, or by adding more sounds to the sound file and using just one
track macro. As said, it's just an idea, not a completely developed system.
What is important, IMO, is the separation of the sound and image aspects:
POV-Ray outputs all kinds of spatial data, the sound file provides the
sound data, and the Java program combines them into a soundtrack.
> Also, when pitch and gain are described
> seperately from the scene file, it seems difficult to change pitch
> and gain dependent on variables in the scene.
Ah, I misinterpreted your gain and thought of it as an offset value,
...
> Pitch and gain can change over time just like position, and it should
> be just as simple to do this.
... the Java program calculates the actual gain and pitch from the
speed and proximity of the object relative to the camera.
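As a rough sketch of what I mean, in Python rather than Java (the function name and the exact formulas are just assumptions for illustration, not a finished design): gain could fall off with distance, and pitch could be Doppler-shifted by the component of the object's velocity toward the camera.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, in air

def gain_and_pitch(obj_pos, obj_vel, cam_pos):
    """Derive gain and pitch from per-frame spatial data (hypothetical
    helper; the real calculation would live in the Java program)."""
    dx = [o - c for o, c in zip(obj_pos, cam_pos)]
    dist = math.sqrt(sum(d * d for d in dx))
    # inverse-distance attenuation, clamped so gain <= 1
    gain = min(1.0, 1.0 / max(dist, 1.0))
    # radial speed: positive when the object moves toward the camera
    radial = -sum(v * d for v, d in zip(obj_vel, dx)) / max(dist, 1e-9)
    # classic Doppler shift for a moving source
    pitch = SPEED_OF_SOUND / (SPEED_OF_SOUND - radial)
    return gain, pitch
```

A stationary object gives pitch 1.0; one approaching the camera gives pitch above 1.0, just as you'd expect.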
> I also don't get your conceot of sound_on and sound_off.
> You tell your ball object to play a sound while it is off at the same
> time, or at least right in the next frame?
One thing I really miss in POV-Ray's macros is that it is not possible to
specify default argument values (see for example
http://www.python.org/doc/2.3.4/tut/node6.html#SECTION006710000000000000000 )
So sound_off has a double meaning: there is no sound, or it turns off the
current sound that was started with sound_on. But that is limiting, as the
sound of the ball lasts longer than the duration of the contact. So its
sound is started with play_sound. The sound lasts as long as the WAV, or
maybe the duration could be set in the sound file.
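To show what I mean with default arguments, here is the Python equivalent from the linked tutorial, applied to a made-up sound_off (the function and its return strings are just an illustration, not part of any real system):

```python
def sound_off(track="*"):
    """With a default argument, sound_off() can mean "silence all
    sounds", while sound_off("bump") turns off one specific track.
    POV-Ray's #macro forces you to pass every argument explicitly."""
    if track == "*":
        return "all sounds off"
    return f"sound {track!r} off"
```

Calling it with no argument falls back to the default, which is exactly what a POV-Ray macro cannot do.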
> So it is up to the user to figure out how to provide the right
> trigger string for every frame? Sounds like it would require lots and
> lots of #if(...) #declare trigger = ... #end -structures...
Yes, in the end it's always the user who has to figure out when to
trigger a sound. In one of your examples:
"sound_point_loop("ball","wavdata/bump.wav",1000*3.5)"
the user also has to figure out when to start the sound. The difference
is that when you change something in your animation, you also have to
recalculate the timing and change it in the file. If you can use event-based
triggers, using #if or #case, you only have to set it up once.
Actually you don't even need to know the length of the animation in
advance, it can all be calculated afterwards.
In case you want to add sound to each object in your particle system,
would you want to figure out the timing for each particle, or set up a
(general) event-based system once?
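To sketch the "calculated afterwards" idea (again in Python, with made-up names): from the per-frame data you can detect a bounce as the frame where an object stops falling and starts rising, without ever knowing the timing in advance.

```python
def bounce_frames(heights):
    """Given the per-frame height of an object, return the frame
    numbers where it bounces, i.e. where a fall turns into a rise.
    Each returned frame is where a bump sound would be triggered."""
    events = []
    for f in range(1, len(heights) - 1):
        falling = heights[f] - heights[f - 1] < 0
        rising = heights[f + 1] - heights[f] > 0
        if falling and rising:
            events.append(f)
    return events
```

Change the animation, re-render, and the trigger frames come out right again; nothing has to be retimed by hand.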
> You can just render with "+w1 +h1 -f". It has worked fine for me.
Yes.
> But how is the time-line file generated? It sounds like you say that
> it should be edited manually, even though it contains data for every
> frame and thus has a huge amount of data.
There's no manual editing of POV-Ray output. POV-Ray provides the frame
number, spatial data, maybe even speeds and accelerations, whatever you
want. Everything is written to file, per object / track, per frame.
From these data, the total number of frames and the framerate, plus the
data from the sound file, the Java program can create a timeline per
object, or one for the whole animation. Internally that can indeed be
something similar to your data file.
> [...]
> 0 source ship1 create
> 0 source ship1 loop ../wavdata/engine.wav
> 0 source ship1 position <0,0,1>
> 0 source ship1 gain 2
> [...]
Ingo