in news:414d4389$1@news.povray.org Rune wrote:
> Thanks for your thoughts. I hope you can elaborate a bit on the parts
> I didn't get.
I'll try, so here's a little demo "scene":
POV-file:
object {
  Ship_01
  Track("ship_01", position, trigger)
}
object {
  Ship_02
  Track("ship_02", position, trigger)
}
object {
  Ball
  Track("ball", position, trigger)
}
Time-line file (output of POV-Ray):
ship_01, frame0, <position_vector>, [...], sound_off
ship_02, frame0, <position_vector>, [...], sound_off
ball , frame0, <position_vector>, [...], sound_off
ship_01, frame1, <position_vector>, [...], sound_on
ship_02, frame1, <position_vector>, [...], sound_off
ball , frame1, <position_vector>, [...], play_sound Contact = -13
ship_01, frame2, <position_vector>, [...], sound_on
ship_02, frame2, <position_vector>, [...], sound_on
ball , frame2, <position_vector>, [...], sound_off
ship_01, frame3, <position_vector>, [...], sound_on
ship_02, frame3, <position_vector>, [...], sound_off
ball , frame3, <position_vector>, [...], sound_off
.....
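To make the format concrete, here's a minimal Python sketch of how a sound renderer might read one time-line record. The field layout and the "play_sound Contact = -13" trigger syntax are just my assumptions from the example above; the "[...]" columns are skipped:

```python
import re

def parse_record(line):
    # Split off name, frame number and the <x,y,z> vector; any extra
    # fields between the vector and the trigger (the "[...]" columns
    # in the example) are ignored here.
    m = re.match(r"\s*(\w+)\s*,\s*(\w+)\s*,\s*<([^>]*)>\s*,(.*)", line)
    name, frame, vec, rest = m.groups()
    position = tuple(float(c) for c in vec.split(","))
    trigger = rest.split(",")[-1].strip()
    delay = None
    if trigger.startswith("play_sound") and "=" in trigger:
        # "play_sound Contact = -13" -> trigger "play_sound",
        # delay variable ("Contact", -13.0)
        head, value = trigger.split("=")
        delay = (head.split()[1], float(value))
        trigger = "play_sound"
    return {"name": name, "frame": frame, "position": position,
            "trigger": trigger, "delay": delay}
```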
Sound-file:
track {
  ship_01
  sound {
    engine_noise
    gain 10
    pitch 2
    phase -50
    delay 100
  }
}
track {
  ship_02
  sound {
    engine_noise
    gain 7
    pitch 0.5
  }
}
track {
  ball
  sound {
    "boinggg.wav"
    gain 3
    fade_in 5
    fade_out 38
    delay Contact
  }
}
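A parser for such a sound-file could be equally small. This Python sketch turns the track blocks into a lookup table; the block syntax and the idea that a non-numeric value like "delay Contact" refers back to a time-line variable are assumptions from the example above:

```python
import re

SOUND_FILE = """
track {
  ball
  sound {
    "boinggg.wav"
    gain 3
    delay Contact
  }
}
"""

def parse_tracks(text):
    """Parse track { name sound { ... } } blocks into {name: (source, props)}."""
    tracks = {}
    for m in re.finditer(r"track\s*\{\s*(\S+)\s*sound\s*\{([^}]*)\}", text):
        name, body = m.group(1), m.group(2)
        lines = [l.strip() for l in body.splitlines() if l.strip()]
        source = lines[0].strip('"')       # WAV file name or a named sound
        props = {}
        for line in lines[1:]:
            key, value = line.split(None, 1)
            try:
                props[key] = float(value)
            except ValueError:
                props[key] = value         # e.g. "delay Contact": a time-line variable
        tracks[name] = (source, props)
    return tracks
```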
>> Track("ship1", position, trigger)
>
> What is "trigger" here?
A value or string that defines the state of the track. In its simplest
form it could be 'sound_on' or 'sound_off'. Another one could be
"play_sound", which starts a WAV (sound) file and plays it to the end.
In a more complex setup it could be the letters of the alphabet, for
speech synchronisation.
> Gain and pitch are properties of the object/source, not the sound.
Well, in the end the sound is also a property of the object ;) just
like a texture.
> This means that you can play the same sound at several sources at the
> same time with different gain/pitch values for each source. If it was
> a property of the sound, then you couldn't adjust it on a per-object
> basis, which would be a disadvantage.
By using a separate sound render file there is no such disadvantage.
The advantage is, or may be, that you don't have to render the images
again if you want to change the properties of a sound linked to an
object.
Now, if POV-Ray had built-in sound capabilities, it would be more
logical to add all the sound properties to the object inside the SDL,
just like textures. But since the sound creation process is completely
separate, to me it is logical to also separate the script. This also
means that if the system is general enough, the user could, with little
extra work, use a different language binding to OpenAL or even a
completely different sound rendering system.
Maybe the time-line file created by POV-Ray can also be used for
completely different things.
>> // name sound_filename time
>> sound_point_loop("ball","wavdata/bump.wav",1000*3.5)
>>
>> This one could become:
>> Track("ball", position, trigger)
>
> How is the time where the sound should be played specified? It's
> insufficient to base it on the frame number, as you might want a
> sound to start playing at a time value that lies between two frames.
You can have POV-Ray calculate the exact moment of contact and then pass
that information to the sound rendering system. In the demo scene above,
the first frame past the moment of contact sets the trigger for the
sound to play and defines a variable for the delay.
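As a sketch of that calculation (my assumptions: times in seconds, the frame rate in frames per second, and the delay variable expressed as a millisecond offset, so that a contact 13 ms before frame 1 yields the "Contact = -13" of the demo):

```python
import math

def contact_delay(contact_time, frame_rate):
    """Return (frame, offset_ms): the first frame at or after the exact
    contact moment, plus the offset (negative = sound starts that many
    milliseconds before the frame) to hand to the sound renderer."""
    frame = math.ceil(contact_time * frame_rate)   # first frame past contact
    offset_ms = round((contact_time - frame / frame_rate) * 1000)
    return frame, offset_ms
```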
>> POV-Ray. The Java script should know the sound-properties of the
>> ball-object, like should I start the sound one frame, or n
>> milliseconds, before or after the impact. [...]
>
> I'm not sure what you mean here, but I also don't see how the Java
> program would know all these things.
You're right, the Java program can't do much without the proper
information. This information has to be in the time-line file, generated
by POV-Ray and, if you want to separate sound and image generation
files, in the sound-file.
Ingo