POV-Ray : Newsgroups : povray.general : POV-Ray 3D Sound System
  POV-Ray 3D Sound System (Message 11 to 20 of 40)
From: Rune
Subject: Re: POV-Ray 3D Sound System
Date: 19 Sep 2004 04:13:33
Message: <414d3fad$1@news.povray.org>
Tim Nikias wrote:
> But the Java program is called after the animation, right? As I
> understand it, the POV source parses and creates a data-file, which
> the Java app loads and runs through OpenAL to retrieve a nifty
> stereo-sound.

Exactly.

> If that is so, shouldn't it be easy to just create a
> macro which, when activated, will return some data to the data-file?

Yes, it's not a technical problem; it's only a matter of which design is
more clever.

> A modified version of the macro above (perhaps sound_point_once())

Yeah, I'll probably end up with something similar to that...

Rune
-- 
3D images and anims, include files, tutorials and more:
rune|vision:  http://runevision.com
POV-Ray Ring: http://webring.povray.co.uk



From: Rune
Subject: Re: POV-Ray 3D Sound System
Date: 19 Sep 2004 04:30:01
Message: <414d4389$1@news.povray.org>
ingo wrote:
> What was your perception of the process when you started?

To define points in space from which sound can be emitted and which can be
moved around continuously.

> I would think of two things, "tracks" and "time-line". Tracks would be
> what you call sound_point; think of multi-track recording. Every
> object has its own track.
>
> The time-line tells us what happens when. In your current macro it
> seems that you define start and end time, so you have to know the
> length of the whole sequence beforehand. I'd prefer the data on
> the time-line to be independent of the actual length of the sequence,
> so one does not have to change values if one does a "10 frame test
> run".
>
> Track("ship1", position, trigger)

What is "trigger" here?

> It gives you maximum
> flexibility in case you want to change something later on, like gain
> or pitch. These are properties of the sound, not of the object.

Gain and pitch are properties of the object/source, not the sound. This
means that you can play the same sound at several sources at the same time
with different gain/pitch values for each source. If it was a property of
the sound, then you couldn't adjust it on a per-object basis, which would be
a disadvantage. I think this logic makes sense, and it's the logic used by
OpenAL itself anyway. I should mention that one source can only play back
one sound at a time - otherwise per-object gain/pitch would indeed not have
made sense.

>
> //               name    sound_filename    time
> sound_point_loop("ball","wavdata/bump.wav",1000*3.5)
>
> This one could become:
> Track("ball", position, trigger)

How is the time where the sound should be played specified? It's
insufficient to base it on the frame number, as you might want a sound to
start playing at a time value that lies between two frames.

> The Java program should know the relation ball -> bump.wav

I disagree here. All you need to do with my Java program is to specify the
data file to use as input, and then it returns a .wav file. To have to
provide additional input to the Java program would only be more cumbersome
and time-consuming. I like that you can control everything from one place.

> , not POV-Ray. The Java script should know the sound-properties of the
> ball-object, like whether to start the sound one frame, or n
> milliseconds, before or after the impact. Should it still have a
> sound some time after it bounced, etc. Also you could do some extra
> calculations in POV-Ray to determine whether the trigger should switch
> from off to on.

I'm not sure what you mean here, but I also don't see how the Java program
would know all these things.

> So you may have to write two scripts, a POV-script and a sound script.

As it is now, you can have the sound and scene action completely integrated,
but you can also have them separated if you like. I prefer it that way.

Thanks for your thoughts. I hope you can elaborate a bit on the parts I
didn't get.

Rune
-- 
3D images and anims, include files, tutorials and more:
rune|vision:  http://runevision.com
POV-Ray Ring: http://webring.povray.co.uk



From: Rune
Subject: Re: POV-Ray 3D Sound System
Date: 19 Sep 2004 04:33:21
Message: <414d4451@news.povray.org>
Jellby wrote:
> By the way, did you (Rune) take into account sound retardation (the
> time it takes for the sound to reach the observer)? For this, you'd
> need the speed of sound, which I suggest should be an adjustable
> variable given in POV units; this also affects the Doppler effect.

This is already the case. In my example code posted in p.b.a you may notice
this line:
sound_set_dobbler_velocity(100)
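(For anyone implementing this: the retardation delay and the Doppler factor both follow directly from the speed of sound in POV units. A minimal sketch of the math in Java, with hypothetical helper names - this is not the actual code of the sound system:)

```java
// Hypothetical helpers illustrating the math only; the sound system's
// real Java code is not shown in this thread.
public class SoundPhysics {
    // Time in milliseconds for sound to cross `distance` POV units,
    // given the speed of sound in POV units per second.
    static double delayMs(double distance, double speedOfSound) {
        return distance / speedOfSound * 1000.0;
    }

    // Classic Doppler factor for a source with radial velocity
    // `radialVelocity` (positive = moving away from the listener).
    static double dopplerFactor(double radialVelocity, double speedOfSound) {
        return speedOfSound / (speedOfSound + radialVelocity);
    }

    public static void main(String[] args) {
        System.out.println(delayMs(50.0, 100.0));        // 500.0 ms (half a second)
        System.out.println(dopplerFactor(-10.0, 100.0)); // > 1: approaching source sounds higher
    }
}
```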

> Another thought: the "microphone" doesn't have to be the same as the
> camera ;-)

You currently control this with the macro
sound_set_listener(camera_location,camera_forward,camera_up)

...and indeed you can provide it with data which doesn't match that of the
camera.

Rune



From: Tim Nikias
Subject: Re: POV-Ray 3D Sound System
Date: 19 Sep 2004 05:51:37
Message: <414d56a9$1@news.povray.org>
> Yes, it's not a technical problem; it's only a matter of which design
> is more clever.

My LSSM had a similar problem. Certain macros add values to the wavefield
and, as you know, those heights then just get smoothed out for a wave-like
effect. What I did was to have the macro add the data only when it occurs
within the clock_delta of the current frame. That way, even when I
change the scene a little during the rendering (e.g. nudge a few values
while it's already rendering the first frames) it all works out.
Additionally, when stopping the animation and rendering it later on, the
data will get added once it actually pops up, not earlier or later. It
avoids double entry of the data.
-- 
"Tim Nikias v2.0"
Homepage: <http://www.nolights.de>



From: ingo
Subject: Re: POV-Ray 3D Sound System
Date: 19 Sep 2004 08:50:17
Message: <Xns956996F08A0C6seed7@news.povray.org>
in news:414d4389$1@news.povray.org Rune wrote:

> Thanks for your thoughts. I hope you can elaborate a bit on the parts
> I didn't get.

I'll try, so here's a little demo "scene":

POV-file:

object {
  Ship_01   	
  Track("ship_01", position, trigger)
}
object {
  Ship_02
  Track("ship_02", position, trigger)
}
object {
  Ball
  Track("ball", position, trigger)
}


Time-line file (output of POV-Ray):

ship_01, frame0, <position_vector>, [...], sound_off
ship_02, frame0, <position_vector>, [...], sound_off
ball   , frame0, <position_vector>, [...], sound_off
ship_01, frame1, <position_vector>, [...], sound_on
ship_02, frame1, <position_vector>, [...], sound_off
ball   , frame1, <position_vector>, [...], play_sound Contact = -13
ship_01, frame2, <position_vector>, [...], sound_on
ship_02, frame2, <position_vector>, [...], sound_on
ball   , frame2, <position_vector>, [...], sound_off
ship_01, frame3, <position_vector>, [...], sound_on
ship_02, frame3, <position_vector>, [...], sound_off
ball   , frame3, <position_vector>, [...], sound_off
.....

Sound-file:

track {
  ship_01
  sound {
    engine_noise
    gain 10
    pitch 2
    phase -50
    delay 100
  }
}
track {
  ship_02
  sound {
    engine_noise
    gain 7
    pitch 0.5
  }
}
track {
  ball
  sound {
    "boinggg.wav"
    gain 3
    fade_in 5
    fade_out 38
    delay Contact
  }
}

>> Track("ship1", position, trigger)
> 
> What is "trigger" here?

A value or string that defines the state of the track. In its simplest 
form it could be 'sound_on' or 'sound_off'. Another one could be 
"play_sound", which starts a WAV (sound) and plays it to the end. More 
complex, it could be the letters of the alphabet for speech 
synchronisation.
 
> Gain and pitch are properties of the object/source, not the sound.

Well, in the end the sound is also a property of the object ;) just like 
a texture.

> This means that you can play the same sound at several sources at the
> same time with different gain/pitch values for each source. If it was
> a property of the sound, then you couldn't adjust it on a per-object
> basis, which would be a disadvantage.

By using a separate sound render file there is no such disadvantage. 
The advantage is, or may be, that you don't have to render the images 
again if you want to change the properties of a sound linked to an 
object. 
Now if POV-Ray had built-in sound capabilities it would be more logical 
to add all the sound properties to the object inside the SDL, just like 
textures. But since the sound creation process is completely separated, 
to me it is logical to also separate the script. This also means that 
if the system is general enough, and with little extra work, the user 
could use a different language binding to OpenAL or even a completely 
different sound rendering system.
Maybe the time-line file created by POV-Ray can also be used for 
completely different things.

>> //               name    sound_filename    time
>> sound_point_loop("ball","wavdata/bump.wav",1000*3.5)
>>
>> This one could become:
>> Track("ball", position, trigger)
> 
> How is the time where the sound should be played specified? It's
> insufficient to base it on the frame number, as you might want a
> sound to start playing at a time value that lies between two frames.

You can have POV-Ray calculate the exact moment of contact and then pass 
that information to the sound rendering system. In the demo scene the 
first frame beyond the moment of contact sets the trigger for the sound 
to play and defines a variable for the delay.
 
>> POV-Ray. The Java script should know the sound-properties of the
>> ball-object, like should I start the sound one frame, or n
>> milliseconds, before or after the impact. [...]
>
> I'm not sure what you mean here, but I also don't see how the Java
> program would know all these things.

You're right, the Java program can't do much without the proper 
information. This information has to be in the Time-line file, generated 
by POV-Ray and, if you want to separate sound and image generation 
files, in the Sound-file.


Ingo



From: ingo
Subject: Re: POV-Ray 3D Sound System
Date: 19 Sep 2004 11:51:02
Message: <Xns9569B5963E9E2seed7@news.povray.org>
in news:414ae431@news.povray.org Rune wrote:

> - How can I make the java program more clean and well-structured? How
> does one write a text parser? (The current one is a hack.)
> 

You could use an existing parser, like CSVParser 
(http://ostermiller.org/utils/CSV.html) and have POV-Ray write 
comma-separated values. Or use XML, or maybe there is a Java-based 
(Windows) INI-file parser.
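(For the simple whitespace-separated format Rune's macros write - shown later in this thread, e.g. "33 source ship1 gain 2" - a hand-written parser can stay very small. A sketch in Java; the class and field names are my own invention, not those of Rune's actual program:)

```java
import java.util.Arrays;
import java.util.List;

// Minimal line parser for records of the form
//   "<time_ms> source <name> <command> [args...]"  or
//   "<time_ms> system <command> [args...]"
public class TimelineParser {
    static final class Event {
        final int timeMs;
        final String scope;    // "source" or "system"
        final String name;     // source name, or null for system commands
        final String command;
        final List<String> args;
        Event(int timeMs, String scope, String name, String command, List<String> args) {
            this.timeMs = timeMs; this.scope = scope; this.name = name;
            this.command = command; this.args = args;
        }
    }

    static Event parseLine(String line) {
        String[] tok = line.trim().split("\\s+");
        int t = Integer.parseInt(tok[0]);
        if (tok[1].equals("source")) {
            // e.g. "33 source ship1 gain 2"
            return new Event(t, tok[1], tok[2], tok[3],
                             Arrays.asList(tok).subList(4, tok.length));
        }
        // e.g. "0 system dobbler_velocity 100"
        return new Event(t, tok[1], null, tok[2],
                         Arrays.asList(tok).subList(3, tok.length));
    }

    public static void main(String[] args) {
        Event e = parseLine("33 source ship1 gain 2");
        System.out.println(e.timeMs + " " + e.name + " " + e.command + " " + e.args);
    }
}
```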

Ingo



From: Rune
Subject: Re: POV-Ray 3D Sound System
Date: 19 Sep 2004 11:58:15
Message: <414dac97$1@news.povray.org>
ingo wrote:
> in news:414d4389$1@news.povray.org Rune wrote:
>
>> Thanks for your thoughts. I hope you can elaborate a bit on the parts
>> I didn't get.
>
> I'll try, so here's a little demo "scene":

I don't get the logic behind your described system. There seems to be only
one sound attached to each source. A source can play many different sounds,
just not at the same time. How would one specify that in your system? Also,
when pitch and gain are described separately from the scene file, it seems
difficult to change pitch and gain depending on variables in the scene.
Pitch and gain can change over time just like position, and it should be
just as simple to do this. I also don't get your concept of sound_on and
sound_off. You tell your ball object to play a sound while it is off at the
same time, or at least right in the next frame?

I have changed my own system. The code below is working code:

// START POVSOUND
sound_start("soundtest.txt")

sound_set_milliseconds_per_clock(1000,0) // second value is offset
#declare Time = sound_get_time();

#declare EngineSound = "../wavdata/engine.wav";
#declare HoverSound = "../wavdata/hovercraft.wav";

sound_set_gain_once(1000*0, 0.5)
sound_set_dobbler_factor_once(1000*0, 1.0)
sound_set_dobbler_velocity_once(1000*0, 100)

sound_set_listener(Time,camera_location,camera_forward,camera_up)

#declare ship1_starttime = 1000*0;
#declare ship1_endtime = 1000*10;
sound_point_create_once(ship1_starttime,"ship1")
sound_point_destroy_once(ship1_endtime,"ship1")
sound_point_loop_once(1000*0,"ship1",EngineSound)

#if ( is_between(Time,ship1_starttime,ship1_endtime) )
   sound_point_location(Time,"ship1",ship1_translate)
   sound_point_gain(Time,"ship1",2.0)  // can easily be changed over time
   sound_point_pitch(Time,"ship1",1.0) // if you use a variable instead of a number
#end

#declare ship2_starttime = 1000*0;
#declare ship2_endtime = 1000*10;
sound_point_create_once(ship2_starttime,"ship2")
sound_point_destroy_once(ship2_endtime,"ship2")
sound_point_loop_once(1000*0,"ship2",HoverSound)

#if ( is_between(Time,ship2_starttime,ship2_endtime) )
   sound_point_location(Time,"ship2",ship2_translate)
   sound_point_gain(Time,"ship2",1.0)
   sound_point_pitch(Time,"ship2",1.0)
#end

sound_end()
// END POVSOUND

> A value or string that defines the state of the track. In its simplest
> form it could be 'sound_on' or 'sound_off'. Another one could be
> "play_sound", which starts a WAV (sound) and plays it to the end. More
> complex, it could be the letters of the alphabet for speech
> synchronisation.

So it is up to the user to figure out how to provide the right trigger
string for every frame? Sounds like it would require lots and lots of
#if(...) #declare trigger = ... #end -structures...

>> Gain and pitch are properties of the object/source, not the sound.
>
> Well, in the end the sound is also a property of the object ;) just
> like a texture.

Yes indeed - but that doesn't mean that gain and pitch are properties of the
sound.

>> This means that you can play the same sound at several sources at the
>> same time with different gain/pitch values for each source. If it was
>> a property of the sound, then you couldn't adjust it on a per-object
>> basis, which would be a disadvantage.
>
> By using a separate sound render file there is no such disadvantage.
> The advantage is, or may be, that you don't have to render the images
> again if you want to change the properties of a sound linked to an
> object.

Pitch and gain can also be dependent on how the object moves around and be
set for every frame. It would not be easy to manually edit them if they're
specified for each frame. Besides, you don't have to render the images
again. You can just render with "+w1 +h1 -f". It has worked fine for me.

> This also means that
> if the system is general enough, and with little extra work, the user
> could use a different language binding to OpenAL or even a completely
> different sound rendering system.

Seeing as there are no sound rendering alternatives available yet, this is
not really a priority.

> Maybe the time-line file created by POV-Ray can also be used for
> completely different things.
>
>>> //               name    sound_filename    time
>>> sound_point_loop("ball","wavdata/bump.wav",1000*3.5)
>>>
>>> This one could become:
>>> Track("ball", position, trigger)
>>
>> How is the time where the sound should be played specified? It's
>> insufficient to base it on the frame number, as you might want a
>> sound to start playing at a time value that lies between two frames.
>
> You can have POV-Ray calculate the exact moment of contact and then
> pass that information to the sound rendering system. In the demo scene
> the first frame beyond the moment of contact sets the trigger for the
> sound to play and defines a variable for the delay.

It still sounds like the user has to put a lot of effort into creating the
right triggers...

> You're right, the Java program can't do much without the proper
> information. This information has to be in the Time-line file,
> generated by POV-Ray and, if you want to separate sound and image
> generation files, in the Sound-file.

But how is the time-line file generated? It sounds like you're saying it
should be edited manually, even though it contains data for every frame and
thus has a huge amount of data.

Your time-line file sounds very similar to the data format that my system
outputs and which my Java program reads. But the whole point of my macros is
that the user shouldn't have to edit that file manually.

This is what the data-file looks like (the first number on each line is the
time in milliseconds):

0 system gain 0.5
0 system dobbler_factor 1
0 system dobbler_velocity 100
0 system listener <0,0.3,1.2> <0,0,1> <0,1,0>
0 source ship1 create
0 source ship1 loop ../wavdata/engine.wav
0 source ship1 position <0,0,1>
0 source ship1 gain 2
0 source ship1 pitch 1
0 source ship2 create
0 source ship2 loop ../wavdata/hovercraft.wav
0 source ship2 position <1,0.6,10>
0 source ship2 gain 1
0 source ship2 pitch 1
33 system listener <0,0.3,1.18333> <0.00333333,0,1> <0,1,0>
33 source ship1 position <0.418757,0,1>
33 source ship1 gain 2
33 source ship1 pitch 1
33 source ship2 position <1,0.6,9.99123>
33 source ship2 gain 1
33 source ship2 pitch 1
66 system listener <0,0.3,1.16667> <0.00666667,0,1> <0,1,0>
66 source ship1 position <0.836778,0,1>
66 source ship1 gain 2
66 source ship1 pitch 1
66 source ship2 position <1,0.6,9.96493>
66 source ship2 gain 1
66 source ship2 pitch 1

Rune
-- 
3D images and anims, include files, tutorials and more:
rune|vision:  http://runevision.com
POV-Ray Ring: http://webring.povray.co.uk



From: Rune
Subject: Re: POV-Ray 3D Sound System
Date: 19 Sep 2004 12:02:56
Message: <414dadb0@news.povray.org>
Tim Nikias wrote:
> What I did was to have the macro add the data only when
> it is occuring within the clock_delta of the current frame.

That's a good idea. I have implemented two versions of every macro now - one
that writes the data to the file whenever the macro is called, and one which
only does it if the time of occurrence is within the clock_delta (backwards)
of the current frame.

Rune
-- 
3D images and anims, include files, tutorials and more:
rune|vision:  http://runevision.com
POV-Ray Ring: http://webring.povray.co.uk



From: ingo
Subject: Re: POV-Ray 3D Sound System
Date: 19 Sep 2004 15:10:34
Message: <Xns9569D7693B81Cseed7@news.povray.org>
in news:414dac97$1@news.povray.org Rune wrote:

> I don't get the logic behind your described system. There seems to be
> only one sound attached to each source.

One can add as many sounds as one wants by adding more track macros to 
the object, or by adding more sounds to the sound file and using just one 
track macro. As said, it's just an idea, not a completely developed system. 
What is important i.m.o. is the separation of sound and image aspects: 
POV-Ray outputs all kinds of spatial data, the sound-file gives the sound 
data, and the Java program combines it into a sound-track.

> Also, when pitch and gain are described
> separately from the scene file, it seems difficult to change pitch
> and gain depending on variables in the scene.

Ah, I interpreted your gain wrong and thought of it as an offset value, 
...

> Pitch and gain can change over time just like position, and it should
> be just as simple to do this.

... the Java program calculates the actual gain and pitch as a result of 
the speed and proximity of the object to the camera.

> I also don't get your concept of sound_on and sound_off.
> You tell your ball object to play a sound while it is off at the same
> time, or at least right in the next frame?

One thing I really miss in POV-Ray's macros is that it is not possible to 
specify default argument values (see for example 
http://www.python.org/doc/2.3.4/tut/node6.html#SECTION006710000000000000000 )
So sound_off has a double meaning: there is no sound, or it turns off the 
current sound that was started with sound_on. But that is limiting; the 
sound of the ball is longer than the duration of the contact. So its sound 
is started with play_sound. The sound takes as long as the duration of the 
WAV, or maybe the duration could be set in the sound file.

> So it is up to the user to figure out how to provide the right
> trigger string for every frame? Sounds like it would require lots and
> lots of #if(...) #declare trigger = ... #end -structures...

Yes, in the end it's always the user who has to figure out when to 
trigger a sound. In one of your examples:
    "sound_point_loop("ball","wavdata/bump.wav",1000*3.5)"
the user also has to figure out when to start the sound. The difference 
is that when you change something in your animation, you also have to 
recalculate the timing and change it in the file. If you can use 
event-based triggers, using #if or #case, you only have to set it up 
once. Actually you don't even need to know the length of the animation 
in advance; it can all be calculated afterwards. 
In case you want to add sound to each object in your particle system, 
would you want to figure out the timing for each particle, or set up a 
(general) event-based system once?

> You can just render with "+w1 +h1 -f". It has worked fine for me. 

Yes.

> But how is the time-line file generated? It sounds like you're saying
> it should be edited manually, even though it contains data for every
> frame and thus has a huge amount of data.

There's no manual editing of POV-Ray output. POV-Ray provides frame 
number, spatial data, maybe even speeds and accelerations, whatever you 
want. Everything is written to file, per object / track, per frame. 
From these data, the total number of frames and the framerate, plus the 
data from the sound-file, the Java program can create a timeline per 
object or one for the whole animation. Internally that can indeed be 
something similar to your data-file.

> [...] 
> 0 source ship1 create
> 0 source ship1 loop ../wavdata/engine.wav
> 0 source ship1 position <0,0,1>
> 0 source ship1 gain 2
> [...]


Ingo



From: Rune
Subject: Re: POV-Ray 3D Sound System
Date: 19 Sep 2004 18:50:46
Message: <414e0d46@news.povray.org>
ingo wrote:
>> Pitch and gain can change over time just like position, and it should
>> be just as simple to do this.
>
> ... the Java program calculates the actual gain and pitch as a result
> of the speed, and proximity of the object to the camera.

...but there can be lots of other reasons to change gain and pitch than just
to simulate distance and the Doppler effect. For example, I want my
hovercraft to rise from the ground into the air, and while doing this the
sound of the engine rises both in volume (=gain) and in pitch. I can easily
control this from within my POV-Ray file. What's left for the sound-file you
propose? Just to specify the path/filename of each sound? It seems a little
silly to have a separate file just for that.

> In case you want to add sound to each object in your particle system,
> would you want to figure out the timing for each particle or once set
> up a (general) event based system.

My current approach works fine as an event-based system. For example, if you
want to play a sound every time a particle hits an object, you can just call
a macro every time it happens:

//               time         source                  sound
sound_point_play(impact_time, concat("",particle_id), BoingSound)
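(This is also why millisecond timing is enough even when the impact falls between two frames: on the sound-rendering side, the event time just becomes an offset into the output, however the output is produced. A sketch of that conversion, with illustrative names that are not from the actual program:)

```java
// Illustrative only: maps an event time in milliseconds (which may lie
// between two rendered frames) to a sample offset in the mixed output.
public class EventTiming {
    static long sampleOffset(double eventTimeMs, int sampleRate) {
        return Math.round(eventTimeMs * sampleRate / 1000.0);
    }

    public static void main(String[] args) {
        // a bump at t = 1000*3.5 ms in a 44.1 kHz mix starts at sample 154350
        System.out.println(sampleOffset(1000 * 3.5, 44100));
    }
}
```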

> There's no manual editing of POV-Ray output.

So the time-line should be generated by the .pov-file after all? And only
the sound-file should be separated from it? But then again, with a separate
sound-file you cannot as easily control gain and pitch over time, as I
mentioned... So I still think it's most logical to have it all created from
the .pov-file.

Rune
-- 
3D images and anims, include files, tutorials and more:
rune|vision:  http://runevision.com
POV-Ray Ring: http://webring.povray.co.uk




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.