I've been working on a system to make it possible to easily composite 3D
sound for POV-Ray animations. By 3D sound I mean sound which is in stereo
and which also automatically gets louder when the source of the sound gets
closer to the camera. Dobbler effect is also implemented.
My first test animation can be seen/heard in povray.binaries.animations.
http://news.povray.org/povray.binaries.animations/thread/%3C414adb2a%40news.povray.org%3E/
The system works this way: In your POV-Ray scene you set up the sound with
some macros. (An example is shown together with the test animation.) An
include file then writes a data text file to disk while your scene is
rendering frame by frame. The text file contains data about where the camera
and the sound sources are at each point in time, and which wav-files to
play from which sources at which times. This text file can then be read by a
Java program I've written, which creates an output stereo wav-file.
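To give an idea of the scene side, here is a simplified sketch of the kind
of thing the include file does each frame (the identifiers and the line
format here are made up for the example, not the actual ones):

// Append the current time plus camera and source positions to the data
// file. Camera_Pos and Source_Pos are assumed to be declared elsewhere.
#fopen SoundData "sounddata.txt" append
#write (SoundData, "state ", clock*1000, " ",
        vstr(3, Camera_Pos, " ", 0, 4), " ",
        vstr(3, Source_Pos, " ", 0, 4), "\n")
#fclose SoundData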
However, designing the system in a way which makes it both flexible and easy
to use requires some thought. The system as it is now is functional but
rather crude. Therefore I could really use help designing the system well. I
need help from people experienced with writing complex systems of macros,
Java programmers, and people who have a habit of thinking "ah, the syntax of
this feature could have been designed in a much cleverer way". Here are some
of my design problems:
- How should the user specify the sound information?
- How can the system be made to work both for clock-based animations and
frame-based animations (typically simulations with I/O)?
- What should the specification of the data file look like? The current one
is somewhat of a mess.
- How does one make a Java program easy to use by others who don't know what
to do with .java and .class files? There should be a good solution to this,
but I'm not experienced in this area.
- How can I make the Java program cleaner and better structured? How does
one write a text parser? (The current one is a hack.)
If there is sufficient interest, I'll upload the files somewhere for others
to look at.
Rune
--
3D images and anims, include files, tutorials and more:
rune|vision: http://runevision.com
POV-Ray Ring: http://webring.povray.co.uk
Among other things, Rune wrote:
> Dobbler effect is also implemented.
It's "Doppler". You have the same mistake in the code in p.b.a.
--
light_source{9+9*x,1}camera{orthographic look_at(1-y)/4angle 30location
9/4-z*4}light_source{-9*z,1}union{box{.9-z.1+x clipped_by{plane{2+y-4*x
0}}}box{z-y-.1.1+z}box{-.1.1+x}box{.1z-.1}pigment{rgb<.8.2,1>}}//Jellby
Jellby wrote:
> Among other things, Rune wrote:
>
>> Dobbler effect is also implemented.
>
> It's "Doppler". You have the same mistake in the code in p.b.a.
You're right. Thank you.
Rune
--
3D images and anims, include files, tutorials and more:
rune|vision: http://runevision.com
POV-Ray Ring: http://webring.povray.co.uk
Well, let's have a look at your questions:
> - How should the user specify the sound information?
Depends on what you mean by "sound information". If it's just specifying a
sound with a given volume and its location, that would be really easy. Maybe
a single parameter could store something like a global scale, e.g. how many
meters one POV-unit is supposed to be, to properly calculate the Doppler
effect. I'd guess tracking the relative position of the sound to the camera
would supply all the data needed to calculate volume and Doppler shift.
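For the Doppler part, the usual approximation only needs the radial velocity
of the source relative to the camera. A sketch (the variable names are made
up here, and this ignores listener motion):

// Simplified Doppler pitch factor; V_Radial > 0 means approaching.
// Source_Vel, Source_Pos and Camera_Pos are assumed to exist in the scene.
#declare C_Sound  = 343; // speed of sound in POV units per second (assumed scale)
#declare V_Radial = vdot(Source_Vel, vnormalize(Camera_Pos - Source_Pos));
#declare Pitch    = C_Sound / (C_Sound - V_Radial);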
> - How can the system be made to work both for clock-based animations and
> frame-based animations (typically simulations with I/O)?
In my animations, I define a variable Animation_Time, which I use to tell
different macros how long the scene is in seconds. Especially for
physics-simulation this is useful, as balls etc drop as much in 10 frames as
in 100 frames. My clock always runs from 0 to 1. With the seconds and the
frames I can simply calculate the time-interval and voila: clock-based and
frame-based aren't a problem anymore.
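In SDL that boils down to something like this (Animation_Time is my own
variable, the rest is standard):

#declare Animation_Time = 10;                     // length of the sequence in seconds
#declare Current_Time   = clock * Animation_Time; // seconds; my clock runs 0..1
// seconds per frame step (max() guards against a single-frame render):
#declare Frame_Time     = Animation_Time / max(1, final_frame - initial_frame);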
> - What should the specification of the data file look like? The current
> one is somewhat of a mess.
Not quite sure what you mean by the data file. The POV-Source?
> - How does one make a Java program easy to use by others who don't know
> what to do with .java and .class files? There should be a good solution
> to this, but I'm not experienced in this area.
Me neither. But if your app just parses the data-file and outputs a
wave-file, a simple batch-file which runs the app should be sufficient.
> - How can I make the Java program cleaner and better structured? How does
> one write a text parser? (The current one is a hack.)
Ah, well, I'd need to have a look at it to say anything about that, but I'm
not *that* experienced either.
I hope some of my comments are of some help. I'd like to have a look and
all, but I'm currently tied up working on an IRTC entry that needs to be
finished. Still, I'll be happy to participate in a few brainstorming
sessions or discussions, either here or via email. But maybe off-topic
would be a better place to discuss things, in order not to junk up the
groups here.
Regards,
Tim
--
"Tim Nikias v2.0"
Homepage: <http://www.nolights.de>
Tim Nikias wrote:
>> - How can the system be made to work both for clock-based animations
>> and frame-based animations (typically simulations with I/O)?
>
> In my animations, I define a variable Animation_Time, which I use to
> tell different macros how long the scene is in seconds. Especially for
> physics-simulation this is useful, as balls etc drop as much in 10
> frames as in 100 frames. My clock always runs from 0 to 1.
This is not what I meant. All time is measured in milliseconds in my system,
so the problem is not a matter of units.
The problem is this: Say you have a ball that hits the ground at some point
in time, and when that happens you want a sound to be played. In a
clock-based (deterministic) animation you know from the beginning at what
time the ball hits the ground, so you just specify that time value like
this:
// name sound_filename time
sound_point_loop("ball","wavdata/bump.wav",1000*3.5)
This should play the sound after 3.5 seconds. Now, how should this
information be put into the data text file? My sound system include file is
invoked in every frame of the animation, but naturally it should not save
the information "play bump.wav after 3500 milliseconds" every time it is
invoked (which is in every frame). This would create duplicate information.
In my current system such information is simply saved to the file the first
time the system is invoked, that is, in frame 1. The macro is called in all
frames, but only in the first frame does it do something. No problems so
far.
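In sketch form it looks something like this (simplified; the file name and
line format are just placeholders, not the actual implementation):

// Only the very first frame writes the line; every other call is a no-op.
#macro sound_point_loop(Name, WavFile, Time_ms)
  #if (frame_number = initial_frame)
    #fopen SoundData "sounddata.txt" append
    #write (SoundData, "loop ", Name, " ", WavFile, " ", Time_ms, "\n")
    #fclose SoundData
  #end
#end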
However, in a simulation-based system where the ball could be controlled by
some physics system, you don't know from the beginning at what point in time
the ball will hit the ground. So you can't save it to the data file the
first time the system is invoked. You typically don't know it until the
frame that comes right before or right after the impact (assuming the impact
happens between two frames, which is most likely). Then when you invoke the
macro at this point it's no good if it doesn't do anything (because it only
does something in the first frame).
So - I don't quite know how to implement a good solution that works in both
cases.
>> - What should the specification of the data file look like? The current
>> one is somewhat of a mess.
>
> Not quite sure what you mean by the data file. The POV-Source?
You haven't seen it. We can discuss it later once I've gotten a few files up
for download.
> Me neither. But if your app just parses the data-file and outputs a
> wave-file, a simple batch-file which runs the app should be
> sufficient.
Yeah. Only, it still requires that you have both the Java Virtual Machine
and OpenAL installed - I think.
> I hope some of my comments are of some help. I'd like to have a look
> and all, but I'm currently tied up working on an IRTC entry that needs
> to be finished. Still, I'll be happy to participate in a few
> brainstorming sessions or discussions, either here or via email. But
> maybe off-topic would be a better place to discuss things, in order not
> to junk up the groups here.
Thanks for the offer.
I don't think it's off-topic, at least not major parts of the discussion.
And it would be impractical if the things we discuss are lost in a few weeks'
time. For now I think we can just keep the discussion here.
Rune
--
3D images and anims, include files, tutorials and more:
rune|vision: http://runevision.com
POV-Ray Ring: http://webring.povray.co.uk
> // name sound_filename time
> sound_point_loop("ball","wavdata/bump.wav",1000*3.5)
> This should play the sound after 3.5 seconds. Now, how should this
> information be put into the data text file? My sound system include file
> is invoked in every frame of the animation, but naturally it should not
> save the information "play bump.wav after 3500 milliseconds" every time
> it is invoked (which is in every frame).
SNIP
But the Java program is called after the animation, right? As I understand
it, the POV source parses and creates a data file, which the Java app loads
and runs through OpenAL to retrieve a nifty stereo sound. If that is so,
shouldn't it be easy to just create a macro which, when activated, will
append some data to the data file? A modified version of the macro above
(perhaps sound_point_once()) would then add the wave to the data file once
it is called. In the ball-bouncing example, the macro would only be called
directly after/when the ball hits a surface. The physics system would supply
the data needed to pinpoint the moment for the sound to start and call the
macro with that information. It won't get called when there's no hit. Is
that a possibility?
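In sketch form, something like this (the file name and line format are
invented for the example):

// Hypothetical "fire once" variant: appends an event line the moment it
// is called, so the physics code calls it only in the frame of the hit.
#macro sound_point_once(Name, WavFile, Time_ms)
  #fopen SoundData "sounddata.txt" append
  #write (SoundData, "event ", Name, " ", WavFile, " ", Time_ms, "\n")
  #fclose SoundData
#end

// e.g. inside the physics code, only when a bounce was detected:
// sound_point_once("ball", "wavdata/bump.wav", Impact_Time_ms)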
> Yeah. Only, it still requires that you have both the Java Virtual Machine
> and OpenAL installed - I think.
Well, that's much like the requirement to install POV-Ray before making use
of the sources, right? Is it bothersome to install OpenAL? A small
readme.txt with guidelines bundled with your app would be all it takes then.
> I don't think it's off-topic, at least not major parts of the discussion.
> And it would be impractical if the things we discuss are lost in a few
> weeks' time. For now I think we can just keep the discussion here.
Well, not off-topic, but if we're rambling some thoughts, we'd use quite
some space... We'll see. I guess we'll just get banned once we get
off-topic. ;-)
Regards,
Tim
--
"Tim Nikias v2.0"
Homepage: <http://www.nolights.de>
in news:414b1a41$1@news.povray.org Rune wrote:
> // name sound_filename time
> sound_point_loop("ball","wavdata/bump.wav",1000*3.5)
>
Rune,
Just shooting off some thoughts as they come up.
What was your perception of the process when you started?
I would think of two things, "tracks" and "time-line". Tracks would be
what you call sound_point; think of multi-track recording. Every object
has its own track.
The time-line tells us what happens when. In your current macro it seems
that you define start and end time, so you have to know the length of
the whole sequence beforehand. I'd prefer the data on the time-line
to be independent of the actual length of the sequence, so one does not
have to change values if one does a "10 frame test run".
Track("ship1", position, trigger)
This could be the simplest form of the macro and writes a line like:
- ship1, startframe, endframe, currentframe, position, sound_on/off
From a complete dataset like this, your Java program could figure out
everything needed, like calculating a time-code from the frame data.
Leave out as much information as possible. It gives you maximum
flexibility in case you want to change something later on, like gain or
pitch. These are properties of the sound, not of the object.
// name sound_filename time
sound_point_loop("ball","wavdata/bump.wav",1000*3.5)
This one could become:
Track("ball", position, trigger)
The Java program should know the relation ball -> bump.wav, not POV-Ray.
The Java program should know the sound-properties of the ball-object,
like whether to start the sound one frame, or n milliseconds, before or
after the impact. Should it still have a sound some time after it
bounced, etc. Also you could do some extra calculations in POV-Ray to
determine whether the trigger should switch from off to on.
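In sketch form, the Track macro could be something like this (the file
handling details are made up):

// One line per object per frame, nothing more:
// name, startframe, endframe, currentframe, position, trigger
#macro Track(Name, Position, Trigger)
  #fopen TrackFile "tracks.txt" append
  #write (TrackFile, Name, ", ", initial_frame, ", ", final_frame, ", ",
          frame_number, ", ", vstr(3, Position, ", ", 0, 4), ", ",
          Trigger, "\n")
  #fclose TrackFile
#end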
So you may have to write two scripts, a POV-script and a sound script.
This is more or less what also happens during real filming. Sound and
image are recorded separately, have two time-lines, and time-codes are
used to match things. When making (multi-screen) slide shows it is even
customary to first make the sound track and then match the timing of the
slides to that.
As said, just my thoughts, so don't bother too much with them. It's
your project.
Ingo
Among other things, ingo wrote:
> The Java program should know the relation ball -> bump.wav, not POV-Ray.
> The Java program should know the sound-properties of the ball-object,
> like whether to start the sound one frame, or n milliseconds, before or
> after the impact. Should it still have a sound some time after it
> bounced, etc. Also you could do some extra calculations in POV-Ray to
> determine whether the trigger should switch from off to on.
Those are my thoughts too. POV-Ray should be used only to "calculate" the
relative positions of different sound sources and observer, and to get some
other properties (what's the source's speed?) and triggers (did the ball
hit the ground?). Well, maybe the speed can be calculated afterwards too.
By the way, did you (Rune) take into account sound retardation (the time it
takes for the sound to reach the observer)? For this, you'd need the speed
of sound, which I suggest should be an adjustable variable given in POV
units; this also affects the Doppler effect.
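The delay itself is just distance over speed, e.g. (variable names made up):

// Time for the sound to reach the observer, in milliseconds.
#declare Speed_Of_Sound = 343; // adjustable, in POV units per second
#declare Delay_ms = 1000 * vlength(Source_Pos - Camera_Pos) / Speed_Of_Sound;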
Another thought: the "microphone" doesn't have to be the same as the camera
;-)
--
light_source{9+9*x,1}camera{orthographic look_at(1-y)/4angle 30location
9/4-z*4}light_source{-9*z,1}union{box{.9-z.1+x clipped_by{plane{2+y-4*x
0}}}box{z-y-.1.1+z}box{-.1.1+x}box{.1z-.1}pigment{rgb<.8.2,1>}}//Jellby
On Fri, 17 Sep 2004 15:18:28 +0200, "Rune" <run### [at] runevisioncom>
wrote:
>I've been working on a system to make it possible to easily composite 3D
>sound for POV-Ray animations. By 3D sound I mean sound which is in stereo
>and which also automatically gets louder when the source of the sound gets
>closer to the camera. Dobbler effect is also implemented.
>
Fascinating concept. Good luck with it.
Have you considered simply having POV-Ray render a text file in csound
syntax? csound is a public domain, cross-platform audio rendering package.
It's POV-Ray for sound. No need to code any Java. :D
I haven't tried this because I can't figure csound out. I do remember that
csound will read in external .wav files, so careful use of variable delays,
panning, and volume (amplitude) envelopes/modulation should be enough to
move sound sources in 3-space.
pov### [at] almostbestwebnet wrote:
> Have you considered simply having POV-Ray render a text file in csound
> syntax? csound is a public domain, cross-platform audio rendering
> package. It's POV-Ray for sound. No need to code any Java. :D
>
> I haven't tried this because I can't figure csound out. I do remember
> that csound will read in external .wav files, so careful use of variable
> delays, panning, and volume (amplitude) envelopes/modulation should be
> enough to move sound sources in 3-space.
I did have a look at csound before I found OpenAL, but it didn't look
straightforward or easy to learn. Besides, with csound I'd have to do all
the hard work of modulating volumes, waves and panning to simulate 3D
myself, while OpenAL does everything by itself because it's designed for
3D environments from the beginning. I don't manipulate any sound data at
all; I just pass on the information that OpenAL needs to do it, like the
positions of the sound sources and the camera/listener.
The only problem is that OpenAL is made for real-time use and thus plays the
sound immediately. The sound can only be stored in a file by recording what
is being played. My program starts and stops the recording automatically so
that the timing is just right, but there is still some reduction in sound
quality, because of the indirect way it is saved. Not a big deal though.
Rune
--
3D images and anims, include files, tutorials and more:
rune|vision: http://runevision.com
POV-Ray Ring: http://webring.povray.co.uk