Fa3ien <fab### [at] yourshoes skynet be> wrote:
> Just curious: I wasn't aware of that kind of optimization (I thought
> that everything was done by matrix, as the docs state). Does it really
> make a difference in speed?
I don't remember if someone has ever made a benchmark.
I suppose that you could make a comparison with POV-Ray 3.6 by first
rendering a scene with a lot of spheres which are only translated and
then rendering the same scene with the same spheres with the same
translations, but with something like "scale <1, .9999, 1>" additionally
applied to each sphere, which would force POV-Ray to use a full
transformation matrix.
My guess is that it might make a measurable difference, but perhaps
not a huge one (because most of the time is probably spent in the bounding
box hierarchy tests).
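A minimal sketch of that comparison (sphere count and placement are arbitrary); render once as-is, then once with the scale line uncommented, and compare render times:

#declare I = 0;
#while (I < 10000)
  sphere {
    0, 0.1
    pigment { rgb 1 }
    translate <mod(I, 100), 0, div(I, 100)>
    // scale <1, .9999, 1>   // forces a full transformation matrix
  }
  #declare I = I + 1;
#end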
--
- Warp
> Sigh.. You are I think making an assumption that, when you do want to
> change something, you would only want to change "one" transform. But,
> maybe you have a dozen, each of which does something specific to positioning
> the object, each of which is "also" affected by all of the prior
> transforms. Tell me, with a real example, not just some assertion that I
> am imagining a problem, how you do that. Yes, you can use some commands
> that can revert the object to a known state, like at the origin, then
> transform it, but that is useless if the transform you need is relative
> to some arbitrary point, which is the result of 3-4 other prior
> transforms. How do you, if you are doing say 7 translates, for some odd
> reason, revert back to the 3rd, change the 4th, then reapply the last 3?
> You can't, without drastically altering how you handled those transforms
> in the first place, and reducing them to a bare minimum number needed to
> do the task. Sure, it might be possible, but it still breaks, as near as
> I can tell, when you try to provide a post-creation transform on the
> object, to modify the prior result. Show me that I am wrong, don't just
> tell me I am.
>
What do you mean by changing "one" transform? There is only one
transform per object/texture/camera/thing. POV-Ray doesn't (and doesn't
need to) keep track of each translate/rotate/scale you type.
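To illustrate in current SDL: however many transformation statements an object contains, they all fold into its single 4x4 matrix, and the same net effect can be written as one transform identifier (a minimal sketch, values arbitrary):

// all three statements end up in the object's one transformation matrix
#declare T = transform {
  translate <1, 2, 3>
  rotate 30*y
  scale 0.5
}
sphere { 0, 1 pigment { rgb 1 } transform { T } }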
> I agree there is no technical limitation. But I can see drawbacks. In the
> current syntax, we write
>
> #declare my_sphere = sphere
> {
>   0, 1
>   pigment {White}
>   translate <1,2,3>
> }
> ..../...
> trace (my_sphere, ...)
>
> Once the sphere has been created (= after the ending '}' of the syntactical
> block), the sphere stays as it is and we have no means to apply a new
> transform to it afterwards. The only way I see to fake this is to create a
> new object and transform it:
>
> #declare my_sphere_2 = object {my_sphere translate <4,5,6>}
>
> But the first sphere remains present (even if it had no_image, no_shadow
> ....).
>
>
> The new syntax might allow such code:
>
> my_sphere = sphere
> {
>   0, 1
>   pigment {White}
>   translate <1,2,3>
> }
> ..../...
>
> my_sphere.translate y;
>
> trace (my_sphere, ...)
>
> And you get what you want, for animations for example, provided you are
> programmatically skilled enough to keep track of your objects and what
> happens to them.
>
> My opinion (I'd rather say my 'intuition'; it may change, I am not that
> attached to it) is that if you perform transformations in different
> places in the code, perhaps outside the scope of the current source file,
> you can never be sure of the current object's state, and therefore of
> the end result.
#declare foo = object { foo translate <1,2,3> }
Add a few of those randomly around your code and "you can never be sure
of the current object's state". But aren't there times when it
would be useful, when you know where transformations are being done?
foo.translate(1,2,3) is just a nicer syntax for that, and it will also
probably run faster (no need to actually create a new object).
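Side by side, for clarity (the dotted method call is the hypothetical new syntax under discussion, not existing SDL):

// current SDL: wrap and re-bind; a new object replaces the old binding
#declare foo = object { foo translate <1, 2, 3> }

// hypothetical new syntax: same effect, applied to the existing object
// foo.translate(1, 2, 3);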
> In article <4709fc49@news.povray.org>, war### [at] tag povray org says...
>> Patrick Elliott <sel### [at] rraz net> wrote:
>>> And I don't think you have *at all* addressed what I was getting at. How
>>> do you do something like IK, without either a) allowing an object-like
>>> reference system *which keeps track of* which order the transforms took
>>> place in, such that if you move something, then rotate it, you don't
>>> want to later just rotate it, and assume it's going to produce the same
>>> result, or b) limiting the types of transforms that *are* possible to an
>>> already parsed object, or c) reparsing every damn thing in the script,
>>> so you can recalculate just what the heck the object is described doing
>>> *in that frame*?
>> You don't need to reparse the entire object if you simply want to apply
>> some new transformations to it. What you do is to reset its transformation
>> matrix and apply the new transformations to it. That's it.
>>
>> If what you are doing requires remembering and applying a set of
>> transformations in order, you can simply create an array or whatever
>> with these transformations, or whatever you like. However, that's
>> completely irrelevant from the point of view of the object itself.
>>
>> The only thing you have to be able to specify is whether a transformation
>> is applied to the object only, the texture only, or both.
>>
>> You don't seem to understand how transformations work.
>>
> I know damn well how they work. And you don't solve the problem by
> reverting things. How do you revert **only** to the Nth transform so
> that you change only that one?
There is no such thing as the Nth transform. There is only one
transformation matrix. You don't seem to understand how transformations
work.
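In the kind of new SDL discussed in this thread, the "reset and reapply" idea would look roughly like this (the method names are hypothetical, not existing syntax):

// hypothetical new-SDL sketch
my_sphere.transform.reset();       // back to the untransformed object
my_sphere.translate(<1, 2, 3>);    // reapply whatever this frame needs
my_sphere.rotate(30*y);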
Warp wrote in message <470b5016@news.povray.org>:
> There's no more skill involved than in the current POV-Ray when you apply
> a series of transformations to an object.
> There's no need for a stack. There's no need to revert transformations.
> You simply apply a series of transformations to the object at each frame
> (basically what you do with the current SDL, but without having to reparse
> the object).
>
> I really can't understand what is the problem people are seeing here.
> Could someone please explain to me this?
I am not sure that this is exactly the problem that everyone else has in
mind, but I can see an example. Consider that you are writing an articulated
model using CSG (maybe a robot or something). Consider the elbow: there is
an articulation, meaning a variable rotation.
To do it easily, you proceed this way:
- you move the forearm part to put the elbow at the origin;
- you apply the free rotation;
- you move the forearm to put the elbow at the end of the upper arm.
And later, you move the whole arm as a whole.
If you want to change the free rotation of the elbow, you need to change the
second of the three transformations applied to the forearm.
With the current SDL, the way to do that is to pre-define all the values for
free rotations, and then build the whole object.
With a new SDL, I hope it would be possible to write something like that:
#declare Giskard = Robot(...);
Giskard.forearm.rotate(30*x);
The naive way to write this object is something like this:
#declare Arm = union {
  [... upper arm ...]
  object {
    Forearm
    translate <to move it to the origin>
    rotate <free rotation>
    translate <to the elbow>
  }
}
If things are done that way, it is necessary to be able to change the
rotation between the two translations.
On the other hand, it can be written this way:
#declare Arm = union {
  [... upper arm ...]
  object {
    object { /* forearm at origin here */
      Forearm
      translate <to move it to the origin>
    }
    translate <to the elbow>
  }
}
In that case, it is enough to be able to apply a new transform to the object
marked by the comment. But maybe this is less efficient.
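For concreteness, a minimal current-SDL sketch of the "pre-define the values, then build" approach mentioned above (UpperArm, Forearm and the two elbow vectors are assumed to be declared elsewhere):

#declare ElbowAngle = 30;          // the free rotation, fixed before the arm is built
#declare Arm = union {
  object { UpperArm }
  object {
    Forearm
    translate -ElbowInForearm      // move the elbow joint to the origin
    rotate ElbowAngle*x            // the variable rotation
    translate ElbowOnUpperArm      // move the elbow to the end of the upper arm
  }
}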
Bruno Cabasson wrote:
> andrel <a_l### [at] hotmail com> wrote:
>> Bruno Cabasson wrote:
>>> andrel <a_l### [at] hotmail com> wrote:
>>>> Bruno Cabasson wrote:
>>>>> Concerning the animation problem, I see things as follows:
>>>>>
>>>>> Solution 1:
>>>>> -----------
>>>>> The nth frame I(n) of an animation is a function of time only. Its
>>>>> description depends on the sole time parameter. Then you can conceptually
>>>>> write:
>>>>>
>>>>> I(n) = F(tn), with F being the function that describes the scene at time tn.
>>>>>
>>>>> This is POV's point of view, through the 'clock' variable and reparsing the
>>>>> whole scene (except radiosity and photon maps if so specified).
>>>>>
>>>>> This solution requires only the description of the F(t) function.
>>>>>
>>>>> Solution 2:
>>>>> -----------
>>>>> The nth frame I(n) is defined by a delta wrt the first frame. Its description relies
>>>>> on the description of first frame at t0 and a delta function that depends
>>>>> on the time parameter. Then you can conceptually write:
>>>>>
>>>>> I(n) = I(0) + D(tn), with D being the function that describes the variation
>>>>> of the scene between tn and t0.
>>>>>
>>>>> This solution requires the description of I(0) and the D(t) function.
>>>>>
>>>>> Solution 3:
>>>>> -----------
>>>>> The nth frame I(n) is defined by a delta wrt the previous frame. Its description
>>>>> relies on that of the previous frame I(n-1) and a delta function that
>>>>> depends on the two instants tn and tn-1. Then you can conceptually write:
>>>>>
>>>>> I(n) = G(I(n-1)) = I(n-1) + d(tn, tn-1), with d being the function that
>>>>> describes the variation of the scene between tn and tn-1.
>>>>>
>>>>> This solution requires the description of I(0) and the d(t1, t2) function.
>>>>>
>>>>>
>>>>> Each of these solutions is a different approach with pros and cons and
>>>>> implies related features and syntax.
>>>>>
>>>>> Concerning POV4, which of these is preferable?
>>>>>
>>>> none or all
>>> What do you mean exactly? I don't get your point ...
>> simply that if at frame n you want to position and transform an object
>> you will be allowed to access the state in frame n-1, so you can
>> transform incrementally (3). You can also reset the transformation
>> variables and start from there (1) or first revert to a known situation
>> either because POV4 gets a mechanism to make a snapshot or because
>> somebody will write a function (formerly a #macro) to do so.
>>
>> So all will be possible and none will be preferable. It depends on the
>> application which one is more natural and if you use a GUI you won't
>> even know.
>
> OK, but I'm still not sure I get you. You seem to suggest that POV4 should
> implement all 3 approaches for convenience reasons, and that they are
> equivalent wrt this. But would not implementing all 3 (or more) solutions
> yield too voluminous a syntax and wouldn't it be somewhat confusing to the
> user instead of being more convenient? And POV4 would have to implement
> all, which means more development effort.
No, not at all; it would be more difficult to implement it in a way that
only one option is possible.
Here is how it might be done:
scheme 1)
  /* everything parsed, now in the segment that positions for frame FrameNr */
  object.transform.reset   // revert to the initial position
  object.translate(X(FrameNr), Y(FrameNr), Z(FrameNr))
  object.rotate(A(FrameNr), B(FrameNr), C(FrameNr))
scheme 2)
  /* everything parsed, now in the segment that positions for frame FrameNr */
  /* keep the current transform matrix */
  object.translate(dX(FrameNr), dY(FrameNr), dZ(FrameNr))
  object.rotate(dA(FrameNr), dB(FrameNr), dC(FrameNr))
  /* note: rotate and translate do interact */
scheme 3)
  /* at the end of the creation of the objects, but before the */
  /* positioning of objects for frame FrameNr */
  if FrameNr == 0
    save_transforms_of_all_objects
  else
    load_transforms_of_all_objects
  end
  /* everything is now positionable wrt the first frame */
  object.translate(X(FrameNr), Y(FrameNr), Z(FrameNr))
  object.rotate(A(FrameNr), B(FrameNr), C(FrameNr))
>
> BTW, considering the problem of POV4's development: who will do it? What
> manpower and skills are available today? We have the POV-Team and perhaps
> volunteers around here. But it is quite a long-term commitment. Would
> POV-Team members be likely to commit? Such a development requires a good
> impulse at the start AND sustained manpower AND a team development
> process. To my understanding (and wish), POV4 cannot be a hack. It
> requires an 'industrial' process, whether or not it is open-sourced.
> Anyway, I don't worry about the testing power ...
>
>>> POV has currently solution 1.
>> no, it reparses the scene for every frame.
>
> This is precisely what I(n) = F(tn) means: every frame is a function of the
> time parameter alone, through the clock variable. Solution 1 therefore means
> that the scene is reparsed every frame: everything in the scene is rebuilt
> from scratch, driven by the clock variable or its derived variables such as
> frame_number. They all represent an 'absolute' time.
There is no need to reparse the object creation block in POV4.
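For reference, a minimal sketch of that clock-driven pattern (solution 1) in today's SDL; the whole file is parsed again for every frame:

// e.g. rendered with +KFI0 +KFF29
#declare Angle = 360*clock;
sphere {
  <2, 0, 0>, 0.5
  pigment { rgb <1, 0, 0> }
  rotate Angle*y    // the position depends only on the absolute clock value
}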
Bruno Cabasson wrote:
> andrel <a_l### [at] hotmail com> wrote:
>> Patrick Elliott wrote:
>> I don't want to interfere in your personal fight, but I think this is what
>> is going to happen:
>> - POV4 transformations will be implemented the same as in current POV,
>> one transformation matrix per object, texture.
>> - The language will be enhanced and the processing of it changed in such
>> a way that retransforming one or more specific elements will become an
>> issue. e.g. in generating the next frame of a complicated object without
>> reparsing the lot.
>
> It is the Frame(n) = Frame(0) + Delta(tn) scheme, with Frame(0) representing
> most of the parsing (initial conditions of the scene, building of objects
> ....), and Delta(tn) the variation of the scene since first frame. The
> function Delta() can be expressed (and given syntax) as timelines attached to
> objects, handled by a corresponding control process. These time lines would
> then embody the memory of the transformations to apply through time (the
> 'array' of transformations in question in the next paragraph)
>
>> - it'll be up to the user to implement a stack of transformations if he
>> thinks he needs one. Reverting to a marked state and replaying the
>> changed set of transformations from that point.
>> - luckily the language will be enhanced in such a way that such an
>> implementation is easy.
>
> I think that leaving the responsibility to handle these stacks of
> transformations to the programmer is dangerous and requires too much
> programming skill wrt the goal we intend to reach in terms of programming
> ease and accessibility. In the scheme I described, only the control process
> of timelines can do the job and guarantees sensible operations.
>
>> Also before fighting on, you should first define what you mean by
>> creation/post creation etc. I think you have a different view on when an
>> object is actually created. Is that e.g at the end of the union defining
>> it (Patrick?) or at the end of parsing (warp?).
>> If you assume the latter (which happens to be my point of view) post
>> creation transforms do not exist, by definition.
>
> My point of view is that within a single frame, all transforms should be
> defined at creation time, and post creation transforms should be forbidden,
> unless controlled by the timelines attached to the objects and their control
> process within animations (and only in that case).
>
>
>> ... BTW I think the time of
>> definition of the shape also needs a name (birth? though you will be
>> able to clone after birth... conception?), because it is a significant
>> moment in the life of that object. I predict that it'll come up in
>> discussions about POV4 frequently.
>
> As we try to make POV have some OO aspects, I'd rather be inclined to keep
> the term of 'creation'.
>
My point is that there is more than one moment at which an object is
'created'. The first is when it is fully constructed for the first time,
at some arbitrary point, with its transform matrix fully reset. The final
one is when rendering starts: at that point it has reached its final
position, orientation and scale. In between there may be moments when it
is repositioned (etc.) and included in a larger object, so that from the
perspective of that object it is fully created but the compound can still
be transformed. It might even be technically possible for the
implementation to allow repositioning an object from within a shader
program, but I would be against that.
What I was saying is that we should distinguish between the 'first
creation' and the 'final creation'. Calling both simply 'creation' will
lead to confusion (as shown in this thread: much of what you and
Patrick are writing does not make sense to me, because for me 'creation'
is 'final creation').
Patrick Elliott brought us his insights on 2007/10/08 17:58:
> In article <4709fc49@news.povray.org>, war### [at] tag povray org says...
>> Patrick Elliott <sel### [at] rraz net> wrote:
>>> And I don't think you have *at all* addressed what I was getting at. How
>>> do you do something like IK, without either a) allowing an object-like
>>> reference system *which keeps track of* which order the transforms took
>>> place in, such that if you move something, then rotate it, you don't
>>> want to later just rotate it, and assume it's going to produce the same
>>> result, or b) limiting the types of transforms that *are* possible to an
>>> already parsed object, or c) reparsing every damn thing in the script,
>>> so you can recalculate just what the heck the object is described doing
>>> *in that frame*?
>> You don't need to reparse the entire object if you simply want to apply
>> some new transformations to it. What you do is to reset its transformation
>> matrix and apply the new transformations to it. That's it.
>>
>> If what you are doing requires remembering and applying a set of
>> transformations in order, you can simply create an array or whatever
>> with these transformations, or whatever you like. However, that's
>> completely irrelevant from the point of view of the object itself.
>>
>> The only thing you have to be able to specify is whether a transformation
>> is applied to the object only, the texture only, or both.
>>
>> You don't seem to understand how transformations work.
>>
> I know damn well how they work. And you don't solve the problem by
> reverting things. How do you revert **only** to the Nth transform so
> that you change only that one? You are assuming, I think wrongly, that
> no combination of transforms can produce a situation where the result
> cannot be reset, then some arbitrary transform reapplied to make the one
> change needed. Worse, your assertion that all you need to do, if it is a
> problem, is keep every transform in some sort of array, then reapply
> them from that, is... What the frack do you think I have been saying?
> The only difference between your array and mine is that I separate
> "types" of transforms so you don't have to remember if the second
> translate is the 6th transform in the array, not the 5th. The point is
> to still track those transforms in an array of some type, so they can be
> reapplied, *if* you have to manage them that way. Your, "just make some
> separate transform array", just obfuscates what is going on, by
> separating the transforms from the object they affect, when they should,
> logically be considered "part" of the final object (especially if it's a
> compound object and things like "how" the texture is applied is changed
> dependent on the position of those sub-objects in some way, as a result
> of those transforms).
>
> I think you are badly missing my point, both in terms of what I mean and
> how any such system would end up looking from a user standpoint.
>
There is NO list/array of transformations, just one and only one transform
matrix that holds the CUMULATIVE result of every transformation that was applied
to your object. You always apply only ONE cumulative transform. Fast and efficient.
If you were to keep a list of transforms, it would be memory-inefficient and
slow. You would have to apply every transformation for every ray that
encounters the object: direct rays, refracted rays, reflected rays and shadow
tests. You would also need to apply all of those listed transformations for any
CSG operation you want to do with the transformed object(s). SLOW and inefficient.
ALL objects always have a texture, even if you don't provide one: there is
something called a Default Texture. Any texture you supply replaces the parts of
that default texture with those you define. Any leftover part remains that of
the default.
If you apply the texture before the transform, then the transform also applies to
the texture, and you can consider the texture as "part" of the object.
If you apply the transform before you apply the texture, that transform doesn't
apply to the texture. If you then apply another transformation, that last
transformation will also apply to the texture. The texture is still an integral
part of the object.
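A minimal sketch of that ordering rule in current SDL (the checker pigment is just an easy way to see whether the pattern moved):

// texture attached first, then transformed: the pattern moves with the object
sphere { 0, 1
  texture { pigment { checker } }
  translate <2, 0, 0>
}

// object transformed first, texture attached afterwards: the earlier translate
// does not affect the pattern, while the later rotate affects both
sphere { 0, 1
  translate <2, 0, 0>
  texture { pigment { checker } }
  rotate 45*y
}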
--
Alain
-------------------------------------------------
You know you've been raytracing too long when you can recite your high school
Trig book from memory.
In article <470### [at] hotmail com>, a_l### [at] hotmail com says...
> Patrick Elliott wrote:
> > In article <4709fb43@news.povray.org>, war### [at] tag povray org says...
> >> Patrick Elliott <sel### [at] rraz net> wrote:
> >>> If you have something like:
> >>> #declare yy = clock
> >>> sphere {...
> >>> translate <yy,35,61>
> >>> texture { some_complex_texture translate 100*x}
> >>> translate y*2}
> >>> Then you **must** reparse the object every single time, because once the
> >>> object exists internally, it can't be changed.
> >> Wrong.
> >>
> >> The object has its own transformation matrix and the texture its own.
> >> All transformations applied to the object go to its transformation matrix,
> >> and all transformations applied to the texture (be it directly in the
> >> texture block or indirectly in the object block) go to the transformation
> >> matrix of the texture.
> >> It would be perfectly possible to alter these two transformation matrices
> >> afterwards. It's simply a question of which transformations are applied to
> >> the object only, which ones to the texture only and which ones to both.
> >>
> >>> The
> >>> transforms you need to be able to change are "not" sitting conveniently
> >>> as the last thing in the object, they are buried deep within the
> >>> structure.
> >> You seem to have this concept that the transformations are somehow
> >> stored in the definition of the object, and that this order must be
> >> preserved.
> >>
> >> The individual transformations are not stored anywhere. Each transformation
> >> is simply a command (a kind of "function call" if you like) which modifies
> >> the internal transformation matrix of the object.
> >>
> >> It would be perfectly possible, after the object has been created (with
> >> the transformations and all), to reset its transformation matrix and then
> >> apply the same transformations to it, resulting in the exact same end
> >> result. The only distinction you have to make is which transformations
> >> go to the object, which ones to the texture and which ones to both.
> >>
> >>> If we want to be able to animate, without a reparse, we need
> >>> an internal representation that allows "each" transform, texture,
> >>> object, etc. to exist as accessible elements, not as static
> >>> declarations.
> >> No we don't.
> >>
> >>> In other words, you need to make it "look" like:
> >>> yy = yy + 1
> >>> mysphere.translate(0)=yy,35,61
> >>> Even as the engine keeps track of "how" those things connect:
> >>> start->translate(0)->texture(0)->translate(1)->end
> >> The engine doesn't need to keep track of that. You should acquaint
> >> yourself with transformation matrices and how they work.
> >>
> >>> How else do you both allow animation, without a reparse, but also
> >>> maintain the capacity to place as many transforms, or other elements,
> >>> into the object as you can now?
> >> You can write a thousand individual transformations into an object,
> >> yet none of them will be (individually) stored anywhere. They are all
> >> applied to one single 4x4 transformation matrix. POV-Ray doesn't need
> >> to keep track of the individual transformations nor store them anywhere.
> >>
> >> The only thing you need to specify is whether a certain transformation
> >> is applied to the object, to the texture, or both.
> >>
> >>> See what I am getting at? If you want to maintain the "existing" SDL,
> >>> you have to allow for this, or suffer the current consequence of having
> >>> to reparse the "entire" SDL every frame.
> >> That's just not true. The only thing you have to allow in the new SDL
> >> is to be able to apply transformations to the object only, the texture
> >> only or both at the same time. This is very trivial to do.
> >> It's perfectly possible to transform the object but not the texture
> >> even after the texture has been specified.
> >>
> >>
> > Sigh.. You are I think making an assumption that, when you do want to
> > change something, you would only want to change "one" transform. But,
> > maybe you have a dozen, each of which does something specific to positioning
> > the object, each of which is "also" affected by all of the prior
> > transforms. Tell me, with a real example, not just some assertion that I
> > am imagining a problem, how you do that. Yes, you can use some commands
> > that can revert the object to a known state, like at the origin, then
> > transform it, but that is useless if the transform you need is relative
> > to some arbitrary point, which is the result of 3-4 other prior
> > transforms. How do you, if you are doing say 7 translates, for some odd
> > reason, revert back to the 3rd, change the 4th, then reapply the last 3?
> > You can't, without drastically altering how you handled those transforms
> > in the first place, and reducing them to a bare minimum number needed to
> > do the task. Sure, it might be possible, but it still breaks, as near as
> > I can tell, when you try to provide a post-creation transform on the
> > object, to modify the prior result. Show me that I am wrong, don't just
> > tell me I am.
> >
> I don't want to interfere in your personal fight, but I think this is what
> is going to happen:
> - POV4 transformations will be implemented the same as in current POV,
> one transformation matrix per object, texture.
> - The language will be enhanced and the processing of it changed in such
> a way that retransforming one or more specific elements will become an
> issue. e.g. in generating the next frame of a complicated object without
> reparsing the lot.
> - it'll be up to the user to implement a stack of transformations if he
> thinks he needs one. Reverting to a marked state and replaying the
> changed set of transformations from that point.
> - luckily the language will be enhanced in such a way that such an
> implementation is easy.
>
> Also before fighting on, you should first define what you mean by
> creation/post creation etc. I think you have a different view on when an
> object is actually created. Is that e.g at the end of the union defining
> it (Patrick?) or at the end of parsing (warp?).
> If you assume the latter (which happens to be my point of view) post
> creation transforms do not exist, by definition. BTW I think the time of
> definition of the shape also needs a name (birth? though you will be
> able to clone after birth... conception?), because it is a significant
> moment in the life of that object. I predict that it'll come up in
> discussions about POV4 frequently.
>
Well, in any language that would allow parsing, then animation
"internally", without a reparse, its *both*. I.e., you instance a
complete object on the first "pass" if you will, much as you would
compile a language. Once this is done, code execution, and any changes
to the object "allowed" in that context, can still happen, but the
"objects" are now static in all other respects. In other words, you
never really stop parsing, in the sense of executing branching, loops or
other commands that have an effect on the objects, but you also
*don't/can't* do the equivalent of arbitrarily changing the parameters
of an object that has already been created, any more than you can do the
following in any compiled/JIT type language:
sub blah(a, b, c)
  'do stuff
end sub

sub blah(a)
  'do something completely different.
end sub

call blah(1, 2, 3)
At best, you are going to get an error just trying to redefine the same
function, at worst, if the language allowed it, you would redefine it to
the latter version, and you would get an error due to the parameters
being wrong. You are thus **not** generally allowed to do that. Same
with an object. So, what I am looking at is something like a JIT compile
of the script. Objects get defined *once*, then from then on you make
changes to *that* object. No reparsing happens, because the *script*
executes like any other JIT. You can change variables, etc., within
limits, or even attributes of the object **if** some method exists for
getting at them, but you can't reparse the object all over again to
remake it for the next frame, you have to take what *exists* and change
what parameters are "allowed" at that point. The problem is, if getting
it to that point requires 5-6 interlocked transforms *before* the one
you plan to make changes to in the next frame, you need some way to make
sure it is in that state "before" making that transform, then you also
have to make sure any additional transforms that happen "after" that
point also take place, and in the right order. The alternative is to do
them all after the fact, which shoots huge holes in any attempt to make
it object oriented, or you have to deny the ability to make multiple
transforms on objects or their sub-objects, as you are currently able
to.
The only other solution I could see would be the "mark" the transforms
you "want" to be able to change later, then only allow access to those.
So, in that case, every transform up to that point becomes the objects
"default" state, to which your change is applied. That still doesn't
solve the problem, because then it also has to remember and reapply any
additional state changes that need to happen "after" that as well. All
you end up doing is decreasing the number of state changes you have to
track, without actually changing the problem.
Finally, if you do keep an array of all the transforms applied to a
specific object, to be replayed, then why not keep them in a list *on*
the object, where it makes sense, instead of in a separate array/table,
which has no obvious connection to the original object at all? And
then.. I am not sure what POVRay does in such cases, but what happens if
you do something like:
union {
  object1
  object2
  scale .5
  object3
}
Does it transform all three, or the first two, *then* union the third?
If the latter, then you have a huge problem because no list of transforms
"can" produce the same result, unless you also replay the exact process
of instancing the objects in the union as well. I would hope that it
deals with it as one complete object, instead of changing the size of
the first two, then adding the third. That would make it possible to
simply apply a list of transforms. If it doesn't work that way though,
then the objects themselves become strongly dependent on the order of
operations as well.
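One way to sidestep the question is to make the intended grouping explicit, whichever way interleaved modifiers are handled; a current-SDL sketch (object1..object3 stand for declared object identifiers, as above):

union {
  union {
    object { object1 }
    object { object2 }
    scale .5            // unambiguously applies only to object1 and object2
  }
  object { object3 }
}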
--
void main () {
call functional_code()
else
call crash_windows();
}
Get 3D Models, 3D Content, and 3D Software at DAZ3D!
<http://www.daz3d.com/index.php?refid=16130551>
In article <470b5016@news.povray.org>, war### [at] tag povray org says...
> Bruno Cabasson <bru### [at] alcatelaleniaspace fr> wrote:
> > I think that leaving the responsibility to handle these stacks of
> > transformations to the programmer is dangerous and requires too much
> > programming skill wrt the goal we intend to reach in terms of programming
> > ease and accessibility.
>
> There's no more skill involved than in the current POV-Ray when you apply
> a series of transformations to an object.
> There's no need for a stack. There's no need to revert transformations.
> You simply apply a series of transformations to the object at each frame
> (basically what you do with the current SDL, but without having to reparse
> the object).
>
> I really can't understand what is the problem people are seeing here.
> Could someone please explain to me this?
>
> Is it that some people seem to think that once you have applied a
> texture to the object you can't transform the object without transforming
> the texture too? Says who? Just because you can't do it in the current SD
L
> that doesn't mean it wouldn't be possible in the new one. Even in the
> current SDL it's just a *syntactical* limitation. There's absolutely no
> technical reason why there couldn't be a command like "apply this
> transformation to the object but not its texture". It would simply be
> a question of adding the proper syntax for it.
>
> Seemingly some people also think that if a transformation has been
> applied, it's engraved in stone and cannot be removed anymore. Removing
> all the transformations is just a question of resetting the transformatio
n
> matrix. Then you can re-apply all the transformations you want to the
> object, making the modifications you want.
>
> (Granted, in the current POV-Ray implementation this would require a
> bit more work because certain transformations to certain objects are
> "optimized away", for example by applying the transformation to some
> object coordinate instead of applying it to the transformation matrix.
> Thus such a transformation would indeed be "engraved in stone" and not
> possible to revert.
> However, it's perfectly possible to change this system without removing
> these optimizations. It's possible to apply all the transformations to
> the transformation matrix of the object and then, after all the
> transformations have been applied, just before starting the rendering,
> it's possible to examine the transformation matrix for certain properties
> and if these properties exist, they can be "optimized away" in the same
> way as currently. For example, if the object is a sphere and there exists a
> translation component, the translation can be applied to the center
> coordinate of the sphere and removed from the transformation matrix.)
>
> > My point of view is that within a single frame, all transforms should be
> > defined at creation time, and post creation transforms should be forbidden
>
> Why impose such an artificial limitation? It doesn't make any sense.
>
I think we are talking past each other here in some respects. Yeah, I
agree, if they work in the sense that they should, i.e., scaling
"before" defining the last object in a union also scales "that" object,
not just the prior ones, then applying all transforms "after" is
perfectly valid. It doesn't fundamentally change my argument, which is
that, "If you are going to store that stuff in an array, lets at least
assign the array to something that "looks" like its part of the object,
and not some separate entity." In other words, if you are going to apply
10 transforms to object Z, then *make that array* part of the Z object,
*at least* from the perspective of the coder. That way you can tell
"what" they belong to by simply referencing the object they belong to,
not some arbitrary array that has no association, save that you just
happen to use it for that.
Or is there some huge objection to making it even that simplified?
Oh, and the idea of placing each "type" of transform into a separate
array is more like a filter. I.e., if you want to change the 3rd "scale",
just reference the 3rd scale in the array, even if it is "actually" the
7th transform. This doesn't even need to be a separate array, just a
function call that counts the transforms in the main array, until it
gets the "nth" transform of the type requested. In other words,
something like:
function translate(n)
  ' return the index, within self.transforms, of the nth "translate"
  ' (counting from 0), or -1 if there are not that many
  c = -1
  for i = 0 to ubound(self.transforms)
    if type(self.transforms(i)) = "translate" then
      c = c + 1
      if c = n then
        return i
      end if
    end if
  next
  return -1
end function
"transforms" being the "array" that is used to track "all" transforms
applied to that object in the original SDL step that created it, which
would be identical to the existing SDL. This doesn't prevent you from adding
new transforms, either via the existing SDL method, or via the object
reference, or even *deleting* one that was previously applied (which
isn't currently possible).
Hope that is clearer.
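A usage sketch in the same hypothetical pseudo-syntax, following the function above (none of this is existing SDL; my_object and some_new_translate are placeholders):

' find the 2nd "translate" (index n = 1), wherever it sits among all the
' transforms recorded for my_object, and replace just that step
i = my_object.translate(1)
if i > -1 then
  my_object.transforms(i) = some_new_translate
end if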
--
void main () {
call functional_code()
else
call crash_windows();
}
Get 3D Models, 3D Content, and 3D Software at DAZ3D!
<http://www.daz3d.com/index.php?refid=16130551>