Colin Doncaster <col### [at] bentanimationcom> wrote...
> It's more the ray that gets jittered, similar to DOF, than the actual
> object. You'd just need to hold the different sample points in memory
> similar to the current implementation. Wouldn't you?
But how exactly would you jitter the ray? As I understand it, jittering a
ray in the time domain does not change the geometry of the ray itself; it
just means choosing a time for that ray. The intersection function for each
object then takes a 'time' parameter, which temporarily places the object
at that point in time, so that you end up with the intersection for that
instant. This would require that objects be able to store expressions based
on the clock variable (or at least splines based on clock) for various
attributes, which would then be evaluated at render time. Currently all
expressions, including those based on the clock variable, are evaluated at
parse time, producing a static scene. No movement information is stored in
the objects.
My motion blur patch still uses static objects. It just provides the
ability to produce a bunch of static objects and make them all
semi-transparent through a special supersampling scheme.
-Nathan