On Fri, 20 Nov 2009 02:21:27 +0100, Warp <war### [at] tagpovrayorg> wrote:
>
> I don't think making a perspective transformation to a set of vertex
> points is any more expensive than rotating that same set of vertex
> points.
I think it was. A single division probably took at least several hundred
cycles on hardware of that era.
From the YouTube comments, it also seems that rotation and translation were
hardware-accelerated.
crimblue wrote:
"I actually worked for Larry Cuba. He drew the lines, I did the animation
of the pieces. We used a PDP 11/45 computer (a 16-bit processor with 16KB
of RAM) and a Vector General Graphics Display Unit. The VG could do the
motion and rotation of the image in real time, but we could not do the
perspective adjustments in real time. That's what took so much time. We
had to move the image one frame at a time and do the perspective
adjustment."
"The VG (display unit) had hardware to do movement on the screen and
rotation. The PDP11 only had to assemble a list of the original endpoints
of the vectors and the desired motion and the VG hardware would do the
rotation/motion for display on the screen. The VG had no hardware to do
3-D perspective. This meant we had to rebuild the vector endpoints for
each shot and redownload them to the VG.
That meant for every frame, the PDP11 had to recalculate the rotated/moved
endpoints and then do all the divisions necessary to create the
perspective depth effects. I don't know why the VG didn't have perspective
hardware, it just didn't.
Also remember, motion and rotation can be done with addition and
multiplication. Perspective required division. Division is a very slow
operation compared to the others. Back then we had to add up the
milliseconds each operation took!"
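To illustrate the distinction the quote draws: rotation and translation need only multiplies and adds per endpoint, while the perspective step needs a divide by depth for every point, every frame. A minimal sketch (in Python, purely for illustration; the original was of course done on a PDP-11 and the VG hardware):

```python
import math

def rotate_y(point, angle):
    """Rotation about the Y axis: only multiplies and adds,
    the kind of per-endpoint work the VG could do in real time."""
    x, y, z = point
    c, s = math.cos(angle), math.sin(angle)
    return (c * x + s * z, y, -s * x + c * z)

def project(point, d=1.0):
    """Perspective projection onto a screen plane at distance d:
    one division per coordinate, which had to be done in software
    per endpoint, per frame."""
    x, y, z = point
    return (d * x / z, d * y / z)

# One vector endpoint: rotate (cheap), then do the perspective divide.
p = rotate_y((1.0, 2.0, 4.0), math.pi / 6)
screen = project(p)
```

On a machine where integer divide costs many times a multiply, doing that divide twice per endpoint for thousands of endpoints per frame is exactly the bottleneck the comment describes.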
--
FE