Tek wrote:
>>You can't really answer this from looking how it works in real life -
>>when you take a photograph of a star the ideal camera outside the earth
>>atmosphere in ideal empty space will only show an infinitely small
>>point.
>
>
> Well I was thinking of a digital camera, which works by having a grid of colour
> sensors that effectively just add together the brightness due to all photons
> falling upon that pixel. Surely with that a very small point would always be 1
> pixel in size and have a brightness proportional to its brightness multiplied
> by how much of the area of the pixel it covers.
>
> I'm not sure how all that corresponds to gamma ramps and such, but digital
> cameras tend to get sharp images without aliasing, so surely that can be used as
> a model for an anti-aliasing technique?
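The sensor model Tek describes amounts to a box filter: the pixel integrates light over its area, so a sub-pixel point contributes its intensity weighted by the fraction of the pixel it covers. A minimal sketch of that idea (illustration only, not from the post; the function name and values are made up):

```python
# Pixel-coverage ("box filter") model of an ideal sensor element:
# a point source smaller than one pixel contributes
# intensity * fraction of the pixel area it covers.

def pixel_value(source_intensity, coverage_fraction):
    """Linear-light pixel response under the coverage model."""
    return source_intensity * coverage_fraction

# A star of linear intensity 0.8 covering a quarter of one pixel:
print(pixel_value(0.8, 0.25))  # 0.2
```

This is exactly the averaging step that interacts with any later nonlinearity such as gamma encoding.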
That's something completely different and does not help to determine the
best way to antialias a raytraced scene. Note that the only reason I
brought up the photography comparison was to illustrate the interaction
of averaging processes and nonlinearities (both occur in both digital
and conventional photography).
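The interaction Christoph refers to can be shown in a few lines (a sketch under my own assumptions, not his code; the gamma value 2.2 is just a typical example): averaging sub-samples in linear light and then gamma-encoding gives a different, and correct, result compared with averaging already gamma-encoded values.

```python
# Averaging and a gamma nonlinearity do not commute.
GAMMA = 2.2  # assumed typical display gamma

def encode(linear):
    """Gamma-encode a linear-light value."""
    return linear ** (1.0 / GAMMA)

# A half-covered pixel: one black and one white sub-sample.
samples = [0.0, 1.0]

# Correct: average in linear light, then encode.
correct = encode(sum(samples) / len(samples))
# Naive: encode each sub-sample, then average the encoded values.
naive = sum(encode(s) for s in samples) / len(samples)

print(correct)  # roughly 0.73 - the proper mid-grey
print(naive)    # 0.5 - visibly too dark
```

This is why an antialiasing scheme has to be careful about which side of the gamma ramp its averaging happens on.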
Christoph
--
POV-Ray tutorials, include files, Sim-POV,
HCR-Edit and more: http://www.tu-bs.de/~y0013390/
Last updated 11 Jan. 2004 _____./\/^>_*_<^\/\.______