On Tue, 22 Apr 2003, Christopher James Huff wrote:
>In article <Pine.GSO.4.53.0304221845420.6559@blastwave>,
> Dennis Clarke <dcl### [at] blastwaveorg> wrote:
>
>> In what way is antialiasing within a single pixel different from blur of
>> a single pixel?
>
>You can't blur a pixel, you need a set of pixels. The algorithm used is
>somewhat similar, but the goal and result are different. Blurring
>removes information from an image, spreading colors out across adjacent
>pixels.
well .. not really. If by "blur" we mean the optical result of an image
being out-of-focus then the data is all there, simply not arranged in a
fashion that people prefer. A digital blur of a digital image by way of
a gaussian ( or similar algorithm ) approach merely distributes the data.
There are algorithms that will un-blur an image. These are the same tools
that are used to unblur the linear movement of a projectile or the rotation
of an engine component within a photograph.
> Antialiasing adds data, coloring each pixel with the overall
>color of the area it covers instead of a single point within that area,
>and is not dependent on the colors of adjacent pixels.
ok .. I'm with you on that. We are talking about using multiple sample
points within a pixel then, probably distributed as a square matrix of
samples. This would be the same then as simply having a higher resolution
image and then doing a blur of the pixels on a block by block basis while
ignoring neighbors. At least that is how I perceive the issue. The
removal of jagged edges on lines and sharp boundaries can be achieved with
either a high resolution image blurred or a low-resolution image with a
multi-sample per pixel approach.
> You can use an
>antialiasing algorithm with a large image as input, but you will get a
>smaller image as a result.
well yes, that is clear. It makes no difference whether you sample each
pixel 9 times ( 3x3 ) within a 100x100 data array or simply blur the 3x3
pixel blocks of a 300x300 data array to produce a 100x100 result set. I
think, however, that the result from the latter would be smoother than the
former.
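For concreteness, a minimal C sketch of that 3x3 block average (one grayscale
channel, row-major byte buffers; the function and buffer names are invented
for illustration, this is not POV-Ray's actual antialiasing code):

#include <stddef.h>

/* Average each 3x3 block of the source image into one output pixel.
   src is src_w x src_h, dst is (src_w/3) x (src_h/3), both row-major
   with one byte per pixel. This is exactly the "blur each block while
   ignoring neighbors" idea described above, nothing more. */
static void downsample_3x3(const unsigned char *src, size_t src_w,
                           size_t src_h, unsigned char *dst)
{
    size_t x, y, dx, dy;
    size_t dst_w = src_w / 3;
    size_t dst_h = src_h / 3;

    for (y = 0; y < dst_h; y++) {
        for (x = 0; x < dst_w; x++) {
            unsigned int sum = 0;
            for (dy = 0; dy < 3; dy++)
                for (dx = 0; dx < 3; dx++)
                    sum += src[(y * 3 + dy) * src_w + (x * 3 + dx)];
            dst[y * dst_w + x] = (unsigned char)(sum / 9);
        }
    }
}

Run on a 300x300 buffer, this gives the 100x100 result set mentioned above;
the only difference from a true multi-sample render is that the nine samples
come from an already rendered grid instead of from fresh rays.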
Dennis
On Tue, 22 Apr 2003 19:21:20 +0100, "Andrew Coppin"
<orp### [at] btinternetcom> wrote:
>(From someone who has now MEMORISED the zillion-digit product key for my
>copy of Windows 2000 Advanced Server... what does that tell you???)
That you had to reinstall it so many times you have the number burned
on your retina?
Peter Popov ICQ : 15002700
Personal e-mail : pet### [at] vipbg
TAG e-mail : pet### [at] tagpovrayorg
On Tue, 22 Apr 2003 23:04:29 -0400, Christopher James Huff
<cja### [at] earthlinknet> wrote:
>Manually blurring won't help much here, if at all. It really only helps
>with the "nearest neighbor" algorithm, which won't be used by any decent
>graphics program and would be useless for removing aliasing.
Disagreed. Perhaps you're confusing blur with mosaic here? Mosaic does
wonders with nearest neighbor, if you scale down by an integer factor
and use the same number for the mosaic size. Thus you get the exact
same result as +a0.0 +rn, where n is the scale factor.
Gaussian blur helps a lot in most other downsampling algorithms, with
the slight drawback that some careful work with unsharp mask is needed
after the resizing to bring out the (now) subpixel details.
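For illustration, here is a minimal C sketch of the kind of small pre-blur
pass meant here, using a 3x3 binomial kernel (the outer product of [1 2 1]
with itself) as a crude stand-in for a real Gaussian; the buffer layout and
names are invented:

#include <stddef.h>

/* One pass of a 3x3 binomial blur, a rough approximation of a
   small-radius Gaussian. Grayscale, row-major, interior pixels only;
   the one-pixel border is left untouched for brevity. */
static void blur_3x3_binomial(const unsigned char *src, unsigned char *dst,
                              size_t w, size_t h)
{
    size_t x, y;

    for (y = 1; y + 1 < h; y++) {
        for (x = 1; x + 1 < w; x++) {
            unsigned int sum =
                1 * src[(y - 1) * w + x - 1] + 2 * src[(y - 1) * w + x] + 1 * src[(y - 1) * w + x + 1]
              + 2 * src[ y      * w + x - 1] + 4 * src[ y      * w + x] + 2 * src[ y      * w + x + 1]
              + 1 * src[(y + 1) * w + x - 1] + 2 * src[(y + 1) * w + x] + 1 * src[(y + 1) * w + x + 1];
            dst[y * w + x] = (unsigned char)(sum / 16); /* weights sum to 16 */
        }
    }
}

The idea is to run something like this once (or a wider true Gaussian) before
the downscale, then use unsharp mask afterwards as described above to pull
the now-subpixel detail back out.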
Peter Popov ICQ : 15002700
Personal e-mail : pet### [at] vipbg
TAG e-mail : pet### [at] tagpovrayorg
> That you had to reinstall it so many times you have the number burned
> on your retina?
BINGO!
Andrew.
In article <22icav4v4rfsmsaue3rbijffon1g935u3h@4ax.com>,
Peter Popov <pet### [at] vipbg> wrote:
> Disagreed. Perhaps you're confusing blur with mosaic here? Mosaic does
> wonders with nearest neighbor, if you scale down by an integer factor
> and use the same number for the mosaic size. Thus you get the exact
> same result as +a0.0 +rn, where n is the scale factor.
No, I am thinking of blur. (How could I confuse it with mosaic?!?)
And mosaic will only "do wonders with nearest neighbor" if it does just
what the resampling algorithm would do anyway. What you suggest here is
equivalent to downsampling with some good algorithm, upsampling back to
the original size with nearest neighbor, and then downsampling again
with nearest neighbor. You also assume mosaic itself doesn't use
nearest neighbor.
> Gaussian blur helps a lot in most other downsampling algorithms, with
> the slight drawback that some careful work with unsharp mask is need
> after the resizing to bring out the (now) subpixel details.
And this can all be handled by the downsampling algorithm itself, in
which case you will just be doing more work than necessary at best, or
get in its way and get inferior results at worst. A good algorithm
will sample the area around the destination pixel; if the source image
is already blurred you will get bleed-over in the final image. The
resulting image will be blurrier than it should be.
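To make "sample the area around the destination pixel" concrete, here is a
1-D sketch in C (an invented helper, not any particular resampler's code):
the triangle filter's footprint already reaches into the neighboring source
blocks, so pre-blurring the source only widens that footprint further and
bleeds detail:

#include <stddef.h>
#include <math.h>

/* Compute one output sample of an integer-factor downsample using a
   triangle (tent) weight whose footprint spans two source blocks. */
static double tent_downsample_1d(const double *src, size_t n,
                                 size_t factor, size_t out_index)
{
    double center = (out_index + 0.5) * (double)factor; /* source-space center */
    double radius = (double)factor;                     /* footprint is 2*factor wide */
    double sum = 0.0, wsum = 0.0;
    size_t i;

    for (i = 0; i < n; i++) {
        double w = 1.0 - fabs(((double)i + 0.5) - center) / radius;
        if (w > 0.0) {          /* sample lies inside the footprint */
            sum  += w * src[i];
            wsum += w;
        }
    }
    return (wsum > 0.0) ? (sum / wsum) : 0.0;
}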
--
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tagpovrayorg
http://tag.povray.org/
In article <Pine.GSO.4.53.0304230228300.7720@blastwave>,
Dennis Clarke <dcl### [at] blastwaveorg> wrote:
> well .. not really. If by "blur" we mean the optical result of an image
> being out-of-focus then the data is all there, simply not arranged in a
> fashion that people prefer.
No, the blurred image contains less information. Small details are
gone, and geometry and spatial information are lost.
> A digital blur of a digital image by way of
> a gaussian ( or similar algorithm ) approach merely distributes the data.
> There are algorithms that will un-blur an image.
No there aren't, not in the way you seem to be thinking, anyway. You've
watched too many Hollywood movies. Information is irretrievably lost in
the blur process; you cannot recover the exact original.
> These are the same tools
> that are used to unblur the linear movement of a projectile or the rotation
> of an engine component within a photograph.
This is a bit different...you have information about the motion other
than what is contained in the photo, and can use that to at least
partially reconstruct the original. Even then, you can't completely
recover it.
> ok .. I'm with you on that. We are talking about using multiple sample
> points within a pixel then, probably distributed as a square matrix of
> samples. This would be the same then as simply having a higher resolution
> image and then doing a blur of the pixels on a block by block basis while
> ignoring neighbors. At least that is how I perceive the issue. The
> removal of jagged edges on lines and sharp boundaries can be achieved with
> either a high resolution image blurred or a low-resolution image with a
> multi-sample per pixel approach.
Go ahead and blur a high-res image...you get a blurry high-res image
with no crisp edges, not a smooth-edged one. Antialiasing requires
inputting several samples and outputting one; you have to end up with a
smaller image.
> well yes, that is clear. It makes no difference whether you sample each
> pixel 9 times ( 3x3 ) within a 100x100 data array or simply blur the 3x3
> pixel blocks of a 300x300 data array to produce a 100x100 result set. I
> think, however, that the result from the latter would be smoother than the
> former.
You don't get a smaller result set with a blur; you get one the same
size as the source data. What you are talking about is downsampling.
--
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tagpovrayorg
http://tag.povray.org/
3.1415926535897932384626 IIRC
On Thu, 24 Apr 2003, Apache wrote:
>3.1415926535897932384626 IIRC
ha ha .. yes .. more or less .. off the top of my head I recall
3.14159 26535 89793 23846 26433 83279 50288
more or less ... I think that I can do at least fifty digits in five
digit groups .. for no good reason.
In any case .. perhaps a POV render of Pi related art could be done?
hmmmmm
Dennis
> >3.1415926535897932384626 IIRC
>
> ha ha .. yes .. more or less .. off the top of my head I recall
>
> 3.14159 26535 89793 23846 26433 83279 50288
3.1415926535897932384626433832975028841971693993 :-P
> more or less ... I think that I can do at least fifty digits in five
> digit groups .. for no good reason.
Likewise; whenever anyone asks why my memory is so bad, I tell them I
already used it up with pi. (And statements like "the diffusional rate of a
gas is inversely proportional to the square root of its density" or "a
relation is a subset of the extended Cartesian product of the domains of its
attributes". And the op-code for RTS in 6502 machine code - in hex and
decimal. Man, I need a life...)
> In any case .. perhaps a POV render of Pi related art could be done?
I know (or rather, have on file) a rather neat algorithm for calculating
pi... used it to write a program which could find hundreds of digits fairly
fast... hmmm...
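One classic example of such an algorithm (not necessarily the one on file
here) is Machin's arctangent formula; in plain C doubles it runs out of
precision at 15-16 digits, and the hundreds-of-digits programs use the same
sort of formula with arbitrary-precision arithmetic instead:

#include <stdio.h>
#include <math.h>

/* Machin's 1706 formula: pi = 16*atan(1/5) - 4*atan(1/239). */
int main(void)
{
    double pi = 16.0 * atan(1.0 / 5.0) - 4.0 * atan(1.0 / 239.0);

    printf("%.15f\n", pi); /* prints 3.141592653589793 */
    return 0;
}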
Andrew.
On Thu, 24 Apr 2003, Andrew Coppin wrote:
>> >3.1415926535897932384626 IIRC
>>
>> ha ha .. yes .. more or less .. off the top of my head I recall
>>
>> 3.14159 26535 89793 23846 26433 83279 50288
>
>3.1415926535897932384626433832975028841971693993 :-P
                               ^^
I know that you have it in your head ( why? ) but your fingers had minor
dyslexia ! :)
>
>> more or less ... I think that I can do at least fifty digits in five
>> digit groups .. for no good reason.
>
>Likewise; whenever anyone asks why my memory is so bad, I tell them I
>already used it up with pi. (And statments like "the diffusional rate of a
>gas is inverstly proportional to the square root of its density" or "a
>relation is a subset of the extended Cartesian product of the domains of its
>attributes". And the op-code for RTS in 6502 machine code - in hex and
>decimal. Man, I need a life...)
oh God! another math-physics major!
I used to be able to program 6502 assembly off the top of my head too,
many years ago, back when the Commodore 64 with 64K of RAM was huge!
>
>> In any case .. perhaps a POV render of Pi related art could be done?
>
>I know (or rather, have on file) a rather neat algorithm for calculating
>pi... used it to write a program which could find hundreds of digits fairly
>fast... hmmm...
well .. maybe there is a way to do a render in which a really slow method
of finding pi is used to generate successive frames of geometric objects,
with the current approximation to pi driving the object structures. After
about 100 frames we could have pretty much perfection, but the initial
fifty frames would be akin to a Salvador Dali painting coming to life,
with lots of bent and twisted objects.
Sound cool?
One of the slowest convergences to pi comes from the Riemann zeta series
with s=2, thus:
zeta(s) = 1 + (1/2)^s + (1/3)^s + (1/4)^s + (1/5)^s + ... + (1/n)^s where
n should go to infinity. It was Leonhard Euler who discovered in the 1700s
that zeta(2) converges to pi squared divided by six, so pi = sqrt( 6 * zeta(2) ).
I have used this simple, slowly converging fact for two decades now to
benchmark the floating-point performance of various processors. The Sun
UltraSPARC III had an initial NaN problem that would induce a massive wait
state in the processor; this has since been fixed. I have an Intel Pentium
P90 system here with the math flaw, and it produces the same result on
either of its two processors.
In any case, it would be neat, in a very geeky kind of way, to produce an
animation of various objects sorting themselves out with various n.
SunOS ag0 5.8 Generic_108528-10 sun4u sparc SUNW,Sun-Fire-280R
$ time -p ./pi
pi at n=1073741823 is 3.14159264498
real 30.81
user 30.80
sys 0.01
As you can see from the above, it takes a large n to get any real precision.
Dennis Clarke
dcl### [at] blastwaveorg
---------------------------------------------------------------------------
/********************************************************************/
/* Standard PI calculation using an infinite series - Dennis Clarke */
/*                                                                  */
/* Purpose : Calculate pi using the least efficient method known    */
/*           without actually resorting to drawing a circle in the  */
/*           sand and measuring it.                                 */
/*                                        dcl### [at] blastwaveorg  */
/********************************************************************/
#include <stdio.h>
#include <math.h>

int main(int argc, char *argv[]) {
    double pi = (double) 0.0;
    unsigned long i;

    /********************************/
    /**  sum the series 1/(x^2)  **/
    /********************************/
    fprintf( stdout, "\n\n" );
    for ( i = 1; i < 1073741823; i++ ) {
        pi = pi + (double)1.0 / ( (double)i * (double)i );
    }

    fprintf( stdout, " pi at n=%9lu is %.12g \n", i, sqrt( pi * (double)6.0 ) );
    return 0;
}
--------------------------------------------------------------------------