POV-Ray : Newsgroups : povray.off-topic : Mac Plus vs AMD Dual Core : Re: Mac Plus vs AMD Dual Core
  Re: Mac Plus vs AMD Dual Core  
From: Warp
Date: 24 Oct 2007 13:21:47
Message: <471f7f2a@news.povray.org>
Jim Henderson <nos### [at] nospamcom> wrote:
> (a) sloppy 
> declaration of the variable type, letting the default signed value be 
> used rather than specifying an unsigned data type

  How exactly would that solve the problem? The only difference would be
that instead of seeing something like "-1234" you would see something like
"4294966062" (or "18446744073709550382" if they are using 64-bit values),
which isn't any more helpful. In fact, it's actually worse because it's
even more confusing.

  On a different note, I have been changing my habit with regard to this.
In the past I followed the practice "always use unsigned values for things
where negative values don't make sense, only use signed values where negative
values make sense and are possible". However, I have noticed more and more
how this causes more problems than it's worth.
  The current trend in programming guides is becoming more "always use
signed values unless there's a good reason not to", and I am starting
to agree with that.

  One good example: Assume you have a bitmap in memory. The dimensions of
bitmaps are always positive. Negative values don't make any sense with such
a thing as bitmap dimensions. These dimensions will never be negative.
Thus it would seem to make sense to use unsigned values to represent the
dimensions of the bitmap, right?

  However, suppose that you want to draw the bitmap on screen, at a given
position. The position of the bitmap on the screen is given as pixel
coordinates so that the center of the bitmap is located at those coordinates.
It's perfectly valid if the bitmap is partially outside the screen. This
means that the upper-left corner screen coordinates of the bitmap can be
negative. The center coordinates themselves could be negative too.

  Usually when you draw a bitmap on screen you specify its position on
screen by its upper left corner coordinate. Thus you would calculate these
coordinates with something like (x - width/2, y - height/2).

  Since x and y are signed integers and width and height are unsigned
integers, you are now mixing signed and unsigned integers, causing some
implicit conversions, and possibly producing a compiler warning.

  Moreover, comparing (signed) coordinates with the (unsigned) bitmap
dimensions is even more likely to give you problems, or at least compiler
warnings, for example in something like "if(x < width)".

  The handiest way of doing this is to keep the bitmap dimensions as
signed integers. Even though they never get negative values, they can
be part of expressions which result in negative values, and thus there
will not be any surprising problems with implicit conversions nor compiler
warnings.

-- 
                                                          - Warp

