Haskell vs Java: Building a ray tracer (Message 1 to 10 of 18)
From: Invisible
Subject: Haskell vs Java: Building a ray tracer
Date: 28 Jun 2012 06:07:18
Message: <4fec2cd6@news.povray.org>
OK, so here's an application program which, I presume, I don't need to 
explain to you guys. ;-)

How do you do this in Java? Well, first you're going to do something like

   public class Vector3
   {
     public final double X, Y, Z;

     public Vector3(double x, double y, double z) {X = x; Y = y; Z = z;}

     public Vector3 Add(Vector3 other) {...}
     ...
   }

Now we have vector arithmetic. Next you'll probably do something similar 
for RGB colour values as well. OK, so that's the basics covered. Next, 
you probably want something like

   public class Ray
   {
     public final Vector3 Start, Direction;

     public Ray(Vector3 s, Vector3 d) {Start = s; Direction = d;}

     public Vector3 Point(double t)
     {
       return Start.Add(Direction.Scale(t));
     }
   }

Now you'll probably sit down and do

   public abstract class Shape
   {
     public abstract boolean TestIsect(Ray r);
     public abstract double FirstIsect(Ray r);
     public abstract double[] AllIsect(Ray r);
     public abstract Vector3 Normal(Vector3 p);
     public abstract boolean Inside(Vector3 p);
   }

The most fundamental thing you need to do to a shape is ray intersection 
tests. Most of the time, you just want the nearest intersection, so 
that's FirstIsect(). However, to implement CSG, you need to obtain /all/ 
ray intersections, hence AllIsect(). You also need to know if the points 
of intersection on one shape are inside or outside some other shape, 
hence Inside(). Lighting calculations invariably require the surface 
normal at a given point [approximately] on the surface, so Normal(). You 
also sometimes use a Shape as a bounding volume, hence TestIsect(), 
which tests for an intersection without bothering to compute where it 
actually is.

So far, so good. We can now write out a whole bunch of Shape subclasses 
- planes, spheres, cones, etc. I won't bore you with the details. Next, 
let's have a camera:

   public abstract class Camera
   {
     public abstract Ray Shoot(double x, double y);
   }

Given a point on the image plane, generate a camera ray. Nice and easy. 
We can now do things like

   public class Perspective extends Camera
   {
     public final Vector3 V0, Vx, Vy, Vz;

     public Perspective(Vector3 v0, Vector3 x, Vector3 y, Vector3 z)
     {V0 = v0; Vx = x; Vy = y; Vz = z;}

     public Ray Shoot(double x, double y)
     {
       Vector3 v1 = Vx.Scale(x);
       Vector3 v2 = Vy.Scale(y);
       Vector3 v3 = v1.Add(v2).Add(Vz).Normalise();
       return new Ray(V0, v3);
     }
   }

OK, so far, we have shapes. In the ray tracer I designed way, way back 
in 1998 or whenever it was, an "object" had a "shape" and a "texture", 
and a texture maps every point in space to a "surface", which describes 
the rendering properties of the object. So we have:

   public abstract class Texture
   {
     public abstract Surface GetSurface(Vector3 p);
   }

   public abstract class Surface
   {
     public abstract Colour ProcessIsect(Scene s, Vector3 hit, Vector3 in);
   }

What I then did was to define several Surface subclasses:

- AmbientSurface always returns a user-defined Colour.
- DiffuseSurface uses Lambert's cosine rule to scale a user-defined 
Colour according to the cosine of the angle of incidence of light from 
all the light sources in the Scene. (It also performs shadow tests.)
- ReflectSurface implements perfect specular reflection.
- SumSurface computes the sum of several surfaces. In this way, an 
object can have both reflection /and/ diffuse terms.

When I got to textures, I really had a party. First, there is obviously 
ConstantTexture, which always returns a user-defined Surface for every 
point in space. But then I defined various kinds of "map" classes:

- PointScalar maps every point in space to a scalar.
- PointVector maps every point in space to a unit vector.
- ColourMap maps scalars to colours.
- PointColour combines a PointScalar and a ColourMap to map every point 
in space to a colour.

I had basically planar and spherical maps. Then I had various classes 
that transform the coordinates being input to a map. For example, if you 
take a spherical map and "slice" it through the middle, it becomes a 
conical map. I also had an elaborate system whereby you could use a 
PointScalar to scale a PointVector and use that to displace the 
coordinates being fed to some other mapping. In this way, you could take 
a simple planar map and sine-modulate it on multiple axes to generate 
complicated swirly patterns.

You then have things like DiffuseTexture, which uses a PointColour to 
assign a colour to every point in space, and produce a DiffuseSurface 
with that colour. (Ditto for AmbientTexture, ReflectTexture, etc.) And, 
obviously, CheckerTexture, which takes two Texture objects and picks 
which one to return for different points in space. Or a Texture that 
linearly interpolates between several subordinate Textures based on a 
ScalarMap. Or...

Implementing light sources is fiddly. It depends on whether you just 
want point lights, or whether you want area lights. But let's keep it 
simple and stick to point lights only.

   public class Light
   {
     public final Vector3 Position;
     public final Colour  Colour;

     public Light(Vector3 p, Colour c) {Position = p; Colour = c;}
   }

Now a Scene is fairly simple; it holds lights, camera, and action. It 
also seems like a reasonable place to stick things like the TraceRay() 
function:

   public class Scene
   {
     public final Camera   Camera;
     public final Object[] Objects;
     public final Light[]  Lights;

     public Scene(...)

     public Colour TraceRay(Ray r)
     {
       double[][] ts = new double[Objects.length][];
       for (int lp=0; lp<ts.length; lp++)
         ts[lp] = Objects[lp].GetShape().AllIsect(r);

       Object o = null;
       double t = 1e100;
       for (int x=0; x<ts.length; x++)
         for (int y=0; y<ts[x].length; y++)
           if (ts[x][y] > 0 && ts[x][y] < t)
             {t = ts[x][y]; o = Objects[x];}

       Vector3 p = r.Point(t);
       Surface s = o.GetTexture().GetSurface(p);
       return s.ProcessIsect(this, p, r.Direction);
     }
   }

Now just add some classes that implement mapping pixel coordinates to 
image plane coordinates (taking care to invert Y), generating camera 
rays, feeding them to TraceRay(), maybe antialiasing, and writing the 
results to disk and/or screen. And there, that's your ray tracer.

My original ray tracer actually included slightly more code, because it 
let you inspect the scene at run-time and get human-readable 
descriptions for various things. (E.g., Shape had a GetName() function, 
and ShapeSphere.GetName() returns "Sphere".) I never actually got as far 
as /using/ any of this infrastructure, however. And actually, I didn't 
build many scenes, because... well, look at it:

   Scene out = new Scene();

   Shape   sh = new ShapeSphere(new Vector3(0, 0, 0), 1);
   Surface su = new DiffuseSurface(Colour.RED);
   Object  ob = new Object(sh, new ConstantTexture(su));
   out.AddObject(ob);

   sh = new ShapePlane(new Vector3(0, 1, 0), -2);
   su = new DiffuseSurface(Colour.GREEN);
   ob = new Object(sh, new ConstantTexture(su));
   out.AddObject(ob);

   sh = new ShapePlane(new Vector3(0, -1, 0), -2);
   su = new DiffuseSurface(Colour.BLUE);
   ob = new Object(sh, new ConstantTexture(su));
   out.AddObject(ob);

   out.SetCamera(new PerspectiveCamera(new Vector3(0, 0, -5),
     new Vector3(1, 0, 0), new Vector3(0, 1, 0), new Vector3(0, 0, 2.5)));

   out.AddLight(new Light(new Vector3(0, 0, -5), Colour.WHITE));

This places a piffling three objects and one light source. YUCK! You can 
see why I didn't write many test scenes...



So how does all this look in Haskell?

First of all, we need vectors.

   data Vector3 = Vector3 !Double !Double !Double

   instance Num Vector3 where
     (Vector3 x1 y1 z1) + (Vector3 x2 y2 z2) = Vector3 (x1+x2) (y1+y2) 
(z1+z2)
     ...

Similarly, we need colours. Unlike Java, we can actually use real 
grown-up operator names, so we don't have to write monstrosities like

   return v1.Add(v2).Add(v3).Normalise();

Instead we can simply say

   normalise (v1 + v2 + v3)

which is far easier to read. It also means stackloads of existing 
numeric functions automatically work with vectors (e.g., the sum 
function can sum a list of vectors - or a list of colours, which turns 
out to be more useful).
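
(For the record, the full instance might look something like this. The 
negate/abs/signum/fromInteger definitions are my best guess at sensible 
component-wise choices for the part elided with "..."; fromInteger is 
what supplies the zero that sum starts its fold from.)

   instance Num Vector3 where
     (Vector3 x1 y1 z1) + (Vector3 x2 y2 z2) = Vector3 (x1+x2) (y1+y2) (z1+z2)
     (Vector3 x1 y1 z1) - (Vector3 x2 y2 z2) = Vector3 (x1-x2) (y1-y2) (z1-z2)
     (Vector3 x1 y1 z1) * (Vector3 x2 y2 z2) = Vector3 (x1*x2) (y1*y2) (z1*z2)
     negate (Vector3 x y z) = Vector3 (-x) (-y) (-z)
     abs    (Vector3 x y z) = Vector3 (abs x) (abs y) (abs z)
     signum (Vector3 x y z) = Vector3 (signum x) (signum y) (signum z)
     fromInteger n = Vector3 (fromInteger n) (fromInteger n) (fromInteger n)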

Note, also, that we don't need to do this stupid

   public Vector3(double x, double y, double z) {X = x; Y = y; Z = z;}

and then write

   new Vector3(1, 2, 3)

every time we want a vector. Instead, you can simply do

   Vector3 1 2 3

to create a vector.

Similarly, look back at the Java Ray class. See how much boilerplate 
code there is for declaring that the fields are public and constant, and 
for constructing a new object, and so forth. Now look at Haskell:

   data Ray = Ray {ray_start, ray_direction :: Vector3}

   ray_point :: Ray -> Double -> Vector3
   ray_point (Ray s d) t = s + d |* t

Much more compact, and yet much more /readable/ at the same time.

These, of course, are mere trifles. Let us get to the real meat of the 
problem. Now, we need to implement shapes somehow. The /obvious/ thing 
to do is copy the Java version; define a class for shapes, and then 
create various data structures representing concrete shapes:

   class Shape s where
     isect  :: s -> Ray     -> [Double]
     normal :: s -> Vector3 -> Vector3
     inside :: s -> Vector3 -> Bool

   data Sphere = Sphere {center :: Vector3, radius :: Double}

   instance Shape Sphere where
     isect (Sphere c r) (Ray s d) = ...

   data Plane = Plane {plane_normal :: Vector3, plane_offset :: Double}

   instance Shape Plane where
     isect (Plane n o) (Ray s d) = ...

Notice the isect function. In Java, we have /three/ functions for 
efficiency; one function only bothers to compute /whether/ there are any 
intersections, another that only bothers to compute the /first/ 
intersection, and another which computes /all/ intersections.

Haskell's lazy evaluation makes this quite unnecessary. All we need is 
/one/ function which returns all intersections in a lazy list. 
Inspecting whether the list is empty evaluates just enough of isect to 
answer this question, and no more. And obviously, accessing only the 
first solution does not cause any further solutions to be computed. Note 
we're actually doing /better/ than Java: If our CSG happens, at 
run-time, to need the first /three/ intersections but no further, then 
only these are computed.

(In fairness, we could use an Iterator in Java to achieve the self-same 
thing. The hasNext() function would enable intersection tests without 
computing the actual location, and next() would compute one additional 
intersection. It would be really quite a lot of work to set up this much 
lazy evaluation manually, though. Haskell gives it to us for free!)
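
(To make the laziness concrete, the consuming code might look something 
like this. hit_test and first_hit are made-up names; the point is that 
neither forces isect to produce more intersections than it actually 
inspects.)

   hit_test :: Shape s => s -> Ray -> Bool
   hit_test shape ray = not (null (isect shape ray))

   first_hit :: Shape s => s -> Ray -> Maybe Double
   first_hit shape ray =
     case isect shape ray of
       []      -> Nothing
       (t : _) -> Just t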

You can certainly do things this way. There is a snag, however. If you 
stick to Haskell 2010 (the currently published official language 
specification), you cannot easily make a list of shapes. Because all the 
elements of a list have to be of identical types. And (for example) 
Sphere is not the same type as Plane, even if they do both implement the 
Shape class (and possibly other classes like Eq or Show). There is no 
way to say "a list of everything that has these methods".

GHC adds such a way. It's called

   {-# LANGUAGE ExistentialQuantification #-}

It could more accurately be called "existential /type/ quantification", 
or better yet "hidden type variables". But whatever. It lets you write

   data AnyShape = forall s. Shape s => Wrap s

With this incantation, Sphere is still a different type from Plane, but 
if you take a Sphere object and apply Wrap to it (or any other type that 
implements Shape), it becomes an AnyShape type. So you cannot write

   [sphere 0 1, plane 1 0]

but you /can/ write

   [Wrap (sphere 0 1), Wrap (plane 1 0)]

and this is valid. Notice, however, that it is impossible to write an 
"unwrap" function. Because once you "wrap" something, its original type 
information is lost forever, so there's no way in hell to know what type 
to cast it back to.

(More precisely, all type information is lost at compile-time, just like 
in C or Pascal. It's called "type erasure".)

You can't write an unwrap function, but what you /can/ do is make 
AnyShape itself an instance of the Shape class:

   instance Shape AnyShape where
     isect (Wrap s) r = isect s r
     ...

Now we still can't unwrap an AnyShape, but we don't need to. We can just 
treat AnyShape like a normal Shape instance.
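
So, for example, finding the nearest hit over a mixed bag of shapes 
might look something like this (a sketch; nearest_hit is a made-up name, 
and it assumes each isect list comes back sorted by depth, so the first 
positive entry per shape is enough):

   nearest_hit :: [AnyShape] -> Ray -> Maybe Double
   nearest_hit shapes ray =
     let hits = [ t | s <- shapes
                    , t <- take 1 (dropWhile (<= 0) (isect s ray)) ]
     in  if null hits then Nothing else Just (minimum hits)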

You can't do any of this in Haskell 2010. "forall" is not a valid 
keyword there. The best you can do in Haskell 2010 is to define

   data AnyShape =
     AS_Sphere Sphere |
     AS_Plane  Plane  |
     ...

Every time you define a new Shape instance, you need to update this 
giant monolithic data declaration. Yuck!

But wait... Actually, there is a simpler way. What do we want to /do/ 
with our list of shapes? All we /actually/ want to do is execute shape 
methods. But because functions are first-class in Haskell, we can 
actually solve this in an unusual way. Rather than storing a data 
structure representing the shape and providing a way to statically look 
up the correct ray-intersection function, we can just STORE THE 
INTERSECTION FUNCTION!

Well, actually we've got /three/ functions. But we can make a data 
structure which stores all three:

   data Shape =
     Shape
     {
       shape_isect  :: Ray -> [Double],
       shape_normal :: Vector3 -> Vector3,
       shape_inside :: Vector3 -> Bool
     }

   sphere :: Vector3 -> Double -> Shape
   sphere center radius =
     Shape
     {
       shape_isect  = (\ ray   -> ...),
       shape_normal = (\ point -> vnormalise (point - center)),
       shape_inside = (\ point -> vmag (point - center) < radius)
     }

   plane :: Vector3 -> Double -> Shape
   plane normal offset =
     Shape
     {
       shape_isect  = (\ ray   -> ...),
       shape_normal = (\ _     -> normal),
       shape_inside = (\ point -> normal `vdot` point < offset)
     }

Now a "shape" is simply an ordinary data structure, which contains some 
function pointers. And each sort of shape - a sphere, a cone, whatever - 
is an ordinary /function/ which fills out this data structure with the 
right function pointers. In particular, every type of shape now has the 
same type signature. There is no Sphere type, no Plane type, no Cone 
type, there is only a Shape type. So now we can write

   [sphere 0 1, plane 1 0]

and have it be well-typed. (It's [Shape].)
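
(For the curious: the elided shape_isect body for the sphere is just the 
usual quadratic. Something like this, assuming the vdot operator and the 
Num instance for Vector3 from earlier; since the leading coefficient is 
positive, the two roots already come out nearest-first.)

   sphere_isect :: Vector3 -> Double -> Ray -> [Double]
   sphere_isect center radius (Ray s d) =
     let oc   = s - center
         a    = d `vdot` d
         b    = 2 * (oc `vdot` d)
         c    = (oc `vdot` oc) - radius * radius
         disc = b * b - 4 * a * c
     in  if disc < 0
           then []
           else [(-b - sqrt disc) / (2 * a), (-b + sqrt disc) / (2 * a)]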

One thing about this transformation: It is now impossible to "look at" a 
shape and know anything about it. Before, when each shape was 
represented as a data structure, you could look at (say) a sphere and 
see where its center is or what its radius is. Now you can't do that. 
After the shape is created, it is impossible to do anything with it 
except ray intersection tests.

That might be a problem if, for example, you wanted to construct a 
bounding volume or something. OTOH, you could just add some more fields 
containing function pointers for supporting such volume construction. In 
general, whatever abilities you need shapes to have, you add more fields 
to Shape to support that.

Notice that with our fancy AnyShape wrapping technique, you /also/ lose 
the ability to do anything except what the Shape class provides. In 
fact, GHC internally implements that particular trick by doing exactly 
what we're manually doing here - storing a dictionary of function 
pointers. But doing it by hand means you don't need any unofficial 
language extensions.

OK, so we can solve the shape problem. Now how about a camera?

Well, this is where the fun /really/ starts. A camera simply maps an 
image plane coordinate to a ray. So in Haskell, a camera is /just a 
function/!

   type Camera = Vector2 -> Ray

Similarly, a Texture becomes utterly trivial:

   type Texture = Vector3 -> Surface

In general, any Java class that has "just one method" can be turned into 
a plain vanilla Haskell function. That means that Surface is just a 
function, although we need to think carefully about what arguments it 
needs. As it turns out, Surface needs quite a few items of data, all of 
similar types. Since function arguments are unnamed in Haskell, it's 
probably a good idea to define a data structure. (Otherwise we'll 
forever be supplying the arguments in the wrong order and causing weird 
bugs that the compiler can't catch.)

   data RayIsect =
     RayIsect
     {
       isect_point     :: Vector3,
       isect_direction :: Vector3,
       isect_normal    :: Vector3
     }

   type Surface = Scene -> RayIsect -> Colour

Notice, again, that we can fill out /all/ fields, and then lazy 
evaluation will skip any calculations we don't need. In particular, 
Surface types that don't depend on the surface normal can skip the 
surface normal calculation.

With the definitions above, a "constant texture" becomes so trivial it's 
barely worth defining a name for it. You can just write "const s" and 
you're done. Let's look at some simple instances:

   cam_orthographic :: Vector3 -> Vector3 -> Vector3 -> Vector3 -> Camera
   cam_orthographic v0 vx vy vz =
      \ (Vector2 x y) -> Ray {ray_start = v0 + x *| vx + y *| vy, ray_direction = vz}

   cam_perspective :: Vector3 -> Vector3 -> Vector3 -> Vector3 -> Camera
   cam_perspective v0 vx vy vz =
      \ (Vector2 x y) -> Ray {ray_start = v0,
                              ray_direction = vnormalise (x *| vx + y *| vy + vz)}

   sur_ambient :: Colour -> Surface
   sur_ambient c = \ _ _ -> c

   sur_diffuse :: Colour -> Surface
   sur_diffuse c =
     \ scene isect ->
       let
         fn light =
           if shadow_test scene light isect
             then cBlack
             else c |* (vnormalise (light_point light - isect_point isect)
                          `vdot` isect_normal isect)
         colours = map fn (scene_lights scene)
       in sum colours

   sur_reflect :: Colour -> Surface
   sur_reflect c =
     \ scene isect ->
       let
         v_in   = isect_direction isect
         v_norm = isect_normal isect
         v_out  = v_in - 2 * (v_in `vdot` v_norm) *| v_norm
       in trace_ray scene (Ray {ray_start = isect_point isect,
                                ray_direction = v_out})
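
(None of the plumbing these lean on - Scene, Light, trace_ray, 
shadow_test - appears above. As a rough sketch, with made-up field 
names, it might look like this; note that isect_normal is only ever 
computed if the surface actually demands it.)

   import Data.List (minimumBy)
   import Data.Ord  (comparing)

   data Light = Light {light_point :: Vector3, light_colour :: Colour}

   data Scene =
     Scene
     {
       scene_camera     :: Camera,
       scene_objects    :: [(Shape, Texture)],
       scene_lights     :: [Light],
       scene_background :: Colour
     }

   trace_ray :: Scene -> Ray -> Colour
   trace_ray scene ray =
     case [ (t, shape, texture)
          | (shape, texture) <- scene_objects scene
          , t <- take 1 (dropWhile (<= 0) (shape_isect shape ray)) ] of
       []   -> scene_background scene
       hits ->
         let (t, shape, texture) = minimumBy (comparing (\(t', _, _) -> t')) hits
             p    = ray_point ray t
             info = RayIsect
                    {
                      isect_point     = p,
                      isect_direction = ray_direction ray,
                      isect_normal    = shape_normal shape p  -- only forced if used
                    }
         in (texture p) scene info

(shadow_test would be much the same idea: fire a ray from isect_point 
towards the light and see whether anything gets in the way.)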

Recall that in Java, every single one of these things would be an entire 
class, with the "public class Woo extends Wah" and the field declaration 
and the constructor declaration, and only THEN do we get to writing the 
bit of code that actually does something useful.

In Haskell, we just write the useful bit. All of the stuff above is 
useful code, and no filler. It's all stuff implementing actual maths, 
not micromanaging object construction or field initialisation or whatever.

So if you can't access object internals, how can you implement spatial 
transformations? Easy: You transform the coordinates before input.

   transform_texture :: Transform -> Texture -> Texture
   transform_texture transform texture =
     \ point -> texture (transform point)

Now wasn't that easy? Compare Java:

   public class TransformTexture extends Texture
   {
     public final Transform Trans;
     public final Texture   Text;

     public TransformTexture(Transform t1, Texture t2)
     {Trans = t1; Text = t2;}

     public Surface GetSurface(Vector3 p)
     {
       return Text.GetSurface(Trans.Apply(p));
     }
   }

What a load of waffle! Especially when you consider that a typical 
Haskeller would probably write

   transform_texture :: Transform -> Texture -> Texture
   transform_texture = flip (.)

and be done with it!

(This works because this convoluted-sounding concept of "apply a 
coordinate transformation to the input before passing it to the texture" 
is really nothing other than /function composition/, which is a 
well-known concept in Haskell. A coordinate transformation maps a point 
to a point, a texture maps a point to a Surface. Combining these steps 
is clearly function composition. When you think about it like that... 
suddenly it's /obvious/ that this is a trivial thing.)
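
(If the types help, here's the same trick spelled out. Transform is 
never actually defined above, so I'm assuming it's just a point-to-point 
function.)

   type Transform = Vector3 -> Vector3

   --   (.)      :: (b -> c) -> (a -> b) -> (a -> c)
   --   flip (.) :: (a -> b) -> (b -> c) -> (a -> c)
   --
   -- Instantiating a = b = Vector3 and c = Surface gives
   --
   --   flip (.) :: (Vector3 -> Vector3) -> (Vector3 -> Surface) -> (Vector3 -> Surface)
   --            =   Transform           -> Texture              -> Texture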

When you start thinking of (say) a coordinate transformation as a 
trivial function rather than as some complex active "object" which has 
"behaviour", "state" and "identity" which needs "constructors" and 
"destructors" and "gettters" and "setters" and "reflection" and... Well, 
suddenly all your problems start to look a hell of a lot simpler.

Take a look back at all those "map" classes I had. In Haskell, each of 
these is merely a function. It's hardly worth assigning names to them. 
Indeed, in Java I had trouble picking suitably descriptive names for all 
the combinations and variations I could come up with. In Haskell, I 
don't need to bother. If I want to deal with functions that map points 
to scalars, I can just /call/ them "functions that map points to 
scalars". Which is actually less confusing then naming them ScalarMap or 
whatever. It more succinctly conveys what they do.
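
(As a sketch, the whole zoo of map classes collapses into a handful of 
type synonyms and one-liners. The names below are only there for 
comparison with the Java version; in practice you'd mostly not bother.)

   type PointScalar = Vector3 -> Double
   type PointVector = Vector3 -> Vector3
   type ColourMap   = Double  -> Colour
   type PointColour = Vector3 -> Colour

   -- Combining a PointScalar with a ColourMap is, once again, just composition.
   point_colour :: PointScalar -> ColourMap -> PointColour
   point_colour ps cm = cm . ps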



What have I just said? The short summary is that Haskell, the finest 
functional programming language in the land, is superior to Java, one of 
the more sucky OOP languages. Not exactly a revelation, is it? I think 
I'm going to go outside for a while...



From: clipka
Subject: Re: Haskell vs Java: Building a ray tracer
Date: 28 Jun 2012 11:04:39
Message: <4fec7287$1@news.povray.org>
On 28.06.2012 12:07, Invisible wrote:

> Notice the isect function. In Java, we have /three/ functions for
> efficiency; one function only bothers to compute /whether/ there are any
> intersections, another that only bothers to compute the /first/
> intersection, and another which computes /all/ intersections.
>
> Haskell's lazy evaluation makes this quite unnecessary. All we need is
> /one/ function which returns all intersections in a lazy list.
> Inspecting whether the list is empty evaluates just enough of isect to
> answer this question, and no more. And obviously, accessing only the
> first solution does not cause any further solutions to be computed. Note
> we're actually doing /better/ than Java: If our CSG happens, at
> run-time, to need the first /three/ intersections but no further, then
> only these are computed.

You're wrong there.

When querying the "first" intersection, you're not querying for "the 
first intersection that our algorithm cranks out", but "the intersection 
that is closest to the camera, of all the intersections cranked out by 
our algorithm".

In many cases, to get this answer you'll need to compute /all/ 
intersections and sort them by spatial distance, which will necessarily 
trigger evaluation.


> You can certainly do things this way. There is a snag, however. If you
> stick to Haskell 2010 (the currently published official language
> specification), you cannot easily make a list of shapes. Because all the
> elements of a list have to be of identical types. And (for example)
> Sphere is not the same type as Plane, even if they do both implement the
> Shape class (and possibly other classes like Eq or Show). There is no
> way to say "a list of everything that has these methods".

Heh. So standard Haskell obviously sucks as much as Java without generics.

> Notice, again, that we can fill out /all/ fields, and then lazy
> evaluation will skip any calculations we don't need. In particular,
> Surface types that don't depend on the surface normal can skip the
> surface normal calculation.

While that may sound nice, intersection computations might already have 
involved computation of interim results that would come in handy for 
normal computation as well; how do you address that?



From: Orchid Win7 v1
Subject: Re: Haskell vs Java: Building a ray tracer
Date: 28 Jun 2012 12:18:53
Message: <4fec83ed$1@news.povray.org>
>> Notice the isect function. In Java, we have /three/ functions for
>> efficiency;
>>
>> Haskell's lazy evaluation makes this quite unnecessary. All we need is
>> /one/ function which returns all intersections in a lazy list.
>
> You're wrong there.
>
> When querying the "first" intersection, you're not querying for "the
> first intersection that our algorithm cranks out", but "the intersection
> that is closest to the camera, of all the intersections cranked out by
> our algorithm".

In my implementation, it is an (unchecked) requirement that the result 
list is always returned sorted in depth order. For planes and spheres, 
this is trivial to arrange. For some more general shape where you can't 
easily determine ahead of time which intersection is nearest, that 
particular shape would have to perform explicit sorting. (This, 
naturally, destroys most of the laziness properties.)
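
In terms of the record-of-functions version, it'd be something like this 
(sorted_isect is a made-up wrapper; the sort is exactly what throws the 
laziness away):

   import Data.List (sort)

   -- Wrap an intersection function whose roots come out in no particular
   -- order so that it meets the "sorted by depth" requirement.
   sorted_isect :: (Ray -> [Double]) -> (Ray -> [Double])
   sorted_isect raw = sort . raw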

>> You can certainly do things this way. There is a snag, however. If you
>> stick to Haskell 2010 (the currently published official language
>> specification), you cannot easily make a list of shapes.
>
> Heh. So standard Haskell obviously sucks as much as Java without generics.

...unless you scroll down to the bit where I show you exactly how to 
easily do this in standard Haskell. :-P

>> Notice, again, that we can fill out /all/ fields, and then lazy
>> evaluation will skip any calculations we don't need. In particular,
>> Surface types that don't depend on the surface normal can skip the
>> surface normal calculation.
>
> While that may sound nice, intersection computations might already have
> involved computation of interim results that would come in handy for
> normal computation as well; how do you address that?

Hmm, that's an interesting idea. I hadn't thought of that. None of the 
ray tracers I've ever implemented has had any shapes that complicated, 
so none of them would handle this duplication of work. It ought to be 
fairly straightforward (in Haskell or Java) to rectify that, however...



From: Francois Labreque
Subject: Re: Haskell vs Java: Building a ray tracer
Date: 28 Jun 2012 12:34:16
Message: <4fec8788@news.povray.org>


[SNIP Tolstoy novel about Java]

> So how does all this look in Haskell?
>
> First of all, we need vectors.
>
> data Vector3 = Vector3 !Double !Double !Double
>
> class Num Vector3 where
> (Vector3 x1 y1 z1) + (Vector3 x2 y2 z2) = Vector3 (x1+x2) (y1+y2) (z1+z2)
> ...
>
> Similarly, we need colours. Unlike Java, we can actually use real
> grown-up operator names, so we don't have to write monstrosities like
>
> return v1.Add(v2).Add(v3).Normalise();
>
> Instead we can simply say
>
> normalise (v1 + v2 + v3)
>
> which is far easier to read. It also means stackloads of existing
> numeric functions automatically work with vectors (e.g., the sum
> function can sum a list of vectors - or a list a colours, which turns
> out to be more useful).

Don't you still have to define how the sum function handles this Vector3 
object?

> Note, also, that we don't need to do this stupid
>
> public Vector3(double x, double y, double z) {X = x; Y = y; Z = z;}
>
> and then write
>
> new Vector3(1, 2, 3)
>
> every time we want a vector. Instead, you can simply do
>
> Vector3 1 2 3
>
> to create a vector.

Slight correction, in Java, you only define the constructor once, not at 
every call.

Then,

new Vector3(1, 2, 3)

vs.

Vector3 1 2 3

Is only a matter of language syntax.  You save 7 characters.  Whooptidoo!

> Similarly, look back at the Java Ray class. See how much boilerplate
> code there is for declaring that the fields are public and constant, and
> for constructing a new object, and so forth. Now look at Haskell:
>
> data Ray = Ray {ray_start, ray_direction :: Vector3}
>
> ray_point :: Ray -> Double -> Vector3
> ray_point (Ray s d) t = s + d |* t
>
> Much more compact, and yet much more /readable/ at the same time.

Not quite.  Since I don't speak Haskell, I wouldn't even know what two 
lines of code actually mean.  I kind of figure that the :: specifies 
that data type (ray_point is of type ray), but I have no idea why you 
have the "-> Double -> Vector3" added at the end.  Likewise, not knowing 
what "!*" means makes the second line hard to follow.

> These, of course, are mere trifles. Let us get to the real meat of the
> problem. Now, we need to implement shapes somehow. The /obvious/ thing
> to do is copy the Java version; define a class for shapes, and then
> create various data structures representing concrete shapes:
>
> class Shape s where
> isect :: s -> Ray -> [Double]
> normal :: s -> Vector3 -> Vector3
> inside :: s -> Vector3 -> Bool
>
> data Sphere = Sphere {center :: Vector3, radius :: Double}
>
> instance Shape Sphere where
> isect (Ray s d) (Sphere c r) = ...
>
> data Plane = Plane {normal :: Vector3, offset :: Double}
>
> instance Shape Plane where
> isect (Ray s d) (Plane n o) = ...
>
> Notice the isect function. In Java, we have /three/ functions for
> efficiency; one function only bothers to compute /whether/ there are any
> intersections, another that only bothers to compute the /first/
> intersection, and another which computes /all/ intersections.
>
> Haskell's lazy evaluation makes this quite unnecessary. All we need is
> /one/ function which returns all intersections in a lazy list.

You mean like AllIsect() does in Java.

> Inspecting whether the list is empty evaluates just enough of isect to
> answer this question, and no more. And obviously, accessing only the
> first solution does not cause any further solutions to be computed. Note
> we're actually doing /better/ than Java: If our CSG happens, at
> run-time, to need the first /three/ intersections but no further, then
> only these are computed.
>
> (In fairness, we could use an Iterator in Java to achieve the self-same
> thing. The hasNext() function would enable intersection tests without
> computing the actual location, and next() would compute one additional
> intersection. It would be really quite a lot of work to set up this much
> lazy evaluation manually, though. Haskell gives it to us for free!)
>

You already do it for AllIsect().  There's no additional work necessary.

Also, bug report time:  AllIsect() returns an array of doubles, yet you 
don't specify the size of the array in question.  How would the caller 
know the size of the array?  (Disclaimer: I'm not very fluent in Java, 
so if there's some embedded mechanism to prevent you from falling off 
the edge of an array, never mind.)

[more snippage]

> Similarly, a Texture becomes utterly trivial:
>
> type Texture = Vector3 -> Surface
>
> In general, any Java class that has "just one method" can be turned into
> a plain vanilla Haskell function. That means that Surface is just a
> function, although we need to think carefully about what arguments it
> needs. As it turns out, Surface needs quite a few items of data, all of
> similar types. Since function arguments are unnamed in Haskell, it's
> probably a good idea to define a data structure. (Otherwise we'll
> forever be supplying the arguments in the wrong order and causing weird
> bugs that the compiler can't catch.)
>

So a simple language syntax is a good thing, except when it isn't! :P

> data RayIsect =
> RayIsect
> {
> isect_point :: Vector3,
> isect_direction :: Vector3,
> isect_normal :: Vector3
> }
>
> type Surface = Scene -> RayIsect -> Colour
>
> Notice, again, that we can fill out /all/ fields, and then lazy
> evaluation will skip any calculations we don't need. In particular,
> Surface types that don't depend on the surface normal can skip the
> surface normal calculation.
>
> With the definitions above, a "constant texture" becomes so trivial it's
> barely worth defining a name for it. You can just write "const s" and
> you're done. Let's look at some simple instances:
>
> cam_orthographic :: Vector3 -> Vector3 -> Vector3 -> Vector3 -> Camera
> cam_orthographic v0 vx vy vz =
> \ (Vector2 x y) = Ray {start = v0 + x *| vx + y *| vy, direction = vz}
>
> cam_perspective :: Vector3 -> Vector3 -> Vector3 -> Camera
> cam_perspective vx vy vz =
> \ (Vector2 x y) = Ray {start = v0, direction = vnormalise (x *| vx + y
> *| vy + vz)}

I don't know why you name your camera parameters vx, vy, vz.  I guess 
I'm used to POV's location, up, right and look_at...

>
> sur_ambient :: Colour -> Surface
> sur_ambient c = \ _ _ -> c
>
> sur_diffuse :: Colour -> Surface
> sur_diffuse c =
> \ scene isect ->
> let
> fn light =
> if shadow_test scene light isect
> then cBlack
> else c *| (light_point light `vdot` isect_point isect)
> colours = map fn (scene_lights scene)
> in sum colours
>
> sur_reflect :: Colour -> Surface
> sur_reflect c =
> \ scene isect ->
> let
> v_in = isect_direction isect
> v_norm = isect_normal isect
> v_out = v_in - 2 * (v_in `vdot` v_norm) *| v_norm
> in trace_ray scene (Ray {start = isect_point isect, direction = v_out})
>
> Recall that in Java, every single one of these things would be an entire
> class, with the "public class Woo extends Wah" and the field declaration
> and the constructor declaration, and only THEN do we get to writing the
> bit of code that actually does something useful.

No, you'd have one camera class, and have different calculations based 
on the value of the camera.type member, or two derived classes that 
inherit from a generic camera class, if you prefer.

Likewise, you'd have only one surface class with ambient, diffuse and 
reflect being member methods, with another method called FinalColour 
that combines the colour of those three and returns the overall colour 
of the surface at that point.

>
> In Haskell, we just write the useful bit. All of the stuff about is
> useful code, and no filler. It's all stuff implementing actual maths,
> not micromanaging object construction or field initialisation or whatever.

You don't need to micromanage object construction and field 
initialisation in Java or C++ either, but if you don't you better be 
sure that no one will ever try to access an uninitialized member!!!

By the way, what happens in Haskell if you try to access an uninitialised 
member?

>
> So if you can't access object internals, how can you implement spatial
> transformations? Easy: You transform the coordinates before input.
>
> transform_texture :: Transform -> Texture -> Texture
> transform_texture transform texture =
> \ point -> texture (transform point)
>
> Now wasn't that easy? Compare Java:
>
> public class TransformTexture extends Texture
> {
> public final Transform Trans;
> public final Texture Text;
>
> public TransformTexture(Transform t1, Texture t2)
> {Trans = t1; Text = t2;}
>
> public Surface GetSurface(Vector3 p)
> {
> return Text.GetSurface(Trans.Apply(p));
> }
> }
>
> What a load of waffle! Especially when you consider that a typical
> Haskeller would probably write
>
> transform_texture :: Transform -> Texture -> Texture
> transform_texture = flip (.)
>
> and be done with it!
>
> (This works because this convoluted-sounding concept of "apply a
> coordinate transformation to the input before passing it to the texture"
> is really nothing other than /function composition/, which is a
> well-known concept in Haskell. A coordinate transformation maps a point
> to a point, a texture maps a point to a Surface. Combining these steps
> is clearly function composition. When you think about it like that...
> suddenly it's /obvious/ that this is a trivial thing.)
>
> When you start thinking of (say) a coordinate transformation as a
> trivial function rather than as some complex active "object" which has
> "behaviour", "state" and "identity" which needs "constructors" and
> "destructors" and "gettters" and "setters" and "reflection" and... Well,
> suddenly all your problems start to look a hell of a lot simpler.
>
> Take a look back at all those "map" classes I had. In Haskell, each of
> these is merely a function. It's hardly worth assigning names to them.
> Indeed, in Java I had trouble picking suitably descriptive names for all
> the combinations and variations I could come up with. In Haskell, I
> don't need to bother. If I want to deal with functions that map points
> to scalars, I can just /call/ them "functions that map points to
> scalars". Which is actually less confusing then naming them ScalarMap or
> whatever. It more succinctly conveys what they do.
>

I think you have the whole idea backwards.  Each object should know how 
it is being transformed (think about it, how else can you determine 
which face is being intersected by a ray?), therefore can take care of 
those transformations internally when computing the object.FinalColour 
value, so the transformation computations will have access to those 
private members, you don't need to do any of the gymnastics you just did.

>
>
> What have I just said? The short summary is that Haskell, the finest
> functional programming language in the land, is superior to Java, one of
> the more sucky OOP languages. Not exactly a revelation, is it? I think
> I'm going to go outside for a while...

No, you've just shown that trying to use one language in a way that is 
best suited for another language gets ugly.

I remember a discussion a long, long time ago, in a comp.lang.lisp far, 
far away, of number-crunching speed comparisons (IIRC, positions of the 
planets around the sun for a command-line specified date) between Lisp 
and C, where the Lisp programmers had to write all kinds of additional 
code to turn off all kinds of bounds checking, automatic type 
conversions, etc... just to come up with comparable speeds.

If you _know_ that you will never run out of bounds with a double or an 
int, then the C programmer will claim victory.  However, if you aren't 
absolutely sure that you can't end up with a negative orbit radius, or a 
heart rate larger than MAX_INT, then Lisp's automatic bounds checking 
will save you some time-consuming assert() calls and test cases.

-- 
/*Francois Labreque*/#local a=x+y;#local b=x+a;#local c=a+b;#macro P(F//
/*    flabreque    */L)polygon{5,F,F+z,L+z,L,F pigment{rgb 9}}#end union
/*        @        */{P(0,a)P(a,b)P(b,c)P(2*a,2*b)P(2*b,b+c)P(b+c,<2,3>)
/*   gmail.com     */}camera{orthographic location<6,1.25,-6>look_at a }



From: Orchid Win7 v1
Subject: Re: Haskell vs Java: Building a ray tracer
Date: 28 Jun 2012 13:18:59
Message: <4fec9203@news.povray.org>
>> So how does all this look in Haskell?
>>
>> First of all, we need vectors.
>>
>> data Vector3 = Vector3 !Double !Double !Double
>>
>> class Num Vector3 where
>> (Vector3 x1 y1 z1) + (Vector3 x2 y2 z2) = Vector3 (x1+x2) (y1+y2) (z1+z2)
>> ...
>>
>> Similarly, we need colours. Unlike Java, we can actually use real
>> grown-up operator names, so we don't have to write monstrosities like
>>
>> return v1.Add(v2).Add(v3).Normalise();
>>
>> Instead we can simply say
>>
>> normalise (v1 + v2 + v3)
>>
>> which is far easier to read. It also means stackloads of existing
>> numeric functions automatically work with vectors (e.g., the sum
>> function can sum a list of vectors - or a list a colours, which turns
>> out to be more useful).
>
> Don't you still have to define how the sum function handles this Vector3
> object?

No. The sum function just calls the (+) function, so as long as (+) has 
been defined, sum will work.

(Actually, that's not completely true. sum also uses zero, so you have 
to define what a zero vector is. That's the rest of the Num instance 
that I left out with "...")

> Slight correction, in Java, you only define the constructor once, not at
> every call.

Well, yes. You have to write a constructor once per class, which means 
if you need a lot of classes, you end up writing a heck of a lot of 
constructors.

> Then,
>
> new Vector3(1, 2, 3)
>
> vs.
>
> Vector3 1 2 3
>
> Is only a matter of language syntax. You save 7 characters. Whooptidoo!

Sure. I'll give you that one.

>> Similarly, look back at the Java Ray class. See how much boilerplate
>> code there is for declaring that the fields are public and constant, and
>> for constructing a new object, and so forth. Now look at Haskell:
>>
>> data Ray = Ray {ray_start, ray_direction :: Vector3}
>>
>> ray_point :: Ray -> Double -> Vector3
>> ray_point (Ray s d) t = s + d |* t
>>
>> Much more compact, and yet much more /readable/ at the same time.
>
> Not quite. Since I don't speak Haskell, I wouldn't even know what two
> lines of code actually mean. I kind of figure that the :: specifies
> that data type (ray_point is of type ray), but I have no idea why you
> have the "-> Double -> Vector3" added at the end. Likewise, not knowing
> what "!*" means makes the second line hard to follow.

Given that the equation of a straight line is f(t) = S + Dt, I would 
have imagined that s + d |* t would be relatively straightforward. (Oh, 
the puns...)

Do you not think that, once you've learned a little syntax, having to 
read just two lines of code is easier than having to wade through 12? (I 
guess that one isn't all that easy to answer objectively...)

>> Haskell's lazy evaluation makes this quite unnecessary. All we need is
>> /one/ function which returns all intersections in a lazy list.
>
> You mean like AllIsect() does in Java.
>
>> Inspecting whether the list is empty evaluates just enough of isect to
>> answer this question, and no more
>
> You already do it for AllIsect(). There's no additional work necessary.

In Java, calling AllIsect() causes all intersections to be computed. In 
Haskell, calling the isect function does /not/ necessarily cause all 
intersections to be computed. Only the ones you actually "look at". 
That's the point I'm making.

> How would the caller know the size of the array?

Java stores the size of all arrays and lets you query it at run-time. It 
also throws exceptions on all array bounds violations. Java is 
supposedly a "safe" language, remember? (It also throws exceptions on 
all null pointer accesses, for example.)

>> In general, any Java class that has "just one method" can be turned into
>> a plain vanilla Haskell function. That means that Surface is just a
>> function, although we need to think carefully about what arguments it
>> needs. As it turns out, Surface needs quite a few items of data, all of
>> similar types. Since function arguments are unnamed in Haskell, it's
>> probably a good idea to define a data structure. (Otherwise we'll
>> forever be supplying the arguments in the wrong order and causing weird
>> bugs that the compiler can't catch.)
>
> So a simple language syntax is a good thing, except when it isn't! :P

Haskell doesn't have named arguments, but you can trivially define a data 
structure and solve the problem that way. It's no big deal.

Java doesn't let you mention arguments by name when invoking a function 
either. But if you have an IDE, it will at least show you what the 
argument names are in the function definition, which can be good 
enough... You're still likely to get it wrong, though. Java doesn't let 
you explicitly assign values to names at all, unless you start making 
public fields writable or manually writing lots of getter and setter 
methods.

> I don't know why you name your camera parameters vx, vy, vz. I guess I'm
> used to POV's location, up, right and look_at...

More like location, up, right and forward. If this wasn't example code, 
you would probably write down documentation for what all these 
parameters are actually supposed to mean, the assumptions about them 
(e.g., the code is assuming that some of these are /unit/ vectors, and 
others aren't), and so on.

>> Recall that in Java, every single one of these things would be an entire
>> class, with the "public class Woo extends Wah" and the field declaration
>> and the constructor declaration, and only THEN do we get to writing the
>> bit of code that actually does something useful.
>
> No, you'd have one camera class, and have different calculations based
> on the value of the camera.type member

This is about as anti-OO as you can get.

> or two derived classes that
> inherit from a generic camera class, if you prefer.

Which would be... one class per camera type, as I asserted. :-P

> Likewise, you'd have only one surface class with ambient, diffuse and
> reflect being member methods, with another method called FinalColour
> that combines the colour of those three and returns the overall colour
> of the surface at that point.

Well, yeah, you could write one Surface class which implements every 
possible rendering model all at once. It's more modular to make a 
separate class for each term, but you could go down the monolithic route 
if you want.

>> In Haskell, we just write the useful bit. All of the stuff about is
>> useful code, and no filler. It's all stuff implementing actual maths,
>> not micromanaging object construction or field initialisation or
>> whatever.
>
> You don't need to micromanage object construction and field
> initialisation in Java or C++ either, but if you don't you better be
> sure that no one will ever try to access an uninitialized member!!!

In Haskell, you write down the name of the thing, what fields it should 
have, and what their types are. The end.

In Java or C++, you have to write whether each field is public or 
private, whether it's constant or not, you have to manually write the 
function that takes all the values and assigns them to each of the 
fields, and in general you have to write a whole /bunch/ of code to get 
the job done.

Then again, if you're working in Java or C++ (more so Java, since C++ 
isn't pure-OO), you shouldn't be writing too many classes which just 
passively hold on to some data. Objects are supposed to "do" stuff.

> By the way, what happens in Haskell if you try to access an uninitialised
> member?

It throws an exception saying "this field is uninitialised". It also 
generates a compiler warning.
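
Concretely, with the Ray type from my first post (the exact wording of 
the messages is from memory):

   r :: Ray
   r = Ray { ray_start = Vector3 0 0 0 }
     -- compile time: warning that fields of 'Ray' are not initialised
     --               (ray_direction)
     -- run time:     evaluating (ray_direction r) throws something like
     --               "Missing field in record construction ray_direction"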

>> So if you can't access object internals, how can you implement spatial
>> transformations? Easy: You transform the coordinates before input.
>>
>> transform_texture :: Transform -> Texture -> Texture
>> transform_texture transform texture =
>> \ point -> texture (transform point)
>>
>> Now wasn't that easy?

> I think you have the whole idea backwards. Each object should know how
> it is being transformed (think about it, how else can you determine
> which face is being intersected by a ray?), therefore can take care of
> those transformations internally when computing the object.FinalColour
> value, so the transformation computations will have access to those
> private members, you don't need to do any of the gymnastics you just did.

Composing two functions is hardly "gymnastics".

Essentially, there are two ways you can implement a coordinate space 
transformation:

- Transform the object and leave the coordinates alone.
- Transform the coordinates and leave the object alone.

Both of these are equivalent, and equally valid, ways to solve the 
problem. The latter has the advantage that you don't have to reimplement 
the transformation code for every object; you can write it just once.
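
With the record-of-functions Shape from my first post, "write it just 
once" looks roughly like this. (A sketch only: point_in and dir_in are 
whatever your Transform type provides for mapping world-space points and 
directions into the shape's own space, and normal_out maps the resulting 
normal back out. For non-rigid transforms the returned t values would 
need rescaling too; I'm ignoring that here.)

   transform_shape :: (Vector3 -> Vector3)   -- point_in
                   -> (Vector3 -> Vector3)   -- dir_in
                   -> (Vector3 -> Vector3)   -- normal_out
                   -> Shape -> Shape
   transform_shape point_in dir_in normal_out shape =
     Shape
     {
       shape_isect  = \(Ray s d) -> shape_isect shape (Ray (point_in s) (dir_in d)),
       shape_normal = \p -> normal_out (shape_normal shape (point_in p)),
       shape_inside = \p -> shape_inside shape (point_in p)
     }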

>> What have I just said? The short summary is that Haskell, the finest
>> functional programming language in the land, is superior to Java, one of
>> the more sucky OOP languages. Not exactly a revelation, is it? I think
>> I'm going to go outside for a while...
>
> No, you've just shown that trying to use one language in a way that is
> best suited for another language gets ugly.

Ray tracing is all about mathematical operations. Vector 
transformations, set unions, spatial subdivisions, etc. And Haskell 
makes it really, really easy to write that code. Java is completely 
capable of doing everything Haskell is doing. It just makes it much more 
work.

> the Lisp programmers had to write all kinds of additional
> code to turn off all kinds of bounds checking, automatic type
> conversions, etc... just to come up with comparable speeds.
>
> If you _know_ that you will never run out of bounds with a double or an
> int, then the C programmer will claim victory. However, if you aren't
> absolutely sure that you can't end up with a negative orbit radius, or a
> heart rate larger than MAX_INT, then Lisp's automatic bounds checking
> will save you some time-consuming assert() calls and test cases.

...which is why I use a programming language that defaults to safe 
rather than fast?

(It still amuses me when Oracle claims that Java is a "safe language". 
Compared to Haskell, it's pretty damned weak, actually...)



From: Stephen
Subject: Re: Haskell vs Java: Building a ray tracer
Date: 28 Jun 2012 14:31:18
Message: <4feca2f6@news.povray.org>
On 28/06/2012 5:34 PM, Francois Labreque wrote:
> [SNIP Tolstoy novel about Java]

LOL :-D

-- 
Regards
     Stephen



From: Francois Labreque
Subject: Re: Haskell vs Java: Building a ray tracer
Date: 28 Jun 2012 15:37:43
Message: <4fecb287$1@news.povray.org>

>>> So how does all this look in Haskell?
>>>
>>> First of all, we need vectors.
>>>
>>> data Vector3 = Vector3 !Double !Double !Double
>>>
>>> class Num Vector3 where
>>> (Vector3 x1 y1 z1) + (Vector3 x2 y2 z2) = Vector3 (x1+x2) (y1+y2)
>>> (z1+z2)
>>> ...
>>>
>>> Similarly, we need colours. Unlike Java, we can actually use real
>>> grown-up operator names, so we don't have to write monstrosities like
>>>
>>> return v1.Add(v2).Add(v3).Normalise();
>>>
>>> Instead we can simply say
>>>
>>> normalise (v1 + v2 + v3)
>>>
>>> which is far easier to read. It also means stackloads of existing
>>> numeric functions automatically work with vectors (e.g., the sum
>>> function can sum a list of vectors - or a list a colours, which turns
>>> out to be more useful).
>>
>> Don't you still have to define how the sum function handles this Vector3
>> object?
>
> No. The sum function just calls the (+) function, so as long as (+) has
> been defined, sum will work.

Makes sense.

>>> Similarly, look back at the Java Ray class. See how much boilerplate
>>> code there is for declaring that the fields are public and constant, and
>>> for constructing a new object, and so forth. Now look at Haskell:
>>>
>>> data Ray = Ray {ray_start, ray_direction :: Vector3}
>>>
>>> ray_point :: Ray -> Double -> Vector3
>>> ray_point (Ray s d) t = s + d |* t
>>>
>>> Much more compact, and yet much more /readable/ at the same time.
>>
>> Not quite. Since I don't speak Haskell, I wouldn't even know what two
>> lines of code actually mean. I kind of figure that the :: specifies
>> that data type (ray_point is of type ray), but I have no idea why you
>> have the "-> Double -> Vector3" added at the end. Likewise, not knowing
>> what "!*" means makes the second line hard to follow.
>
> Given that the equation of a straight line is f(t) = S + Dt, I would
> have imagined that s + d |* t would be relatively straightforward. (Oh, the
> puns...)

Ok.  I still don't know what the -> thingamadoodle does.

>
> Do you not think that, once you've learned a little syntax, having to
> read just two lines of code is easier than having to wade through 12? (I
> guess that one isn't all that easy to answer objectively...)
>

My point was that you don't have to wade through the constructor(s) and 
member function definition to understand what

return v1.Add(v2).Add(v3).Normalise();

does either. [as long as your method names aren't too obfuscated, of course]

Besides, a lot of people gripe about Java not supporting operator 
overloading.  You are not the only one.

>>> Haskell's lazy evaluation makes this quite unnecessary. All we need is
>>> /one/ function which returns all intersections in a lazy list.
>>
>> You mean like AllIsect() does in Java.
>>
>>> Inspecting whether the list is empty evaluates just enough of isect to
>>> answer this question, and no more
>>
>> You already do it for AllIsect(). There's no additional work necessary.
>
> In Java, calling AllIsect() causes all intersections to be computed. In
> Haskell, calling the isect function does /not/ necessarily cause all
> intersections to be computed. Only the ones you actually "look at".
> That's the point I'm making.

As Clipka pointed out, you need to calculate all intersections anyway to 
find the nearest one.

>
> How would the caller know the size of the array?
>
> Java stores the size of all arrays and lets you query it at run-time. It
> also throws exceptions on all array bounds violations. Java is
> supposedly a "safe" language, remember? (It also throws exceptions on
> all null pointer accesses, for example.)
>

Thanks.  That's why I asked if there was automatic bounds-checking.

>>> In general, any Java class that has "just one method" can be turned into
>>> a plain vanilla Haskell function. That means that Surface is just a
>>> function, although we need to think carefully about what arguments it
>>> needs. As it turns out, Surface needs quite a few items of data, all of
>>> similar types. Since function arguments are unnamed in Haskell, it's
>>> probably a good idea to define a data structure. (Otherwise we'll
>>> forever be supplying the arguments in the wrong order and causing weird
>>> bugs that the compiler can't catch.)
>>
>> So a simple language syntax is a good thing, except when it isn't! :P
>
> Haskell doesn't have named arguments, so you can trivially define a data
> structure and solve the problem that way. It's no big deal.
>
> Java doesn't let you mention arguments by name when invoking a function
> either. But if you have an IDE, it will at least show you what the
> argument names are in the function definition, which can be good
> enough... You're still likely to get it wrong, though. Java doesn't let
> you explicitly assign values to names at all, unless you start making
> public fields writable or manually writing lots of getter and setter
> methods.

[macho programmer mode=on]
IDEs are for sissies.  In my days, we programmed in vi and we liked it. 
  If we weren't sure about the order of parameters, we'd open the .h in 
another buffer and search for the function, or look at the man page for 
standard stuff.
[macho programmer mode=off]

Actually, we'd burn out the SHIFT-F1, or CTRL-F1, or whatever the hotkey 
was in Borland's Turbo IDE!

>
>> I don't know why you name your camera parameters vx, vy, vz. I guess I'm
>> used to POV's location, up, right and look_at...
>
> More like location, up, right and forward. If this wasn't example code,
> you would probably write down documentation for what all these
> parameters are actually supposed to mean, the assumptions about them
> (e.g., the code is assuming that some of these are /unit/ vectors, and
> others aren't), and so on.
>

I was just pulling your leg.  You can call your variables Fred, Pebbles 
and Bambam if you want to.  Just don't go about making readability 
claims, if you do, though. :)

>>> Recall that in Java, every single one of these things would be an entire
>>> class, with the "public class Woo extends Wah" and the field declaration
>>> and the constructor declaration, and only THEN do we get to writing the
>>> bit of code that actually does something useful.
>>
>> No, you'd have one camera class, and have different calculations based
>> on the value of the camera.type member
>
> This is about as anti-OO as you can get.
>

So if I want to design a red/black binary tree, I'd have to define a red 
class and a black class, instead of having colour as a member value of 
my node class?

The type of projection, be it isometric, orthographic, perspective, 
wall-eye, etc... only affects the direction the rays are shot.  It's a 
lot simpler to have a switch{ } statement in one method than to reinvent 
five or six classes just because one of the methods is different (cf. 
your earlier comment about "one-method-classes").

If you are going to publish your stuff as a library and expect people to 
build on it, then, of course, it makes sense to create new classes for 
each camera type derived from a common one, making it easier for someone 
to come along and add new camera types (e.g.: red/green stereoscopic).

>> or two derived classes that
>> inherit from a generic camera class, if you prefer.
>
> Which would be... one class per camera type, as I asserted. :-P
>

Agreed. However, you don't need to rewrite the constructor(s) and other 
methods, unless you really need to.  So, in the end, it just amounts to 
the syntactic differences between the two languages.

>> Likewise, you'd have only one surface class with ambient, diffuse and
>> reflect being member methods, with another method called FinalColour
>> that combines the colour of those three and returns the overall colour
>> of the surface at that point.
>
> Well, yeah, you could write one Surface class which implements every
> possible rendering model all at once. It's more modular to make a
> separate class for each term, but you could go down the monolithic route
> if you want.
>

I fail to see in which circumstance you'd need to know anything other 
than the final colour of an object, so I don't see the advantage of 
computing the ambient, diffuse and reflective components of a point 
outside of the object itself.  Again, feel free to correct me if I'm 
overlooking something obvious, such as radiosity.  I'm not an algorithm 
expert.

>>> In Haskell, we just write the useful bit. All of the stuff above is
>>> useful code, and no filler. It's all stuff implementing actual maths,
>>> not micromanaging object construction or field initialisation or
>>> whatever.
>>
>> You don't need to micromanage object construction and field
>> initialisation in Java or C++ either, but if you don't you better be
>> sure that no one will ever try to access an uninitialized member!!!
>
> In Haskell, you write down the name of the thing, what fields it should
> have, and what their types are. The end.
>
> In Java or C++, you have to write whether each field is public or
> private, whether it's constant or not, you have to manually write the
> function that takes all the values and assigns them to each of the
> fields, and in general you have to write a whole /bunch/ of code to get
> the job done.
>

public/private/protected is important when you make libraries that are 
intended to be reused.  This is to prevent other programmers from 
mucking around with the internals of your objects and potentially 
creating all kinds of problems at runtime, such as memory leakage, or 
buffer overflows.  My OO theory is very rusty, but back in the day, 
objects were supposed to be treated as black boxes and you only 
interacted with them via public or protected methods.  Leaving 
everything wide open means you can either shoot yourself in the foot, or 
the compiler needs to add a lot of baby-sitting code that will prevent 
you from doing it at run-time (cf. my earlier anecdote about Lisp). 
This may not be all that important on a Win/Mac/*NIX machine with 8GB of 
ram, but in the mobile app area, the added bloat is a constraint.

> Then again, if you're working in Java or C++ (more so Java, since C++
> isn't pure-OO), you shouldn't be writing too many classes which just
> passively hold on to some data. Objects are supposed to "do" stuff.
>
>> By the way, what happens in Haskell if you try to access an uninitialised
>> member?
>
> It throws an exception saying "this field is uninitialised". It also
> generates a compiler warning.
>
>>> So if you can't access object internals, how can you implement spatial
>>> transformations? Easy: You transform the coordinates before input.
>>>
>>> transform_texture :: Transform -> Texture -> Texture
>>> transform_texture transform texture =
>>> \ point -> texture (transform point)
>>>
>>> Now wasn't that easy?
>
>> I think you have the whole idea backwards. Each object should know how
>> it is being transformed (think about it, how else can you determine
>> which face is being intersected by a ray?), therefore can take care of
>> those transformations internally when computing the object.FinalColour
>> value, so the transformation computations will have access to those
>> private members, you don't need to do any of the gymnastics you just did.
>
> Composing two functions is hardly "gymnastics".
>
> Essentially, there are two ways you can implement a coordinate space
> transformation:
>
> - Transform the object and leave the coordinates alone.
> - Transform the coordinates and leave the object alone.
>
> Both of these are equivalent, and equally valid, ways to solve the
> problem. The latter has the advantage that you don't have to reimplement
> the transformation code for every object; you can write it just once.
>

Huh?  While you are right that in both instances, the transform method 
or function needs to be called for each point calculation, a cylinder, 
an area light and a marble texture don't have the same parameters, so 
you _do_ have to rewrite the transformation code for each one anyway. 
To me it seems a lot more obvious to have an object call its internal 
transform() method while computing the color of a point on its surface, 
than to have to externally figure out which transform function to 
call based on the type of object we're dealing with.

>>> What have I just said? The short summary is that Haskell, the finest
>>> functional programming language in the land, is superior to Java, one of
>>> the more sucky OOP languages. Not exactly a revelation, is it? I think
>>> I'm going to go outside for a while...
>>
>> No, you've just showed that trying to use one language in a way that is
>> best suited for another language gets ugly.
>
> Ray tracing is all about mathematical operations. Vector
> transformations, set unions, spatial subdivisions, etc. And Haskell
> makes it really, really easy to write that code. Java is completely
> capable of doing everything Haskell is doing. It just makes it much more
> work.
>
>> the Lisp programmers had to write all kinds of additional
>> code to turn off all kinds of bounds checking, automatic type
>> conversions, etc... just to come up with comparable speeds.
>>
>> If you _know_ that you will never run out of bounds with a double or an
>> int, then the C programmer will claim victory. However, if you aren't
>> absolutely sure that you can't end up with a negative orbit radius, or a
>> heart rate larger than MAX_INT, then Lisp's automatic bounds checking
>> will save you some time-consuming assert() calls and test cases.
>
> ...which is why I use a programming language that defaults to safe
> rather than fast?
>
> (It still amuses me when Oracle claims that Java is a "safe language".
> Compared to Haskell, it's pretty damned weak, actually...)

Huh.... Earlier you were complaining about having to specify public, 
private and const in Java.  _That_, along with garbage collection, is 
the safety they're talking about.

-- 
/*Francois Labreque*/#local a=x+y;#local b=x+a;#local c=a+b;#macro P(F//
/*    flabreque    */L)polygon{5,F,F+z,L+z,L,F pigment{rgb 9}}#end union
/*        @        */{P(0,a)P(a,b)P(b,c)P(2*a,2*b)P(2*b,b+c)P(b+c,<2,3>)
/*   gmail.com     */}camera{orthographic location<6,1.25,-6>look_at a }



From: Orchid Win7 v1
Subject: Re: Haskell vs Java: Building a ray tracer
Date: 28 Jun 2012 16:27:23
Message: <4fecbe2b@news.povray.org>
>> Given that the equation of a straight line is f(t) = S + Dt, I would
>> have imagined that s + d |* t would be relatively straightforward. (Oh, the
>> puns...)
>
> Ok. I still don't know what the -> thingamadoodle does.

Valid point is accepted.

>> Do you not think that, once you've learned a little syntax, having to
>> read just two lines of code is easier than having to wade through 12? (I
>> guess that one isn't all that easy to answer objectively...)
>>
>
> My point was that you don't have to wade through the constructor(s) and
> member function definition to understand what
>
> return v1.Add(v2).Add(v3).Normalise();
>
> does either. [as long as your method names aren't too obfuscated, of
> course]

With a class, there's lots more code to write, and lots more to read. 
With this function, you're only writing the actual "functional parts", 
if you will. You don't have to perform a bunch of extra busy-work every 
time you want to do something.

> Besides, a lot of people gripe about Java not supporting operator
> overloading. You are not the only one.

Oh, no. I don't claim that any of my remarks are /original/. ;-)

>>> You already do it for AllIsect(). There's no additional work necessary.
>>
>> In Java, calling AllIsect() causes all intersections to be computed. In
>> Haskell, calling the isect function does /not/ necessarily cause all
>> intersections to be computed. Only the ones you actually "look at".
>> That's the point I'm making.
>
> As Clipka pointed out, you need to calculate all intersections anyway to
> find the nearest one.

And as I pointed out, this is not the case for many objects. E.g., a 
sphere has two points of intersection. These are the solutions to a 
quadratic equation. And we all know that the way to solve that is

   x = (-b +/- Sqrt(b^2 - 4ac)) / (2a)

To find the "nearest" intersection, you merely pick the "-" rather than 
the "+". Done.

There are shapes where the procedure is not so simple. But that doesn't 
mean we can't benefit from the cases where it /is/ this simple.
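
To make that concrete, here's a standalone sketch (the names are made up 
for the example, not the exact code from earlier in the thread) of the 
sphere case -- the near hit falls straight out of the quadratic, with no 
"all intersections" list in sight:

   type Vec = (Double, Double, Double)
   data Ray = Ray Vec Vec                    -- start, direction

   sub :: Vec -> Vec -> Vec
   sub (a,b,c) (x,y,z) = (a-x, b-y, c-z)

   dot :: Vec -> Vec -> Double
   dot (a,b,c) (x,y,z) = a*x + b*y + c*z

   -- Nearest hit of a ray against a sphere (centre, radius), if any.
   -- (For brevity this ignores the "ray starts inside the sphere" case.)
   firstIsectSphere :: Vec -> Double -> Ray -> Maybe Double
   firstIsectSphere centre radius (Ray s d) =
     let oc   = s `sub` centre
         a    = d `dot` d
         b    = 2 * (oc `dot` d)
         c    = oc `dot` oc - radius*radius
         disc = b*b - 4*a*c
         t    = (-b - sqrt disc) / (2*a)   -- the "-" root is the near one (a > 0)
     in if disc < 0 || t < 0 then Nothing else Just t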

>>> I don't know why you name your camera parameters vx, vy, vz. I guess I'm
>>> used to POV's location, up, right and look_at...
>>
>> More like location, up, right and forward.
>
> I was just pulling your leg. You can call your variables Fred, Pebbles
> and Bambam if you want to. Just don't go making readability claims
> if you do, though. :)

Is it wrong that I actually know who those people are? o_O

>>>> Recall that in Java, every single one of these things would be an entire
>>>> class.
>>>
>>> No, you'd have one camera class, and have different calculations based
>>> on the value of the camera.type member
>>
>> This is about as anti-OO as you can get.
>
> So if I want to design a red/black binary tree, I'd have to define a red
> class and a black class, instead of having colour as a member value of
> my node class?

No. Because nodes need to change colour, and in Java [and most other OO 
languages] objects can't dynamically change class.

More to the point, red/black is just a boolean switch that controls 
which way a node is processed. Much like a priority on a job object, or 
whatever. However...

> The type of projection, be it isometric, orthographic, perspective,
> wall-eye, etc... only affects the direction the rays are shot. It's a lot
> simpler to have a switch{ } statement in one method than to reinvent
> five or six classes just because one of the methods is different (cf.
> your earlier comment about "one-method-classes").

...the camera type completely changes the entire algorithm. Writing a 
giant switch{} statement to cover every possible camera type is

1. exactly the problem that OOP was supposed to get rid of, and
2. something that prevents anybody from adding new camera types 
(see the sketch just below).
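
Roughly what I mean (again a standalone sketch with made-up names): if a 
camera is just a function from image-plane coordinates to a ray, each 
projection is one small function, and adding a camera type means adding 
one more function -- nothing existing gets reopened:

   type Vec    = (Double, Double, Double)
   data Ray    = Ray Vec Vec                -- start, direction
   type Camera = Double -> Double -> Ray

   vadd :: Vec -> Vec -> Vec
   vadd (a,b,c) (x,y,z) = (a+x, b+y, c+z)

   vscale :: Double -> Vec -> Vec
   vscale k (x,y,z) = (k*x, k*y, k*z)

   -- location, right, up, forward (directions left unnormalised here)
   perspective :: Vec -> Vec -> Vec -> Vec -> Camera
   perspective loc right up fwd x y =
     Ray loc (vscale x right `vadd` vscale y up `vadd` fwd)

   orthographic :: Vec -> Vec -> Vec -> Vec -> Camera
   orthographic loc right up fwd x y =
     Ray (loc `vadd` vscale x right `vadd` vscale y up) fwd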

> If you are going to publish your stuff as a library and expect people to
> build on it, then, of course, it makes sense to create new classes for
> each camera type derived from a common one, making it easier for someone
> to come along and add new camera types (e.g. red/green stereoscopic).

One of the fun things about Java is that it supports dynamic loading. 
You can throw a Jar file in a folder, and have the ray tracer load new 
code and immediately use it, without recompiling.

Or at least, you can if you /don't/ code your application using giant 
switch{} statements everywhere. :-P

>>> or two derived classes that
>>> inherit from a generic camera class, if you prefer.
>>
>> Which would be... one class per camera type, as I asserted. :-P
>
> Agreed. However, you don't need to rewrite the constructor(s) and other
> methods, unless you really need to. So, in the end, it just amounts to
> the syntactic differences between the two languages.

Actually, yes you do.

Because Java is braindead, you're not allowed to inherit constructors. 
You can inherit regular methods, but not constructors. You have to 
explicitly write out the new constructor and (unless the superclass 
happens to have a no-argument constructor) make it explicitly call the 
old one, or it won't compile.

How's that for stupid?

>> Well, yeah, you could write one Surface class which implements every
>> possible rendering model all at once. It's more modular to make a
>> separate class for each term, but you could go down the monolithic route
>> if you want.
>>
>
> I fail to see in which circumstance you'd need to know anything other
> than the final colour of an object, so I don't see the advantage of
> computing the ambient, diffuse and reflective components of a point
> outside of the object itself. Again, feel free to correct me if I'm
> overlooking something obvious, such as radiosity. I'm not an algorithm
> expert.

It's the same thing as with cameras. If you implement each surface 
property as a separate class, then you can add more later. But if you 
implement one giant class that has every possible surface characteristic 
hard-coded into it, you can't do that any more. It's less modular in 
that way.
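
For what it's worth, here's how that modularity might look on the Haskell 
side (a standalone sketch; the types are invented for the example): each 
surface term is a little function, and the final colour is just the 
combination of whichever terms a given surface was built from, so a new 
term slots in without touching anything else:

   type Colour  = (Double, Double, Double)
   type Hit     = (Double, Double, Double)   -- stand-in for the real hit info
   type Surface = Hit -> Colour              -- one term of the lighting model

   addColour :: Colour -> Colour -> Colour
   addColour (r,g,b) (r',g',b') = (r+r', g+g', b+b')

   -- The "final colour" is the sum of whichever terms are present.
   combine :: [Surface] -> Surface
   combine terms hit = foldr addColour (0,0,0) (map ($ hit) terms)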

>> In Haskell, you write down the name of the thing, what fields it should
>> have, and what their types are. The end.
>>
>> In Java or C++, you have to write whether each field is public or
>> private, whether it's constant or not, you have to manually write the
>> function that takes all the values and assigns them to each of the
>> fields, and in general you have to write a whole /bunch/ of code to get
>> the job done.
>
> public/private/protected is important when you make libraries that are
> intended to be reused. This is to prevent other programmers from mucking
> around with the internals of your objects and potentially creating all
> kinds of problems at runtime, such as memory leakage, or buffer
> overflows. My OO theory is very rusty, but back in the day, objects were
> supposed to be treated as black boxes and you only interacted with them
> via public or protected methods. Leaving everything wide open means you
> can either shoot yourself in the foot, or the compiler needs to add a
> lot of baby-sitting code that will prevent you from doing it at run-time
> (cf. my earlier anecdote about Lisp). This may not be all that important
> on a Win/Mac/*NIX machine with 8GB of ram, but in the mobile app area,
> the added bloat is a constraint.

In the case of a ray tracer, you basically build a big static data 
structure (the scene), and then run some complicated algorithms over it 
(the ray tracer). On top of that, there are almost no special invariants 
which these data structures need to satisfy, meaning that there's almost 
no way to break the program. So issues like whether something is 
writable or not are pretty much moot in this specific instance. 
Obviously in other instances, it's rather important. Critical, even.

In Java, you can declare fields as constant, meaning that even if 
they're public, you can't alter them anyway. (But beware: If the field 
points to a mutable object, you /can/ still mutate that!) For the 
ray-tracer, you could equally declare everything private. You don't 
actually need field access at all, except within the object itself. As I 
say, in this example, it's all fairly moot either way.

Writing "public" or "private" isn't so bad I guess. But having to define 
all the fields, and then manually write a constructor with lists the 
same information /again/, and then manually assigns all the arguments to 
the fields... that's just tedious. Again, in some other situation, you 
might want your constructor to actually perform some real initialisation 
work. But in this specific case, there's nothing to initialise.

Haskell doesn't make me write initialisers. If I want one, I can write a 
special initialisation function and then do some mojo to prevent people 
forgetting to use it. And if I don't need that, I can just leave it. 
Either way, I'm not manually writing useless boilerplate code.
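
Something like this (field and function names invented for the example): 
the record declaration is the whole job, and a "smart constructor" only 
appears when there is genuinely something to initialise:

   type Vec = (Double, Double, Double)

   -- Declaring the record gives you the accessors and construction syntax
   -- for free; there is no hand-written constructor assigning field by field.
   data Ray = Ray { start :: Vec, direction :: Vec }

   -- If some real initialisation work *is* wanted, export a smart
   -- constructor like this instead of the bare Ray constructor:
   mkRay :: Vec -> Vec -> Ray
   mkRay s d = Ray { start = s, direction = normalise d }
     where
       normalise (x,y,z) = let len = sqrt (x*x + y*y + z*z)
                           in (x/len, y/len, z/len)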

>> Composing two functions is hardly "gymnastics".
>>
>> Essentially, there are two ways you can implement a coordinate space
>> transformation:
>>
>> - Transform the object and leave the coordinates alone.
>> - Transform the coordinates and leave the object alone.
>>
>> Both of these are equivalent, and equally valid, ways to solve the
>> problem. The latter has the advantage that you don't have to reimplement
>> the transformation code for every object; you can write it just once.
>>
>
> Huh? While you are right that in both instances, the transform method or
> function needs to be called for each point calculation, a cylinder, an
> area light and a marble texture don't have the same parameters, so you
> _do_ have to rewrite the transformation code for each one anyway. To me
> it seems a lot more obvious to have an object call its internal
> transform() method while computing the color of a point on its surface,
> than to have to externally figure out which transform function to
> call based on the type of object we're dealing with.

I will try again. Suppose for argument's sake we want to rotate an 
object through 13 degrees. You can do this two ways:

- Tell the object "rotate yourself 13 degrees", and have the object 
update all its coordinates and so forth. Of course, every type of object 
is defined in a different way. For a sphere, you'd rotate just the 
center coordinates. For a cube, you'd rotate two corner coordinates. For 
a polygon, you'd rotate each corner coordinate. And so forth.

- Wrap the object in a transform object, and replace the original object 
with the wrapped one. Now every time you say "what intersections do you 
have with this ray?", the wrapper object transforms the ray, and then 
asks the original object "what intersections do you have with this 
transformed ray?" The original object has no idea that anything has 
changed. You can implement the wrapper object once, and use it to 
transform any possible object.

On the downside, the first way involves transforming the object once, 
whereas the second involves transforming every individual ray...
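
In code, the second approach really is a one-liner (standalone sketch 
with made-up type names; it assumes a shape is represented by its 
intersection function, and that the transformed ray's direction isn't 
renormalised, so the returned t values still apply to the original ray):

   type Vec       = (Double, Double, Double)
   data Ray       = Ray Vec Vec          -- start, direction
   type Shape     = Ray -> [Double]      -- all intersection distances
   type Transform = Ray -> Ray           -- e.g. the inverse of "rotate 13 degrees"

   -- Transform the incoming ray, hand it to the untouched original shape.
   transformShape :: Transform -> Shape -> Shape
   transformShape invT shape = \ray -> shape (invT ray)
   -- (which is just function composition: transformShape invT = (. invT))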

>> ...which is why I use a programming language that defaults to safe
>> rather than fast?
>>
>> (It still amuses me when Oracle claims that Java is a "safe language".
>> Compared to Haskell, it's pretty damned weak, actually...)
>
> Huh.... Earlier you were complaining about having to specify public,
> private and const in Java. _That_, along with garbage collection, is the
> safety they're talking about.

Java is safer than C, that I'll grant you. But that's not the same as 
"safe".

Every time a Java program has an int field and a bunch of public static 
final int FOO = 0x384 declarations, you just /know/ that somebody, 
somewhere is going to put an invalid value into that field. And then the 
programmer must explicitly write a manual runtime check for this 
condition, and decide what to do about it if it occurs, and test that 
this works.

A Haskell programmer can simply write

   data Mode = Foo | Bar | Baz

and rest safe in the knowledge that the Mode field will /always/ be one 
of the three valid possibilities. No need to write code to check this, no 
need to write test cases to check for it. No need for the client to 
guess what happens if this isn't true. Much. Safer.
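
And when that field gets used, the compiler does the checking for you 
(sketch below; "describe" is a made-up example function). With -Wall or 
-Wincomplete-patterns, GHC warns about any constructor you forget to 
handle:

   data Mode = Foo | Bar | Baz

   describe :: Mode -> String
   describe Foo = "foo"
   describe Bar = "bar"
   describe Baz = "baz"   -- drop this line and the compiler flags the hole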

Java inherits C's strange convention of returning an int value for 
whatever, unless there's an error, in which case it returns a negative 
integer. And if you forget to test whether the result value is negative, 
you now have a runtime bug.

(Exhibit A: Reader.read() returns int, not char, because that way it can 
return a character code unless EOF is reached in which case it returns 
-1. WTF?)

Haskell has Maybe. If an operation can fail, you return Maybe Int 
instead of just Int. The API tells you "this operation can fail". You 
can't forget this fact. If you don't check for failure, your code does 
not compile. (Stick /that/ in your pipe and smoke it, Mr 
Checked-Exceptions-Unless-They-Subclass-java.lang.Error :-P ) You can't 
just "ignore" failure; you have to actually define what to do about it. 
(Although you can sort-of get away with returning a plausible but wrong 
default value if you're hell-bent on writing bad code.)
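
On the caller's side that looks something like this (made-up names again) 
-- there is no way to treat the result as a plain number without first 
saying what the Nothing case means:

   shade :: Maybe Double -> String
   shade Nothing  = "background colour"
   shade (Just t) = "shade the hit point at distance " ++ show t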

C behaves in undefined ways on a null pointer. Java throws an exception 
on a null pointer. Haskell doesn't /have/ pointers, so they can never be 
null. :-P

I could go on at length... but I won't. (Mainly because I know nobody 
except me cares.)



From: nemesis
Subject: Re: Haskell vs Java: Building a ray tracer
Date: 28 Jun 2012 18:30:00
Message: <web.4fecda86321519bc773c9a3e0@news.povray.org>
Invisible <voi### [at] devnull> wrote:
[tldr]

> and this is valid. Notice, however, that it is impossible to write an
> "unwrap" function. Because once you "wrap" something, its original type
> information is lost forever, so there's no way in hell to know what type
> to cast it back to.

what about in heaven?  In Lisp there are no such hellish troubles. :)

> Now a "shape" is simply an ordinary data structure, which contains some
> function pointers. And each sort of shape - a sphere, a cone, whatever -
> is an ordinary /function/ which fills out this data structure with the
> right function pointers. In particular, every type of shape now has the
> same type signature. There is no Sphere type, no Plane type, no Cone
> type, there is only a Shape type. So now we can write
>
>    [sphere 0 1, plane 1 0]
>
> and have it be well-typed. (It's [Shape].)

So your solution to get away from the complaints of the compiler is to have all
types be the same single type?  Yes, sounds like the Lisp solution too. ;)

> The short summary is that Haskell, the finest
> functional programming language in the land, is superior to Java, one of
> the more sucky OOP languages. Not exactly a revelation, is it? I think
> I'm going to go outside for a while...

you do good. :)



From: Warp
Subject: Re: Haskell vs Java: Building a ray tracer
Date: 29 Jun 2012 01:37:54
Message: <4fed3f32@news.povray.org>
clipka <ano### [at] anonymousorg> wrote:
> > You can certainly do things this way. There is a snag, however. If you
> > stick to Haskell 2010 (the currently published official language
> > specification), you cannot easily make a list of shapes. Because all the
> > elements of a list have to be of identical types. And (for example)
> > Sphere is not the same type as Plane, even if they do both implement the
> > Shape class (and possibly other classes like Eq or Show). There is no
> > way to say "a list of everything that has these methods".

> Heh. So standard Haskell obviously sucks as much as Java without generics.

Why exactly would you need generics in order to make a list of different
shapes? (Being able to do such a thing is kind of the very definition of
object-oriented programming.)

-- 
                                                          - Warp


