POV-Ray : Newsgroups : povray.programming : DirectX9's HLSL & NVidia's Cg Server Time
  DirectX9's HLSL & NVidia's Cg (Message 1 to 7 of 7)  
From: Ryan Bennitt
Subject: DirectX9's HLSL & NVidia's Cg
Date: 1 Nov 2003 07:23:55
Message: <3fa3a5db@news.povray.org>
I was browsing the DirectX 9 documentation, in particular the High Level
Shader Language (HLSL) specification, and started thinking about the
potential of this language, and for that matter of languages like NVidia's Cg.
I was wondering if we would ever see raytracers written in vertex/pixel shader
languages like these. It strikes me that the latest graphics cards are turning
into small parallel supercomputers, capable not only of performing vector
arithmetic very quickly, but also of program flow control, and supporting
custom data structures. Currently they are used purely to transform vertices
into meshes, but I see no reason why they can't transform an array of vectors
and floats into spheres and calculate ray intersections and texture/normal
values. Is a raytracer not just a complicated vertex/pixel shader program?
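The per-pixel arithmetic in question really is just a handful of vector operations. As a minimal sketch (in Python rather than HLSL/Cg, with hypothetical names, purely to illustrate the idea), the kernel a card would run per pixel for a sphere boils down to solving one quadratic:

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the nearest positive ray parameter t, or None on a miss.

    Solves |origin + t*direction - center|^2 = radius^2, the same
    quadratic a per-pixel shader kernel would evaluate.
    """
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t > 0.0 else None

# A ray from the origin along +z hits a unit sphere centred at (0, 0, 5):
t = intersect_sphere((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), (0.0, 0.0, 5.0), 1.0)
print(t)  # 4.0
```

Each pixel's computation is independent of every other's, which is exactly the data-parallel shape shader pipelines are built for.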

Graphics cards are effectively becoming as flexible as processors in terms of
the functions they can perform. Plus, if a graphics card contains 4+
vertex/pixel pipelines (these days 'pipeline' is a bit of a misnomer coined
for marketing purposes; I reckon it should really be called a processor), each
running at around 500MHz, with access to a large 128MB+ shared memory running
at a similar speed, what you've effectively got is a parallel computer with
processing capability comparable to the CPU's. We all know raytracing is a
highly parallel process; attempts at distributed rendering over a network
prove this. Can we not have a 'server' running on a single machine, serving
batches of pixels to be rendered either on the CPU or on a pipeline in the
GPU, as each requires them?

Now there are certain functions the GPU won't be able to perform (such as
parsing), but once the CPU has parsed the scene it can copy the necessary data
structures into the graphics card's memory, and both the CPU and GPU can start
rendering sets of pixels in the image.

The kind of architecture required to send batches of pixels to either the CPU or
GPU would of course facilitate network rendering too. If the CPU has to give
batches of pixels to itself or the GPU to render, and both have to ask for more
pixels when they finish a batch, then there's no reason why this can't be
applied to other computers on the network that, once given the scene files to
parse, can request batches of pixels for their own CPU/GPU.
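The batch-serving scheme described above is straightforward to sketch. Assuming (hypothetically) that each renderer, whether a CPU thread, a GPU wrapper, or a networked peer, simply asks a shared queue for its next run of pixels, the server side could look like this:

```python
from queue import Queue

def make_batches(width, height, batch_size):
    """Split the image into contiguous runs of pixel indices, one per work item."""
    total = width * height
    return [(start, min(start + batch_size, total))
            for start in range(0, total, batch_size)]

def serve(batches):
    """Load a thread-safe queue that any renderer (CPU thread, GPU wrapper,
    or networked peer) can drain, requesting the next batch as soon as it
    finishes the previous one."""
    q = Queue()
    for batch in batches:
        q.put(batch)
    return q

q = serve(make_batches(640, 480, 4096))
print(q.qsize())  # 75 batches cover the 640x480 image
```

Because workers pull batches rather than being assigned them up front, a fast GPU pipeline and a slow remote CPU naturally balance their loads.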

The main question is this. Is a raytracer simply too complicated to compile/run
on a GPU?



From: Thorsten Froehlich
Subject: Re: DirectX9's HLSL & NVidia's Cg
Date: 2 Nov 2003 17:21:58
Message: <3fa58386$1@news.povray.org>
In article <3fa3a5db@news.povray.org> , "Ryan Bennitt" 
<rya### [at] lycoscouk> wrote:

> I was wondering if we would ever see raytracers written in
> vertex/pixel shader languges like these.

No, the programmable units on 3D graphics accelerators offer only 32-bit
floating-point precision.  That is simply insufficient for ray-tracing.
Besides, their pipelines are optimised for pixel processing with local data,
while ray-tracing requires very fast random memory access.
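The 32-bit limitation is easy to demonstrate. A sketch in Python (a stand-in for shader code, using `struct` to round doubles to IEEE 754 single precision): two intersection distances that a 64-bit float separates easily collapse to the same 32-bit value:

```python
import struct

def to_float32(x):
    """Round a Python double to the nearest IEEE 754 single-precision value."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Single precision carries roughly 7 significant decimal digits, so two
# hit distances differing in the 9th digit become indistinguishable:
t_near, t_far = 10000.0, 10000.0001
print(t_near == t_far)                          # False in 64-bit
print(to_float32(t_near) == to_float32(t_far))  # True: collapsed at 32 bits
```

In a ray tracer such collapses surface as acne on surfaces, missed shadow rays, and self-intersection artifacts in large scenes.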

    Thorsten

____________________________________________________
Thorsten Froehlich, Duisburg, Germany
e-mail: tho### [at] trfde

Visit POV-Ray on the web: http://mac.povray.org



From: Alessandro Falappa
Subject: Re: DirectX9's HLSL & NVidia's Cg
Date: 3 Nov 2003 04:50:03
Message: <Xns94286E4E9294Aalexfalappa@204.213.191.226>
"Ryan Bennitt" <rya### [at] lycoscouk> wrote in
news:3fa3a5db@news.povray.org: 

> The main question is this. Is a raytracer simply too complicated to
> compile/run on a GPU?

No, and in fact it has already been done, even if it is currently at the
research stage with no practical applications. In my opinion hardware
ray-tracing will not supplant standard software ray-tracing; it will
complement it instead. Anyway, see:
http://graphics.stanford.edu/~tpurcell/
for the current state of the research.

Alessandro



From: Thorsten Froehlich
Subject: Re: DirectX9's HLSL & NVidia's Cg
Date: 3 Nov 2003 06:05:33
Message: <3fa6367d@news.povray.org>
In article <Xns### [at] 204213191226> , Alessandro 
Falappa <don### [at] nessunoit>  wrote:

>> The main question is this. Is a raytracer simply too complicated to
>> compile/run on a GPU?
>
> No, and in fact it has been already done even if we are currently at a
> research stage with no practical application. In my opinion hardware
> ray-tracing will not overcome standard software ray-tracing, it will
> complement it instead. Anyway see:
> http://graphics.stanford.edu/~tpurcell/
> for current research status.

No, you got fooled by the style the paper is written in.  If you read it
carefully, you will notice they only talk about a simulation.  They have not
actually implemented it, they just wrote a simulator that shows it could be
possible.  All their figures are based on that simulator!

So, in essence they have shown absolutely nothing new.  In theory you can
also implement a word processor this way.  That doesn't imply it would make
sense or work well! ;-)

    Thorsten

____________________________________________________
Thorsten Froehlich, Duisburg, Germany
e-mail: tho### [at] trfde

Visit POV-Ray on the web: http://mac.povray.org



From: Alessandro Falappa
Subject: Re: DirectX9's HLSL & NVidia's Cg
Date: 4 Nov 2003 12:47:31
Message: <Xns9429BF48D98C0alexfalappa@204.213.191.226>
"Thorsten Froehlich" <tho### [at] trfde> wrote in
news:3fa6367d@news.povray.org: 

> No, you got fooled by the style the paper is written in.  If you read
> it carefully, you will notice they only talk about a simulation.  They
> have not actually implemented it, they just wrote a simulator that
> shows it could be possible.  All their figures are based on that
> simulator! 

You are referring to the paper "Ray Tracing on Programmable Graphics 
Hardware", which dates back to 2002, when no hardware met the authors' 
requirements.

In the subsequent paper "Photon Mapping on Programmable Graphics Hardware", 
however, the authors make use of "...a stochastic ray tracer written using 
a fragment program. The output of the ray tracer is a texture with all the 
hit points, normals, and colors for a given ray depth." (citation from 
chapter 2.4) and their prototype application runs on "... a GeForce FX 5900
Ultra and a 3.0 GHz Pentium 4 CPU with Hyper Threading and 2.0 GB RAM. The 
operating system was Microsoft Windows XP, with version 43.51 of the NVIDIA 
drivers. All of our kernels are written in Cg and compiled with cgc version 
1.1 to native fp30 assembly." (citation from chapter 3).

Anyway, my point was not to praise the GPU approach but simply to answer 
the poster that ray tracing on a GPU is possible, with some limitations. 
Both papers are proofs of concept, and for practical applications (likely 
not substituting traditional rendering) we will have to wait and see. My 
main impression is that commodity-hardware raytracing is "around the 
corner".
--

Alessandro



From: Thorsten Froehlich
Subject: Re: DirectX9's HLSL & NVidia's Cg
Date: 4 Nov 2003 14:06:38
Message: <3fa7f8be$1@news.povray.org>
In article <Xns### [at] 204213191226> , Alessandro 
Falappa <don### [at] nessunoit>  wrote:

> You are referring to the paper "Ray Tracing on Programmable Graphics
> Hardware" that dates back to 2002 when no hardware met the authors
> requirements.
>
> In the successive paper "Photon Mapping on Programmable Graphics Hardware",

Ah, I haven't read that one yet (it is rather new) as it seemed like it was
not dealing with ray-tracing.  Will see if I find some time to read it...

    Thorsten

____________________________________________________
Thorsten Froehlich, Duisburg, Germany
e-mail: tho### [at] trfde

Visit POV-Ray on the web: http://mac.povray.org



From: Ryan Bennitt
Subject: Re: DirectX9's HLSL & NVidia's Cg
Date: 7 Nov 2003 11:42:23
Message: <3fabcb6f@news.povray.org>
Hmm, both languages cater for 64-bit floating-point precision. I guess that
with current hardware their compilers will truncate any use of 64-bit floats
to 32-bit, though. If shaders become considerably more complex in the gaming
industry (as is predicted), then the hardware will follow suit, and there may
well be demand for full 64-bit precision. Maybe today's hardware isn't up to
it, but in the future this might be more viable. As far as I can tell, though,
there's nothing in the language specifications that rules out the development
of a ray tracer.

Ryan

"Thorsten Froehlich" <tho### [at] trfde> wrote in message
news:3fa58386$1@news.povray.org...
> In article <3fa3a5db@news.povray.org> , "Ryan Bennitt"
> <rya### [at] lycoscouk> wrote:
>
> > I was wondering if we would ever see raytracers written in
> > vertex/pixel shader languges like these.
>
> No, the programmable units on 3D graphic accelerators offer only 32 bit
> precision floating-point numbers.  That is simply insufficient for
> ray-tracing.  Besides, their pipelines are optimised for pixel processing
> with local data, while ray-tracing requires very fast random memory access.
>
>     Thorsten
>
> ____________________________________________________
> Thorsten Froehlich, Duisburg, Germany
> e-mail: tho### [at] trfde
>
> Visit POV-Ray on the web: http://mac.povray.org



Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.