From: Orchid Win7 v1
Subject: WebGL
Date: 5 Jun 2016 10:05:26
Message: <575431a6$1@news.povray.org>
Hey guys. Remember this?

http://madebyevan.com/webgl-path-tracing/

Yeah, well now there's this:

http://jonathan-olson.com/tesserace/tests/3d.html

Firstly, it looks a bit less like a computer simulation and more like 
the Real World. Only a bit, but hey.

My favourite feature is that you can adjust the lens aperture and focus 
distance. I was just wondering whether you could accurately simulate 
depth of field using path tracing! Now we just need to be able to change 
the focal length of the lens too... (There are also options for 
adjusting the exposure.)

For a camera nerd, this is quite fun! (Well, it beats Nikon's "lens 
simulator", which just shows a static JPEG with variable magnification!)

Sometimes I really wish I knew how to do this stuff for myself...

What *would* be interesting is to see how this handles a scene of 
non-trivial complexity. I'm going to say "like a glacier"...



From: Orchid Win7 v1
Subject: Re: WebGL
Date: 5 Jun 2016 10:11:27
Message: <5754330f$1@news.povray.org>
On 05/06/2016 03:05 PM, Orchid Win7 v1 wrote:

> What *would* be interesting is to see how this handles a scene of
> non-trivial complexity. I'm going to say "like a glacier"...

...well, there is this:

http://reindernijhoff.net/2015/04/realtime-webgl-path-tracer/

Granted, it's grainy as *hell*, but at a tiny resolution it's watchable.

Maybe give it ten years, and GPUs will have enough horsepower for 
people to actually put this stuff in game engines.

Maybe.



From: scott
Subject: Re: WebGL
Date: 6 Jun 2016 08:36:10
Message: <57556e3a@news.povray.org>
> My favourite feature is that you can adjust the lens aperture and focus
> distance. I was just wondering whether you could accurately simulate
> depth of field using path tracing! Now we just need to be able to change
> the focal length of the lens too... (There are also options for
> adjusting the exposure.)
>
> For a camera nerd, this is quite fun! (Well, it beats Nikon's "lens
> simulator", which just shows a static JPEG with variable magnification!)
>
> Sometimes I really wish I knew how to do this stuff for myself...

Follow a tutorial on WebGL - or if you want to skip all the html/js 
boilerplate stuff you'll need to know, go straight to something like 
shadertoy. There are plenty of examples, and shadertoy now supports 
reading back pixels from previous frames, so you can do 
multi-frame-averaging for path-tracing amongst other effects.

Building a lens simulator (with real-time path-traced results) sounds 
feasible and very interesting.



From: Orchid Win7 v1
Subject: Re: WebGL
Date: 8 Jun 2016 13:06:27
Message: <57585093$1@news.povray.org>
On 06/06/2016 01:36 PM, scott wrote:
>> Sometimes I really wish I knew how to do this stuff for myself...
>
> Follow a tutorial on WebGL - or if you want to skip all the html/js
> boilerplate stuff you'll need to know, go straight to something like
> shadertoy. There are plenty of examples, and shadertoy now supports
> reading back pixels from previous frames, so you can do
> multi-frame-averaging for path-tracing amongst other effects.
>
> Building a lens simulator (with real-time path-traced results) sounds
> feasible and very interesting.

Well, I went to the ShaderToy website... and discovered that apparently 
Opera has WebGL support that's flaky as *hell*! The number of times I 
have to close and reopen the browser to turn WebGL back on...

Some of the shaders on offer are absurd. I saw one that was a real-time 
simulation of waves crashing on a giant sea... fractal wave 
distribution, subsurface scattering, specular highlights... and wondered 
why the hell they don't put this in games yet!

But mostly, attempting to browse the shaders just broke Opera.

After about an hour of squinting at the sparse ShaderToy documentation 
and making some educated guesses, I did eventually manage to build a 
trivial ray-tracer that runs in real-time. I have no idea how to do 
random number generation yet, but we'll see...



From: scott
Subject: Re: WebGL
Date: 9 Jun 2016 03:11:22
Message: <5759169a$1@news.povray.org>
> After about an hour of squinting at the sparse ShaderToy documentation
> and making some educated guesses, I did eventually manage to build a
> trivial ray-tracer that runs in real-time.

Come on, make it public and post the link then for all to see :-)

I had a bit more of a think about how you might do a lens simulation. 
You start by firing the ray from a random point within the pixel on the 
sensor, and fire it at a random point on the surface of the first lens 
element. Then follow that ray through the lenses (assume no reflection 
for speed here), if it gets too far from the lens axis then return 
black, otherwise once it gets out of the end of the lenses and into the 
scene do the raytrace as normal.

With the above you should be able to move about individual lens elements 
in almost real time (it might take a second or two to smooth out the noise).
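
Roughly, one surface step might look like this (just a sketch; the 
names are made up, and the ray direction is assumed normalized):

struct LensRay { vec3 pos; vec3 dir; };

// Refract the ray at a spherical lens surface with centre C, radius R
// and index ratio eta = n_before / n_after. Returns false if the ray
// misses the element or is totally internally reflected, in which
// case the caller shades the pixel black.
bool RefractAtSurface(inout LensRay r, vec3 C, float R, float eta)
{
    vec3 V = r.pos - C;
    float b = dot(V, r.dir);
    float det = b*b - (dot(V, V) - R*R);
    if (det < 0.0) return false;            // missed this element
    float t = -b - sqrt(det);
    if (t <= 0.0) t = -b + sqrt(det);       // ray may start inside the glass
    if (t <= 0.0) return false;
    vec3 p = r.pos + r.dir*t;
    vec3 n = normalize(p - C);
    if (dot(n, r.dir) > 0.0) n = -n;        // normal must face the incoming ray
    vec3 d = refract(r.dir, n, eta);
    if (d == vec3(0.0)) return false;       // total internal reflection
    r.pos = p;
    r.dir = d;                              // refract() keeps it unit length
    return true;
}

Chain that through each element in turn, and hand whatever survives to 
the normal scene trace.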

> I have no idea how to do
> random number generation yet, but we'll see...

Yes, that is tricky, but I'm sure you've already looked at other 
shaders there to see how they do it. Something involving the sine of 
the coordinates multiplied by some huge value, IIRC.
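
For reference, I think the snippet everyone passes around is roughly 
this (no idea who originally wrote it):

// Returns a pseudo-random float in [0, 1) from a 2D seed. Vary the
// seed per pixel / per frame / per bounce to get fresh numbers.
float hash(vec2 p)
{
    return fract(sin(dot(p, vec2(12.9898, 78.233))) * 43758.5453);
}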



From: Orchid Win7 v1
Subject: Re: WebGL
Date: 9 Jun 2016 13:53:43
Message: <5759ad27$1@news.povray.org>
On 09/06/2016 08:11 AM, scott wrote:
>> After about an hour of squinting at the sparse ShaderToy documentation
>> and making some educated guesses, I did eventually manage to build a
>> trivial ray-tracer that runs in real-time.
>
> Come on, make it public and post the link then for all to see :-)

I'm sure you've all seen a white sphere with some coloured lights. :-P

> I had a bit more of a think about how you might do a lens simulation.
> You start by firing the ray from a random point within the pixel on the
> sensor, and fire it at a random point on the surface of the first lens
> element. Then follow that ray through the lenses (assume no reflection
> for speed here), if it gets too far from the lens axis then return
> black, otherwise once it gets out of the end of the lenses and into the
> scene do the raytrace as normal.
>
> With the above you should be able to move about individual lens elements
> in almost real time (it might take a second or two to smooth out the
> noise).

That sounds quite complex. (In particular, it seems to require me to 
actually design a real lens assembly, and implement real refraction.)

I was thinking more along the lines of a camera entity that fires rays 
in a pattern that matches a theoretical ideal lens. But I need to figure 
out the equations for that first...
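
In other words, something like this thin-lens sketch, reusing the 
Ray / RayPoint / Camera helpers from my toy tracer, plus a hash() like 
the one you describe for the two random numbers. (Untested, so the 
maths may well be wrong.)

// rnd is a pair of uniform random numbers in [0,1) for this sample.
// aperture is the lens radius; focalDist is the depth of the plane
// that stays sharp (measured along z, since ray.D.z == 1 here).
Ray ThinLensCamera(in vec2 uv, in vec2 rnd, in float aperture, in float focalDist)
{
     Ray pin = Camera(uv);                  // the ordinary pinhole ray
     vec3 focus = RayPoint(pin, focalDist); // the point that must stay sharp
     float ang = 6.28318530718 * rnd.x;
     float rad = aperture * sqrt(rnd.y);    // sqrt() => uniform over the disc
     Ray ray;
     ray.S = pin.S + vec3(cos(ang), sin(ang), 0) * rad;
     ray.D = focus - ray.S;                 // all samples re-converge at focus
     return ray;
}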

>> I have no idea how to do
>> random number generation yet, but we'll see...
>
> Yes that is tricky, but I'm sure you've already looked at other shaders
> there to see how they do it. Something involving the sine of the
> coordinates multiplied by some huge value IIRC.

Yeah, not just a random number per pixel, but *multiple* random numbers! 
Multiple, statistically-independent numbers... This is not trivial.
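
(Presumably you can keep re-seeding the same hash with different 
offsets, along the lines of:

float r1 = hash(fragCoord.xy + vec2(0.0, float(iFrame)));
float r2 = hash(fragCoord.xy + vec2(17.3, float(iFrame)));

...but whether those are independent *enough* is another question.)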



From: Orchid Win7 v1
Subject: Re: WebGL
Date: 9 Jun 2016 14:00:16
Message: <5759aeb0$1@news.povray.org>
On 08/06/2016 06:06 PM, Orchid Win7 v1 wrote:
> After about an hour of squinting at the sparse ShaderToy documentation
> and making some educated guesses, I did eventually manage to build a
> trivial ray-tracer that runs in real-time. I have no idea how to do
> random number generation yet, but we'll see...

Yesterday I had another go at this, and hit a show-stopping snag: No 
flow control.

It seems that implementing reflection is essentially impossible. You 
can't do recursive functions. You can't do arrays. [Well, you can... but 
the array indices must be known at compile-time, thus negating all 
possible advantages of arrays.] You can't do while-loops with 
complicated break or continue conditions. You can't do function 
pointers... In short, it seems that all code jumps and data accesses 
must be statically known at compile-time.

Obviously, the rendering equation is inherently recursive. You fire a 
ray, which spawns further rays that are recursively traced like the 
first one. But without the ability to dynamically trace different
primitives differently, or apply different surface characteristics 
dynamically, or basically do *anything* dynamically... You end up 
basically needing to write some kind of engine to *generate* the WebGL 
code, hard-coded to the particular geometry of your scene.

I see now why no computer game will ever use this technology. It's fast 
because it's *completely inflexible*.

I am now baffled as to how all the *other* people managed to do so much 
cool stuff in a language that abhors conditional branching... Clearly 
I'm going to have to cheat and look at the source code. But I fear I 
won't be able to comprehend any of it.

In summary, it appears that implementing reflection is mathematically 
impossible. I may perhaps still be able to realise my dream of rendering 
depth of field effects realistically. (Although without a nice detailed 
scene to look at, it might be rather disappointing...)



From: Orchid Win7 v1
Subject: Re: WebGL
Date: 9 Jun 2016 14:46:06
Message: <5759b96e$1@news.povray.org>
On 09/06/2016 06:53 PM, Orchid Win7 v1 wrote:
> On 09/06/2016 08:11 AM, scott wrote:
>>> After about an hour of squinting at the sparse ShaderToy documentation
>>> and making some educated guesses, I did eventually manage to build a
>>> trivial ray-tracer that runs in real-time.
>>
>> Come on, make it public and post the link then for all to see :-)
>
> I'm sure you've all seen a white sphere with some coloured lights. :-P

OK, here ya go:

struct Ray
{
     vec3 S, D; // start point and direction
};

vec3 RayPoint(in Ray ray, in float t)
{
     return ray.S + ray.D*t;
}

// Pinhole camera: eye at z = -5, rays fanning out towards +z.
Ray Camera(in vec2 uv)
{
     Ray ray;
     ray.S = vec3(0, 0, -5);
     ray.D = vec3(uv.x, uv.y, 1);
     return ray;
}

struct Sphere
{
     vec3 C;
     float R;
     float R2;
};

Sphere MakeSphere(vec3 center, float radius)
{
     return Sphere(center, radius, radius*radius);
}

// Returns the nearest intersection distance t, or -1.0 on a miss.
float IsectSphere(in Ray ray, in Sphere sphere)
{
     // (P - C)^2 = r^2
     // (P - C)^2 - r^2 = 0
     // ((Dt + S) - C)^2 - r^2 = 0
     // (Dt + S - C)^2 - r^2 = 0
     // (Dt + V)^2 - r^2 = 0
     // D^2 t^2 + 2DVt + V^2 - r^2 = 0

     vec3 V = ray.S - sphere.C;
     float a = dot(ray.D, ray.D);
     float b = 2.0*dot(V, ray.D);
     float c = dot(V, V) - sphere.R2;

     float det = b*b - 4.0*a*c;
     if (det >= 0.0)
     {
         return (0.0 - b - sqrt(det))/(2.0*a);
     }
     else
     {
         return -1.0;
     }
}

// Diffuse (Lambertian) brightness from a point light. Can go
// negative on the far side; the display clamp hides that.
float Illuminate(vec3 light, vec3 surface, vec3 normal)
{
     vec3 d = light - surface;
     return dot(normalize(d), normalize(normal));
}

// Map pixel coordinates to camera space, preserving aspect ratio.
vec2 MapScreen(vec2 xy)
{
     return (xy - iResolution.xy/2.0) / iResolution.y;
}

// Map pixel coordinates to [-0.5, 0.5] on both axes (for the mouse).
vec2 MapScreenExact(vec2 xy)
{
     return (xy - iResolution.xy/2.0) / iResolution.xy;
}

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
     Sphere s1 = MakeSphere(vec3(MapScreenExact(iMouse.xy)*4.0, 0), 1.0);

     vec3 l1 = vec3(-5, +5, -3); // Red
     vec3 l2 = vec3(+5, +5, -3); // Green
     vec3 l3 = vec3( 0, -5, -3); // Blue

     Ray cr = Camera(MapScreen(fragCoord.xy));
     float t = IsectSphere(cr, s1);

     if (t > 0.0)
     {
         vec3 surface = RayPoint(cr, t);
         vec3 normal = surface - s1.C;
         float b1 = Illuminate(l1, surface, normal);
         float b2 = Illuminate(l2, surface, normal);
         float b3 = Illuminate(l3, surface, normal);
         fragColor = vec4(b1, b2, b3, 1);
     }
     else
     {
         fragColor = vec4(0, 0, 0, 0);
     }
}

Copy & paste into the ShaderToy website and hit Go. You can click on the 
image to move the sphere around. (It doesn't follow your cursor exactly 
because of the perspective transformation.)



From: scott
Subject: Re: WebGL
Date: 10 Jun 2016 02:59:11
Message: <575a653f$1@news.povray.org>
> In summary, it appears that implementing reflection is mathematically
> impossible. I may perhaps still be able to realise my dream of rendering
> depth of field effects realistically. (Although without a nice detailed
> scene to look at, it might be rather disappointing...)

Obviously it's not, because there are plenty of shadertoy examples 
showing it:

https://www.shadertoy.com/view/4ssGWX

Surfaces that reflect *OR* refract are trivial:

for (int i = 0; i < MAX_TRACE_DEPTH; i++)
{
   intersect = Trace( ray );
   if( REFLECT(intersect) ) ray = ReflectRay(ray, intersect);
   if( REFRACT(intersect) ) ray = RefractRay(ray, intersect);
}

To do surfaces that do both you need to choose randomly at each 
intersection which ray to follow (weighted appropriately depending on 
the material) and then average over a larger number of samples. You can 
either do many samples per frame (but obviously this gets slow for 
complex scenes), or what is commonly done is to average over many frames 
(and reset the average if the camera is moved). Or both.
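
Per bounce the choice is only a couple of lines (hash() being the 
sine-based snippet from before; Reflectivity() and seed are made up 
for illustration):

float refl = Reflectivity(intersect);   // 0..1, made-up material lookup
if (hash(seed) < refl) ray = ReflectRay(ray, intersect);
else                   ray = RefractRay(ray, intersect);
seed += vec2(1.0, 7.3);                 // vec2 seed; nudge it each bounce

Picking each branch with probability equal to its weight means the 
weighting factors cancel, so the plain average of the samples comes 
out right.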



From: Orchid Win7 v1
Subject: Re: WebGL
Date: 10 Jun 2016 03:23:54
Message: <575a6b0a@news.povray.org>
On 10/06/2016 07:59 AM, scott wrote:
>> In summary, it appears that implementing reflection is mathematically
>> impossible. I may perhaps still be able to realise my dream of rendering
>> depth of field effects realistically. (Although without a nice detailed
>> scene to look at, it might be rather disappointing...)
>
> Obviously it's not, because there are plenty of shadertoy examples
> showing it:
>
> https://www.shadertoy.com/view/4ssGWX

Interesting. On my browser, that just crashes.

> Surfaces that reflect *OR* refract are trivial:
>
> for (int i = 0; i < MAX_TRACE_DEPTH; i++)
> {
>    intersect = Trace( ray );
>    if( REFLECT(intersect) ) ray = ReflectRay(ray, intersect);
>    if( REFRACT(intersect) ) ray = RefractRay(ray, intersect);
> }

The difficulty is figuring out which surface was hit just from the 
intersection coordinates. And perhaps a perfect reflection isn't so 
hard, but I wanted to have the colour change slightly on each 
reflection, and I can't figure out how to stack up the colour changes 
until you get to the end of the reflection chain and then unstack them 
again.
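
Although, writing it down... maybe the "unstacking" is just a 
multiplication, so it can be carried forwards instead of backwards? 
Something like this (Trace / Emitted / Tint and the Hit type being 
helpers I'd still have to write):

vec3 tint = vec3(1.0);                  // product of the colour changes so far
vec3 color = vec3(0.0);
for (int i = 0; i < MAX_TRACE_DEPTH; i++)
{
    Hit h = Trace(ray);
    color += tint * Emitted(h);         // light gathered at this bounce
    tint *= Tint(h);                    // tints everything deeper in the chain
    ray = ReflectRay(ray, h);
}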

> To do surfaces that do both you need to choose randomly at each
> intersection which ray to follow (weighted appropriately depending on
> the material) and then average over a larger number of samples. You can
> either do many samples per frame (but obviously this gets slow for
> complex scenes), or what is commonly done is to average over many frames
> (and reset the average if the camera is moved). Or both.

Still trying to work out how you "average over several frames". 
ShaderToy is great fun, but so *utterly* undocumented... it's quite hard 
to figure out how to do anything.
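
As far as I can tell, it would have to be something like this, in a 
buffer that reads its own previous frame back via iChannel0 (with 
pathTraceSample() being the estimator I'd still have to write):

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
     vec3 fresh = pathTraceSample(fragCoord);    // this frame's noisy sample
     vec3 prev = texture(iChannel0, fragCoord / iResolution.xy).rgb;
     float n = float(iFrame + 1);                // number of samples so far
     fragColor = vec4(mix(prev, fresh, 1.0/n), 1.0); // running average
}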



