POV-Ray : Newsgroups : povray.off-topic : WebGL
WebGL (Message 19 to 28 of 28)
From: Orchid Win7 v1
Subject: Re: WebGL
Date: 12 Jun 2016 09:48:33
Message: <575d6831$1@news.povray.org>
On 12/06/2016 02:30 PM, Orchid Win7 v1 wrote:
> In full:
>
> void TracePixel::JitterCameraRay(Ray& ray, DBL x, DBL y, size_t ray_number)
> {
>     DBL xjit, yjit, xlen, ylen, r;
>     Vector3d temp_xperp, temp_yperp, deflection;
>
>     r = camera.Aperture * 0.5;
>
>     Jitter2d(x, y, xjit, yjit);
>     xjit *= focalBlurData->Max_Jitter * 2.0;
>     yjit *= focalBlurData->Max_Jitter * 2.0;
>
>     xlen = r * (focalBlurData->Sample_Grid[ray_number].x() + xjit);
>     ylen = r * (focalBlurData->Sample_Grid[ray_number].y() + yjit);
>
>     // Deflect the position of the eye by the size of the aperture, and in
>     // a direction perpendicular to the current direction of view.
>
>     temp_xperp = focalBlurData->XPerp * xlen;
>     temp_yperp = focalBlurData->YPerp * ylen;
>
>     deflection = temp_xperp - temp_yperp;
>
>     ray.Origin += deflection;
>
>     // Deflect the direction of the ray in the opposite direction we deflected
>     // the eye position. This makes sure that we are looking at the same place
>     // when the distance from the eye is equal to "Focal_Distance".
>
>     ray.Direction *= focalBlurData->Focal_Distance;
>     ray.Direction -= deflection;
>
>     ray.Direction.normalize();
> }
>
> Good luck *ever* figuring out what the hell any of it means, of course...

I'm not entirely sure why that method is so complicated, but it 
*appears* the key part of the algorithm is this:

ray.Origin += deflection;
ray.Direction *= focus_distance;
ray.Direction -= deflection;
ray.Direction.normalize();

The aperture determines the maximum size of deflection, and the focus 
distance is mentioned above. Plugging these two into my shader, I seem 
to be able to get it to produce blurry images focused at a specific 
distance.



From: Orchid Win7 v1
Subject: Re: WebGL
Date: 12 Jun 2016 09:53:22
Message: <575d6952@news.povray.org>
On 12/06/2016 02:48 PM, Orchid Win7 v1 wrote:
> I'm not entirely sure why that method is so complicated, but it
> *appears* the key part of the algorithm is this:
>
> ray.Origin += deflection;
> ray.Direction *= focus_distance;
> ray.Direction -= deflection;
> ray.Direction.normalize();
>
> The aperture determines the maximum size of deflection, and the focus
> distance is mentioned above. Plugging these two into my shader, I seem
> to be able to get it to produce blurry images focused at a specific
> distance.

In case anybody cares:

float Rand(vec2 v)
{
     return fract(sin(dot(v.xy ,vec2(12.9898,78.233))) * 43758.5453);
}

float Rand(vec4 v)
{
     float a = Rand(v.xy);
     float b = Rand(v.zw);
     vec2 ab = vec2(a, b);
     float c = Rand(ab * v.xy);
     float d = Rand(ab * v.zw);
     vec2 cd = vec2(c, d);
     return Rand(ab * cd);
}



struct Ray
{
     vec3 S, D;
};

vec3 RayPoint(in Ray ray, in float t)
{
     return ray.S + ray.D*t;
}

Ray Camera(in vec2 uv)
{
     Ray ray;
     ray.S = vec3(0, 0, -5);
     ray.D = vec3(uv.x, uv.y, 1.0);

     const float Aperture = 0.2;
     const float FocusDistance = 18.0;

     float r1 = Rand(vec4(uv, 0, iGlobalTime));
     float r2 = Rand(vec4(uv, 1, iGlobalTime));
     float r3 = Rand(vec4(uv, 2, iGlobalTime));
     vec3 deflection = vec3(r1, r2, 0);
     deflection = Aperture*deflection;

     ray.S += deflection;
     ray.D *= FocusDistance;
     ray.D -= deflection;
     ray.D = normalize(ray.D);

     return ray;
}

struct Plane
{
     vec3 N;
     float D;
};

float IsectPlane(in Ray ray, in Plane plane)
{
     // NP = d
     // NP - d = 0
     // N(Dt + S) - d = 0
     // ND t + NS - d = 0
     // ND t = d - NS
     // t = (d - NS)/ND

     return (plane.D - dot(plane.N, ray.S)) / dot(plane.N, ray.D);
}

struct Sphere
{
     vec3 C;
     float R;
     float R2;
};

Sphere MakeSphere(vec3 center, float radius)
{
     return Sphere(center, radius, radius*radius);
}

float IsectSphere(in Ray ray, in Sphere sphere)
{
     // (P - C)^2 = r^2
     // (P - C)^2 - r^2 = 0
     // ((Dt + S) - C)^2 - r^2 = 0
     // (Dt + S - C)^2 - r^2 = 0
     // (Dt + V)^2 - r^2 = 0
     // D^2 t^2 + 2DVt + V^2 - r^2 = 0

     vec3 V = ray.S - sphere.C;
     float a = dot(ray.D, ray.D);
     float b = 2.0*dot(V, ray.D);
     float c = dot(V, V) - sphere.R2;

     float det = b*b - 4.0*a*c;
     if (det >= 0.0)
     {
         return (0.0 - b - sqrt(det))/(2.0*a);
     }
     else
     {
         return -1.0;
     }
}

float Illuminate(vec3 light, vec3 surface, vec3 normal)
{
     vec3 d = light - surface;
     float i = dot(normalize(d), normalize(normal));
     if (i < 0.0)
     {
         return 0.0;
     }
     return i;
}

vec2 MapScreen(vec2 xy)
{
     return (xy - iResolution.xy/2.0) / iResolution.y;
}

vec2 MapScreenExact(vec2 xy)
{
     return (xy - iResolution.xy/2.0) / iResolution.xy;
}

vec3 ColourGround(in vec3 surface, in vec3 normal)
{
     float u = floor(surface.x / 3.0);
     float v = floor(surface.z / 3.0);
     if (mod(u+v, 2.0) == 0.0)
     {
         return vec3(0.4, 0.4, 0.4);
     }
     else
     {
         return vec3(1.0, 1.0, 1.0);
     }
}

vec3 TraceRay(in Ray ray)
{
     Plane ground = Plane(vec3(0, 1, 0), -5.0);
     Sphere sphere1 = MakeSphere(vec3(MapScreenExact(iMouse.xy)*4.0, 0), 1.0);

     float groundT  = IsectPlane(ray, ground);
     float sphere1T = IsectSphere(ray, sphere1);

     int object = 0;

     if (groundT < 0.0 && sphere1T < 0.0)
     {
         object = 0;
     }

     if (groundT > 0.0 && sphere1T < 0.0)
     {
         object = 1;
     }

     if (groundT < 0.0 && sphere1T > 0.0)
     {
         object = 2;
     }

     if (groundT > 0.0 && sphere1T > 0.0)
     {
         if (groundT < sphere1T)
         {
             object = 1;
         }
         else
         {
             object = 2;
         }
     }

     if (object == 0)
     {
         return vec3(0, 0, 0);
     }

     vec3 surface, normal, colour;

     if (object == 1)
     {
         surface = RayPoint(ray, groundT);
         normal = ground.N;
         colour = ColourGround(surface, normal);
     }

     if (object == 2)
     {
         surface = RayPoint(ray, sphere1T);
         normal = surface - sphere1.C;
         colour = vec3(1, 0, 0);
     }

     float b1 = Illuminate(vec3(0, +10, 0), surface, normal);
     return colour*vec3(b1, b1, b1);
}

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
     vec4 prev = texture2D(iChannel0, fragCoord.xy / iResolution.xy);

     Ray cr = Camera(MapScreen(fragCoord.xy));
     vec3 colour = TraceRay(cr);

     float n = float(iFrame + 1);  // iFrame starts at 0; use iFrame+1 to avoid dividing by zero on the first frame
     fragColor = vec4(colour/n, 1) + prev*(1.0 - 1.0/n);
     fragColor = clamp(fragColor, vec4(0, 0, 0, 0), vec4(1, 1, 1, 1));
}



You'll need to configure this for frame averaging, by setting this up as 
the shader for Buffer A, and configuring iChannel0 = Buffer A. Then set 
the main shader to just render Buffer A to the screen.



From: scott
Subject: Re: WebGL
Date: 13 Jun 2016 03:13:05
Message: <575e5d01$1@news.povray.org>
>> vec3 CCOR = vec3(1,1,1);
>> vec3 colour = vec3(0,0,0);
>> for(int i=0;i<MAX_TRACE_DEPTH;i++)
>> {
>> isect = Raytrace();
>> colour += CCOR * isect.diffuse_colour;
>> CCOR *= isect.reflection_colour;
>> }
>
> I'm thinking about the associative/distributive property of the reals,
> though... If one object adds 4% blue and then reflects 95%, and the next
> object adds 6% yellow and reflects 50%, and the final object is green,
> you need
>
>   ((green * 50%) + 6% yellow) * 95% + 4% blue
>
> but the algorithm above gives
>
>   ((4% blue * 95%) + 6% yellow) * 50% + green

No, the algorithm does give the same result as you want; step through it 
with your example:

CCOR = 100%
col = black

1st intersect:
col = black + (100%)*4% blue = 4% blue
CCOR = 100% * 95% = 95%

2nd intersect:
col = 4% blue + (95%) * 6% yellow
CCOR = 95% * 50%

3rd intersect:
col = 4% blue + (95%) * 6% yellow + (95%*50%) * green
CCOR = 95% * 50% * 0%



From: scott
Subject: Re: WebGL
Date: 13 Jun 2016 03:47:38
Message: <575e651a$1@news.povray.org>
> You'll need to configure this for frame averaging, by setting this up as
> the shader for Buffer A, and configuring iChannel0 = Buffer A. Then set
> the main shader to just render Buffer A to the screen.

Once I realised that you need to set iChannel0 to Buffer A for both 
Buffer A and the main image, it worked a treat :-)



From: scott
Subject: Re: WebGL
Date: 20 Jun 2016 03:00:54
Message: <576794a6$1@news.povray.org>
> Copy & paste into the ShaderToy website and hit Go. You can click on the
> image to move the sphere around. (It doesn't follow your cursor exactly
> because of the perspective transformation.)

I finally got around to porting my WebGL version (which I had to write 
because shadertoy didn't support multiple buffers back then):

https://www.shadertoy.com/view/MsySzd

BufferB is used to store the camera viewing angles and the frame number 
that the viewing angles last stopped changing (so that Buffer A knows 
where to start averaging the frames from).

If it doesn't work on your GPU try reducing the number of SAMPLES (at 
the top of BufferA). The end result will be the same, it will just look 
a bit worse whilst rotating the camera.



From: Orchid Win7 v1
Subject: Re: WebGL
Date: 20 Jun 2016 13:59:00
Message: <57682ee4$1@news.povray.org>
On 20/06/2016 08:00 AM, scott wrote:
> I finally got around to porting my WebGL version (which I had to write
> because shadertoy didn't support multiple buffers back then):

Ah yes, I thought I remembered there being prior art in this space...

> BufferB is used to store the camera viewing angles and the frame number
> that the viewing angles last stopped changing (so the BufferA knows
> where to start averaging the frames from).

Interesting. I made a version of mine that only averages together the 
previous N frames, and set the camera to rotate constantly. By changing 
N, you can choose between grainy images or a weird motion blur. Looks 
vaguely like thermal imaging, actually...

> If it doesn't work on your GPU try reducing the number of SAMPLES (at
> the top of BufferA). The end result will be the same, it will just look
> a bit worse whilst rotating the camera.

You'll be unsurprised to hear that this breaks Opera.

Looking at some of the stuff people have done, you wonder why this 
amazing tech isn't in games...

...and then you realise it doesn't scale to non-trivial geometry. I'm 
still figuring out how the GPU actually works, but it *appears* that it 
works by executing all possible code paths, and just turning off the 
cores that don't take that branch. That's fine for 4 trivial primitives; 
I'm going to say it doesn't scale to hundreds of billions of objects.

Pity. It would be so cool...



From: scott
Subject: Re: WebGL
Date: 21 Jun 2016 06:58:55
Message: <57691def@news.povray.org>
>> If it doesn't work on your GPU try reducing the number of SAMPLES (at
>> the top of BufferA). The end result will be the same, it will just look
>> a bit worse whilst rotating the camera.
>
> You'll be unsurprised to hear that this breaks Opera.

What GPU do you have? Have you tried this in Chrome or IE?

> Looking at some of the stuff people have done, you wonder why this
> amazing tech isn't in games...
>
> ...and then you realise it doesn't scale to non-trivial geometry. I'm
> still figuring out how the GPU actually works, but it *appears* that it
> works by executing all possible code paths, and just turning off the
> cores that don't take that branch.

Yes, that's how I understand it too. Writing this:

if(some_condition)
  DoA();
else
  DoB();

Runs the same speed as this:

DoA();
DoB();

For "small" scenes though, this is still orders of magnitude faster than 
it would ever run on a CPU.

> That's fine for 4 trivial primitives;
> I'm going to say it doesn't scale to hundreds of billions of objects.
>
> Pity. It would be so cool...

I suspect if you started to modify and optimise the hardware to cope 
better with more dynamic branching and recursion etc, you would end up 
back with a CPU :-)



From: Orchid Win7 v1
Subject: Re: WebGL
Date: 21 Jun 2016 16:10:50
Message: <57699f4a$1@news.povray.org>
On 21/06/2016 11:58 AM, scott wrote:
>>> If it doesn't work on your GPU try reducing the number of SAMPLES (at
>>> the top of BufferA). The end result will be the same, it will just look
>>> a bit worse whilst rotating the camera.
>>
>> You'll be unsurprised to hear that this breaks Opera.
>
> What GPU do you have? Have you tried this Chrome or IE?

Apparently Opera is "widely known" for having rubbish WebGL support.

>> Looking at some of the stuff people have done, you wonder why this
>> amazing tech isn't in games...
>>
>> ...and then you realise it doesn't scale to non-trivial geometry. I'm
>> still figuring out how the GPU actually works, but it *appears* that it
>> works by executing all possible code paths, and just turning off the
>> cores that don't take that branch.
>
> Yes, that's how I understand it too. Writing this:
>
> if(some_condition)
> DoA();
> else
> DoB();
>
> Runs the same speed as this:
>
> DoA();
> DoB();
>
> For "small" scenes though, this is still orders of magnitude faster than
> it would ever run on a CPU.

The CPU may be superscalar, but having 4-vector float arithmetic in 
hardware, in parallel, on a bazillion cores has *got* to be faster. ;-)

>> That's fine for 4 trivial primitives;
>> I'm going to say it doesn't scale to hundreds of billions of objects.
>>
>> Pity. It would be so cool...
>
> I suspect if you started to modify and optimise the hardware to cope
> better with more dynamic branching and recursion etc, you would end up
> back with a CPU :-)

I don't know. I think the main thing about the GPU is that it's SIMD. My 
CPU has 4 cores; my GPU has nearer 400. I gather that each individual 
core is actually slightly *slower* than a CPU core - it's just that 
there's a hell of a lot of them. Also that memory access patterns are 
very predictable (until you do complex texture lookups), which enables 
the memory scheduling to have massive bandwidth with all the latency 
hidden away, so you have no pipeline stalls or cache misses to worry about.

Then again, I don't design GPUs for a living, so...

I suspect there's probably a way to render complex scenes in multiple 
passes such that you can do batch rendering. I'm not sure if it'll ever 
scale to realtime.

(Doesn't Blender or something have an optional unbiased rendering engine 
for the GPU?)



From: Orchid Win7 v1
Subject: Re: WebGL
Date: 14 Jul 2016 16:28:16
Message: <5787f5e0@news.povray.org>
On 06/06/2016 01:36 PM, scott wrote:
>> Sometimes I really wish I knew how to do this stuff for myself...
>
> Follow a tutorial on WebGL - or if you want to skip all the html/js
> boilerplate stuff you'll need to know, go straight to something like
> shadertoy. There are plenty of examples, and shadertoy now supports
> reading back pixels from previous frames, so you can do
> multi-frame-averaging for path-tracing amongst other effects.

Is there some sort of offline client program for ShaderToy? It would be 
really nice to not have to put up with the frailness of Opera, and to be 
able to easily *save* my source code, etc.

> Building a lens simulator (with real-time path-traced results) sounds
> feasible and very interesting.

I still hope to do this soon...

PS. Just for giggles, try running ShaderToy on your *phone*! It actually 
works... kinda...



From: scott
Subject: Re: WebGL
Date: 26 Jul 2016 08:30:12
Message: <579757d4$1@news.povray.org>
> Is there some sort of offline client program for ShaderToy? It would be
> really nice to not have to put up with the frailness of Opera, and to be
> able to easily *save* my source code, etc.

Not that I'm aware of, presumably just saving the web page doesn't work? 
Why don't you use a different browser?

> PS. Just for giggles, try running ShaderToy on your *phone*! It actually
> works... kinda...

Yes, I was surprised at that too; at first I didn't think the web site was 
actually running WebGL! The simple shaders run very smoothly.




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.