> In summary, it appears that implementing reflection is mathematically
> impossible. I may perhaps still be able to realise my dream of rendering
> depth of field effects realistically. (Although without a nice detailed
> scene to look at, it might be rather disappointing...)
Obviously it's not, because there are plenty of shadertoy examples
showing it:
https://www.shadertoy.com/view/4ssGWX
Surfaces that reflect *OR* refract are trivial:

```glsl
for (int i = 0; i < MAX_TRACE_DEPTH; i++)
{
    intersect = Trace(ray);
    if (REFLECT(intersect)) ray = ReflectRay(ray, intersect);
    if (REFRACT(intersect)) ray = RefractRay(ray, intersect);
}
```
To do surfaces that do both you need to choose randomly at each
intersection which ray to follow (weighted appropriately depending on
the material) and then average over a larger number of samples. You can
either do many samples per frame (but obviously this gets slow for
complex scenes), or what is commonly done is to average over many frames
(and reset the average if the camera is moved). Or both.
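The per-bounce choice can be sketched in scalar form (all the numbers below are invented for illustration, not from any real material):

```cpp
#include <random>

// Monte-Carlo sketch of "choose one path per bounce", with made-up
// numbers: a material that is 30% reflective. Each sample follows
// EITHER the reflected path (radiance 1.0 here) or the refracted path
// (radiance 0.2), chosen with probability equal to the material weight.
double average_radiance(int samples, unsigned seed)
{
    const double k_reflect = 0.3;
    const double L_reflected = 1.0, L_refracted = 0.2;

    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> u(0.0, 1.0);

    double sum = 0.0;
    for (int i = 0; i < samples; ++i)
        sum += (u(rng) < k_reflect) ? L_reflected : L_refracted;

    // The average converges to the exact blend 0.3*1.0 + 0.7*0.2 = 0.44.
    return sum / samples;
}
```

Halving the noise costs roughly four times the samples, which is why accumulating over frames rather than per frame is so attractive.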
---
On 10/06/2016 07:59 AM, scott wrote:
>> In summary, it appears that implementing reflection is mathematically
>> impossible. I may perhaps still be able to realise my dream of rendering
>> depth of field effects realistically. (Although without a nice detailed
>> scene to look at, it might be rather disappointing...)
>
> Obviously it's not, because there are plenty of shadertoy examples
> showing it:
>
> https://www.shadertoy.com/view/4ssGWX
Interesting. On my browser, that just crashes.
> Surfaces that reflect *OR* refract are trivial:
>
> ```glsl
> for (int i = 0; i < MAX_TRACE_DEPTH; i++)
> {
>     intersect = Trace(ray);
>     if (REFLECT(intersect)) ray = ReflectRay(ray, intersect);
>     if (REFRACT(intersect)) ray = RefractRay(ray, intersect);
> }
> ```
The difficulty is figuring out what surface was hit just from the
intersection coordinates. And perhaps a perfect reflection isn't so
hard, but I wanted to have the colour change slightly on each
reflection... but I can't figure out how to stack up the colour changes
until you get to the end of the reflection chain, and then unstack them
again.
> To do surfaces that do both you need to choose randomly at each
> intersection which ray to follow (weighted appropriately depending on
> the material) and then average over a larger number of samples. You can
> either do many samples per frame (but obviously this gets slow for
> complex scenes), or what is commonly done is to average over many frames
> (and reset the average if the camera is moved). Or both.
Still trying to work out how you "average over several frames".
ShaderToy is great fun, but so *utterly* undocumented... it's quite hard
to figure out how to do anything.
---
>> I'm sure you've all seen a white sphere with some coloured lights. :-P
>
> OK, here ya go:
Welcome to the dark side... :-)
---
> That sounds quite complex. (In particular, it seems to require me to
> actually design a real lens assembly, and implement real refraction.)
Yes, I realised that later too. Google didn't seem to throw up any real
data on proper lens design (for pretty obvious reasons), only very
simple examples. Still, it might be interesting, and you never know: if
you made it slick enough, one of the big lens manufacturers might show
some interest in you or your code.
> I was thinking more along the lines of a camera entity that fires rays
> in a pattern that matches a theoretical ideal lens. But I need to figure
> out the equations for that first...
Look through the POV source? :-)
> Yeah, not just a random number per pixel, but *multiple* random numbers!
> Multiple, statistically-independent numbers... This is not trivial.
Exactly, poor RNGs in these sorts of things can cause some bizarre
artefacts.
---
>> Obviously it's not, because there are plenty of shadertoy examples
>> showing it:
>>
>> https://www.shadertoy.com/view/4ssGWX
>
> Interesting. On my browser, that just crashes.
Odd. But then you did say half the examples on shadertoy didn't work on
whatever browser you had. Tried Chrome/IE?
> The difficulty is figuring out what surface was hit just from the
> intersection coordinates.
You could have an intersection struct with surface normal and material
ID in it as well.
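For example (a hypothetical record; the field names are invented, not from any particular shader):

```cpp
// A hypothetical intersection record (field names invented): having
// Trace() return the normal and a material ID means the shading loop
// never has to re-derive "what did I hit?" from the bare coordinates.
struct Vec3 { float x, y, z; };

struct Isect {
    bool  hit;          // did the ray hit anything at all?
    float t;            // ray parameter at the hit point
    Vec3  position;     // world-space hit point
    Vec3  normal;       // surface normal at the hit
    int   material_id;  // index into a small table of materials
};
```

The shading loop can then switch on `material_id` instead of reverse-engineering the surface from the intersection coordinates.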
> And perhaps a perfect reflection isn't so
> hard, but I wanted to have the colour change slightly on each
> reflection... but I can't figure out how to stack up the colour changes
> until you get to the end of the reflection chain, and then unstack them
> again.
You need to keep track of a "cumulative coefficient of reflection" value
as you go (make it a vec3 if you want coloured reflections):

```glsl
vec3 CCOR = vec3(1, 1, 1);
vec3 colour = vec3(0, 0, 0);
for (int i = 0; i < MAX_TRACE_DEPTH; i++)
{
    isect = Raytrace();
    colour += CCOR * isect.diffuse_colour;
    CCOR *= isect.reflection_colour;
}
```
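In scalar form (with invented numbers) the running product multiplies out to the same nested expression you would write by hand, so nothing has to be unstacked at the end; a quick sketch:

```cpp
// Scalar sketch of the loop above (numbers invented): surface 1 adds
// colour c[0] and reflects r[0] of what it sees, surface 2 adds c[1]
// and reflects r[1], surface 3 adds c[2]. The loop accumulates
//   c[0] + r[0]*c[1] + r[0]*r[1]*c[2]
// which, by distributivity, equals the hand-written nested form
//   ((c[2] * r[1]) + c[1]) * r[0] + c[0]
double shade_loop(const double c[3], const double r[3])
{
    double ccor = 1.0, colour = 0.0;
    for (int i = 0; i < 3; ++i) {
        colour += ccor * c[i];
        ccor   *= r[i];
    }
    return colour;
}

double shade_nested(const double c[3], const double r[3])
{
    return ((c[2] * r[1]) + c[1]) * r[0] + c[0];
}
```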
---
>> Interesting. On my browser, that just crashes.
>
> Odd. But then you did say half the examples on shadertoy didn't work on
> whatever browser you had. Tried Chrome/IE?
Yeah, it seems the smaller examples run just fine, but bigger ones freak
it out. In particular, browsing the shaders seems to try to run too many
shaders at once and break Opera. (And once it's broken, only restarting
the browser will fix it. It just doesn't *try* any more.)
>> The difficulty is figuring out what surface was hit just from the
>> intersection coordinates.
>
> You could have an intersection struct with surface normal and material
> ID in it as well.
Material ID is something I hadn't thought of. (I'm trying to do a
checkerboard for the ground, which makes calculating the colour...
interesting.)
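One common way to fold a checkerboard into the material-ID scheme (a sketch, with invented names): derive the ID from the integer parity of the hit position.

```cpp
#include <cmath>

// One common way to get a checkerboard from a ground-plane hit (names
// invented): floor the hit coordinates and use the parity of their sum
// as the material ID, 0 for one tile colour, 1 for the other.
int checker_material(double x, double z)
{
    long ix = static_cast<long>(std::floor(x));
    long iz = static_cast<long>(std::floor(z));
    return static_cast<int>((ix + iz) & 1);
}
```

Using floor (rather than truncation) keeps the pattern consistent across negative coordinates.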
> You need to keep track of a "cumulative coefficient of reflection" value
> as you go (make it a vec3 if you want coloured reflections):
>
> ```glsl
> vec3 CCOR = vec3(1, 1, 1);
> vec3 colour = vec3(0, 0, 0);
> for (int i = 0; i < MAX_TRACE_DEPTH; i++)
> {
>     isect = Raytrace();
>     colour += CCOR * isect.diffuse_colour;
>     CCOR *= isect.reflection_colour;
> }
> ```
I'm thinking about the associative/distributive property of the reals,
though... If one object adds 4% blue and then reflects 95%, and the next
object adds 6% yellow and reflects 50%, and the final object is green,
you need

((green * 50%) + 6% yellow) * 95% + 4% blue

but the algorithm above gives

((4% blue * 95%) + 6% yellow) * 50% + green

which I don't think simplifies to the same result. You'd either have to
trace all the rays backwards (i.e., from scene to camera), which is
laughably inefficient, or somehow store a history (array?) of all the
intermediate steps so you can reverse them...
...then again, I think I'll just give up on reflection, and see if I can
make the lens work with a checkerboard. :-P
---
On 10/06/2016 08:33 AM, scott wrote:
>> That sounds quite complex. (In particular, it seems to require me to
>> actually design a real lens assembly, and implement real refraction.)
>
> Yes I realised that later too, Google didn't seem to throw up any real
> data on proper lens design (for pretty obvious reasons), only very
> simple examples. Still it might be interesting, and you never know if
> you made it slick enough one of the big lens manufacturers may show some
> interest in you or your code.
Pfft. I doubt anything I'll ever do will be "slick". But maybe I can
make something interesting.
(Would be nice if ShaderToy would let you add sliders to control your
shader!)
>> I was thinking more along the lines of a camera entity that fires rays
>> in a pattern that matches a theoretical ideal lens. But I need to figure
>> out the equations for that first...
>
> Look through the POV source? :-)
Any idea where in the 25,000,000 LoC I should start looking?
(You never know, maybe there's a file named camera.c or something...)
>> Yeah, not just a random number per pixel, but *multiple* random numbers!
>> Multiple, statistically-independent numbers... This is not trivial.
>
> Exactly, poor RNGs in these sorts of things can cause some bizarre
> artefacts.
There's a couple of StackOverflow questions about this. It seems the
best results are obtained by using an integer hash function... yet
ShaderToy seems to not support bitwise integer operations, so that's
kind of not an option.
That said, there's a widely used sin-based method, which gives awful
results. But if you iterate it a few times... it's not *too* bad.
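The sin-based one-liner in question is usually some variant of `fract(sin(dot(p, vec2(12.9898, 78.233))) * 43758.5453)`; transcribed into C++ for illustration (the constants are folklore, not anything principled):

```cpp
#include <cmath>

// C++ transcription of the widely-circulated GLSL hash
//   fract(sin(dot(p, vec2(12.9898, 78.233))) * 43758.5453)
double fract(double v) { return v - std::floor(v); }

double sin_hash(double x, double y)
{
    return fract(std::sin(x * 12.9898 + y * 78.233) * 43758.5453);
}

// Iterating the hash, feeding the output back in as a coordinate,
// noticeably decorrelates neighbouring pixels.
double sin_hash_iterated(double x, double y, int rounds)
{
    double h = sin_hash(x, y);
    for (int i = 1; i < rounds; ++i)
        h = sin_hash(h, y + i);
    return h;
}
```

Note the GLSL version runs in single precision, where the giant multiplier loses bits much faster than it does here; that is one source of the "awful results".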
Still waiting on how you average consecutive frames.
---
On 10/06/2016 07:06 PM, Orchid Win7 v1 wrote:
> (You never know, maybe there's a file named camera.c or something...)
There is, in fact, a camera.cpp file.
OMG, look at this:
/// POV-Ray is based on the popular DKB raytracer version 2.12.
/// DKBTrace was originally written by David K. Buck.
/// DKBTrace Ver 2.0-2.12 were written by David K. Buck & Aaron A. Collins.
I am *almost certain* that the man I bought my flat from was called
Aaron Collins... o_O
---
On 10/06/2016 08:24 AM, Orchid Win7 v1 wrote:
> Still trying to work out how you "average over several frames".
> ShaderToy is great fun, but so *utterly* undocumented... it's quite hard
> to figure out how to do anything.
OK, I think I've cracked it:
If you click the "new tab" button, it gives you another shader that
renders to Buffer A. If you then set iChannel0 to be Buffer A, you can
do a texture2D(iChannel0, coords) to read the previous frame's pixel value.
Now you just need to set the main image shader to be a texture2D()
lookup on Buffer A, and you're golden.
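The blend itself is just an incremental mean; in the shader it would be something like `mix(previousAverage, newSample, 1.0 / float(iFrame + 1))`, which in scalar form is:

```cpp
// Running mean over frames without storing them all: after frame n,
//   avg_n = avg_{n-1} + (sample_n - avg_{n-1}) / n
// which is what mix(previous, sample, 1.0/n) computes in the shader.
double update_average(double avg, double sample, int n)
{
    return avg + (sample - avg) / n;
}
```

Resetting when the camera moves is just a matter of starting n back at 1.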
Not what you'd call "obvious"...
---
On 10/06/2016 07:10 PM, Orchid Win7 v1 wrote:
> On 10/06/2016 07:06 PM, Orchid Win7 v1 wrote:
>> (You never know, maybe there's a file named camera.c or something...)
>
> There is, in fact, a camera.cpp file.
Sadly, it contains the data structures for *describing* the camera, but
nothing whatsoever to do with *casting rays*.
It seems *that* is in tracepixel.cpp. Looking into it further, it seems
that TracePixel::CreateCameraRay() has a giant switch-block for every
possible camera type, but they all end with JitterCameraRay(). This,
seemingly, is where the focal blur stuff happens.
In full:
```cpp
void TracePixel::JitterCameraRay(Ray& ray, DBL x, DBL y, size_t ray_number)
{
    DBL xjit, yjit, xlen, ylen, r;
    Vector3d temp_xperp, temp_yperp, deflection;

    r = camera.Aperture * 0.5;

    Jitter2d(x, y, xjit, yjit);
    xjit *= focalBlurData->Max_Jitter * 2.0;
    yjit *= focalBlurData->Max_Jitter * 2.0;

    xlen = r * (focalBlurData->Sample_Grid[ray_number].x() + xjit);
    ylen = r * (focalBlurData->Sample_Grid[ray_number].y() + yjit);

    // Deflect the position of the eye by the size of the aperture, and in
    // a direction perpendicular to the current direction of view.
    temp_xperp = focalBlurData->XPerp * xlen;
    temp_yperp = focalBlurData->YPerp * ylen;
    deflection = temp_xperp - temp_yperp;

    ray.Origin += deflection;

    // Deflect the direction of the ray in the opposite direction we deflected
    // the eye position. This makes sure that we are looking at the same place
    // when the distance from the eye is equal to "Focal_Distance".
    ray.Direction *= focalBlurData->Focal_Distance;
    ray.Direction -= deflection;
    ray.Direction.normalize();
}
```
Good luck *ever* figuring out what the hell any of it means, of course...
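For what it's worth, it looks like a thin-lens trick: shift the eye sideways across the aperture, then bend the direction back so the ray still passes through the same point in the focal plane. A stripped-down sketch (the helper names here are mine, not POV-Ray's):

```cpp
#include <cmath>

struct Vec3 {
    double x, y, z;
    Vec3 operator+(Vec3 o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(Vec3 o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
};

// Simplified version of what JitterCameraRay appears to do: move the
// eye by "deflection" (a jittered offset across the aperture), then
// aim the ray back at the point the original ray would have hit at
// focal_distance. Assumes dir starts out unit-length.
void focal_blur(Vec3& origin, Vec3& dir,
                Vec3 deflection, double focal_distance)
{
    origin = origin + deflection;
    dir = dir * focal_distance - deflection;
    double len = std::sqrt(dir.x * dir.x + dir.y * dir.y + dir.z * dir.z);
    dir = dir * (1.0 / len);
}
```

Points at Focal_Distance land in the same place for every jittered sample, so they stay sharp; everything nearer or farther smears across samples, which is exactly the focal blur.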