Newsgroups: povray.binaries.images
  Re: A povr fork test image for new dens_object pattern.  
From: Samuel B 
Date: 7 Sep 2023 18:35:00
Message: <web.64fa4fc2b1d4735c16bed5696e741498@news.povray.org>
William F Pokorny <ano### [at] anonymousorg> wrote:
> On 9/6/23 19:30, Samuel B. wrote:
> > William F Pokorny <ano### [at] anonymousorg> wrote:
> >> Of course quite a bit longer with dens_object. No getting around there
> >> being a lot more work getting done. Let's see. A render of just the R -
> >> no normal{} at all on my old 2 core i3:
> >>
> >>     1.50user 0.03system 0:01.26elapsed
> >>
> >> and with a dens_object normal 'similar' to that posted prior.
> >>
> >>     69.96user 0.11system 0:18.82elapsed
> >
> > I don't know what any of that means 🙁 Are you using Linux or something?
[...]
>
> Yes, Ubuntu, and I trust the unix/linux timers over POV-Ray's internal
> measures because they wrap POV-Ray's processes / sub-processes.

Ah, ok. I was starting to wonder if there was a setting I missed.

I'll have to switch over to some flavor of Linux one of these days, since
Microsoft is progressively making Windows worse in every way. (Bloat is probably
the biggest issue. I suspect Windows uses too many resources... Currently, 44%
of my 8GB of memory is being used, and I only have Firefox open! For some
reason, several Edge instances are running, plus a bunch of other things. It's
crazy.)

> Aside: The povr fork has a normal{micro} perturbation method which
> really only works as a final result with AA. This sounds a little like
> the clipka code experiment you'd like to do.

Ah, so is it like crand, but for normals? That would probably be perfect,
especially if samples multiply with focal blur.
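
In stock POV-Ray, the closest stand-in I can think of is pairing finish-level
crand with a very fine normal that only settles down under AA. Just a rough
sketch of that idea, not the povr micro method:

    // Rough stand-in using standard POV-Ray features (not povr's normal{micro}):
    // crand adds per-pixel random darkening in the finish, and a tiny-scale
    // bumps normal only averages out cleanly once anti-aliasing is enabled.
    sphere { 0, 1
      pigment { rgb <0.9, 0.7, 0.3> }
      finish  { specular 0.4 crand 0.05 }  // random grain, per pixel
      normal  { bumps 0.3 scale 0.001 }    // very fine perturbation; needs AA
    }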

> Shading... This is something about which I don't know very much. I know
> there are shaders available in GPU hardware which can fake effects and
> do certain limited programming-like stuff, but that's about it.

I was referring to shading in the general sense, as in what POV-Ray does under
the hood. Being an end user, I only get to change options exposed/implemented by
the SDL.

Ideally, I think a future version could benefit from being more like Blender's
Cycles, where you get near-absolute control over diffuse, glossy reflections,
emission, subsurface scattering, etc. (Blender uses nodes for this, so it would
be different in POV-Ray.) Each such feature could receive value and pattern
inputs, while still being simple to use when you don't want to go overboard. In
POV, we can already do a lot with texture_maps, but they tend to be slower to
render, even for simple things like having the specular roughness change with a
pattern.
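
For instance, varying roughness with a pattern currently means blending two
complete textures, something like this (a quick sketch; the identifiers and
pattern choice are just placeholders):

    // Blend two whole textures with a bozo pattern just to vary roughness.
    // Workable, but heavier than varying a single finish value would be.
    #declare T_Smooth = texture {
      pigment { rgb <0.8, 0.8, 0.8> }
      finish  { specular 0.7 roughness 0.002 }
    }
    #declare T_Rough = texture {
      pigment { rgb <0.8, 0.8, 0.8> }
      finish  { specular 0.7 roughness 0.08 }
    }
    plane { y, 0
      texture {
        bozo
        texture_map {
          [0.0 T_Smooth]
          [1.0 T_Rough]
        }
        scale 0.5
      }
    }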

> As for mapping to arbitrary shapes - a tough problem. Probably easier to
> create a mesh and work with that.

Yeah, real shapes are always better overall.

> Aside: I've played a little with df3 environments into which arbitrary
> shapes can be placed for pigment / normal mapping of a sort. However,
> that approach has the non-trivial, practical issue of very large df3
> sizes accompanying those now mapped / painted shapes.
>
> Bill P.
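
(Side note for anyone following along: in stock POV-Ray a df3 can already
drive a pigment directly through the density_file pattern. A minimal sketch,
with a made-up filename:

    // density_file maps the df3 into the unit cube <0,0,0>..<1,1,1>,
    // so it usually needs scaling/translating onto the target shape.
    pigment {
      density_file df3 "painted_shape.df3" interpolate 1
      color_map {
        [0.0 rgb <0.10, 0.10, 0.10>]
        [1.0 rgb <1.00, 0.80, 0.20>]
      }
    }

The size problem Bill mentions is the real catch, though.)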

There is a way to 'grow' signed distance functions (SDFs) and then render them
at resolutions higher than the initial array size. When you're making arbitrary
shapes (say, with a 3D voxel painter or such), the result might look bumpy
(like packed spheres) near the limit of the array's resolution. But when you
know the shapes you want in advance, it can be very precise.

I can't really explain it better at this time, but if your system supports
hardware shaders, check out these 2D examples:
https://www.shadertoy.com/view/4sK3WK
https://www.shadertoy.com/view/MdGGWt

It's probably not ideal for most things.

Sam

