Christopher James Huff <cja### [at] earthlinknet> wrote in
news:cja### [at] netplexaussieorg:
> The source code should compile on any Linux or Mac OS X machine,
> it is no harder than installing a binary.
>
I can understand the effort required to provide binaries for multiple
platforms... But ...please reread that statement with your user hat on.
Christopher James Huff wrote:
>
> > I for example don't really understand why this patch is a
> > separate object and not a special function for use in isosurfaces (which
> > would have a lot of advantages for the user).
>
> I abandoned that approach early on. An isosurface function would make
> many of the optimizations used impossible, make a manually specified
> container object mandatory, etc. Other than those blob2-specific
> optimizations, it probably works very similarly to the isosurface method
> without the evaluate option.
This is exactly what I meant when suggesting better documentation. I
still fail to see which optimizations would not be possible in a custom
isosurface function (the container is not really important if the function
is fast far away from the components thanks to optimization). Descriptions
of the techniques used will be the key to understanding the strengths (and
possible weak points) of the patch.
I think Tom is quite right: you don't seem to look at it enough from the
user's side. From your point of view documentation might seem unnecessary,
but it would be immensely important. Try writing something under the
premise:
"In what way is the blob2 better than other solutions (existing blob shape
and isosurface blobbing solutions) and how does the patch accomplish
this."
Christoph
--
POV-Ray tutorials, include files, Sim-POV,
HCR-Edit and more: http://www.tu-bs.de/~y0013390/
Last updated 17 Jun. 2003 _____./\/^>_*_<^\/\.______
In article <Xns### [at] 204213191226>,
Tom Galvin <tom### [at] imporg> wrote:
> > The source code should compile on any Linux or Mac OS X machine,
> > it is no harder than installing a binary.
>
> I can understand the effort required to provide binaries for multiple
> platforms... But ...please reread that statement with your user hat on.
Maybe a clarification is needed: providing command-line binaries would
not make it any easier.
It is really true: installing from source is a matter of typing a
slightly different command at a terminal. This is more work than
"installing" the GUI version (which on the Mac consists of putting the
main folder somewhere convenient), but as I said, I can't help that.
Making a binary distribution for the CLI version would be a fairly large
amount of work (larger than several patches I've done), since I'd have to
figure out how to replicate the install process the makefile uses, and it
would not make installation any easier.
--
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tagpovrayorg
http://tag.povray.org/
In article <3F193F21.DC20B444@gmx.de>,
Christoph Hormann <chr### [at] gmxde> wrote:
> "In what way is the blob2 better than other solutions (existing blob shape
> and isosurface blobbing solutions) and how does the patch accomplish
> this."
Existing blob shape: the blob2 object is more flexible (having more
component types), and uses a different falloff function that gives
smoother blobs. Look at the pictures I've put up demonstrating the
difference. The old blob looks like lumps covered in goo, the blob2
looks like it's just the goo.
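(The thread never states either falloff formula. As an illustration of the smoothness difference described above, here is a sketch comparing the classic blob density (1 - r^2)^2, which is only C1 at the component boundary, with a hypothetical higher-order falloff (1 - r^2)^3 that is C2 there. Whether blob2 uses this exact function is an assumption, not something the post confirms.)

```python
# Sketch: why a higher-order falloff can give smoother blobs.
# f1 is the classic blob density (1 - r^2)^2; f2 = (1 - r^2)^3 is a
# hypothetical smoother alternative (blob2's actual formula is not given here).

def f1(r):
    """Classic blob falloff: C1 at the boundary r = 1."""
    return (1 - r * r) ** 2 if r < 1 else 0.0

def f2(r):
    """Higher-order falloff: C2 at the boundary r = 1."""
    return (1 - r * r) ** 3 if r < 1 else 0.0

def second_derivative(f, r, h=1e-4):
    """Central-difference estimate of f''(r)."""
    return (f(r - h) - 2 * f(r) + f(r + h)) / (h * h)

# Both reach full density at the center and zero at the boundary...
assert f1(0.0) == 1.0 and f2(0.0) == 1.0
assert f1(1.0) == 0.0 and f2(1.0) == 0.0

# ...but f1's second derivative stays large approaching r = 1 (a curvature
# jump that shows up as a crease in the shading), while f2's goes to zero.
print(second_derivative(f1, 0.999), second_derivative(f2, 0.999))
```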
Isosurfaces: the main advantage is that it is faster. It uses
optimizations that can't be done in even a hard-coded isosurface
function. For example, it collects the components that influence a ray,
and uses only those when searching for the intersection. If you have a
blob2 with hundreds of components, but only two components affect the
current intersection being tested, only those two components will get
evaluated.
In addition to this, the blob2 uses this list to automatically figure
out the beginning and end of the interval to check for intersections.
Not only do you not have to specify a container, the blob2 algorithm
comes up with more accurate bounds than you could specify anyway, using
the bounding shapes of individual components.
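(A minimal sketch of the per-ray culling and interval derivation described above. All names and the bounding-sphere representation are my assumptions for illustration; they are not the patch's actual data structures.)

```python
# Sketch of per-ray component culling: test each component's bounding
# sphere against the ray once, keep only the components that can influence
# it, and derive the root-search interval from the surviving hits.
import math

def ray_sphere_interval(origin, direction, center, radius):
    """Return the (t_near, t_far) interval where a unit-direction ray
    overlaps a sphere, or None if it misses."""
    oc = [o - c for o, c in zip(origin, center)]
    b = sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c
    if disc < 0:
        return None
    root = math.sqrt(disc)
    return (-b - root, -b + root)

def collect_components(origin, direction, components):
    """Keep only components whose bounding sphere the ray touches,
    together with the union of their intervals along the ray."""
    active = []
    t0, t1 = math.inf, -math.inf
    for center, radius in components:
        hit = ray_sphere_interval(origin, direction, center, radius)
        if hit is not None:
            active.append((center, radius))
            t0, t1 = min(t0, hit[0]), max(t1, hit[1])
    return active, (t0, t1)

# A blob of many components, but the ray only passes near two of them:
components = [((float(i), 10.0, 0.0), 1.0) for i in range(100)]
components += [((0.0, 0.0, 3.0), 1.0), ((0.0, 0.0, 5.0), 1.0)]
active, (t0, t1) = collect_components((0.0, 0.0, 0.0), (0.0, 0.0, 1.0),
                                      components)
print(len(active), t0, t1)  # only 2 of 102 components need evaluating
```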
It also seems to give smoother results at low accuracy. Artifacts do
show up, but the isosurface shape gives a more visible "stepping". This
may just be an illusion, a bug with the isosurface, or something I'm
doing with blob2 that the isosurface doesn't. It might be an effect of
the interval checking mentioned above; it is like having a more
irregular container.
--
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tagpovrayorg
http://tag.povray.org/
Christopher James Huff <cja### [at] earthlinknet> wrote in
news:cja### [at] netplexaussieorg:
>
> It is really true, installing from source is a matter of typing a
> slightly different command at a terminal.
Assuming that the compile is successful, assuming that dependencies are
local and the correct version, assuming that the user knows what a compiler
is, assuming that a compiler is even installed on the system....
Christopher James Huff wrote:
>
> [some more explanations]
Well, it looks like a good start. Illustrate it with some images, some
diagrams and formulas of the falloff functions, some render times for
comparison, and a syntax summary, and you already have quite a helpful
addition for the user.
> Isosurfaces: the main advantage is that it is faster. It uses
> optimizations that can't be done in even a hard-coded isosurface
> function. For example, it collects the components that influence a ray,
> and uses only those when searching for the intersection. If you have a
> blob2 with hundreds of components, but only two components affect the
> current intersection being tested, only those two components will get
> evaluated.
Well, in functions you could do the same on a per-point basis instead of
a per-ray basis and add some caching for when the next point is near the
previous one. Surely it would be slower, but as I said, having it in
isosurfaces would also have some serious advantages.
Note that hand-coded isosurface functions for blobbing or CSGing many
components scale extremely badly; even if an internal function for this
were not as fast as your new shape, it would be far faster than the
manual approach.
Christoph
--
POV-Ray tutorials, include files, Sim-POV,
HCR-Edit and more: http://www.tu-bs.de/~y0013390/
Last updated 17 Jun. 2003 _____./\/^>_*_<^\/\.______
In article <Xns### [at] 204213191226>,
Tom Galvin <tom### [at] imporg> wrote:
> > It is really true, installing from source is a matter of typing a
> > slightly different command at a terminal.
>
> Assuming that the compile is successful, assuming that dependencies are
> local and the correct version, assuming that the user knows what a compiler
> is, assuming that a compiler is even installed on the system....
If someone can install the official command line version (which is
needed anyway before doing this), the dependencies are already taken
care of. A binary install would involve more time and work for
assembling and testing, and for very little benefit, especially
considering the level of interest I'm seeing.
--
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tagpovrayorg
http://tag.povray.org/
In article <3F1A7B17.DD5DBD0D@gmx.de>,
Christoph Hormann <chr### [at] gmxde> wrote:
> Well, it looks like a good start. Illustrate it with some images, some
> diagrams and formulas of the falloff functions, some render times for
> comparison, and a syntax summary, and you already have quite a helpful
> addition for the user.
Demanding, aren't we?
Considering the lack of interest I'm seeing in this, I'm putting this
patch on the back burner; there won't be another MP+ release until I have
more patches moved over. When that happens, there will be documentation
of the syntax and some more sample scenes, and the documentation of the
source code will be improved. In the meantime, I have other projects I
need to attend to first.
> Well, in functions you could do the same on a point basis instead of ray
> basis and add some caching if the next point is near the old one. Surely
> it will be slower but as i said having it in isosurfaces would also have
> some serious advantages.
Caching is useless here... at least, I see no way of applying it that
doesn't just add overhead. And without ray info, you can't do the same
thing. The point of doing it on a per-ray basis is that you can take a
fairly expensive computation (the bounds test/component collection
stage) and use it to optimize a large number of expensive computations
(all the point evaluations needed to find the intersections with the
ray) that would otherwise cost many times the first calculation. Drop that
and you're back to looking at every single component for every point
evaluated. You could still derive some benefit from a hierarchical
bounding scheme, but so could the existing algorithm.
There will be a blob2 pattern. This will not be able to use these
optimizations either, but like any other pattern, you will be able to
use it in isosurfaces. But I have good reasons for not doing it this way
for the blob2 primitive.
> Note that handcoded isosurface functions for blobbing or CSGing many
> components scale extremely badly, even if an internal function for this
> would not be as fast as your new shape it would be ways faster than the
> manual approach.
By scaling, you appear to mean performance with increasing numbers of
components... removing these optimizations would make the order of the
algorithm equal to that of hand-coded functions: performance would
deteriorate linearly as the number of components increases. You would
only get the benefits of compiled functions (plus the fact that the
functions themselves are more optimized).
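(A back-of-the-envelope model of this scaling argument, with made-up but representative numbers; this is my toy illustration, not the patch's code.)

```python
# Toy model of the scaling argument: a hand-coded isosurface function must
# sum every component at every sample point, while the per-ray culled
# version pays a one-time bounds pass and then touches only the active
# components. All counts below are illustrative assumptions.

N_COMPONENTS = 500     # components in the blob
ACTIVE = 2             # components actually influencing this ray (assumed)
SAMPLES_PER_RAY = 40   # density evaluations the root solver needs (assumed)

# Hand-coded function: cost grows linearly with the total component count.
naive_evals = N_COMPONENTS * SAMPLES_PER_RAY

# Per-ray culling: one bounds test per component, then the solver only
# evaluates the active ones at each sample.
culled_evals = N_COMPONENTS + ACTIVE * SAMPLES_PER_RAY

print(naive_evals, culled_evals)  # 20000 vs 580 evaluations for this ray
```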
--
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tagpovrayorg
http://tag.povray.org/
Christopher James Huff wrote:
>
> > Well, it looks like a good start. Illustrate it with some images, some
> > diagrams and formulas of the falloff functions, some render times for
> > comparison, a syntax summary and you already have a quite helpful addition
> > for the user.
>
> Demanding, aren't we?
I'd rather say suggesting. If you don't want to go this way I won't urge
you, but you were the one who wondered about the lack of interest.
Christoph
--
POV-Ray tutorials, include files, Sim-POV,
HCR-Edit and more: http://www.tu-bs.de/~y0013390/
Last updated 17 Jun. 2003 _____./\/^>_*_<^\/\.______
Christopher James Huff <cja### [at] earthlinknet> wrote in
news:cja### [at] netplexaussieorg:
>
> If someone can install the official command line version (which is
> needed anyway before doing this), the dependencies are already taken
> care of. A binary install would involve more time and work for
> assembling and testing, and for very little benefit, especially
> considering the level of interest I'm seeing.
>
I used the official binary for my Red Hat Linux box. I don't want you to
make a binary for me. I am capable of getting it running, but I don't have
the time to play with it. I am just giving you the user's perspective.