  Re: 3-D printing via 3D SLICER app-- step by step  
From: Kenneth
Date: 7 Apr 2024 18:50:00
Message: <web.6613217fd4c4570a91c33a706e066e29@news.povray.org>
"Bald Eagle" <cre### [at] netscapenet> wrote:
> "Kenneth" <kdw### [at] gmailcom> wrote:
>
> In some ways, I feel that this entire object-slicing-to-3D-SLICER idea is just
> an interim step to the POV-ray 3-D printing problem... waiting for some clever
> fellow to take the image-slices and instead create a triangle mesh (and .stl
> file) *directly* from the image pixels. Like POV-ray does with height_fields
> from a single image, but 'stitching together' all the triangles from all the
> slices. Then we would have a direct-to-.stl solution!
>
> [BE:]
> Notice also that you don't have any holes in a heightfield.
> Try to make a surface from a photo of Swiss cheese, and you'll begin to
> appreciate what the problem is.
>
> In 3D it's even more challenging, because you have internal voids that
> you need to find and be aware of.
> [snip]
> So, there are a few challenges that need to be addressed - the first is
> where is the object and where is it not?  Trying to do that with trace() is
> going to be fraught with problems.
> [snip]

Agreed. Your comments are a masterful analysis of the problems involved...and so
quickly! You amaze me.

I was thinking more along the lines of taking the already-made slice images and
using *those* as the basis for further pixel-to-triangle conversion (somehow),
thereby eliminating the scanning problems inherent in tracing the actual object
with inside/outside tests or whatever. The images have nicely-defined
black-to-white transitions, even for holes, undercuts, and internal voids.
Further processing of those images could(?) get us a nicely-defined *shell* of
white pixels at all the required locations-- for the then-MAJOR task of
conversion to triangles. That's my half-formed (half-baked?!) idea so far.
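
Just to make that half-baked idea a bit more concrete, here is a rough SDL sketch of what such a "shell detection" pass over one slice image might look like. Everything specific in it is assumed for illustration only: the file name slice_001.png, the 100-pixel resolution, the 0.5 threshold for "white", and the array names.

#include "functions.inc"    // for the eval_pigment() macro

#declare Res      = 100;    // assumed pixel resolution of the slice image
#declare SlicePig = pigment { image_map { png "slice_001.png" once interpolate 2 } }

// Pass 1: sample the image (it covers the unit square) at pixel centres
#declare IsWhite = array[Res][Res];
#for (J, 0, Res-1)
  #for (I, 0, Res-1)
    #declare C = eval_pigment(SlicePig, <(I+0.5)/Res, (J+0.5)/Res, 0>);
    #declare IsWhite[I][J] = (C.gray > 0.5);
  #end
#end

// Pass 2: the "shell" = white pixels that touch at least one black
// 4-neighbour (or the image border)
#declare IsShell = array[Res][Res];
#for (J, 0, Res-1)
  #for (I, 0, Res-1)
    #declare Edge = false;
    #if (IsWhite[I][J])
      #if (I = 0 | I = Res-1 | J = 0 | J = Res-1)
        #declare Edge = true;
      #else
        #if (!IsWhite[I-1][J] | !IsWhite[I+1][J] | !IsWhite[I][J-1] | !IsWhite[I][J+1])
          #declare Edge = true;
        #end
      #end
    #end
    #declare IsShell[I][J] = Edge;
  #end
#end

The hard part, of course, is everything that has to happen *after* that mask exists.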

> Then once you accumulate all of the scan data, you need to address the issue
> of _connectivity_, and I, jr, and others can tell exactly how non-trivial
> a problem THAT is.

Yep, I'm beginning to see the problems now-- even the most fundamental one: what
should each image pixel become in the triangle conversion? One triangle?
Multiple ones? POV-ray's internal height_field-from-image algorithm uses 4
neighboring pixels to create... two triangles. A similar scheme may or may not
be appropriate here, I don't know.
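
For what it's worth, here is what that 4-pixels-to-2-triangles split looks like written out in SDL, with a dummy height function just so the snippet parses on its own. Nothing here claims this is the right scheme for stitched slices-- the grid size and the toy data are placeholders; it only shows the quad split itself.

#declare N   = 20;              // assumed grid resolution
#declare Pts = array[N][N];
#for (J, 0, N-1)
  #for (I, 0, N-1)
    // placeholder vertices -- a real version would take its points
    // from the slice images instead of this toy height function
    #declare Pts[I][J] = <I, sin(I*0.5)*cos(J*0.5), J>;
  #end
#end

#declare TestMesh = mesh {
  #for (J, 0, N-2)
    #for (I, 0, N-2)
      // each quad of four neighbouring points becomes two triangles,
      // the same way height_field treats four neighbouring pixels
      triangle { Pts[I][J], Pts[I+1][J], Pts[I+1][J+1] }
      triangle { Pts[I][J], Pts[I+1][J+1], Pts[I][J+1] }
    #end
  #end
  pigment { rgb 1 }
}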

There is also the problem of 'handedness' for each triangle-- that is, whether
the three vertex points should be ordered clockwise or counter-clockwise, since
that ordering determines the single face-normal required for each final .stl
triangle (and thus which direction the facet faces for printing.)
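
As a small concrete illustration of that vertex-order/normal relationship, here is an SDL sketch that writes a single ASCII-STL facet to a file. The three vertices and the file name are made up for the example; the point is just that the facet normal falls out of the vertex ordering via the cross product, and swapping any two vertices flips it. (As far as I know, many slicers recompute the normal from the winding anyway, so getting the winding consistent is the real job.)

#declare P1 = <0, 0, 0>;
#declare P2 = <1, 0, 0>;
#declare P3 = <0, 1, 0>;
// the facet normal follows from the vertex order via the cross product;
// reversing the order of P2 and P3 would flip it
#declare FN = vnormalize(vcross(P2 - P1, P3 - P1));

#fopen STL "facet_test.stl" write
#write (STL, "solid sketch\n")
#write (STL, "  facet normal ", FN.x, " ", FN.y, " ", FN.z, "\n")
#write (STL, "    outer loop\n")
#write (STL, "      vertex ", P1.x, " ", P1.y, " ", P1.z, "\n")
#write (STL, "      vertex ", P2.x, " ", P2.y, " ", P2.z, "\n")
#write (STL, "      vertex ", P3.x, " ", P3.y, " ", P3.z, "\n")
#write (STL, "    endloop\n")
#write (STL, "  endfacet\n")
#write (STL, "endsolid sketch\n")
#fclose STL
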
>
> Now, your mention of heightfields, and my armchair consideration of those,
> gave me an idea.
>
> The concept is this:
>
> Let's say you take your slicer method and you now have a slice.  One could
> scan the "image" with eval_pigment and see what's white and what's black. The
> position where you switch from one color to the other is obviously an edge.
> That would be a triangle vertex. Once you had all of your slices processed
> into a point cloud of vertices, you
> could scan through it and whatever points were close enough to each other in
> different layers, you could connect into triangles.
> [snip]
>

We're thinking along the same lines ;-)
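
For what it's worth, here is a crude SDL sketch of just that "close enough in different layers" step, run on two toy contours. The point counts, the tolerance value, and the (much too naive) one-triangle-per-point connection are all assumptions for illustration-- a real stitcher would need far more bookkeeping than this.

#declare NA = 24;
#declare NB = 30;                   // the two layers need not match in count
#declare LayerA = array[NA];
#declare LayerB = array[NB];
#for (I, 0, NA-1)
  #declare LayerA[I] = <cos(radians(I*360/NA)), sin(radians(I*360/NA)), 0.0>;
#end
#for (K, 0, NB-1)
  #declare LayerB[K] = <cos(radians(K*360/NB)), sin(radians(K*360/NB)), 0.1>;
#end

#declare Tol  = 0.3;                // assumed "close enough" threshold
#declare Band = mesh {
  #for (I, 0, NA-2)
    // find the nearest point in the next layer to LayerA[I]
    #declare Best  = 0;
    #declare BestD = 1e30;
    #for (K, 0, NB-1)
      #declare D = vlength(LayerB[K] - LayerA[I]);
      #if (D < BestD) #declare BestD = D; #declare Best = K; #end
    #end
    // only connect the layers where the points really are close enough
    #if (BestD < Tol)
      triangle { LayerA[I], LayerA[I+1], LayerB[Best] }
    #end
  #end
  pigment { rgb 1 }
}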

