> Methods or programs to 'simplify' the resulting point cloud so that it
> still is a fair representation of the overall area, but has far fewer
> points (~300k).
>
> Methods or programs to create a single mesh of the resulting point cloud
> that can represent tunnels.
This might be useful:
http://en.wikipedia.org/wiki/Delaunay_triangulation
How accurate are your points? If there is some noise in them you might want
to work on that before creating the mesh (e.g. using some nearest-neighbour
averaging scheme). If they are all pretty accurate then just generate the
mesh first from the points, then simplify the mesh afterwards.
For simplifying the mesh there are probably some known algorithms, but you
could start by just merging the smallest triangles with similar normals.
Some 3D modellers like Blender may even have functions for simplifying
meshes.
Kevin Wampler wrote:
>
> It will almost certainly be simpler to avoid attempting to reconstruct a
> mesh unless you want to use some third-party software or you have a lot
> of time to work on it (which as I understand you don't). Fortunately
> there is some free software you can try. Although I've never used any
> myself and haven't looked into the licensing requirements, I suspect most
> are only available for personal use.
>
> This one is pretty recent, and should be able to handle large numbers of
> points pretty efficiently:
>
> http://students.cs.tamu.edu/jmanson/programs_wavelet_reconstruct.html
>
>
> This one is based on some relatively recent work from MSR and
> should also be able to manage large numbers of points:
>
> http://research.microsoft.com/en-us/um/people/hoppe/proj/mlstream/
>
>
> The software here is a bit older but might be useful (links at bottom of
> page):
>
> http://grail.cs.washington.edu/projects/scanning/
>
>
I uploaded a video of what I am talking about.
To some this should look familiar, as I posted it in the animation group
a while back.
www.stineconsulting.com/s/anim_large.mpg
This is the type of environment that I will be dealing with, though the
area will be much larger - more tunnels.
The data in this example is 9 scans located at the red spheres, totalling
about 2 million points. Each scan was individually converted into a
POV-Ray mesh and all 9 meshes were rendered together. You can see where
the meshes overlap.
As you can see, near the scanner locations the detail is very high
(lots of points) and away from the locations the data is more sparse -
the nature of spherical data gathering.
Ideally I would be able to simplify all 9 scans into one thinned point
cloud that may have about 30K points in it - then mesh the point cloud.
I have routines that will 'simplify' large point clouds, but once they
are combined, I don't have methods to make a mesh from the resulting
point cloud.
Imagine if the animation that I posted were simplified into 1 point per
foot, then made into one mesh and rendered.
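One common way to get that kind of one-point-per-foot thinning is to snap the merged cloud onto a cubic grid and average the points in each cell. A rough Python sketch (the function name and cell size are just illustrative):

```python
import math

def voxel_thin(points, cell=1.0):
    """Thin a merged point cloud to roughly one point per `cell` units by
    averaging all points falling in the same cubic cell. Dense near-field
    data collapses heavily; sparse far-field data is barely touched."""
    cells = {}
    for p in points:
        key = tuple(int(math.floor(c / cell)) for c in p)
        cells.setdefault(key, []).append(p)
    return [tuple(sum(q[i] for q in pts) / len(pts) for i in range(3))
            for pts in cells.values()]
```

This also quietly handles the overlap between scans, since redundant points from different scans land in the same cells and get merged.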
Note: my goal is not necessarily to get the stuff into POV-Ray - but
into any mesh format that I can manipulate.
I appreciate the tips and info that you've already thrown out. I'll be
looking into them closer as time allows.
Tom
Tom Austin wrote:
>
> www.stineconsulting.com/s/anim_large.mpg
>
Nice! Looking at the video it seems that the only thing which you're
lacking are tools to manage the merged point clouds -- it sounds like
you can both mesh and simplify the individual scans perfectly well. Is
this assessment correct?
If so, is it not sufficient to get a simplified point cloud for the
entire scene by just merging the simplified clouds for each scan? You'd
get some redundant data on the overlap, but I wouldn't think that would
increase the point count too dramatically.
With regards to generating a mesh, are you satisfied with the quality of
the meshes generated from the single scans? If so, then it would
probably be much easier to clean up the seams when merging them than to
generate a mesh from scratch (unless, of course, one of the tools I
linked to solves everything for you).
I suppose my main question is that since it looks from the video like
you have a somewhat reasonable solution already, what about it isn't
satisfactory?
Kevin Wampler wrote:
> Tom Austin wrote:
>>
>> www.stineconsulting.com/s/anim_large.mpg
>>
>
> Nice! Looking at the video it seems that the only thing which you're
> lacking are tools to manage the merged point clouds -- it sounds like
> you can both mesh and simplify the individual scans perfectly well. Is
> this assessment correct?
>
Yes, creating a mesh from a single scan is somewhat easy - there is a
point of view from which to 'look' at the data and make triangles.
> If so, is it not sufficient to get a simplified point cloud for the
> entire scene by just merging the simplified clouds for each scan? You'd
> get some redundant data on the overlap, but I wouldn't think that would
> increase the point count too dramatically.
>
I think I have some fairly good ideas on how to simplify a point cloud -
I was just wondering if anyone had more meat to add to the feast.
So, handling point clouds isn't too big of a deal.
> With regards to generating a mesh, are you satisfied with the quality of
> the meshes generated from the single scans?
Not really. Areas close to the scanner are very dense with data, areas
far away are rather sparse. I would like to 'even' out the point
distribution - say a point per foot type of thing. I'm not after
'creating' data in the far-away places - but I definitely need to thin
the data that is up close.
If I do anything more than remove rows and columns from the spherical
data set I start to lose my point of view and it becomes more difficult
to generate meshes using the methods that I have.
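One variation that stays within the scan's row/column structure is to walk each row and keep a point only when it is at least a target distance from the last kept point - near-field rows thin out heavily, far-field rows survive mostly intact. A rough sketch (hypothetical helper, assuming each row of the spherical grid is a list of 3D points):

```python
import math

def thin_row(row, target_spacing=1.0):
    """Thin one azimuth row of a spherical scan so consecutive kept points
    are roughly `target_spacing` apart in world units. Far-from-scanner
    points are already sparse and pass through; dense near-field points
    get skipped."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    kept = [row[0]]
    for p in row[1:]:
        if dist(p, kept[-1]) >= target_spacing:
            kept.append(p)
    return kept
```

The catch is exactly the one described above: rows end up with different lengths, so the result is no longer a regular grid and the simple row/column meshing no longer applies directly.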
> If so, then it would
> probably be much easier to clean up the seams when merging them than to
> generate a mesh from scratch (unless, of course, one of the tools I
> linked to solves everything for you).
>
If making everything a mesh, then merging the meshes gives me a good
result, I am not opposed to going that way.
I've always worked with the points, and then made something from the
points. But it seems that I can work with meshes and if I need to, make
points from the mesh.
> I suppose my main question is that since it looks from the video like
> you have a somewhat reasonable solution already, what about it isn't
> satisfactory?
To use the video as an example, I would like to have a fairly ordered
distribution of data. If you look close to the red spheres, you will
see very dense data - too dense to even be useful. If I could merge all
of the data into something like 1-foot cells, that would be great.
My end goal is to have something that is manageable on a normal PC (as
opposed to a souped-up one) - likely in an AutoCAD format. The amount of
data in the video does not lend itself to that.
IIRC, the video is about 1000 frames with an hour to generate each frame
in POV-Ray.
I just don't have the experience to make meshes from what seems to be an
unordered point cloud. - That's what I am really after.
Thanks for your interest and help - I really appreciate it.
Ahh, I finally think I understand.
First off, making meshes from unordered point clouds is a well-studied
but tricky problem, and it's probably not worth attempting to solve
yourself unless you really want to spend some time on it. The best bet
if you want to go with this approach is to try the software I linked or
look for other existing packages (google for "point cloud mesh" or
"point cloud reconstruction" or something like that).
Since you have well ordered meshes for each scan the easiest way to go
if you're writing your own software is almost certainly to operate on
the individual scans as meshes and then merge them at the end. Since
(as I understand it) the structure of the scans is essentially a wrapped
rectangular grid, like the intersection of the latitude and longitude
lines on a globe, the process to simplify a scan mesh is almost
identical to those used in LOD (level of detail) terrain
simplifications. You can google it for more info, but time permitting
I'll see if I can whip up some example code for you to look at in the
next few days.
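For reference, meshing a single scan stored as a regular grid really is just heightfield-style triangulation - every grid cell becomes two triangles. A minimal sketch (ignoring the wrap-around column and any missing returns):

```python
def grid_mesh(grid):
    """Triangulate a range scan stored as a regular 2D grid of 3D points
    (rows = elevation, cols = azimuth). Each grid cell (r, c) yields two
    triangles, exactly like a terrain heightfield. Returns (vertices,
    triangles) with triangles as index triples into the vertex list."""
    rows, cols = len(grid), len(grid[0])
    vertices = [p for row in grid for p in row]
    triangles = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            i = r * cols + c
            triangles.append((i, i + 1, i + cols))            # upper-left triangle
            triangles.append((i + 1, i + cols + 1, i + cols))  # lower-right triangle
    return vertices, triangles
```

LOD schemes like ROAM then simplify this grid mesh hierarchically instead of triangle by triangle.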
Simplifying the point clouds themselves is relatively simple to do
(depending on how smart you want to make the simplification process),
but if you want a mesh at the end it's probably easier just to stick
with meshes throughout.
On another note, this commercial software looks like it may do what you
want as well:
http://www.3dreshaper.com/en1/En_software.htm
Kevin Wampler wrote:
> I'll see if I can whip up some example code for you to look at in the
> next few days.
I was a bit busier than I expected, but here's some code which will
(hopefully) do something similar to what you want. It's in Python and
you'll need PyOpenGL to run it. Once the window pops up, hit 'h' or '?'
to print the list of available commands. I've also tried to comment the
code a lot, but I can clarify things if they're still unclear.
Let me know if you can get it to work or not, if it seems to do what you
want, and if there's anything else I can help with.
Attachments:
Download 'us-ascii' (20 KB)
Kevin Wampler wrote:
> Kevin Wampler wrote:
>> I'll see if I can whip up some example code for you to look at in the
>> next few days.
>
> I was a bit busier than I expected, but here's some code which will
> (hopefully) do something similar to what you want. It's in Python and
> you'll need PyOpenGL to run it. Once the window pops up, hit 'h' or '?'
> to print the list of available commands. I've also tried to comment the
> code a lot, but I can clarify things if they're still unclear.
>
> Let me know if you can get it to work or not, if it seems to do what you
> want, and if there's anything else I can help with.
>
Kevin,
what can I say - you've gone way above and beyond what I would have
expected anyone to do. You must be really excited about this stuff.
I've been pretty busy myself lately, but I have been able to look over
the programs that you presented - I especially like the one from
washington.edu - but some of the others look very promising as well.
The code that you posted is great - it humbles me greatly in my knowledge
of 3D and data handling. It is much appreciated. I will try to work it
in this week to take a look at it.
What can I do in return for your help?
Maybe some of the 3D data that I have lying around?
Tom
Tom Austin <taustin> wrote:
> If making everything a mesh, then merging the meshes gives me a good
> result, I am not opposed to going that way.
As a quick guess, I'd expect this approach to be the most performant (though
this doesn't say whether it will be easy to implement). After all, you don't
have to re-assemble your meshes from scratch, but just "weave" them together.
Some nice properties of this approach:
- You don't have to bother about points in the same mesh, except for the few
direct neighbors (which are already known) in this mesh.
- You typically don't even have to bother about many points in the other mesh:
typically you will already have "woven in" one or two neighboring points,
which gives you a nice start for where to insert the next point into the
combined mesh.
The only tricky thing is where to start, but an algorithm to find *some* pair of
fairly close points should be possible to come up with. Octrees come to my mind
here - or just plain user interaction: Pick any point in mesh A for which you
know that mesh B has points nearby, then just brute-force search through mesh B
for the nearest neighbor. And there you go.
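The brute-force seed search described above might look something like this (names are just illustrative):

```python
def nearest_in_mesh(point, mesh_points):
    """Brute-force search of mesh B's points for the one closest to a
    chosen seed point from mesh A, giving a starting pair for 'weaving'
    two scan meshes together. O(n), fine for a one-off seed search."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    return min(mesh_points, key=lambda q: dist2(point, q))
```

For repeated queries an octree or k-d tree would replace the linear scan, but for finding a single starting pair this is plenty.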
For reducing the sample resolution, one very naive (but possibly sufficient)
approach would be to just search the mesh for any faces (or edges) with an area
(or distance) below a certain threshold, and collapse them to a point.
More sophisticated approaches would of course collapse the smallest ones first,
but maybe that's not even necessary for your purposes.
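A naive version of that collapse could look like the sketch below: it snaps both endpoints of every sufficiently short edge to their midpoint, then drops the triangles that became degenerate. No ordering by size, as suggested above that may be acceptable. (Function name and threshold are illustrative, not from any library.)

```python
import math

def collapse_short_edges(vertices, triangles, threshold=0.5):
    """Naive decimation: collapse every edge shorter than `threshold` to
    its midpoint, then discard triangles that collapsed to a line/point.
    vertices: list of (x, y, z); triangles: index triples."""
    verts = list(vertices)
    rep = list(range(len(verts)))  # union-find: vertex -> representative

    def find(i):
        while rep[i] != i:
            rep[i] = rep[rep[i]]
            i = rep[i]
        return i

    for tri in triangles:
        for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
            ra, rb = find(a), find(b)
            if ra == rb:
                continue
            if math.dist(verts[ra], verts[rb]) < threshold:
                # Merge rb into ra at the midpoint of the edge.
                verts[ra] = tuple((p + q) / 2
                                  for p, q in zip(verts[ra], verts[rb]))
                rep[rb] = ra

    new_tris = []
    for tri in triangles:
        t = tuple(find(i) for i in tri)
        if len(set(t)) == 3:  # keep only triangles that still have 3 corners
            new_tris.append(t)
    return verts, new_tris
```

A smarter version would keep edges in a priority queue sorted by length and re-check lengths after each collapse, which is the direction the classical edge-collapse simplification algorithms take.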
Tom Austin wrote:
> what can I say - you've gone way above and beyond what I have expected
> anyone to do. You must be really excited about this stuff.
Every year I like doing a few small free-time programming projects just
so I can get some variety in what I code and some of the quick
gratification that comes from finishing a small project. This seemed to
be a pretty fun problem to choose for one of them!
> I've been pretty busy myself lately, but I have been able to look over
> the programs that you presented - I especially like the one from
> washington.edu - but some of the others look very promising as well.
I hope they work well -- the algorithms will certainly be much more
sophisticated than what I wrote.
> The code that you posted is great - humbles me greatly in my knowledge
> of 3D and data handling. It is much appreciated. I will try to work it
> in this week to take a look at it.
I am flattered by your kind words. If you have any trouble
understanding any piece of the code, just ask. The data structure I used
was a binary triangle tree similar to that employed by the ROAM algorithm:
http://www.cognigraph.com/ROAM_homepage/
> What can I do in return for your help?
> Maybe some of the 3D data that I have lying around?
No need to worry about repaying me in any way, I wouldn't have done it
if I didn't enjoy it. If I do end up having a use for some 3D laser
scan data, however, I'll let you know!
Actually, wait, I can think of one thing. How about, if you get
something nice working for this project, you post a small video so I can
see what it ended up looking like?