"Bald Eagle" <cre### [at] netscape net> wrote:
> "GioSeregni" <gms### [at] hotmail com> wrote:
>
> > Yes, really good idea, I understood the concept. Great concept.
> > I'm just worried about the times which are already very long lol
>
> Indeed. This is of course a common problem in dealing with large data sets in
> computer graphics.
>
> One thing you may want to consider is sorting your triangles so that you can
> quickly use something like an octree to only search very nearby triangles for
> matching vertices.
>
> Do one BIG sort, then all of your very very many searches will be a LOT faster.
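
(A rough sketch of that "sort once, search locally" idea in Python, using a
uniform grid / spatial hash instead of a full octree; CELL is an assumed snap
distance you would tune to the scale of the mesh:)

from collections import defaultdict

CELL = 0.001  # assumed grid/snap size; tune to the mesh scale

def cell_key(v):
    # Quantize a vertex to a grid cell so nearby vertices share a key.
    return (round(v[0] / CELL), round(v[1] / CELL), round(v[2] / CELL))

def build_vertex_index(triangles):
    # One big pass: bucket every (triangle index, corner index) by cell.
    index = defaultdict(list)
    for ti, tri in enumerate(triangles):
        for ci, v in enumerate(tri):
            index[cell_key(v)].append((ti, ci))
    return index

def corners_near(v, index):
    # Only the vertex's own cell is searched, not the whole mesh.
    # (A point sitting right on a cell border may also need the
    # neighbouring cells checked.)
    return index[cell_key(v)]
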
>
> > I have, however, noticed that short vectors (here X/10, Y/10, Z/10) seem to
> > introduce fewer artefacts!
>
> When developing concepts like this, it is usually helpful to use very small test
> scenes that parse and render very quickly, so that you can isolate and address
> any problems quickly. Then once you have everything worked out, you can try it
> on a big mesh, like you are doing now.
>
> Take a few single triangles, and just render them with different normal vector
> lengths and see what happens.
>
> First, all of your face normals should be normalized as a starting point.
> Then you add all of the weighted face normals with a common vertex,
> and finally normalize the final result.
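
(A minimal Python sketch of that recipe, assuming face_normals holds the
already-normalized normals of the faces meeting at the vertex and weights
holds whatever per-face weight you choose, e.g. the area:)

import math

def normalize(v):
    l = math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2])
    return (v[0]/l, v[1]/l, v[2]/l) if l > 0.0 else (0.0, 0.0, 0.0)

def vertex_normal(face_normals, weights):
    # Sum the weighted unit face normals that share this vertex...
    nx = sum(w * n[0] for n, w in zip(face_normals, weights))
    ny = sum(w * n[1] for n, w in zip(face_normals, weights))
    nz = sum(w * n[2] for n, w in zip(face_normals, weights))
    # ...and normalize the sum once at the end.
    return normalize((nx, ny, nz))
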
>
> What language are you doing this in? SDL? :O
>
> - BW
Sure, last night I started with two faces, then three, then a cube (bad idea),
and then this.
I use the old RapidQ; it's slow, but it's very versatile, it's the language I
know best, and the fastest for me to write. I also know XBLite, Ruby and
AutoLisp, but those would be much more complicated...
For your idea, I was thinking that instead of the area, which means many
operations, I could find the center of each triangle and compare triangle
sizes using the distance from the center to one vertex.
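
Roughly, that comparison could look like this (a Python sketch of the
arithmetic only; the real code would be RapidQ, and the squared distance
avoids a square root when only relative sizes matter):

def centroid(a, b, c):
    # Average of the three corners.
    return ((a[0] + b[0] + c[0]) / 3.0,
            (a[1] + b[1] + c[1]) / 3.0,
            (a[2] + b[2] + c[2]) / 3.0)

def size_measure(a, b, c):
    # Squared centroid-to-vertex distance: a cheap proxy for triangle
    # size (not exactly proportional to area, but enough to compare
    # triangles of similar shape).
    m = centroid(a, b, c)
    dx, dy, dz = a[0] - m[0], a[1] - m[1], a[2] - m[2]
    return dx*dx + dy*dy + dz*dz
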
G.