On Fri, 08 Sep 2000 13:36:17 -0500, Thorsten Froehlich wrote:
>For inserting: O(n * log(n))
>For retrieval: O(n * log(n))
>Total: O(n * log(n) + n * log(n))
>
>
>I can get:
>
>For inserting: O(n)
>For sorting *: O(n * log(n))
>For retrieval: O(n)
>Total: O(2 * n + n * log(n))
>
>* No fancy algorithm, just quicksort.
You seem to have a misconception about how O() notation works. The total
for the first case is O(n*log(n)) and the total for the second case is...
O(n*log(n)). They might have different constant factors, but the determination
of which is faster is entirely a matter of determining those constant factors
(which depend on the constant factors in the component parts of the algorithm.)
BTW, if this comparison method worked, I could beat you both. I have a
balanced, fully-threaded 2-3 tree implementation with O(n log n) insertion
and O(n) traversal.
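For concreteness, here is a minimal sketch (illustrative code, not from either post) of the two approaches being compared: inserting strings into a std::map as they arrive, versus appending them to a std::vector and sorting once at the end. Both totals are O(n log n); timing something like this is the only way to find out whose constants win.

#include <algorithm>
#include <iostream>
#include <map>
#include <string>
#include <vector>

int main()
{
    // Illustrative test data; any set of strings will do.
    const char* raw[] = { "delta", "alpha", "charlie", "bravo" };
    const int n = sizeof(raw) / sizeof(raw[0]);

    // Approach 1: insert into a map (typically a red-black tree),
    // O(log n) per insertion, O(n log n) total.
    std::map<std::string, int> tree;
    for (int i = 0; i < n; ++i)
        tree[raw[i]] = i;

    // Approach 2: append everything first (O(n)), then sort once (O(n log n)).
    std::vector<std::string> flat(raw, raw + n);
    std::sort(flat.begin(), flat.end());

    // Both end up in the same sorted order; which one is faster in practice
    // is entirely a question of the hidden constant factors.
    std::map<std::string, int>::const_iterator it;
    for (it = tree.begin(); it != tree.end(); ++it)
        std::cout << it->first << " ";
    std::cout << "\n";
    for (int i = 0; i < n; ++i)
        std::cout << flat[i] << " ";
    std::cout << "\n";
    return 0;
}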
--
Ron Parker http://www2.fwi.com/~parkerr/traces.html
My opinions. Mine. Not anyone else's.
In article <39b9050f@news.povray.org>, Warp <war### [at] tag povray org>
wrote:
> So instead of spending 1/2 hour once to learn how to use the STL you
> spend 2 hours every time you want to use a weighted binary tree?
No, I want to spend 2-5 hours learning how to implement it myself. Then
I can spend 1.5 hours figuring out the STL (or fighting with the STL) to
do the same thing.
> : And because I don't want to be
> : separated from what my program is doing?
>
> Sorry, I didn't understand at all what are you talking about here.
If I use the STL, I don't know exactly what is happening in my program.
If something goes wrong with a program using something I implemented, I
know the code, and can figure out the problem quickly. If something goes
wrong and I am using the STL, I end up trying to figure out why STL
files are producing errors, whether or not the problem is actually *in*
the STL.
> : And because when I was *trying*
> : to learn the STL, I often got errors and couldn't figure out why?
>
> Get a proper compiler and try again. You can get one for free.
I have an excellent compiler and good libraries. That doesn't help me
understand the reasons I get errors.
> And read documentation and tutorials.
I have been collecting links to various STL tutorials/overviews on the
web. I also plan to buy some books when I can. I have a book with a lot
of detailed information on the STL, but it seems more reference-oriented
than tutorial-oriented.
--
Christopher James Huff
Personal: chr### [at] mac com, http://homepage.mac.com/chrishuff/
TAG: chr### [at] tag povray org, http://tag.povray.org/
<><
In article <39b91926@news.povray.org>, "Thorsten Froehlich"
<tho### [at] trf de> wrote:
> Hmm, sounds like reinventing the wheel to me ;-)
Exactly.
> The thing is once you have done it two or three times it gets really
> boring writing the same code over and over again and it distracts
> from the fun part of programming and especially it distracts from the
> actual problem you want to solve.
That is fine, but at the moment, I want to have a grasp of how they
work. I can imagine programmers taught to always use the STL
encountering a simple doubly linked list in someone else's code and
being completely baffled by it... I want to understand the tools I use,
and how to implement them myself if necessary. For example: a game
project with extremely severe speed requirements, developing for
platforms where the STL isn't usable, writing in other languages which
I may learn in the future, etc. Especially that last one... I don't
expect C++ and the STL to last forever; knowledge of algorithms is
portable, knowledge of the STL is not.
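For illustration, here is a minimal doubly linked list sketch (illustrative code only, not from any project mentioned here) of the kind of structure being talked about:

#include <iostream>

// One node of a doubly linked list of ints.
struct Node {
    int value;
    Node* prev;
    Node* next;
    Node(int v) : value(v), prev(0), next(0) {}
};

// Insert a new node at the front of the list and return the new head.
Node* push_front(Node* head, int v)
{
    Node* n = new Node(v);
    n->next = head;
    if (head)
        head->prev = n;
    return n;
}

int main()
{
    Node* head = 0;
    for (int i = 0; i < 5; ++i)
        head = push_front(head, i);     // list is now 4 3 2 1 0

    for (Node* p = head; p != 0; p = p->next)
        std::cout << p->value << " ";
    std::cout << "\n";

    // Free the nodes.
    while (head) {
        Node* next = head->next;
        delete head;
        head = next;
    }
    return 0;
}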
--
Christopher James Huff
Personal: chr### [at] mac com, http://homepage.mac.com/chrishuff/
TAG: chr### [at] tag povray org, http://tag.povray.org/
<><
In article <01c019ce$7da07900$917889d0@daysix>, "H. E. Day"
<Pov### [at] aol com> wrote:
> Yo, Chris, when can we expect the fixed version of the +.3? The memory
> leak has sorta crippled my IRTC anim...
Real Soon Now. I have a couple more things to work out, and a couple
small features I want to add.
--
Christopher James Huff
Personal: chr### [at] mac com, http://homepage.mac.com/chrishuff/
TAG: chr### [at] tag povray org, http://tag.povray.org/
<><
In article <slr### [at] fwi com> , ron### [at] povray org (Ron
Parker) wrote:
> You seem to have a misconception about how O() notation works. The total for
> the first case is O(n*log(n)) and the total for the second case is...
I know, but I don't care enough (especially as a full proof is rather
long); all I want to point out is that a map will not perform better than a
slightly different algorithm, one that is easier to write in plain C than
the RB-tree most likely used for the map. As for the total, you are right
that I shouldn't just have copied and pasted it together, assuming everybody
would get the idea I wanted to show.
> O(n*log(n)). They might have different constant factors, but the
> determination of which is faster is entirely a matter of determining those
> constant factors (which depend on the constant factors in the component parts
> of the algorithm.)
Yes, I am aware the constants matter here when using a simple sorting method
(merge sort) as I suggested.
To make you happy, I could have written (for the worst cases):
Numbers following variables should be read as subscripts!
T1(n) = (c1 * n * log n + c2) + (c3 * n * log n + c4) =
(c1 + c3) * n * log n + (c2 + c4)
T2(n) = (c5 * n + c6) + (c7 * n + c8) + (c9 * n * log n + c10) =
(c5 + c7) * n + (c9 * n * log n) + (c6 + c8 + c10)
For a large enough n this will hold:
c2 + c4 < (c5 + c7) * log n
and
c6 + c8 + c10 < (c5 + c7) * log n
So, I simply eliminate c2, c4, c6, c8 and c10 (set them to zero) and get:
T1(n) = (c1 + c3) * n * log n
T2(n) = (c5 + c7) * n + (c9 * n * log n)
Let's further assume that c1 = c3:
T1(n) = 2 * c1 * n * log n
T2(n) = (c5 + c7) * n + (c9 * n * log n)
And now comes the tricky part, because I have to show that (c1 + c3) > c9.
I do not need to assume anything about c5 and c7, as n * log n > n for a big
enough n. This leaves:
T1(n) = 2 * c1 * n * log n
T2(n) = c9 * n * log n
I assume that an RB-tree is used for the map, and that list.sort is a merge
sort that just swaps the node pointers of two strings. In both cases the
algorithms need to perform n * log n string comparisons, so we can leave
those out of the count. This leaves the recursive calls and the swap
operation for the merge sort, and walking through the tree (twice) for the
map.
I assume that the merge sort will be faster*, but if anybody is not happy
with that, there are plenty of other algorithms out there for string
sorting...
In conclusion, it is perfectly possible to outperform the map used in this
case, which is all I wanted to point out.
Thorsten
* I don't want to spend the time to show this now.
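For anyone who does want to poke at it, here is a rough sketch (illustrative only; the counting comparator and the test data are assumptions, not measurements) that counts the string comparisons actually performed by std::map insertion versus std::list::sort, which is one way to make the "n * log n comparisons in both cases" claim concrete:

#include <iostream>
#include <list>
#include <map>
#include <sstream>
#include <string>

// Global comparison counter; the comparator below increments it.
static long comparisons = 0;

struct CountingLess {
    bool operator()(const std::string& a, const std::string& b) const
    {
        ++comparisons;
        return a < b;
    }
};

int main()
{
    const int n = 10000;

    // Build n distinct strings in scrambled order.
    std::list<std::string> items;
    for (int i = 0; i < n; ++i) {
        std::ostringstream s;
        s << "string" << (i * 7919 % n);
        items.push_back(s.str());
    }

    // Approach 1: insert everything into a map (typically an RB-tree).
    comparisons = 0;
    std::map<std::string, int, CountingLess> tree;
    int index = 0;
    for (std::list<std::string>::iterator it = items.begin();
         it != items.end(); ++it)
        tree[*it] = index++;
    std::cout << "map insertions: " << comparisons << " comparisons\n";

    // Approach 2: sort the list in place (typically a merge sort on the
    // node links, so only pointers are swapped).
    comparisons = 0;
    items.sort(CountingLess());
    std::cout << "list::sort:     " << comparisons << " comparisons\n";
    return 0;
}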
____________________________________________________
Thorsten Froehlich, Duisburg, Germany
e-mail: tho### [at] trf de
Visit POV-Ray on the web: http://mac.povray.org
In article <slr### [at] fwi com> , ron### [at] povray org (Ron
Parker) wrote:
>>For inserting: O(n * log(n))
>>For retrieval: O(n * log(n))
>>Total: O(n * log(n) + n * log(n))
>>
>>
>>I can get:
>>
>>For inserting: O(n)
>>For sorting *: O(n * log(n))
>>For retrieval: O(n)
>>Total: O(2 * n + n * log(n))
>>
>>* No fancy algorithm, just quicksort.
>
> You seem to have a misconception about how O() notation works. The total for
> the first case is O(n*log(n)) and the total for the second case is...
> O(n*log(n)). They might have different constant factors, but the
> determination of which is faster is entirely a matter of determining those
> constant factors (which depend on the constant factors in the component parts
> of the algorithm.)
>
> BTW, if this comparison method worked, I could beat you both. I have a
> balanced, fully-threaded 2-3 tree implementation with O(n log n) insertion
> and O(n) traversal.
(Referring to my previous reply:)
Or, I could just show that for a large enough n
2 * n * lg n > 2 * n + n * lg n
which reduces to
n * lg n > 2 * n
i.e. lg n > 2, which holds for every n > 4.
Thorsten
____________________________________________________
Thorsten Froehlich, Duisburg, Germany
e-mail: tho### [at] trf de
Visit POV-Ray on the web: http://mac.povray.org
On Fri, 08 Sep 2000 13:36:17 -0500, Thorsten Froehlich wrote:
>Total: O(n * log(n) + n * log(n))
which is O(n * log(n))
>Total: O(2 * n + n * log(n))
which is also O(n * log(n))
No difference there.
Which one of the two algorithms is really faster depends on the
implementation and the input data.
hp
--
_ | Peter J. Holzer | Nicht an Tueren mangelt es,
|_|_) | Sysadmin WSR | sondern an der Einrichtung (aka Content).
| | | hjp### [at] wsr ac at | -- Ale### [at] univie ac at
__/ | http://www.hjp.at/ | zum Thema Portale in at.linux
In article <slr### [at] teal h hjp at> ,
hjp### [at] SiKitu wsr ac at (Peter J. Holzer) wrote:
>>Total: O(n * log(n) + n * log(n))
>
> which is O(n * log(n)
>
>>Total: O(2 * n + n * log(n))
>
> which is also O(n * log(n)
>
> No difference there.
Yes, because I was lazy and just did a copy and paste. In big-O notation
that is indeed the result, but it is not the whole issue (just drop the
big-O from the totals!).
> Which one of the two algorithms is really faster depends on the
> implementation and the input data.
No, it is clear which one is faster; only my notation for the totals was
incorrect. See my later posts explaining it in more detail.
Thorsten
____________________________________________________
Thorsten Froehlich, Duisburg, Germany
e-mail: tho### [at] trf de
Visit POV-Ray on the web: http://mac.povray.org
In article <slr### [at] teal h hjp at> ,
hjp### [at] SiKitu wsr ac at (Peter J. Holzer) wrote:
> Which one of the two algorithms is really faster depends on the
> implementation and the input data.
No, it does not depend on the input data too much. The reason is that the
tree used for the map will be balanced, so you should always have about
log n access time. And for the merge sort it does not matter either; it
will always take n log n time.
I fell into the big-O notation trap at first, too, when replying to Ron:
I was implying the wrong thing by using big-O notation for the totals,
especially because you can't directly compare the big-Os of two functions
anyway...
Thorsten
____________________________________________________
Thorsten Froehlich, Duisburg, Germany
e-mail: tho### [at] trf de
Visit POV-Ray on the web: http://mac.povray.org
Chris Huff wrote:
> > you will have to get most of them sooner or later anyway ;-)
>
> ...and I know I will. :-)
> Especially graphics and algorithms books, all I have are "introduction
> to C/C++/Java" books and one good C++ intro/reference book (C++ Primer
> Plus, which does cover the STL, just in a somewhat incomprehensible way
> for someone who doesn't already use it).
> Hmm, maybe I can get my parents to pay for them...probably not.
Have you got
"Code Complete" - Steve McConnell
"The Mythical Man-Month" - Frederick P. Brooks Jr. (anniversry edition)
"Peopleware" - Demarco & Lister
"Rapid Development" - Steve McConnell
"Extreme Programming Explained" - Kent Beck
Of those, I'd say the first two are a must for you for now, and the others
are for when you get time.