"Bald Eagle" <cre### [at] netscapenet> wrote:
> "Kenneth" <kdw### [at] gmailcom> wrote:
> > That's a really nice result of fitting a set of data points to a function.
> It's the reverse. I'm fitting the function describing a line, circle, and
> sphere to the measured data. It's "as close to all of the data points as
> it can be _simultaneously_". And so the overall error is minimized.
Got it. Sorry for my reversed and lazy way of describing things.
> > The general idea of finding the AVERAGE of a set of data points is
> > easy enough to understand...
> This is an acceptable method, and of course can be found in early
> treatments of computing the errors in data sets.
> > But why is 'squaring' then used? What does that actually
> > accomplish? I have not yet found a simple explanation.
> "it makes some of the math simpler", especially when working in multiple dimensions
> (the variance is equal to the expected value of the square of the distribution
> minus the square of the mean of the distribution)
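To make that quoted identity concrete, here's a quick numeric check (my own illustration, not from the thread): the variance computed directly from its definition matches E[X^2] - (E[X])^2.

```python
import random

# Draw a sample from a normal distribution (arbitrary mean/sigma for the demo).
random.seed(1)
xs = [random.gauss(5.0, 2.0) for _ in range(100000)]

n = len(xs)
mean = sum(xs) / n
mean_sq = sum(x * x for x in xs) / n

# Population variance straight from the definition: E[(X - mean)^2]...
var_direct = sum((x - mean) ** 2 for x in xs) / n
# ...and via the shortcut quoted above: E[X^2] - (E[X])^2.
var_shortcut = mean_sq - mean ** 2

# The two agree up to floating-point rounding.
print(abs(var_direct - var_shortcut) < 1e-9)
```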
So the (naive) question that I've always pondered is: would CUBING the
appropriate values-- instead of squaring them-- produce an even tighter fit
between the function and the data points? (Assuming that I understand anything
at all about why 'squaring' is the accepted method, ha.) Although I imagine
that squaring is 'good enough', and that cubing would just be an unnecessary
and more complex mathematical step.
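One way to poke at the cubing question is a small sketch (my own example, not from the thread): fit a single constant c to some data by minimizing the sum of |x - c|^p for p = 2 versus p = 3. Squaring recovers the ordinary mean; cubing doesn't fit "tighter", it just weights outliers even more heavily.

```python
def total_error(xs, c, p):
    """Sum of absolute residuals raised to the power p."""
    return sum(abs(x - c) ** p for x in xs)

def best_constant(xs, p, lo, hi, steps=100000):
    """Brute-force scan for the constant c that minimizes the p-power error."""
    best_c, best_e = lo, float("inf")
    for i in range(steps + 1):
        c = lo + (hi - lo) * i / steps
        e = total_error(xs, c, p)
        if e < best_e:
            best_c, best_e = c, e
    return best_c

data = [1.0, 2.0, 3.0, 4.0, 100.0]  # four ordinary points plus one outlier

mean = sum(data) / len(data)             # 22.0 -- the p=2 minimizer is the mean
c2 = best_constant(data, 2, 0.0, 100.0)  # ~22.0, matching the mean
c3 = best_constant(data, 3, 0.0, 100.0)  # noticeably larger: the outlier pulls harder

print(c2, c3)
```

Running this, c3 lands well above c2: raising the exponent makes the single outlier dominate the fit even more, which is the opposite of "tighter" for the other four points.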
From reading various Wikipedia pages re: the discovery or invention of
'sum of squares' etc., I get the impression that Gauss et al came up with
the method in an empirical way(?) rather than from any theoretical
standpoint, and that it simply proved useful.