POV-Ray : Newsgroups : povray.advanced-users : Least Squares fitting : Re: Least Squares fitting
 Re: Least Squares fitting
From: Kenneth
Date: 11 Mar 2023 16:35:00
That's a really nice result of fitting a set of data points to a function.  But
I wish I understood what all of this was about-- starting with the fundamental
idea of the 'sum of least SQUARES' and why 'squaring' the residual data errors
is used in these techniques.  I don't remember ever being introduced to that
concept, in either high-school or college maths classes. (But, I never took a
course in statistics.) The various internet articles on the subject that I have
read over the years are woefully complex and do not explain the *why* of it.

From the Wikipedia article "Partition of sums of squares":
"The distance from any point in a collection of data, to the mean of the data,
is the deviation. ...If all such deviations are squared, then summed, as in
[equation], this gives the "sum of squares" for these data."
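The quoted definition is easy to see in a few lines of plain Python (the data values here are made up purely for illustration, not taken from the original discussion):

```python
# Made-up sample data, just to illustrate the "sum of squares" definition.
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

mean = sum(data) / len(data)            # the average of the data
deviations = [x - mean for x in data]   # offset of each point from the mean
sum_of_squares = sum(d * d for d in deviations)

print(mean)            # 5.0
print(sum_of_squares)  # 32.0
```

Note that the deviations themselves sum to zero (points below the mean cancel points above it), which is one reason they are squared before summing.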

The general idea of finding the AVERAGE of a set of data points is easy enough
to understand, as is finding the deviations or 'offsets' of those points from
the average. But why is 'squaring' then used? What does that actually
accomplish? I have not yet found a simple explanation.