On 10/27/2011 2:55 PM, Jim Henderson wrote:
> On Thu, 27 Oct 2011 16:51:05 -0400, Warp wrote:
>
>> Jim Henderson<nos### [at] nospam com> wrote:
>>> There are those who believe in climate change not because they've
>>> applied any rigor to it, but they have a "sense" that it's true even
>>> though they haven't studied it personally. They "take it on faith"
>>> that the science/ scientists who support their point of view have done
>>> their homework.
>>
>> To be fair, in subjects that I understand very little about (if
>> anything
>> at all) I'm prone to believing the scientific community (especially if
>> it's widely accepted) a lot more easily than anybody else. The reason is
>> that I know (at least a bit) how science works and why it's more
>> reliable than other forms of "investigation". Hence if two differing
>> claims are made about an obscure subject, I find science's claim more
>> reliable by default.
>>
>> So far I have had very few disappointments with this (if at all).
>> All the disappointments have been on the other direction.
>
> The thing is that there have been scientific studies done that show both
> results. Some (on both side of the discussion) have been proven to be
> incomplete, incorrect, or flat out wrong. That's why there's so much
> noise made about "falsified data" on the side of the "pro-climate change"
> group.
>
Sadly, the whole "controversy" is pure bullshit, as is the "falsified
data". Yes, falsification happens. It happens among paid-for shills,
working for places owned by people with clear agendas, who, as has been
said, "have a hard time understanding something when they are paid not to."
When there is a concerted effort to use the trappings of scientific
investigation to promote profit, or denial, and the general public is
actively discouraged from knowing enough to tell the difference,
never mind how to tell the difference, it's hardly surprising that public
trust is lost and it becomes easier to confuse people with false
information.
In many cases the "different studies produce opposite results" problem
unfortunately is a mix of bad studies (there are a whole hell of a lot
of those, especially among those trying to "prove" some seriously stupid
shit, or trying to use scientism to promote pseudoscience) and cases
where the sample size was too small or other conditions distorted the
result. And, unfortunately, especially in medicinal research, there is a
time limit between patent, production, and payoff, which inherently
biases the system toward getting positive results, proving the medicine
as soon as feasible, and making back the cost of research as quickly as
feasible, before the patent runs out. This means high cost, sometimes
poor control in testing, or even cherry-picked data, in the sense of
de-emphasizing data that implies possible problems, combined with
fast-tracking of what "appears" to be a successful product.
Usually this isn't critical, but with medicine often becoming more
precise, exacting, specific to conditions, and thus prone to unknown
variables, or even hard-to-test-for effects, in many cases it's creating
some big problems. However, in nearly every case where a failure has
arisen, there was prior evidence in the studies, ignored or otherwise,
indicating that the problem *might* have existed. Thus it's not a
failure of science, but a failure to "apply it" properly.