Back to the future (Messages 125 to 134 of 234)
From: Mike Raiford
Subject: Re: Back to the future [~200KBbu]
Date: 29 Jul 2008 10:30:22
Message: <488f297e$1@news.povray.org>
Invisible wrote:

> 
> All I know is that when I take a dark image and try to make it brighter, 
> it comes out hopelessly noisy.
> 

As you push those low RGB values up, the values in between maintain 
their relationship by scaling. You're also scaling the baseline noise 
of your camera: the S/N ratio stays the same; the only difference is 
that there was very little signal to begin with. You might be able to 
stretch this a small amount using a noise reduction algorithm, but that 
will eat details, of which you have very little, so the image will be 
soft and begin to show artifacts. You could also take advantage of 
human perception of color and smooth the color channels, which reduces 
the apparent color noise. You'll still have luminance noise, but that 
is generally regarded as more pleasant than chroma noise.

Also, with 8-bit images, the risk of posterization becomes greater the 
more drastic the changes. You only have 32 levels to work with in an 
image that is underexposed by 3 stops (which would be rather dark), as 
opposed to 256 levels had the image been properly exposed in the first 
place. Now, let's say you have an image that was recorded in 12-bit 
(which is common in most digital SLRs, though 14-bit is becoming more 
common). That same image has 512 levels, so on current displays and 
print technologies there really is no posterization. This gives much 
greater latitude when performing color adjustments.
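
To put rough numbers on that, here's a small numpy sketch. The scene, 
noise level and exposure gap are all made up purely for illustration:

import numpy as np

rng = np.random.default_rng(0)

# A made-up "scene" plus a little sensor noise; values are illustrative.
scene = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
noise = rng.normal(0.0, 2.0, size=scene.shape)

well_exposed = np.round(np.clip(scene + noise, 0, 255))
underexposed = np.round(np.clip(scene / 8 + noise, 0, 255))  # 3 stops down

# "Brightening" afterwards is just scaling: the noise is scaled along
# with the signal, and only ~32 distinct levels survive, hence the
# posterization.
brightened = np.clip(underexposed * 8, 0, 255)

print("levels, well exposed:", len(np.unique(well_exposed)))   # ~256
print("levels, brightened  :", len(np.unique(brightened)))     # ~33
print("noise, well exposed :", round((well_exposed - scene).std(), 1))
print("noise, brightened   :", round((brightened - scene).std(), 1))
# the second noise figure comes out roughly 8x the first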



From: Mike Raiford
Subject: Re: Back to the future
Date: 29 Jul 2008 10:43:03
Message: <488f2c77$1@news.povray.org>
Nicolas Alvarez wrote:

> I definitely remember Windows 98 slowly redrawing the desktop as if it was
> raytracing the damned wallpaper, while the hard disk made horrible
> insane-seeking noises.

I remember that, too. I attributed it to the wallpaper being somewhat 
dispensable, and being swapped out to disk as soon as memory was needed 
by applications.

What was more likely happening is that as applications were opened and 
worked with, more memory was needed, and the least recently used pages 
happened to be the wallpaper.

You minimize a few windows and the desktop becomes visible again. 
Explorer's paint routine for the background runs, hits the image, and 
gets a page fault. Windows fetches that page back in from disk, GDI 
draws a bit of the image, hits another page fault, and so on until the 
image is painted in, all the while paging out more data from other apps 
to make room for the background. Those other apps may then need the 
data just paged out, and so begins the hard-drive-pounding swap.

Adding more RAM usually resolved the problem. ;) 3MB in a 16MB memory 
space is an awfully heavy load for something like a pretty background. 
(That's for a 1024x768 screen at 32bpp, not unheard of in those days.)
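
For what it's worth, the arithmetic behind that 3MB figure, assuming 
the 1024x768, 32bpp screen above, is just:

width, height, bytes_per_pixel = 1024, 768, 4       # 32bpp = 4 bytes/pixel
print(width * height * bytes_per_pixel / 2**20, "MB")  # 3.0 MB of a 16MB box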



From: Jim Henderson
Subject: Re: Back to the future [~200KBbu]
Date: 29 Jul 2008 11:54:37
Message: <488f3d3d$1@news.povray.org>
On Tue, 29 Jul 2008 09:12:12 -0500, Mike Raiford wrote:

> Jim Henderson wrote:
> 
> 
>> Wrong.  http://refocus-it.sourceforge.net/
>> 
>> 
> Ooh, new toy! Could prove to be very useful!

It's not easy to use, and documentation is thin - but I've found it 
useful as well.

Jim



From: Jim Henderson
Subject: Re: Back to the future [~200KBbu]
Date: 29 Jul 2008 12:04:47
Message: <488f3f9f$1@news.povray.org>
On Tue, 29 Jul 2008 09:23:54 +0100, Invisible wrote:

>> True.  But unlikely != impossible.
> 
> You realise that it's completely possible that at one point today you'll
> try to swallow a mouthful of water and accidentally inhale it, right?
> People actually *die* like this. But I don't see you worrying about it -
> because it's rather unlikely. And yet it is many, many times *more*
> likely than somebody solving the Halting Problem.

Have you calculated the odds of both? ;-)

We're not talking about comparative probabilities.  Impossible means *no* 
chance of it ever happening.

>> Maybe.  But then again, throughout history there are examples of really
>> fundamental things needing to be adjusted.  Knowledge continues to
>> grow.
> 
> Scientific facts have been found to be incorrect. There are far fewer
> examples of mathematical truths which have needed to be adjusted. And
> there are vanishingly few examples of widely accepted *proofs* that turn
> out to be wrong - it tends to be things lots of mathematicians "think"
> are true that eventually turn out to be disproven.

Exactly my point, but with a narrower focus.  Things lots of *people* 
"think" are true sometimes/frequently/often turn out to be disproven.

>> This is the problem, though:  The assumption is that computing will
>> always use a Turing model, like I said.
> 
> And like I have explained multiple times, the problem isn't the Turing
> model. Even if we assume some as-yet unknown technology with miraculous
> [but not unlimited] powers, we still have an unsolvable problem. That's
> what is so significant about the Halting Problem. (I mean, face it, what
> *practical* use does HP have? None. Its importance is that it
> demonstrates that some problems are unsolvable.)

<sigh>  We're going in circles here....

>> Maybe.  It's hard to say that that would be the case, because the
>> future is unknowable.
> 
> You're missing my point: It doesn't *matter* that we can't know the
> future. Simple logical deduction demonstrates that ANY machine we can
> construct will have the same problem, REGARDLESS of how it works.

It's simple logical deduction that unless I have a screwdriver, I can't 
drive a screw.

Until you realise that the screw has a hex head and an allen wrench will 
do the job just as nicely.

*Sometimes* all you need is a new tool.  Sometimes the new tool hasn't 
been invented yet.

>> Maybe the halting problem was a bad example to illustrate my point; the
>> bottom line on my point still stands, though.  What's "impossible"
>> yesterday became an everyday occurrence.  It was "impossible" that the
>> solar system was heliocentric a thousand years ago.  Today, it's common
>> knowledge that that is untrue.
> 
> Once again, you are talking about science.
> 
> In science, it is always possible that some "fact" could turn out to be
> partially or wholly wrong. We think we know how the universe works, but
> we could always be wrong about something.
> 
> Mathematics is different. Not quite as different as was originally
> believed, but still. In mathematics, we can construct absolute truths
> which will never be disproven until the end of the universe itself. The
> only question mark is the reliability of the human mind.

I think it's a mistake to say "we know all there is to ever know about 
'x'".  There have been many points in history where humankind has made 
such declarations about many things - including mathematics - and it has 
turned out that we'd only scratched the surface.  It's the height of 
hubris to assume we can't learn anything new.

> For sufficiently complicated proofs, it becomes not merely possible but
> *plausible* that some mistake could exist. For sufficiently simple
> proofs, we can be absolutely certain that only a fundamental flaw which
> renders all of mathematics invalid could disprove the theorem.
> 
> In summary: Science can never have absolute truths. Mathematics can.

Sometimes the devil is in the details (and how detailed your data is).

>> Using today's technologies, maybe.  Again, we don't know what the
>> future holds.  It is impossible to use paper in transistors.  Or is it?
> 
> Making transistors out of paper is a question of physics - a branch of
> science. Infinite compression ratios is a question of mathematics.
> Therein lies the critical difference.

And yet you agreed with another post in this thread that said that 
something was possible.  Look at the refocusing capabilities of some of 
the tools out there that reconstruct detail in blurred images.  Blurring 
is lossy compression, yet being able to recover that data isn't 
impossible; that's been proven.

>>> This is precisely my point: Solving the halting problem DOES defy the
>>> laws of causality. It is NOT just a problem of technology. It is a
>>> problem of "if this algorithm were to exist, it would cause a logical
>>> paradox, regardless of the technology used".
>> 
>> There again, maybe I chose a poor example to illustrate my point.
> 
> I'm not sure what your point is.

That I chose a poor example.

> If your point is that science is sometimes wrong, or at least needs to
> be amended, then I agree. If your point is that widely held beliefs are
> sometimes wrong, then I also agree. If your point is that every proven
> mathematical result could actually be wrong, then I completely disagree.

I believe that's just a limitation of our understanding of things as they 
are now.

Jim



From: Mike Raiford
Subject: Re: Back to the future [~200KBbu]
Date: 29 Jul 2008 12:39:51
Message: <488f47d7$1@news.povray.org>
Invisible wrote:

> 
> I don't know about you, but every time *I* look at either the GIMP or 
> PhotoShop, I can never figure out what magical trick I'm missing that 
> lets you do the impressive stuff everybody else does. To me, it just 
> seems to be a small set of pretty simple tools that don't appear to give 
> you much power to do anything.
> 

There are lots of tools, though. Some are quite sophisticated, such as 
the aforementioned healing brush. Layers, masking and blend modes are 
also quite powerful features.

If you look at the brush tool in PS, it's extremely powerful: pressure 
sensitive, able to create brush strokes that splatter and scatter, etc. 
PSE is significantly neutered, though, and has far fewer options on the 
brush tool.



From: Orchid XP v8
Subject: Re: Back to the future [~200KBbu]
Date: 29 Jul 2008 13:17:30
Message: <488f50aa$1@news.povray.org>
>> Scientific facts have been found to be incorrect. There are far fewer
>> examples of mathematical truths which have needed to be adjusted. And
>> there are vanishingly few examples of widely accepted *proofs* that turn
>> out to be wrong - it tends to be things lots of mathematicians "think"
>> are true that eventually turn out to be disproven.
> 
> Exactly my point, but with a narrower focus.  Things lots of *people* 
> "think" are true sometimes/frequently/often turn out to be disproven.

Show me one single mathematical result which was *proven* to be true, 
and verified independently by a large number of mathematicians, and 
subsequently turned out to actually be false.

I can think of any number of results in *science* that were widely 
believed to be true but turned out not to be. But mathematics is different.

>> You're missing my point: It doesn't *matter* that we can't know the
>> future. Simple logical deduction demonstrates that ANY machine we can
>> construct will have the same problem, REGARDLESS of how it works.
> 
> It's simple logical deduction that unless I have a screwdriver, I can't 
> drive a screw.
> 
> Until you realise that the screw has a hex head and an allen wrench will 
> do the job just as nicely.
> 
> *Sometimes* all you need is a new tool.  Sometimes the new tool hasn't 
> been invented yet.

And I suppose next you'll be telling me that some day, some future 
technology might enable us to find a sequence of chess moves whereby a 
bishop can get from a black square to a white square, despite it being 
trivially easy to mathematically prove the impossibility of this...
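
The proof really is trivial: a square's colour is (file + rank) mod 2, 
and a diagonal step changes file and rank by the same amount, so the 
parity, and hence the colour, never changes. A quick brute-force check 
of that invariant, purely for illustration:

# Square colour = (file + rank) % 2; check every legal bishop step.
def colour(f, r):
    return (f + r) % 2

for f in range(8):
    for r in range(8):
        for d in range(1, 8):
            for df, dr in ((d, d), (d, -d), (-d, d), (-d, -d)):
                nf, nr = f + df, r + dr
                if 0 <= nf < 8 and 0 <= nr < 8:
                    assert colour(nf, nr) == colour(f, r)

print("every bishop move stays on the same colour")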

> I think it's a mistake to say "we know all there is to ever know about 
> 'x'".  There have been many points in history where humankind has made 
> such declarations about many things - including mathematics - and it has 
> turned out that we'd only scratched the surface.  It's the height of 
> hubris to assume we can't learn anything new.

I'm not claiming that nothing new can be learned - I am saying that, at 
least in mathematics, learning new things doesn't invalidate what we 
already know.

>> Making transistors out of paper is a question of physics - a branch of
>> science. Infinite compression ratios is a question of mathematics.
>> Therein lies the critical difference.
> 
> And yet you agreed with another post in this thread that said that 
> something was possible.  Look at the refocusing capabilities of some of 
> the tools for that to reconstruct detail in blurred images.  Blurring is 
> lossy compression, yet being able to recover that data isn't impossible; 
> that's been proven.

Hey, guess what? Blurring isn't compression. It might *look* like it is, 
but it isn't.
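
It's a convolution. And precisely because it's a convolution with a 
known (or estimable) kernel, a lot of it can be undone in the frequency 
domain; that's roughly the idea behind tools like the refocus plugin 
linked earlier. A toy sketch, with the image, kernel and regularisation 
constant all made up for illustration:

import numpy as np

rng = np.random.default_rng(1)
img = rng.random((128, 128))          # stand-in for a sharp source image

# A small Gaussian blur kernel, embedded in a full-size PSF centred at (0,0).
x = np.arange(-3, 4)
g = np.exp(-x**2 / 2.0)
kernel = np.outer(g, g)
kernel /= kernel.sum()
psf = np.zeros_like(img)
psf[:7, :7] = kernel
psf = np.roll(psf, (-3, -3), axis=(0, 1))

# Blurring is multiplication by the kernel's spectrum (circular convolution).
H = np.fft.fft2(psf)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H))

# Wiener-style inverse: divide by H, regularised so the frequencies the
# kernel nearly killed off (and any noise living there) don't explode.
k = 1e-6
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) * np.conj(H)
                                / (np.abs(H)**2 + k)))

print("RMS error, blurred :", float(np.sqrt(np.mean((blurred - img)**2))))
print("RMS error, restored:", float(np.sqrt(np.mean((restored - img)**2))))
# The restored copy lands far closer to the original than the blurred one,
# though recovery is never perfect once noise, clipping or quantisation
# enter the picture.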

>> If your point is that science is sometimes wrong, or at least needs to
>> be amended, then I agree. If your point is that widely held beliefs are
>> sometimes wrong, then I also agree. If your point is that every proven
>> mathematical result could actually be wrong, then I completely disagree.
> 
> I believe that's just a limitation of our understanding of things as they 
> are now.

Sure. And no doubt some day we'll discover that 2+2 isn't actually 4. I 
won't hold my breath for that, though. :-P

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*



From: Orchid XP v8
Subject: Re: Back to the future [~200KBbu]
Date: 29 Jul 2008 13:18:12
Message: <488f50d4$1@news.povray.org>
>> Whether you can *fake* something that "looks" right is another matter. 
>> But *recover*? No. Impossible.
> 
> Or, you find another source for the missing data.

Yeah, but that wouldn't be "recovering" the data, that would be getting 
it from another source. ;-)

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*



From: Orchid XP v8
Subject: Re: Back to the future
Date: 29 Jul 2008 13:20:18
Message: <488f5152@news.povray.org>
>> I definitely remember Windows 98 slowly redrawing the desktop as if it 
>> was
>> raytracing the damned wallpaper, while the hard disk made horrible
>> insane-seeking noises.
> 
> I remember that, too. I attributed it to the Wallpaper being somewhat 
> dispensable, and being swapped to disk as soon as memory was being 
> needed by the application.

It just amused me that a 33 MHz machine would struggle so much to draw 
16 colour graphics when my puny little 7 MHz Amiga could draw 32 colour 
graphics instantly, and at higher resolutions...

...and then, like I said, the Amiga's hardware stood still for 10 years. 
Kinda lost the edge after that.

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*



From: Darren New
Subject: Re: Back to the future [~200KBbu]
Date: 29 Jul 2008 14:32:44
Message: <488f624c$1@news.povray.org>
Orchid XP v8 wrote:
> Unless quantum computing ever works some day, and it turns out to have 
> _fundamentally_ different capabilities, the halting problem will never 
> be solved.

Quantum computing (today) isn't even known to solve NP-complete 
problems in polynomial time, let alone non-computable problems. :-)

-- 
Darren New / San Diego, CA, USA (PST)
  Helpful housekeeping hints:
   Check your feather pillows for holes
    before putting them in the washing machine.



From: Darren New
Subject: Re: Back to the future [~200KBbu]
Date: 29 Jul 2008 14:42:47
Message: <488f64a7$1@news.povray.org>
Jim Henderson wrote:
> This is the problem, though:  The assumption is that computing will 
> always use a Turing model, like I said.

No, computing today doesn't use a Turing model, and the Halting 
Problem applies to many more computing models than the Turing machine.

The Halting problem isn't solvable. If you come up with a new computing 
model that "solves" it, what you're solving isn't the halting problem 
any more.

It's like arguing "Maybe 2+2 will equal 6 some day, if 2 turns into 3." 
But if 2 turns into 3, you're no longer adding 2+2.

The halting problem is a precisely defined mathematical construct. 
Newer computing models might conceivably make the implications of the 
halting problem obsolete, but they won't actually negate its proof. (In 
the same sense, the fact that computers are far faster may make the 
problems caused by some algorithms taking O(N^3) instructions obsolete, 
but it doesn't make those algorithms take fewer instructions.)
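
To make the construct concrete, here's the classic argument sketched 
in Python. halts() is purely hypothetical; it's exactly the thing the 
proof shows cannot exist, on any machine that can run the equivalent 
of this construction:

# Hypothetical oracle: True if program(arg) eventually halts, else False.
# Assumed to exist only for the sake of contradiction.
def halts(program, arg):
    ...

def paradox(program):
    # Do the opposite of whatever the oracle predicts about running
    # the program on its own source.
    if halts(program, program):
        while True:        # predicted to halt, so loop forever
            pass
    else:
        return             # predicted to loop forever, so halt at once

# Now consider paradox(paradox). If halts(paradox, paradox) returns True,
# paradox(paradox) loops forever; if it returns False, it halts. Either
# answer is wrong, so no total, always-correct halts() can exist.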

-- 
Darren New / San Diego, CA, USA (PST)
  Helpful housekeeping hints:
   Check your feather pillows for holes
    before putting them in the washing machine.


