Back to the future (Message 111 to 120 of 234)
From: Phil Cook
Subject: Re: Back to the future [~200KBbu]
Date: 25 Jul 2008 06:15:52
Message: <op.ueuepe2ac3xi7v@news.povray.org>
And lo on Thu, 24 Jul 2008 20:31:59 +0100, Jim Henderson  
<nos### [at] nospamcom> did spake, saying:

> On Thu, 24 Jul 2008 20:24:51 +0100, Orchid XP v8 wrote:
>
>>>> I meant you can't just give a machine a BW picture of a tree and have
>>>> it automatically know to turn it green. That's impossible.
>>>
>>> I don't know that to be the case.  Again, a case of one's ability to
>>> fathom how something like that is done doesn't translate to "there's no
>>> way it could possibly be done".
>>
>> It's a basic premise of signal processing that you cannot recover data
>> that isn't there any more. Shannon's theorem and all that.
>>
>> Whether you can *fake* something that "looks" right is another matter.
>> But *recover*? No. Impossible.
>
> At least as far as we know today.

If the grains in the film reacted to colour in some currently unreadable  
fashion and/or those alterations were transferred to the photo itself then  
you could, in theory, recover colour from a B&W photo or film by reading  
those imperfections.

-- 
Phil Cook

--
I once tried to be apathetic, but I just couldn't be bothered
http://flipc.blogspot.com



From: Orchid XP v8
Subject: Re: Back to the future [~200KBbu]
Date: 25 Jul 2008 13:27:12
Message: <488a0cf0$1@news.povray.org>
Phil Cook wrote:

>>> Whether you can *fake* something that "looks" right is another matter.
>>> But *recover*? No. Impossible.
>>
>> At least as far as we know today.
> 
> If the grains in the film reacted to colour in some currently unreadable 
> fashion and/or those alterations were transferred to the photo itself 
> then you could, in theory, recover colour from a B&W photo or film by 
> reading those imperfections.

Now *that* at least makes sense, hypothetically.

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*



From: Chambers
Subject: Re: Back to the future
Date: 28 Jul 2008 05:08:06
Message: <488d8c76$1@news.povray.org>
scott wrote:
>> 16 colours out of 16, versus 32 out of 4,096? Seems like a fairly big 
>> difference to me. ;-)
> 
> Yeah, that said, we've been stuck at 2^24 out of 2^24 for a while now... 
> ;-)

Wasn't the Parhelia the board that did 10 bits per channel, back around 
2004?  It also offered tri-monitor support for "surround gaming", or 
something.

I know people claim you can't tell the difference with more bits, but 
honestly I still see banding in "truecolor" (i.e. 8 bits per channel) images.

I can't wait for the day that double precision colors become mainstream, 
and DACs offer 16 or 32 bits per channel in the display :)
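
For what it's worth, the banding is easy to reproduce numerically: quantise 
a smooth ramp and count the plateaus. A minimal Python sketch (NumPy 
assumed; the 4096-pixel ramp width is an arbitrary illustrative choice):

  import numpy as np

  # A smooth black-to-white ramp, 4096 pixels wide.
  ramp = np.linspace(0.0, 1.0, 4096)

  for bits in (8, 10):
      levels = 2**bits - 1
      quantised = np.round(ramp * levels) / levels  # snap to n-bit values
      bands = np.count_nonzero(np.diff(quantised)) + 1
      print(f"{bits}-bit: {bands} levels, bands ~{4096 / bands:.0f} px wide")

At 8 bits the ramp collapses to 256 plateaus roughly 16 pixels wide, which 
the eye picks out as bands; at 10 bits they shrink to about 4 pixels.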

...Chambers



From: Invisible
Subject: Re: Back to the future
Date: 28 Jul 2008 05:19:04
Message: <488d8f08$1@news.povray.org>
Chambers wrote:

> I know people claim you can't tell the difference with more bits, but 
> honestly I still see banding in "truecolor" (i.e. 8 bits per channel) images.

My laptop runs in a 24-bit graphics mode. However, the physical display 
hardware only supports 16-bit colour, and does dithering in hardware to 
produce the rest. The end result is, obviously, horrid.
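
Assuming the panel uses the common "565" layout (16 bits split 5/6/5 across 
red, green and blue; an assumption about this particular hardware), the 
loss is visible in the packing itself. A minimal Python sketch:

  # Pack a 24-bit "888" pixel into a 16-bit "565" pixel: red and blue keep
  # only 5 bits each and green keeps 6, so a 256-level gradient collapses
  # to 32 (or 64) levels unless the hardware dithers.
  def rgb888_to_rgb565(r: int, g: int, b: int) -> int:
      return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

  # Eight input shades land on every output shade:
  print(rgb888_to_rgb565(255, 0, 0) == rgb888_to_rgb565(248, 0, 0))  # True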

[But then my laptop's LCD is horrid anyway. No matter where you put your 
head, only 50% of the display is visible at any time - the other 50% 
shows up in negative. Talk about narrow viewing angle...!]

I can well understand somebody looking at a "24-bit image" on this 
16-bit display and concluding that 24-bits is insufficient. But 
honestly, on every *real* 24-bit display I've seen, there is no evidence 
of banding at all. Hell, my sister has a gigantic 42-inch LCD TV in her 
front room, and I'm watching digital TV and playing COD4 on a PS3, all 
in 24-bit colour, and it looks damned *perfect*.

(...and then there are those people who claim to be able to tell the 
difference between 44.1 kHz and 48 kHz digital audio - despite the 
proven scientific impossibility of this feat.)
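
(The basis of that claim is the sampling theorem: a rate of fs captures 
content only up to fs/2, and both limits sit above the usual ceiling of 
human hearing. A two-line illustration in Python:)

  # Nyquist limit: a sample rate of fs represents frequencies up to fs/2.
  for fs_hz in (44_100, 48_000):
      print(f"{fs_hz} Hz sampling -> {fs_hz / 2:.0f} Hz bandwidth")
  # 22050 Hz vs 24000 Hz: both beyond the ~20 kHz limit of human hearing.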

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*



From: Warp
Subject: Re: Back to the future
Date: 28 Jul 2008 08:12:06
Message: <488db796@news.povray.org>
Invisible <voi### [at] devnull> wrote:
> My laptop runs in a 24-bit graphics mode. However, the physical display 
> hardware only supports 16-bit colour, and does dithering in hardware to 
> produce the rest. The end result is, obviously, horrid.

  The end result would often be more horrid without the dithering. The
second image at http://en.wikipedia.org/wiki/Ordered_dithering is a rather
good example of how dithering can actually help the quality of the image,
even if the dithering algorithm is braindead simple.
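
The algorithm behind that Wikipedia image is a Bayer threshold matrix. A 
minimal Python sketch of the idea (NumPy assumed; the 4x4 matrix and the 
5-bit target are illustrative choices):

  import numpy as np

  # 4x4 Bayer threshold matrix, normalised to [0, 1).
  BAYER4 = np.array([[ 0,  8,  2, 10],
                     [12,  4, 14,  6],
                     [ 3, 11,  1,  9],
                     [15,  7, 13,  5]]) / 16.0

  def ordered_dither(channel, bits):
      # Quantise a float channel (values in [0, 1]) to `bits` bits, adding
      # the tiled threshold before truncating so the quantisation error
      # becomes a fine regular pattern rather than solid bands.
      levels = 2**bits - 1
      h, w = channel.shape
      threshold = np.tile(BAYER4, (h // 4 + 1, w // 4 + 1))[:h, :w]
      return np.floor(channel * levels + threshold) / levels

  # A smooth ramp pushed down to 5 bits (the red/blue depth of a 565 mode).
  ramp = np.tile(np.linspace(0.0, 1.0, 256), (16, 1))
  print(ordered_dither(ramp, 5)[:2, :6])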

-- 
                                                          - Warp



From: Invisible
Subject: Re: Back to the future
Date: 28 Jul 2008 09:26:19
Message: <488dc8fb@news.povray.org>
>> My laptop runs in a 24-bit graphics mode. However, the physical display 
>> hardware only supports 16-bit colour, and does dithering in hardware to 
>> produce the rest. The end result is, obviously, horrid.
> 
>   The end result would often be more horrid without the dithering.

Undoubtedly so. But all the dithering in the world can't really compare 
to having proper shades. (Unless the spatial resolution is *very* high. 
This is how printers get away with it...)
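
The printer trade-off is simple arithmetic: grouping device dots into 
halftone cells swaps spatial resolution for tone depth. A sketch with 
assumed, typical numbers:

  # An N x N halftone cell of pure black/white dots renders N*N + 1 grey
  # levels, at the cost of dividing the device resolution by N.
  dpi = 1200                # device dots per inch (assumed typical value)
  cell = 8                  # 8 x 8 dots per halftone cell
  grey_levels = cell * cell + 1
  effective_lpi = dpi / cell
  print(f"{grey_levels} grey levels at {effective_lpi:.0f} lines per inch")

That prints 65 grey levels at 150 lines per inch: far coarser positioning, 
but the error is spread too finely for the eye to resolve.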

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*



From: Jim Henderson
Subject: Re: Back to the future [~200KBbu]
Date: 28 Jul 2008 12:49:13
Message: <488df889$1@news.povray.org>
On Fri, 25 Jul 2008 11:13:52 +0100, Phil Cook wrote:

> And lo on Thu, 24 Jul 2008 20:31:59 +0100, Jim Henderson
> <nos### [at] nospamcom> did spake, saying:
> 
>> On Thu, 24 Jul 2008 20:24:51 +0100, Orchid XP v8 wrote:
>>
>>>>> I meant you can't just give a machine a BW picture of a tree and
>>>>> have it automatically know to turn it green. That's impossible.
>>>>
>>>> I don't know that to be the case.  Again, a case of one's ability to
>>>> fathom how something like that is done doesn't translate to "there's
>>>> no way it could possibly be done".
>>>
>>> It's a basic premise of signal processing that you cannot recover data
>>> that isn't there any more. Shannon's theorem and all that.
>>>
>>> Whether you can *fake* something that "looks" right is another matter.
>>> But *recover*? No. Impossible.
>>
>> At least as far as we know today.
> 
> If the grains in the film reacted to colour in some currently unreadable
> fashion and/or those alterations were transferred to the photo itself
> then you could, in theory, recover colour from a B&W photo or film by
> reading those imperfections.

That's kinda what I'm thinking.

Jim



From: Jim Henderson
Subject: Re: Back to the future [~200KBbu]
Date: 28 Jul 2008 17:51:32
Message: <488e3f64$1@news.povray.org>
On Fri, 25 Jul 2008 09:28:11 +0100, Invisible wrote:

>>>> Mathematical proofs have been proven wrong before, you know.
>>> Yes - but it's really extremely rare. Especially for very simple
>>> proofs. The ones that turn out to be wrong are usually the highly
>>> complex ones.
>> 
>> True.  But as I said, not *impossible*.
> 
> Not impossible, no.

Well, there you go. :-)

> Also, taking a shattered piece of glass and throwing the pieces at each
>> other in such a way that the individual atomic lattices just happen to
> line up perfectly and you end up with the original, unshattered piece of
> glass is perfectly "possible", it's merely "unlikely".
> 
> ...unlikely enough that no sane person bothers worrying about it.
> Similarly, the proof of the impossibility of an infinite compression
> ratio is *so* absurdly trivial that the chances of it being wrong are
> vanishingly small.
> 
> There are far more elaborate proofs that *might* be wrong - the four
>> colour map theorem immediately leaps to mind - but when one speaks about
> a proof so simple it can be stated in a few sentences... it's really
> astonishingly unlikely to be wrong.

True.  But unlikely != impossible.

>> I don't like absolutes when it
>> comes to things like this; people who think in absolutes usually limit
>> themselves, and also tend to have a very jaded view of the world.
>> 
>> To borrow a line from Patriot Games:  Shades of grey.  The world is
>> shades of grey.
> 
> Well... that's very nice, but unless somebody proves that the laws of
> logic as currently formulated have some really deeply *fundamental* flaw
> [in which case all of mathematics and science as we currently understand
> it is completely wrong], the halting problem isn't going to be disproved
> any time soon.

Maybe.  But then again, throughout history there are examples of really 
fundamental things needing to be adjusted.  Knowledge continues to grow.

>>> See, that's just it. The halting problem is unsolvable in a
>>> theoretical computer with an infinite amount of memory, allowed to run
>>> for an infinite amount of time. It's not a question of computers not
>>> being "powerful enough", the problem is unsolvable even theoretically.
>> 
>> Using current thinking about how computers work.  If/when the computer
>> science geniuses crack true AI, then the halting problem can be solved,
>> can it not?  Can not humans evaluate the halting problem, at least in
>> limited cases?
> 
> Let me be 100% clear about this: NO, even human beings CANNOT solve the
> halting problem. (I have a simple and easy counterexample to this.)
> 
> It is not a question of "not having good enough AI". It's a question of
> "there is a proof of a dozen lines or so that shows that no Turing
> machine program can ever exist which solves this problem".

This is the problem, though:  The assumption is that computing will 
always use a Turing model, like I said.

>> If you only think in terms of Turing machine-style computers, then
>> you're absolutely right.  But Turing machines are not (or rather, may
>> not be) the end-all be-all of computing for the rest of the life of the
>> universe.
>> 
>>> Unless quantum computing ever works some day, and it turns out to have
>>> _fundamentally_ different capabilities, the halting problem will never
>>> be solved.
>> 
>> *Bingo*, that's my point.  There's that "unless" phrase.
> 
> I would like to point out that even if you assume that some hypothetical
>> device exists which can easily solve the Turing machine halting
> problem, there is now a *new* version of the halting problem (namely,
> does a program for this new machine ever halt?) which will still be
> unsolvable. And if you design a new machine that can somehow solve even
> this new "super-halting problem", you just end up with a
> super-super-halting problem. And so on ad infinitum.

Maybe.  It's hard to say that that would be the case, because the future 
is unknowable.

> The halting problem is not a consequence of the exact way a Turing
> machine works. It is a very basic consequence of simple logic, and
> applies to any hypothetical deterministic machine. (That's WHY it's such
> an important result.)

Maybe the halting problem was a bad example to illustrate my point; the 
bottom line still stands, though.  What was "impossible" yesterday 
becomes an everyday occurrence.  A thousand years ago it was 
"impossible" for the solar system to be heliocentric.  Today, it's 
common knowledge that the geocentric view is untrue.

>>> The impossibility of a lossless compression algorithm with an infinite
>>> compression ratio doesn't even depend on the model of computing used;
>>> it is a trivial exercise in logic.
>> 
>> Again, someday we may have really exceptional AI that can figure this
>> stuff out, not based on current computing technologies.
> 
> Intelligence - artificial or not - isn't the problem. It's not that
> nobody can work out *how* to do it, it's that IT'S IMPOSSIBLE.

Using today's technologies, maybe.  Again, we don't know what the future 
holds.  It is impossible to use paper in transistors.  Or is it?

> Now if we were talking about some phenomenon of physics, there would be
> at least some degree of uncertainty - we might be wrong about one of the
> "laws" of physics. There could be some edge case we don't know about
> yet. (E.g., Newton's laws of motion aren't quite 100% correct.)
>
> But we're talking about simple logic here. Unless there is some fatally
> dire flaw in our ability to comprehend logic [in which case, we're
> basically screwed anyway], infinite compression is entirely impossible,
> and always will be. It's not about current computer technologies; this
> is impossible for any deterministic technology that would hypothetically
> exist.

Not necessarily in our ability to comprehend logic, but perhaps in our 
ways of understanding how to process it.

>>> And *my* point is that some things are "impossible" because nobody has
>>> yet figured out how, while other things are "impossible" because they
>>> defy the laws of causality. And there's a rather big difference.
>> 
>> Sure, but solving the halting problem or properly colouring a photo
>> that started in black and white is not something that defies the laws
>> of causality.  It merely defies our technological abilities at this
>> time.
> 
> This is precisely my point: Solving the halting problem DOES defy the
> laws of causality. It is NOT just a problem of technology. It is a
> problem of "if this algorithm were to exist, it would cause a logical
> paradox, regardless of the technology used".

There again, maybe I chose a poor example to illustrate my point.

Jim



From: Invisible
Subject: Re: Back to the future [~200KBbu]
Date: 29 Jul 2008 04:10:39
Message: <488ed07f$1@news.povray.org>
>> If the grains in the film reacted to colour in some currently unreadable
>> fashion and/or those alterations were transferred to the photo itself
>> then you could, in theory, recover colour from a B&W photo or film by
>> reading those imperfections.
> 
> That's kinda what I'm thinking.

...so in other words, hypothetically the information might not be 
"gone". If that were indeed the case, it is at least plausible that 
somebody could possibly get it back, yes.

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*



From: Invisible
Subject: Re: Back to the future [~200KBbu]
Date: 29 Jul 2008 04:23:52
Message: <488ed398$1@news.povray.org>
>> There are far more elaborate proofs that *might* be wrong - the four
>> colour map theorem immediately leaps to mind - but when one speaks about
>> a proof so simple it can be stated in a few sentences... it's really
>> astonishingly unlikely to be wrong.
> 
> True.  But unlikely != impossible.

You realise that it's completely possible that at one point today you'll 
try to swallow a mouthful of water and accidentally inhale it, right? 
People actually *die* like this. But I don't see you worrying about it - 
because it's rather unlikely. And yet it is many, many times *more* 
likely than somebody solving the Halting Problem.

>> Well... that's very nice, but unless somebody proves that the laws of
>> logic as currently formulated have some really deeply *fundamental* flaw
>> [in which case all of mathematics and science as we currently understand
>> it is completely wrong], the halting problem isn't going to be disproved
>> any time soon.
> 
> Maybe.  But then again, throughout history there are examples of really 
> fundamental things needing to be adjusted.  Knowledge continues to grow.

Scientific facts have been found to be incorrect. There are far fewer 
examples of mathematical truths which have needed to be adjusted. And 
there are vanishingly few examples of widely accepted *proofs* that turn 
out to be wrong - it tends to be things lots of mathematicians "think" 
are true that are eventually disproven.

>> Let me be 100% clear about this: NO, even human beings CANNOT solve the
>> halting problem. (I have a simple and easy counterexample to this.)
>>
>> It is not a question of "not having good enough AI". It's a question of
>> "there is a proof of a dozen lines or so that shows that no Turing
>> machine program can ever exist which solves this problem".
> 
> This is the problem, though:  The assumption is that computing will 
> always use a Turing model, like I said.

And like I have explained multiple times, the problem isn't the Turing 
model. Even if we assume some as-yet unknown technology with miraculous 
[but not unlimited] powers, we still have an unsolvable problem. That's 
what is so significant about the Halting Problem. (I mean, face it, what 
*practical* use does HP have? None. Its importance is that it 
demonstrates that some problems are unsolvable.)

>> I would like to point out that even if you assume that some hypothetical
> device exists which can easily solve the Turing machine halting
>> problem, there is now a *new* version of the halting problem (namely,
>> does a program for this new machine ever halt?) which will still be
>> unsolvable. And if you design a new machine that can somehow solve even
>> this new "super-halting problem", you just end up with a
>> super-super-halting problem. And so on ad infinitum.
> 
> Maybe.  It's hard to say that that would be the case, because the future 
> is unknowable.

You're missing my point: It doesn't *matter* that we can't know the 
future. Simple logical deduction demonstrates that ANY machine we can 
construct will have the same problem, REGARDLESS of how it works.
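
That deduction is the classic diagonal argument, and it can be written 
down almost verbatim. A Python sketch; `halts` is the hypothetical 
decider, which is exactly what the argument shows cannot be implemented:

  def halts(program, data):
      # Hypothetical decider: returns True iff program(data) would halt.
      # The construction below shows that no implementation can exist.
      raise NotImplementedError

  def paradox(program):
      # Do the opposite of whatever the decider predicts.
      if halts(program, program):
          while True:       # decider said "halts", so loop forever
              pass
      else:
          return            # decider said "loops", so halt at once

  # paradox(paradox) halts if and only if halts(paradox, paradox) returns
  # False, i.e. if and only if paradox(paradox) does not halt. Nothing
  # here is Turing-specific: any deterministic machine that can be fed
  # its own description runs into the same contradiction.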

> Maybe the halting problem was a bad example to illustrate my point; the 
> bottom line still stands, though.  What was "impossible" yesterday 
> becomes an everyday occurrence.  A thousand years ago it was 
> "impossible" for the solar system to be heliocentric.  Today, it's 
> common knowledge that the geocentric view is untrue.

Once again, you are talking about science.

In science, it is always possible that some "fact" could turn out to be 
partially or wholly wrong. We think we know how the universe works, but 
we could always be wrong about something.

Mathematics is different. Not quite as different as was originally 
believed, but still. In mathematics, we can construct absolute truths 
which will never be disproven until the end of the universe itself. The 
only question mark is the reliability of the human mind.

For sufficiently complicated proofs, it becomes not merely possible but 
*plausible* that some mistake could exist. For sufficiently simple 
proofs, we can be absolutely certain that only a fundamental flaw which 
renders all of mathematics invalid could disprove the theorem.

In summary: Science can never have absolute truths. Mathematics can.

>> Intelligence - artificial or not - isn't the problem. It's not that
>> nobody can work out *how* to do it, it's that IT'S IMPOSSIBLE.
> 
> Using today's technologies, maybe.  Again, we don't know what the future 
> holds.  It is impossible to use paper in transistors.  Or is it?

Making transistors out of paper is a question of physics - a branch of 
science. An infinite compression ratio is a question of mathematics. 
Therein lies the critical difference.
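
The mathematics in question is a pigeonhole count, and it fits in a few 
lines of Python (the 8-bit input length is chosen arbitrarily):

  # A lossless compressor that shortened *every* n-bit input would map
  # 2**n distinct inputs onto the set of all strictly shorter strings,
  # of which there are only 2**n - 1. Two inputs must collide, so the
  # scheme cannot be lossless.
  n = 8
  inputs = 2**n                          # distinct n-bit strings
  outputs = sum(2**k for k in range(n))  # strings shorter than n bits
  print(inputs, outputs)                 # 256 vs 255: one pigeonhole short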

>> This is precisely my point: Solving the halting problem DOES defy the
>> laws of causality. It is NOT just a problem of technology. It is a
>> problem of "if this algorithm were to exist, it would cause a logical
>> paradox, regardless of the technology used".
> 
> There again, maybe I chose a poor example to illustrate my point.

I'm not sure what your point is.

If your point is that science is sometimes wrong, or at least needs to 
be amended, then I agree. If your point is that widely held beliefs are 
sometimes wrong, then I also agree. If your point is that every proven 
mathematical result could actually be wrong, then I completely disagree.

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*


