On 06.06.2011 20:14, Darren New wrote:
> For that matter, there's very few systems out there that require all
> your code to be digitally signed and which checks it every time you fire
> it up, other than game consoles and phones for copy-protection purposes.
... and still, even /those/ seem to keep dropping their pants if
addressed in the right manner. And you'd think, if anyone is willing to
invest real money into such systems, it would be the makers of such
systems (apart from military or secret agencies of course).
Seems like the problem is not /that/ easily solved...
On 6/6/2011 11:10, Orchid XP v8 wrote:
> Actually, last time I tried it, you can't do this in Windows either.
Not only can you dismount it, when you remount it, you have the same cwd.
Since every drive has a cwd, it would be impossible to dismount any drive if
you couldn't dismount one where you had a cwd set.
>> How come every time you come across legacy support, you say it's a
>> kludge or a wtf or a random? :-)
>
> What was a good idea 40 years ago is not necessarily a good idea today.
That doesn't make it a kludge or a wtf or a random. It makes it an idea
whose time has passed.
--
Darren New, San Diego CA, USA (PST)
"Coding without comments is like
driving without turn signals."
On 6/6/2011 12:51, clipka wrote:
> ... and still, even /those/ seem to keep dropping their pants if addressed
> in the right manner.
Usually through the use of a hardware hack, though.
The Sony hack required the hacker to actually wire up the motherboard with a
switch shorting out traces. The XBox hack was, IIRC, not so much a flaw in the
security of the system as a broken game that let you run code from a saved
game (via a buffer overflow or some such).
That's exactly why you need to do something like formal logical checks that
your code does the right thing at the lowest levels, then make everyone
conform to that.
When you're talking security against malware rather than copy protection,
it's a lot easier to get right, in some ways, because you don't *ever* want
the malware to run. Nobody is going to be sticking debuggers on the chip or
shorting out motherboard traces in order to get malware running on their own
machine.
--
Darren New, San Diego CA, USA (PST)
"Coding without comments is like
driving without turn signals."
On 06.06.2011 22:15, Darren New wrote:
> On 6/6/2011 12:51, clipka wrote:
>> ... and still, even /those/ seem to keep dropping their pants if
>> addressed
>> in the right manner.
>
> Usually through the use of a hardware hack, though.
>
> The Sony hack required the hacker to actually wire up the motherboard
> with a switch shorting out traces. The XBox hack was, IIRC, not so much
> a flaw in the security of the system as a broken game that let you run
> code from a saved game (via a buffer overflow or some such).
>
> That's exactly why you need to do something like formal logical checks
> that your code does the right thing at the lowest levels, then make
> everyone conform to that.
Ah yes - a formal proof... pretty useful if your intention is to make
sure a security-critical system never fails due to unforeseen errors.
As for making sure that a system is secure against /malicious intent/, I
believe it's pretty useless.
Just have a look at smart cards: For a given smart card design, you may
formally prove that there is no input sequence that makes the card
disclose even a single bit of its secret key...
... on its /official/ interface, i.e. the I2C bus data lines. But such a
formal analysis typically forgets a few other channels on which the
device is leaking information. For instance, an analysis may forget
about the timing of the data output, which might give hints about what's
going on inside. Or the power the chip consumes at any given time.
Say what you will - I think a formal analysis can never foresee /all/
possible attack vectors a system might exhibit.
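The timing channel described above can be illustrated in miniature (a hypothetical sketch, not taken from any actual smart card): a naive secret comparison that bails out at the first mismatching byte runs longer the more leading bytes the attacker has guessed correctly, so its running *time* leaks information even though its boolean result does not.

```python
import hmac

def naive_compare(secret: bytes, guess: bytes) -> bool:
    """Returns early at the first mismatch: the running time depends on
    how many leading bytes of the guess are correct -- a timing channel."""
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:
            return False
    return True

def constant_time_compare(secret: bytes, guess: bytes) -> bool:
    """hmac.compare_digest examines every byte regardless of mismatches,
    so its running time does not depend on where the first mismatch is."""
    return hmac.compare_digest(secret, guess)
```

Both functions return the same booleans; what an attacker exploits is purely how long naive_compare takes to return False.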
On 6/6/2011 15:13, clipka wrote:
> Ah yes - a formal proof... pretty useful if your intention is to make sure a
> security-critical system never fails due to unforeseen errors.
That is the intention.
> As for making sure that a system is secure against /malicious intent/, I
> believe it's pretty useless.
No, it's very good at that.
> ... on its /official/ interface, i.e. the I2C bus data lines.
Naturally. All you're saying is that a formal system won't break as long as
the underlying system matches the formalism.
It's *much* more difficult to mount an attack against a smart card when
you're not physically in possession of the smart card than when you are.
Plus, again, the attacks on the smart card aren't the same category as
malware. With DRM, the card (or console or whatever) has to be able to
perform its operations while hiding its results from observers. With
malware, the system doesn't want to perform the operations at all.
The system relies on programs to obey the rules that the system enforces,
just like your operating system relies on the hardware doing what the
hardware says it does.
> Say what you will - I think a formal analysis can never foresee /all/
> possible attack vectors a system might exhibit.
It depends on the kind of attack you're trying to prevent. It's easy to show
that formal analysis can guarantee you never run off the end of an array to
access memory belonging to another process. How much malware have you seen
that takes advantage of flaws in the CPU mask? Formal analysis can foresee
all possible attack vectors for particular attacks, assuming the mathematical
system is isomorphic to the physical system. Of course, if you're trying to
keep information in one program from being observed by another, you have to
take care that things not accounted for in the formalism (such as timing)
don't happen. But the things accounted for by the formalism can be shown not
to happen, as long as the hardware (et al.) obeys the same rules as the
formalism.
In other words, it becomes far easier to check you're right, because you've
written only a small number of rules that you have to check by hand, and the
computer deduces from them that the top-level properties you want to hold do
indeed hold.
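The array-bounds claim can be sketched in miniature. The toy checker below (hypothetical, not any real verifier) runs an interval analysis over a one-line "program" -- read array[input + 3] -- and decides whether the access is in bounds for *every* possible input in a given range, rather than testing a handful of inputs:

```python
def interval_add(a: tuple, b: tuple) -> tuple:
    """Add two closed intervals (lo, hi) of possible values."""
    return (a[0] + b[0], a[1] + b[1])

def access_provably_safe(input_range: tuple, array_len: int) -> bool:
    """Toy program under analysis: idx = input + 3; read array[idx].
    The access is safe iff the whole interval of possible indices
    lies within [0, array_len)."""
    idx = interval_add(input_range, (3, 3))
    return 0 <= idx[0] and idx[1] < array_len
```

access_provably_safe((0, 4), 8) proves the access safe for all five inputs at once; widening the input range to (0, 5) makes the proof fail, because input 5 would read array[8].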
And there have been formally-verified chip designs as well, where the
assembly language was expressed in math and they proved that the chip mask
made the chip follow the spec. (To some extent, I assume. I would imagine it
would be difficult to prove that, for example, what you etched is what you
thought you etched.)
--
Darren New, San Diego CA, USA (PST)
"Coding without comments is like
driving without turn signals."
On 06/06/2011 08:52 PM, Darren New wrote:
> On 6/6/2011 11:10, Orchid XP v8 wrote:
>> Actually, last time I tried it, you can't do this in Windows either.
>
> Not only can you dismount it, when you remount it, you have the same cwd.
>
> Since every drive has a cwd, it would be impossible to dismount any
> drive if you couldn't dismount one where you had a cwd set.
Oh I see. You're saying that you can dismount it even if there's a
remembered path. I'm saying you can't dismount it if it's the *current*
path.
>> What was a good idea 40 years ago is not necessarily a good idea today.
>
> That doesn't make it a kludge or a wtf or a random. It makes it an idea
> whose time has passed.
A kludge to rush something to market a few months earlier is still a
kludge 40 years later, yes.
On 6/7/2011 1:09, Invisible wrote:
> Oh I see. You're saying that you can dismount it even if there's a
> remembered path. I'm saying you can't dismount it if it's the *current* path.
Oh, I see. This changed over time to actually include a lock on the current
path. It didn't always do that.
> A kludge to rush something to market a few months earlier is still a kludge
> 40 years later, yes.
Well, in that event, pretty much everything in the world is a kludge,
including every human being (who is born some 9 months earlier than they
should be so the head can fit through the birth canal.)
--
Darren New, San Diego CA, USA (PST)
"Coding without comments is like
driving without turn signals."
>> A kludge to rush something to market a few months earlier is still a kludge
>> 40 years later, yes.
>
> Well, in that event, pretty much everything in the world is a kludge,
> including every human being
Oh hey, biology is, like, the *ultimate* super-kludge!
Then again, biology wasn't designed by a conscious intelligence.
Now here's another random thing:
The syntax for a drive is a letter followed by a colon.
The syntax for a device is a few letters followed by a colon.
The syntax for an ADS is a file name followed by a colon followed by a
stream name.
So consider the string "C:Foo".
Is that the file "Foo" in the current directory on "C:"?
Or is that a file named "C" with an ADS named "Foo"?
Answers on a postcard...
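A toy classifier makes the ambiguity concrete (a hypothetical sketch; how real Windows resolves such a name depends on the API and context):

```python
def readings(name: str) -> list:
    """Return the plausible interpretations of a colon-containing name."""
    head, sep, tail = name.partition(":")
    if not sep:
        return [("plain file", name)]
    out = []
    if len(head) == 1 and head.isalpha() and tail:
        # A single letter before the colon: a drive-relative path,
        # i.e. the file "tail" in the current directory of that drive.
        out.append(("drive-relative path", head, tail))
    if head and tail:
        # Any file name before the colon: an alternate data stream
        # named "tail" attached to the file "head".
        out.append(("ADS", head, tail))
    return out
```

readings("C:Foo") yields both a drive-relative reading and an ADS reading, which is exactly the ambiguity above; readings("Report:Foo") yields only the ADS reading, because "Report" cannot be a drive letter.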
On Wed, 08 Jun 2011 11:57:48 +0100, Invisible wrote:
> So consider the string "C:Foo".
>
> Is that the file "Foo" in the current directory on "C:"? Or is that a
> file named "C" with an ADS named "Foo"?
It's just a string; the context in which it is used is what matters. For
example, if "Foo" were a file, it would hardly be stored as the cwd in
memory on the machine.
Jim