[ILUG] SSH dictionary attacks.
paul at clubi.ie
Sun Aug 27 15:58:26 IST 2006
On Fri, 25 Aug 2006, Aine Douglas wrote:
> I assure you it's more to do with the time of the week than the time of
> the month. It's over-burdensome getting into the bowels of the PKCS
> standards on the back of a query about SSH dictionary attacks...
As Colm pointed out, there is no need for every PKCS user to "get
into the bowels" of it. It just takes one person, and it's a click away.
And as he pointed out, openssl already seems to support manipulating
PKCS#12 keystores. Try 'man pkcs12', which you may already have
installed, otherwise try installing a newer version of OpenSSL.
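For example, a minimal round trip with the openssl pkcs12 tool (the
file names and the password here are placeholders, not anything from
the thread):

```shell
# Generate a throwaway key and self-signed cert to bundle (example names)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=example" \
    -keyout key.pem -out cert.pem
# Pack key + cert into a password-protected PKCS#12 keystore
openssl pkcs12 -export -inkey key.pem -in cert.pem \
    -passout pass:secret -out store.p12
# Extract the private key again: whoever holds the keystore and
# its password gets the raw key back out
openssl pkcs12 -in store.p12 -nocerts -nodes \
    -passin pass:secret -out recovered.pem
```

Note the password protects the keystore file; once it is supplied, the
key is ordinary PEM again - the protection is for whoever holds the
file, not for whoever issued the key.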
> In the event of a courtcase, and you claiming that Ros had your
> private key, if ros has such a policy in place they would be able
> to show that every version of the ros applet that runs on their
> site was tested for validity using signature verification, and they
> have archived the codebase for every issued version of their
> software, and thus can prove that no version of their software ever
> uploaded your key. If they adopt long lived signatures on their
> archives, then they'll be able to do that.
> On a personal level, I trust Sun to only trust trustworthy security
> providers in my JVM. That's a personal trust decision made by me.
Your trust in Sun is appreciated. I suspect Sun (who have many
excellent security people, from the top of the company on down,
starting with Whit Diffie) perhaps would tell you the same thing as
Colm is telling you - that the key file password protection is to
protect the *user*, not the issuer (at least, when it is the user who
'opens' and decrypts the key file).
Security is an old, old field. The tendency for people to place too
much faith in the abilities of new technology to "solve" security is
just as old, be it by the mistake of the inventor(s), and/or the
misunderstanding of users. The latter of those is possibly what you
have fallen prey to.
Technology is a tool. It can provide ways to establish trust; the
danger is in assuming it provides trust where it does not (in a very
general sense, i.e. including "mistaking the form of the trust" for
the trust itself).
> Its a judgement call, and in my judgement, I'd prefer to have my
> keys stored in a p12 as if used properly by properly created
> applications, my private key should never touch the raw disk, or
> raw ram, or pagefile.
The only way this is possible is if you have some hardware which
never /ever/ gives up the secret key, hardware which handles the
public key cryptography entirely internally.
Such hardware is available but AFAIK it's not something you could
conveniently issue to every remote-access user, for reasons of price
and size. The "never *ever* give up the secret key" aspect
particularly can make such hardware expensive. Cheaper, smaller
'smartcards' likely still allow a determined attacker to recover the
secret key (e.g. the credit card and "ChipKnip" banking smartcards
are vulnerable iirc).
Simply put: If the public key crypto is done in software on the host,
it's just unavoidable for the secret key to end up in user-readable
memory as things stand.
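As an illustration of the same point (file names and the passphrase
are placeholders): once the user's software has been given the
passphrase, the unprotected key is one command away.

```shell
# Create a passphrase-protected RSA key (passphrase is an example)
openssl genrsa -aes256 -passout pass:secret -out enc.pem 2048
# Any software the user hands the passphrase to holds the raw key in
# its (user-readable) memory, and can just as easily write it out
# unencrypted
openssl rsa -in enc.pem -passin pass:secret -out plain.pem
```

The second command is not an attack; it is simply what every piece of
software that uses the key does internally anyway.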
At best, you could make it readable only to the administrator, but
systems with privileged keystores and public-key crypto are not
common, and anyway such a system would likely still depend on the
user to provide the keys for the keystore - it would protect the
user, not the issuer of the key.
If it is the user who must manipulate the keystore, they must have
the secret key.
Other than using expensive hardware, if you want the user to
regularly validate their password, you need to regularly issue them
new keys. There are security systems based on this principle (see,
e.g., Kerberos), but I suspect it might not really be practical for
PKI or other public-key systems.
1. As always in security, "never" or "unbreakable" is a matter of
degree which assumes some arbitrary limit to the resources of an
attacker.
2. Any software tools the user uses are just /tools/, acting on the
user's behalf.
3. There are things like TPM, but even then (unless you are in bed
with, or are, the hardware manufacturer) the software still acts on
the user's behalf, not yours.
Even a proprietary OS that uses TPM still can't be trusted unless:
a) it's shipped by the hardware manufacturer, and
b) it can never be reinstalled from untrusted media.
For a 3rd party to be able to avail of this trust, they'd need to
be "in bed with" the OS manufacturer so as to allow the OS to
recognise the 3rd party's software and only run trustworthy
versions of it, and subvert the user's control.
TPM to this degree isn't here yet btw.
Also, the more the OS manufacturer 'recognises' 3rd parties, the
wider the hierarchy of trust becomes, and the more likely it is to be
compromised somewhere (particularly with any 3rd parties who are
allowed to insert privileged code).
Eventually, the 'trust' is so 'wide' it is diluted to a degree
that renders it meaningless.
Paul Jakma paul at clubi.ie paul at jakma.org Key ID: 64A2FF6A
We can predict everything, except the future.