Dear Mr. Chairman,
I offer these supplemental remarks for the record following my appearance before your committee on March 20, 1997, in support of H.R. 695, the Security and Freedom through Encryption (SAFE) Act.
I would like to respond to the government witnesses on the first panel. Their testimony included numerous inaccurate and/or misleading statements. I have heard and rebutted many of these claims in my court case, and I would like to do the same in this forum.
The government witnesses go on at great length about the need for what they call a "key management infrastructure" (KMI) to authenticate public encryption keys. See, for example, pages 3-7 of Mr. Crowell's written statement.
This much is true. But on page 7, Mr. Crowell claims, incredibly, that no such infrastructure yet exists.
The civilian crypto community is well aware of the need to authenticate public keys. The basic principles are documented at length in Applied Cryptography. Much of the code in a typical encryption software package (e.g., Pretty Good Privacy, PGP) is devoted to this one issue. And a viable and rapidly growing commercial Key Management Infrastructure now exists. Two complementary KMIs, in fact: the hierarchical KMI used for secure World Wide Web transactions, among other things, and the distributed "KMI" introduced by PGP. Each has its advantages.
The government may quibble that these KMIs don't "count" because they do not meet its own precise definition of a KMI. For example, contrary to the second item of the list on page 5 of Mr. Crowell's statement, a KMI need not (and should not) generate or store user private keys. Such practices constitute an unnecessary security risk. Key pairs are best generated by the end user's device, and only the public component of the key pair need ever leave the device.
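To make the point concrete, here is a minimal sketch in Python, using the third-party "cryptography" package. The parameters and the code itself are mine, offered only as an illustration: the key pair is generated entirely on the user's own device, and only the public half is ever exported.

    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    # Generated locally, on the end user's own device. The private key
    # object never needs to leave this process, let alone this machine.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # Only this public component is handed to the outside world, e.g.
    # to a certificate authority for signing or to a public key server.
    public_pem = private_key.public_key().public_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    print(public_pem.decode())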
If the DoD's KMIs for classified information really do generate and store all their private keys in a central location where they can be stolen, then one can only wonder how secure that classified information really is.
True, Mr. Crowell did not actually say that the DoD KMI he mentions is used to secure classified data, though he may well have intended us to draw that inference. Indeed, other NSA employees have reportedly said that key recovery is not incorporated into encryption systems approved for classified data "because of the obvious risks." Such statements speak volumes about the government's sincerity in supporting strong encryption for the law-abiding.
What counts about the civilian KMIs is that individuals and commercial entities do trust them for sensitive communications and commerce. For the government to say otherwise -- to imply that only it has the expertise to make encryption safe for commerce, and that a KMI must also generate and escrow secret keys before public keys can be trustworthy -- seems like a deliberate and cynical attempt to mislead Congress.
For example, several banks already rely on the encryption in web browsers such as Netscape Navigator and Microsoft Internet Explorer to protect transactions with their customers. Each bank has a public key certified by a "certificate authority" (CA) that attests to its authenticity. When the customer connects to the bank's web site, the user's web browser automatically verifies the bank's certificate. The customer can then conduct business knowing that he or she is actually talking to the bank's computer and not to some hacker's computer impersonating the bank.
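The verification step itself can be sketched in a few lines of Python using its standard "ssl" module. The hostname below is a placeholder of my own; a real browser performs the same certificate-chain and hostname checks internally.

    import socket
    import ssl

    host = "bank.example.com"  # placeholder for any HTTPS server

    # The default context loads the system's trusted CA certificates and
    # enables both certificate-chain validation and hostname checking.
    context = ssl.create_default_context()

    with socket.create_connection((host, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            # Reaching this point means the server presented a certificate
            # signed by a trusted CA and matching the requested hostname.
            print("Verified subject:", tls.getpeercert()["subject"])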
The CAs follow strict safeguards including secured rooms and tamper-resistant hardware. There are already ten commercial CAs listed on the Security Options page of the Netscape Navigator browser. Some are established companies, such as BBN, AT&T, GTE and MCI; others, such as VeriSign, are small entities that specialize in this service. One is even a governmental entity: the United States Postal Service.
Banks already doing business on the Internet include Bank of America and Wells Fargo. The latter even lets customers write checks to arbitrary recipients, provided that the customer use the non-exportable (strong encryption) version of Netscape Navigator for this particularly sensitive function. Clearly they have more trust in this "nonexistent" KMI than in the strongest encryption software the government will allow to be exported.
The other type of KMI is the distributed scheme embodied in PGP, where there are no formal CAs. Anyone can sign a key, but no one is required to honor the signature. One may trust only those keys he or she has personally signed, or one may also rely on signatures made by other persons whose competence and integrity in key signing he or she trusts. The important thing is that each user sets his or her own policy on key acceptance, while in the hierarchical scheme everyone is forced to trust the CA.
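The acceptance policy can be illustrated with a deliberately simplified sketch in Python. The names and trust assignments here are hypothetical, and real PGP implements a much richer version of this logic (multiple trust levels, signature thresholds, and so on).

    # Keys I have verified and signed personally.
    directly_signed = {"alice"}

    # People whose key-signing judgment I trust ("trusted introducers").
    trusted_introducers = {"alice"}

    # Who has signed each key I have encountered.
    signatures = {
        "bob": {"alice"},      # Bob's key carries Alice's signature
        "carol": {"mallory"},  # Carol's key is signed only by Mallory
    }

    def key_is_valid(key):
        """Accept a key I signed myself, or one signed by a trusted introducer."""
        if key in directly_signed:
            return True
        return bool(signatures.get(key, set()) & trusted_introducers)

    print(key_is_valid("bob"))    # True: Alice vouches for Bob
    print(key_is_valid("carol"))  # False: I do not trust Mallory's judgment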
PGP "key signing parties" are now common at large physical gatherings of Internet users. Each attendee identifies him or herself to the others' satisfaction, usually by exchanging drivers licenses, passports and personal introductions. The attendee then reads his or her public key (actually a "fingerprint" of this key) so that the others may verify its correctness and later sign that particular key.
Mr. Litt argues that encryption software obtained over the Internet cannot be trusted. He is apparently unaware that Internet downloading is already the preferred way to obtain PGP, SSH, Netscape Navigator and other popular encryption software packages.
Aside from this fact, his argument does not withstand scrutiny for two reasons:
First, much encryption software is posted to the Internet in source code form. It can be verified by anyone who cares to read it. For example, the source code to Applied Cryptography on the Internet site in Italy can be visually compared against the listings printed in the book. The same is true for the PGP source, which has also been published in book form.
The user can write a test program to compare the results of encrypting a given "test vector" with a result known in advance, and so forth. Even if most users lack the skill or the time to perform these tests themselves, experience shows that if a problem exists in a widely used piece of software, sooner or later someone will discover it and announce it widely on the Internet.
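Such a known-answer test can be only a few lines long. Here is a Python example using the SHA-1 hash function; the expected digest is the published test vector for the string "abc" from the federal standard (FIPS 180-1).

    import hashlib

    # Published SHA-1 test vector for the three-byte message "abc".
    expected = "a9993e364706816aba3e25717850c26c9cd0d89d"

    actual = hashlib.sha1(b"abc").hexdigest()

    # A tampered or miscompiled implementation would fail this check.
    assert actual == expected, "implementation does not match the test vector!"
    print("SHA-1 output matches the published test vector.")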
Open publication of encryption software not only makes it very difficult to conceal a deliberate flaw, it also facilitates the discovery of accidental flaws. The flaws in the Cellular Message Encryption Algorithm described by Wagner et al. certainly would have come to light sooner had the algorithm been openly published on the Internet. Clearly the cellular industry didn't do very well by relying on a "known and reliable supplier" that wasn't willing to openly publish its work.
Security flaws in software available on the Internet are often discovered by the public even when the relevant source code is not generally available. The best examples are the various security-related Web browser bugs that are occasionally discovered and announced, usually by college students. In these cases the discoverers were not deterred by having to reverse engineer enough of the program to detect the flaw.
Second, it is already common practice to cryptographically "sign" software distributions on the Internet to guard against malicious creation and/or modification. PGP is widely used for this purpose, not only for cryptographic software such as SSH (from Finland) and PGP itself, but for other software such as operating system patches and updates, particularly those with security implications. While this practice does not guarantee that the software is completely secure, it does eliminate an entire class of potential attacks.
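As a sketch of what the verification step looks like, here is how a detached signature might be checked in Python with the third-party "cryptography" package. The filenames are hypothetical, and PGP itself uses its own packet formats rather than the raw signature scheme shown here.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    # The publisher's public key, obtained out of band -- from a printed
    # book, for example, or verified in person at a key signing party.
    with open("publisher_key.pem", "rb") as f:
        public_key = serialization.load_pem_public_key(f.read())

    with open("package.tar.gz", "rb") as f:
        data = f.read()
    with open("package.tar.gz.sig", "rb") as f:
        signature = f.read()

    try:
        # Raises InvalidSignature if the package was modified in transit.
        public_key.verify(signature, data, padding.PKCS1v15(), hashes.SHA256())
        print("Good signature: the package is exactly as released.")
    except InvalidSignature:
        print("BAD signature: do not install this package.")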
While no one can ever say that a particular piece of software is absolutely secure, just as no one can say with certainty that it has no bugs, open publication and cryptographic authentication have made distributing software on the Internet no more risky in practice than distributing it by other means, such as floppy disks in retail stores (which are also not invulnerable to tampering).
Mr. Ames' case is especially illustrative, as much of the evidence against him apparently came from microphones physically planted in his house that picked up many incriminating telephone conversations. Even if Mr. Ames had been using an unbreakable encrypting telephone for these conversations, the bugs would still have picked up his side of them just fine.
One would not know it from Mr. Litt's dire warnings, but wiretaps are not law enforcement's sole investigative tool. The alternatives include audio bugs (as in the Ames case), visual surveillance, undercover infiltration, informants ("moles"), testimony of collaborators compelled through grants of immunity, information from cooperating witnesses and institutions (e.g., bank records), physical and forensic evidence, and the like. Strong encryption has no effect whatsoever on these methods. In fact, its widespread use could actually enhance them, e.g., by allowing an undercover officer or informant to communicate securely with law enforcement without raising the suspicions of the subjects of the investigation.
Perhaps they do not discuss these alternatives out of fear of compromising their effectiveness. Personally, I think they simply don't want to say anything that might weaken their argument.
Mr. Litt's citation on page 5 of a computer hacker who used encryption is particularly vexing. While encryption is only one tool for keeping hackers out of computers, it is a vital one. It is ironic that he would complain about a hacker using encryption to hide the evidence of his exploits while supporting export controls that limit our ability to stop these attacks in the first place.
This is a classic example of the danger Alexander Hamilton warned us about in Federalist No. 84.
Indeed, if one also looks at the Fifth Amendment, the Founding Fathers were adamant that a suspect could not be compelled to give information that aids his own prosecution, no matter how useful that information might be. Yet that is precisely what the government wants through "key recovery" -- it wants everyone, through the Hobson's choice of a key recovery system, to aid in their own possible prosecution at some future date. Perhaps even the government recognizes the clear Constitutional implications of this philosophy, which is why it has not yet dared to propose mandatory domestic key recovery.