A major theme in discussions of the influence of technology on society has been the computer as a threat to privacy. It now appears that the truth is precisely the opposite. Three technologies associated with computers—public-key encryption, networking, and virtual reality—are in the process of giving us a level of privacy never known before. The U.S. government is currently intervening in an attempt, not to protect privacy, but to prevent it.
1 For the history of cryptography, see David Kahn, The Codebreakers (New York: Macmillan, 1967).
2 In addition to the Clipper Chip, discussed below, there are now two public-domain programs which allow someone equipped with an ordinary personal computer and a modem to encrypt and decrypt speech in real time: PGPfone (released for the Macintosh, with a DOS/Windows version coming) and Nautilus (released for DOS/Windows).
3 The idea of public-key cryptography is due to Martin Hellman and Whitfield Diffie. The first successful implementation is due to Ronald Rivest, Adi Shamir, and Leonard Adleman, who patented their algorithms and formed a company, RSA Data Security. See Whitfield Diffie, “The First Ten Years of Public-Key Cryptography,” Proceedings of the IEEE, vol. 76 (1988), pp. 560–77. A good summary for the lay reader is Paul Fahn, “Answers to Frequently Asked Questions about Today's Cryptography”; the current version is available by anonymous FTP from RSA.com. Information is also available from RSA's home page on the World Wide Web at: http://www.rsa.com/.
4 A prime is an integer greater than 1 that is divisible only by itself and 1. The number 7 is a prime; 15 is not (its prime factors are 3 and 5). (The RSA algorithm is named for its inventors: Rivest, Shamir, and Adleman [see note 3].)
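For readers who want to see the definition in action, here is a minimal sketch in Python (the function names are mine, and trial division like this is hopelessly slow for numbers of cryptographic size):

```python
def is_prime(n: int) -> bool:
    """A prime has no divisor between 2 and its square root."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def prime_factors(n: int) -> list[int]:
    """Divide out the smallest remaining factor until nothing is left."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(is_prime(7))        # True
print(is_prime(15))       # False
print(prime_factors(15))  # [3, 5] -- 1 does not count as a prime factor
```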
5 Modern encryption involves very large numbers. A 128-bit key, for example, would be represented by about a thirty-eight digit number—say:
27,602,185,284,285,729,509,372,650,983,276,043,748.
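A quick calculation (mine, not part of the original note) confirms the size:

```python
# A 128-bit key is an integer between 0 and 2**128 - 1, so its decimal
# form runs to thirty-eight or thirty-nine digits.
print(2 ** 128 - 1)            # 340282366920938463463374607431768211455
print(len(str(2 ** 128 - 1)))  # 39 digits at the top of the range
print(len(str(27602185284285729509372650983276043748)))  # 38, the example above
```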
6 This statement, like other statements about the security of encryption schemes, depends in part on the length of the key. The longer the string of digits making up the key, the more difficult the encryption is to break.
7 I am describing simpler implementations of public-key encryption than those in common use, in order to keep the explanations in the text from becoming too complicated. Public-key encryption is slower than some equally secure single-key encryption schemes, so in practice the message is encrypted with a single-key system such as DES (Data Encryption Standard), the DES key is encrypted with the recipient's public key, and both are sent. The recipient uses his private key to decrypt the DES key, then uses that to decrypt the message.
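The steps of that hybrid protocol can be sketched in Python using the third-party `cryptography` package; the AES-based Fernet cipher stands in here for DES (which modern libraries no longer offer), and the key sizes, variable names, and sample message are mine:

```python
# Hybrid encryption sketch: the message is encrypted with a fast symmetric
# cipher, and only the short symmetric key is encrypted with the
# recipient's public key.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Recipient's key pair (in practice generated once, public half published).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender: encrypt the message with a fresh symmetric session key ...
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"Meet me at noon.")

# ... then encrypt the session key with the recipient's public key.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
encrypted_key = public_key.encrypt(session_key, oaep)

# Recipient: recover the session key with the private key, then the message.
recovered_key = private_key.decrypt(encrypted_key, oaep)
print(Fernet(recovered_key).decrypt(ciphertext))  # b'Meet me at noon.'
```

The design point is the one the note makes: the bulky message goes through the fast symmetric cipher, while the slow public-key operation touches only the short session key.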
Digital signature protocols in current use, such as those supported by VeriSign, Inc., are also more complicated than the one I have described. The sender applies a hash function to his message to create a message digest, encrypts that with his private key, and sends the message and the encrypted digest. (A hash function takes a long string of characters, such as a message, and produces a number from it, called the message digest; the number, although large, is much shorter than the string. The important characteristic of a good hash function is that it is easy to calculate the digest from a message, but very difficult, given a digest, to create a message that will hash to that number.) The recipient calculates the digest for himself by hashing the message, decrypts the digest attached to the message using the sender's public key, and checks that the two digests match. Since it is impractical to change a message without changing its digest (i.e., to find another message that hashes to the same value), a digital signature not only guarantees the sender's identity but also guarantees that the message has not been altered since it was signed. Since the digest is much shorter than the message, encryption and decryption are faster with this protocol than with the simpler one described in the text.
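The hash-then-sign pattern itself can be sketched with the same `cryptography` package; RSA-PSS with SHA-256 is my illustrative choice of signature scheme and hash function, and the library computes the message digest internally:

```python
# Digital-signature sketch: hash the message to a digest, sign the digest
# with the sender's private key, verify with the sender's public key.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"I agree to the stated terms."
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# Sender: sign (the library hashes the message to a digest internally).
signature = private_key.sign(message, pss, hashes.SHA256())

# Recipient: verify the signature against the message.
try:
    public_key.verify(signature, message, pss, hashes.SHA256())
    print("signature valid")
except InvalidSignature:
    print("signature invalid")

# Tampering with the message changes its digest, so verification fails.
try:
    public_key.verify(signature, b"I agree to the altered terms.",
                      pss, hashes.SHA256())
    print("tampered message: signature valid")
except InvalidSignature:
    print("tampered message: signature invalid")
```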
8 PGP (for Pretty Good Privacy) is a public-domain encryption computer program in widespread use, available for both DOS and Macintosh computers. It allows a user to create his own pair of keys, keep track of other people's public keys, and encrypt and decrypt messages. The latest released version can be found at the MIT distribution site for PGP, located on the Web at: http://web.mit.edu/network/pgp.
9 If one wishes current messages to stay private for the next ten years, one should use encryption adequate to defend against the computers of ten years from now—otherwise an adversary can intercept and record the messages now, then decrypt them when sufficiently powerful computers become available.
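To make the point concrete, here is a back-of-the-envelope estimate; the rate of one billion guesses per second is my own illustrative assumption, not a figure from the text:

```python
# Rough brute-force estimate: expected time to find a key by exhaustive
# search. On average an attacker tries half the keyspace before succeeding.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def years_to_search(key_bits: int, keys_per_second: float) -> float:
    return (2 ** key_bits / 2) / keys_per_second / SECONDS_PER_YEAR

for bits in (40, 56, 128):
    print(f"{bits}-bit key: {years_to_search(bits, 1e9):.3g} years")
# 40-bit: ~1.7e-05 years (minutes); 56-bit: ~1.1 years; 128-bit: ~5.4e+21 years
```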
10 The bandwidth of a communications channel is a measure of how much information it can carry—the wider the bandwidth, the more information can be transmitted per second.
11 Extensive information on Chaumian digital cash can be found on the World Wide Web at: http://www.digicash.com/publish/pu_fr_dc.html. See also David Chaum, “Security without Identification: Transaction Systems to Make Big Brother Obsolete,” Communications of the ACM, vol. 28, no. 10 (October 1985), pp. 1030–44. The Mark Twain Bank of St. Louis recently announced that it is implementing digital cash in partnership with Digicash.
12 The procedure is described in David Chaum, “The Dining Cryptographers Problem,” Journal of Cryptology, vol. 1 (1988), pp. 65–75; and David Chaum, “Achieving Electronic Privacy,” Scientific American, August 1992.
13 The statement that a transaction cannot be observed must be qualified in three ways. The sorts of encryption I have been discussing are all vulnerable to an observer with a big enough computer and enough time. A transaction is secure if no such observer exists who wants to crack it. And even transactions that are cryptographically secure may be vulnerable to attacks that do not depend on encryption, such as secretly searching your house for the piece of paper on which you have imprudently written down your private key. Even in a world of strong privacy, one must still be careful. Finally, even encryption schemes that are believed to be secure are not, in most cases, provably secure—there is always the possibility that a sufficiently ingenious attack, or some future development in the relevant branches of mathematics, could dramatically reduce the time needed to crack them. For a discussion of recent experience with the vulnerability of encryption schemes, see E. F. Brickell and A. M. Odlyzko, “Cryptanalysis: A Survey of Recent Results,” Proceedings of the IEEE, vol. 76 (1988), pp. 578–93.
14 This is the case because a computer program is a set of instructions that go together precisely. If one instruction tells the computer to fetch a number from a particular memory location, the instruction that put the number there has to have put it in the same location. If version one of the program stores the number in location 358,273, and version two stores it in location 527,203, then a program that gets its “put” instruction from version one and its “fetch” instruction from version two is not going to work. Multiply that problem by ten thousand instructions, and the chance that a program assembled from several different versions will run becomes essentially zero.
An alternative way for a would-be pirate to proceed is to try to decompile the program—deduce the source code from the machine language version—and then recompile, producing his own different but functionally equivalent version of the program. Decompiling is a hard problem—just how hard is a matter of controversy. It can be made harder by the use of compilers designed to produce machine code whose detailed working is hard to deduce.
The approach I am describing does not protect against copying observable features of a program; whether and when such copying violates present U.S. copyright law is a complicated question, the answer to which is not yet entirely clear.
15 Another way of protecting a computer program is to keep it on the owner's computer, and sell, not the program itself, but the right to access it over the network.
16 This is one of the central ideas of “True Names,” a science fiction story by a computer scientist who was one of the early writers to recognize the social implications of computer networks and associated technologies. In this story, individuals know each other in cyberspace, but their true names, their actual physical identities, are closely guarded secrets. See Vernor Vinge, “True Names,” originally published in 1981 and available in Vernor Vinge, True Names and Other Dangers (New York: Baen Books, 1987).
17 This point was first brought to my attention on the Extropians E-mail list. The idea of competitive private production of law in the physical world is explored in David Friedman, The Machinery of Freedom: Guide to a Radical Capitalism (La Salle, IL: Open Court, 1989), part III. For a more recent discussion, see David Friedman, “Law as a Private Good,” Economics and Philosophy, vol. 10 (1994), pp. 319–27.
18 Throughout this discussion, I have been assuming a virtual-reality technology that is only a modestly improved version of what we already have—sight and sound. A more radical alternative would be to bypass the sense organs and feed information directly to the brain—a controlled dream. With that technology, a virtual community would appear to its participants exactly like a real community; the only difference is that they can produce only virtual objects. Virtual food, however tasty, provides no nutrition, so real food must still be produced somewhere in the real world.
19 Robert Nozick, Anarchy, State, and Utopia (New York: Basic Books, 1974), part III.
20 The term was originated by Timothy C. May, author of the “Crypto Anarchist Manifesto,” which may be found on the Web at: http://www.quadralay.com/www/Crypt/Crypto-Anarchist/crypto-anarchist.html. For a more extensive exploration of May's thoughtful and unconventional views, see his “Cyphernomicon” at: http://www.swiss.ai.mit.edu/6095/articles/cyphernomicon/CP-FAQ.
21 It is worth noting that the Clinton administration, despite its support for controlling encryption via the Clipper Chip, has been an outspoken advocate of developing computer networks.
22 The reason we can expect governments to behave in this way is twofold. Individuals with political power, such as governors and congressmen, have very insecure property rights in their offices and thus little incentive to make long-term investments—to bear political costs now in order to produce benefits for the holders of those offices twenty years hence. Individual voters have secure property rights in both their citizenship and their franchise, but because of rational ignorance they are unlikely to have the information necessary to evaluate the long-term consequences of the policies followed by their government.
23 Consider the example of cordless phones, whose signals can be easily intercepted by anyone with a scanner. The more expensive models now include encryption, although it is unlikely that drug dealers, terrorists, or bankers make up a significant fraction of the relevant market. Most people prefer to be able to have at least some of their conversations in private.
24 Via a procedure known as anonymous file transfer protocol—anonymous because the user can access the server without identifying himself or supplying an individual password.
25 The case is Bernstein v. U.S. Department of State, et al. Information can be found on the World Wide Web at: http://www.eff.org/pub/Privacy/ITAR_export/Bernstein_case/. A second case has just been filed by Philip R. Karn, Jr., against the U.S. Department of State and Thomas E. McNamara, the Assistant Secretary of the State Department's Bureau of Political-Military Affairs, challenging the Arms Export Control Act, 22 U.S.C. 2778 et seq., and the International Traffic in Arms Regulations (ITAR). Information on that case can be found at: http://www.qualcomm.com/people/pkarn/export/index.html.
26 When Nautilus, a public-domain program for encrypting telephone conversations, was released, it was placed only on servers that had restrictions preventing transmission abroad, in order to avoid any possible violation of the ITAR. Shortly thereafter a posting appeared on one of the Usenet newsgroups from an Australian, asking for someone to download Nautilus and transmit it, via a remailer, to his favorite software archive in Australia. It was immediately followed by a second posting, apologizing for the first; the poster had checked the Australian archive and Nautilus was already there. The incident demonstrates the difficulty of enforcing regulations designed to prevent the spread of software.
27 A good history of the agency, which has played an important role in both the development of encryption and the attempt to limit its general availability, is James Bamford, The Puzzle Palace (Boston: Houghton Mifflin, 1982).
28 “Q: If the Administration were unable to find a technological solution like the one proposed, would the Administration be willing to use legal remedies to restrict access to more powerful encryption devices? A: This is a fundamental policy question which will be considered during the broad policy review” (White House Press Release on the Clipper, April 16, 1993).
29 “In 1993, 976 U.S. police wiretaps were authorized, 17 were never installed, and 55 were electronic bugs [which are unaffected by encryption], leaving 904 nonbug wiretaps installed. They cost an average of $57,256 each, for a total of $51.7 million spent on legal police wiretaps in 1993…. (I will neglect the possibility that police spent much more than this on illegal taps.) This is less than one part in 600 of the $31.8 billion spent by U.S. police in 1990, and most of it was spent on time the police sat and listened to 1.7 million conversations (at $32 per conversation)…. Wiretaps cost an average of $14,300 per arrest aided in 1993 … or almost five times the $3,000 average spent per non-traffic arrest by U.S. police in 1991 …” (Robin Hanson, “Can Wiretaps Remain Cost Effective?” Communications of the ACM, vol. 37, no. 12 [December 1994], pp. 13–15; also available on the Web at: http://www.hss.caltech.edu/~hanson/wiretap-cacm.html).
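The arithmetic in the quotation is easy to verify; the following check (my own, using only the figures quoted) reproduces the ratios Hanson reports:

```python
# Checking the quoted figures against one another.
taps_installed = 904      # nonbug wiretaps installed, 1993
total_cost = 51.7e6       # dollars spent on legal wiretaps, 1993
police_budget = 31.8e9    # total U.S. police spending, 1990
conversations = 1.7e6     # conversations listened to

print(total_cost / taps_installed)  # ~57,190 per tap (quote: $57,256)
print(police_budget / total_cost)   # ~615, i.e., less than one part in 600
print(total_cost / conversations)   # ~30 per conversation (quote: $32)
```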
30 A third explanation for secrecy, and the one favored by the supporters of the Clipper initiative, is that Skipjack (the algorithm used in the Clipper Chip) incorporates new and valuable methods of encryption which the NSA wishes to keep out of the hands of foreign governments. For a discussion of the Clipper proposal from a generally friendly viewpoint, see Dorothy E. Denning, “The Clipper Encryption System,” American Scientist, vol. 81, no. 4 (July–August 1993), pp. 319–23.
31 “Common objections include: the Skipjack algorithm is not public … and may not be secure; the key escrow agencies will be vulnerable to attack; there are not enough key escrow agencies; the keys on the Clipper chips are not generated in a sufficiently secure fashion; there will not be sufficient competition among implementers, resulting in expensive and slow chips; software implementations are not possible; and the key size is fixed and cannot be increased if necessary” (Fahn, “Answers to Frequently Asked Questions” [supra note 3]).
32 The same conclusion applies to other forms of mandatory escrowed encryption that have been suggested in response to criticisms of the Clipper proposal. See, for example, Stephen T. Walker, “Software Key Escrow: A Better Solution for Law Enforcement's Needs?” in Building in Big Brother, ed. Lance J. Hoffman (New York: Springer-Verlag, 1995), pp. 174–79; and David M. Balenson, Carl M. Ellison, Steven B. Lipner, and Stephen T. Walker, “A New Approach to Software Key Encryption,” in ibid., pp. 180–207.