As recently as a decade ago, an authoritative introduction to computing for historians recommended an approach that essentially employed the computer as a rigid, if very large, tabulator. Edward Shorter’s The Historian and the Computer (1971) described how to reduce complex information to simple fixed-choice codes, transfer the coded data to punched cards, read the cards into fixed-format package programs, and prepare large tabulations or statistical analyses from the data. Shorter’s advice made sense: it encouraged historians who knew little about computers or quantification to move ahead, and enabled them to produce useful results without becoming programmers.

During the 1970s, however, three important changes in computing made the sturdy old procedures obsolete. The first was the increasing availability of flexible, inexpensive microprocessors: small machines with memories as large as those of many big computers of the 1960s, which could operate by themselves or in conjunction with powerful central computers, which came with a great variety of prepared programs, and which served for the entry, transmission, storage, editing, manipulation, analysis, and presentation of many different sorts of information, including ordinary words. The second was the improvement of interactive computing, in which a relatively inexperienced analyst could carry on a prompted “conversation” with a sophisticated machine while searching or analyzing a complex machine-readable file. The third was the development of data base management systems which, from the user’s point of view, greatly simplified the storage and manipulation of large bodies of machine-readable evidence.
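To make the older routine concrete, the following is a minimal sketch, in modern Python, of the fixed-choice coding and tabulation workflow Shorter described: every piece of evidence is forced into a small set of predetermined numeric codes occupying fixed column positions, and the program does nothing but count. The record layout and code books here are invented for illustration; they stand in for the punched-card formats and package programs of the period.

```python
# Illustrative sketch of the punched-card era workflow: complex information
# reduced to fixed-choice numeric codes at fixed columns, then tabulated.
# The column layout and code books below are hypothetical examples.

from collections import Counter

# "Code books": every answer must fit one of a few predetermined codes.
OCCUPATION = {1: "farmer", 2: "artisan", 3: "laborer", 4: "merchant"}
LITERACY = {0: "illiterate", 1: "literate"}

# Fixed-format "card images": column 1 = occupation code, column 2 = literacy.
cards = [
    "11",
    "20",
    "31",
    "10",
    "41",
    "30",
]

def parse(card: str) -> tuple[str, str]:
    """Read the fixed columns of one card and translate codes to labels."""
    occupation = OCCUPATION[int(card[0])]
    literacy = LITERACY[int(card[1])]
    return occupation, literacy

# Tabulate: count the records falling in each occupation-by-literacy cell.
table = Counter(parse(card) for card in cards)

for (occupation, literacy), count in sorted(table.items()):
    print(f"{occupation:10s} {literacy:12s} {count}")
```

The rigidity the passage describes is visible in the sketch: anything not anticipated in the code books simply cannot be recorded, and the output is limited to the tabulations the fixed format permits. The later developments described above, notably interactive computing and data base management systems, relaxed exactly these constraints.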