Tragic loss or good riddance? The impending demise of traditional scholarly journals

(Preliminary version, December 30, 1993)

Andrew M. Odlyzko
AT&T Bell Laboratories
amo@research.att.com

1. Introduction

Traditional scholarly journals will disappear in the next 10 to 20 years. There is wide recognition that electronic publication will significantly affect mathematics [Franks]. However, I expect that the changes will come much faster and will be much deeper than is usually anticipated. Further, they could occur suddenly. It is impossible to predict the date or speed of the transition, but only because these will be determined by sociological factors. The technology that is necessary for future systems is either already available or will be in a few years. The speed with which this technology is adopted by the mathematical community will depend on how quickly we are prepared to break with traditional methods in favor of a superior but novel system. However, the pressure towards change is going to grow, and will soon become irresistible.

The impending changes are caused by the confluence of two trends. Both trends have exponential growth rates (where by exponential I mean the precise mathematical meaning of the word, not the current journalistic usage). One trend is in the size of the mathematical literature, which is causing a crisis in the traditional approach to the dissemination of research results. The other trend is the increasing power and availability of electronic technology, which offers a way to solve the crisis in traditional publishing. However, it is likely to have far greater effects than just the replacement of paper journals by electronic ones.

The title of this article is meant to be provocative. I do not mean to imply that the current scholarly journals were always a bad solution to the problem of information dissemination. They have served the mathematical community admirably, and there are dangers in moving away from them to another system, as Quinn [Quinn] argues. At the same time, they are an awkward artifact of the only technology that was available several centuries ago. I feel that a system much better suited to the transmission of mathematical information is going to arise, although the transition may be painful. In any case, I do not think we have much choice, since drastic changes are inevitable no matter what our preferences are. Most of the comments and predictions made here apply to other scientific disciplines as well. I will write primarily about mathematics, since the data I have are clearest for this field, and I am most familiar with it.

2. Sources of current crisis

Only some mathematicians perceive that there is a crisis in the present system of journal publication. However, librarians and publishers are well aware of it. New journals are springing up all the time, yet libraries are not buying them, and are even dropping subscriptions to old journals. The blame for this is usually ascribed either to greedy publishers or to short-sighted administrators. The basic underlying problem, though, is the exponential growth of the mathematical literature. The number of scientific papers published annually has been doubling every 10-15 years for the last two centuries [Price]. That is also true of papers in mathematics alone. The data in [MR] suggest that during the post-World War II era, the rate in our field even increased to a doubling about every 10 years.
There is some evidence (private communication from Jane Kister of Mathematical Reviews) that growth has slowed down in the last few years, but it is still substantial. Math. Rev. has stayed at approximately 47,000 reviews per year only by limiting its coverage.

The exponential growth in mathematical publishing has interesting implications. Adding up the numbers in [MR], or simply extrapolating from the current figure of about 50,000 papers per year and a doubling every 10 years, we come to the conclusion that about 10^6 mathematical papers have ever been published. What is much more surprising is that almost half of them have been published in the last 10 years. Even if we manage to keep the number of papers down to 5*10^4 per year, we will double the size of the mathematical literature in another 20 years.

Similar exponential growth rates can be seen in other indicators of the size of the mathematical research enterprise. The AMS had 1,926 members in 1930, 6,725 in 1960, and 25,623 in 1990, a doubling roughly every 16 years. (The number of papers published has been doubling almost every 10 years, while the share of papers written by mathematicians in North America has not dropped much recently, and even went up substantially in the '30s and '40s. Does this mean that mathematicians have become more productive, possibly because their jobs involve less teaching and more research? Or is it just that they publish more, or that they publish shorter papers? These are intriguing questions that invite further study.) The Division of Mathematical Sciences of the NSF has seen its budget grow from about $ 13 M in 1971 to about $ 73 M in 1991, which is about a doubling in inflation-adjusted dollars. Awards (which are roughly proportional to the number of researchers supported) went from 537 to 1424, an increase by a factor of 2.65. The complaints one hears about reduced NSF support, just like those about reduced library budgets, are not so much about reductions in absolute or even inflation-adjusted budgets as about their growth rates not keeping up with the number and output of researchers.

Exponential growth in the number of mathematicians cannot continue for long. There are already clear signs (evident in the current job market and the projections one reads) that the number of jobs in North America is not likely to grow at all, or at least will not grow anywhere near as fast as it used to. Still, there is scope for continuing exponential growth in the mathematical literature. Currently most research mathematicians are educated and work in Europe, North America, and Japan. Continued rapid economic growth and better education in countries such as China and India are likely to enlarge our ranks vastly. With world population growing towards 10^10 in the next three or four decades, we might end up in the second half of the 21st century with 10 times as many researchers as we have today. Could your library conceivably cope with a 10-fold increase in the literature? Repeating the point made earlier: even if we don't project more than 50 years into the future, and if we somehow manage to stop the growth in the quantity of mathematical papers published each year, we will still double the size of the existing literature in the next 20 years. Can your library cope with that?

Mathematical (as well as general scholarly) publishing has some features that sharply differentiate it from, say, the popular fiction or biography markets, and that make rapid growth difficult to cope with.
Research papers are written by specialists for specialists. Various estimates have been made of how many readers a research paper usually attracts. These estimates are not too trustworthy, since there is difficulty in collecting the data, and also in defining what it means to read a paper. Typical estimates of the number of serious readers are around 20. (One version of an old joke has an editor tell a referee, "You are probably the second and last person to read this paper," to which the referee replies, "Second? Are you sure the author has read it?") Whatever the right number is, it is small, and it is unlikely to grow. This is a consequence of a simple arithmetic relationship. If a mathematician reads x papers and writes y papers per year, and those numbers do not change much (although they do change some, as the remarks above suggest they might), then the average paper will be read by x/y mathematicians, no matter how large the total mathematical community is. (Well, I am cheating a bit, since this assumes a steady state; with growth in the ranks of mathematicians, and with papers being read many years after they are published, the numbers can be tricky to evaluate, but the general conclusion seems right. All the numbers in this article are back-of-the-envelope estimates. Given the exponential growth rates we are dealing with, more precise figures would not change the implications of the message.) This would change if we could attract more readers from outside mathematics. However, given the increasing specialization that applies in all fields, not just mathematics, this is unlikely. There are more and more exciting interactions with other disciplines, but they almost always involve specialists in those areas, and this does not affect the trend towards a narrower focus of research, which precludes attracting a general lay audience.

The trade press is different. If the potential audience doubles (either because of natural increase in population, or because readers in other countries start buying translations), a popular writer like le Carre can expect to sell twice as many copies of his books. Mathematical publishing, because of its nature, cannot benefit from the economies of scale that the trade press enjoys. As in other scientific disciplines, as our numbers grow, we tend to work in narrower specialties, so that the audience for our results stays constant. Further, the centers in which we work (typically university mathematics departments) do not grow much, so we become increasingly dispersed. On the other hand, in principle we need access to all the mathematical literature. This leads to a crisis, in which we cannot afford to pay for all the paper journals we need in our work.

3. Technology trends

A doubling of mathematical papers published each decade corresponds to an exponential growth rate of about 7 % per year. This is fast, but nowhere near as fast as the rate of growth in information processing and transmission. Microprocessors are currently doubling in speed about every 18 months, corresponding to a growth rate of about 60 % per year. Equally dramatic growth figures can be cited for information storage and transmission. The point of citing these figures and those below is that advances in technology have made it possible to transform scholarly publishing in ways that were impossible even a couple of years ago.
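These rates, and the back-of-the-envelope extrapolations of the previous section, are easy to check. Here is a minimal sketch (in Python; the figures are the rough estimates used in this article, and the code is illustrative only):

    import math

    # A doubling time of T years corresponds to an annual growth rate of 2^(1/T) - 1.
    def annual_rate(T):
        return 2 ** (1.0 / T) - 1

    print("papers (doubling every 10 years): %.1f%% per year" % (100 * annual_rate(10)))       # about 7%
    print("microprocessors (doubling every 1.5 years): %.1f%% per year" % (100 * annual_rate(1.5)))  # about 60%

    # Cumulative literature under pure exponential growth: integrating
    # c * 2^(t/T) over all past time gives (current rate) * T / ln 2.
    rate, T = 5e4, 10.0                      # about 50,000 papers per year now
    total = rate * T / math.log(2)
    print("papers ever published: about %d" % total)   # about 7*10^5, i.e., of order 10^6
    # Under a doubling every T years, half of that total has appeared
    # in the most recent T years.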
There were visionaries who twenty or more years ago foresaw the dramatic effect that technology could have on scientific information dissemination, but their dreams took a long time to be realized. The main reason was that it took a long time for technology to provide the tools that made those futuristic dreams realizable. In the days when most computing was done in batch mode on mainframes, and the few fortunate enough to have access to time-sharing systems had to content themselves with printing terminals communicating at 300 bits per second, electronic publishing and intensive collaborations across oceans were not feasible. With the rapid advance in technology, though, we are at a stage where the needs of an electronic publishing system for mathematics can be met easily. Further, we can obtain a system that will give us many additional capabilities that are inherently impossible for paper journals to provide.

Recall that approximately 5*10^4 mathematical papers are published each year. If they were all typeset in TeX, then at a rough average of 5*10^4 bytes per paper, they would require 2.5*10^9 bytes, or 2.5 GB, of storage. If we include the approximately 1500 mathematics books that are published each year, we find that the annual output of the mathematical community requires under 4 GB of storage. (This assumes everything is in TeX. If we use dvi files, the requirement about doubles, while with PostScript it about quintuples. On the other hand, compression can reduce storage requirements by a factor of 2 or 3. Given the rapid growth in the capacity, and the decline in the cost, of storage systems, factors of 2 or 5 mean only a difference of a couple of years.) We can now buy a 1 GB magnetic disk for under $ 1000. For archival storage of papers, though, we can use other technologies, such as optical disks. A disk with a 7 GB capacity that can be written on once costs $ 200-300. (The equipment for writing data on it is still expensive, at $ 20,000 - 40,000, but it can be shared by many individuals and even departments.) The standard CD-ROM disks, with about 0.7 GB of storage capacity, cost a few dollars each to produce (with the information written on them) in runs of a few thousand. Digital tapes with 250 GB capacities are also becoming available. Thus the electronic storage capacity needed for the dissemination of research results in mathematics is trivial with today's technology. We conclude that it is already possible to store all the current mathematical publications at an annual cost much less than that of the subscription to a single journal.

What about the papers published over the preceding centuries? Since there are about 10^6 of them, it would require about 50 GB to store them all if they were all in TeX. It appears unlikely that anyone will undertake the project of converting them all into TeX (or any other modern electronic format). What I expect to happen is that optical character recognition will eventually be applied to the texts (since this will enable rapid computerized text searches), while the equations will be stored as bitmaps. To provide reliable access to the text, whole papers will have to be available as full bitmaps. This dramatically increases the storage requirements. Even then, however, they are not prohibitive. With current fax standards (which produce copies that are not pleasant to read, but are usable), a page of mathematical text requires 30 - 50 KB. Therefore all of the 10^6 mathematical papers can be stored in less than 10^12 bytes = 1000 GB.
This is large, but it is still less than 150 of the current large optical disks. For comparison, Wal-Mart has a database of over 1000 GB that is stored on magnetic disks and is processed intensively all the time.

The storage requirements of the mathematical literature are likely to become even more trivial in the near future. Cable TV and phone companies are getting ready to deliver movie-on-demand to the home. This will require tremendous storage capacity, and is bound to stimulate the development of much denser and cheaper media. There is not much progress in optical disks, but magnetic hard disks are becoming both larger and cheaper, and there are many promising new technologies on the horizon, such as optical tape that might be able to store over 1000 GB on a single unit. We can start thinking of 1000 GB storage devices for personal computers becoming available in a decade or so. This means that anyone will be able to have the total mathematical literature available right on the desktop. Even sooner than that, though, larger institutions, such as university mathematics departments, will be able to make such systems available to their members. This ability will mean a dramatic change in the way we operate. For example, if you can call up any paper on your screen, and, after deciding that it looks interesting, print it out on the laser printer on your desktop, will you need a library?

Optical disks can be shipped by parcel post at low cost. However, it is desirable to have faster communication links. Information transmission is a barrier right now. Anyone who has to download files at 2400 baud (roughly bits per second, bps) can attest how annoyingly slow that is. However, that is changing rapidly. Most departments have their machines on Ethernet networks, which operate at almost 10^7 bps. Further, almost all universities now have access to the Internet, which was not the case even a couple of years ago. The Internet backbone operates at 4.5*10^7 bps, and prototypes of much faster systems are already in operation. Movie-on-demand will mean wide availability of networks with speeds in the hundreds of megabits per second. If your local supplier can get you the movie of your choice at the time of your choice for under $ 20 (as it will have to, in order for the system to be economical), then sending over the 50 MB of math research papers in your specialty for the last year will cost pennies. Scientists might not like to depend on systems that owe their existence to the demand for X-rated movies, but they will use them when they become available.

There is concern in the scientific community that with the withdrawal of NSF support for the Internet, the electronic data highway will charge tolls that will be prohibitive for university researchers. That concern is misplaced. It is based on a static view of the world and does not take into account the increasing power and decreasing cost of technology. Yes, the commercial data highway will have tolls. Yes, the networks that the cable TV companies build may be structured primarily for the broadcast delivery of movies to the home, and may not have the full communication capabilities that scientists require. The point is that these networks are going to be large enough to justify tremendous development efforts that will drive down the costs of all communication technologies. The tolls on the commercial data highway will have to be low enough to allow transmission of movies. Therefore the cost of transmitting mathematics will be trivial.
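The storage and transmission figures above fit in a few lines of arithmetic. The sketch below (Python) recomputes them; the per-paper and per-page byte counts are the rough estimates quoted earlier, and the 25 pages per paper is my own assumed average:

    papers_per_year = 5e4
    tex_bytes_per_paper = 5e4                  # rough average size of a TeX source file
    print("annual output in TeX: %.1f GB" % (papers_per_year * tex_bytes_per_paper / 1e9))  # 2.5 GB

    papers_ever = 1e6
    fax_bytes_per_page = 40e3                  # 30-50 KB per page at fax resolution
    pages_per_paper = 25                       # assumed rough average
    archive = papers_ever * fax_bytes_per_page * pages_per_paper
    print("bitmap archive of everything: %.0f GB" % (archive / 1e9))   # about 1000 GB

    # Time to transmit the 50 MB of one specialty's annual output:
    def hours(nbytes, bps):
        return nbytes * 8 / bps / 3600.0

    for name, bps in [("2400 bps modem", 2400.0),
                      ("Ethernet (~10^7 bps)", 1e7),
                      ("Internet backbone (4.5*10^7 bps)", 4.5e7)]:
        print("%s: %.4f hours" % (name, hours(50e6, bps)))
    # about 46 hours by modem, but only seconds on a fast network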
Even if the commercial data highway is not structured in an ideal way for science, or is an economic flop, universities will be able to use the technology that is developed and build their own cheap networks. Just as the recent progress in computers has been driven much more by the demands of businesses running spreadsheets on PCs than by those of scientists modeling global warming on supercomputers, so the progress in communications will come primarily from commercial uses of the data highway. Some scientists and mathematicians (such as those getting atmospheric data from satellites) will have to worry about the costs of communications, since they will be transmitting giga- and terabytes of data. The rest of us can sit back and go along for the ride, provided almost for free by commercial demand.

Stevan Harnad (private communication) notes that even now, non-scientific and non-educational uses of the Internet are a significant fraction of the traffic. The latest statistics for Usenet (the decentralized system of discussion groups) collected by Brian Reid of DEC show that the largest volume was generated by alt.binaries.pictures.erotica, which has an estimated 240,000 readers, and generated 31.4 MB of postings during one sampling period, even though it is received by only about half the sites. The second largest load was 16.3 MB from alt.binaries.pictures.misc, which has 150,000 readers. The third highest was 14.1 MB from bionet.molbio.genbank.updates, which has 24,000 readers. Sci.math, which has an estimated 120,000 readers (the highest of any of the sci.* groups, but only 39th highest among all the groups), generated only 3.6 MB. Even alt.politics.clinton generated 7.1 MB during the sampling period! This only reinforces the conclusion drawn earlier that most of scientific communication can be accommodated in a tiny fraction of the capacity of future networks. (The gene bank data, like the atmospheric data mentioned before, are a special case.) This is just a reflection of the compact ways scholars have developed to present their results.

Not only have information storage and transmission capacities grown, but the software has become much friendlier. At the beginning most of us had e-mail. Then came anonymous ftp. Together with the network connections that are now almost universal, these provide powerful tools for communication. However, this is only the beginning. We now have a plethora of new tools, such as Archie, Gopher, WAIS, and Mosaic, which allow easy access to the huge stores of information on the Internet without having to learn the arcane details of how it functions. In the next few years these tools are likely to evolve much further, and to become part of the day-to-day life of most mathematicians.

Here I mention a few electronic document delivery systems that I have used at Bell Labs. There are similar systems in use or under development elsewhere; I mention these just because I have used them, and because they appear to me to be forerunners of systems that will be common soon, and will most likely become integrated with other information retrieval tools. One of these systems is Ferret(TM), and it is used for internal technical documents. All the internal memoranda are digitized and stored on-line, and can be viewed on the screen of one's own workstation. In the old days, I often ordered paper copies of memoranda. After looking at the first page or so, I then usually discarded the whole memo as not being of much interest.
Nowadays, I scan the first couple of pages on the screen, and only if the memo looks interesting do I order a paper copy using the convenient interface. RightPages(TM) is another experimental Bell Labs system; it gives the user access to about 60 technical journals from about 10 publishers [Story]. After selecting an issue of a journal involved in the experiment, the user is shown the table of contents, and by clicking on an item, he or she is shown the first page of the article. If it looks interesting, a click on the appropriate icon orders a paper copy for delivery to the user. (At the moment only the first page is scanned, but this is not a fundamental limitation; it is caused by the lack of resources in a small-scale experiment to do more.) The RightPages system is used inside Bell Labs, and also at the University of California in San Francisco.

Systems like those above are meant to work with the current paper journals and memoranda. They are supposed to compensate for some of the deficiencies of the present system, by allowing users to sift through the increasing amounts of information that are becoming available, and also by giving them access to information they cannot get in their local libraries. However, while they do help to deal with the crisis in scientific publishing, they can also contribute to it. I can now use the Internet to find out what books and journals the Harvard libraries have. Suppose I could also look on my screen at the articles in the latest issues of the journals that Harvard has received recently, and order photocopies (or even electronically digitized scans that can be reproduced right away on my laser printer) of the articles that interest me. Would there be any incentive to pressure my local library to order that journal? Would there be any reason to have more than one copy of the journal in the country (or the world)? If we do have only one copy, how is it going to be paid for? Thus the arrival of technological solutions to the current crisis in scholarly publishing creates new problems.

4. Electronic journals

One approach to the threat that electronic access poses to scholarly publishers is to limit distribution to paper copies only. Yet that is not a feasible solution in the long run. Because of the publishing crisis, almost all libraries are receiving a diminishing fraction of the literature that is needed by local scholars. Hence there is an absolute need to provide access to journals not on local shelves. Only those journals that can supply it conveniently will survive. Further, even if a paper journal is available, there are advantages to an electronic version. Especially valuable is the ability to do rapid text searches for relevant material. That is surely why many publishers are beginning to provide electronic versions of their publications. As mathematicians become familiar with electronic document distribution and learn how useful it is, they will increasingly demand that all journals be available in that form.

There is no great technical difficulty in making current journals available electronically. Most of them already use computerized typesetting, so it is just a matter of making their files available to readers. (Many of them, including those of the AMS, rely on TeX, which means that little conversion is needed.) The editorial and refereeing functions can be performed just as they are today, and the standards can be maintained.
In the simplest version, all that this means is that instead of having to go to the library to look at the latest issue of a journal, one can log in remotely on a publisher's machine and look in a directory for the latest papers. With better software tools, it will be unnecessary to know what machine those papers are on, as the software will locate them automatically. One can also arrange to be notified via e-mail about new papers accepted by a journal.

Electronic journals have many advantages over paper ones. Let us recall that paper journals evolved in an era when printing was the only practical method of wide dissemination of information. Nobody could expect Gauss to make 50 handwritten copies of his manuscripts for distribution to interested colleagues. The result was bulky publications that are expensive to produce and maintain. A huge infrastructure of publishers has evolved to acquire and edit manuscripts, and then to print and distribute the journals. Further, another large infrastructure, that of libraries, has evolved to acquire, catalog, and shelve these journals, to make them accessible. We should not forget how cumbersome this system is. Except for those living near the 500 to 1000 good libraries, access to this information is hard. Even for those at institutions with good libraries, obtaining the necessary information means a physical trip, often to another building. Often it requires a wait while a copy is brought back from storage, or else is recalled from another borrower. In short, traditional paper journals are not convenient. Electronic publication offers a way to make all mathematical literature accessible around the clock from the comfort of one's office or home. Moreover, it can make this literature accessible to everyone. One point that is made about Ginsparg's theoretical physics preprint service, both by Ginsparg and by other users [Ginsparg, Sci1], is that it has made the latest results much more widely available, and diminished the importance of the various small "in" groups that used to exchange preprints. Today there are many universities around the world that do not have good electronic connections. However, that is changing rapidly. After all, it is far cheaper to hook up to the Internet than it is to acquire a good mathematical library from scratch. A T1 line of 1.5*10^6 bits per second to the Internet costs around $ 20 K per year, at least in the US. On the other hand, a good library has to spend well over $ 100 K per year just for mathematical journal subscriptions. Further, journal prices are going up, while the cost of connections is going down.

Once many journals become available electronically, paper copies are likely to disappear. It will be a case of positive feedback operating in a destructive mode. Necessary information will be available electronically, and most of it will have to be accessed electronically, since the local libraries will not be able to provide copies of all the relevant journals. Therefore researchers will have less incentive to press for paper journal subscriptions to be maintained, which will lead to diminished circulation, and therefore to higher prices and more pressure on libraries to cut back on subscriptions.

While I do predict that most paper journals will disappear, I am not saying that there will be no place for paper. Display technology is not moving anywhere near as fast as microprocessors or data transmission, and it will be a long time before you curl up in bed with a screen rather than a book.
People will wish to print papers out any time they need to study them carefully. However, that will be done on their own laser printers from electronically supplied files.

Once journals move to an electronic format, reviewing publications such as Math. Rev. and Zentralblatt are likely to disappear. These journals play an invaluable role in informing scholars of the latest developments in mathematics. However, just how valuable is this contribution going to be in the new era in which everything is available on-line? If I can do text searches over the whole mathematical literature, how important will the abstracts be? Zentralblatt is almost completely devoted to author abstracts. Do I need that if I have the full text available? Math. Rev. provides a better service, since its reviews are more often written by experts who can provide an evaluation of the work, but even there, the reviews are often perfunctory. Careful surveys are going to increase in importance as research publications get ever more specialized with the growth of the field, but that is not what the reviewing journals provide. I feel that these publications will become superfluous, and their function will be taken over by computerized searches.

How are electronic publications to be paid for? That is a crucial question that the whole information industry is struggling with, and there is no clear answer. Some form of pay-per-view is the most likely answer for most situations. It is certainly appropriate for many commercial information services, as well as for the entertainment industry. (Credit reporting services have been operating in this mode for a long time, for example, and so have movie theaters.) It is not impossible that some system of this type will be designed for scholarly publications. There are fears among scholars about the prohibitive costs of such a system [Franks], but it is conceivable that a solution equitable to all sides could be devised. However, I do not think pay-per-view will become the prevailing model in scholarly publishing. The reason is that the value added by publishers is diminishing, and there will be cheaper and better ways to provide their services.

It is widely recognized that scholarly journal publishing differs significantly from the trade press. The primary interest of authors is in making their work widely available. They practically never receive any direct financial remuneration for their papers. Their rewards come from the recognition their work earns in their field. The scholarly community is in the business of writing papers, giving them to publishers, and then buying them back as journals and books. Scholars and publishers have developed a mutually beneficial relationship. However, given the crisis in the present system, this relationship is under strain, and is likely to end in the demise of traditional publishers. The main problem right now is the need to reduce costs drastically.

What is the cost of producing a typical mathematical paper? It appears that the average researcher publishes two or three papers per year. The total cost of employing such a person is at least $ 150 K per year (this is a conservative estimate, as it is meant to include standard salary, grant support, and benefits, as well as all the office space, libraries, and university administration costs). Let us assign one third of this cost to research activities. If we do that, we conclude that each paper costs at least $ 20 K to produce, with the cost paid by taxpayers, students' parents, or donors to universities.
How much does it cost to publish a paper in a research journal? Let us take the figures in [AMSS]. If we assume that each paper is typeset with 5*10^4 characters, multiply that by the cost per character given in [AMSS] for any given journal, and then multiply by the circulation of that journal, we obtain an estimate of how much it costs to publish an article in that journal. For example, according to [AMSS], Amer. J. Math. has a circulation of 1458, and an annual subscription costs $ 0.048 per 10^3 characters. This produces an estimate of $ 1458*50*0.048, or about $ 3500, for the cost of publishing a single article there. This figure includes all the editorial, printing, and mailing expenses. Doing this for the other journals listed in [AMSS] for which both costs and circulation are given (but excluding Bull. Amer. Math. Soc., which differs substantially in scope and especially in circulation from the standard research journals) produces estimates for single-article costs between $ 900 (for the Notre Dame J. Formal Logic) and $ 8,700. The median cost figure is about $ 4,000, and is the one I will use.

The estimate of $ 4 K for the cost of publishing a paper shows that it is indeed much lower than the $ 20 K cost of writing the paper in the first place. (The unpaid work of editors and referees is indispensable to scholarly publishing, but any estimate of its monetary value would surely come up with figures well below $ 4 K per article.) Thus the value added by the publishers is dwarfed by that added by scholars. Is $ 4 K per article too much to pay? That is not a question that can be discussed in the abstract. Until recently, there was not much choice. The traditional journals were the only method available for the transmission of research results. The problem is that the cost per article is increasing, and so is the number of articles. Therefore something will have to be done. To put this figure into perspective, let us recall that about 5*10^4 mathematical papers are published each year, so the total cost of traditional mathematical journals is about $ 200 M per year. If we assume that 35 % of this cost is paid by subscribers in the US (which is probably a low estimate), then we find that US universities, laboratories, and individuals spend $ 70 M per year on mathematical journals. That is almost exactly the same as the NSF budget for mathematical research. If we could eliminate this cost, we could potentially double the NSF budget for mathematics at no extra cost to society at large. Of course, the tradeoff is not that simple, since journals are paid for from different sources than research grants, and we do not have much freedom in moving public and private funding around. However, the $ 70 M figure is sufficiently large that it cannot easily grow, and there will be pressure to lower it if some method for doing so can be found.

To what extent can publishing costs be lowered? Some publishers have predicted that electronic publication of journals would lower costs only by around 30 %, roughly the cost of printing and mailing. My guess is that the visible costs can be lowered much further, and possibly eliminated entirely. By visible costs I mean the subscription costs of the journals. This might mean creating some additional costs for the authors and their institutions, but those are not likely to be large.
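The cost estimates above are simple enough to recompute. Here is a minimal sketch (Python; the Amer. J. Math. figures are the ones quoted from [AMSS], and the rest are the rough estimates used in this section):

    # Cost of writing a paper: the $ 20 K estimate.
    employment_cost = 150e3        # total annual cost of a researcher, in $
    research_share = 1.0 / 3       # fraction of that cost assigned to research
    papers_per_year = 2.5          # "two or three papers per year"
    print("cost per paper written: $%.0f" % (employment_cost * research_share / papers_per_year))  # $20,000

    # Cost of publishing a paper, from the [AMSS] figures for Amer. J. Math.
    circulation = 1458
    price_per_1000_chars = 0.048   # annual subscription price, $ per 10^3 characters
    chars_per_paper = 5e4
    print("cost per article published: $%.0f" %
          (circulation * (chars_per_paper / 1e3) * price_per_1000_chars))  # about $3,500

    # Total annual cost of mathematical journals, and the US share.
    papers_published = 5e4
    median_cost_per_article = 4e3
    total = papers_published * median_cost_per_article
    print("world total: $%.0f M; US share at 35%%: $%.0f M" % (total / 1e6, 0.35 * total / 1e6))  # $200 M and $70 M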
In my own experience as an editor, I have found that papers submitted electronically are much easier to handle than those on paper. In cases of electronic submission, I do all the work without the help of a secretary (although there is no reason that a secretary could not handle many of these tasks). Still, I find that sending out copies of the manuscripts and letters to referees electronically is easier than writing out instructions to a secretary, and then, after some delay, checking the paperwork and signing the letters. With more and more potential referees reachable via e-mail, this is becoming increasingly practical. The main problem arises with nonstandard TeX files. That should become less of a burden in the future, as authors are persuaded to standardize their styles and as better software becomes available. After all the reviews and revisions are done, I then send the manuscript (electronically, to an increasing extent) to the journal for copy editing, printing, etc. In an electronic journal I could simply place the manuscript file in a special directory or e-mail it to a list of subscribers. That is indeed how many of the electronic journals that have been set up recently function. Today some paperwork is still required by publishers for authentication. (For example, the AMS insists on signed letters from editors, together with paper copies of manuscripts that are accepted.) However, cryptography has developed the technology to avoid this step, and digital signatures will become common soon.

The financial costs of electronic journals should be so low that they will be absorbed into general overhead expenses. ETNA, the Electronic Transactions on Numerical Analysis, published by the Kent State University Library in conjunction with the Institute of Computational Mathematics at Kent State University, is a new electronic journal. Subscriptions to it are currently free, although the journal has announced that this may change after the initial three-year trial period. Arden Ruttan, its Managing Editor, reports (private communication) that so far about 90 % of the ETNA budget has gone to purchasing equipment, with smaller expenditures for telephone (5 %) and advertising (5 %). There are neither copy editors nor secretarial staff, and papers have to be submitted in LaTeX or TeX.

In general, given the improving performance and decreasing cost of computers, I do not think electronic journals will even need special computers. Ginsparg's electronic preprint service, which will be described later, uses only a small fraction of the capacity of a workstation that is devoted to other purposes. The netlib system, developed by Jack Dongarra at Argonne and Eric Grosse at Bell Labs, which has been used widely by applied mathematics specialists for several years, ran for a while on a Sequent multiprocessor that was donated just for that purpose. Today, although the traffic through the system has increased tremendously, a fraction of a workstation handles it comfortably. Thus equipment costs in the future could be absorbed in the general overhead of providing a computing environment for editors in their normal professional jobs. Electronic journals like ETNA currently rely on free links through the Internet. With communication costs going down, I expect that transmission will also come to be covered as part of the general overhead. Systems administration requires some care, as not all editors will be willing and able to take care of maintaining their systems. However, with improvements in software, this is becoming much less of a problem, and there are now even programs for dealing with misrouted e-mail.
Eric Grosse (private communication) has estimated that netlib could handle all of SIAM's publication needs in the next few years with the half-time help of a skilled systems administrator. Arden Ruttan (private communication) reports that the main negative comments about ETNA have concerned the lack of copy editing and the requirement that all manuscripts be in LaTeX or TeX. The restriction on input format is likely to become less of a problem, in that I expect that mathematicians will increasingly have all their papers prepared in one of a few typesetting systems (which may include something other than just LaTeX or TeX). This will shift some of the burden of publishing onto authors, but is not likely to be particularly onerous. I am not sure what, if anything, will happen with copy editing. Its lack results in non-uniform format and substandard appearance. At ETNA, Ruttan does extensive copy editing himself, but this is clearly not something that most editors are going to be willing to do. One solution might be to give up on uniform standards. This might not be a great loss. I have heard more than one author complain bitterly about the changes that a publisher made to a TeX manuscript that the author had devoted great efforts to. Another solution might be to require authors to prepare their manuscripts in adherence to a rigid format. Still another approach would be to have professional copy editors, either at the author's institution or at the editor's, who would do the job. If this is done at the author's institution, the cost might again be absorbed by the general administrative overhead, as are the costs of technical typists. Even if it is done at the editor's location, the cost of copy editing a file in some standard input language is likely to come to less than $ 400 per paper, not the $ 4,000 per paper that the present system costs. That cost might be covered by page charges to authors, say. The final appearance may not be as pleasant as with a truly professional system, but it should prove adequate.

The arguments above suggest that it should be possible to have electronic journals that provide all the essential features of the present paper ones at a small fraction of the cost. Therefore, given the pressure to reduce costs, the present expensive publishing enterprises, commercial as well as non-profit, will not be able to survive in anything resembling their present forms. Contributing to the general pressure to reduce costs will be the subversive influence of electronic preprint distribution, which will be discussed later.

The transition to a paperless system is likely to be painful, though. It will be painful for the employees of the publishers. After all, what I am predicting is that most of their professional skills, acquired at great effort, and formerly highly valued by the scholarly community, will become obsolete. The transition will also be painful for the publishers themselves, as their main business will disappear. This will be a problem even for non-profit publishers like the AMS, which depends on publishing revenue to support many of its activities. There will be severe conflicts of interest between publishers that are going out of business, but still control the copyrights to publications, and the scholarly community. It will require substantial effort to devise an equitable solution.
However, I do not see any way to preserve the present system, since it is already too expensive, and is not able to cope with the exponential growth in the production of information. I am convinced that electronic publishing that is free to readers will take over in science and mathematics. It is impossible to predict accurately the date of the transition. The basic technology that makes it possible is here, so it is a matter of guessing how soon the necessary infrastructure of editorial systems can be developed, and how quickly it will be accepted by the community. If nothing is done, I expect that traditional paper journals will become irrelevant to mathematicians' needs within 10 years. They might survive for a while longer, just because of the inertia of the entire academic publishing and library system, but then there might come a sudden transition, as the realization spreads that this system is obsolete. This transition might be similar to the one that affected the mainframe computer industry. The experts were predicting 10 years ago that the mainframe was dead. However, there were still many jobs that only mainframes could handle. Five years ago, the mainframe was at last truly obsolete, but inertia ruled, and sales of these machines were at their peak. In the last three years, however, a catastrophic decline took place.

Advances in technology will also drastically affect libraries. Libraries are expensive. While mathematical journals cost the US mathematical community about $ 70 M per year just in subscription costs, the costs of maintaining library collections (both people and space, but excluding subscriptions) are likely to be comparable. Right now librarians are the ones facing the crisis in publishing, since they have to deal with increasing prices and increasing scholarly output. They are squeezed between publishers on one hand and scholars on the other. Technology will solve the librarians' problem, but will also eliminate most of their jobs. Just like publishers, libraries will have to shrink and change their role. After all, if the entire mathematical literature is available on your screen, drawn from a data storage device either on your workstation, or on your department's central library server, or at some distant location accessed over the Internet, why would you need to use the traditional library?

Most functions of the traditional library are tied to the notion that information is preserved on pieces of paper that are bulky and expensive. That paradigm is no longer appropriate, at least not if we do get a system like the one outlined above, in which scholarly information is made available for free. A basic function of a library is to decide what books and journals to obtain, to get the most productive use of a limited budget. The procurement of those journals and books, and their cataloging, shelving, etc., are another important function. However, with practically unlimited storage, and no cost constraint, why shouldn't your (or your department's) computer have all the world's mathematical papers? Moreover, it could have them with a few clicks on appropriate icons on your machine's screen, with all the data transfers taking place automatically. Similarly, an important role for libraries is to keep track of loaned material, since usually there is only one physical copy of each book or journal issue. With electronic data storage, though, that is not a problem, as unlimited copies can be made.
Reference librarians are experts at locating information, and are indispensable. As with reviewing journals, though, there will be much less need for this expertise when all information is available electronically. A dumb text search through the few tens of terabytes of data that constitute the sum of all scholarly publications can frequently substitute for expert human assistance. (While a terabyte search seems outlandish right now, it won't seem so large in a few years. With proper indexing, the search problem becomes even more manageable.) Moreover, much better information-gathering software is being developed that will make it possible to pursue intelligent search strategies, further usurping the roles of expert librarians.
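To see why indexing makes the search problem manageable, consider the simplest scheme, an inverted index: one pass over the collection records, for each word, the list of papers containing it, and a query then touches only those short lists instead of scanning the full text. A minimal sketch (Python, with made-up paper identifiers and titles):

    # Build an inverted index: for each word, the set of papers containing it.
    papers = {
        "paper1": "zeros of the Riemann zeta function",
        "paper2": "computing Groebner bases for polynomial ideals",
        "paper3": "explicit formulas and zeros of zeta functions",
    }

    index = {}
    for ident, text in papers.items():
        for word in text.lower().split():
            index.setdefault(word, set()).add(ident)

    # A query inspects a few short lists rather than terabytes of text.
    def search(*words):
        sets = [index.get(w.lower(), set()) for w in words]
        return sorted(set.intersection(*sets)) if sets else []

    print(search("zeta", "zeros"))   # -> ['paper1', 'paper3']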
There will always be positions for experts in information, to help in navigating the oceans of data on the Net, but their roles will cover only the most sophisticated skills that librarians possess. There will be many fewer jobs in libraries themselves, and much less space. As one sign of what is likely to happen, the New York Times on Oct. 7, 1993, reported that Time, Inc., the parent company of Time, Sports Illustrated, and several other magazines, was cutting its library staff from 120 to 80. The justification for the cut was that many of the functions performed by the staff, such as cutting out and filing stories from various sources, were being taken over by electronic searches. This is just the beginning of the trend that will see the demise of the traditional library as well as of the traditional journal. The new electronic publishing system will require experts for classifying information and linking it to other sources. However, their number will be small, at least in the scholarly publishing area. It should be possible for a few institutions supported by taxpayers, such as the Library of Congress, or by charitable contributions, such as the New York Public Library, to provide most of the services needed. The reason is that most of the work will likely be done by the unpaid authors, editors, and referees. We already have a good subject classification system, and updating it would be much more the job of mathematicians than of other experts.

The arguments above suggest that in the future both journal publishers and libraries will play a much diminished role in scholarly publication. However, this still leaves open the possibility of our present journals surviving almost intact, except that instead of being printed on paper, they would be electronic. My feeling is that while this might be an intermediate stage, it will not be the final one, and that publications will undergo a much more dramatic change. The reason is that electronic publishing offers many more possibilities than traditional paper journals.

5. The interactive potential of the Net

In this section I describe my vision of what future mathematical publications will be like. I start by disputing Frank Quinn's vision [Quinn] of what the present system is and ought to be, and then discuss some novel information dissemination systems that have certain features I expect to find in future systems. Finally, I present the basics of the system I expect to emerge over the next decade.

Scholarly journals have evolved during the last three centuries, in a world shaped by Gutenberg's invention of movable type. This invention made possible the wide dissemination of scholarly publications. However, because printing, although much cheaper than hand copying, was still expensive, mathematical journals were constrained into a format that emphasized brevity. Further, the standards that evolved promoted correctness. Since it took a long time to print and distribute journal issues, and corrections likewise required a long time to disseminate, it made sense to have rigorous refereeing standards. (This was not the only reason, of course, but I believe it was an important one in the development of our current editorial practices.) As a result, the mathematical literature has become reliable, in the sense that mathematicians feel free to use results published in reputable journals in their own work, without necessarily verifying the correctness of the proofs on their own. Frank Quinn [Quinn] argues that this feature justifies extreme caution in moving away from paper journals, lest we be tempted into the "blackboard-style" publishing practices that are common in some fields. I agree that mathematicians should strive to preserve and enhance the reliability of the mathematical literature. However, I feel that Quinn's concerns are largely misplaced, and might serve to keep mathematicians from developing better methods of information dissemination.

The first point to be made is that electronic publication does not in any way prevent the maintenance of present publishing standards. Electronic journals, as sketched in the previous section, can follow exactly the same policies (and might even have the same names) as the current paper journals. On the other hand, paper journals are no guarantee against unreliable results, since the practices Quinn deplores are common in some fields, and have been present for a long time. Thus the reliability of the literature in any field depends primarily on the publishing norms of that field, and not on the medium.

Paper journals serve several purposes in addition to that of providing reliable results. By having a hierarchy of journals of various prestige levels, the present system serves to alert readers to what the most important recent results are, and helps in making grant, tenure, and similar decisions. Here again there is no reason that electronic journals could not provide the same services.

A more serious objection to Quinn's article [Quinn] (and to a large extent also to [JaffeQ]) is that its picture of mathematicians whose main goal is to produce formally correct proofs is unrealistic. I agree completely with Quinn that it is desirable to have correct proofs. However, it is a mistake to insist on rigor all the time, as this can distract from the main task at hand, which is to produce mathematical understanding. There are many areas of mathematics today that are not completely rigorous (at least so far), with the classification of finite simple groups just one example. This has been true at various times in the past as well. After all, Bishop Berkeley, with his "ghosts of departed quantities" jibe about infinitesimals, had much the better of the argument about rigor in 18th-century calculus. On the other hand, from a historical perspective he lost the argument, since the necessary rigor was eventually supplied. In a similar vein, we speak of the Riemann Mapping Theorem, even though experts agree that Riemann did not have a rigorous proof of it, and that a proper proof was only supplied later. Standards have not improved all that much in the intervening years.
What I do want to argue here is that the present system does not do a good job of providing the expected reliability, even if by reliability we mean the standards accepted in a given field, and not formal correctness. Even conscientious referees often miss important points. Furthermore, many referees are not all that conscientious. Once a paper appears, there are some additional controls. Sometimes a careful reviewer for Math. Rev. will catch a mistake, but that does not happen often. More typically, an error will be pointed out by someone, and then, depending on who and what is involved, there may be an erratum or retraction published, or else a note will be inserted into another paper, or else the mistake will be known only to the experts in the field. If the topic is important, eventually some survey will point out the deficiencies of the paper, but most papers do not get this treatment. Thus we already have situations where published work has to be treated with caution. The mathematical literature is more reliable than that of just about any other field, but it is not as reliable as Quinn's article might lead us to believe.

Lack of complete reliability is only one defect of the current paper journal system. Delays in publication are the one that is best known and most disliked. There are others as well. A major one is caused by the emphasis on brevity that was encouraged by an expensive system of limited capacity. Although this is seldom said explicitly, the standard for presentation in a research paper is roughly at the advanced graduate student level. If you write to be understandable to undergraduates, referees and editors will complain that you are wasting space describing basic steps that the reader is supposed to be able to fill in. If you write at a more advanced level, they will complain that the details are too difficult to fill in. The result is exposition that is often hard to follow, especially for non-experts. Bill Thurston [Thurston], in an article that is largely a rejoinder to that of Jaffe and Quinn [JaffeQ], argues convincingly that formal proofs are fundamental to the correctness of mathematics, but that they are a terrible way to convey mathematical understanding. Although this is seldom stated explicitly, implicitly it seems to be understood well by everyone. After all, we do have advanced seminars, instead of handing out copies of papers and telling everybody to read them. The reason is that the speaker is expected, by neglecting details and by using intonation and body language, to convey an impression of the real essence of the argument, which is hard to acquire from the paper. The medium of paper journals and the standards enforced by editors and referees limit what can be done.

While Thurston argues for a more intuitive exposition of mathematical results, Lamport [Lamport] advocates just the opposite. Both Lamport and Thurston feel that the usual standards of mathematical presentation fall far short of the rigor of formal proofs. Lamport feels that our literature would be far more reliable if proofs were more formal. One problem with this suggestion is that it would make proofs even harder to understand. Lamport's solution is to have a hierarchy of proofs, of increasing levels of rigor. However, the current system cannot accommodate this, given the premium placed on brevity. The ideal system would, as Lamport suggests, have multiple presentations of the results, starting possibly with a video of the author lecturing about them, and going down to a detailed formal proof.
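To illustrate what a hierarchical proof might look like, here is a short example written roughly in the structured style Lamport advocates (the example is mine, not one taken from [Lamport]); in an electronic version, a reader could expand any numbered step into a still more detailed level below it:

    THEOREM. sqrt(2) is irrational.
    <1>1. ASSUME: sqrt(2) = p/q, with p and q integers, q != 0, gcd(p,q) = 1.
          PROVE: a contradiction follows.
      <2>1. p^2 = 2*q^2.
            PROOF: Square both sides of the assumption and clear denominators.
      <2>2. p is even.
            PROOF: By <2>1, p^2 is even, and the square of an odd number is odd.
      <2>3. q is even.
            PROOF: By <2>2, write p = 2r; then <2>1 gives 4*r^2 = 2*q^2,
            so q^2 = 2*r^2 is even, and hence q is even.
      <2>4. Q.E.D.
            PROOF: <2>2 and <2>3 contradict gcd(p,q) = 1.
    <1>2. Q.E.D.
          PROOF: The assumption in <1>1 is untenable, so sqrt(2) is irrational.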
A system with multiple levels of presentation of this kind is possible with electronic publishing, given the availability of almost unlimited resources (although it will be a while before video presentations can be included routinely), but it cannot be accommodated by our paper journals.

Electronic publishing and electronic communication in general are likely to have a profound impact on how mathematical research is performed, beyond destroying paper journals. They are likely to promote a much more collaborative mode of research. One mode of mathematical research that is highly prized in our mythology is that of the individual who goes into a study and emerges a few months or years later with a great result. Andrew Wiles' proof of Fermat's Last Theorem fits this model, and many people have expressed their admiration of Wiles for devoting many years of secret work to his project. However, that is not the only mode of operation. As an example, the reviewing of the Wiles manuscript is being carried out by a large team of referees who communicate via e-mail, and the communication is apparently very helpful in this work. In general, team efforts have been increasing, and rapid electronic communication via e-mail and fax has been instrumental in this. Inspection of Math. Rev. shows that the proportion of coauthored papers has increased substantially over the last few decades. Given the increasing specialization of researchers, this is only natural. Further, it is a congenial mode of operation for many. Laszlo Babai wrote a marvelous article [Babai] that I highly recommend. Its title is a pun. It is an account of the proof of an important recent result in computational complexity, on the power of interactive proofs. At the same time, the article is a description of how the proof was developed through exchanges of TeX files of manuscripts and e-mail messages among a group of about two dozen researchers. There are many such interactions going on, and the electronic superhighway will make them easier and more popular. (It will also create new problems, such as that of assigning proper credit for work resulting from the interaction of dozens of researchers. That is another story, however. It might seem unfamiliar and hard for mathematicians, but other scientists, such as elementary particle physicists, have managed to deal with it.)

The following sections examine some of the existing methods of information dissemination and their relevance for scholarly publications. All have serious deficiencies, but all have promising features. The final section will present my vision of what future systems will be like.

5.1 Netnews

The Net (the informal name for the Internet and a variety of other electronic information services) provides several interesting examples of information dissemination systems. Usenet, the decentralized system of discussion groups, is extremely popular, but not among serious scholars. As has been noted by many, unmoderated discussion groups, such as sci.math, are at the opposite end of the spectrum from the traditional scholarly journals. They have been called a "global graffiti board for trivial pursuit" [Harnad1]. They are full of arrant nonsense, and of uninformed commentary of the "As I recall, my high school teacher said that ..." variety. They are also beginning to attract cranks. (I was amazed that sci.math survived for many years without any serious problems with crackpots, although that is unfortunately changing.)
Most mathematicians who try sci.math give up in disgust after a few days, since only a tiny percentage of the postings (of which there have been over 60,000 so far) have any real information or interest. I have continued reading it sporadically, more out of sociological interest than anything else. What I have found fascinating is that although there are now cranks posting to it (and, what is worse, generating long discussions), and there is plenty of "flaming," as well as the nonsense alluded to above, occasional nuggets of information do show up. Sometimes a well-known researcher like Noam Elkies or Philippe Flajolet will provide a sophisticated solution to a problem that has been posted. What is perhaps even more interesting is that every once in a while some totally unknown person from a medical or agricultural school, say, will post an erudite message giving a proof, a set of references, or the history of a problem. These are not the people I would think of when choosing referees, yet they clearly have expert knowledge of at least the topic at hand.

Reading sci.math also provides a strong demonstration of the self-correcting nature of science and mathematics. The opposite of Gresham's law operates, in that good proofs tend to drive out the bad ones. For example, every few months the Monty Hall paradox (with a contestant given a choice of three doors, etc.) crops up again as a new reader brings it up. There is typically a flurry of a few hundred messages (this is Usenet, after all, with no centralized control and no synchronization), but after a week or so the discussion dies down, and everyone (or almost everyone, since there are always a few crackpots) is convinced of what the right answer is. In these days when we hear constant complaints about lack of public interest in science and mathematics, it is interesting to note that sci.math has an estimated 120,000 readers worldwide. This is a large group of people who have at least some interest in serious mathematics.

Although the Usenet model does have some redeeming features, it is not a solution to the scholarly publishing problem. The information content is far too low. Even the specialized groups, such as sci.math.num-analysis, which are at a higher level than sci.math by virtue of their greater technical focus, are not adequate, as there is too much discussion of elementary topics. A somewhat better solution is that of moderated discussion groups. Dan Grayson runs the sci.math.research group, which is much more interesting for professional mathematicians because of the filtering he does. However, while it is a useful forum for asking technical questions or picking up odd pieces of mathematical gossip or strange results, it is more like a coffee hour conversation in a common room than a serious publishing venture.

There are other kinds of moderated discussion groups that engage in a form of publication that is clearly useful, but that would not fit into the traditional paper journal mold. For example, F. Bookstein (posting to vpiej-l, Pub-EJournals, December 2, 1993) operates a morphometrics bulletin board. Its main function is to provide technical answers to questions from readers. The answers are seldom novel; indeed, one goal is to avoid mathematical novelty, and to provide instead references to existing results, or combinations of known results. However, the answers serve an important function, saving researchers immense effort by providing needed technical advice.
There are many mailing lists that provide some of the services that Bookstein's bulletin board does. Many mathematicians are familiar with the Maple and Mathematica user group mailing lists. They consist of e-mail messages from users with questions and complaints, and of responses to those messages from other users and from the paid support staff for those symbolic algebra systems. These messages do not qualify for traditional paper journal publication. Too many are basic questions about simple features of the system. Most are about bugs in the current release, or about system incompatibilities, and nobody would want to make that part of the archival record. However, the lists are extremely useful, largely because with electronic storage it is possible to search the great mess of them for the tidbits relevant to one's current needs. Further, the discussions sometimes do veer into deep questions, as when a simple query about solving a system of polynomial equations evolves into a discussion of what is effectively computable with Groebner bases.

All the discussion group formats mentioned above do what Quinn [Quinn] regards as pernicious, namely blur the line between informal communication and formal, reviewed publication. However, where Quinn sees danger, I see opportunity. I feel that it is desirable to blur this line, and I see that happening in many other forms as well.

5.2 FTP and automated preprint servers

To see how confining the current paper journal system is, consider preprints. Half a century ago there were practically no preprints, since there were no technical means for producing them. Journals were the primary means of information dissemination. Today, with xerox machines widely available, preprints are regarded as indispensable. Even Quinn [Quinn], who warns of the dangers of rapid publication, is an advocate of rapid dissemination of results through preprints. Many mathematicians feel that preprints have become the primary method of communicating new results. It has been a long time since I have heard of an expert in any mathematical subject learning of a major new development in his or her own area through a journal publication. (The last such instance I can recall was caused by the Great Cultural Revolution in China, which interrupted the informal communication methods.) Usually the experts either receive preprints as soon as they are written, or else they hear of them through the grapevine and, when the results seem especially interesting, request preprints from the author or make copies of their friends' copies.

Preprint distribution can be done, and increasingly is done, via e-mail. It is much easier to write a shell script or create an alias to send 50 copies electronically (see the sketch below) than it is to make 50 xerox copies, stick them in envelopes, and address them. Still, this does lead to a messy decentralized system in which each author has to decide who is to receive the preprints. There are two possible enhancements to it. Either one would be a major advance, either one could become the main method of disseminating scholarly information in the space of a year or so, and either one would be extremely subversive of the present journal system, possibly leading to its demise.

One enhancement to present preprint distribution is an automated preprint server. There are several of them already operational in mathematics, such as the one at Duke that covers algebraic geometry. They have not had much impact yet. However, this could change suddenly.
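Such a server is, at bottom, only a modest elaboration of the script an individual author might already use for mailing out preprints. As a minimal sketch (in Python, with the addresses, file name, and local mail relay all hypothetical), the author's side of the process might look like this:

  # Mail one TeX preprint to a list of colleagues; a stand-in for the
  # "shell script or alias" mentioned above. All names are illustrative.
  import smtplib
  from email.message import EmailMessage
  from pathlib import Path

  RECIPIENTS = ["colleague1@celestialu.edu", "colleague2@terrestrialu.edu"]

  def mail_preprint(tex_file, sender, recipients):
      """Send the preprint as a TeX attachment to every address given."""
      msg = EmailMessage()
      msg["Subject"] = "Preprint: " + Path(tex_file).stem
      msg["From"] = sender
      msg["To"] = ", ".join(recipients)
      msg.set_content("A new preprint is attached. Comments are welcome.")
      msg.add_attachment(Path(tex_file).read_bytes(),
                         maintype="application", subtype="x-tex",
                         filename=Path(tex_file).name)
      with smtplib.SMTP("localhost") as server:  # assumes a local mail relay
          server.send_message(msg)

  mail_preprint("monoids.tex", "author@research.example.com", RECIPIENTS)

A server merely centralizes this loop: it accepts submissions, sends the abstracts to a subscriber list, and answers retrieval requests.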
An instructive example is provided by the system that was set up by Paul Ginsparg at Los Alamos [Ginsparg, Sci1, Sci2]. In only one year, starting about two years ago, the high energy theoretical physics community switched over almost completely to a uniform system of preprint distribution that is run from Ginsparg's workstation (and now also from some other sites that copy the material from his). Apparently nobody in theoretical physics can afford to stay out of this system, as it has become the primary means of information dissemination. Even brief interruptions of service [Sci2] bring immediate and heated complaints. This system has already been extended to several other fields besides high energy theoretical physics, and requests for help in setting up the system have become a major chore for Ginsparg (private communication).

The main point about this system is that it is cheap. The software is available for free, and not much maintenance is required. Physicists submit preprints electronically (in a prescribed version of TeX, and in a prescribed format), the system automatically files them and sends out abstracts to lists of (automatically maintained) subscribers, and the entire papers can then be retrieved through e-mail requests.

An important observation about Ginsparg's system is that the transition to it was sudden, at least by the standards of the publishing world. It took under a year from the time Ginsparg wrote his program to the time it became the standard and almost exclusive method of distributing new results in high energy theoretical physics. Ginsparg [Ginsparg] attributes the rapid acceptance of his system to the fact that his area had already switched to mass mailings of preprints as the primary information dissemination scheme, with the regular printed journals of secondary importance. Mathematics is not at that stage. However, this could change rapidly. The use of e-mail and anonymous ftp for distribution of mathematics preprints is spreading, and other fields have switched or are switching to Ginsparg's system. I would not find it surprising if mathematicians suddenly adopted it as well.

Can Ginsparg's system coexist with paper journals? Theoretical high energy physicists are still submitting their papers to the traditional journals. However, it is not clear how long that will continue. If I have heard of an interesting result that has been published in some journal, and if I can order via e-mail a preprint of the same version (except possibly for the formatting) from an automated preprint server, why should I bother to go to the library to look for the journal? And if I don't look at the journal, why should I mind if my library cancels its subscription? If anything is certain, it is that my library will keep coming back each year with lists of proposed journal cancellations, so the pressure to give up more and more subscriptions will continue.

One solution would be for publishers to require that preprint copies be deleted from preprint archives once a paper is published. I doubt that this approach can work. It is technologically hard to enforce: how can anyone keep track of all the copies that might have been squirreled away on private disks? Even more important, would scholars tolerate such requirements? Suppose you are getting extensive praise for your latest masterpiece on the cohomology of transcendental monoids, which has been circulated in preprint form by an automated archive.
A message arrives from the publisher telling you that your paper has just appeared, and asking you to withdraw the preprint. You do, and instantly you start getting e-mail complaints from people who had heard of it from others, or who had only now gotten around to looking at their accumulated list of preprint announcements, and who suddenly find they cannot obtain it in the way they are used to. You tell them to go look in the journal. Their e-mail replies come back instantly and say that their library does not subscribe to that journal, or has not received it yet, or .... Would that not tempt you to resolve never again to submit your papers to any journal with such a policy?

It is not necessary to have a centralized preprint server such as Ginsparg's to subvert our current publishing system. The natural evolution of anonymous ftp directories can accomplish the same result. Many people are familiar with the use of ftp to transfer files. What is happening now is that various departments are setting up systematic directories of preprints written by their members that can be accessed by ftp. A uniform naming convention is becoming common, so that preprints from the Department of Mathematics at Celestial University will be accessible at ftp.math.celestialu.edu, organized into directories by year or by author. The logic of having a uniform system is so overwhelming that I am sure this convention will be adopted quickly and widely.

This will be extremely dangerous to the traditional journals. Decentralized databases that are systematic and accessible at no charge can serve the same purpose as centralized preprint servers. They can be combined into a single database by any programmer who writes the simple software needed to keep track of changes in the various departmental databases and copy new additions over the network. Alternatively, they can be accessed through various Internet tools, such as Gopher, Mosaic, or their successors, which present a convenient interface that hides from the user the gathering of information from all over the world. (Given the evolution of software, the distinction between a centralized preprint server and a decentralized database will soon be immaterial.) In either case, the result would be to give the scholar immediate access to preprints from any place on the Net. As with preprint servers, this is likely to doom traditional journals.

It is possible for a field to rely on preprints alone. Ginsparg [Ginsparg] writes that in theoretical high energy physics, preprints have long been the primary means of communicating research results. Rob Pike, a colleague working in operating systems, reports (private communication) that in his area journals have become irrelevant. Communication is via e-mail and electronic exchange of preprints, typically through anonymous ftp. A recent announcement of reports about a new operating system resulted in over a thousand copies being made via anonymous ftp in just the first week.

It is easy to argue that the experience of operating systems is not relevant to mathematics, since there the final product is not a paper but software. However, the same is increasingly true of various areas of mathematics. There have been many complaints by mathematicians that their efforts were not getting proper recognition, since what they were producing was software, whether for solving partial differential equations or for doing geometric modeling, and journal publications were not the right way to evaluate their contributions.
With electronic publishing, this problem can be overcome. According to Leo Guibas (private communication), in computer graphics, journals and the proceedings of the SIGGRAPH conference are the primary means of communication. The reason for the reliance on paper publications in this area is the lack of standardized color display equipment, so the delays in publication are tolerated. However, there are problems, such as the publication of only abstracts in the SIGGRAPH proceedings; these are not fully refereed, and are too short to enable researchers to evaluate the work fully.

Graphics and operating systems might not have much relevance to mathematics. However, there are fields close to mathematics that do not follow the usual standards for mathematical publication. Theoretical computer science has standards of rigor comparable to those in mathematics. There are two big annual meetings, STOC and FOCS. Papers (or, to be precise, extended abstracts of about 10 pages) are submitted to these conferences about six months in advance. A program committee then selects 60 or 70 out of 200-300 submissions. The accepted papers are then published in the proceedings, which are given out at registration to all participants, and which can be purchased by anyone from the publishers, ACM and IEEE. The official policy is that since the committee acts on the basis of extended abstracts only, and in any case does not have time to do a thorough refereeing job, the proceedings publications should be treated as preliminary, and final revised versions should be submitted to refereed journals. There have been perennial complaints that this was not being done, and that researchers were leaving the proceedings drafts as the only ones on record. Recently, however, David S. Johnson has compiled a bibliography of STOC and FOCS papers, and it appears that around 60% of them do get published in polished form in a journal. This is considerably better than the folklore would lead one to believe.

However, for working researchers, practically all the information dissemination takes place through early preprint exchanges and interactions at these conferences. Moreover, acceptance at these conferences is regarded as more prestigious than journal publication, as one can see from the letters of recommendation that get written. Thus here also it is possible to have a healthy field that does not depend on journal publications for its information dissemination. However, the situation is not ideal. There are frequent complaints about errors in the proceedings papers, and there have been a few notorious cases where the results turned out to be completely wrong. What is lacking here, as with the other informal preprint systems mentioned above, is a peer review system to provide assurance of reliability.

5.3 The prepublication continuum and future systems

The popularity of preprints, of Usenet groups, and of mailing lists shows that they fill an important role for scholars. On the other hand, there is also a need for reliability in the literature, to enable scholars to build on the accumulated knowledge. One way to resolve the conflict is to follow Quinn's advice [Quinn] and rigidly separate distribution of information, such as preprints, from publication in journals. The former would be done rapidly and informally, while the latter would follow the conventional model of slow and careful refereeing, with extra delays even built in to help uncover problems.
I feel a better solution would be an integrated system that combines informal Usenet-type postings with preprints and electronic journal publication. Stevan Harnad has been advocating just such a solution [Harnad1], and has coined the terms "scholarly skywriting" and "prepublication continuum" to denote the process in which scholars merge their informal communications with formal publications. I will describe the system I envisage as if it were operating on a single centralized database machine. This is for convenience only; any working system would almost certainly involve duplicated, or distinct but coordinated, systems.

At the bottom level, anyone could submit a preprint to the system. There would have to be some control on submissions (after all, computers are great at generating garbage, and malicious users could easily exceed the capacity of any storage system), but it could probably be minor. Standards similar to those of the Abstracts of the AMS might be appropriate, so that proofs that the Earth is flat, or that special relativity is a Zionist conspiracy, would be kept out, but discussions of whether Bacon wrote Shakespeare's plays might get in (since there are interesting statistical approaches to that question). There would also be digital signatures and digital timestamping, to provide authentication.

The precise rules for how the system would function would have to be decided by experimentation. For example, one feature of the system might be that nothing that is ever submitted could be withdrawn. This is already part of Ginsparg's system, and it helps enforce quality, since posters submitting poorly prepared papers risk having their errors exposed forever. On the other hand, such a rule might be felt to be too inhibiting, and so might not be imposed.

Once a preprint was accepted, it would be available to anyone. Depending on its subject classification or keywords, notification of its arrival would be sent to those subscribing to alerting services in the appropriate areas. Comments would be solicited from anyone (subject again to some minor length limitations), and would be appended to the original paper. There could be provisions for anonymous comments as well as signed ones. The author would have the opportunity to submit revised versions of the paper in response to the comments (or to his or her own further work). All the versions of the paper, as well as all the comments, would remain part of the record. This process could continue indefinitely, even a hundred years after the initial submission. Author X, writing a paper that improves an earlier result Y(123) of author Y, would be encouraged to submit a comment to Y(123) to that effect. Even authors who merely reference Y(123) would be encouraged to note that in comments on Y(123). (Software could do much of this automatically.) In this way a research paper would become a living document, evolving as new comments and revisions were added. (I will avoid discussing the technical details of how this could be done; only a small subset of the facilities that hypertext systems are supposed to provide would be required, especially since I do not envisage any need for real-time updates.) This process all by itself would go a long way towards providing trustworthy results.

There are many issues that would need to be resolved before such a system could be set up. Would anonymous comments be allowed, for example?
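To make the shape of such a system a little more concrete, here is a minimal sketch, in Python, of the record it might keep for a single paper. Every name in it is my own illustration, not a proposal for an actual schema; digital signatures are omitted, and the timestamp fields stand in for the digital timestamping mentioned above:

  # A "living document": every revision and every comment is appended
  # to the record, and nothing is ever removed.
  from dataclasses import dataclass, field
  from datetime import datetime, timezone
  from typing import Optional

  def now():
      return datetime.now(timezone.utc)

  @dataclass
  class Comment:
      author: Optional[str]            # None would represent an anonymous comment
      text: str
      is_referee_report: bool = False  # referee reports are marked as such
      timestamp: datetime = field(default_factory=now)

  @dataclass
  class PaperRecord:
      title: str
      author: str
      versions: list = field(default_factory=list)  # (timestamp, full text) pairs
      comments: list = field(default_factory=list)  # appended, never withdrawn

      def submit_revision(self, text):
          self.versions.append((now(), text))

      def add_comment(self, comment):
          self.comments.append(comment)

      def refereed(self):
          """True if at least one marked referee report has been appended."""
          return any(c.is_referee_report for c in self.comments)

A reader's alerting or search program could then, for instance, restrict attention to records for which refereed() is true, which is exactly the kind of filtering discussed below.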
Grafted on top of this almost totally uncoordinated and uncontrolled system, there would be an editorial and refereeing structure. This would be absolutely necessary for dealing with many submissions. While unsolicited comments are likely to be helpful in deciding on the novelty and correctness of many papers, they will not be sufficient in all cases. What can one do about a poorly written 100-page manuscript, for example? It seems that a formal review process would be indispensable. That is also Harnad's conclusion [Harnad1, Harnad2]. There would have to be editors who would arrange for proper peer review. The editors could be appointed by learned societies, or could even be self-appointed. (The self-correcting nature of science would take care of the poor ones, I expect. We have the vanity press even now, and it has not done appreciable damage.) These editors could use the comments that have accumulated on a paper to help them assess the correctness and importance of its results and to select official referees. (After all, who is better qualified to referee a paper than somebody who had enough interest to look at it and comment knowledgeably on it? It is usually easy to judge someone's knowledge of a subject, and the thoroughness of their reading of a manuscript, from their comments.) The referee reports and evaluations would be added as comments to the paper, but would be marked as such. That way someone looking for information in homological algebra, say, who is not familiar with the subject, could set his or her programs to search the database only for papers that have been reviewed by an acknowledged expert or a trusted editorial board.

Just as today, there would be survey and expository papers, which could be treated just like all the others. (Continuous updating would have obvious advantages for surveys and bibliographies.) All the advantages that Quinn claims for our present system in providing a reliable literature could be provided by the new one.

Harnad [Harnad1, Harnad2] advocates a hierarchy of groups, with individuals required to pass scrutiny before being allowed to participate in discussions at the higher levels. I feel this will be neither necessary nor desirable, especially in mathematics, although the best structure might vary from field to field. Harnad is a psychologist, and he may be reacting to the fact that the proverbial people in the street all fancy themselves experts in psychology, qualified to lecture on the subject to anybody. Most people, on the other hand, disclaim any expertise in mathematics. Therefore I do not expect that systems in mathematics will be flooded by crank contributions. After all, what crank will be interested in discussions of crystalline cohomology, or of pre-homogeneous vector spaces? There is a need for protection against malicious abusers of the system, but beyond that, restrictions to specific topics might suffice. In a few cases stronger controls might be needed. For example, Wiles' proof of Fermat's Last Theorem might attract many crank or simply uninformed comments that would be just a distraction. That is not likely to be a problem for the bulk of mathematical papers.

Will there be any place for paper in a system such as the one sketched above? I suspect its place will be very limited, if it has any at all. In paper publishing nothing can be changed once it has gone to the printer. Therefore this system cannot be adapted to provide the opportunities for continuous updating and correcting that are available with electronic publishing.
We rely on this system because it was the only one feasible in the past. However, we now have better alternatives, and I expect them to dominate. Perhaps the exceptionally important papers will be collected periodically (after selection by a panel of experts) and printed as a mark of distinction. If that is done, then the proposed system will in many ways resemble the one advocated by Quinn. Quinn suggested a minimum six-month delay between submission of a paper and publication; the scenario I am sketching would provide a 10 or 20 year delay, and even greater assurance of reliability! For all practical purposes, though, it is likely that traditional paper journals and libraries will cease to matter to scholars.

Acknowledgements: This article presents my own personal view of the future of mathematical journals. Few of the observations and predictions are original, and I have drawn freely on the ideas in the papers listed below. I have benefited greatly from extensive e-mail correspondence with Paul Ginsparg and Stevan Harnad. Helpful comments were also provided by Joe Buhler, Dick Palais, and Bill Thurston. I thank Carol-Ann Blackwood, Dan Grayson, Eric Grosse, Leo Guibas, Jane Kister, Dan Madden, Rob Pike, and Arden Ruttan for supplying useful information.

References

[AMSS] Survey of American research journals, Notices Amer. Math. Soc. 40 (1993), 1339-1344.
[Babai] L. Babai, E-mail and the unexpected power of interaction, pp. 30-44 in Proc. 5th IEEE Structure in Complexity Theory Conf., Barcelona, 1990.
[Franks] J. Franks, The impact of electronic publication on scholarly journals, Notices Amer. Math. Soc. 40 (1993), 1200-1202.
[Ginsparg] P. Ginsparg, First steps towards electronic research communication, Proc. Gateways to Knowledge Conf., to appear.
[Harnad1] S. Harnad, Scholarly skywriting and the prepublication continuum of scientific inquiry, Psychological Science 1 (1990), 342-343. Reprinted in Current Contents 45 (November 11, 1991), 9-13.
[Harnad2] S. Harnad, Implementing peer review on the Net: Scientific quality control in scholarly electronic journals, Proc. Intern. Conf. on Refereed Electronic Journals: Towards a Consortium for Networked Publications, to appear. (Available via anonymous ftp, along with [Harnad1] and other related papers, from princeton.edu, in directory pub/harnad/Harnad.)
[JaffeQ] A. Jaffe and F. Quinn, "Theoretical mathematics": toward a cultural synthesis of mathematics and theoretical physics, Bull. Amer. Math. Soc. (NS) 29 (1993), 1-13.
[Lamport] L. Lamport, How to write a proof, Amer. Math. Monthly, to appear.
[MR] Mathematical Reviews, 50th Anniversary Celebration, special issue, Jan. 1990.
[Price] D. J. Price, The exponential curve of science, Discovery 17 (1956), 240-243.
[Quinn] F. Quinn, Roadkill on the electronic highway? The threat to the mathematical literature, to appear.
[Sci1] G. Taubes, Publication by electronic mail takes physics by storm, Science 259 (Feb. 26, 1993), 1246-1248.
[Sci2] G. Taubes, E-mail withdrawal prompts spasm, Science 262 (Oct. 8, 1993), 173-174.
[Story] G. A. Story, L. O'Gorman, D. Fox, L. L. Schaper, and H. V. Jagadish, The RightPages image-based electronic library for alerting and browsing, IEEE Computer 25 (no. 9, Sept. 1992), 17-26.
[Thurston] W. P. Thurston, On proof and progress in mathematics, Bull. Amer. Math. Soc., to appear.