The Year 2000 problem (also known as the Y2K problem, the millennium bug or the Y2K bug) was the result of a practice in early computer program design that caused some date-related processing to operate incorrectly for dates and times on and after January 1, 2000. It caused widespread concern that critical industries (such as electricity or finance) and government functions would cease operating at the stroke of midnight between December 31, 1999 and January 1, 2000, and on other critical dates billed as "event horizons". This fear was fueled by the attendant press coverage and other media speculation, as well as corporate and government reports. People recognized that long-running systems could break down when the assumption of ascending two-digit years (..., 97, 98, 99) suddenly became invalid. Companies and organizations worldwide checked and upgraded their computer systems in response. In the end, no significant computer failures occurred when the clocks rolled over into 2000, but the preparation for the Y2K bug had a significant effect on the computer industry. Debate continues on whether the absence of computer failures was the result of the preparation undertaken or whether the significance of the problem had been overstated.
Background
Y2K was the common abbreviation for the year 2000 software problem. The abbreviation combines the letter Y for "year", and k for the Greek prefix kilo meaning 1000; hence, 2K signifies 2000. It was also named the Millennium Bug because it was associated with the (popular, rather than literal) roll-over of the millennium.
The Year 2000 problem was the subject of the early book "Computers in Crisis" by Jerome and Marilyn Murray (Petrocelli, 1984; reissued by McGraw-Hill under the title "The Year 2000 Computing Crisis" in 1996). The first recorded mention of the Year 2000 problem on a Usenet newsgroup was posted on Saturday, January 19, 1985, by Spencer Bolles.[1]
The acronym Y2K has been attributed to David Eddy, a Massachusetts programmer,[2] in an e-mail sent on June 12, 1995. He later said, "People were calling it CDC (Century Date Change) and FADL (Faulty Date Logic). There were other contenders. It just came off my COBOL calloused fingertips."[citation needed]
It was speculated that computer programs could stop working or produce erroneous results because they stored years with only two digits: the year 2000 would be represented by 00 and interpreted by software as the year 1900, causing date comparisons to produce incorrect results. It was also thought that embedded systems, making use of similar date logic, might fail and cause utilities and other crucial infrastructure to break down.
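A minimal sketch of this failure mode in C, assuming a record that keeps only the last two digits of the year (the struct, field and variable names are illustrative, not drawn from any particular affected system):

    #include <stdio.h>

    /* A record that stores only the last two digits of the year,
       as many file formats of the 1960s-1990s did. */
    struct record {
        int yy;   /* two-digit year: 99 means 1999; 00 was meant to mean 2000 */
    };

    int main(void) {
        struct record last_login = { 99 };  /* 1999 */
        struct record today      = { 0 };   /* 2000, stored as 00 */

        /* Software that simply prepends "19" interprets 00 as 1900 ... */
        printf("today is interpreted as %d\n", 1900 + today.yy);

        /* ... and two-digit comparisons come out backwards: 00 < 99 makes
           the year 2000 appear to lie before 1999. */
        if (today.yy < last_login.yy)
            printf("error: today appears to be earlier than the last login\n");
        return 0;
    }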
"The Y2K problem is the electronic equivalent of the El Niño and there will be nasty surprises around the globe." – John Hamre, Deputy Secretary of Defense[3]
Special committees were set up by governments to monitor remedial work and contingency planning, particularly by crucial infrastructures such as telecommunications, utilities and the like, to ensure that the most critical services had fixed their own problems and were prepared for problems with others. It was only the safe passing of the main "event horizon" itself, January 1, 2000, that fully quelled public fears.
In North America, the actions taken to remedy the possible problems had unexpected benefits. Many businesses installed computer backup systems for critical files. Those Y2K preparations paid off during the Northeast Blackout of August 14, 2003: the earlier work had included the installation of new electrical generation equipment and systems, which allowed for a relatively rapid restoration of power in some areas.
The programming problem
The practice of using two-digit dates for convenience long predates computers, notably in artwork. Abbreviated dates do not pose a problem for humans, as works and events pertaining to one century are sufficiently different from those of other centuries. Computers, however, are unable to make such distinctions.
"I’m one of the culprits who created this problem. I used to write those programs back in the 1960s and 1970s, and was proud of the fact that I was able to squeeze a few elements of space out of my program by not having to put a 19 before the year. Back then, it was very important. We used to spend a lot of time running through various mathematical exercises before we started to write our programs so that they could be very clearly delimited with respect to space and the use of capacity. It never entered our minds that those programs would have lasted for more than a few years. As a consequence, they are very poorly documented. If I were to go back and look at some of the programs I wrote 30 years ago, I would have one terribly difficult time working my way through step-by-step." – Alan Greenspan[4]
In the 1960s, computer memory was scarce and expensive, and most data processing was done on punch cards which represented text data in 80-column records. Programming languages of the time, such as COBOL and RPG, processed numbers in their ASCII or EBCDIC representations. They occasionally used an extra bit called a "zone punch" to save one character for a minus sign on a negative number, or compressed two digits into one byte in a form called binary-coded decimal, but otherwise processed numbers as straight text. Over time the punch cards were converted to magnetic tape and then disk files and later to simple databases like ISAM, but the structure of the programs usually changed very little. Popular software like dBase continued the practice of storing dates as text well into the 1980s and 1990s.
Saving two characters for every date field was significant in the 1960s. Since programs at that time were mostly short-lived affairs written to solve a specific problem or to control a specific hardware setup, neither managers nor programmers of that time expected their programs to remain in use for many decades. The realization that databases were a new type of program with different characteristics had not yet come, and hence most did not consider storing only two digits of the year a significant problem. There were exceptions, of course; the first person known to publicly address the problem was Bob Bemer, who had noticed it in 1958 as a result of work on genealogical software. He spent the next twenty years trying to make programmers, IBM, the US government and the ISO aware of the problem, with little result. His efforts included the recommendation that the COBOL PICTURE clause be used to specify four-digit years for dates. This could have been done by programmers at any time from the initial release of the first COBOL compiler in 1961 onwards. However, lack of foresight, the desire to save storage space, and overall complacency prevented this advice from being followed. Despite magazine articles on the subject from 1970 onwards, the majority of programmers only started recognizing Y2K as a looming problem in the mid-1990s, and even then inertia and complacency caused it to be mostly ignored until the last few years of the decade.
Storage of a combined date and time within a fixed binary field is often considered a solution, but the possibility for software to misinterpret dates remains, because such date and time representations must be relative to a defined origin. Rollover of such systems is still a problem but can happen at varying dates and can fail in various ways. For example:
The typical Unix timestamp (time_t) stores a date and time as a 32-bit signed integer representing, roughly speaking, the number of seconds since January 1, 1970. It will overflow the range of a signed 32-bit integer in 2038, causing the Year 2038 problem. To solve this problem, many systems and languages have switched to a 64-bit time_t or supplied 64-bit alternatives.
The popular spreadsheet Microsoft Excel stores a date as a number of days since an origin (often erroneously called a Julian date). A "Julian date" stored in a 16-bit integer will overflow after 65,536 days (approximately 179 years). Unfortunately, the Windows versions of the program by default start at 1900 and the Mac versions by default start at 1904 (although this is usually configurable).
The Microsoft Excel spreadsheet program also had a very elementary Y2K problem: Excel (in both the Windows and Mac versions, when they are set to start at 1900) incorrectly treated the year 1900 as a leap year. In addition, the years 2100, 2200 and so on were regarded as leap years. This bug was fixed in later versions, but since the epoch of the Excel timestamp was set as January 1, 1900 in previous versions, the year 1900 is still treated as a leap year to maintain backward compatibility.
In the C programming language, the standard library function used to get the current date originally had the problem that it provided only the year number within the 20th century, and for compatibility's sake it still represents the year as the year minus 1900. Many programmers in C, and in Perl and Java, two programming languages widely used in web development that use the C functions, incorrectly treated this value as the last two digits of the year. On the web this was usually a harmless bug, but it did cause many dynamically generated webpages to display January 1, 2000 as "1/1/19100", "1/1/100", or variations of that, depending on the format (see the sketch after this list).
Older applications written for the commonly used UNIX source code control system SCCS failed to handle years that began with the digit "2".
In the Windows 3.1 file manager, dates were shown as 1/1/19:0 for 1/1/2000. An update was available.
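The "1/1/19100" problem mentioned above is easy to reproduce. A minimal sketch in C of the buggy pattern of treating the tm_year field as a two-digit year (the output format is illustrative):

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        time_t now = time(NULL);
        struct tm *t = localtime(&now);

        /* tm_year counts years since 1900; tm_mon counts months from 0. */

        /* Buggy pattern: treating tm_year as the last two digits of the year.
           For the year 2000, tm_year is 100, so this prints "1/1/19100". */
        printf("%d/%d/19%d\n", t->tm_mon + 1, t->tm_mday, t->tm_year);

        /* Correct pattern: add 1900 to obtain the full year. */
        printf("%d/%d/%d\n", t->tm_mon + 1, t->tm_mday, t->tm_year + 1900);
        return 0;
    }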
Even before January 1, 2000 arrived, there were also some worries about September 9, 1999 (albeit lesser than those generated by Y2K). This date could also be written in the numeric format 9/9/99. The value was frequently used to specify an unknown date; it was thus possible that programs might act on records containing unknown dates on that day.[5] It is also somewhat similar to the end-of-file code 9999 used in old programming languages, and it was feared that some programs might unexpectedly terminate on that date. The bug, however, was more likely to confuse computer operators than machines.
Another related problem for calculations involving the year 2000 was that it was a leap year even though years ending in "00" are normally not leap years. A year is a leap year if it is divisible by 4, unless it is also divisible by 100, in which case it is a leap year only if it is also divisible by 400. For example, 1600 was a leap year, but 1700, 1800 and 1900 were not. Fortunately, most programs were fixed in time, although the vast majority relied on the oversimplified rule that any year divisible by 4 is a leap year, which happens to give the right answer for 2000 because 2000 is divisible by 400.
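A short sketch in C of both the full rule and the oversimplified one (function names are illustrative):

    #include <stdbool.h>
    #include <stdio.h>

    /* Full Gregorian rule: divisible by 4, except century years,
       which are leap years only if also divisible by 400. */
    bool is_leap(int year) {
        return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
    }

    /* Oversimplified rule that many programs relied on. */
    bool is_leap_naive(int year) {
        return year % 4 == 0;
    }

    int main(void) {
        /* Both rules agree that 2000 is a leap year,
           but they disagree on 1900 and 2100. */
        printf("2000: full=%d naive=%d\n", is_leap(2000), is_leap_naive(2000));
        printf("1900: full=%d naive=%d\n", is_leap(1900), is_leap_naive(1900));
        printf("2100: full=%d naive=%d\n", is_leap(2100), is_leap_naive(2100));
        return 0;
    }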
The problem was compounded by the need of many systems, especially in the financial services sector, to calculate expiration and renewal dates lying in the future. For example, a company tracking five-year bonds would have experienced Y2K problems as early as 1995, when its systems first had to calculate an expiration date of 2000: with two-digit years, the "00" expiration year appeared to be earlier than the "95" of the issue date.
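The forward calculation can be sketched the same way, again with illustrative names and a hypothetical five-year term:

    #include <stdio.h>

    int main(void) {
        int issue_yy  = 95;                       /* bond issued in 1995, stored as 95 */
        int term      = 5;
        int expiry_yy = (issue_yy + term) % 100;  /* 2000 wraps around to 00 */

        /* With two-digit arithmetic the expiration year (00) compares as
           earlier than the issue year (95). */
        if (expiry_yy < issue_yy)
            printf("bond appears to expire before it was issued (%02d < %02d)\n",
                   expiry_yy, issue_yy);
        return 0;
    }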
Documented errors
Before 2000
In 1999 HSBC issued 10,000 card swipe machines, manufactured by Racal, to retailers, but a Y2K flaw prevented them from reading expiration dates properly. The stores had to rely on paper transactions, as they do when there are power failures or phone line failures, until replacement machines arrived.[6]
On January 1, 2000
When January 1, 2000 arrived, the problems that occurred were generally regarded as minor. Problems did not always appear precisely at midnight: some affected programs were not active at that moment, and their faults only showed up when they were next invoked. Nor were all recorded problems directly caused by Y2K programming errors; minor technological glitches occur on a regular basis.
Reported problems include:
In Ishikawa, Japan, radiation-monitoring equipment failed at midnight, but officials said there was no risk to the public.[7]
In Onagawa, Japan, an alarm sounded at a nuclear power plant at two minutes after midnight.[7]
In Japan, at two minutes past midnight, Osaka Media Port, a telecommunications carrier, found errors in the date management part of the company's network. The problem was fixed by 2:43 a.m. and no services were disrupted.[8]
In Japan, NTT Mobile Communications Network (NTT DoCoMo), Japan's largest cellular operator, reported on January 1, 2000, that some models of mobile telephones were deleting new messages received, rather than the older messages, as the memory filled up.[8]
In Australia, bus-ticket-validation machines in two states failed to operate.[9]
In the United States, 150 slot machines at race tracks in Delaware stopped working.[9]
In the United States, the U.S. Naval Observatory, which runs the master clock that keeps the country's official time, had a Y2K glitch on its Web site. Due to a programming problem, the site reported that the date was Jan. 1, "19100."[10]
In the United States, fare-collection turnstiles at PATH stations in New York City and New Jersey failed to operate.
In France, the national weather forecasting service, Meteo France, said a Y2K bug made the date on a webpage showing a map of Saturday's weather forecast appear as "01/01/19100".