Definition
Two layers of encoding make up Unix time. The first layer encodes a point in time as a scalar real number representing the number of seconds that have elapsed since the epoch. The second layer encodes that number as a sequence of bits or decimal digits.

Encoding time as a number
Unix time is a single signed number that increments every second, which makes it easier for computers to store and manipulate than conventional date systems. Interpreter programs can then convert it to a human-readable format. The ''Unix epoch'' is the time 00:00:00 UTC on 1 January 1970. There is a problem with this definition, in that UTC did not exist in its current form until 1972; this issue is discussed below. For brevity, the remainder of this section uses ISO 8601 date and time format, in which the Unix epoch is 1970-01-01T00:00:00Z.
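As an illustration of such conversion, the following is a minimal sketch using only the standard C library; the sample value 1095379200 (2004-09-17T00:00:00Z) is chosen arbitrarily.

<syntaxhighlight lang="c">
#include <stdio.h>
#include <time.h>

int main(void)
{
    /* A Unix time number: seconds elapsed since 1970-01-01T00:00:00Z. */
    time_t t = 1095379200;   /* 2004-09-17T00:00:00Z, chosen arbitrarily */

    /* gmtime() interprets the count as broken-down UTC calendar fields,
     * which strftime() then renders as text. */
    char buf[32];
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", gmtime(&t));
    printf("%lld -> %s\n", (long long)t, buf);
    return 0;
}
</syntaxhighlight>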
Leap seconds

The above scheme means that on a normal UTC day, which has a duration of 86400 seconds, the Unix time number changes in a continuous manner across midnight. For example, at the end of the day 2004-09-16, the time representations progress as follows:

 2004-09-16T23:59:59Z → 1095379199
 2004-09-17T00:00:00Z → 1095379200
 2004-09-17T00:00:01Z → 1095379201

When a leap second occurs, the UTC day is not exactly 86400 seconds long and the Unix time number (which always increases by exactly 86400 each day) experiences a discontinuity. Leap seconds may be positive or negative. No negative leap second has ever been declared, but if one were to be, then at the end of a day with a negative leap second, the Unix time number would jump up by 1 to the start of the next day. During a positive leap second at the end of a day, which occurs about every year and a half on average, the Unix time number increases continuously into the next day during the leap second and then at the end of the leap second jumps back by 1 (returning to the start of the next day). For example, this is what happened on strictly conforming POSIX.1 systems at the end of 1998:

 1998-12-31T23:59:59Z → 915148799
 1998-12-31T23:59:60Z → 915148800
 1999-01-01T00:00:00Z → 915148800
 1999-01-01T00:00:01Z → 915148801

Unix time numbers are repeated in the second immediately following a positive leap second. The Unix time number 1483228800 is thus ambiguous: it can refer either to the start of the leap second (2016-12-31 23:59:60) or the end of it, one second later (2017-01-01 00:00:00). In the theoretical case when a negative leap second occurs, no ambiguity is caused, but instead there is a range of Unix time numbers that do not refer to any point in UTC time at all.

A Unix clock is often implemented with a different type of positive leap second handling associated with the Network Time Protocol (NTP). This yields a system that does not conform to the POSIX standard; see the section below concerning NTP for details.

When dealing with periods that do not encompass a UTC leap second, the difference between two Unix time numbers is equal to the duration in seconds of the period between the corresponding points in time. This is a common computational technique. However, where leap seconds occur, such calculations give the wrong answer. In applications where this level of accuracy is required, it is necessary to consult a table of leap seconds when dealing with Unix times, and it is often preferable to use a different time encoding that does not suffer from this problem.

A Unix time number is easily converted back into a UTC time by taking the quotient and modulus of the Unix time number, modulo 86400. The quotient is the number of days since the epoch, and the modulus is the number of seconds since midnight UTC on that day, as sketched below. If given a Unix time number that is ambiguous due to a positive leap second, this algorithm interprets it as the time just after midnight. It never generates a time that is during a leap second. If given a Unix time number that is invalid due to a negative leap second, it generates an equally invalid UTC time. If these conditions are significant, it is necessary to consult a table of leap seconds to detect them.
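A minimal sketch of the quotient-and-modulus decoding just described; floor division is used so that times before the epoch also decode correctly.

<syntaxhighlight lang="c">
#include <stdio.h>
#include <stdint.h>

/* Split a Unix time number into days since the epoch and seconds
 * since midnight UTC, as described above. */
static void decode_unix_time(int64_t t, int64_t *days, int64_t *secs)
{
    *days = t / 86400;
    *secs = t % 86400;
    if (*secs < 0) {   /* correct for C's truncation toward zero */
        *secs += 86400;
        *days -= 1;
    }
}

int main(void)
{
    int64_t days, secs;
    decode_unix_time(915148800, &days, &secs);  /* 1999-01-01T00:00:00Z */
    printf("%lld days since the epoch, %lld s past midnight\n",
           (long long)days, (long long)secs);
    return 0;
}
</syntaxhighlight>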
Non-synchronous Network Time Protocol-based variant

Commonly a Mills-style Unix clock is implemented with leap second handling that is not synchronous with the change of the Unix time number. The time number initially decreases where a leap should have occurred, and then it leaps to the correct time 1 second after the leap. This makes implementation easier, and is described by Mills' paper. Such a clock can be decoded properly by paying attention to the leap second state variable, which unambiguously indicates whether the leap has been performed yet; the state variable change is synchronous with the leap.

A similar situation arises with a negative leap second, where the second that is skipped is slightly too late. Very briefly the system shows a nominally impossible time number, but this can be detected by the TIME_DEL state and corrected.

In this type of system the Unix time number violates POSIX around both types of leap second. Collecting the leap second state variable along with the time number allows for unambiguous decoding, so the correct POSIX time number can be generated if desired, or the full UTC time can be stored in a more suitable format.

The decoding logic required to cope with this style of Unix clock would also correctly decode a hypothetical POSIX-conforming clock using the same interface. This would be achieved by indicating the TIME_INS state during the entirety of an inserted leap second, then indicating TIME_WAIT during the entirety of the following second while repeating the seconds count. This requires synchronous leap second handling, and is probably the best way to express UTC time in Unix clock form, via a Unix interface, when the underlying clock is fundamentally untroubled by leap seconds.
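On Linux, the leap second state variable is exposed through the kernel's NTP interface. The following sketch assumes a Linux/glibc system with ntp_adjtime; details of the timex interface vary by platform.

<syntaxhighlight lang="c">
#include <stdio.h>
#include <sys/timex.h>   /* Linux/glibc NTP kernel interface */

int main(void)
{
    struct timex tx = { .modes = 0 };   /* read-only query */
    int state = ntp_adjtime(&tx);       /* return value is the clock state */

    switch (state) {
    case TIME_INS:  puts("insertion of a leap second is pending"); break;
    case TIME_DEL:  puts("deletion of a leap second is pending");  break;
    case TIME_OOP:  puts("a leap second is in progress");          break;
    case TIME_WAIT: puts("a leap second has just occurred");       break;
    case TIME_OK:   puts("no leap second pending");                break;
    default:        puts("clock unsynchronized or error");         break;
    }
    return 0;
}
</syntaxhighlight>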
Variant that counts leap seconds

Another, much rarer, non-conforming variant of Unix time keeping involves incrementing the value for all seconds, including leap seconds; some Linux systems are configured this way. Time kept in this fashion is sometimes referred to as "TAI" (although time stamps can be converted to UTC if the value corresponds to a time when the difference between TAI and UTC is known), as opposed to "UTC" (although not all UTC time values have a unique reference in systems that do not count leap seconds).

Because TAI has no leap seconds, and every TAI day is exactly 86400 seconds long, this encoding is actually a pure linear count of seconds elapsed since 1970-01-01T00:00:10 TAI. This makes time interval arithmetic much easier. Time values from these systems do not suffer the ambiguity that strictly conforming POSIX systems or NTP-driven systems have.

In these systems it is necessary to consult a table of leap seconds to correctly convert between UTC and the pseudo-Unix-time representation; this resembles the manner in which time zone tables must be consulted to convert to and from civil time. A minimal sketch of such a table lookup appears below.
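The following is a sketch only: the leap table is abbreviated to three entries for illustration, and the constant 10-second TAI−UTC offset in force from 1972 is ignored. It converts a POSIX time number to the leap-second-counting variant.

<syntaxhighlight lang="c">
#include <stdio.h>
#include <stdint.h>

/* POSIX Unix times at which recent positive leap seconds took effect.
 * A real implementation would carry the full, regularly updated table. */
static const int64_t leap_table[] = {
    1341100800,   /* 2012-07-01 */
    1435708800,   /* 2015-07-01 */
    1483228800,   /* 2017-01-01 */
};

/* Add one second for every tabulated leap second at or before t. */
static int64_t posix_to_leap_counting(int64_t t)
{
    int64_t extra = 0;
    for (size_t i = 0; i < sizeof leap_table / sizeof leap_table[0]; i++)
        if (t >= leap_table[i])
            extra++;
    return t + extra;
}

int main(void)
{
    /* 2017-01-01T00:00:00Z: three tabulated leap seconds precede it. */
    printf("%lld\n", (long long)posix_to_leap_counting(1483228800));
    return 0;
}
</syntaxhighlight>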
Representing the number

A Unix time number can be represented in any form capable of representing numbers. In some applications the number is simply represented textually as a string of decimal digits, raising only trivial additional problems. However, certain binary representations of Unix times are particularly significant.

The Unix time_t data type that represents a point in time is, on many platforms, a signed integer, traditionally of 32 bits, directly encoding the Unix time number as described in the preceding section. On many modern platforms, time_t has been widened to 64 bits. This expands the times representable by approximately 292 billion years in both directions, which is over twenty times the present age of the universe per direction.
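That figure can be checked with a quick calculation (using 365.25-day years):

<math>\frac{2^{63}\,\mathrm{s}}{86\,400 \times 365.25\,\mathrm{s/year}} \approx 2.92 \times 10^{11}\ \mathrm{years} \approx 292\ \text{billion years}</math>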
There was originally some controversy over whether the Unix time_t should be signed or unsigned. If unsigned, its range in the future would be doubled, postponing the 32-bit overflow by 68 years. However, it would then be incapable of representing times prior to the epoch. The consensus is for time_t to be signed, and this is the usual practice. The software development platform for version 6 of the QNX operating system has an unsigned 32-bit time_t, though older releases used a signed type.
The POSIX and Open Group Unix specifications include the C standard library, which includes the time types and functions defined in the <time.h> header file. The ISO C standard states that time_t must be an arithmetic type, but does not mandate any specific type or encoding for it. POSIX requires time_t to be an integer type, but does not mandate that it be signed or unsigned.
Unix has no tradition of directly representing non-integer Unix time numbers as binary fractions. Instead, times with sub-second precision are represented using composite data types that consist of two integers, the first being a time_t (the integral part of the Unix time), and the second being the fractional part of the time number in millionths (in struct timeval) or billionths (in struct timespec). These structures provide a decimal-based fixed-point data format, which is useful for some applications, and trivial to convert for others.
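For example, the POSIX clock_gettime function fills a struct timespec with this seconds/nanoseconds pair; a minimal sketch:

<syntaxhighlight lang="c">
#include <stdio.h>
#include <time.h>

int main(void)
{
    /* struct timespec pairs a time_t (whole seconds of Unix time)
     * with a long count of nanoseconds within that second. */
    struct timespec ts;
    if (clock_gettime(CLOCK_REALTIME, &ts) != 0) {
        perror("clock_gettime");
        return 1;
    }
    printf("Unix time: %lld.%09ld\n", (long long)ts.tv_sec, ts.tv_nsec);
    return 0;
}
</syntaxhighlight>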
UTC basis
The present form of UTC, with leap seconds, is defined only starting from 1 January 1972. Prior to that, since 1 January 1961 there was an older form of UTC in which not only were there occasional time steps, which were by non-integer numbers of seconds, but also the UTC second was slightly longer than the SI second, and periodically changed to continuously approximate the Earth's rotation. Prior to 1961 there was no UTC, and prior to 1958 there was no widespread atomic timekeeping; in these eras, some approximation of GMT (based directly on the Earth's rotation) was used instead of an atomic timescale.

The precise definition of Unix time as an encoding of UTC is only uncontroversial when applied to the present form of UTC. The fact that the Unix epoch predates the start of this form of UTC does not affect its use in this era: the number of days from 1 January 1970 (the Unix epoch) to 1 January 1972 (the start of UTC) is not in question, and the number of days is all that is significant to Unix time.

The meaning of Unix time values below 63072000 (i.e., prior to 1 January 1972) is not precisely defined. The basis of such Unix times is best understood to be an unspecified approximation of UTC. Computers of that era rarely had clocks set sufficiently accurately to provide meaningful sub-second timestamps in any case. Unix time is not a suitable way to represent times prior to 1972 in applications requiring sub-second precision; such applications must, at least, define which form of UT or GMT they use.

The possibility of ending the use of leap seconds in civil time is being considered. A likely means to execute this change is to define a new time scale, called ''International Time'', that initially matches UTC but thereafter has no leap seconds, thus remaining at a constant offset from TAI. If this happens, it is likely that Unix time will be prospectively defined in terms of this new time scale, instead of UTC. Uncertainty about whether this will occur makes prospective Unix time no less predictable than it already is: if UTC were simply to have no further leap seconds, the result would be the same.
History

The earliest versions of Unix time had a 32-bit integer incrementing at a rate of 60 Hz, which was the rate of the system clock on the hardware of the early Unix systems. The value 60 Hz still appears in some software interfaces as a result. As late as November 1971, Unix was still counting time in 60ths of a second since an epoch of 1 January 1971, a year later than the epoch used today. At that rate, the timestamp could only represent a range of about 2.5 years, forcing the epoch and the units counted to be redefined. Early definitions of Unix time also lacked timezones. The current epoch of 1 January 1970 00:00:00 UTC was selected arbitrarily by Unix engineers because it was considered a convenient date to work with. The precision was changed to count in seconds in order to avoid short-term overflow.

When POSIX.1 was written, the question arose of how to precisely define time_t in the face of leap seconds. The POSIX committee considered whether Unix time should remain, as intended, a linear count of seconds since the epoch, at the expense of complexity in conversions with civil time, or a representation of civil time, at the expense of inconsistency around leap seconds. Computer clocks of the era were not sufficiently precisely set to form a precedent one way or the other.

The POSIX committee was swayed by arguments against complexity in the library functions, and firmly defined the Unix time in a simple manner in terms of the elements of UTC time. This definition was so simple that it did not even encompass the entire leap year rule of the Gregorian calendar, and would make 2100 a leap year.
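The sketch below transcribes the current edition of the POSIX "Seconds Since the Epoch" formula; the last two terms, which complete the Gregorian century rule, were absent from early editions, which is the simplification described above.

<syntaxhighlight lang="c">
#include <stdio.h>

/* Seconds since the epoch computed from broken-down UTC elements.
 * tm_year counts years since 1900, tm_yday days since 1 January.
 * The original, simpler definition stopped after the divide-by-four
 * term and so treated 2100 as a leap year. */
static long long posix_seconds(int tm_year, int tm_yday,
                               int tm_hour, int tm_min, int tm_sec)
{
    return tm_sec + tm_min * 60LL + tm_hour * 3600LL
         + tm_yday * 86400LL + (tm_year - 70) * 31536000LL
         + ((tm_year - 69) / 4) * 86400LL
         - ((tm_year - 1) / 100) * 86400LL
         + ((tm_year + 299) / 400) * 86400LL;
}

int main(void)
{
    /* 2004-09-17T00:00:00Z: tm_year = 104, tm_yday = 260. */
    printf("%lld\n", posix_seconds(104, 260, 0, 0, 0));  /* 1095379200 */
    return 0;
}
</syntaxhighlight>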
Limitations

Unix time was designed to encode calendar dates and times in a compact manner intended for use by computers internally. It is not intended to be easily read by humans or to store timezone-dependent values. It is also limited by default to representing time in seconds, making it unsuited for use when a more precise measurement of time is needed, such as when measuring the execution time of programs.
Range of representable times

Unix time by design does not require a specific size for the storage, but most common implementations of Unix time use a signed integer with the word size of the underlying computer. As the majority of modern computers are 32-bit or 64-bit, and a large number of programs are still written in 32-bit compatibility mode, this means that many systems using Unix time are using signed 32-bit integer fields. The maximum value of a signed 32-bit integer is 2^31 − 1, and the minimum value is −2^31, making it impossible to represent dates before December 13, 1901 (at 20:45:52 UTC) or after January 19, 2038 (at 03:14:07 UTC). The early cutoff can have an impact on databases that are storing historical information; in some databases where 32-bit Unix time is used for timestamps, it may be necessary to store time in a different form of field, such as a string, to represent dates before 1901. The late cutoff is known as the Year 2038 problem and has the potential to cause issues as the date approaches, as dates beyond the 2038 cutoff would wrap back around to the start of the representable range in 1901. These limits can be demonstrated directly, as in the sketch below. 64-bit storage of Unix time is generally assumed not to have issues with date range limitations, as the effective range of dates representable with Unix time stored in a signed 64-bit integer is over 584 billion years, or around 42 times the current estimated age of the universe.
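A short demonstration of those limits, assuming a platform whose time_t is 64 bits and whose gmtime accepts negative values:

<syntaxhighlight lang="c">
#include <stdio.h>
#include <stdint.h>
#include <time.h>

int main(void)
{
    /* The extreme values of a signed 32-bit counter, interpreted as
     * Unix time, give exactly the cutoff dates quoted above. */
    int32_t limits[] = { INT32_MIN, INT32_MAX };
    for (int i = 0; i < 2; i++) {
        time_t t = limits[i];
        char buf[32];
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", gmtime(&t));
        printf("%11d -> %s\n", (int)limits[i], buf);
    }
    return 0;
}
</syntaxhighlight>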
Alternatives

Within Unix operating systems, alternative methods for storing and presenting time exist, including struct timeval, which stores microseconds as well as seconds. Unix time is also not the only standard for time that counts away from an epoch. The Active Directory Time Service is used primarily on Windows networks.
Notable events in Unix time

Unix enthusiasts have a history of holding "time_t parties" (pronounced "time tea parties") to celebrate significant values of the Unix time number. These are directly analogous to the new year celebrations that occur at the change of year in many calendars. Usually it is round numbers in decimal that are celebrated, following the Unix convention of viewing time_t values in decimal. Among some groups round binary numbers are also celebrated, such as +2^30, which occurred at 13:37:04 UTC on Saturday, 10 January 2004.
The events that these celebrate are typically described as "''N'' seconds since the Unix epoch", but this is inaccurate; as discussed above, due to the handling of leap seconds in Unix time the number of seconds elapsed since the Unix epoch is slightly greater than the Unix time number for times later than the epoch.
* At 18:36:57 UTC on Wednesday, 17 October 1973, the first appearance of the date in ISO 8601 format (1973-10-17) within the digits of Unix time (119731017) took place.
In popular culture

Vernor Vinge's novel ''A Deepness in the Sky'' describes a spacefaring trading civilization thousands of years in the future that still uses the Unix epoch. The "programmer-archaeologist" responsible for finding and maintaining usable code in mature computer systems first believes that the epoch refers to the time when man first walked on the Moon, but then realizes that it is "the 0-second of one of humankind's first computer operating systems".