Last time, we started our quest to understand the art of time-keeping by first trying to understand what years and dates are. Today, we shall take a look at time itself, and understand how we might divide up a day.

For most of us, the basic unit of time is the second, ticking constantly. However, the reality is far more complicated than that. The question “what is a second?” has more than one correct answer, and it all depends on which time standard (i.e. clock) you are using.

In this post, we shall cover the most common time standards:

  1. Uniform Time.
  2. Solar Time.
  3. Universal Time.
  4. Reconciling the difference between uniform time and solar time.

On the way, we shall also examine what a day actually is.

Uniform Time

Out of all time standards, perhaps the easiest to understand is uniform time, commonly realized with high-precision atomic clocks. While the concept may sound daunting, it is mostly free of the nuances that the other time standards have.

With uniform time, each second is based on the SI second, officially defined as the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the unperturbed ground state of the caesium-133 (¹³³Cs) atom. The clock simply ticks at that constant rate, and that’s all. We use atomic clocks as the basis of uniform time because they are the highest-precision clocks we have available.

The concepts of minutes, hours, and days exist in uniform time, but more as a convenience than anything else. They have no particular astronomical meaning, instead following a very simple definition: every minute is exactly 60 SI seconds, every hour is exactly 3600 SI seconds (60 minutes), and every day is exactly 86,400 SI seconds (24 hours). Dates from both the Julian and Gregorian calendars can be used, but that’s just a convention carried over from other, non-uniform time standards that we shall discuss later.
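Because these units are defined by pure multiplication, converting between them is simple integer arithmetic. A minimal sketch in Python (the function name is my own):

```python
# With uniform time, days/hours/minutes are pure arithmetic on SI seconds:
# 86400 s per day, 3600 s per hour, 60 s per minute.
def split_seconds(total: int) -> tuple[int, int, int, int]:
    """Split a count of SI seconds into (days, hours, minutes, seconds)."""
    days, rem = divmod(total, 86_400)
    hours, rem = divmod(rem, 3_600)
    minutes, seconds = divmod(rem, 60)
    return days, hours, minutes, seconds

print(split_seconds(90_061))  # (1, 1, 1, 1): one day, hour, minute, second
```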

However, this simple picture of uniform time is marred by time dilation. This includes both time differences caused by relative velocity from special relativity (“kinetic” time dilation) and by differences in gravitational potential from general relativity (gravitational time dilation). We live on a planet that rotates and hurtles through space, and we experience different gravitational potentials depending on our altitude. Naturally, both special and general relativistic time dilation affect our perception of time.

Time dilation gives rise to many different possible coordinate time scales, representing time as viewed from different locations in space, subject to different motion and gravitational potential. The coordinate time of a certain location is equivalent to the proper time[1] experienced by a clock fixed at that location.

Still confused? Perhaps we should use the famous twin paradox thought experiment as an example. In this scenario, there are two identical twins A and B on Earth. B leaves on a high-speed rocket and returns home to find that A has aged more. I will not go into the details of why here. With a coordinate time C in the Earth’s frame of reference, a fixed amount of time has elapsed during the journey. However, the twins physically experienced different proper times, which is why they ended up with different ages. Twin A was always on Earth[2], so his proper time is identical to the coordinate time C, which is defined in the same frame of reference.

The most commonly used coordinate time scales are:

  1. Geocentric Coordinate Time (TCG, from French temps-coordonnée géocentrique) — the proper time experienced by a clock that is in a coordinate frame co-moving with the centre of the Earth (and whose time, therefore, does not dilate kinetically with respect to it) but outside of the Earth’s gravity well. Since it does not rotate with the surface of the Earth and is not in its gravity well, the clock ticks faster than those on the surface by around 22 ms per year (a factor of 7.0 × 10⁻¹⁰).
  2. Barycentric Coordinate Time (TCB, from French temps-coordonnée barycentrique) — the proper time experienced by a clock co-moving with the centre of mass of the solar system but outside of its gravity well. This is similar to TCG except it is meant for calculations involving objects in the solar system instead of just around the Earth. This clock ticks faster than those on the Earth’s surface by around 490 ms per year (a factor of 1.55 × 10⁻⁸).
  3. Terrestrial Time (TT[3]) — the proper time experienced by a clock that’s on the Earth’s geoid. That is, on the idealized shape that the Earth’s oceans would take under the Earth’s gravity, excluding other ephemeral influences like wind or tides. This is defined by scaling TCG down by a constant factor L_g, currently defined as 6.969 290 134 × 10⁻¹⁰. More precisely, TT = (1 − L_g) × TCG + E, where E is a constant that resolves the epoch difference.
  4. International Atomic Time (TAI, from French temps atomique international) — perhaps the most commonly used uniform time standard. Like Terrestrial Time, it is based on the passage of proper time on the Earth’s geoid. The difference is that TT is an idealized time scale, of which TAI (with a fixed offset) is the primary realization. For historical reasons, TAI is 32.184 seconds behind TT.

    TAI is a weighted average of time kept by over 400 atomic clocks in over 50 countries. Due to the averaging, it’s an order of magnitude more stable than the best atomic clock. However, it is still possible for there to be errors. For consistency, the published time is never revised and errors remain uncorrected.

The first three time standards reflect the coordinate time at three important locations in the solar system. 1977-01-01T00:00:32.184 in all three is defined to be the exact same instant as TAI 1977-01-01T00:00:00.000. This is the moment where TAI started to include corrections for gravitational time dilation.

  5. Barycentric Dynamical Time (TDB, from French temps dynamique barycentrique) — a linear scaling of TCB such that, when observed from the Earth’s surface, the difference from TT is as small as possible without making the definition inconsistent. This difference will remain less than 2 ms for at least a few millennia. Note that this 2 ms is an absolute difference, unlike TCG and TCB, whose differences of 22 and 490 ms per year respectively accumulate year after year, drifting ever further from TT.

Out of these five time standards, there are really only three distinct tick rates, based on the reference frame used. TT, TAI, and TDB all use the reference frame of the surface of the Earth, and as such, they essentially tick at the same rate. In lower-precision calculations, they can be conflated (as long as you account for the fact that TAI is 32.184 s behind the other two).

TCG and TCB tick differently from the other three and also each other. This poses some problems with a lot of time-related constants, e.g. the orbital period of planets, since we must use different values for these time standards. This is the motivation behind TDB, as it allows for calculations to use the values measured on the surface of the Earth.
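To make the rates concrete, here is a small Python sketch using the defining constant L_g quoted earlier; the helper name is mine, and the epoch offset E is ignored (intervals only):

```python
# Sketch: TT ticks slower than TCG by the defining constant L_g
# (epoch offset E omitted; this converts elapsed intervals only).
L_G = 6.969290134e-10

def tcg_interval_to_tt(dt_tcg: float) -> float:
    """Convert an elapsed interval measured in TCG seconds to TT seconds."""
    return (1.0 - L_G) * dt_tcg

julian_year = 365.25 * 86_400  # seconds in one Julian year
drift_ms = (julian_year - tcg_interval_to_tt(julian_year)) * 1000
print(f"TCG gains ~{drift_ms:.1f} ms per year on TT")  # ~22.0 ms, as quoted
```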

TDB is of particular interest here, since it is the time standard used by JPL’s DE440 and DE441 ephemerides (tables documenting the motion of celestial objects, singular ephemeris). These are some of the best ephemerides that are readily available, and I used them in the calculations for my interactive French Republican Calendar.

What is a Day?

[Diagram: sidereal vs. solar day. Sourced from https://commons.wikimedia.org/wiki/File:Sidereal_day_(prograde).svg, used under CC BY-SA 3.0]

Before we can continue, we need to take a step back and examine what a day is supposed to measure. This might sound like a trivial question and you may be tempted to blurt that a day is the time it takes for the Earth to rotate once about its axis, but that would be wrong… well, sort of.

You see, while the Earth rotates about its axis, it also revolves around the sun. As you can see in the diagram, after one full 360° rotation, the sun does not end up at the same position in the sky. This means that from one noon to the next, the Earth has to rotate slightly more than 360°. Complicating things further, the Earth’s axis of rotation itself rotates around a secondary axis, causing the precession of the equinoxes, which we will get into in the next part.

Naturally, this complexity motivates a few different definitions of the word day:

  1. The stellar day — the rotation period of the Earth relative to the International Celestial Reference Frame (ICRF), defined by distant quasars. In essence, this is the time it takes the Earth to rotate once so that distant stars pass over the same meridian (line in the sky directly above a line of longitude), i.e. exactly 360° rotation about the axis of rotation. This is around 86164 seconds (23 hours, 56 minutes, 4 seconds). This is also the time it takes for the Earth to move from point 1 to 2 in the diagram.
  2. The sidereal day — the rotation period relative to the precessing March equinox[4]. We usually imagine the equinox as a moment in time, but in astronomy, it is also the point where the celestial equator (the line directly above the Earth’s equator) intersects the ecliptic (the plane of the Earth’s orbit). We will go into more detail in the next part. In essence, this definition includes the effect of the precession of the axis itself. As a result, the sidereal day is around 8.4 ms shorter than the stellar day.
  3. The synodic day (or solar day) is the time it takes for the sun to pass over the same meridian. This is the time it takes for the Earth to go from point 1 to point 3 in the diagram. This is around 3 minutes 56 seconds longer than both the stellar and sidereal days, making it almost exactly 24 hours.

You’ll notice that after the Earth revolves once around the sun, exactly one more sidereal day has passed than synodic days.[5]
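A quick back-of-the-envelope check of this relation: if a year contains about 365.2422 synodic days, then the Earth must complete about 366.2422 rotations relative to the stars in that same span, which fixes the ratio of the two day lengths. A Python sketch:

```python
# Back-of-the-envelope: one more sidereal day than synodic days per year.
synodic_day = 86_400.0     # mean solar day, in SI seconds
days_per_year = 365.2422   # approximate synodic days in a (tropical) year

# The same span of time contains days_per_year + 1 sidereal days, so:
sidereal_day = synodic_day * days_per_year / (days_per_year + 1)
print(f"sidereal day ≈ {sidereal_day:.1f} s")  # ≈ 86164.1 s, i.e. 23 h 56 min 4 s
```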

Solar Time

Finally, armed with the understanding of what a day truly is, we can take a look at solar time. Of course, nothing is ever so simple, and there are multiple ways of measuring it:

  1. Apparent solar time (also called true solar time) — this is based on the true sun as seen by an observer on the Earth. A day under apparent solar time is the length of a synodic day. By convention, we partition the synodic day into 24 hours, each hour into 60 minutes, and each minute into 60 seconds. The moment when the sun crosses the meridian is defined as noon, or 12:00:00. This is the time that would be measured by a sundial.

    Note that the length of the solar day varies over the year in accordance with the seasons, and the accumulated effects of this variation can produce deviations of up to about 16 minutes from the mean. This is somewhat inconvenient, as the length of a day (and of its hours, minutes, and seconds) varies significantly throughout the year.

  2. Mean solar time is an attempt to average out the variations in apparent solar time, producing a more uniform time scale in which each day has a nearly constant length. By convention, every day starts at midnight, the point midway between two successive noons.

Solar time is inherently location-specific. While places on the same line of longitude observe the same noon (the sun crosses the same meridian for all of them), and thus the same time, any location at a different longitude will see a different time. This makes these scales difficult to use on a global scale. To counter this, we introduce…

Universal Time

To standardize solar time, we must select a specific meridian to use globally. This is called the prime meridian. Currently, the international standard is the IERS Reference Meridian (IRM), which was derived from the Greenwich meridian, the line of longitude crossing the Royal Observatory in Greenwich, England. This produced what was once called Greenwich Mean Time (GMT), but this has been succeeded by the term Universal Time.

Naturally, nothing is simple, and there is no one version of Universal Time. However, the only version based on solar time that actually matters is UT1. Conceptually, this is the mean solar time at 0° longitude, but in reality, it is defined using the rotation angle of the Earth with respect to the ICRF we talked about earlier, based on the measurement of quasars. The time is the same everywhere on Earth.

The mean solar time at any other location can be derived as an offset from UT1. For a given longitude λ (where λ > 0 to the east and λ < 0 to the west), the UT1 offset of its mean solar time is:

MST_λ − UT1 = λ × (1 h / 15°)

For example, in Toronto, Canada, at longitude 79° 23′ 13.3944″ W[6], λ = −79.387054°. Multiplying this by 1 h/15° yields −5.29247 h, which is 5 hours, 17 minutes, and 33 seconds behind UT1. Note that we typically round this to the nearest hour, which is the basis of timezones. Under this scheme, the local time in Toronto would be 5 hours behind Universal Time, which is indeed the standard (non-DST) timezone that Toronto uses.
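The conversion above is easy to script. A sketch in Python, using the same CN Tower longitude (the function name is my own):

```python
# Mean solar time offset from UT1: 15° of longitude per hour, east positive.
def mean_solar_offset_hours(longitude_deg: float) -> float:
    return longitude_deg / 15.0

lon = -(79 + 23/60 + 13.3944/3600)   # 79° 23′ 13.3944″ W, as decimal degrees
offset = mean_solar_offset_hours(lon)

h = int(abs(offset))
m = int(abs(offset) * 60) % 60
s = round(abs(offset) * 3600) % 60
print(f"offset = {offset:.5f} h ({h} h {m} min {s} s behind UT1)")
# offset = -5.29247 h (5 h 17 min 33 s behind UT1)
```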

Reconciling Uniform and Solar Time

It is important to note that seconds in solar time and Universal Time are not the same as the SI second based on atomic clocks. The former is based on partitioning the average length of the synodic day, while the latter is based on the fundamental properties of atoms.

As such, UT is not synchronized with TT (and by extension, TAI and TDB) and drifts apart from it. We refer to the difference between TT and UT as ΔT. This value was around 0 at the start of the year 1902, but by 2022 it had grown to around 69 seconds. Note that this value varies greatly over time due to variations in the Earth’s rotation, caused by things like the tidal forces from the moon and the post-glacial rebound of the Earth’s surface. Past values can be inferred from historical records, which are not of very high precision. We do know that in the year −500, ΔT was around 17,000 seconds, which is almost 5 hours, and that the value steadily decreased until around 1900, at which point it started increasing again. Due to the cumulative nature of ΔT, any slight uncertainty in the Earth’s rotation is greatly amplified. We simply do not know the value of ΔT in the distant past or future to great precision, with the standard deviation reaching hours once we get a few millennia from the present.

There is one thing we know for sure — atomic clocks are much more precise than measurements of the Earth’s rotation. Therefore, it is often useful to measure civil time using atomic clocks while still keeping it synchronized to Universal Time, and ultimately to the sun. For this purpose, we introduced Coordinated Universal Time (UTC[7]), the time standard most often used in daily life.

In UTC, a second is identical to a TAI second, but seconds are occasionally inserted or deleted to keep UTC within 0.9 seconds of UT1. This is the source of the dreaded leap seconds; they are fundamental to the definition of UTC and can’t be removed without creating a new time standard.

For historical reasons, when the current definition of UTC was adopted in 1972, the UTC time 1972-01-01T00:00:00 was exactly 1972-01-01T00:00:10 TAI, i.e. 10 seconds behind TAI. Since then, 27 leap seconds have been inserted, and the difference has grown to 37 seconds.

The UTC standard allows leap seconds to be added to the end of any month, with the first preference being June and December, and the second preference being March and September. When a leap second is inserted, the time 23:59:60 is inserted after 23:59:59 on the last day of the month. A negative leap second would instead remove 23:59:59, causing 23:59:58 to be followed by 00:00:00 of the next day.
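The insertion and removal rules can be illustrated with a toy sketch (string labels only; real systems consult the leap-second tables published by the IERS):

```python
# Toy sketch of the final 23:59 minute of a month with a leap second.
# leap = +1 inserts 23:59:60; leap = -1 removes 23:59:59; leap = 0 is normal.
def seconds_in_final_minute(leap: int) -> list[str]:
    """Labels of the seconds in the last minute of the day."""
    return [f"23:59:{s:02d}" for s in range(60 + leap)]

print(seconds_in_final_minute(+1)[-1])  # 23:59:60, the inserted leap second
print(seconds_in_final_minute(-1)[-1])  # 23:59:58, since 23:59:59 was removed
```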

The quantity DUT1 is defined as UT1 − UTC. By the definition of UTC, this quantity must be kept within the range [−0.9 s, 0.9 s] via leap seconds. The International Earth Rotation and Reference Systems Service (IERS) is responsible for scheduling leap seconds. Typically, they schedule one when |DUT1| approaches 0.6 s, to ensure that |DUT1| never exceeds 0.9 s.
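Chaining the offsets in this section together gives a handy approximation of ΔT from published quantities. A sketch, with an illustrative DUT1 value (the real value varies continuously and is published by the IERS):

```python
# ΔT = TT − UT1 = (TT − TAI) + (TAI − UTC) + (UTC − UT1)
TT_MINUS_TAI = 32.184    # seconds, fixed by definition
TAI_MINUS_UTC = 37.0     # seconds: 10 s at the 1972 epoch + 27 leap seconds
DUT1 = -0.1              # UT1 − UTC in seconds (illustrative value only)

delta_t = TT_MINUS_TAI + TAI_MINUS_UTC - DUT1
print(f"ΔT ≈ {delta_t:.3f} s")  # close to the ~69 s quoted for 2022
```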

Now that you understand exactly how time standards work, we’ll next look at what equinoxes and solstices are, and we will then be able to finally understand how to calculate the French Republican Calendar using the original equinox method.

Notes

  1. Here, the word proper means “own” rather than “suitable” or “accurate”. Compare the French propre, Latin proprius, and their derivative property.

  2. For simplicity, we assume that twin A was at rest during twin B’s entire journey.

  3. It’s temps terrestre in French. For all I know, the abbreviation could be from the French and just appear identical in English.

  4. In literature, this is often referred to as the vernal equinox. The word vernal, however, means “related to spring”, making this definition only correct in the northern hemisphere. The March equinox is the vernal equinox in the northern hemisphere and the autumnal equinox in the southern hemisphere. 

  5. Strictly speaking, this refers to the tropical year — the time between one March equinox and the next. We’ll dive into that in the next part… 

  6. There is no commonly agreed coordinate for Toronto given the size of the city, so I have arbitrarily chosen the longitude of CN Tower. 

  7. UTC is supposed to be a compromise abbreviation between the English Coordinated Universal Time and the French temps universel coordonné. This also keeps it in line with the other forms of universal time, all of which start with UT.