Microsecond Precision with the PTP and Hardware Clocks for Unikernels

In Nov 2023 Amazon announced microsecond precision with their PTP hardware clock. Within two weeks we had the first bit of support out the door.

But we're getting ahead of ourselves. PTP? PHC? Is this some kind of a new drug? What happened to NTP?

NTP is typically accurate to somewhere under ten milliseconds and is measured in microseconds. PTP, however, is accurate to under a microsecond and is measured in nanoseconds. PTP is also built to account for various forms of delay.

NTP clients also work a little differently from PTP. NTP clients request time from an NTP server, whereas a PTP grandmaster clock pushes the time out within its domain. We'll go over this in a second.

Then there is the fact that PTP utilizes hardware-based timestamping, whereas NTP relies more on software estimates.

When you need more precision you need to account for latency. There can be delays in propagation, queueing, and processing, among other things.

Propagation delay: This is the time it takes for the signal to actually move through the link - PREAM (physics rules everything around me).

Queueing delay: Packets get delayed when links get congested and buffering occurs.

Processing delay: More delay can be introduced by routing table lookups, error correction, and frame switching.
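To get a feel for how big propagation delay alone can be, here's a quick back-of-the-envelope sketch. It assumes signals in fiber travel at roughly two-thirds the speed of light in a vacuum, which is a common approximation, not an exact figure:

```python
# Rough one-way propagation delay through optical fiber.
# Assumption: light in fiber moves at ~2/3 the vacuum speed of light.
C_VACUUM_M_PER_S = 299_792_458
FIBER_SPEED_M_PER_S = C_VACUUM_M_PER_S * 2 / 3

def propagation_delay_us(distance_m):
    """Return the one-way propagation delay in microseconds."""
    return distance_m / FIBER_SPEED_M_PER_S * 1e6

# A 100 km link costs roughly 500 microseconds one way - already far
# above the sub-microsecond precision PTP is aiming for, which is why
# the protocol has to measure and subtract these delays.
print(round(propagation_delay_us(100_000)))
```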

You can run PTP without specialized hardware, but if you have it, it'll work a lot better.

Use Cases

Ok, we get that PTP can be a lot more precise than NTP, but what are some of the use cases? Factories, telco base stations, high-frequency trading, and electric utilities all make use of it. While the cloud might not be involved in use cases like industrial factory floors (today), video and audio streaming definitely are. So is gaming. In audio and video streaming we typically want to sync streams.

When you send raw video and audio from different sources at different times you're going to want correct timestamps to sync (e.g. have all packets of a video frame carry the same timestamp). If you're using RTP there's a timestamp in each packet header. Another one of the big benefits, at least in the cloud, would be speeding up distributed transactions.
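To show where that timestamp actually lives, here's a minimal sketch that packs and unpacks the fixed 12-byte RTP header from RFC 3550. The sequence number, timestamp, SSRC, and payload type used below are made-up example values:

```python
import struct

def make_rtp_header(seq, timestamp, ssrc, payload_type=96, marker=0):
    """Pack a minimal 12-byte RTP fixed header (RFC 3550).

    Version 2, no padding, no extension, no CSRC entries.
    payload_type 96 is a common dynamic payload type; all values
    here are illustrative only.
    """
    byte0 = 2 << 6                        # version = 2
    byte1 = (marker << 7) | payload_type
    return struct.pack("!BBHII", byte0, byte1, seq, timestamp, ssrc)

def rtp_timestamp(header):
    """Extract the 32-bit media timestamp (bytes 4-7, network order)."""
    return struct.unpack("!I", header[4:8])[0]

# Every packet belonging to the same video frame carries the same
# timestamp, so the receiver can reassemble and sync the streams.
hdr = make_rtp_header(seq=1, timestamp=90_000, ssrc=0xDEADBEEF)
print(rtp_timestamp(hdr))
```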

How Does It Work?

First off, the clock is split up into multiple types of clocks:

  • Grandmaster clock (GMC)
  • Boundary clock (BC)
  • Transparent clock (TC)
  • Ordinary clock (OC)

The grandmaster clock uses GPS or an atomic clock. The grandmaster is the source of truth; it exists because it's not feasible to attach GPS or an atomic clock to every single computer that wants one. Boundary clocks are intermediaries that can talk to multiple VLANs for multiple PTP domains. Transparent clocks measure residence time, which is the time a PTP message needs to transit through a switch/router; they are like boundary clocks but can only talk on one VLAN. The general path this would take is from the grandmaster clock to a boundary clock to a transparent clock to the ordinary clock.

Typically there is a two-step sync and (of course) there are a few types: you can have an end-to-end transparent clock or a peer-to-peer one. That dance looks roughly like this, where each t is a specific point in time; t1 and t4 are on one clock and t2 and t3 are on the other:

CL1                            CL2
t1 --------- Sync -----------> t2
   ------- Follow_Up -------->
t4 <------- Delay_Req -------- t3
   ------- Delay_Resp ------->

As you can see, two messages are issued by CL1 (the two-step sync): the Sync itself, plus a follow-up carrying the precise transmit timestamp. Then a delay request comes back from CL2, answered with a delay response by CL1. The same happens for a peer-to-peer clock, however you'd have a request in between the grandmaster clock and the transparent clock to account for the residence time. Then you'd have the same between the transparent clock and the ordinary clock.

Using the example above, to calculate the offset and delay we have these simple formulas:

delay = ((t2 - t1) + (t4 - t3)) / 2
offset = ((t2 - t1) - (t4 - t3)) / 2

We can now use this to get a much more precise time.
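Here's a tiny sketch of those formulas in action, fed with made-up integer-nanosecond timestamps so the arithmetic is exact. The one-way path delay (40 ns) and clock offset (150 ns) are invented for the example:

```python
def ptp_delay_offset(t1, t2, t3, t4):
    """t1/t4 are taken on CL1's clock, t2/t3 on CL2's clock."""
    delay = ((t2 - t1) + (t4 - t3)) / 2
    offset = ((t2 - t1) - (t4 - t3)) / 2
    return delay, offset

# Simulate the exchange (all values in nanoseconds), assuming a
# symmetric 40 ns one-way path delay and CL2 running 150 ns ahead.
one_way, true_offset = 40, 150
t1 = 0
t2 = t1 + one_way + true_offset   # Sync arrives at CL2
t3 = t2 + 1_000                   # CL2 sends Delay_Req a bit later
t4 = t3 + one_way - true_offset   # Delay_Req arrives back at CL1

delay, offset = ptp_delay_offset(t1, t2, t3, t4)
print(delay, offset)  # recovers the assumed 40 ns delay, 150 ns offset
```

Note that the delay formula assumes the path is symmetric; asymmetric links are one of the error sources PTP deployments try to engineer away.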

We enhanced the AWS ENA driver to add support for retrieving the current time from a PTP hardware clock, when supported by the network interface.

Wait a minute. You added support to the network driver for a clock?

Yes. PTP can, and typically does, utilize hardware-based timestamping. Hardware timestamping provides more precise timestamps because it's dedicated and sits outside of the operating system. The intermediary clocks can live on switches and routers as well, calculating the residence-time delay they introduce.
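On a stock Linux host (not inside the unikernel), the PHC shows up as a character device like /dev/ptp0, and you can read it through the dynamic posix clock API. Here's a rough sketch; it assumes a /dev/ptp0 device exists, and the fd-to-clockid mapping is the one Linux defines for dynamic clocks:

```python
import ctypes
import os

CLOCKFD = 3  # clockid type for dynamic posix clocks (linux/posix-timers.h)

def fd_to_clockid(fd):
    # Linux's mapping from an open /dev/ptpN file descriptor to a
    # clockid_t usable with clock_gettime: ((~fd) << 3) | CLOCKFD
    return ((~fd) << 3) | CLOCKFD

class Timespec(ctypes.Structure):
    _fields_ = [("tv_sec", ctypes.c_long), ("tv_nsec", ctypes.c_long)]

def read_phc(path="/dev/ptp0"):
    """Read the PTP hardware clock exposed at `path` (Linux only)."""
    libc = ctypes.CDLL(None, use_errno=True)
    fd = os.open(path, os.O_RDONLY)
    try:
        ts = Timespec()
        if libc.clock_gettime(fd_to_clockid(fd), ctypes.byref(ts)) != 0:
            raise OSError(ctypes.get_errno(), "clock_gettime on PHC failed")
        return ts.tv_sec + ts.tv_nsec / 1e9
    finally:
        os.close(fd)
```

Inside Nanos the equivalent work happens in the ENA driver itself, which is why the clock support landed in a network driver.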

We also had to add support to our existing NVMe driver and for the Graviton3 processor used by the AWS R7g instance type.

Together with the NTP klib, this feature can be used to synchronize the system time with the grandmaster clock. We have had support for NTP through our NTP klib for a while now, and we added chrony-like support a while back too. If a PTP clock is not available, the klib falls back to NTP. These features can be used on selected AWS instances (currently the R7g (ARM Graviton3) instance family in the Asia Pacific (Tokyo) region). This instance family is built on top of the Nitro system.

If you'd like to try this out you can utilize this sample config:

  {
    "Klibs": ["ntp"],
    "CloudConfig": {
      "ProjectID": "my-project",
      "Zone": "ap-northeast-1a",
      "BucketName": "my-bucket",
      "Flavor": "r7g.medium"
    },
    "ManifestPassthrough": {
      "chrony": {"refclock": "ptp"}
    }
  }

What did the broken clock say to the watchmaker?
"I’m feeling ticked off!"

Deploy Your First Open Source Unikernel In Seconds

Get Started Now.