
Network-focused analysis of the Windows Time Service

Due to some recent posts on the comp.protocols.time.ntp newsgroup, I took it upon myself to investigate the behavior of the Windows Time Service a bit further using the Wireshark protocol analyzer.

  1. It appears that in Windows XP, 2003, and Vista, the Windows Time Service (w32time) will by default always try to form a "symmetric active" association with configured NTP servers. This can be problematic with some time servers, violates the published RFC-1305 specification, and is not necessary. I could find no explanation on Microsoft's site for this behavior; I suspect it has something to do with interoperability with older Windows 2000 domain controllers that had very broken NTP.

    However, there is a simple workaround. You can simply add ",0x8" to the end of any configured time server, and Windows will only use a client-mode association. For example, the command:
    w32tm /config /manualpeerlist:"0.pool.ntp.org,0x8 1.pool.ntp.org,0x8 2.pool.ntp.org,0x8" /syncfromflags:MANUAL /update

    will configure your Windows machine to form client-mode associations with three different NTP Pool servers.
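If you want to verify which association mode your machine is actually using on the wire, a packet capture shows it directly. Here is a sketch using tshark, the command-line tool that ships with the Wireshark analyzer used for this post (assuming it captures on your default interface):

```shell
rem Sketch: print the destination and NTP mode field of outgoing NTP packets.
rem Mode 1 = symmetric active, mode 3 = client. With the ",0x8" flag set,
rem you should only see mode 3 requests leaving your machine.
tshark -f "udp dst port 123" -T fields -e ip.dst -e ntp.flags.mode
```

Stop the capture after a poll cycle or two; one line per outgoing request is enough to confirm the mode.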

  2. The minimum polling interval on all Windows machines except domain controllers is set to 1024 seconds by default. Windows domain controllers have a minimum poll interval of 64s.

    This is reasonable, as clients usually do not need extremely accurate time. However, quite a few servers that are not domain controllers do need to get accurate time offset and frequency synchronization quickly. You can configure "MinPollInterval" and "MaxPollInterval" through the registry or using Group Policy tools, as documented here.

    Important note: never set the minimum poll value to less than 6 (that is, 2^6 = 64 seconds). You won't get better time synchronization, and you will be abusing the servers you have configured. Many time server administrators have automated tools that block clients that poll too frequently.
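As a sketch, both values live under the W32Time service's Config registry key and are stored as poll exponents (REG_DWORD), so 6 means 2^6 = 64 seconds and 10 means 2^10 = 1024 seconds. Restart the service for the change to take effect:

```shell
rem Poll bounds are exponents of two: 6 -> 64 seconds, 10 -> 1024 seconds.
reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Config /v MinPollInterval /t REG_DWORD /d 6 /f
reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Config /v MaxPollInterval /t REG_DWORD /d 10 /f
net stop w32time && net start w32time
```

Group Policy writes the same values, so pick one mechanism and stick with it; a policy refresh will silently overwrite manual registry edits.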

  3. Windows Time Service does follow sensible rules for "backing off" the polling interval, and adjusting the interval to network conditions. In my testing, a Windows Server 2003 domain controller began polling at 64 seconds, and then backed off to one poll every 1024 seconds within about 30 minutes. This is the same behavior as the reference ntpd implementation.

    Also, in my tests, the Windows Time Service responded to unreachable servers sensibly, backing off the polling interval to 2^15 seconds. However, when a server first became unreachable, it shortened the polling interval in steps down to 2^4 seconds before reverting to 2^15 seconds. This rather strange pattern of poll exponents (15-9-9-8-7-6-5-4) continued until the server became reachable again. There have been quite a few problems in the past caused by NTP implementations that polled too frequently. Fortunately, the Windows Time Service should not cause problems in this area, as an unreachable server results in an average of one poll every 2^12 seconds (about once an hour).
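If you want to spot-check a server's reachability and offset by hand, without touching your configuration, w32tm has a one-shot probe mode. A sketch (the hostname is a placeholder for whichever server you have configured):

```shell
rem One-shot probe: query the server a few times and print the measured offset.
rem ntp.example.com is a placeholder, not a real time server.
w32tm /stripchart /computer:ntp.example.com /samples:5 /dataonly
```

If the server is unreachable you will see errors instead of offsets, which is a quicker diagnostic than waiting out the back-off cycle described above.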


matt said…
So, you enforce a polling interval on your DCs (and PDC emulator)? I also have a few clients that need to have very accurate time. I'm going to test having them sync directly to a level 3 internet NTP server.
RPM said…
By setting Windows Time Service to create client-mode NTP associations (basically put ",0x8" after the ntp server name in your configuration), Windows will adjust the polling interval automatically from 64 to 1024 seconds based on the needs of the clock filter. This is the same behavior as the reference ntpd.

Polling more frequently does not necessarily give you "better" time - longer poll intervals make the clock more stable. The reasons are complex, but there is a short synopsis here.

Finally, using a short poll interval (<60s) against other people's servers is considered rude, and some NTP implementations will simply start dropping your requests or send you a Kiss-of-Death packet if you poll too frequently.
