
Posts

Forcing a correct Subversion branch merge despite tree conflicts

Let's say your development team is following the common pattern for maintaining a "clean" releasable trunk in Subversion. All your development is done in "feature branches", which you merge into a "development integration" branch. From there code is merged into a QA branch for further testing, and is eventually merged into trunk and released.

So your bog-standard SVN repository would look something like this:

/trunk
/branches
    /DEVELOPMENT
    /QA
    /feature1
    /feature2
/tags
    /v1.0.0
    /v1.1.1
Now, what frequently happens in this situation is that issues are merged from /branches/DEVELOPMENT to /branches/QA out of order. That is, sometimes you have a bug fix that you want to move rapidly into production, so it gets plucked from /branches/DEVELOPMENT by itself, without merging some of the commits that came before it.
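As a rough sketch of what that looks like in practice (the branch paths follow the layout above, but the working-copy path and revision number are hypothetical), here is the cherry-pick merge and the later catch-up merge that tends to collide with it, driven from Python via the svn command line:

```python
import subprocess

def svn(*args):
    """Run an svn subcommand and fail loudly if it errors."""
    subprocess.run(["svn", *args], check=True)

# Hypothetical working copy of the QA branch.
qa_wc = "/path/to/qa-working-copy"

# Cherry-pick a single urgent fix (hypothetical revision r1234) from
# DEVELOPMENT into QA, skipping the commits that came before it.
svn("merge", "-c", "1234", "^/branches/DEVELOPMENT", qa_wc)
svn("commit", qa_wc, "-m", "Cherry-pick r1234 from DEVELOPMENT")

# Later, the full catch-up merge of everything else. This is where
# Subversion's merge tracking can stumble over the out-of-order pick
# and report tree conflicts.
svn("merge", "^/branches/DEVELOPMENT", qa_wc)
```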

This "cherry picking" of code can be problematic in almost any version control system (including Git), but in Subversion …
Recent posts

Google's public NTP servers?

I was struggling with finding a good set of low-ping NTP servers for use as upstream sources in the office. Using pool.ntp.org is great and all, but the rotating DNS entries aren't fabulous for Windows NTP clients (or really any NTP software except the reference ntpd implementation).

ntpd resolves a server hostname to an IP once at startup, and then sticks with that IP forever. Most other NTP clients honor DNS TTLs, and will follow the rotation of addresses returned by pool.ntp.org. This means a Windows NTP client using the built-in Windows Time Service will actually be trying to sync to a moving set of target servers when pointed at a pool.ntp.org source. Fine for most clients, but not great for servers trying to maintain stable timing for security and logging purposes.

I stumbled across this link referencing Google's NTP servers at the hostnames time[1-4].google.com. These servers support IPv4 and IPv6, and seem to be anycast just like Google's public DNS servers at 8.8.8.8. time…
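To see the difference for yourself, here is a minimal sketch (not a production NTP client; it assumes a Unix-ish Python 3 environment and uses the pool.ntp.org and time.google.com names mentioned above) that resolves the pool a couple of times and fires a single SNTP query at a fixed anycast name:

```python
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

def resolve(host):
    """Return the IPv4 addresses currently advertised for a hostname."""
    infos = socket.getaddrinfo(host, 123, socket.AF_INET, socket.SOCK_DGRAM)
    return sorted({info[4][0] for info in infos})

def sntp_query(server, timeout=2.0):
    """Send a minimal SNTPv3 client request; return the server's transmit time (Unix seconds)."""
    packet = b"\x1b" + 47 * b"\x00"  # LI=0, VN=3, Mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(packet, (server, 123))
        data, _ = sock.recvfrom(48)
    transmit = struct.unpack("!I", data[40:44])[0]  # seconds field of Transmit Timestamp
    return transmit - NTP_EPOCH_OFFSET

if __name__ == "__main__":
    # Lookups of the pool spaced past its short TTL tend to return
    # different address sets; a TTL-honouring client chases all of them.
    print("pool.ntp.org ->", resolve("pool.ntp.org"))
    # A fixed anycast name stays put from the client's point of view.
    print("time.google.com ->", time.ctime(sntp_query("time.google.com")))
```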

Windows Server 2012 storage = awesome sauce

We've been playing with the Windows Server 2012 release candidate on a new NAS system, and the combination of Storage Spaces and deduplication is impressive (see screenshot).

We copied a week's worth of database and disk-image backups from a few servers to a deduplication-enabled volume on the test system. This amounted to a total of 845 GiB of raw, uncompressed data files. After waiting a bit for the deduplication to kick in, we ended up with a 90% savings in space.
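Just to put that figure in concrete terms (nothing here beyond arithmetic on the numbers above), a 90% savings on 845 GiB works out to roughly a 10:1 reduction:

```python
raw_gib = 845          # uncompressed backup data copied to the volume
savings = 0.90         # space savings reported by the dedup engine

stored_gib = raw_gib * (1 - savings)
dedup_ratio = raw_gib / stored_gib

print(f"{raw_gib} GiB raw -> ~{stored_gib:.0f} GiB on disk "
      f"(~{dedup_ratio:.0f}:1 reduction)")
# 845 GiB raw -> ~84 GiB on disk (~10:1 reduction)
```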

This is the kind of result usually seen on purpose-built and reassuringly expensive deduplication appliances such as those from Data Domain.

The data copy process itself was also quite interesting. We configured twelve 2TB 7200 RPM drives into a Windows Storage Spaces pool, and set up a 5 TB NTFS volume on them in parity mode. Storage Spaces gives you much of the flexibility of something like ZFS or Drobo: you create a pool of raw disks, and can carve it up into thin-provisioned volumes with dif…

Fixing slow NFS performance between VMware and Windows 2008 R2

I've seen hundreds of reports of slow NFS performance between VMware ESX/ESXi and Windows Server 2008 (with or without R2) out there on the internet, mixed in with a few reports of it performing fabulously.
We use the storage on our big Windows file servers periodically for one-off dev/test VMware virtual machines, and have been struggling with this quite a bit recently. It used to be fast. Now it was very slow, like less than 3 MB/s for a copy of a VMDK. It made no sense.
We chased a lot of ideas. Started with the Windows and VMware logs of course, but nothing significant showed up. The Windows Server performance counters showed low CPU utilization and queue depth, low disk queue depth, less than 1 ms average IO service time, and a paltry 30 Mbps network utilization on bonded GbE links.
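For a sense of scale (nothing here but unit conversion), that 30 Mbps figure lines up with the sub-3 MB/s copy rate and is a small fraction of even a single unbonded GbE link, so the wire itself clearly had plenty of headroom:

```python
def mbps_to_mbytes(mbps):
    """Convert a line rate in megabits/s to megabytes/s."""
    return mbps / 8

observed_mbps = 30   # what the NIC counters showed during the copy
gbe_mbps = 1000      # a single GbE link, before any bonding

print(f"Observed: ~{mbps_to_mbytes(observed_mbps):.1f} MB/s on the wire")  # ~3.8 MB/s
print(f"One GbE link could carry ~{mbps_to_mbytes(gbe_mbps):.0f} MB/s")    # ~125 MB/s
```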
So where was the bottleneck? I ran across this Microsoft article about slow NFS performance when user name mapping wasn't set up, but it only seemed to apply to Windows 2003. Surely the patch me…

Alternatives to Cisco for 10 Gb/s Servers

This is a post written in response to Chris Marget's well-done series "Nexus 5K Layout for 10Gb/s Servers". While I appreciate the detail and thought that went into the Cisco-based design, he's clearly a "Cisco guy" showing a teensy-weensy bit of bias toward the dominant vendor in the networking industry. The end result of his design is a price that is over $50K per rack just for networking - approaching the aggregate cost of the actual servers inside the rack! Cisco is the Neiman Marcus of the networking world.

So I came up with an alternate design based on anything-but-Cisco switches. The 10G switches from Arista here use Broadcom's Trident+ chip, which supports hardware features very similar to those of the Cisco solution (MLAG/vPC, DCB, hardware support for TRILL if needed in the future, etc.). Many other vendors offer 10G switches based on this merchant silicon, such as Dell/Force10 and IBM. Because this is commodity switching silicon which BRCM will sell t…

Bufferbloat on a 3G network

I've been experiencing terrible network performance lately on Sprint's 3G (EVDO) network in downtown Chicago. I have both a Sprint Mobile Broadband card for my laptop and an iPhone 4S on Sprint's network. Sprint performance used to be fantastic compared with AT&T and Verizon mobile data networks in Chicago, but the introduction of the iPhone on Sprint seems to have caused some capacity problems. The worst spot I've come across seems to be the Ogilvie train station.

I decided to run some diagnostics from my laptop in the area during a busy rush hour. I expected to see that the network was just hopelessly oversubscribed, with high packet loss. This is a very busy commuter train station, and there are probably tens of thousands of 3G smart-phones in the vicinity at rush hour. There's also lots of mirrored glass and steel in the high-rise buildings above - basically it's the worst "urban canyon" radio environment imaginable.

However, some simple ping …
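For anyone wanting to run the same kind of check, here is a minimal sketch of a ping-under-load test that exposes bufferbloat. It is not the exact procedure used here; it assumes a Unix-style ping and an arbitrary echo target:

```python
import re
import subprocess

def ping_rtts(host, count=10):
    """Run the system ping (Unix-style flags) and return the RTTs in ms."""
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True).stdout
    return [float(m) for m in re.findall(r"time=([\d.]+)", out)]

target = "8.8.8.8"  # any reliable echo target will do

idle = ping_rtts(target)
input("Start a bulk upload/download on the 3G link, then press Enter... ")
loaded = ping_rtts(target)

# On a badly bloated link, packet loss stays near zero but the loaded RTTs
# climb into the hundreds or thousands of milliseconds as carrier-side
# buffers fill up.
print(f"idle:   min {min(idle):.0f} ms, max {max(idle):.0f} ms")
print(f"loaded: min {min(loaded):.0f} ms, max {max(loaded):.0f} ms")
```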

What the Internet looks like in 2012

This is a graph of links between visible Autonomous Systems on the Internet generated from public BGP routing tables early on 1 Jan 2012. Each link of each AS path in the BGP data is represented as an edge, with duplicates removed. The data was then graphed using the twopi layout tool from Graphviz. Links to the top twelve most-connected service provider networks are highlighted in color, with all other AS links in white.
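As a rough sketch of that pipeline (the input format and file name are assumptions, and the published graph also colored the links to the top twelve providers), extracting deduplicated AS-to-AS edges from a dump of AS paths and emitting a Graphviz file for twopi might look like this:

```python
import sys

def edges_from_as_paths(lines):
    """Turn BGP AS paths (whitespace-separated ASNs, one path per line)
    into a deduplicated set of undirected AS-to-AS edges."""
    edges = set()
    for line in lines:
        path = line.split()
        for a, b in zip(path, path[1:]):
            if a != b:                      # skip AS-path prepending
                edges.add(tuple(sorted((a, b))))
    return edges

def write_dot(edges, out=sys.stdout):
    """Emit a Graphviz graph that can be rendered with 'twopi -Tpng'."""
    out.write("graph internet {\n  node [shape=point];\n")
    for a, b in sorted(edges):
        out.write(f'  "{a}" -- "{b}";\n')
    out.write("}\n")

if __name__ == "__main__":
    # Hypothetical input: one AS path per line, e.g. "3356 174 64512"
    with open("as-paths.txt") as f:
        write_dot(edges_from_as_paths(f))
```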
I'm struck by the sheer density of connectivity on the modern Internet. Each of the 94865 lines on this graph represents at least one physical link between organizations. But in the case of larger networks that same thin line might represent dozens of routers and 10 Gb/s fibers at many locations throughout the world.
It certainly looks as robust as originally intended, but also chaotic and disordered. Surely no government, organization, or evil genius bent on world domination could possibly control all those links. The sooner our politicians figure that out, the…