Etc.


— 9:33 AM on August 27, 2014

Hey, folks. I haven't posted an Etc. here in a while. Sometimes, when things get really busy, I tend to clam up and just focus on the project(s) at hand. Right now, as you might have surmised from all of the rumors and announcements, things are very busy indeed. I had one free weekend in the last four, and that was kind of an indulgence. Looking forward, I expect to be working through the next three to four weekends, just to keep pace.

The good news for you is that we have lots of reviews and articles coming up. Should be interesting, and I'm already excited about some of the stuff we're producing behind the scenes right now.

We made a server move a few weeks back, and I said at the time that I'd write something about the new setup once I had time. The story is a pretty simple one, with a very happy result for us, so I can share it quickly.

Our old hosting arrangement involved two separate servers that we owned, both 1U rack units in a co-lo facility in Virginia. The primary server had dual six-core Opterons, 32GB of RAM, dual SLC SSDs in RAID 1, and dual HDDs for logging and backup storage. We made it the primary mainly due to the large amount of RAM and the fact that it had dual redundant power supplies. Our secondary server was a live backup that we rarely, if ever, had to use as a stand-in for the primary. That box was faster than the primary, since it was based on newer six-core Xeons, with 16GB of RAM and a similar storage setup. We used it to host a VM that served as our development environment.

These systems were plenty fast for our needs, since our home-brewed content management system is pretty efficient. Trouble is, near as we can tell, our hosting provider had some kind of issue with conditioning the power to these boxes. One morning not long ago, the backup box went offline and didn't come back. At the same time, our primary server rebooted out of the blue. After that, the secondary box wouldn't POST at all, and the primary one started throwing ECC errors.

We needed to move away from that situation ASAP. In a bit of a rush, Bruno brought up the site on a VM hosted at Linode. We'd been looking at those guys for a while, especially after their upgrade to SSDs and Xeon E5-2680 processors earlier this year.

Long story short, we got the site running on a relatively modestly sized Linode VM, moved it into production, and to my surprise, the site turned out to be even snappier, if anything. Bruno did a great job with the transition to the new box, so we had few issues after the move. Immediately after the transition, we posted both the Broadwell architecture reveal and the TSX erratum news, and the new server handled a healthy and sustained influx of requests without even breaking a sweat.

We now have a secondary VM that serves as our development environment, and Bruno has been playing with backup tools that let us do a bare-metal restore to a new Linode VM across the Internet. Linode itself has some slick monitoring tools, too. We can even spool up a new VM in any of Linode's several data centers around the nation, if we want geographical diversity. It's all very nice indeed.
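For the curious, the general shape of this kind of backup is pretty simple: mirror the root filesystem over SSH with rsync, skipping the pseudo-filesystems that only exist on a running system. Here's a rough Python sketch of that approach; the hostname, user, and exclude list below are placeholders rather than our actual configuration or the exact tooling Bruno settled on:

```python
#!/usr/bin/env python3
"""Rough sketch: mirror a live Linux root filesystem to a standby box
over SSH with rsync. Hostname, user, and excludes are placeholders."""

import subprocess

# Paths that shouldn't be copied from a running system
EXCLUDES = ["/dev/*", "/proc/*", "/sys/*", "/tmp/*", "/run/*",
            "/mnt/*", "/media/*", "/lost+found"]

def mirror_root(dest="root@standby.example.com:/"):
    # -a: archive mode, -A: ACLs, -X: extended attributes,
    # -H: hard links, --delete: drop files gone from the source.
    # Needs to run as root to read everything.
    cmd = ["rsync", "-aAXH", "--delete"]
    for pattern in EXCLUDES:
        cmd += ["--exclude", pattern]
    cmd += ["/", dest]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    mirror_root()
```

To restore, you'd boot the new VM from a rescue image, rsync the mirror back the other way, and reinstall the bootloader. Again, just a sketch; real bare-metal restore tools handle more edge cases than this.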

Best of all, if the hardware breaks, we don't have to pay someone an hourly fee for remote troubleshooting. I like PC hardware, but sometimes, owning it just isn't the right move.

So that's the story of our hosting change. Yay for Moore's Law.
