
The Great Server Migration

It’s been a while since I’ve had to move. I don’t mean moving in real life; I mean moving my server infrastructure.

I’ve been using the same hosting company for a while now. And… something has gone wrong. They’ve had increasing downtime. Their tech support has also been giving me bad answers (the last time I emailed them about an issue, I included a very long description, along with instructions to ‘go make a cup of coffee or something, relax, and watch carefully for 20 minutes or so to see the problem.’ They didn’t, and gave the wrong answer anyway.) They’ve become unreliable for the scale of some of the projects I deal with.

But, it’s also a good time to change damned near everything about my setup. For hosting & DNS I have classically used one provider, then wrote a weird little failover system on top of that structure. (Though I also have a mix of other servers handling specific services all over – my setup isn’t nearly as widespread as some I’ve known, but it’s also not simple.) While I was at SEMA Show, the primary server, backup server, and database server – which all live with the same vendor – fell over at the same time, and never completely recovered. In fact, there’s a slight chance this page will “hitch” while loading – the old setup just can’t keep up.

Because the server setup is key to a product, I had a rare chance: I could scrap EVERYTHING and start from scratch. And if I were going to start from scratch, what should it look like?

No Vendor Lock-in: Every piece either has a replacement already, or can be swapped to a new vendor quickly. My web and db servers now live at opposite ends of the US, and a different company handles each.

Failover: While it was cute that I wrote my own scripts to handle an old-school failover setup, it really wasn’t enough. I picked a DNS vendor with failover testing and execution built in. Then I found a second one so I could rebuild it instantly if something happened. (Failover means that if a web server, or the infrastructure behind it, dies, FancyDomainName.com points at another server instead; all the settings change over, and almost no one notices the difference. Downtime is limited to a couple of minutes.)
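
For the technically curious, my old DIY approach boiled down to something roughly like this: a minimal Python sketch, not my actual scripts. The health-check URL, the thresholds, and the switch_dns_to_backup() stub are all made up for illustration; a real version would call the DNS vendor’s API at that point, which is exactly the part the new vendor now handles for me.

```python
import time

import requests

# Hypothetical values for illustration only - not my real setup.
HEALTH_URL = "https://FancyDomainName.com/health"
CHECK_INTERVAL = 30      # seconds between checks
FAILURE_THRESHOLD = 3    # consecutive failures before switching over


def primary_is_healthy() -> bool:
    """Return True if the primary web server answers quickly with a 200."""
    try:
        resp = requests.get(HEALTH_URL, timeout=5)
        return resp.status_code == 200
    except requests.RequestException:
        return False


def switch_dns_to_backup() -> None:
    """Placeholder: a real script would call the DNS vendor's API here
    to repoint FancyDomainName.com at the backup server."""
    print("Failing over: repointing DNS at the backup server")


def watch() -> None:
    failures = 0
    while True:
        if primary_is_healthy():
            failures = 0
        else:
            failures += 1
            if failures >= FAILURE_THRESHOLD:
                switch_dns_to_backup()
                break
        time.sleep(CHECK_INTERVAL)


if __name__ == "__main__":
    watch()
```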

Geographical & Corporate Isolation: As mentioned in a previous entry, the new web & db servers don’t exist in the same city, or even on the same coast. If there’s a hurricane, the system keeps on cruising. For web & database there are actually 3 vendors and 3 cities – if one loses power and the generator catches fire (that happened once), we keep cruising. If someone in IT or corporate starts making bad decisions (which is what led to this changeover, IMO), we keep cruising.

Better Hardware: The questionable current host had recently upgraded me to new hardware for all of the servers I had with them. Except… the new machines really weren’t particularly buff. I needed something with more muscle and speed: lots of RAM, better processors, etc. While the goal is high reliability, moving to a new server with the same specs would be “meh”. Time to really get moving.

Distributed Infrastructure: I already use caching systems on-server, and some CDN stuff (Content Distribution Network – basically, if I upload an image to the server, it gets duplicated to more points elsewhere, and when you visit the site, it picks the one nearest to you). But most of that is just on auto-pilot. For one project, I need to up the game: I picked a CDN provider that lets me upload the thumbnails and such directly to the CDN, rather than uploading to the website and having the CDN duplicate it. That changes how requests are done, and gives a heck of a bandwidth and speed boost. And these days both storage and bandwidth are cheap, so unless I start uploading terabytes of data, the cost stays tiny – it just required lots of programming changes.
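
To give a feel for what “upload straight to the CDN” means in code, here’s a minimal sketch assuming an S3-compatible storage API (which a lot of CDN origins offer) and the boto3 library. The endpoint, bucket name, credentials, and URLs are placeholders, not my actual vendor or setup.

```python
import boto3

# Placeholder endpoint and credentials - assumes an S3-compatible CDN origin.
s3 = boto3.client(
    "s3",
    endpoint_url="https://storage.example-cdn.com",
    aws_access_key_id="YOUR_KEY",
    aws_secret_access_key="YOUR_SECRET",
)


def push_thumbnail(local_path: str, key: str) -> str:
    """Upload a thumbnail straight to the CDN's origin bucket and
    return the URL the site should embed instead of a local path."""
    s3.upload_file(
        local_path,
        "site-thumbnails",  # hypothetical bucket name
        key,
        ExtraArgs={
            "ContentType": "image/jpeg",
            "CacheControl": "max-age=31536000",  # let edge nodes cache aggressively
        },
    )
    return f"https://cdn.example-cdn.com/{key}"


# Example: the app stores this URL rather than a path on the web server.
# print(push_thumbnail("thumb_001.jpg", "thumbs/thumb_001.jpg"))
```

The “lots of programming changes” are basically this pattern repeated: instead of writing a file to the web server and letting the CDN copy it, the code pushes the file out directly and keeps only the resulting URL.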

Faster DNS: One thing that has annoyed me for YEARS – and I’ve mentioned it to them, and it’s been written up online by others – is slow DNS resolution. Fifteen years ago it wouldn’t have mattered much. Now it matters a lot, particularly since I have an application that needs faster response times. While I selected the new DNS vendor for its failover, this is a huge secondary benefit they bring.
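
If you want to see that kind of difference for yourself, here’s a small sketch using the dnspython library to time lookups against a specific nameserver. The IP addresses are documentation placeholders; you’d substitute the old and new providers’ authoritative servers for your own domain.

```python
import time

import dns.resolver  # from the dnspython package


def average_lookup_ms(nameserver: str, hostname: str, attempts: int = 5) -> float:
    """Average time in ms to resolve hostname against one specific nameserver."""
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [nameserver]
    total = 0.0
    for _ in range(attempts):
        start = time.perf_counter()
        resolver.resolve(hostname, "A")
        total += time.perf_counter() - start
    return (total / attempts) * 1000


# Placeholder IPs (documentation ranges) - swap in real nameserver addresses.
for label, ns in [("old provider", "192.0.2.53"), ("new provider", "198.51.100.53")]:
    print(label, round(average_lookup_ms(ns, "FancyDomainName.com"), 1), "ms")
```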

And, of course, better backups, a better interface, a bit closer to the metal, etc. The list above covers the 6 high points of what needed to happen, but the list of requirements / desires for the system goes on and on. Which is why I was kind and stopped where I did – if you’re not a techie, it’s probably not THAT interesting, and if you are a techie, I’m not revealing enough about the design to be truly interesting (for reasons techies will understand 🙂 )

All of this brings me two things: much, much faster servers, and much, much more reliable services. The new DNS service with failover is in place, the new CDN is partially in place, and the first new web and database server is in place. Stuff is currently being migrated to it – even this site should be moving fairly soon (only the main project and project-specific sites get moved onto both servers and wired for failover redundancy; my blog doesn’t need redundancy 😉 ). I’ll probably update this post once it’s settled into its new home 🙂

