LGF Technical Detour: Whole Lotta Hamsters
Ooo. We have a new server tonight, a third spanking new prison for some unlucky hamsters. I’m rubbing my hands in diabolical glee.
Since I know we have a whole lotta geeks reading (or at least 5), I’m going to open the floor for opinions on the best way to use this hunk of silicon, bearing in mind that we don’t have a real load-balancing setup in place just yet. Clever way to get some free consulting, eh?
It’s really just feedback I’m after; I’ve pretty much settled on a plan after a whole lotta research (but if you can talk me out of it, I’m still open): to set up replication from our current database server to the new one, for a couple of reasons.
1) to have the equivalent of a live backup of all the data, so we could (theoretically) quickly switch to the new server if an earthquake, fire, flood, or Chinese hacker attack takes out our main web server. We have a lot of information here. I’d hate to see it go all Library of Alexandria on us, if you know what I mean.
2) to use the new replicated database for some read-only functions that currently share the live database, like searching and archiving. Since these functions tend to be time-consuming, with lots of unchanging data going out over the web, this should take a significant load off the main DB server, letting it read and write and spin happily in its little wheel. By replicating the database we can also do some neat tricks (well, neat if you’re a geek), like setting the master DB tables to use InnoDB (row-level locking, so concurrent writes don’t block each other) and the slave (sorry, no offense, not PC, oh no, that’s what it’s called) to use MyISAM tables, so that the search function can take advantage of FULLTEXT indexing — which InnoDB doesn’t support.
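For the fellow geeks, here’s roughly what that setup would look like. This is a sketch, not our actual config — the server names, table name, and replication account are all made up. First, the config file fragments that turn on replication:

```ini
# my.cnf on the master (the current DB server)
[mysqld]
server-id = 1
log-bin   = mysql-bin   # binary log feeds the slave

# my.cnf on the slave (the new server)
[mysqld]
server-id = 2
```

Then, the statements to wire the slave to the master, and the MyISAM/FULLTEXT trick applied on the slave only (again, `posts` and `body` are placeholder names):

```sql
-- On the master: create an account the slave can replicate through
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'new-server' IDENTIFIED BY 'secret';

-- On the slave: point it at the master and start replicating
CHANGE MASTER TO
  MASTER_HOST = 'main-server',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'secret';
START SLAVE;

-- On the slave only: switch the table to MyISAM and add the
-- FULLTEXT index the search function will use
ALTER TABLE posts ENGINE = MyISAM;
ALTER TABLE posts ADD FULLTEXT INDEX ft_body (body);
```

The engine mismatch is fine because replication ships SQL statements, not table files, so the master stays InnoDB while the slave runs MyISAM. One honest caveat: replication is asynchronous, so if the master dies mid-stream, the last few writes may not have made it over — it’s a live backup, not a perfect one.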
I know that made no sense to 99.81% of the lizard army, but bear with me. I’ll open another non-techie thread in about 3.782 minutes.
Opinions?