Following a highly successful Black Friday/Cyber Monday on LowEndTalk/LowEndBox, providers endured a round of YABS (Yet Another Benchmark Script). Users signed up for cheap VPSes – sometimes many more than they really needed – and proceeded to excitedly run YABS and post the results. Amusingly, this usually means that performance on BF/CM nodes is worse for the 24-48 hours after purchase because there is so much of this activity going on.
YABS – and many others like it over the years – attempts to produce a meaningful report to judge or grade the VM. It reports CPU type and other configuration information, then runs various disk, network, and CPU tests to tell the user whether the VPS they've just bought is good, bad, or middling. But does it really?
It’s valid to check CPU and disk performance for outliers. We’ve all seen overcrowded nodes. Hopefully, network performance is checked prior to purchase through test files and Looking Glass. But other than excluding ancient CPUs and floppy-drive-level performance, is the user really likely to notice a day-in, day-out difference between a 3.3GHz CPU and a 3.4GHz one? Particularly since any operation touches many often-virtualized subsystems.
If I could write a benchmark suite, here is what I would like it to report.
How often does the providers’ hardware go bump in the night? How often does the network falter? I care less about that extra few MB/sec in a synthetic benchmark and more about how my rsync backups flow or web pages are served around the clock.
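That kind of around-the-clock reliability is something you can actually measure yourself, just not in a one-shot benchmark. Here's a minimal sketch of a TCP availability probe; the hostname and port are hypothetical placeholders, and in practice you'd run this from cron (or an external monitor) and review the log over weeks:

```python
# Minimal availability-probe sketch. HOST/PORT are hypothetical --
# point them at your own VPS and a service you expect to be up.
import socket
from datetime import datetime, timezone

HOST = "vps.example.com"  # placeholder hostname
PORT = 22                 # e.g. SSH

def probe(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection succeeds within `timeout` seconds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def log_line(host: str, port: int, up: bool) -> str:
    """Format one timestamped log entry (split out so it's easy to test)."""
    when = datetime.now(timezone.utc).isoformat()
    return f"{when} {host}:{port} {'UP' if up else 'DOWN'}"

if __name__ == "__main__":
    # One probe per invocation; schedule it every minute via cron
    # and grep the accumulated log for DOWN entries.
    print(log_line(HOST, PORT, probe(HOST, PORT)))
```

A month of these log lines tells you far more about a provider than any disk benchmark run five minutes after deployment.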
Every provider has issues – whether you’re MomNPop.hosting or Amazon. The question is how they respond to them. Does the provider dawdle or are they right on the spot? Are they open and transparent? Do they communicate well? I can think of several providers who had issues and then went silent – which is disastrous. On the other hand, I was affected by an outage with a prominent LET provider years ago and they sent hourly status reports until it was resolved. Which would you feel better hosting with?
I generally don’t need hand-holding – though if I were paying for managed hosting, I’d expect it to be excellent. But eventually everyone needs support of some kind. I went round and round with a host once because my outbound mail was blocked. Their support insisted it was something in my VPS and wanted to charge me to look at it. Eventually it escalated to someone more senior who said they block outgoing mail by default but could unblock me. That’s an example of how even no-support hosts must provide some support.
I’m responsible for keeping my VPS secure. No worries. But if the hypervisor is hacked, there’s nothing I can do. How much do you trust the provider? Even worse, what if their panel is hacked and attackers can get in on any console? Or if their WHMCS is hacked and your credentials are sold into a hundred black markets?
You find a great Black Friday deal, eagerly send your money, and then…discover they haven’t updated their VM templates since the Beijing Olympics. And what of next year when the next great Debian release comes out – how long am I going to have to wait for their “validation”?
When my provider hands me an IP, is it on every RBL from Bangor to Bangkok? Am I going to find the reverse DNS is set to a domain called “i-love-kiddie-porn.com”? (Yes, this actually happened – the previous owner thought it was funny. If I hadn’t noticed, it could have been a headache down the road.)
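At least this one is checkable on day one. A DNSBL query is just a DNS lookup: reverse the IP's octets, append the blocklist zone, and see if the name resolves. Here's a stdlib-only sketch (the Spamhaus ZEN zone is a common choice; swap in whichever lists matter to you):

```python
# DNSBL (RBL) lookup sketch using only the stdlib resolver.
# An IP is "listed" if <reversed-octets>.<zone> resolves;
# NXDOMAIN means it is not on that list.
import socket

def reverse_ip(ip: str) -> str:
    """Reverse IPv4 octets: 203.0.113.7 -> 7.113.0.203"""
    return ".".join(reversed(ip.split(".")))

def is_listed(ip: str, zone: str = "zen.spamhaus.org") -> bool:
    query = f"{reverse_ip(ip)}.{zone}"
    try:
        socket.gethostbyname(query)
        return True       # name resolved: IP is on this blocklist
    except socket.gaierror:
        return False      # NXDOMAIN (or lookup failure): not listed

def reverse_dns(ip: str) -> str:
    """Return the PTR record for an IP, or '' if none is set."""
    try:
        return socket.gethostbyaddr(ip)[0]
    except (socket.herror, socket.gaierror):
        return ""
```

Run `is_listed()` against your new IP for a handful of major zones and eyeball `reverse_dns()` before you point anything important at it. (Note that Spamhaus rate-limits and blocks queries from some public resolvers, so results from shared DNS may be unreliable.)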
Is someone trying to mine LowEndCoins on my physical node and burning CPU 24×7? Is someone running a DDoS-magnet gameserver?
Hard to Quantify
Unfortunately, all of these things are impossible to quantify in a shell script. If you ask me which providers I recommend, the benchmarks that result from VMs on their nodes are not likely to factor into my response. Rather, I’ll point to the provider’s history and these “unquantifiables” as the major considerations in choosing a provider.
New Benchmarking Series on LowEndTalk