While actively monitoring hosts, Hostpeek.com was hosted on a high-quality VPS from Liquidweb.
If you like HostPeek.com, please link to it, or recommend it to other people. Thank you!
Charts and figures
While the information presented on this site is my sincere and accurate representation of my experiences with these companies, I believe you should also know that I receive commissions for sales I refer to them. I'd rather have them pay than charge you directly for access to the site. Click here for more details.
The charts, the variables, and their meanings
Let's see how to read the charts, what an ideal chart looks like, and what values these variables should take. We'll detail all of this in the order in which the charts appear on each server's page. Keep in mind that, for permissions-related reasons, one or more of these variables may not be available at a given host or on a given server, or may become available or unavailable at some point. All the charts will nevertheless be presented for all servers, in order to record the values whenever they are available, even if only temporarily.
This applies to all the monitored applications (WordPress, a blogging application; Joomla 1.5, a full-featured content management system; phpBB3, a popular forum software; and any others that might be added over time). The applications will be installed and then upgraded regularly via Fantastico, Installatron or a similar tool provided by the host, if available, or manually, using the code as provided by its developer. They will not be modified, tweaked, optimized or otherwise further configured in any way, unless required by a malfunction rendering an application unusable. The aim is to keep these applications in a standardized condition so that load time comparisons make sense; otherwise we'd be comparing apples to oranges.
This page-generation/load time measurement has certain unwanted components to it. The testing script itself must run a function in order to load the page. This takes a small, unknown amount of time that can't be excluded from our measurement. This is not as bad as it may sound, because this time itself depends on the server's computing abilities. The rest, really the bulk of the reported value, is the application's page generation time. The unit of measurement is the millisecond, i.e. 1/1000th of a second. As both the script and the application run locally, this value is not influenced by the monitoring server's location in any way. This was done intentionally, to facilitate fair performance comparisons between servers, and by extension between hosting companies.
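The general idea behind this measurement can be sketched in a few lines of Python. This is only an illustration, not the actual testing script; `render_page` is a hypothetical stand-in for the monitored application's page generation.

```python
import time

def measure_ms(render):
    """Time a single call to `render` and return the elapsed milliseconds."""
    start = time.perf_counter()
    render()
    return (time.perf_counter() - start) * 1000.0

def render_page():
    """Hypothetical stand-in for the monitored application's page generation."""
    "".join(str(i) for i in range(10_000))  # simulate some server-side work

elapsed_ms = measure_ms(render_page)
print(f"page generated in {elapsed_ms:.1f} ms")
```

Note that the call overhead of `measure_ms` itself is included in the result, which is exactly the small, unavoidable component described above.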
This load time refers only to the HTML page that results from running the application, and does not include any external files that it might reference when opened in a browser (CSS files, images, etc.). These are static files that don't require server-side processing, and their load time depends more on the network's capabilities, which will be discussed later.
To be even clearer: as a result of the conditions explained above, the measured value is ideal for server-to-server comparisons, but you should not expect your forum/blog page etc. to load as fast when live on the web. It takes time for information to travel over the Internet. Also, these applications use an almost empty database, which won't be the case for a real website.
How do we want these load time charts to look? First of all, we want the average value to be as small as possible (you can see it, along with other statistical values, in the legend area of the chart). We also want as few spikes as possible, and as small as possible. That is because consistency is important. In my opinion it can be even more important than the average value itself. For one, visitors get used to a certain level of website responsiveness (whether good or bad is not relevant to this argument). When their expectations are no longer met and the site slows significantly, frustration follows. Also, if every 5 minutes your site becomes painfully slow and then fast again, it really doesn't matter that it was the fastest website on the net during its fast periods.
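To illustrate why consistency matters as much as the average, here is a small sketch with made-up sample values, flagging spikes relative to the mean:

```python
from statistics import mean, pstdev

# hypothetical page generation times in milliseconds
samples_ms = [120, 118, 125, 119, 900, 121, 117]

avg = mean(samples_ms)
spikes = [s for s in samples_ms if s > 2 * avg]  # crude spike criterion
print(f"avg={avg:.0f} ms, stdev={pstdev(samples_ms):.0f} ms, spikes={len(spikes)}")
```

A single 900 ms outlier almost doubles the average of an otherwise ~120 ms series, which is why the legend's statistics are best read alongside the shape of the curve itself.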
Server load is reported by the operating system, and consists of averages, over 1, 5, and 15 minutes respectively, of the number of processes that are using or waiting for CPU access. Thus, it is not measured as a percentage, and can go well over 100 (though it's usually not a pretty sight when that happens). Do keep in mind that server load is not a precise measure of server "busyness". I just know that a lot of people are curious about these values and their evolution, and I figured I might as well share them here. It is more of a guide, a possible alarm sign. It will nevertheless be interesting to see if and how server load values translate into different page generation times.
Ideally, we should witness low, relatively constant values. Many say that a load of 1 per processor can be considered OK, so you could use the phpsysinfo application (read on) to try and find out the number of processors that a particular server has. I expect that short peaks will be relatively common (say, when a user is making backups). There may be daily traffic peaks on the server (nighttime will be quiet compared to daytime), so we could see this reflected in the server load values. The graph features the three values in a stack, one on top of the other, color coded. So if at some point the load values are 1, 2 and 3, the graph will be 6 points high, but that doesn't mean the server's load was 6. Read the individual values (current/max/avg) from the legend!
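On Unix-like systems the three figures can be read directly; here is a minimal Python sketch (remember that the chart stacks the three series, so each value must be read on its own):

```python
import os

# the OS reports 1-, 5-, and 15-minute load averages as three numbers
load1, load5, load15 = os.getloadavg()
print(f"load: {load1:.2f} (1m), {load5:.2f} (5m), {load15:.2f} (15m)")

# the chart draws these stacked on top of each other, so the drawn
# height is their sum -- the actual load is each value by itself
```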
It is what it says. It is measured as a percentage, so 100 is the highest it can get. This value may be missing on some servers. That doesn't mean there's no usage, just that the host has taken measures to make this value inaccessible at the customer's user level. This is of no huge importance though. Page generation time is what really matters; the rest of the measurements are "accessory".
Ideally, this value should be low and relatively constant. As the value is sampled instantaneously (no pre-averaging as with server load values), we can expect significant fluctuations. Still, we want these readings to stay as far away from 100% as possible, as often as possible. We also want the average to be low.
Due to the testing script's own CPU usage, the accuracy of the measurement will be affected to a degree, despite the steps I have taken to minimise its influence. A rough estimate is that about 5-10% of extra usage will be reported, which will of course vary. Less powerful servers will probably be at a slight disadvantage.
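For reference, CPU usage percentages of this kind are typically derived from two snapshots of the OS's cumulative CPU time counters; here is a sketch with made-up values (the field layout loosely follows Linux's /proc/stat):

```python
def cpu_usage_percent(prev, curr):
    """CPU usage between two snapshots of (user, nice, system, idle) counters."""
    busy = sum(curr[:3]) - sum(prev[:3])
    total = sum(curr) - sum(prev)
    return 100.0 * busy / total if total else 0.0

# hypothetical counter snapshots taken a few seconds apart
print(cpu_usage_percent((100, 0, 50, 850), (160, 0, 70, 970)))  # → 40.0
```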
This is the number of days that the operating system reports as being up and running, not a percentage. A reboot resets it to zero, which makes for a somewhat unconventional chart, perhaps not very intuitive to interpret. Ideally, we would want to see this value growing constantly, day after day, ad infinitum. In reality, reboots will be required from time to time; major software upgrades and the like call for them. The conventionally calculated "average" will be misleading and hard to interpret correctly, but I decided to let it be calculated anyway, as it can still help in making some direct comparisons.
We want reboots to be rare, and the gaps in the charts resulting from reboots to be as short as possible. The other uptime reports (in percentages) will be of further help. Keep in mind that a server being unreachable (a.k.a. downtime) does not mean that the server was rebooted, as the causes of downtime vary, but a reset of this value to zero does mean a server reboot, and thus a downtime occurrence, even if short-lived.
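The conversion behind the chart is trivial; a sketch (on Linux, the seconds figure would come from the first field of /proc/uptime):

```python
def uptime_days(uptime_seconds):
    """Convert the OS-reported uptime in seconds to the days shown on the chart."""
    return uptime_seconds / 86400.0  # 86400 seconds in a day

print(uptime_days(1_209_600))  # two weeks without a reboot → 14.0
print(uptime_days(0))          # a reboot resets the count → 0.0
```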
While Linux typically appears to use up all of the memory at its disposal for caching purposes, only part of it is actually "not free". All that cache can and will be quickly discarded if applications need more memory. What we measure here is the kernel's and applications' current memory usage, as a percentage. The rest of the internal memory can be considered free.
We want the values we read to be low (they typically can't be zero, nor very close to it). There may be some humps along the way, as usage won't and can't be constant throughout the day, but a well provisioned server should not, except in truly exceptional situations, get very close to or at 100% memory usage (as previously defined). Once this happens, swap space (specially allocated hard disk space) starts to be used as if it were internal memory. But swap space is extremely slow (hard disks are far slower than internal memory chips), and server performance will quite likely be severely affected (we should witness application responsiveness slowing down quite significantly).
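The "used, excluding reclaimable cache" figure described above can be computed from /proc/meminfo-style numbers; a sketch with hypothetical values in kB:

```python
def used_memory_percent(total_kb, free_kb, buffers_kb, cached_kb):
    """Memory actually held by kernel + applications, treating cache as free."""
    used_kb = total_kb - free_kb - buffers_kb - cached_kb
    return 100.0 * used_kb / total_kb

# hypothetical 1GB server: lots of cache, but little memory truly in use
print(used_memory_percent(1_048_576, 131_072, 65_536, 524_288))  # → 31.25
```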
This is swap space usage, as a percentage. Ideally, it should be zero or thereabouts. In time, however, the operating system may decide to keep some "stale" information in the swap space rather than have it occupy physical RAM needlessly, so the value may end up being non-zero and very slightly growing over time, especially if reboots are rare and server activity is light.
Various considerations regarding the above
You may be wondering why many of these values are measured in relative terms (percentages) rather than absolutes. To answer that, we need to consider what we are and are not judging with these measurements. Taking an extreme scenario, we could assume that we don't really care what hardware a host is using; we care what they're able to squeeze out of it. We're after results. The host may have a low-end server with 1GB of memory. If it places only 2 shared hosting accounts on it, it may very well fly like the wind. Or it may slow to a halt, if the customers are very heavy users and the host doesn't make sure that customers who are not a good fit for this sort of service are nicely invited to upgrade to something more suitable for their level of success.
Percentages give us a picture of how intensely the present resources are being used - how much is left over for eventual (benign or malign) traffic spikes. A server may have 8GB RAM but be using 90% of it all the time, while another may have only 2GB, with 90% of it free all the time. One is virtually at the limit of its capabilities, the other virtually unused. Two servers may be using 0.5GB of RAM at all times, but one may have 1GB total memory, the other 2GB. Absolute figures would make us compare 0.5GB to 0.5GB - the servers appear to be similarly used - but percentages, 50% versus 25%, quickly show us that one of them has significantly less wiggle room left, and enable a fairer comparison between them.
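The 0.5GB example above, in code form (figures as in the text):

```python
def percent_used(used_gb, total_gb):
    """Relative usage as a percentage of total capacity."""
    return 100.0 * used_gb / total_gb

# the same 0.5GB absolute usage on a 1GB server and on a 2GB server
print(percent_used(0.5, 1.0), percent_used(0.5, 2.0))  # → 50.0 25.0
```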
Uptime for each server will also be measured and reported using Pingdom, a service that checks the proper functioning of a web page (keyword verification included). The checks run at a frequency of 1 minute, and are made from several locations around the world. This ensures that there will be very few or no false alarms.
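A check with keyword verification essentially boils down to fetching the page and looking for an expected string. Here is a minimal sketch of the idea (the URL and keyword below are placeholders; a real service like Pingdom adds retries and multi-location confirmation, which is what keeps false alarms down):

```python
import urllib.request

def page_is_up(url, keyword, timeout=10):
    """Fetch `url` and report whether it responds and contains `keyword`."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = resp.read().decode("utf-8", errors="replace")
            return resp.status == 200 and keyword in body
    except OSError:
        return False  # unreachable, DNS failure, timeout, HTTP error, ...
```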
Two files of different sizes (data compressed with zip) will be placed on each server. You may download either one to see how the connection fares, whether it lives up to your expectations, etc. (assuming that I still maintain an account with the company at the time of your visit). If your connection is fast, download the bigger file so the connection gets a chance to stabilize; otherwise, the smaller file will do the job too. Keep in mind that if your website visitors are closer to the server than you are, they are likely to see higher speeds. It is not your own experience that you're trying to enhance, but that of your visitors, so ideally you should run this test from a computer in their approximate location. It may help to have some online friends or acquaintances living there to assist you with this.
A single test may be inconclusive or, rather, lead to the wrong conclusion. It may be that the end user has connectivity problems and there's no issue with the server you're testing, so it's good to at least double check.
I will routinely (it used to be bimonthly, but monthly should do) use host-tracker.com's uptime check feature with the 10MB file, and keep a record of the results. It works by downloading the file from servers all around the world, recording the download speed for each, and calculating the average. I will use this average figure for the comparison on the main page. Given the large number of locations, it is likely to be fairly accurate. It will, however, tend to be biased in favor of hosts in the US, as most of the checks are done from that area, and distance (well, latency really) tends to have a significant effect on the maximum achievable transfer speed over a single TCP/IP connection.
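The averaging itself is straightforward; a sketch with made-up per-location speeds:

```python
def average_speed_kbps(results):
    """Average the per-location download speeds (KB/s) of one check run."""
    return sum(results.values()) / len(results)

# hypothetical results of a single multi-location download check
checks = {"Dallas": 950.0, "Frankfurt": 420.0, "Singapore": 310.0, "Sydney": 880.0}
print(f"average: {average_speed_kbps(checks):.0f} KB/s")  # → average: 640 KB/s
```

Note how a plain average weights each location equally, so the mix of check locations (mostly US, in host-tracker's case) directly shapes the result, which is the bias mentioned above.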
This is a local installation of phpsysinfo. It enables you to see the momentary values of various server-related variables, some server hardware information, etc. It's rather self-explanatory once you see it, and the info tends to vary from one host to another, as each makes its own hardware and partitioning choices. Unfortunately, it doesn't work as expected on all servers, so it will be missing from those.
I figured that programmers and technically inclined visitors might be curious to see this info prior to signing up. You can read about the information that this function provides on the official PHP site: http://www.php.net/phpinfo
Last Updated on Sunday, 07 August 2011 21:11