When developing websites, never forget that page render time is critical. I’ve seen far too often that production code simply performs poorly. Often the blame goes to the servers or the connection to them. Reviewing the error logs will show what issues, if any, are present, and if routine automated maintenance is performed on the disks, the database back-end, and memory recycling, then the issue lies elsewhere. More often than not the problem is with the code being rendered to the client. Even if cross-platform browser testing is conducted to verify the rendered formatting, that does nothing to actually benchmark the performance of the site.
Test-Driven Development
Over a number of years I was taught the concept of test-driven development, which for websites can be done with something like SimpleTest for PHP development or NUnit for ASP.NET development. The problem I find with this methodology is that the code generated as a result of the tests is only as good as the developer writing those tests. The larger problem comes when moving into the web development arena: unless the developer is spending all their time working on a single internally developed website framework, developing a full complement of tests internally is a waste of time and development resources.
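For reference, here is a rough sketch of what a SimpleTest case can look like; the `PriceCalculator` class and its methods are hypothetical stand-ins for your own application code, not anything from a real framework.

```php
<?php
// A minimal SimpleTest sketch. PriceCalculator and totalWithTax() are
// made-up examples standing in for your own application code.
require_once 'simpletest/autorun.php';
require_once 'PriceCalculator.php';

class PriceCalculatorTest extends UnitTestCase
{
    function testTotalIncludesTax()
    {
        $calc = new PriceCalculator(0.07);              // hypothetical 7% tax rate
        $this->assertEqual($calc->totalWithTax(100), 107.0);
    }

    function testNegativeAmountIsRejected()
    {
        $calc = new PriceCalculator(0.07);
        $this->assertFalse($calc->totalWithTax(-5));    // assume bad input returns false
    }
}
```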
What a waste
So then what? Often, outside of performing some simple render tests against a couple of different browsers and walking through some of the website forms, nothing happens, ABSOLUTELY NOTHING. Often this method of testing is considered good enough. But I have to ask: how many visitors to a site do you lose when your homepage is over a megabyte? Typically the developers have a 100Mb or faster LAN connection to the websites with latency under 2ms. The problem is that real-world users don’t have that kind of pipe available. Websites should load quickly for whatever the lowest common user base is. If it’s an internal-only site, what about users coming in via VPN from home? Now try performing your browser testing on a 1Mb or slower link with at least 100ms of latency and you’ll start to see what I’m getting at. The site no longer performs like it used to.
Make Time
As everyone’s time is limited, you may begin to wonder: where am I going to find the time to perform all this testing? Well, I hate to burst your bubble, but regardless of what type of testing is conducted, some time is always spent. The good news is that there are tools freely available out there (as I know we all work without a budget) to assist with this testing. The first issue that may arise is a corporate policy dictating that only *browser XYZ* may be used, and no other browsers are allowed or should be supported. If you are in this scenario, RUN!….. *just kidding*, talk with your manager about loading Firefox on the development workstations to assist with testing your code.
W3C Validation
The most basic testing would be an add-on called HTML Validator. This add-on allows you to validate against W3C HTML standards; while not strictly a performance check, it helps minimize cross-browser rendering issues. Below is a screenshot of the window presented when single-clicking the icon within Firefox.
By default this add-on runs on every page, so all you need to look for is the red and white “X” in the corner. When double-clicking the icon, it will present a window, as seen below, stating the line of rendered code where the problem can be found, what the problem is, and some documentation about the error being presented.
I will occasionally get false positives with this add-on; it is designed to test W3C compliance of the generated HTML, so anything beyond HTML may throw warnings or errors (inline CSS comes to mind).
Now for the meat and potatoes.
When looking at website performance, the real test is time. There are basically three things to check:
- The number of files being requested
- Code complexity
- Total page weight
Together, these three variables determine the time it takes for the page to be fully presented to the user.
The many requests
The number of files being requested is a latency-centric issue. When a specific page is requested, as it loads there will be references to images, CSS files, JS files, and so forth. Each file needs to be requested separately, which raises two issues: overall latency build-up (a variation on the “rubber band effect”) and browser connection limits. The rubber band effect, generally speaking, is that when sending or requesting information the traffic has to bounce back and forth between points:
- Let’s start at A
- then go to B
- then go to A
- then go to B
- then go to C
- then go to A
- then go to B
- then go to C
- … rinse, wash, repeat until you’re blue in the face
More specifically for our case:
- Request: somepage.html
- Response: here’s page somepage.html
- Request: my.css
- Request: mysecond.css
- Response: my.css
- Response: mysecond.css
- Request: image_xyz
- and so on and so forth
The problem here is a lot of general page-request I/O, which results in latency build-up. If your average latency to the server is 100ms and the page is very simple with just one small image, there will still be two separate, non-concurrent requests, resulting in 200ms of render time before any transmission time. A CMS-based site can easily make anywhere from 10 to well over 30 requests, which is why it sometimes takes seconds even on a broadband link to load all the components. If there are multiple JavaScript or CSS files, could they be combined on the back-end and pulled down in a single request? Instead of using image slices with a number of different pieces, could parts be rendered with CSS instead?
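One low-effort way to cut the request count is a small server-side bundler. Below is a rough PHP sketch; the file names and the css/ directory are placeholder assumptions, not anything from a specific site.

```php
<?php
// css.php -- rough sketch of bundling several stylesheets into one request.
// The file list and the css/ directory are placeholders for your own files.
$files = array('my.css', 'mysecond.css', 'layout.css');

header('Content-Type: text/css');
header('Cache-Control: public, max-age=3600');   // let the browser cache the bundle

foreach ($files as $file) {
    // basename() keeps a malformed entry from escaping the css/ directory
    readfile(dirname(__FILE__) . '/css/' . basename($file));
    echo "\n";
}
```

The page then pulls `<link rel="stylesheet" type="text/css" href="css.php" />` once instead of one `<link>` per stylesheet, turning several round trips into one.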
Code Smell
The second problem is code complexity or inefficiency. Beyond combing through the code line by line for “code smell”, take a look with Firebug or HttpWatch at the time taken to fulfill each request. If a few outliers take 100ms or more longer than the rest, that is typically a sign there is a problem with the given file. When running the rendering tests, make sure to run them a few times to establish a baseline and rule out any abnormalities.
Here you can see a test against my LinkedIn homepage, as I was noticing some slowness today. The sluggishness of the site appears to be due to the RSS-styled feed of my connection updates.
Trim the fat
The last issue is total page weight. Here is where YSlow really shines; take a look specifically at the Statistics and Components tabs for the rendered page.
Uncompressed or poorly compressed images are the largest offenders on most sites I’ve seen; typically, art received from a graphic arts department comes over as PNG files or uncompressed JPEGs at a resolution many times what is required. Resize the images down and then save them either as JPEGs or GIFs with appropriate compression. PNG can occasionally produce smaller files, but unless you are targeting only browsers that support PNG, leave it for development purposes only.
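If PHP with the GD extension is available, the resizing can even be scripted rather than done by hand in an editor; the sketch below assumes a hypothetical oversized `hero_original.png` and an 800px target width.

```php
<?php
// Sketch: shrink an oversized PNG down to a web-sized JPEG using GD.
// The file names and the 800px target width are placeholder assumptions.
$src = imagecreatefrompng('hero_original.png');
$w   = imagesx($src);
$h   = imagesy($src);

$newW = 800;                                  // target display width
$newH = (int) round($h * ($newW / $w));       // preserve the aspect ratio

$dst = imagecreatetruecolor($newW, $newH);
imagecopyresampled($dst, $src, 0, 0, 0, 0, $newW, $newH, $w, $h);

imagejpeg($dst, 'hero_web.jpg', 75);          // 75% quality is usually plenty for web art
imagedestroy($src);
imagedestroy($dst);
```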
Next it’s time to look at the code. Assuming you have already looked for and corrected issues with “code smell”, are your files gzipped? Files can be compressed in advance, but on modern servers that very rarely shows any advantage over run-time compression. Compressing all your PHP, ASP, ASPX, CSS, JS, and XML output can save a large amount of file size. You may ask yourself: what about the increased CPU load on the server and client from performing all that compression? Truthfully, the load increase is minimal compared to the time saved, as the slowest link is almost always the user’s internet or network connection.
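For dynamic PHP output, one simple run-time approach is the built-in `ob_gzhandler` output buffer, which only compresses when the browser advertises gzip support; a minimal sketch:

```php
<?php
// Start an output buffer with ob_gzhandler before anything is sent.
// It inspects the Accept-Encoding header and only gzips the response
// when the requesting browser actually supports it.
ob_start('ob_gzhandler');
?>
<html>
  <head><title>Compressed page</title></head>
  <body>
    <!-- the rest of the page is generated and sent as usual -->
  </body>
</html>
```

Alternatively, `zlib.output_compression = On` in php.ini enables this server-wide, and static CSS/JS/XML files are usually better handled by the web server itself (for example mod_deflate on Apache, or IIS compression for ASP/ASPX).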
Linkage
Below are links for the tools listed above, along with a few extras for test automation and SQL injection testing.
Firefox – http://www.mozilla.com/en-US/firefox/
Firebug – http://getfirebug.com/ – there is also FlashFirebug and FirePHP
YSlow –
HTML Validator – https://addons.mozilla.org/en-US/firefox/addon/html-validator/
Live Headers – https://addons.mozilla.org/en-us/firefox/addon/live-http-headers/
SQL Inject Me – https://addons.mozilla.org/en-us/firefox/addon/sql-inject-me/
HttpWatch basic edition – http://www.httpwatch.com/ – for IE performance testing, though it can be used for Firefox as well. There is also a more featured commercial version available.
iMacros – https://addons.mozilla.org/en-US/firefox/addon/imacros-for-firefox/ – Rendered test-driven performance automation, a bit more involved but can be useful for automated testing in certain scenarios.