Friday 25 January 2013

Performance Analysis - Part 3: Develop performance tests



Last week, we looked at how to choose a test metric to measure the state of a problem. Just as importantly, this allows us to measure the difference our changes make to the system.

The conclusion was: if we can sustain the load seen at peak (20 home page requests per second) and keep the full request time under 10 seconds, then our problem has improved. If the response time is worse, or very erratic, then we're moving in the wrong direction.
Now, let's look at how to turn our idea for a test into reality.

Basic performance tests with Jmeter.


We're going to use Jmeter, because it's very easy to get started with. It's not suitable for everything - for example, I'd recommend against using it for microbenchmarking application components - but it's powerful, flexible, and scalable.

Getting started & Thread Groups.


Make sure you have a recent version of Java installed, then install Jmeter from here.
Now go and install the extra Jmeter plugins here. These externally managed plugins add a great set of additional tools to Jmeter, including some much-needed improvements to the graphing.




Then start Jmeter; you should see Jmeter's main window.










The first thing we need to add is a Thread Group. This controls a few global parameters about how the test will run, e.g.


  • how many threads (simulated users, in this case).
  • how long the test takes to ramp up to full user load.
  • how many times the thread loop runs.
  • how long the thread runs for.


Right click on "Test Plan", choose "Add" --> "Threads" --> "Thread Group"

From our previous investigations, we obtained a few numbers. The first important one is:
"100 simultaneous active clients" - this is our number of active threads.

Also select "loop count" = forever, and set the schedule to 600 seconds (10 mins) - don't worry, it will ignore the Start/End time. Unless you know otherwise, always set a reasonable length of time for your tests. 1 minute is often not enough to fully warm-up and saturate a system with load.

Adding Request Defaults


Next, we need to add some global HTTP configuration options.
Right click on the "Thread Group" you added previously, and choose "Add" --> "Config Element" --> "HTTP Cache Manager", then repeat for "HTTP Cookie Manager", "HTTP Request Defaults" and "HTTP Header Manager".

In "HTTP Cache Manager", set:

  • "Clear cache each iteration". We want each test request to be uncached, to simulate lots of new users going to the site.


In "HTTP Cookie Manager", set

  • "Clear cookies each iteration". Again, we want each new simulated user to have never been to the site before, to simulate maximum load.


In "HTTP Request Defaults", set

  • Any headers that your users' browsers typically send, e.g. "Accept-Encoding", "User-Agent", "Accept-Language" - basically anything needed to make the system under test think a real browser is connecting (see the example below).
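As an illustration, here's the sort of header set a typical browser sends. The values below are examples only - capture the real ones from your own users' browsers or your packet captures.

```python
# Example values only - copy real headers from your own browser's developer
# tools or a packet capture of your users' traffic.
BROWSER_HEADERS = {
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.17 "
                  "(KHTML, like Gecko) Chrome/24.0.1312.57 Safari/537.17",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Encoding": "gzip, deflate",
    "Accept-Language": "en-GB,en;q=0.8",
}
```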


In "HTTP Request Defaults", set:

  • Web Server name or IP - your test system's address. Don't use your "live" system unless you have absolutely no other choice!
  • "Port number" - usually 80.
  • "Retrieve all embedded resources from HTML files" - true. Jmeter does not understand JavaScript; it will parse any HTML it finds and retrieve sub-resources, but it will not execute JavaScript to find any resources. If JavaScript drives web-server requests on your site, you'll need to add those requests manually.
  • "Embedded URLs must match" - a regex for your system's web URLs. This is here to prevent you from accidentally load testing externally hosted files - those external providers are unlikely to be happy with you if you run unauthorised load tests. See the sketch below for the kind of pattern to use.


Adding the Home Page Request


Now, we can add our home page request.

Going back to last week's post, remember that we need to simulate a load of 20 home page requests per second and keep the response time under 10 seconds.

Add "Sampler"--> HTTP Request":
name it "Home Page"
set the URL to "/" - it will inherit the default values we set earlier, including the server URL to use.





We want to simulate 20 home page requests per second, so we need to throttle the test to that rate; otherwise it will loop as fast as it can.

Click on the "Home Page" item you just added, then add "Timer" --> "Constant Throughput Timer", then set:

  • Target throughput (per min) - 1200 (20 per second).
  • Calculate throughput - "All active threads". This has some disadvantages, but the better solutions are outside the scope of a "getting started" guide! See the arithmetic sketch below for how this pacing works out per thread.
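To see how this pacing works out, here's the back-of-envelope arithmetic (a sketch of the idea only, not Jmeter's actual timer implementation):

```python
# Back-of-envelope pacing for a Constant Throughput Timer set to
# "All active threads" - a sketch of the idea, not Jmeter's real algorithm.
target_per_minute = 1200
active_threads = 100

target_per_second = target_per_minute / 60.0              # 20 requests/sec overall
per_thread_interval = active_threads / target_per_second  # seconds between requests, per thread

print(f"{target_per_second:.0f} requests/sec shared across {active_threads} threads")
print(f"=> each thread sends roughly one request every {per_thread_interval:.0f} seconds")
```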




Adding analysis plugins.


One of Jmeter's great strengths is its set of data-analysis and test-debugging tools; we're going to add the minimum required to our test.

Now add the following "Listeners":


  • "View Results Tree" - Used for a quick view on the data you're sending and receiving. Great for a quick sanity check of your test.
  • "Aggregate Report" - Aggregated stats about your test run.
  • "jp@gc Response Times over Time" - A view of how your response times change over time.


First test run!


If you haven't already done so, save your test.

Now we need to select a place to run our test from. Running a performance test from a slow laptop, connected via WiFi to a contended DSL line, is not a good idea.

Choose somewhere with suitable CPU power and reliable network links, close to your target environment. The test must be repeatable, without contention from other systems or processes to worry about.

Run the test.


Analysing test results.


Keep detailed notes about each test run, recording the state of the environment and any changes you've made to it. Version your tests as well as keeping records of the results. Being able to look back in a spreadsheet at results from a similar test run weeks ago is very valuable.

Here's a quick look at an example result.


  • Results Tree - each result should come up in green, showing a 2xx response from the server. By default, Jmeter treats HTTP error response codes (4xx and 5xx) as failures. This view allows you to quickly debug your test requests and responses, to ensure the test is behaving as you expect. Once the test works correctly, disable the Results Tree, as it slows down the test.





  • Aggregate Report - statistics about each request type that was made. (Taken from an unrelated test).













  • Response Times over Time - self-explanatory. (Taken from an unrelated test)












Automation.


For the most reliable results, you shouldn't run the tests in interactive mode, with live displays and analysis. This skews the results slightly, as your computer wastes CPU cycles updating the displays.

Instead, use command-line mode, disable all the analysis tools, and use the Simple Data Writer to write the results to a file on disk for later analysis. All the analysis plugins I've used here can load data back from one of these results files, so you can analyse and re-analyse results at your leisure.
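As a taste of what that looks like, here's a minimal sketch: it runs a saved test plan in non-GUI mode, then summarises the results file. The file names are assumptions, and the CSV column layout depends on your jmeter.properties, so check yours before relying on it.

```python
# Minimal sketch: run the saved test plan in non-GUI mode, then summarise the
# results. File names are assumptions; the CSV columns ('elapsed', 'success')
# are the usual defaults, but check your jmeter.properties.
import csv
import statistics
import subprocess

subprocess.run(
    ["jmeter", "-n", "-t", "home_page_test.jmx", "-l", "results.jtl"],
    check=True,
)

elapsed = []
failures = 0
with open("results.jtl", newline="") as f:
    for row in csv.DictReader(f):
        elapsed.append(int(row["elapsed"]))            # response time in ms
        if row["success"].lower() != "true":
            failures += 1

print(f"samples: {len(elapsed)}, failures: {failures}")
print(f"mean: {statistics.mean(elapsed):.0f} ms, "
      f"90th percentile: {sorted(elapsed)[int(len(elapsed) * 0.9)]} ms")
```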

I'll cover the techniques to run Jmeter tests in an automated fashion, with automated analysis, in a future blog post.

Conclusion


We've now got a repeatable way to measure our chosen performance metric(s).
In our imaginary case, we'll pretend that the test showed the home page load time averaging 15 seconds across our 10-minute test, and the Response Times over Time graph starting out at approximately 5 seconds and climbing to 15 seconds within a few minutes of the test starting.

In this example, this shows that the load our test generates is part of the cause, and not just an effect. If we had run the test and found that response times were acceptable, then our test would only have been measuring an effect.

Measuring only the effect is not a bad thing, as it gives us acceptance criteria for fixing the problem, but it means we would have to look for the cause later on, after we've instrumented the environment.


Next time, I'll cover how to approach instrumenting the environment, to collect the data we need to understand what's happening inside it.





Saturday 19 January 2013

Performance Analysis - Part 2: Choosing a test metric


In the previous post, I discussed how to start the process of gathering all the information together that you need to understand the system you're troubleshooting.

Now, let's try and understand how to select a metric for measuring the state of the problem you need to troubleshoot.

Describing the Problem:

Unfortunately, the first contact with a new problem usually starts with the wonderfully unhelpful statement: "My XYZ site/widget/job is slow"....

SysAdmins/Support people will recognise this as similar to "My computer doesn't work". It doesn't tell you much about the problem!


So the point of this exercise is to:

  1. Clarify the problem
  2. Verify the problem and record it in progress
  3. Narrow down exactly how to measure the problem



Clarify the problem


If we take the very simple LAMP stack environment from my previous post as an example....


On closer questioning, the clients report that the web-site home page suddenly becomes very slow, that this happens at random intervals, and that the site quickly becomes unusable.

Verify the problem and record it in progress

In this case, it's time to break out tcpdump and Wireshark. I'll delve into the details of how to use them to track changes in response time in another post; this post on another blog seems relevant.

Interrogating our imaginary packet capture, taken in front of the loadbalancer, we see a large number of incoming HTTP requests. They consist of a few static files and a dynamic request. The response times for the static files seem a little variable, but the dynamic requests vary hugely in response time. Eventually the site dies. Home page requests increase to approximately 20 per second at peak.

You could also verify this by looking at loadbalancer metrics (if it's smart enough), or by instrumenting the Apache servers to record request times and analysing the logs (only if you're sure the loadbalancer is not part of the problem!). I'll describe how to start instrumenting Apache in a later post.
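For example, if you add %D (the time taken to serve the request, in microseconds) to your Apache LogFormat, a few lines of scripting will pull out the home page request rate and response times. A rough sketch, where the log path and field positions are assumptions:

```python
# Rough sketch: home page request rate and response times from an Apache
# access log whose LogFormat ends with %D (microseconds to serve the request).
# The log path and field positions are assumptions - adjust for your format.
from collections import Counter

requests_per_second = Counter()
response_times_ms = []

with open("/var/log/apache2/access.log") as log:
    for line in log:
        parts = line.split()
        if len(parts) < 7 or parts[6] != "/" or not parts[-1].isdigit():
            continue                       # only "GET / ..." lines with a %D field
        second = parts[3].lstrip("[")      # e.g. 25/Jan/2013:09:15:01
        requests_per_second[second] += 1
        response_times_ms.append(int(parts[-1]) / 1000.0)

if response_times_ms:
    print("peak home page requests/sec:", max(requests_per_second.values()))
    print("worst home page response time: %.1f ms" % max(response_times_ms))
```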

Narrow down exactly how to measure the problem

The accepted standard for the time a user will wait before getting bored is about 10 seconds.

If we can sustain the load seen at peak (20 home page requests per second), and keep the full request time under 10 seconds, then our problem has improved.
If the response time is worse, or very irregular, then we're moving in the wrong direction.

We now have a metric to measure the problem by.
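As a quick manual sanity check of that metric (a sketch with a hypothetical URL - note it only times the base HTML document, not the embedded resources), you can time a home page request against the 10-second budget:

```python
# Sketch: time a single home page request against the 10-second budget.
# The URL is hypothetical - point it at your own system.
import time
import urllib.request

URL = "http://test.example.com/"
BUDGET_SECONDS = 10

start = time.time()
with urllib.request.urlopen(URL, timeout=30) as resp:
    resp.read()
elapsed = time.time() - start

print(f"Home page took {elapsed:.2f}s "
      f"({'within budget' if elapsed <= BUDGET_SECONDS else 'too slow'})")
```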

Cause or Effect?

The metric we have chosen may not represent the actual problem; it may just be an effect of it. At this stage, that doesn't matter - we're just after a metric that measures the effect.


Next time:

 In the next post I'll look at how to use Jmeter to try and replicate the problem, and test for improvements.







Tuesday 15 January 2013

Performance Analysis - Part 1: Understanding interactions inside IT environments


My first post was about the general process around doing performance analysis in a scientific fashion.

Now I'm going to dive into a process I use to understand large, interconnected IT systems. Having a good mental model of how a system interacts with its components is essential. It's very difficult to form useful hypotheses about problems if you don't have an idea of the data flows and connection interactions involved.

Bear with me, this is a long one, but fear not, there are diagrams!

Needless to say, experience with the software and hardware you're investigating is pretty essential. It's difficult to know what "normal" looks like, if you haven't seen it before!

I always start these processes with a diagram - even if it's in my head, or on a whiteboard.

Now for the diagrams!




Below is a hybrid physical/logical diagram of a very simple LAMP system.



















Here I'm adding some simple information about system specifications - nothing too technical ;-)










I'm now going to add TCP connection information to the diagram.

  • The clients' TCP connections terminate on the loadbalancer.
  • The loadbalancer talks to the PHP servers over a separate TCP connection.
  • The PHP servers talk TCP to MySQL.


This is important, because it marks the boundaries between potentially independent moving parts of the system, as well as reminding us that there may be some potential for optimising connection overheads and the network stack.

This diagram assumes that there's either no NAT/firewall, or that the loadbalancer is handling it itself. If we ran an independent firewall, or had Layer 3/4 loadbalancing, the TCP connection paths would look a little different.




Here's some information about the thread pools available on different parts of the system.

This shows a reasonably well matched system in thread-pool terms (for an arbitrary web site workload).

I have in the past encountered some very mismatched configurations, but we'll talk about the effects of getting thread pools wrong in another blog post.










And lastly, any application-specific information that might be relevant.

This may come from knowledge of the business, from talking to developers, or from direct knowledge of important settings in the applications and infrastructure used.











Still here?

At this point, you should be able to take an imaginary request from a client and trace the interactions all the way through the system. Bear in mind that your knowledge of the system is far from complete yet, and parts of it may be wrong! This is just a starting point.

Next time I'll walk you through choosing a metric to use as a basis for a performance test. Check back on http://www.jmips.co.uk/blog soon.


The basics of infrastructure/app performance troubleshooting.


Hopefully this will be the first in a series of posts trying to demystify the black art of performance testing and analysis on Linux based infrastructure.

There seems to be a lot of confusion around the processes involved, and how you get results. Note, I'm not saying this is the only way to do it, but these methods do get results!

I'm going to go over the basic steps involved first, and then dive into each step in detail in later posts.



  1. Using all available sources of information, create a mental model of how the system you're testing operates, and how the components interact with each other.
  2. Find a metric that shows the problem, and hence shows any improvements or regressions after changes.
  3. Develop performance tests (synthetic or realistic) that reliably demonstrate this test metric.
  4. Instrument the infrastructure to collect very high resolution data for various infrastructure metrics, across the network, OS, and applications.
  5. Use the information and data collected, and your test results, to make a hypothesis about the source of the problem, or bottleneck.
  6. Make a change, to infrastructure or application, to test your hypothesis.
  7. Re-run the tests, noting differences in performance test results, and changes in infrastructure metrics.
  8. Use these test results to adjust your understanding of the system, and where the problem is.
  9. Repeat steps 5-8

Overall, the most important thing to remember is to be scientific!

DO:
  • Be ruthless in making sure that the data supports the conclusions you are drawing.
  • Use a configuration management tool like Puppet to record all the changes you're making.

DON'T:
  • Leave "test changes" in the system, if they've not helped.
  • Test systems in Production use, if you can possibly avoid it.