Hopefully this will be the first in a series of posts demystifying the black art of performance testing and analysis on Linux-based infrastructure.
There seems to be a lot of confusion around the processes involved and how you get results. Note: I'm not saying this is the only way to do it, but these methods do get results!
I'm going to go over the basic steps involved first, and then dive into each step in detail in later posts.
- Using all available sources of information, create a mental model of how the system you're testing operates, and how the components interact with each other.
- Find a metric that shows the problem, and hence shows any improvements or regressions after changes.
- Develop performance tests (synthetic or realistic) that reliably reproduce the problem and move this metric.
- Instrument the infrastructure to collect very high-resolution data for metrics across the network, OS, and applications.
- Use the information and data collected, together with your test results, to form a hypothesis about the source of the problem or bottleneck.
- Make a change, to infrastructure or application, to test your hypothesis.
- Re-run the tests, noting differences in performance test results and changes in infrastructure metrics.
- Use these test results to adjust your understanding of the system, and where the problem is.
- Repeat steps 5-8
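As a rough illustration of the instrumentation step above, here's a minimal Python sketch that samples OS-level CPU counters from `/proc/stat` at one-second resolution while a test runs. This is an assumption-laden toy, not a real monitoring setup: it only reads one `/proc` file, and a real harness would also record network and application counters with timestamps.

```python
# Minimal sketch (hypothetical): sample cumulative CPU counters from
# /proc/stat at a fixed interval, so test results can later be lined up
# against what the OS was doing. Linux-only, since it reads /proc.
import time

def sample_cpu():
    """Return cumulative CPU jiffies (user, nice, system, idle) from /proc/stat."""
    with open("/proc/stat") as f:
        fields = f.readline().split()   # first line is the aggregate "cpu" row
    return [int(x) for x in fields[1:5]]

def collect(n_samples, interval=1.0):
    """Take one timestamped sample per interval; a real harness would also
    collect network and application metrics and write them to disk."""
    samples = []
    for _ in range(n_samples):
        samples.append((time.time(), sample_cpu()))
        time.sleep(interval)
    return samples

data = collect(2)
print(len(data))  # one sample per second collected
```

Because the counters are cumulative, subtracting consecutive samples gives you per-interval usage, which is what you'd actually plot against your test timeline.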
Overall, the most important thing to remember is to be scientific!
- Be ruthless in making sure that the data supports the conclusions you are drawing.
- Use a configuration management tool like Puppet to record all the changes you're making.
- Don't leave "test changes" in the system if they've not helped.
- Don't test systems in production use, if you can possibly avoid it.
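One way to be ruthless about whether the data supports your conclusions is to check your before/after test results against run-to-run noise before declaring victory. The sketch below is a deliberately crude illustration: the latency numbers are invented, and the two-standard-deviations threshold is an arbitrary rule of thumb, not a substitute for a proper statistical test.

```python
# Sketch: only accept a change as an improvement if the shift in the mean
# is larger than the baseline's run-to-run noise. Sample data is made up.
import statistics

baseline = [102.1, 99.8, 101.5, 100.9, 103.2]   # ms per request, before the change
candidate = [91.0, 92.4, 90.2, 93.1, 91.8]      # ms per request, after the change

def improved(before, after, noise_factor=2.0):
    """Treat the change as an improvement only if the mean dropped by more
    than noise_factor standard deviations of the baseline runs."""
    noise = statistics.stdev(before)
    return statistics.mean(before) - statistics.mean(after) > noise_factor * noise

print(improved(baseline, candidate))  # → True
```

Running several iterations per configuration, as this assumes, is itself part of being scientific: a single before-and-after pair tells you almost nothing about whether a change really helped.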