Monday, October 24, 2011

Webinar: Load Testing Meets Data Analytics

This Thursday, October 27 at 10 am PDT*, I'll be participating in a webinar sponsored by SOASTA, Inc. They make a new breed of load-testing product called CloudTest®, which, despite its name, is not restricted to load testing cloud-based apps, although it can do that too.

CloudTest facilitates load testing any web-based app currently deployed in your data center, at scale and in less time. The "at scale" part refers to the fact that thousands of virtual users can be provisioned dynamically on cloud-based servers, and the "less time" part refers to constructing test scenarios more rapidly than is possible with conventional scripting. As a result, test coverage can approach 100%, which also means a lot of test data can be generated. Such broad coverage can produce a torrent of test data, and that data must be transformed into information before performance and scalability issues become visible. Enter CloudTest data visualization and real-time analytics.

In that vein, I'll be presenting three case studies where better test-data analytics would have been very useful:

  1. CASE STUDY: Unseen Error Rates. The inability to visualize RDBMS transaction errors in the SUT meant that the reported throughput data were overly optimistic: they were actually composed of 70% committed transactions plus 30% failed transactions, so the true goodput was only 70% of the reported rate (see the short calculation after this list). None of the performance test engineers was aware of this problem during 18 months of testing (until I pointed it out, over the phone!). Real-time analytics could have avoided a lot of useless testing.
  2. CASE STUDY: Local vs. Global Test Data. Like ships passing in the night, QA and WebOps sat in different organizations and never talked to each other, so no one noticed the glaring discrepancy between the user-latency data measured externally by services like Keynote and Gomez and the load-testing SLA targets measured internally with LoadRunner, JMeter, and the like during application development. CloudTest incorporates both views.
  3. CASE STUDY: More Testing is Better. The early releases of memcached were thread-limited, which meant it could still scale out but not scale up on next-generation multicore servers: a situation tantamount to considerable wasted capacity, defeating the purpose of cheap key-value caching. Because CloudTest can run more tests in the same amount of time, the resulting data can be resampled for more accurate statistical analysis. I'll demonstrate how the USL nonlinear regression model can be applied to predict improved scalability using the memcached load tests† (sketched in code below).
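
To make the arithmetic in case study 1 concrete, here is a minimal sketch in Python, using hypothetical numbers rather than the actual test results, of how error-corrected goodput falls out of the raw reported throughput:

    # Hypothetical numbers illustrating Case Study 1: raw throughput looks
    # healthy until the failed transactions are separated out.
    reported_tps = 1000.0                    # transactions/sec as reported
    error_rate = 0.30                        # 30% were failed transactions
    goodput_tps = reported_tps * (1.0 - error_rate)
    print(f"Effective goodput: {goodput_tps:.0f} tps")  # 700 tps, not 1000
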
Because data and information are not the same thing, these examples will also show you how to transform your current test data into more useful information.
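
As a preview of the case study 3 demo, here is a minimal sketch of fitting the USL model, C(N) = N / (1 + α(N − 1) + βN(N − 1)), to load-test throughput using Python's scipy. The data points are made-up placeholders, not the actual memcached measurements from the Velocity paper:

    # Minimal USL fit via nonlinear regression (scipy.optimize.curve_fit).
    # The (N, X) samples below are synthetic placeholders.
    import numpy as np
    from scipy.optimize import curve_fit

    def usl(N, alpha, beta):
        # Universal Scalability Law: relative capacity at concurrency N
        return N / (1 + alpha * (N - 1) + beta * N * (N - 1))

    N = np.array([1, 2, 4, 8, 16, 32])                  # concurrency levels
    X = np.array([955, 1878, 3548, 6531, 9897, 10654])  # throughput (ops/sec)

    X1 = X[0]  # single-user throughput normalizes the measurements
    (alpha, beta), _ = curve_fit(usl, N, X / X1, p0=[0.01, 0.001], bounds=(0, 1))
    print(f"contention alpha = {alpha:.4f}, coherency beta = {beta:.6f}")

    # The USL predicts peak scalability near Nmax = sqrt((1 - alpha) / beta)
    print(f"predicted peak concurrency ~ {np.sqrt((1 - alpha) / beta):.0f}")

With more test runs per concurrency level, which is the resampling referred to in case study 3, the confidence intervals on α and β tighten and the peak prediction becomes more trustworthy.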


* This webinar was recorded. Also check out the online Q&A section.
† See N. Gunther, S. Subramanyam and S. Parvu, "Hidden Scalability Gotchas in Memcached and Friends," Velocity Conference, 2010.

1 comment:

Neil Gunther said...

It seems to have been lost in the shuffle, but here, finally, is the link to the webinar recording.