London 2013: The New Battlefield for Testing?

Both single-user performance and multi-user performance tests are required

Bonjour foxes!

I was at the Velocity Conference in London last week. Of course, fish & chips is still a must-eat, a double-decker is far more than a bus, and pea shooting is, as always, so... British!

But beyond these classic London cultural elements, at Velocity I heard more and more about performance. I saw many tracks related to rendering, browser optimization, even the neurological impact of slowness, and so on. But it was always about one user at a time... only. Not so much as a word was uttered about the challenges of multiple users, or about back-end and infrastructure-related tests.

Does this mean back-end load testing is over and I'm the only one not aware of it? Did an international committee sign a declaration ending all server-side problems? Did server problems melt away in the sun behind the clouds? Am I to believe that the experience of one user will be the same with 10, or 1 million, users?

When I shared my concerns with others, I quickly discovered two camps. The first stands for single-user performance; the second for multi-user performance. Is a new war in the making? Should we assemble our armies of keyboards and prepare for the next battle? I would like to be the peacekeeper. So let's forget Waterloo and Trafalgar, at least for a few minutes...

To be perfectly honest, I work for a performance testing company, Neotys, which focuses on multi-user performance testing, but I do recognize that single-user performance testing is critical too.

Let me explain this middle ground to you.

"I want it all. I want it now."
I use many apps and mobile websites on my devices and my laptop for both business and personal purposes. Some occasionally have issues.

As a consumer, I don't care what is causing the problem. As a consumer, I just want to consume. Any time, any place. As Freddie Mercury (the UK singer) sang, "I want it all. I want it now." That's the promise of "mobility," isn't it?

I'm sure you have all been in similar situations to these:

  • The application was brilliant over Wi-Fi at home. I got my news from a great French newspaper; their articles were excellent and I liked how they had gone digital. However, the news was not so "fresh" over 3G. And when I was on a train with plenty of time to read my favorite newspaper on my mobile, the application failed... I was on 2G. Later, when I understood how much data had to be transferred, I just uninstalled it. Back to paper.
  • I saw an advert for a website with very good prices on shoes. I shouldn't, but I love shoes. That's my Carrie Bradshaw side. I spent two minutes on the site. Slow... so slow that it drove me to ask: "Will a company unable to build a website that handles multiple users at the same time be able to properly handle my £500 order (I love British shoes)?" I never bought from them. I don't even remember their name.
  • Another application was wonderful, nice, and pretty. It was always up to date, with the latest news pushed to me. In one word: fast. Fast enough to suck the life out of my battery in just four hours. In another word: trash.

Your users are not experiencing your app in a vacuum, so you cannot avoid end-to-end load testing. The front end and the back end are constantly interacting. Both single-user performance tests and multi-user performance tests are required, and the two must be optimized in concert.
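
To make that concrete, here is a minimal sketch, in Python with only the standard library, of what I mean by testing both sides: it times one request for a single user, then fires the same request from many simulated users at once. The URL and the user count are placeholders, and a real load test (whatever tool you use) would go much further.

    # Minimal sketch: single-user response time vs. response times under
    # concurrent simulated users. URL and user count are placeholders.
    import time
    import urllib.request
    from urllib.error import HTTPError
    from concurrent.futures import ThreadPoolExecutor

    URL = "https://example.com/"   # hypothetical target
    CONCURRENT_USERS = 50          # hypothetical load level

    def fetch(url):
        """Return (status_code, elapsed_seconds) for one request."""
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(url, timeout=30) as resp:
                resp.read()
                status = resp.status
        except HTTPError as err:   # 4xx/5xx responses still count as measurements
            status = err.code
        return status, time.perf_counter() - start

    # Single-user baseline
    status, elapsed = fetch(URL)
    print(f"single user: HTTP {status} in {elapsed:.2f}s")

    # Multi-user run: the same request issued by many threads at once
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        results = list(pool.map(fetch, [URL] * CONCURRENT_USERS))

    times = [t for _, t in results]
    errors = sum(1 for s, _ in results if s >= 500)
    print(f"{CONCURRENT_USERS} users: avg {sum(times) / len(times):.2f}s, "
          f"max {max(times):.2f}s, 5xx errors: {errors}")

If the multi-user numbers explode, or 5xx errors start to appear, no amount of front-end polish will save the experience.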

"Revolution! All right, all right" The Beatles are back?
Every year since I started working in IT around '99, I have heard that a new technology is going to start a revolution. I know it is true... until next year.

But I have to admit that this year I began to see the first projects using technologies that drastically change the principles of the network dialog between a browser and a server. I am seeing our first SPDY projects. Does it change server capacity? It changes everything! Have a look at the measurements I performed and documented in this article.
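
To illustrate what a change in the network dialog can do, here is a rough stand-in sketch in Python. It is not SPDY itself (that needs protocol support on both ends); it simply contrasts opening a new connection per request with reusing one keep-alive connection, which is the kind of shift that SPDY-style multiplexing pushes much further. The host, path, and request count are placeholders.

    # Stand-in sketch: a new TCP connection per request vs. one reused
    # keep-alive connection. Host, path, and N are placeholders.
    import http.client
    import time

    HOST, PATH, N = "example.com", "/", 20

    # One new connection per request (the old-style dialog)
    start = time.perf_counter()
    for _ in range(N):
        conn = http.client.HTTPSConnection(HOST, timeout=30)
        conn.request("GET", PATH)
        conn.getresponse().read()
        conn.close()
    print(f"{N} requests, new connection each time: {time.perf_counter() - start:.2f}s")

    # One persistent connection reused for every request
    start = time.perf_counter()
    conn = http.client.HTTPSConnection(HOST, timeout=30)
    for _ in range(N):
        conn.request("GET", PATH)
        conn.getresponse().read()   # read fully before reusing the connection
    conn.close()
    print(f"{N} requests, one reused connection: {time.perf_counter() - start:.2f}s")

Fewer connection setups means less waiting on the client side, but it also means a different connection profile for the server to absorb.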

I am also seeing our first WebSocket applications, based on the Kaazing stack. I would never have imagined a direct socket connection between a browser and a server. It changes all the principles of network resource allocation. For the server, it is the end of polling requests and definitively the end of the stateless principle (and the resource savings) of HTTP. Your server will have to handle this.
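
As a thought experiment, here is a minimal Python sketch of the idea, using a plain asyncio TCP server as a stand-in for the WebSocket protocol itself: every client keeps one long-lived connection open, and the server pushes updates to all of them instead of answering polling requests. The host, port, and push interval are made up.

    # Stand-in sketch for the push model behind WebSockets: long-lived
    # connections, server-initiated updates, no client polling.
    import asyncio

    clients = set()   # one StreamWriter per connected client, kept for the connection's lifetime

    async def handle_client(reader, writer):
        # Register the client, then simply wait until it disconnects.
        clients.add(writer)
        try:
            await reader.read()   # returns only when the client closes the connection
        finally:
            clients.discard(writer)
            writer.close()

    async def push_updates():
        # The server pushes to every open connection; clients never ask.
        n = 0
        while True:
            n += 1
            for writer in list(clients):
                try:
                    writer.write(f"update {n}\n".encode())
                    await writer.drain()   # slow clients hold server resources right here
                except ConnectionError:
                    clients.discard(writer)
            await asyncio.sleep(1)

    async def main():
        server = await asyncio.start_server(handle_client, "127.0.0.1", 8888)
        async with server:
            await asyncio.gather(server.serve_forever(), push_updates())

    asyncio.run(main())

Every connected browser is now a resource the server keeps holding open, which is exactly what your capacity planning has to account for.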

I studied relational algebra and then its offspring, SQL. I spent hours normalizing tables and mapping objects to tables. I'm almost certain you did too. This year, I saw a very big project in EMEA... without a relational database... and it was not a mainframe sequential file ;>. Everything was stored in a NoSQL database called MongoDB. How could it work? It seems to work well - surprisingly well to a relational-mindset guy like me. All the assumptions we hold about numbers of users, numbers of tables, index ratios, tables in memory, and so on have to be re-analyzed. Everything is new. There's no obvious index to add when things get slow.
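
For those who, like me, think in tables, here is a minimal sketch assuming a local MongoDB instance and the pymongo driver; this is not the EMEA project's actual code, and the database, collection, and field names are invented. One nested document holds what a relational design would spread across several normalized tables.

    # Minimal pymongo sketch: one nested document instead of normalized tables.
    # Connection string, database, collection, and fields are all made up.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    orders = client["shop"]["orders"]

    # One document carries the customer, the line items, and the status.
    orders.insert_one({
        "customer": {"name": "Carrie", "country": "UK"},
        "items": [{"sku": "SHOE-42", "qty": 1, "price": 500}],
        "status": "paid",
    })

    # Query the nested structure directly -- no JOINs, no foreign keys.
    for order in orders.find({"customer.country": "UK", "status": "paid"}):
        print(order["_id"], sum(i["price"] * i["qty"] for i in order["items"]))

    # Indexing still matters, but it lives on document paths, not table columns.
    orders.create_index("customer.country")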

Don't think that these technologies, which come in wonderful packages and are quite easy to integrate, have no impact on the server. They do. On top of this, the most critical issue is that no one has much experience with them yet, so you will have to produce your own assessments.

How will your 15 minutes go? Andy is excited...

Even the US President can have a website with problems (even with the NSA's supercomputers helping to handle the load... joke).

Certainly, you may not be running an operation the size of the US Department of Health and Human Services, but that sheer size is exactly why the US government will survive its website crash. Victoria's Secret survived after its website crashed during the Super Bowl. But your company, which has spent all its energy on a new application or a new version of its website, won't. A $1M or $10M company can be killed by such an incident.

Andy Warhol said, "In the future, everyone will be world-famous for 15 minutes." Make sure your 15 minutes aren't cut short after one minute by a 503 error.

To be sure that yours will go well, you must test: single-user performance and multi-user performance. The battle to shine for those 15 minutes is the only one worth fighting. Don't you think?

More Stories By Hervé Servy

Hervé Servy is a Senior Performance Engineer at Neotys. He spent 10 years working in pre-sales and marketing for IBM Rational and Microsoft in France and the Middle East. Over the past three years, as a personal project, Hervé founded a nonprofit organization in the health 2.0 area. If that isn't techie enough, Hervé was also born on the very same day Apple Computer was founded.
