Cloud computing provides “computing” as a service, and it has transformed IT. The software and hardware landscapes have changed rapidly in recent years, driven by the requirements of cloud infrastructure and its growing popularity. Cloud computing offers greater agility for both software applications and hardware infrastructure.
Performance testing, load testing, tuning and monitoring in the cloud have undoubtedly generated many new demands on IT staff. In the cloud, the term “performance” generally refers to an application’s response time, throughput and resource utilization, among other metrics.
The single most important benefit of cloud computing is application scalability: applications deployed in a cloud environment can harness the power of thousands of computers whenever needed, so application performance can improve significantly. Better performance, in turn, tends to mean greater customer loyalty and higher revenue.
To ensure that the testing program is comprehensive, consider these seven best practices:
Best Practice #1: Measure Performance from the User’s Perspective, not the Server's
Performance is not merely a question of load times and application responsiveness. The more important question is: how satisfied are my users? Performance may mean one thing to you but another thing to your user. If you are simply measuring load times, you're missing the big picture. Your users are waiting for the app to do something useful rather than waiting for pages to load with stopwatches in hand.
So how quickly can users get to useful data? To find out, you need to include client processing time in your measure of load times. It is easy to cheat on a performance test by pushing processing work from the server to the client. From the server standpoint, this makes pages appear to load more quickly. However, forcing the client to do extra processing may actually make the load time longer for users.
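To make the point concrete, here is a minimal Python sketch of measuring user-perceived time rather than server time alone. The function names and the sleep-based simulated workloads are hypothetical stand-ins for a real HTTP request and real client-side rendering; the point is where the stopwatch stops.

```python
import time

def fetch_from_server():
    # Stand-in for a real HTTP request; replace with your client library.
    time.sleep(0.05)  # simulated 50 ms server response time
    return "<html>...</html>"

def render_on_client(payload):
    # Stand-in for client-side parsing/rendering work pushed to the browser.
    time.sleep(0.08)  # simulated 80 ms of client processing
    return len(payload)

start = time.perf_counter()
payload = fetch_from_server()
server_done = time.perf_counter()
render_on_client(payload)
end = time.perf_counter()

server_time = server_done - start  # what a server-side test would report
total_time = end - start           # what the user actually waits for
```

In this sketch the server-side number looks fast, but the user waits for the full `total_time`. A test that only records `server_time` would reward pushing work onto the client.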
Best Practice #2: Make Performance Testing a Part of Agile Development
All too often, performance testing is isolated and left until the end of a development project. At that point, it's probably too late to fix issues easily. Any problems discovered are likely to significantly delay your project by throwing development into fire-fighting mode.
To avoid this problem, performance testing should be made part of the Agile development process. That way you can find problems and fix them quickly.
Specifically, testing must be integrated into development: performance engineering should be represented in daily scrum meetings and be responsible for measuring and tracking performance as the code is developed, within the same development cycle. Waiting until the end of the process is too late.
Best Practice #3: Make Performance Tests Realistic
Throwing thousands or even millions of clients at a server cluster may stress-test your environment, but it is not going to accurately measure how your application or site performs in a real-world scenario. There are two major issues you need to consider when setting up your testing environment.
First, the scenarios must reflect the variety of devices and client environments being used to access the system. Traffic is likely to arrive from hundreds of different types of mobile devices, web browsers and operating systems, and the test load needs to account for that.
Also, this load is far from predictable, so the test needs to be built with randomness and variability in mind, mixing up the device and client environment load on the fly. By continuously varying the environment and the type of data that is passed, the development organization faces fewer surprises down the road after the application is put into production.
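A minimal sketch of this idea in Python: each virtual user gets a randomly chosen device profile and think time, so the mix varies continuously rather than replaying one fixed client. The user-agent strings and the think-time range are illustrative placeholders; a real test would draw them from your actual traffic statistics.

```python
import random

# Hypothetical pool of client environments; extend to match your real traffic mix.
USER_AGENTS = [
    "Mozilla/5.0 (iPhone; CPU iPhone OS 17_0 like Mac OS X)",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/120.0",
    "Mozilla/5.0 (Linux; Android 14; Pixel 8) Mobile",
]

def next_virtual_user():
    """Pick a device/browser profile and a think time at random for each virtual user."""
    return {
        "user_agent": random.choice(USER_AGENTS),
        "think_time": random.uniform(0.5, 5.0),  # seconds between simulated actions
    }
```

Drawing a fresh profile per virtual user keeps the device and timing mix varying on the fly, which is what surfaces environment-specific problems before production.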
Second, the simulation can't start from a zero-load situation. Many test plans start from a base, boot-up situation, then begin adding clients slowly until the desired load is reached. This simply isn't realistic and provides the testing engineer an inaccurate picture of system load. As applications are updated and rolled out, the systems they're running on will already be under load. That load may change over time, but it won't fall to zero and slowly build back up.
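One way to express a non-zero starting load is a profile function that begins at a baseline and ramps toward peak, never passing through zero. The numbers below (200 baseline, 1000 peak, 10 ramp steps) are hypothetical; substitute values observed from your production system.

```python
def load_profile(step, baseline=200, peak=1000, ramp_steps=10):
    """Return the number of concurrent virtual users at a given step.

    Starts at a non-zero baseline, because a production system is never idle
    when a new build is rolled out, then ramps linearly toward peak.
    """
    if step >= ramp_steps:
        return peak
    return baseline + (peak - baseline) * step // ramp_steps
```

Feeding this profile to the load generator means the very first measurement is taken against a system that is already working, which matches what a rollout actually encounters.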
Best Practice #4: Develop Performance Tests Specifically for Mobile Applications
The increasing adoption of smartphones, tablets and other mobile devices has fueled the growth of mobile applications in recent years. Mobile devices have become the primary medium of interaction for consumers and businesses worldwide, and mobile applications drive those interactions: to a large extent, they put business operations in the hands of consumers and influence business decisions. So what makes a mobile application compelling enough to shape consumer behavior and keep users engaged with your brand and products? A strong mobile application development strategy is the foundation of any successful mobile application, but one key component of that development ensures the application meets customer expectations and business goals: the mobile application testing strategy. Mobile application testing is the quality gate your mobile apps must pass before they reach their target devices or app stores and become available to the public.
Best Practice #5: Develop Test Environment Setup Procedures
If the browser cache and cookies are not cleared before recording a user scenario, the browser serves cached data and cookies to satisfy client requests instead of sending data to and receiving responses from the server. It is therefore very important to clear the browser cache and cookies before recording traffic.
Start recording a new scenario from a freshly launched browser. If you begin recording after you have already connected to the tested web server and opened a few pages, playback of the scenario will fail, because the recorded traffic will not reproduce the authentication procedure and the tested web server will ignore the simulated requests.
Parameterize scenarios to simulate more realistic loads on the server. Scenarios can be parameterized by replacing recorded parameters in the HTTP requests with variable values, which lets virtual users send user-specific data to the server. This lets you add dynamic behaviors to your scenarios as if they were run by a group of unique human users. Note: Before you parameterize a scenario, carefully explore its HTTP requests and HTTP responses to better understand what data is being transferred to and received from the server.
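A minimal sketch of parameterization: a recorded request body contains the literal values captured during recording, and a per-virtual-user substitution replaces them before playback. The request body, field names and `vuser` naming scheme here are hypothetical; a real tool would substitute values from a data table or generator.

```python
# Request body as captured during recording, with the original user's literal values.
RECORDED_BODY = "username=alice&session=ABC123&query=shoes"

def parameterize(body, user_id):
    """Replace the recorded username with a per-virtual-user value.

    Hypothetical scheme: virtual user N sends 'vuserN'. Session tokens would
    normally be handled the same way, using values issued by the server at login.
    """
    return body.replace("username=alice", f"username=vuser{user_id}")
```

With this in place, each virtual user sends distinct data, so the server exercises its real per-user code paths (lookups, session handling) instead of serving one repeated request.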
Verify user scenarios. Before creating tests on the basis of a recorded scenario, make sure that the scenario is executed successfully for one virtual user. This can help you identify bottlenecks of the scenario and eliminate problems that are unrelated to the number of virtual users and additional testing conditions.
Best Practice #6: Accurately Monitor and Analyze Test Results
Add validation logic to the script to make sure the results you get under load are consistent, valid and expected. You can validate that a request has completed correctly based on many things, including page (HTML) content, content size, page title and end page time, among others. Make sure you test the application in a production-like environment, including factors like Secure Sockets Layer (SSL), single sign-on (SSO), load balancing and firewalls. This is essential for simulating accurate end-to-end behavior.
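The validation idea above can be sketched in a few lines of Python: check the page title and the response size against expectations and collect any mismatches. The function name, the title-matching approach and the 500-byte minimum are illustrative assumptions, not a specific tool's API.

```python
import re

def validate_response(html, expected_title, min_size=500):
    """Check a page returned under load: correct title and a plausible size.

    Returns a list of error strings; an empty list means the page passed.
    """
    errors = []
    match = re.search(r"<title>(.*?)</title>", html, re.S)
    if not match or match.group(1).strip() != expected_title:
        errors.append("unexpected page title")
    if len(html) < min_size:
        errors.append("response smaller than expected")
    return errors
```

Checks like these catch the common failure mode where a server under load returns a fast but wrong response (an error page or truncated body) that a pure timing measurement would count as a success.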
Avoid the temptation to extrapolate results – one server with 100 users may not run like two servers with 200 users. Test the equivalent of your full production stack. Collect statistics from your back-end environment, including web, application and database servers, to diagnose load impact on all application tiers. Analyze server-side statistics side-by-side with client-side statistics and look for any discrepancies that might indicate a problem. To communicate test results effectively, generate reports for each team member in their preferred format.
Best Practice #7: Schedule Test Scenarios Periodically
Test often. After completing the initial load test, use the results as a baseline and look for regression in performance – a seemingly insignificant change can cause unexpected performance issues. The easiest way to do this is through automation. It’s important to check all of the application parameters, including the number of connections, response time, transaction times and throughput. There are many reasons why an application can break under load and the actual culprit is not always easy to predict.
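An automated regression check against the baseline can be as simple as the sketch below: compare each metric to the baseline and flag anything that has degraded beyond a tolerance. The metric names and the 10% tolerance are hypothetical choices; pick ones that match your service-level targets.

```python
def regressed(baseline, current, tolerance=0.10):
    """Flag metrics that are worse than baseline by more than `tolerance`.

    Assumes all metrics are 'lower is better' (e.g. response time in ms);
    a metric missing from the current run is treated as a regression.
    """
    return {
        name: current.get(name, float("inf"))
        for name in baseline
        if current.get(name, float("inf")) > baseline[name] * (1 + tolerance)
    }
```

Run from a scheduler or CI pipeline after each load test, a check like this turns the baseline into an automatic gate, so a seemingly insignificant change that degrades performance is caught in the same cycle that introduced it.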
In conclusion, while the list above is not an exhaustive catalog of cloud-testing activities, it is a viable starting point for IT staff planning how to successfully test real-world applications and services deployed to the cloud.