Tuesday, 15 April 2014

Wasted Time in LoadRunner

Wasted Time:
Where do we see it? If your guess is the Replay log, yes… we get it in the Replay Log of a VuGen script, as shown below.
Action.c(21): Notify: Transaction "Home" ended with "Pass" status (Duration: 20.1061 Wasted Time: 0.0010)
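Since Analysis subtracts wasted time (as HP explains below), this transaction would be reported there as 20.1061 - 0.0010 = 20.1051 seconds.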


Here is what HP says about Wasted Time.


Wasted time is time spent on activities whose purpose is to support test analysis, but would never be performed by a browser user, for example, time spent keeping transaction statistics for later reporting. Wasted time is calculated internally by LoadRunner. Your script can also add wasted time with lr_wasted_time. 


Sometimes, you may enter activities in a script that you do not want reported as part of the transaction statistics. Generally, these are activities related to record keeping, logging, or custom analysis. If you enhance the script with steps whose durations should not be included in the test statistics, you can track the time used by these steps with lr_start_timer and lr_end_timer. Then, the function lr_wasted_time is used to add this user-determined time to the internally generated wasted time.


You can retrieve the total wasted time (both that generated by LoadRunner automatically and that added with lr_wasted_time) with the function lr_get_transaction_wasted_time, or with lr_get_trans_instance_wasted_time, as appropriate.
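Here is a minimal sketch (not from HP's documentation) of how these functions fit together; the transaction name and the bookkeeping step are only illustrative:

Action()
{
    merc_timer_handle_t timer;
    double bookkeeping;   /* seconds spent on non-user work */

    lr_start_transaction("Home");

    web_url("Home", "URL=http://example.com/", LAST);   /* hypothetical request */

    timer = lr_start_timer();              /* time the record-keeping step */
    lr_output_message("custom logging a real user would never do");
    bookkeeping = lr_end_timer(timer);     /* elapsed time, in seconds */

    lr_wasted_time((long)(bookkeeping * 1000));   /* add it as wasted time, in ms */

    /* total wasted time so far (automatic + added), in seconds;
       call this while the transaction is still open */
    lr_output_message("Wasted: %f", lr_get_transaction_wasted_time("Home"));

    lr_end_transaction("Home", LR_AUTO);

    return 0;
}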

When VuGen creates the log file, output.txt, and when online testing generates the Vuser log files, no action is taken with the wasted time; the actual elapsed transaction times are reported. The lr_get_transaction_duration function also returns the actual elapsed transaction time. This information may be useful in analyzing and developing test scripts.


However, in the online graphs in the LoadRunner Controller and the transaction response time graphs in LoadRunner Analysis, the transaction times are reported after subtracting the wasted time. This is the time most pertinent to understanding the system being tested.

Monday, 14 April 2014

Scripts: Remove Think Time

Yesterday, my team got a requirement to randomize the think time and to subtract the think time from the transaction response times, so that the load test reflects the actual response times of the transactions.

To achieve the two requirements,

 (1) randomize the think time and
 (2) subtract the think time from the transaction response time, two KBs were used.

  1. Randomize the think time with a random parameter as described in KB 3448, "How to generate a random think time" (see the sketch after this list). Finish enhancing the script and port it over to the Controller. Run the scenario as per the norm.
  2. Once the scenario completes and the Analysis graphs are generated, filter the think time in the Analysis graph to get the processing time. Use KB 13748, "How to exclude think time from transaction timings in the Analysis graphs" to filter out the think time.
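Here is a minimal sketch of the scripting side of KB 3448, assuming a hypothetical VuGen Random Number parameter named {pThinkTime} (e.g., with a range of 5 to 15 seconds):

/* {pThinkTime} is a hypothetical Random Number parameter
   defined in VuGen (e.g., range 5-15, format %lu) */
lr_think_time(atoi(lr_eval_string("{pThinkTime}")));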

LoadRunner Web Services Tutorial: Scripting in Two Ways

Web services in LoadRunner: we can do web services scripting in two ways:
  1. By using the Web Services protocol (needs an additional license).
  2. By using the Web protocol.

Scripting Using Web Services Protocol:

I have taken a weather web service (GlobalWeather) as an example.

Steps:
  1. First, open VuGen and select the Web Services protocol.
  2. Click Manage Services on the top nav bar, then click Import and give the WSDL URL, which usually ends with .wsdl.

Now click Add Web Service Call on the top nav bar. Give the input arguments, leave the output arguments empty, and click OK, as shown in the following image.

Input Arguments in Web Services
It will create a script in LoadRunner as shown below. You can perform the steps one by one in this way for all the web service steps. You can use lr_xml_find and lr_xml_get_values to validate the response (see the sketch after the script).

web_service_call"StepName=GetCitiesByCountry_101",
        "SOAPMethod=GlobalWeather|GlobalWeatherSoap|GetCitiesByCountry",
        "ResponseParam=response",
        "Service=GlobalWeather",
        "ExpectedResponse=SoapResult",
        "Snapshot=t1396977083.inf",
        BEGIN_ARGUMENTS,
        "CountryName=India",
        END_ARGUMENTS,
        BEGIN_RESULT,
        END_RESULT,
        LAST);
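For instance, here is a hedged sketch of validating the response saved in the {response} output parameter above; the XPath query and expected value are assumptions about the GetCitiesByCountry response layout:

int found;

/* count occurrences of the expected city in the saved SOAP response */
found = lr_xml_find("XML={response}",
                    "Query=//City",
                    "Value=New Delhi",
                    LAST);

if (found == 0)
    lr_error_message("Expected city not found in the web service response");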

Using Web Protocol:

We can create the same request using the Web (HTTP/HTML) protocol. You need to take the XML request, as shown in the following, and place it in a web_custom_request.

XML Request Example for Web Custom Request
You need to put this SOAP request in the web request body, as shown below. You can capture the response using a correlation function; with empty left and right boundaries, web_reg_save_param_ex saves the entire response body. You can also add a checkpoint using web_reg_find. The URL should end with .asmx, as shown in the request.

web_reg_find("Text/IC=New Delhi",
        LAST);

  web_reg_save_param_ex(
        "ParamName=Web Service Response",
        "LB=",
        "RB=",
        SEARCH_FILTERS,
        LAST);
 
    web_custom_request("Weather SOAP Request",
      "URL=http://www.webservicex.net/globalweather.asmx",
      "Method=POST",
      "TargetFrame=",
      "Resource=0",
      "RecContentType=text/xml",
      "Referer=",
      "Mode=HTML",
      "EncType=text/xml; charset=utf-8",
      "Body= Your Request should be here as shown in the following image"
      LAST);
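The exact SOAP envelope was only shown in the original image, so here is a sketch of what the request might look like, assuming the standard webserviceX.NET namespace and SOAPAction header for GetCitiesByCountry:

web_add_header("SOAPAction",
    "\"http://www.webserviceX.NET/GetCitiesByCountry\"");

web_custom_request("Weather SOAP Request",
    "URL=http://www.webservicex.net/globalweather.asmx",
    "Method=POST",
    "Resource=0",
    "RecContentType=text/xml",
    "Mode=HTML",
    "EncType=text/xml; charset=utf-8",
    /* the SOAP body, split across adjacent C string literals */
    "Body=<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
        "<soap:Body>"
        "<GetCitiesByCountry xmlns=\"http://www.webserviceX.NET\">"
        "<CountryName>India</CountryName>"
        "</GetCitiesByCountry>"
        "</soap:Body>"
        "</soap:Envelope>",
    LAST);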

Friday, 11 April 2014

Pre-Load test check lists

Before starting a load test, we have to make sure of the following checklist: :)
1. End users, customers, and project members have been notified in advance of the execution dates and hours for the capacity test.
2. All service level agreements and response time requirements have been agreed upon by all stakeholders.
3. A contact list with names and phone numbers has been drafted for support personnel (onsite and remote).
4. Functional testing of the application has been completed.
5. The Controller machine has been restarted.
6. Ramp Up / Duration / Ramp Down are configured correctly.
7. All Load Generators are in a "Ready" status.
8. All Load Generators are assigned to the appropriate scripts.
9. All scripts have the correct number of Vusers assigned to them.
10. All scripts have the correct number of iterations assigned to them.
11. Correct pacing has been agreed upon and configured for all appropriate scripts.
12. Logging is set to "Send messages only when an error occurs" for all scripts.
13. Think times have been enabled/disabled in the test scripts.
14. Generate snapshot on error is enabled for all appropriate scripts.
15. Timeout values have been set to the appropriate values.
16. All content checks have been updated for the appropriate scripts.
17. Rendezvous points have been enabled/disabled for the appropriate scripts.
18. All necessary data has been prepared/staged and is updated in all scripts.
19. Any scripts with unique data requirements have been verified.
20. All scripts have been refreshed in the Controller and reflect the most recent updates.
21. IP spoofing has been enabled in the Controller.
22. IP spoofing has been configured on all appropriate Load Generators.
23. All LoadRunner monitors have been identified, configured, and tested.
24. Auto Collate Results has been enabled.
25. The results directory and file name have been updated.

Tuesday, 8 April 2014

Performance Testing Life Cycle

What is Performance Testing?

Performance Testing is the process of evaluating a system's behavior under various extreme conditions. The main intent of performance testing is to monitor and improve key performance indicators such as response time, throughput, memory, and CPU utilization.
There are three objectives (the three S's) of performance testing to observe and evaluate: Speed, Scalability, and Stability.
Performance testing evaluates the system and identifies bottlenecks and critical issues by applying various workload models.
When to start Performance Testing?
Performance Testing starts in parallel with the Software Development Life Cycle (SDLC); NFR elicitation happens in parallel with the System Requirement Specification (SRS).
Now we will see the phases of Performance Testing Life Cycle (PTLC).
  1. Non-Functional Requirements Elicitation and Analysis
  2. Performance Test Strategy
  3. Performance Test Design
  4. Performance Test Execution
  5. Performance Test Result Analysis
  6. Benchmarks and Recommendations
Performance Testing Life Cycle - QAInsights

1.     Non-Functional Requirements Elicitation and Analysis
Understanding the non-functional requirements is the first and most critical phase in PTLC.
Entry Criteria
  • Application Under Test (AUT) Architecture
  • Non-Functional Requirement Questionnaire
Tasks
  • Understanding AUT architecture
  • Identification of critical scenarios and understanding
  • Understanding Interface details
  • Growth pattern
Exit Criteria
  • Client signed-off NFR document
 2.     Performance Test Strategy
This phase defines how to approach performance testing for the identified critical scenarios. The following are addressed during this phase:
  1. What kind of performance testing?
  2. Performance tool selection
  3. Hardware and software environment set up
Entry Criteria
  • Signed-off NFR document
Activities
  • Prepare the Test Strategy and Review
  • Data set up
  • Defining in-scope and out-scope
  • SLA
  • Workload Model
  • Prepare Risks and Mitigation and Review
Exit Criteria
  • Baselined Performance Test Strategy doc
 3.     Performance Test Design
This phase involves script generation using the identified testing tool in a dedicated environment. All script enhancements should be completed and unit tested.
Entry Criteria
  • Baselined Test Strategy
  • Test Environment
  • Test Data
Activities
  • Test Scripting
  • Data Parameterization
  • Correlation
  • Designing the action and transactions
  • Unit Testing
Exit Criteria
  • Unit tested performance scripts
 4.     Performance Test Execution
This phase is dedicated to the test engineers, who design scenarios based on the identified workload and load the system with concurrent virtual users (Vusers).
Entry Criteria
  • Baselined Test scripts
Activities
  • Designing the scenarios
  • Loading the test script
  • Test script execution
  • Monitoring the execution
  • Collecting the logs
Exit Criteria
  • Test script execution log files
 5.     Performance Test Result Analysis
The collected log files are analyzed and reviewed by experienced test engineers. Tuning recommendations are given if any issues are identified.
Entry Criteria
  • Collected log files
Activities
  • Create graphs and charts
  • Correlating various graphs and charts
  • Prepare detailed test report
  • Test report analysis and review
  • Tuning recommendation
Exit Criteria
  • Performance Analysis Report
 6. Benchmark and Recommendations
This is the last phase in PTLC; it involves benchmarking and providing recommendations to the client.
Entry Criteria
  • Performance Analysis Report
Activities
  • Comparing result with earlier execution results
  • Comparing with the benchmark standards
  • Validate with the NFR
  • Prepare Test Report presentation
Exit Criteria
  • Performance report reviewed and baselined
In the next blog post, we will see how to select a tool for performance testing. If you are enjoying our articles, please subscribe to our free weekly newsletter.

What is Pacing? Where/why to use it?


Pacing is the time for which the script pauses before it goes to the next iteration; i.e., once an Action iteration is completed, the script waits for the specified time (the pacing time) before it starts the next one. It works between two iterations. For example, if we record a script, LoadRunner generates three default actions: vuser_init, Action, and vuser_end; pacing takes effect after the Action block and holds the script before it repeats.
The default blocks generated by LoadRunner are shown below:

Actions marked in Red

Now that we know what pacing is and that it works between two iterations, the next question that comes to mind is why we use pacing.

Pacing is used to:
  • Control the number of TPS (transactions per second) generated by a Vuser.
  • Control the number of hits on the application under test.
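As a rough sketch of the arithmetic (a steady-state assumption, not from a KB): when the pacing interval is longer than the iteration's response time, each Vuser completes one iteration per interval, so throughput ≈ Vusers / pacing interval. For example, 100 Vusers with a fixed 50-second pacing interval generate about 100 / 50 = 2 iterations per second.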
   
Types of Pacing:
There are three options to control the pacing in a script:

General Pacing: 

 1. As soon as the previous iteration ends:
 LoadRunner starts the next iteration as soon as possible after the previous iteration ends.

2. After the previous iteration ends:
In this option, LoadRunner starts each new iteration a specified amount of time after the end of the previous iteration. We can give either an exact number of seconds or a range of time.

3. At fixed intervals:
In this option, we specify the time between iterations as either a fixed number of seconds or a range of seconds, measured from the beginning of the previous iteration. The next scheduled iteration begins only when the previous iteration is complete.

The following example explains how fixed-interval pacing works: with an interval of 60 seconds, if an iteration takes 45 seconds, LoadRunner waits 15 seconds before starting the next one; if an iteration takes 70 seconds, the next iteration starts as soon as the previous one completes.

Difference between Manual and Goal Oriented Scenario


Manual Scenario:
In a manual scenario, we define the number of virtual users that will hit the application; in this scenario we don't bother about the number of hits or the throughput.
For example, if we want to check the stability of the application under a load of 500 Vusers, we specify the number of Vusers as 500.


Goal Oriented Scenario:

In this scenario, we define the goal of the execution by setting one of the following: hits per second, transactions per second, number of virtual users, transaction response time, or pages per minute. The scenario runs until it achieves the defined goal.
For example, if we set the goal to 50 hits per second, the scenario runs until it reaches 50 hits per second.