Tuesday, 15 April 2014

Wasted Time in LoadRunner

Wasted Time:-
If your guess is the Replay log, yes… we get this in the Replay Log of a VuGen script, as shown below.
Action.c(21): Notify: Transaction "Home" ended with "Pass" status (Duration: 20.1061 Wasted Time: 0.0010)


Here is what HP says about Wasted Time.


Wasted time is time spent on activities whose purpose is to support test analysis, but would never be performed by a browser user, for example, time spent keeping transaction statistics for later reporting. Wasted time is calculated internally by LoadRunner. Your script can also add wasted time with lr_wasted_time. 


Sometimes, you may enter activities in a script that you do not want reported as part of the transaction statistics. Generally, these are activities related to record keeping, logging, or custom analysis. If you enhance the script with steps whose durations should not be included in the test statistics, you can track the time used by these steps with lr_start_timer and lr_end_timer. Then, the function lr_wasted_time is used for adding this user-determined time to the internally generated wasted time.


You can retrieve the total wasted time (both that generated by LoadRunner automatically and that added with lr_wasted_time) with the function lr_get_transaction_wasted_time, or with lr_get_trans_instance_wasted_time, as appropriate.

When VuGen creates the log file, output.txt, and when online testing generates the Vuser log files, no action is taken with the wasted time. The actual elapsed transaction times are reported. The lr_get_transaction_duration function also returns the actual elapsed transaction time. This information may be useful in analyzing and developing test scripts.


However, in the on-line graphs in the LoadRunner Controller and the transaction response time graphs in the LoadRunner Analysis, the transaction times are reported after subtracting the wasted time. This is the time most pertinent to understanding the system being tested.
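A minimal sketch of the lr_start_timer / lr_end_timer / lr_wasted_time flow inside a transaction; the request placeholder, the timer variable names, and the bookkeeping step are assumptions for illustration only:

merc_timer_handle_t timer;
double waste;

lr_start_transaction("Home");

// ... recorded requests for the Home page go here ...

timer = lr_start_timer();                         // start timing the bookkeeping step
lr_output_message("custom record keeping / logging goes here");
waste = lr_end_timer(timer);                      // elapsed time of the step, in seconds

lr_wasted_time(waste * 1000);                     // lr_wasted_time expects milliseconds

lr_output_message("Wasted time so far: %f seconds",
                  lr_get_transaction_wasted_time("Home"));

lr_end_transaction("Home", LR_AUTO);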

Monday, 14 April 2014

Scripts: Remove Think Time

Yesterday, my team got a requirement to randomize the think time and to subtract the think time from the transaction response times, so that the load test reports the actual processing time of the transactions.

To achieve the two requirements, 

 (1) randomize think time and
 (2) subtract the think time from the transaction response time, two KBs were used.

  1. Randomize the think time with a random parameter as described in KB 3448, "How to generate a random think time" (a sketch is shown after this list). Complete the script enhancements and port the script over to the Controller. Run the scenario as per the norm.
  2. Once the scenario completes and the Analysis graphs are generated, filter the think time in the Analysis graphs to get the processing time. Use KB 13748, "How to exclude think time from transaction timings in the Analysis graphs", to filter out the think time.
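A minimal sketch of the random think time approach, assuming a Random Number parameter named {pThinkTime} defined in the Parameter List with the required minimum and maximum values:

// Replace a recorded fixed think time, e.g. lr_think_time(20);, with the parameter.
// {pThinkTime} is an assumed Random Number parameter, e.g. between 5 and 15 seconds.
lr_think_time(atoi(lr_eval_string("{pThinkTime}")));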

LoadRunner Web Services Tutorial: Scripting in Two Ways

Web services in LoadRunner: we can do web services scripting in two ways:
  1. By using the Web Services protocol (needs an additional license).
  2. By using the Web protocol.

Scripting Using Web Services Protocol:

I have taken the weather web service as an example, as shown in this link.

Steps:
  1. First open VuGen and select the Web Services protocol.
  2. Click Manage Services on the top nav bar, click Import, and give the WSDL URL, which usually ends with .wsdl.

Now click Add Web Service Call on the top nav bar. Give the input arguments, leave the output arguments empty, and click OK, as shown in the following image.

Input Arguments in Web Services
It will create a script in LoadRunner as shown below. You can perform the steps one by one in this way for all the web service steps. You can use lr_xml_find and lr_xml_get_values to validate the response (see the sketch after the code).

web_service_call"StepName=GetCitiesByCountry_101",
        "SOAPMethod=GlobalWeather|GlobalWeatherSoap|GetCitiesByCountry",
        "ResponseParam=response",
        "Service=GlobalWeather",
        "ExpectedResponse=SoapResult",
        "Snapshot=t1396977083.inf",
        BEGIN_ARGUMENTS,
        "CountryName=India",
        END_ARGUMENTS,
        BEGIN_RESULT,
        END_RESULT,
        LAST);
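For instance, a hedged sketch of validating the saved response ({response}, the ResponseParam above) with lr_xml_find; the XPath query and the expected city are assumptions and should be taken from the actual response XML:

lr_xml_find("XML={response}",
            "Query=//Table/City",       // hypothetical XPath into the result
            "Value=New Delhi",          // expected value to find
            LAST);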

Using Web Protocol:

We can create the same request using the Web (HTTP/HTML) protocol. You need to take the XML request, as shown in the following, and place it in a web custom request.

XML Request Example for Web Custom Request
You need to keep this SOAP request in the web request body, as shown below, and you can capture the response using a correlation function. You can also add a check point using web_reg_find. The URL should end with .asmx, as shown in the request.

web_reg_find("Text/IC=New Delhi",
        LAST);

  web_reg_save_param_ex(
        "ParamName=Web Service Response",
        "LB=",
        "RB=",
        SEARCH_FILTERS,
        LAST);
 
    web_custom_request("Weather SOAP Request",
      "URL=http://www.webservicex.net/globalweather.asmx",
      "Method=POST",
      "TargetFrame=",
      "Resource=0",
      "RecContentType=text/xml",
      "Referer=",
      "Mode=HTML",
      "EncType=text/xml; charset=utf-8",
      "Body= Your Request should be here as shown in the following image"
      LAST);
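As an illustration only, here is a hedged sketch of what the filled-in request might look like, assuming a standard SOAP 1.1 envelope for GetCitiesByCountry; take the exact namespaces and element names from your WSDL, and note that some services also require a SOAPAction header (added with web_add_header).

web_custom_request("Weather SOAP Request",
    "URL=http://www.webservicex.net/globalweather.asmx",
    "Method=POST",
    "Resource=0",
    "EncType=text/xml; charset=utf-8",
    "Body=<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
        "<soap:Body>"
        "<GetCitiesByCountry xmlns=\"http://www.webserviceX.NET\">"   // assumed namespace
        "<CountryName>India</CountryName>"
        "</GetCitiesByCountry>"
        "</soap:Body>"
        "</soap:Envelope>",
    LAST);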

Friday, 11 April 2014

Pre-Load Test Checklist

Before starting a load test, we have to make sure of the following checklist: :)
1.End users, customers, and project members have been notified in advance of the execution dates and hours for the capacity test.
2.All service level agreements, response time requirements have been agreed upon by all stakeholders.
3.Contact list with names and phone numbers has been drafted for support personnel (onsite and remote)
4.Functional Testing of the application has been completed.
5.Restart the controller machine.
6.Ramp Up / Duration / Ramp Down is configured correctly.
7.All Load Generators are in a "Ready" status.
8.All Load Generators are assigned to appropriate scripts.
9.All scripts have the correct number of Vusers assigned to them.
10.All scripts have the correct number of Iterations assigned to them.
11.Correct pacing has been agreed upon and configured for all appropriate scripts.
12.Logging is set to "Send messages only when an error occurs" for all scripts.
13.Think Times have been enabled/disabled in the test scripts
14.Generate snapshot on error is enabled for all appropriate scripts.
15.Timeout values have been set to the appropriate values.
16.All content checks have been updated for the appropriate scripts.
17.Rendezvous points have been enabled/disabled for appropriate scripts
18.All necessary data has been prepared/staged and is updated in all scripts.
19.Any scripts with unique data requirements have been verified.
20.All scripts have been refreshed in the controller and reflect the most recent updates.
21.IP Spoofing has been enabled in the controller.
22.IP Spoofing has been configured on all appropriate Load Generators.
23.All LoadRunner Monitors have been identified, configured and tested.
24.Auto Collate Results should be enabled
25.Results directory and file name should be updated.

Tuesday, 8 April 2014

Performance Testing Life Cycle

What is Performance Testing?

Performance Testing is the process of evaluating a system's behavior under various extreme conditions. The main intent of performance testing is monitoring and improving key performance indicators such as response time, throughput, memory and CPU utilization, etc.
There are three objectives (the three S's) of performance testing to observe and evaluate: Speed, Scalability and Stability.
Performance testing evaluates the system. It identifies bottlenecks and critical issues by applying various workload models.
When to start Performance Testing?
Performance Testing starts in parallel with the Software Development Life Cycle (SDLC). NFR elicitation happens in parallel with the System Requirement Specification (SRS).
Now we will see the phases of Performance Testing Life Cycle (PTLC).
  1. Non-Functional Requirements Elicitation and Analysis
  2. Performance Test Strategy
  3. Performance Test Design
  4. Performance Test Execution
  5. Performance Test Result Analysis
  6. Benchmarks and Recommendations
Performance Testing Life Cycle - QAInsights

1.     Non-Functional Requirements Elicitation and Analysis
Understanding the non-functional requirements is the inception and the most critical phase in PTLC.
Entry Criteria
  • Application Under Test (AUT) Architecture
  • Non-Functional Requirement Questionnaire
Tasks
  • Understanding AUT architecture
  • Identification and understanding of critical scenarios
  • Understanding Interface details
  • Growth pattern
Exit Criteria
  • Client signed-off NFR document
 2.     Performance Test Strategy
This phase defines how to approach performance testing for the identified critical scenarios. The following are to be addressed during this phase.
  1. What kind of performance testing?
  2. Performance tool selection
  3. Hardware and software environment set up
Entry Criteria
  • Signed-off NFR document
Activities
  • Prepare the Test Strategy and Review
  • Data set up
  • Defining in-scope and out-scope
  • SLA
  • Workload Model
  • Prepare Risks and Mitigation and Review
Exit Criteria
  • Baselined Performance Test Strategy doc
 3.     Performance Test Design
This phase involves script generation using the identified testing tool in a dedicated environment. All the script enhancements should be done and unit tested.
Entry Criteria
  • Baselined Test Strategy
  • Test Environment
  • Test Data
Activities
  • Test Scripting
  • Data Parameterization
  • Correlation
  • Designing the action and transactions
  • Unit Testing
Exit Criteria
  • Unit tested performance scripts
 4.     Performance Test Execution
This phase is dedicated to the test engineers, who design scenarios based on the identified workload and load the system with concurrent virtual users (Vusers).
Entry Criteria
  • Baselined Test scripts
Activities
  • Designing the scenarios
  • Loading the test script
  • Test script execution
  • Monitoring the execution
  • Collecting the logs
Exit Criteria
  • Test script execution log files
 5.     Performance Test Result Analysis
The collected log files are analyzed and reviewed by experienced test engineers. Tuning recommendations are given if any issues are identified.
Entry Criteria
  • Collected log files
Activities
  • Create graphs and charts
  • Correlating various graphs and charts
  • Prepare detailed test report
  • Test report analysis and review
  • Tuning recommendation
Exit Criteria
  • Performance Analysis Report
 6. Benchmark and Recommendations
This is the last phase in PTLC, which involves benchmarking and providing recommendations to the client.
Entry Criteria
  • Performance Analysis Report
Activities
  • Comparing result with earlier execution results
  • Comparing with the benchmark standards
  • Validate with the NFR
  • Prepare Test Report presentation
Exit Criteria
  • Performance report reviewed and baselined
In the next blog post, we will see how to select a tool for performance testing. If you are enjoying our articles, please subscribe to our free weekly newsletter.

What is Pacing? Where/why to use it?


Pacing is the time for which the script pauses before it starts the next iteration; i.e., once an Action iteration completes, the script waits for the specified pacing time before starting the next one. It works between two iterations. For example, if we record a script, LoadRunner generates three default actions: vuser_init, Action and vuser_end; pacing takes effect after the Action block and holds the script before the Action is repeated.
The default blocks generated by LoadRunner are shown below (Actions marked in red):

Now we know what pacing is and that we use it between two iterations. The next question that comes to mind is why we use pacing.

Pacing is used to:
  • Control the number of TPS (transactions per second) generated by a user.
  • Control the number of hits on the application under test.
   
Types of Pacing:
There are three options to control the pacing in a script:

General Pacing: 

 1. As soon as the previous iteration ends:
 LoadRunner starts the next iteration as soon as possible after the previous iteration ends.

2. After the previous iteration ends:
LoadRunner starts each new iteration a specified amount of time after the end of the previous iteration. We can give either an exact number of seconds or a range of time.

3. At fixed intervals:
We specify the time between iterations as either a fixed number of seconds or a range of seconds, measured from the beginning of the previous iteration. The next scheduled iteration begins only when the previous iteration is complete.

The example below explains how the fixed-interval option works:
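For instance, assuming a fixed interval of 60 seconds: if an iteration takes 45 seconds, the next iteration starts 15 seconds after the previous one ends (i.e., 60 seconds after the previous one started); if an iteration takes 70 seconds, longer than the interval, the next iteration starts as soon as the previous one completes.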

Difference between Manual and Goal Oriented Scenario


Manual Scenario:
In a manual scenario we define the number of virtual users that will hit the application; in this scenario we do not bother about the number of hits or the throughput.
For example, if we want to check the stability of the application under a load of 500 Vusers, we specify the number of Vusers as 500.


Goal-Oriented Scenario:

In this scenario we define the goal of the execution by setting the number of hits per second, transactions per second, virtual users, transaction response time, or pages per minute. The scenario runs until it achieves the defined goal.
For example, if we set the goal to 50 hits per second, the scenario runs until it reaches 50 hits per second.

LoadRunner code for writing data to a Notepad (text) file


The code below saves the vendor numbers to a text (Notepad) file.
(Put it in vuser_init, at the top, after the header.)
//Path for saving the notepad file, the file name is "MatDocument.txt"

#define filename "C:\\Documents and Settings\\ravi\\Desktop\\MatDocument.txt"

int f_Write(char *str1)
{
    long a;

    a = fopen(filename, "a+");      // open the file in append mode
    if (a == NULL)
    {
        return -1;                  // could not open the file
    }
    fprintf(a, "Material Document Number is %s\n", str1);
    fclose(a);
    return 0;
}
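A hedged usage sketch: {cMatDocNum} is an assumed parameter captured earlier with web_reg_save_param; the call appends its value to MatDocument.txt.

f_Write(lr_eval_string("{cMatDocNum}"));   // {cMatDocNum} is a hypothetical correlated parameter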



Save a parameter in notepad file:

This code will create a notepad file and save the parameter into it.


long fd;
web_reg_save_param("saveID", "LB=---", "RB=---", "ORD=1", LAST);  // capture the value into "saveID" (a web request must follow before the value exists)
fd = fopen("C:\\NotepadName.txt", "a");                           // path of the text file, opened in append mode
fprintf(fd, "%s\n", lr_eval_string("{saveID}"));                  // print the captured value and move to the next line
fclose(fd);                                                       // close the file

How to find out Memory Leak in an application?

How to find out Memory Leak in an application?

There are many reasons for a memory leak, and as a tester it is tough to find the exact reason. The basic things we need to understand are: what is a memory leak, and how does it occur?
A memory leak can be defined as memory that was allocated by a program and never returned to the operating system after use. This unreleased memory can cause a shortage of memory and may crash the entire system.
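A minimal C sketch of a leak in manually managed code, using a hypothetical function: the buffer is allocated on every call but never freed, so each call permanently loses 1 KB.

#include <stdlib.h>

void process_request(void)
{
    char *buffer = (char *)malloc(1024);    /* memory is allocated...                         */
    if (buffer == NULL)
        return;
    /* ... buffer is used here ... */
    /* missing free(buffer): the memory is never returned to the operating system */
}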

There are a few steps which may help to find a memory leak in a system:
  1. We can monitor the memory graph using perfmon and check whether memory utilization keeps increasing and stays at its peak even without any load; this shows that the memory used while executing the program is still held even though the program no longer needs it. Unmanaged code that is not efficient enough to release memory after use may cause an out-of-memory issue.
  2. You can check the throughput graph during the load test; a sudden spike in it can be one indication of a memory leak. But we can't be sure the spike in the graph is because of a memory leak; other reasons could be network traffic, database connectivity issues, etc.
  3. We have the GC (garbage collector) to collect unused memory and return it to the system, but it has some limitations:
  • In a manually managed memory environment: Memory is dynamically allocated and referenced by a pointer. The pointer is erased before the memory is freed. After the pointer is erased, the memory can no longer be accessed and therefore cannot be freed.
  • In a dynamically managed memory environment: Memory is disposed of but never collected, because a reference to the object is still active. Because a reference to the object is still active, the garbage collector never collects that memory. This can occur with a reference that is set by the system or the program.
  • In a dynamically managed memory environment: The garbage collector can collect and free the memory but never returns it to the operating system. This occurs when the garbage collector cannot move the objects that are still in use to one portion of the memory and free the rest.
  • In any memory environment: Poor memory management can result when many large objects are declared and never permitted to leave scope. As a result, memory is used and never freed.

How do we write a user defined function in LoadRunner?

How do we write a user defined function in LoadRunner?

Create the external library that contains the function. This library must then be added to the bin directory of VuGen. And then, the user-defined function can be assigned as a parameter.

(Source: Internet)

We need to create an external DLL library to use the user-defined function. It can be substituted as parameter values.

The syntax is:
__declspec(dllexport) char *<function name>(char *, char *)
Both the arguments passed are NULL for this function.

A user-defined function could be:
__declspec(dllexport) char *GetSysDate(char *, char *)

This would return the system date. You could substitute this returned value for a parameter instead.

Once the DLL is loaded in the LoadRunner script, you can directly use this function:

lr_load_dll("mydll.dll");

Then you can call the functions in the .dll directly:

date = GetSysDate(char1, char2);

char1 and char2 are the names of variables.

Note: Within a header file you can also define other user-defined functions to be called within the script. This approach is followed in most situations, as working with a DLL is costlier for simple functions that can be created within a header file. A sketch of the header-file approach is shown below.
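A hedged sketch, assuming a hypothetical header file myfunctions.h added to the script and included (for example from globals.h):

// Contents of the hypothetical myfunctions.h
char *BuildGreeting(char *name)
{
    static char greeting[64];                  // static so the buffer survives the return
    sprintf(greeting, "Hello, %s", name);
    return greeting;
}

// In Action.c, after #include "myfunctions.h":
//     lr_output_message("%s", BuildGreeting("LoadRunner"));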

Different Ways To Do Correlation

Different Ways To Do Correlation:

1. Generation Log

One of the common options in LoadRunner is the generation log. Take the left and right boundaries of the dynamic value, search for them in the generation log with CTRL+F, find the request that returns the value, and place the web_reg_save_param before that request (a sketch is shown below).
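A minimal sketch, assuming the dynamic value appears in the server response between hypothetical boundaries found in the generation log:

// Registered before the request that returns the value; the parameter name and
// boundaries below are assumptions for illustration.
web_reg_save_param("cSessionId",
                   "LB=name=\"sessionId\" value=\"",
                   "RB=\"",
                   "ORD=1",
                   LAST);

// The captured value is later referenced as {cSessionId} in the request that needs it.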

2. Replay Log

Sometimes you do not find the correlation value in the generation log; the alternative is to go to Run-time Settings, enable Log -> Extended log -> "Data returned by server", and replay the script. Once you replay the script, use CTRL+F to find the left and right boundaries. The replay log may fail, or it may take a long time if you select "Data returned by server" for the whole script, since it prints every response to the replay log; the alternate method is to use lr_set_debug_message around only the relevant request, as sketched below.
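A hedged sketch of scoping the extended logging to one request; the request itself is a hypothetical example:

// Turn on "Data returned by server" logging only around the suspect request.
lr_set_debug_message(LR_MSG_CLASS_EXTENDED_LOG | LR_MSG_CLASS_RESULT_DATA,
                     LR_SWITCH_ON);

web_url("SuspectPage", "URL=http://myserver/suspect.jsp", LAST);   // hypothetical request

lr_set_debug_message(LR_MSG_CLASS_EXTENDED_LOG | LR_MSG_CLASS_RESULT_DATA,
                     LR_SWITCH_OFF);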

3. Redirection

If you are not able to find the value in the generation log or the replay log, then there might be a 301 or 302 redirection. Find the pages that have a 301 or 302 redirection. Once you find them, go to that page manually in your browser and try to find the left and right boundary values using view source (right-click on the page -> "View Page Source" in Chrome, "View Source" in Internet Explorer).

4. Text Flags:

Sometimes you find the value, but the left and right boundaries change dynamically; the alternative option then is to use the SaveLen and SaveOffset attributes (see the sketch below).
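A hedged sketch, assuming a stable left boundary and a value that is always 16 characters long; the boundaries, offset and length are illustrative only:

web_reg_save_param("cToken",
                   "LB=token=",        // assumed stable left boundary
                   "RB=&",             // hypothetical right boundary
                   "SaveOffset=0",     // start saving from the first character of the match
                   "SaveLen=16",       // save only the first 16 characters
                   LAST);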

5. Automatic Correlation (Design Studio):


One of the best methods, available from LoadRunner 11.5 and updated in LoadRunner 12, is the automatic correlation feature called Design Studio. It has improved a lot from this version. You can do the scripting at a faster pace by using this option.

Once you record the application, go to Design -> Design Studio. Design Studio will highlight the different values that need to be correlated. Based on our requirement, we select a value and click Correlate. We can also use "Add as Rule", which is handy if we are reusing the correlation in other scripts.

Note: Make sure you select the following option:

Go to Recording Options -> Correlation -> Configuration, and under API select web_reg_save_param_ex. You can also use web_reg_save_param_regexp. I prefer using web_reg_save_param_ex.

Wednesday, 26 March 2014

What is 90 percentile?

What is 90 percentile?

The 90th percentile is the response time value below which 90% of the measured response times fall: arrange the response times from good to bad, eliminate the worst 10% of them, and the highest remaining value is the 90th percentile.

Example: the response times of an application for 10 users are
2, 4, 3, 8, 9, 16, 6, 8, 3, 10; arranged from good to bad:

2, 3, 3, 4, 6, 8, 8, 9, 10, 16

Eliminating the worst 10% (the 16) leaves 2, 3, 3, 4, 6, 8, 8, 9, 10, so the 90th percentile is 10. For comparison, the average of this best 90% of the response times is 5.88, while the average of all 10 response times (100%) is 6.9.

Why is the 90th percentile required?
End users need to be satisfied with the application's response time/speed; otherwise they won't use that application. Performance testers use the 90th percentile to measure this satisfaction: 90% of the users experience that response time or better, while the remaining 10% experience slower response times.

What is concurrent users and simultaneous users ?

What is concurrent users and simultaneous users ?

For example, a bank application has the below transactions:
  1. Login
  2. Account summary
  --- Transfer Money action start ---
  3. Click Transfer funds
  4. Select "Beneficiary"
  5. Enter "Amount"
  6. Transfer
  7. Enter "User id" and Password and click "Confirm"
  --- Transfer Money action end ---
  8. Log out

So we have 8 transactions for this application and one action (Transfer Money is one action, but within that action there are 5 transactions).

Concurrency: performing the same activity at the same time by all users; it means 100 users are logging into the application at the same time. Simultaneous users, on the other hand, log into the application one by one. Concurrency has various levels:

  1. Transaction concurrency
  2. Business Process concurrency
  3. Application concurrency

Transaction concurrency :

We need to test the application with 100 concurrent users, meaning all 100 users should perform the same transaction (Login, or Account summary, ... or Logout) at the same time; this is called transaction-level concurrency. In LoadRunner this is typically forced with a rendezvous point, as sketched below.
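A hedged sketch: the rendezvous name is hypothetical and must also exist in the Controller scenario, where the release policy is configured.

lr_rendezvous("login_rendezvous");        // all Vusers wait here and are released together

lr_start_transaction("Login");
// ... recorded login request(s) go here ...
lr_end_transaction("Login", LR_AUTO);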

Business Process Concurrency:

We need to test the application with 100 concurrent users for the Transfer Money business action, meaning all 100 users should be performing the Transfer Money action at the same time (for example, 30 users clicking Transfer funds, 20 users selecting "Beneficiary", 20 users on the enter "Amount" page, 10 users on the "Transfer" page, and 20 users entering "User id" and Password and clicking "Confirm").
Overall, 100 users should be in the Transfer Money action.

Application Concurrency:

All 100 users may perform different transactions or the same transaction, but at any point in time 100 users should be accessing the application.

Simultaneous users: all 100 users log into the application one by one.