Load testing ensures that an application continues to work under pressure: load generated by a large number of simultaneous users over a period of time. It helps to determine how a system behaves under various conditions and ensures that performance does not degrade during times of heavy load.
A Virtual User (VU) is a simulated user that acts just as a real user would when accessing a web application. In JAR:TestLab, the actions of a VU are programmed in a Script. Load testing works by creating multiple Virtual Users, which together generate load on the target web application.
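JAR:TestLab's own Script syntax is not shown in this FAQ, so purely as an illustration of the kind of behaviour a Script encodes, here is a minimal sketch of one VU's actions written with an off-the-shelf browser-automation library (Selenium WebDriver). The URL, element IDs and credentials are hypothetical:

```python
# Conceptual sketch of what a single Virtual User does during a test.
# This is Selenium WebDriver, NOT JAR:TestLab Script syntax; the URL,
# element IDs and credentials below are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://shop.example.com/login")                    # open the page under test
    driver.find_element(By.ID, "username").send_keys("vu_user_1")   # fill the login form
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()                    # act like a real user
    driver.get("https://shop.example.com/search?q=widgets")         # continue the user journey
finally:
    driver.quit()
```

A load test simply runs many such journeys concurrently, one per VU.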
JAR:TestLab has a Datasets feature, which allows you to upload a CSV file containing data that can be used during Script execution. Instead of all Virtual Users having the same input data written statically within the Script, the VUs can request input data from the Dataset. Datasets can be used to provide information such as login details or search terms.
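For example, a Dataset CSV supplying login details and search terms might look like the sketch below (the column names and values are purely illustrative); each VU requests a row rather than sharing a single hard-coded set of values:

```csv
username,password,search_term
alice@example.com,Passw0rd1,running shoes
bob@example.com,Passw0rd2,winter coat
carol@example.com,Passw0rd3,headphones
```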
JAR:TestLab uses real browsers during tests, which means the browser does all of the heavy lifting for you. Just as in normal use, where the end user is entirely unaware of any correlation taking place, the browser in JAR:TestLab handles this for you: you will not be required to parameterize session IDs into any of your Scripts, as the browser manages them automatically.
Yes – JAR:TestLab is fully flexible, allowing you to use the same Scripts, Recordings, Datasets and Filesets for both Load and Monitor Tests. Each of these records can be reused as many times as you like.
If you wish to chat directly with our support engineers, open the Live Chat section by selecting it from underneath the JAR:Support section on the navigation. The Support Chat room will be displayed in the main panel, where you can discuss your requirements with support engineers (shown in red) or talk with other users of the JAR:Load system.
JAR:TestLab is a technical tool, and whilst we provide full assistance through online resources, it is often helpful to discuss things directly with one of our engineers.
If you have purchased an on-demand or continuous testing plan with JAR:TestLab, you can call our engineers between 8am and 10pm UTC on +44 2890 233 322. If we are busy, please leave your contact information and an engineer will call you back shortly.
If you are a trial user, we recommend using the Live Chat system, or emailing us at firstname.lastname@example.org – we aim to respond within a few hours.
JAR:TestLab has a dedicated ticket system for recording and tracking support issues. If you believe you have found a bug, or wish to tell us about a feature you require, then please add a support ticket at https://tickets.jartechnologies.com/
We aim to reply to all trial users within 24 hours, and customers will receive a response within two working hours.
For your ticket to be accepted, you must enter the following information on the form:
• Email address – this must match your JAR:TestLab / Load / Monitor email address.
• Issue Summary
• Issue Details
Please provide as much information as possible on how to reproduce the issue; attaching screenshots will also help us to resolve it.
JAR:TestLab uses a real web browser for every Virtual User (VU) when executing tests, which means the back-end is exercised with genuine browser traffic. The real browsers also enable functional testing of your website, giving you an overview of responsiveness to the user as well as of back-end functionality.
The only downside of using real browsers is the slight memory overhead required on your cloud servers. A typical website requires around 40 MB of memory per Virtual User (this includes physical and virtual memory, so you can still run a large number of VUs per machine).
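As a rough sizing sketch based on that 40 MB figure (the 8 GB server size below is an assumption for illustration only, not a JAR:TestLab limit):

```python
# Rough capacity estimate: how many VUs fit on one cloud server?
# Assumes the ~40 MB-per-VU figure quoted above; the 8 GB server
# size is a hypothetical example.
server_memory_mb = 8 * 1024        # 8 GB deployment server (assumption)
memory_per_vu_mb = 40              # approximate per-VU footprint
max_vus = server_memory_mb // memory_per_vu_mb
print(max_vus)                     # -> 204 VUs on this example server
```

In practice you would leave headroom for the operating system and other processes, so treat this as an upper bound.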
How do I configure what I see on the Dashboard?
The data shown on your Dashboard is defined by a Report Template that has been set as the Default Report for the JAR:Load or JAR:Monitor Dashboard. Select the Report Designer section on the navigation and create a Report that includes the widgets you want on the specified Dashboard. Once you have saved the Report, return to the Report Designer table, right-click on the Report name and select the option to set it as the default report for JAR:Load or JAR:Monitor. The Report will then be displayed on the Dashboard for the corresponding section.
The Summary Values widget can display the following:
• Active Virtual Users
• Current Bandwidth
• Requests per second
• Average response time
• Total requests
• Total failed requests
• Total percentage failed requests
Test Summary displays the test status, progress, duration, start time and end time.
Deployment server monitoring: within this widget, you can set percentage limits for CPU and RAM usage which will determine the status of the Deployment server:
• CPU warning limit
• CPU critical limit
• RAM warning limit
• RAM critical limit
• CPU usage
• RAM usage
• Summary text
Chart / Histogram can display various metrics:
• Received bytes per second
• Transmitted bytes per second
• Requests per second
• Average response time (ms)
• Total requests
• Total bytes received
• Total bytes transmitted
• Transactions started
• Transactions finished
• Successful requests
• Failed requests
• Script lines executed
• Active Virtual users
• Total failed virtual users
• User terminated virtual users
• Memory in use (bytes)
• Memory in use (%)
• CPU usage total (%)
• CPU usage user (%)
• CPU usage system (%)
• CPU usage IO/wait (%)
• Network total bytes (B/s)
• Network total errors (err/s)
• Network total packets (pkt/s)
• Network Tx bytes (B/s)
• Network Tx errors (err/s)
• Network Tx packets (pkt/s)
• Network Rx bytes (B/s)
• Network Rx errors (err/s)
• Network Rx packets (pkt/s)
• Disk total I/O rate (B/s)
• Disk total utilisation (%)
• Disk write rate (B/s)
• Disk write operations (op/s)
• Disk write utilisation (%)
• Disk read rate (B/s)
• Disk read operations (op/s)
• Disk read utilisation (%)
Logs can display:
• Test plan logs
• Virtual user logs – enter a virtual user ID in the text field to specify which VU's logs should be displayed
• Deployment logs – these show information relating to the deployment servers
• Network Requests – this provides a log showing network requests for a specified Virtual User
• Transactions – this shows recent transactions, including hits, fails, minimum, maximum, average and 90th percentile.
• VU feedback – this provides a live screenshot of the test progress
Yes, prior to carrying out a Load Test, you will need to create a Report Template and set it as the Default Report for Load Tests. Within this Report, you can define which results you would like displayed by adding the relevant widgets to the template.
If you have finished editing your Script or Recording and are having difficulty testing it, please check that you have created a Report Template in the Report Designer section and set it as the Default Report for Web Script Test. This type of Report is used when testing a script to show test results.
To create a Default Report for Web Script Tests, create a Report, then right click on the Report title in the Report Designer table and select the option to set as the Default Report for Web Script Test.
When it comes to testing and debugging a Script or Recording, some useful widgets to include on your Default Web Script Test Report are:
• Summary Value – Total Requests and Total Failed Requests. These will indicate whether the Script is failing due to failed requests
• Test Summary – this widget provides details of the test status, duration, start and end time. It will help you to identify whether the test has completed successfully or not.
• Network Requests – this widget will show a log of all network requests, indicating the status and time taken to complete. Successful network requests should have a status of “200”; other values in this field (for example “404” for a missing resource or “500” for a server error) may indicate issues with the requests.
• Logs – in the Logs Properties panel on the right-hand side of the Report Editor, select “Virtual User Logs” in the Log Source dropdown and set the Virtual User ID to 1. This widget will then show the logs returned by the Virtual User, as programmed in the Script.
• Virtual User Screenshot – this provides a live screenshot showing the current stage of the Script Test. This can be useful for diagnosing issues with the Script, for instance if an error message is displayed during a test.
Please make sure that you have created a Report Template in the Report Designer section and set it as the Default Report for Load Tests. This type of Report is required when completing a Load Test, as it determines which results will be presented during and after test completion.
To create a Default Report for Load Tests, create a Report, then right click on the Report title in the Report Designer table and select the option to set as the Default Report for Load Test.