
Auto Analysis

Auto Analysis manages the full test execution and reporting cycle within Perf Agent. Upload your JMX script to BlazeMeter, trigger test runs through a connected GitHub or Azure DevOps pipeline, monitor results inside Perf Agent, and automatically send result reports to stakeholders by email — all from one place.

*Auto Analysis Dashboard showing configured and not-configured applications with View and Configure actions.*


Prerequisites

Before using Auto Analysis, ensure the following are in place:

| Requirement | Details |
| --- | --- |
| Perf Agent access | You must be logged in with a valid account |
| Application configured | Your application must be added via Settings → Applications |
| GitHub integration | Configure GitHub with a Personal Access Token (PAT) under Settings → Integrations before configuring any application |
| BlazeMeter integration | BlazeMeter must be connected under Settings → Integrations |
| JMX script | Either a script generated via Auto Script, or a local `.jmx` file ready to upload |

Step 1 — Add and Configure an Application

Applications appear in the Auto Analysis Dashboard once they have been added via Settings → Applications. Each application card shows its configuration status and most recent build.

Configuration Statuses

| Status | Meaning |
| --- | --- |
| NOT CONFIGURED | The application has been added but not yet configured for Auto Analysis |
| CONFIGURED | The application is fully set up: script, GitHub workflow, and BlazeMeter test are all in place |
| PARTIALLY CONFIGURED | A previously configured resource (GitHub workflow or BlazeMeter test) has been deleted externally. Reconfigure the application to restore full functionality |
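The status can be thought of as a function of which resources currently exist. The sketch below is illustrative only: the function name and inputs are hypothetical, not Perf Agent's actual API, and the real logic may track additional resources.

```python
def configuration_status(script_uploaded: bool,
                         github_workflow_exists: bool,
                         blazemeter_test_exists: bool) -> str:
    """Derive the dashboard status from which resources exist (illustrative)."""
    resources = (script_uploaded, github_workflow_exists, blazemeter_test_exists)
    if not any(resources):
        # Nothing has been created yet: the app was only added in Settings.
        return "NOT CONFIGURED"
    if all(resources):
        return "CONFIGURED"
    # At least one previously created resource was deleted externally.
    return "PARTIALLY CONFIGURED"

print(configuration_status(True, True, True))   # CONFIGURED
print(configuration_status(True, False, True))  # PARTIALLY CONFIGURED
```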

Click Configure next to any unconfigured application to open the configuration panel.


Step 2 — Configure the Application

*Configure Application panel showing JMX Source, Select Script, GitHub Repository URL, Concurrent Users, Duration, and Throughput fields.*

The Configure Application panel collects the details Auto Analysis needs to set up your test in BlazeMeter and create a GitHub Actions workflow in your repository.

Fields

| Field | Description |
| --- | --- |
| Application Name | Pre-filled with the application name from Settings; read-only |
| JMX Source | Where the script comes from: select Select from AutoScripting if the JMX was generated by Auto Script for this application, or Upload from Local to upload a `.jmx` file from your machine |
| Select Script (AutoScripting only) | Dropdown of completed scripts generated under this application in Auto Script |
| GitHub Repository URL | The URL of the GitHub repository where the Actions workflow will be created |
| Concurrent Users | Number of simultaneous virtual users for the test run (default: 10) |
| Duration (min) | How long the test should run, in minutes (default: 30) |
| Throughput (hits/s) | Maximum requests per second for the test (default: 100) |
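The three load fields work together: Throughput caps requests per second, so duration and throughput together bound the total number of hits. A quick back-of-the-envelope check using the default values (plain arithmetic, not a Perf Agent API):

```python
concurrent_users = 10   # default
duration_min = 30       # default
throughput_cap = 100    # hits/s, default

# Upper bound on total requests if the test sustains the throughput
# cap for the entire duration.
max_total_hits = throughput_cap * duration_min * 60
print(max_total_hits)  # 180000

# Rough per-user pacing needed to reach the cap: each of the 10 users
# would have to issue about 10 requests per second.
per_user_rps = throughput_cap / concurrent_users
print(per_user_rps)  # 10.0
```

If the per-user rate is unrealistic for your workload, raise Concurrent Users or lower Throughput before starting the configuration.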

How to Configure

  1. Select your JMX Source and either choose a script from the dropdown or upload a local .jmx file
  2. Paste your GitHub Repository URL
  3. Adjust Concurrent Users, Duration, and Throughput to match your target load
  4. Click Start Configuration

When you click Start Configuration, Auto Analysis automatically:

  • Uploads the JMX script to your connected BlazeMeter account
  • Creates a GitHub Actions workflow file in the specified repository
  • Syncs the required secrets to the repository

note

Your GitHub integration must be configured with a valid PAT under Settings → Integrations before you click Start Configuration. Without it, the workflow cannot be created in your repository.

Once configuration completes, the application status updates to CONFIGURED on the dashboard.


Step 3 — View Application Details

Click View on any configured application to open its detail page. The View screen has four collapsible sections.

*Application detail page for Qwantity showing four sections: Integrations Overview, Build History, Execution Strategy, and Email Recipients.*


Integrations Overview

*Integrations Overview expanded, showing Script (Configured), GitHub (Configured), BlazeMeter (Configured), Azure DevOps (Configured), and Datadog (Not Configured) status tiles.*

The Integrations Overview section shows the connection status of each integration for this application:

| Integration | Status Meaning |
| --- | --- |
| Script | The JMX file has been uploaded to BlazeMeter |
| GitHub | The Actions workflow has been created in the configured repository |
| BlazeMeter | The test plan is set up in your BlazeMeter account |
| Azure DevOps | Connected if configured under Settings → Integrations |
| Datadog | Connected if a Datadog URL is configured; required for APM monitoring data in build reports. If not configured, a warning banner is displayed and APM data will be unavailable |

note

If you delete the GitHub workflow or BlazeMeter test plan externally (outside of Perf Agent), the corresponding tile will show Not Configured and the application status will change to PARTIALLY CONFIGURED. Re-run the configuration to restore it.


Build History

The Build History section lists all test runs triggered for this application.

Each entry shows:

  • Build ID — a unique identifier in the format `BUILD_YYYYMMDD_HHMMSS`
  • Timestamp — when the build was triggered

The section is split into the most recent build and an Older Builds expandable list for previous runs.

Click any build entry to open the build report. See Viewing Build Results below.
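Because the build ID embeds its trigger time, it parses directly into a timestamp, which is handy when scripting against exported build lists. A minimal sketch (the helper function is illustrative, not part of Perf Agent):

```python
from datetime import datetime

def parse_build_id(build_id: str) -> datetime:
    """Parse a BUILD_YYYYMMDD_HHMMSS identifier into a datetime."""
    prefix = "BUILD_"
    if not build_id.startswith(prefix):
        raise ValueError(f"unexpected build id: {build_id!r}")
    return datetime.strptime(build_id[len(prefix):], "%Y%m%d_%H%M%S")

builds = ["BUILD_20260401_181530", "BUILD_20260407_092014"]
# Newest first, matching the Build History ordering.
builds.sort(key=parse_build_id, reverse=True)
print(builds[0])  # BUILD_20260407_092014
```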


Execution Strategy

The Execution Strategy section displays the current execution mode and provides the option to trigger a test run manually.

Current Mode shows how the test is triggered — typically manual.

Click RUN NOW to open the GitHub Actions page for your repository, then click Run workflow to start the test. The workflow triggers the BlazeMeter test run, monitors execution, and sends result emails to all configured recipients once the run completes.

caution

The RUN NOW button is disabled if the GitHub integration has been removed or is not configured under Settings → Integrations. Reconfigure the GitHub integration before attempting to trigger a run.

tip

Add all email recipients (see Email Recipients below) before clicking RUN NOW or triggering the GitHub workflow. Recipients added after a run has started will not receive the report for that build.


Email Recipients

The Email Recipients section manages who receives automated test result reports after each run.

To add a recipient:

  1. Enter an email address in the input field
  2. Click ADD
  3. The address appears in the list below

To remove a recipient, click the delete icon (🗑️) next to their email address.

There is no limit on the number of recipients. Add all relevant stakeholders — team leads, project managers, or clients — who need visibility into test results.
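If you maintain a long stakeholder list, a light sanity check before entering each address helps catch typos and duplicates. The sketch below is illustrative only: the pattern is deliberately loose and Perf Agent performs its own validation when you click ADD.

```python
import re

# Loose shape check: something@domain.tld (deliberately permissive).
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def add_recipient(recipients: list[str], address: str) -> list[str]:
    """Append an address if it looks like an email and is not already listed."""
    address = address.strip().lower()
    if not EMAIL_RE.match(address):
        raise ValueError(f"not a valid email address: {address!r}")
    if address not in recipients:
        recipients.append(address)
    return recipients

team: list[str] = []
add_recipient(team, "lead@example.com")
add_recipient(team, "LEAD@example.com")  # case-insensitive duplicate, ignored
print(team)  # ['lead@example.com']
```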


Viewing Build Results

Click any build entry in Build History to open the full build report.

*Build report for BUILD_20260407_092014 showing Project, Application, Test Time, Transactions count, Observations, Datadog Remarks, ADO Defects, and Transaction Metrics table.*

Report Sections

| Section | Description |
| --- | --- |
| Build header | Build ID, Project, Application name, Test Time, and total number of Transactions recorded |
| Observations | AI-generated analysis comparing this build's response times to the previous build. Highlights significant improvements or regressions across endpoints |
| Datadog Remarks | Infrastructure and APM observations pulled from Datadog during the test window. Shows "No data available" if Datadog is not configured |
| ADO Defects | Lists any Azure DevOps defects automatically raised from this build. Shows "No data available" if ADO is not connected or no defects were raised |
| Transaction Metrics | A detailed table of per-transaction performance data |

Transaction Metrics Table

The Transaction Metrics table provides the following columns for each recorded transaction:

| Column | Description |
| --- | --- |
| Transaction | Name of the transaction (e.g., Login, Launch) |
| Total Hits | Total number of requests recorded for this transaction |
| TPS (Req/s) | Transactions per second: the throughput achieved during the test |
| Avg RT (ms) | Average response time in milliseconds |
| P90 RT (ms) | 90th percentile response time: 90% of requests completed within this time |
| P95 RT (ms) | 95th percentile response time |
| Max RT (ms) | The slowest response time recorded for this transaction |
| Error Rate | Percentage of requests that returned an error response |
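The percentile columns are straightforward to reproduce from raw response times. Here is a sketch using the nearest-rank method on hypothetical samples; BlazeMeter's exact interpolation may differ slightly.

```python
import math

def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile: the smallest sample such that at least
    p% of all samples are less than or equal to it."""
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))
    return ordered[rank - 1]

# Response times (ms) for a hypothetical transaction.
rts = [120, 135, 140, 150, 155, 160, 180, 210, 350, 900]

avg_rt = sum(rts) / len(rts)
print(f"Avg RT: {avg_rt} ms")               # 250.0
print(f"P90 RT: {percentile(rts, 90)} ms")  # 350
print(f"P95 RT: {percentile(rts, 95)} ms")  # 900
print(f"Max RT: {max(rts)} ms")             # 900
```

Note how one slow outlier (900 ms) barely moves P90 but dominates P95 and Max, which is why the report surfaces all three.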

Click Copy CSV to copy the transaction metrics data to your clipboard for use in external reports or spreadsheets.
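The copied CSV can be fed straight into a script for trend tracking. The sketch below assumes the export's header names match the table columns above, which may not be exact; adjust the keys to the actual headers you see.

```python
import csv
import io

# Simulated clipboard contents; real data comes from the Copy CSV button.
clipboard = """Transaction,Total Hits,TPS (Req/s),Avg RT (ms),P90 RT (ms),P95 RT (ms),Max RT (ms),Error Rate
Login,1800,1.0,250,350,900,900,0.5
Launch,3600,2.0,120,180,220,400,0.0
"""

rows = list(csv.DictReader(io.StringIO(clipboard)))

# Flag transactions whose P95 exceeds a 500 ms SLA.
slow = [r["Transaction"] for r in rows if float(r["P95 RT (ms)"]) > 500]
print(slow)  # ['Login']
```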

Downloading the Report

Click Download Report (top right of the build page) to download the full build report as a file. Share this with stakeholders who were not included in the automated email distribution.


What to Do Next

Once you have reviewed build results:

  • Compare against your NFR thresholds — Use the SLAs defined in the NFR Strategy Hub as the benchmark for evaluating whether transactions are meeting their targets.
  • Use the AI Chatbot — The built-in chatbot on the build report screen has full context of the results and can answer questions about specific observations or transactions.
  • Re-run after fixes — After addressing any regressions identified in Observations, trigger a new run via RUN NOW and compare the new build against the previous one.