AI Test Parameter Data-Driven Testing (DDT)

Test parameterization enables data-driven testing (DDT) by executing a single piece of test logic across multiple input values. Instead of duplicating test cases for each dataset, teams run one test scenario against many unique datasets, which reduces maintenance overhead while expanding functional coverage.

What is Test Parameterization?

Test parameterization is a simple way to run the same test with different sets of data. Instead of writing multiple test cases for each input, you create one test and use a data table to cover many scenarios.

This separates:

Test logic → the steps you perform
Test data → the inputs and expected results

With this approach, you can test one feature across multiple conditions quickly and efficiently.
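To illustrate the separation of test logic from test data, here is a minimal Python sketch. The `login_is_allowed` function and the data table are hypothetical stand-ins for a system under test, not part of QA Touch:

```python
# Test data: inputs and expected results live in a plain table.
login_data = [
    # (username, password, expected_success)
    ("admin", "admin-pass", True),
    ("guest", "guest-pass", True),
    ("admin", "wrong-pass", False),
]

def login_is_allowed(username, password):
    """Hypothetical system under test: a toy credential check."""
    valid = {"admin": "admin-pass", "guest": "guest-pass"}
    return valid.get(username) == password

# Test logic: one loop exercises every row of the data table.
def run_login_tests(data):
    results = []
    for username, password, expected in data:
        actual = login_is_allowed(username, password)
        results.append(actual == expected)
    return results

print(run_login_tests(login_data))  # → [True, True, True]
```

Adding a new scenario is just adding a row to `login_data`; the test logic never changes.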

Test Parameterization Example:

Instead of creating 10 login test cases for different users, you can:

  1. Write one login test
  2. Add multiple user data rows (admin, user, guest, invalid, etc.)

Benefits:

  1. Saves time and effort
  2. Reduces duplicate test cases
  3. Keeps test cases clean and organized
  4. Improves test coverage
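In a code-based test framework, the same idea is commonly expressed with pytest's `parametrize` marker. This is an illustrative sketch (the role names mirror the login example above; `has_dashboard_access` is a hypothetical system under test):

```python
import pytest

ROLES = [
    # (role, expected_access) — illustrative data rows
    ("admin", True),
    ("user", True),
    ("guest", False),
    ("invalid", False),
]

def has_dashboard_access(role):
    """Hypothetical system under test: role-based access check."""
    return role in {"admin", "user"}

@pytest.mark.parametrize("role,expected", ROLES)
def test_login_access(role, expected):
    # One test function; pytest runs it once per data row.
    assert has_dashboard_access(role) == expected
```

pytest reports each row as a separate test, so a failure pinpoints the exact dataset that broke.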

How It Works (Step-by-Step)

  1. Login to QA Touch
  2. Go to your Project Dashboard and click Test Cases
  3. Click the ➕ Add Test Case button
  4. Select the Test Case Parameter Template
  5. The “Add Test Case Parameter” template screen opens with multiple tabs: Test Case Details, Test Case Parameters, Attachments, Custom Fields, and Integrations (Jira, GitLab, etc.)
  6. 👉 Open the Test Case Parameters tab
  7. Configure the AI Prompt
    1. Select a Prompt Template
    2. Prompt Description - provide clear instructions for the AI. Example: Login with role-based access and verify permissions for admin, user, and guest
    3. Set the Parameter Limit (e.g., 5 or 10)
  8. Click Generate Test Parameters
  9. Review Generated Test Data - once generation is complete, QA Touch automatically creates a parameter table.
  10. Edit or Customize Parameters - you have full control over the generated data: you can edit any field manually, add new rows, delete unwanted rows, and modify test scenarios.
  11. 👉 Click + Add Row to create additional test data
  12. Fill in the Test Case Details
  13. Click Save to save the test case, or Save & Continue to add more

Creating Test Run & Result

  1. Navigate: From your dashboard, go to Project and select Test Runs.
  2. Initiate: Click the + Add Test Run button.
  3. Selection: Choose the All Test Cases option to include every case in the project.
  4. Details: Enter a descriptive Test Run Name, assign the appropriate User, and select the current Release.
  5. Finalize: Click Save. Your full suite is now ready for execution!

Shareable Public Result

Enable public sharing of test execution summaries to keep stakeholders informed. These reports offer real-time visibility into quality metrics and coverage, segmented to highlight the data most relevant to each audience, so teams and stakeholders stay aligned on the metrics that matter to their roles.