AI Test Parameter Data-Driven Testing (DDT)
Test parameterization enables data-driven testing (DDT): a single piece of test logic runs across many sets of input values. Instead of maintaining redundant, near-identical test cases, teams can execute one test scenario against thousands of unique datasets, cutting maintenance overhead while expanding functional coverage.
AI Test Parameters
What is Test Parameterization?
Test parameterization is a simple way to run the same test with different sets of data. Instead of writing multiple test cases for each input, you create one test and use a data table to cover many scenarios.
This separates:
- Test logic → the steps you perform
- Test data → inputs and expected results
With this approach, you can test one feature across multiple conditions quickly and efficiently.
Test Parameterization Example:
Instead of creating 10 login test cases for different users, you can:
- Write one login test
- Add multiple sets of user data (admin, user, guest, invalid, etc.)
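The login example above can be sketched in plain Python. This is a minimal, illustrative sketch: the `login()` function and the user records are hypothetical stand-ins for whatever system you are testing, and the point is simply that one test logic runs over a table of data rows.

```python
# Hypothetical credential store: a stand-in for the real system under test.
VALID_USERS = {"admin": "admin#123", "user": "user#123", "guest": "guest#123"}

def login(username, password):
    """Toy login: succeeds only for a known user with the right password."""
    return VALID_USERS.get(username) == password

# One data table covers many scenarios instead of many near-identical tests.
LOGIN_CASES = [
    # (username, password, expected_result)
    ("admin",  "admin#123", True),
    ("user",   "user#123",  True),
    ("guest",  "guest#123", True),
    ("admin",  "wrong",     False),  # invalid password
    ("nobody", "x",         False),  # unknown user
]

def run_login_tests():
    """Run the single login test logic against every data row."""
    return [login(u, p) == expected for u, p, expected in LOGIN_CASES]

print(run_login_tests())  # one test logic, five data rows
```

Adding a new scenario means adding one row to `LOGIN_CASES`, not writing another test.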
Benefits:
- Saves time and effort
- Reduces duplicate test cases
- Keeps test cases clean and organized
- Improves test coverage
How It Works (Step-by-Step)
- Login to QA Touch
- Go to your Project Dashboard and click Test Cases
- Click the ➕ Add Test Case button
- Select the Test Case Parameter Template
- The "Add Test Case Parameter" screen opens with multiple tabs: Test Case Details, Test Case Parameters, Attachments, Custom Fields, and Integrations (Jira, GitLab, etc.)
- 👉 Open the Test Case Parameters tab
- Configure AI Prompt
- Select Prompt Template
- Prompt Description: provide clear instructions for the AI. Example: "Login with role-based access and verify permissions for admin, user, and guest"
- Set Parameter Limit: choose how many parameter sets to generate (e.g., 5 or 10)
- Generate Test Parameters
- Review Generated Test Data: once generation is complete, QA Touch automatically creates a parameter table
- Edit or Customize Parameters: you have full control over the generated data. You can:
  ✅ Edit any field manually
  ✅ Add new rows
  ✅ Delete unwanted rows
  ✅ Modify test scenarios
- 👉 Click + Add Row to create additional test data
- Fill Test Case Details
- Click Save to save the test case, or Save & Continue to add more
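Conceptually, the parameter table produced by the steps above is just rows of named values. The sketch below models it as a list of dicts to show how the review-and-edit operations (edit a field, add a row, delete a row) act on the data. The field names and values are hypothetical, not QA Touch's actual data model.

```python
# Illustrative parameter table: field names and values are made up.
parameter_table = [
    {"role": "admin", "username": "admin01", "expected_access": "full"},
    {"role": "user",  "username": "user01",  "expected_access": "limited"},
    {"role": "guest", "username": "guest01", "expected_access": "read-only"},
]

# "Edit any field manually": change one value in place.
parameter_table[1]["username"] = "user_primary"

# "+ Add Row": append a new scenario.
parameter_table.append(
    {"role": "invalid", "username": "unknown99", "expected_access": "denied"}
)

# "Delete unwanted rows": drop the guest scenario.
del parameter_table[2]

for row in parameter_table:
    print(row["role"], "→", row["expected_access"])
```

Each row later drives one execution of the same test logic, which is why keeping the table clean matters more than writing extra test cases.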
Creating Test Run & Result
- Navigate: From your dashboard, go to Project and select Test Runs.
- Initiate: Click the + Add Test Run button.
- Selection: Choose the All Test Cases option to include every case in the project.
- Details: Enter a descriptive Test Run Name, assign the appropriate User, and select the current Release.
- Finalize: Click Save. Your full suite is now ready for execution!
Shareable Public Result
Enable public sharing of test execution summaries to keep stakeholders informed. Shared reports give teams and stakeholders real-time visibility into quality metrics and coverage, segmented to highlight the data most relevant to each audience, so everyone stays aligned on the metrics that matter to their role.