
How to Generate Test Cases with AI

Create structured QA test cases covering functional, edge, and negative scenarios for any feature or user story.

Manual test case writing is slow and coverage is inconsistent. AI can analyze a feature specification or user story and generate a comprehensive test case suite — including test ID, preconditions, steps, expected results, and edge cases — in a format ready for import into TestRail, Jira, or any QA tool.

Why manual test case writing leaves gaps

Manual QA test case writing has a systematic bias toward cases the writer already knows to test. Happy path coverage is typically thorough because testers write the scenarios they understand intuitively. Negative cases — invalid inputs, permission violations, concurrent operations, boundary conditions — are inconsistently covered because they require deliberately thinking adversarially about the feature being tested. Security-related edge cases (SQL injection in form fields, path traversal in file uploads, auth token manipulation) are consistently undertested because they require security-specific knowledge most feature testers do not have top of mind. AI generates test cases by systematically applying all of these categories rather than relying on the tester's intuition and knowledge.

How AI generates comprehensive test case coverage

AI generates test cases by analyzing the feature specification and identifying the complete decision space: what inputs are valid, what inputs are invalid and how, what system states affect behavior, what permission boundaries exist, and what concurrent or race condition scenarios are possible. For each decision branch, it generates a test case with the appropriate preconditions, steps, and expected result. This systematic approach can produce 40-60% more test cases than manual writing — not because it invents unrealistic scenarios, but because it covers the decision space methodically rather than intuitively. The highest-value AI-generated cases are typically in the negative and security categories, where human testers have the most blind spots.
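To make "covering the decision space" concrete, here is a minimal sketch of boundary-value enumeration for a single length-constrained field. The field limits (3-20 characters) are hypothetical examples, not taken from any specification in this article:

```python
def boundary_cases(min_len, max_len):
    """Enumerate boundary-value inputs for a length-constrained string field.

    Each entry becomes one test case: the input plus an expected
    accept/reject outcome.
    """
    return {
        "empty": "",                        # below any minimum
        "below_min": "a" * (min_len - 1),   # one short of the limit
        "at_min": "a" * min_len,            # smallest valid value
        "at_max": "a" * max_len,            # largest valid value
        "above_max": "a" * (max_len + 1),   # one past the limit
    }

# Hypothetical field: a username constrained to 3-20 characters.
cases = boundary_cases(3, 20)
```

This is the mechanical pattern AI applies across every constrained field in a specification, which is why boundary coverage from AI tends to be more complete than intuition-driven coverage.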

What inputs produce the most useful test case output

Test case quality from AI depends on the specificity of the feature specification you provide. Acceptance criteria written in Given/When/Then format produce better test cases than narrative feature descriptions because they make the expected behavior explicit. Providing the user roles and permission model helps AI generate permission boundary tests. Specifying the output format before generating (TestRail import format, Jira table, plain text with specific columns) ensures the output is directly usable rather than requiring reformatting. If there are known edge cases from previous bugs or production incidents, include them — AI cannot know your production history, but it can ensure those scenarios are covered if you provide them.

Step-by-step guide

1. Provide the feature specification. Paste the user story, acceptance criteria, or feature description as the basis for test generation.

2. Specify the test case format. Define the columns you need: ID, description, preconditions, steps, expected result, and priority.

3. Request negative and edge cases. Explicitly ask for invalid input tests, permission boundary tests, and concurrency scenarios.

4. Organize by test type. Ask AI to group cases into happy path, validation, security, and performance categories.
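The column structure from steps 2 and 4 can be sketched as a single record, which is useful if you post-process AI output programmatically. This is an illustrative shape, not a format any specific QA tool mandates; the field values are invented examples:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    test_id: str          # e.g. "TC-001"
    test_type: str        # happy path / validation / security / performance
    description: str
    preconditions: str
    steps: list           # ordered manual steps
    expected_result: str
    priority: str         # High / Medium / Low

# Hypothetical example case for a login feature.
tc = TestCase(
    test_id="TC-001",
    test_type="Validation",
    description="Reject login with empty password",
    preconditions="User account exists",
    steps=["Open login form", "Enter valid email",
           "Leave password blank", "Submit"],
    expected_result="Form shows a 'password required' error; no request is sent",
    priority="High",
)
```

Asking AI to emit every case with exactly these fields makes the output easy to validate and convert downstream.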

Ready-to-use prompts

Feature test suite from acceptance criteria
Generate a complete test case suite for the following feature. Output as a Markdown table with these columns: Test ID | Test Type | Description | Preconditions | Test Steps | Expected Result | Priority (High/Medium/Low).

Feature: [FEATURE NAME]
User roles: [LIST ROLES — e.g. admin, standard user, guest]
Acceptance criteria:
[PASTE ACCEPTANCE CRITERIA]

Generate test cases covering all of the following categories (label each with its type):
1. Happy path — standard successful flows for each user role
2. Validation — invalid inputs, missing required fields, format violations
3. Authorization — accessing features without correct permissions, cross-user data access
4. Boundary — edge values (empty, maximum length, 0, negative numbers)
5. Security — SQL injection, XSS in text fields, CSRF if applicable
6. State — behavior when system is in unexpected state (already logged in, session expired, concurrent requests)

Prioritize High for cases that could cause data loss or security issues, Medium for functional failures, Low for UI/UX edge cases.

Why it works

Explicitly labeling test types and listing all categories prevents AI from defaulting to happy path and validation tests only. The priority rubric (data loss = High, functional = Medium, UX = Low) produces a prioritized test plan that QA can work through systematically.

Negative and security test cases for an API endpoint
Write [NUMBER] negative and security test cases for the following API endpoint. Include the expected HTTP status code and response body description for each.

Endpoint: [METHOD] [PATH]
Description: [WHAT THE ENDPOINT DOES]
Authentication: [AUTH TYPE]
Request constraints: [FILE SIZE LIMITS, FIELD VALIDATIONS, RATE LIMITS, ETC.]

Cover at minimum:
- Authentication: missing token, expired token, malformed token, token for wrong user
- Input validation: missing required fields, wrong field types, values exceeding limits
- File handling (if applicable): wrong MIME type, corrupted file, zero-byte file, file exceeding size limit
- Injection: SQL injection in string fields, path traversal in filename fields, XSS in text fields
- Rate limiting: behavior at and beyond rate limit threshold
- Concurrency: duplicate requests within the same second

Format: numbered list with Test Case | Input | Expected Status Code | Expected Response.

Why it works

Listing injection and concurrency categories explicitly ensures they are covered — these are the categories most commonly omitted from manually written negative test suites. Asking for expected status codes alongside each case makes the output actionable for developers building or reviewing the endpoint.
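To show what "expected status code per case" looks like once the AI output is turned into executable checks, here is a self-contained sketch. The validator is a toy stand-in for an endpoint's input-validation layer (no real HTTP involved), and the field limit and payload shapes are hypothetical:

```python
MAX_NAME_LEN = 50  # hypothetical filename length limit

def validate_upload(payload):
    """Return the HTTP status a well-behaved endpoint should respond with."""
    if "token" not in payload:
        return 401                      # missing auth token
    if "filename" not in payload:
        return 400                      # missing required field
    if len(payload["filename"]) > MAX_NAME_LEN:
        return 400                      # value exceeds limit
    if ".." in payload["filename"]:
        return 400                      # path traversal attempt rejected
    return 200

# Negative cases expressed as (name, input, expected status) — the same
# structure the prompt above asks AI to produce.
negative_cases = [
    ("missing token",  {"filename": "a.png"},                401),
    ("missing field",  {"token": "t"},                       400),
    ("over limit",     {"token": "t", "filename": "a" * 51}, 400),
    ("path traversal", {"token": "t", "filename": "../etc"}, 400),
]

for name, payload, expected in negative_cases:
    assert validate_upload(payload) == expected, name
```

Because each case carries its expected status code, a developer can wire the list directly into a test runner and get pass/fail signal per negative scenario.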

Practical tips

  • Provide acceptance criteria in Given/When/Then format rather than narrative prose — it makes the expected behavior explicit and AI generates more targeted test cases from it.
  • Always explicitly list security test categories (SQL injection, XSS, path traversal) in your prompt — AI will not include security tests unless you ask for them by category.
  • Ask AI to generate test cases in import-ready format (CSV for TestRail, Markdown table for Jira) from the start — reformatting 50 test cases after generation wastes the time you just saved.
  • After generating the suite, ask 'what scenarios that could cause data loss or security vulnerabilities are not covered?' — this surfaces the highest-priority gaps.
  • Include known past bugs as additional test cases in your prompt — AI cannot know your production incident history, but ensuring those scenarios are always covered prevents regressions.
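As a sketch of the "import-ready from the start" tip, here is how generated cases can be serialized to CSV with Python's standard library. "TestRail-ready" here just means a plain CSV whose header row matches the columns you configured in your QA tool; the example row is invented:

```python
import csv
import io

columns = ["Test ID", "Test Type", "Description", "Preconditions",
           "Test Steps", "Expected Result", "Priority"]

rows = [
    ["TC-001", "Happy path", "Standard login succeeds",
     "User account exists",
     "1. Open form 2. Enter credentials 3. Submit",
     "User lands on dashboard", "High"],
]

# Write to an in-memory buffer; swap in open("cases.csv", "w", newline="")
# to produce a file for import.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(columns)
writer.writerows(rows)
```

Asking AI for this exact column order in the prompt means the output drops straight into the import dialog with no manual reformatting.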

Recommended AI tools

  • ChatGPT
  • Claude
  • GitHub Copilot

Continue learning

  • Write unit tests
  • Code review automation
  • Debug code


More Coding use cases

  • Debug Code
  • Write Unit Tests
  • Write API Documentation
  • Refactor Legacy Code