Programming · 21 min read · May 10, 2026

AI Prompts for QA & Testing: Unit Tests, E2E, API, Security & Performance (2026)

By Promptprepare Team · AI Prompt Experts

Complete AI prompt library for QA engineers and developers. Production-ready prompts for unit testing (Jest, Pytest, JUnit), integration tests, E2E with Playwright and Cypress, API testing, security testing, performance testing with k6, and CI/CD test automation.

#QA Testing · #Jest · #Playwright · #Cypress · #Pytest · #k6 · #AI Testing Prompts

Why AI Is the Most Powerful Tool in a QA Engineer's Stack

Testing is the highest-ROI use case for AI in software development. A senior engineer can write 10 unit tests per hour by hand. With well-engineered AI prompts, the same engineer generates 50 tests per hour and catches edge cases they would have missed. But AI-generated tests without the right prompts produce happy-path-only test suites that give false confidence and miss the production bugs that matter.

This guide gives you the exact prompts that produce comprehensive, maintainable test suites: unit tests with 100% branch coverage, E2E tests that survive CI, security tests against OWASP Top 10, and k6 performance tests that catch regressions before deployment.

1. Unit Tests with Full Branch Coverage (Jest)

The most common AI testing mistake: asking for tests without specifying the branches to cover. Here is how to fix that.

You are a test engineering expert specializing in JavaScript/TypeScript unit testing.

Write comprehensive Jest 29 unit tests for this function:

[PASTE FUNCTION CODE HERE]

Requirements:
- Test every branch of the function - list the branches you identify before writing any tests
- Mock all external dependencies using jest.mock() with factory functions
- Use jest.spyOn() for functions where you need to verify they were called with specific args
- Cover these categories:
  * Happy path: valid inputs producing expected output
  * Invalid inputs: each type of validation failure
  * Error cases: each external dependency failure (network error, DB timeout, validation error)
  * Boundary values: empty arrays, zero, max int, null, undefined, empty string
  * Async errors: rejected promises and thrown errors inside async functions
- AAA pattern with labeled comments: // Arrange, // Act, // Assert
- Descriptive test names: 'should return X when Y' pattern
- beforeEach for shared setup, afterEach to restore mocks (jest.restoreAllMocks())

Target: 100% branch coverage. Show the --coverage output header in a comment at the top of the file.

Why it works: Asking the AI to list branches before writing tests ensures it identifies the error paths your function has โ€” not just the happy path. This single step catches 80% of the coverage gaps in AI-generated tests.

2. Unit Tests for Python with pytest

You are a pytest expert for Python 3.12 applications.

Write comprehensive pytest unit tests for this function:

[PASTE PYTHON FUNCTION HERE]

Requirements:
- Use pytest-asyncio for async functions (asyncio_mode='auto')
- Mock with unittest.mock.patch and MagicMock / AsyncMock (for async functions)
- Parametrize validation tests with @pytest.mark.parametrize - group all invalid inputs into one parametrized test
- Cover:
  * Happy path with realistic data
  * Each validation error condition
  * Each external dependency failure (database, Redis, HTTP client)
  * Boundary values specific to this function's logic
- Fixtures in conftest.py: database session (rolled back after each test), mock Redis (fakeredis)
- Custom assertion helpers if asserting on complex objects
- Test names: test_[function_name]_[scenario]_[expected_result]

Show: all test functions, conftest.py fixtures used, and the pytest.ini configuration needed to run them.

3. React Component Tests with Testing Library

You are a React Testing Library expert.

Write comprehensive tests for this React component:

[PASTE COMPONENT CODE HERE]

Testing strategy:
- Test user behavior, not implementation details (no testing internal state or refs)
- userEvent over fireEvent (async user interactions)
- Accessible queries: getByRole, getByLabelText, getByText โ€” never getByTestId unless no alternative
- MSW (Mock Service Worker) for API calls โ€” no jest.mock() of fetch/axios

Test cases required:
- Renders correctly with default props (snapshot test for structure, not style)
- Each user interaction flow (click, type, submit)
- Loading state while API call is pending
- Success state after API response
- Error state when API returns error
- Accessibility: all interactive elements reachable by keyboard, screen reader labels present
- Edge cases: empty data array, single item, maximum items, special characters in text

Setup: custom render wrapper with providers (React Query, Router, Context). Show the test-utils.tsx file.

4. API Integration Tests (Supertest + Jest)

You are an API integration testing expert.

Write Supertest integration tests for this Express endpoint:

Endpoint: [PASTE ENDPOINT DEFINITION]

Test every HTTP scenario:
1. 200/201: valid request - assert full response shape (not just status code)
2. 400: each Zod validation error - assert field-level error messages
3. 401: no Authorization header - assert WWW-Authenticate header present
4. 401: malformed JWT - assert error code 'invalid_token'
5. 401: expired JWT - assert error code 'token_expired'
6. 403: authenticated but missing permission - assert error code 'forbidden'
7. 404: resource not found - assert error message references the resource type
8. 409: conflict (duplicate or version mismatch) - assert the conflicting field
9. 422: unprocessable entity (valid JSON but business logic violation)
10. 429: rate limit exceeded - assert Retry-After header is a valid number
11. 500: database connection failure - assert no stack trace in response body

Test setup:
- In-memory MongoDB (mongodb-memory-server) or test PostgreSQL (testcontainers)
- Fresh database state before each test (seed in beforeEach, cleanup in afterEach)
- JWT token factory: createTestToken(userId, role, tenantId) helper
- No shared state between parallel test workers

Why it works: Testing all 11 status code scenarios, including rate limiting and the absence of stack traces in 500 errors, ensures your API passes both a security audit and a QA review before it ships.

5. Playwright E2E Tests

You are a Playwright E2E testing expert for modern web applications.

Write Playwright E2E tests for this critical user journey:

Journey: [DESCRIBE THE USER FLOW - e.g., "New user signs up, creates a project, invites a team member, creates a task, and marks it complete"]

Architecture requirements:
- Page Object Model: one class per page/section (LoginPage, DashboardPage, ProjectPage)
- Each POM method wraps a user action (not a Playwright API call)
- Auth fixture: authenticate once, save storage state, reuse across all tests in the suite
- External service mocks: use page.route() to intercept and mock payment APIs, email services
- Parallel execution safe: each test creates its own data with unique identifiers (no shared test data)
- Visual regression: page.screenshot() on key states, compare to baseline
- Network assertions: use page.waitForResponse() to assert API calls happen with correct payload

Test cases:
- Happy path: complete the journey successfully
- Validation errors: trigger form validation at each step, assert error messages
- Network failure: mock a failed API call mid-journey, assert recovery behavior
- Concurrent access: two users editing the same resource, assert conflict resolution

Include: playwright.config.ts, fixture files, and GitHub Actions job configuration.

6. Cypress Component & E2E Tests

You are a Cypress testing expert.

Write Cypress tests for a React dashboard application:

Component tests (cy.mount):
- DataTable component: test sorting, filtering, pagination, row selection with real data
- Form component: test validation, submission, error handling, loading state

E2E tests (full browser):
Custom commands (commands.ts):
- cy.login(email, password): authenticate via UI, store session in Cypress.env
- cy.createProject(name, members[]): API call to seed test data before the test
- cy.interceptAndWait(method, url, alias): wrap cy.intercept + cy.wait pattern

Test structure:
- describe blocks for each feature area
- beforeEach: log in, seed required data via cy.task() (Node.js API calls, not UI)
- afterEach: clean up created data via API

Specific tests:
- Project creation flow with all field validations
- File upload: drag-and-drop a CSV, assert processing and row count
- Real-time updates: two cy.session() windows, update in one, assert change in other

Show: cypress.config.ts, commands.ts, a component test, and an E2E test file.

7. Security Testing Prompts (OWASP Top 10)

You are an application security engineer specializing in OWASP Top 10.

Write security tests for this REST API endpoint:

[PASTE ENDPOINT CODE + URL]

Test every attack category:

1. Injection (SQL/NoSQL/Command):
   - Send MongoDB operators in JSON body: { "email": { "$gt": "" } }
   - Send SQL UNION attack in query params
   - Expected: 400 validation error, no data returned, attack logged

2. Broken Authentication:
   - Present expired JWT - assert 401, not 500
   - Brute force: 10 rapid requests - assert 429 after threshold
   - Token from a different user - assert data isolation (403 or 404)

3. Broken Access Control:
   - Authenticated user accessing another tenant's resource via direct URL manipulation
   - Expected: 404 (not 403 - don't reveal the resource exists)

4. Security Misconfiguration:
   - Request to /.env, /admin without auth, /__debug__
   - Check response headers: X-Frame-Options, X-Content-Type-Options, Strict-Transport-Security

5. XSS:
   - Submit script tags and event handlers in text fields
   - Assert stored content is HTML-escaped when retrieved

6. SSRF:
   - If endpoint accepts a URL parameter: submit internal IP addresses (169.254.169.254, 192.168.1.1)
   - Expected: validation rejection before any HTTP request is made

Use pytest with the requests library. Include a setup fixture that starts the test server.

Why it works: Structuring security tests by OWASP category turns "does it have security tests?" into a checklist you can report on. Each test type maps to a specific CVE class, making the coverage auditable.

8. Performance Testing with k6

You are a performance engineering expert specializing in k6 load testing.

Write a comprehensive k6 performance test for this API:

Target endpoint: [URL + METHOD + example request payload]
SLA requirements: P95 < 200ms, P99 < 500ms at 1,000 concurrent virtual users, error rate < 0.1%

Test scenarios (implement all in one script with scenarios config):
1. Smoke test: 5 VUs for 1 minute - verify basic correctness
2. Load test: ramp from 0 to 1,000 VUs over 5 min, hold 10 min, ramp down over 5 min
3. Spike test: 100 VUs baseline, spike to 5,000 for 30 seconds, return to baseline
4. Stress test: ramp to 3,000 VUs and hold until error rate exceeds 1% - find the breaking point
5. Soak test: 500 VUs for 2 hours - detect memory leaks and degradation over time

Assertions per scenario:
- http_req_duration: ['p(95)<200', 'p(99)<500']
- http_req_failed: ['rate<0.001']
- custom metric: track login_duration separately from API duration

Output formats: xunit (for CI), JSON summary (for dashboards), console (for local runs)

Include: k6 script, GitHub Actions integration that fails the job if thresholds are breached, and a Grafana dashboard JSON for k6 Cloud metrics.

9. Contract Testing with Pact

You are a contract testing expert specializing in Pact for microservices.

Set up consumer-driven contract testing between a React frontend (consumer) and a Node.js API (provider):

Consumer tests (frontend, Jest + @pact-foundation/pact):
- Define the contract for GET /api/tasks: expected request headers, response shape, example response
- Define the contract for POST /api/tasks: request body shape, success response, validation error response
- Publish the generated pact file to Pact Broker

Provider verification (API, Jest + @pact-foundation/pact):
- Load all consumer contracts from Pact Broker
- For each interaction: replay the request against the real (test) server, verify the response matches the contract
- Provider states: set up database fixtures for "a task exists" and "task list is empty" states

CI integration:
- Consumer CI: run consumer tests + publish pact on every merge to main
- Provider CI: verify ALL consumer contracts before deploying - fail deployment if any contract is broken
- Pact Broker: webhook configuration to trigger provider verification when consumer publishes a new contract

Output: consumer test file, provider verification file, Pact Broker webhook config, and GitHub Actions for both services.

10. Test Coverage Analysis & Strategy

You are a QA strategy expert.

Analyze the test coverage for this codebase and recommend a testing strategy:

Coverage report: [PASTE ISTANBUL/COVERAGE.PY REPORT]
Current test files: [PASTE FILE LIST OR DESCRIBE TEST SUITE]
Application type: [web API / React SPA / microservice / etc.]

Provide:
1. Coverage gap analysis: which uncovered branches represent the highest production risk?
2. Test pyramid audit: is the ratio of unit/integration/E2E tests appropriate? What is the optimal ratio for this type of application?
3. The 10 test cases that would provide the most ROI to add next (rank by risk × complexity)
4. Flaky test diagnosis: identify patterns in test descriptions that suggest brittleness (time-dependent, network-dependent, order-dependent)
5. Missing test categories: security tests, accessibility tests, visual regression, performance regression
6. CI/CD integration: at what coverage threshold should the build fail? Which test suites should block merge vs run post-merge?

Output a testing roadmap with quarterly milestones.

11. CI/CD Test Pipeline

You are a CI/CD engineer specializing in test automation pipelines.

Design a GitHub Actions test pipeline for a full-stack TypeScript application:

Stage 1 - Fast feedback (2-3 min, blocks PRs):
- TypeScript compile check (tsc --noEmit)
- ESLint + Prettier
- Unit tests (Jest, Vitest) - parallel across 4 workers
- Coverage gate: fail if unit coverage drops below 80%

Stage 2 - Integration (5-8 min, blocks PRs):
- Spin up PostgreSQL + Redis via services: block (healthcheck)
- API integration tests (Supertest)
- React component tests (Testing Library + Vitest)

Stage 3 - E2E (10-15 min, required for main branch merge only):
- Build Docker image, start application
- Playwright tests across Chromium + Firefox (WebKit optional; IE is not supported by Playwright)
- Visual regression comparison against baseline (fail on diff > 0.1%)

Stage 4 - Security + Performance (parallel, non-blocking for PRs, blocking for main):
- npm audit --audit-level=high
- OWASP ZAP baseline scan against running application
- k6 smoke test (5 VUs, 60s) - fail if P95 > 500ms

Cache strategy: node_modules by package-lock.json hash, Playwright browsers by version, Docker layers by dependency hash.
Notification: Slack on E2E failure with screenshot attachment.

12. Test Data Management

You are a test data engineering expert.

Build a test data factory system for a TypeScript full-stack application:

Factories (using @faker-js/faker):
- userFactory(overrides?): creates realistic User objects with email, name, avatar URL, role
- organizationFactory(overrides?): creates Organization with plan, feature flags, billing status
- projectFactory(org: Organization, overrides?): creates Project linked to org
- taskFactory(project: Project, assignee?: User, overrides?): creates Task with realistic content

Database seeding:
- seed-minimal.ts: one org, two users, one project, 10 tasks - used for development
- seed-scale.ts: 5 orgs, 50 users, 20 projects, 1,000 tasks - used for performance testing
- seed-edge-cases.ts: unicode characters, max-length strings, null optional fields, boundary dates

Test helpers:
- createTestScenario(scenario: 'overdue-tasks' | 'full-project' | 'empty-org'): returns fully set-up database state with references
- cleanDatabase(tables?: string[]): delete in correct FK order
- withDatabase(fn): wrap test in transaction that rolls back after

TypeScript throughout. Show: factory functions, seed scripts, and usage in a Jest test.

Good vs Bad QA Testing Prompts

Unit tests
❌ Bad: "Write tests for my function"
✅ Good: "Write Jest 29 tests for this validateTask() function: [CODE]. List every branch first. Cover: happy path, missing required fields, dueDate in past, title too long, XSS in title field, and the async Zod validation failure. Mock the database. Target 100% branch coverage."

E2E tests
❌ Bad: "Test the login page"
✅ Good: "Write Playwright tests for login using Page Object Model. Auth fixture that stores session state and reuses it. Mock the email service with page.route(). Cover: valid login, wrong password (assert same 401 to prevent enumeration), expired session redirect, remember-me checkbox persistence."

API tests
❌ Bad: "Test my API endpoint"
✅ Good: "Write Supertest tests for DELETE /tasks/:id. Test: 204 success (assert DB record soft-deleted), 401 no token, 403 wrong tenant, 404 not found, 409 already deleted, 429 rate limited (assert Retry-After header). Use in-memory MongoDB, JWT factory, seed data in beforeEach."

Performance
❌ Bad: "Load test my app"
✅ Good: "Write a k6 script for GET /api/dashboard. SLA: P95 < 300ms at 500 VUs. Phases: 2-min ramp, 10-min hold, 2-min ramp-down. Thresholds: p(95)<300, p(99)<500, rate<0.001. Authenticate each VU with a JWT from /auth/login setup. Export JSON summary for Grafana."

Security
❌ Bad: "Check my app for security issues"
✅ Good: "Write pytest security tests for POST /api/tasks. Test NoSQL injection via MongoDB operators in body, SSRF via URL field (try 127.0.0.1 and 169.254.169.254), XSS in title field (assert HTML-escaped on retrieval), and brute force protection (assert 429 after 5 rapid requests)."

Generate a custom testing prompt for your exact framework and coverage goals → Try PromptPrepare free
