In Catalio, a Test Case is a structured validation plan that verifies a requirement or use case works correctly, and a Test Result is the recorded outcome of executing that plan. Together they form your quality assurance loop: test cases define what success looks like, and test results prove whether that standard was met.
Unlike informal testing approaches, Catalio test cases are explicit, traceable, and linked directly to the requirements they validate. Each execution produces a result record that feeds your team’s quality dashboards and surfaces trends over time. This creates a clear chain from business need to implementation to verified evidence.
Why Testing Matters
Testing isn’t just about finding bugs—it’s about confidence. Well-designed test cases provide:
- Validation: Confirm requirements are implemented correctly
- Regression Prevention: Detect when changes break existing functionality
- Documentation: Test cases serve as executable specifications
- Risk Management: Identify failure scenarios before production
- Stakeholder Communication: Demonstrate how quality is verified
- Traceability: Link validation directly to business requirements
When requirements change, test cases must change too. This bidirectional relationship ensures your validation strategy stays synchronized with your specifications.
Test Case Types
Catalio supports four distinct test case types, each serving different validation purposes:
Unit Tests
What they test: Individual components, functions, or isolated pieces of functionality.
Example: Validation Rule Syntax Check
A Salesforce administrator creates a validation rule to ensure email addresses contain the “@” symbol. A unit test verifies:
- The validation rule expression syntax is correct
- The rule triggers on invalid email formats
- The rule passes on valid email formats
- Error messages display properly
When to use: Validating individual rules, formulas, components, or functions in isolation.
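To make this concrete, here is a minimal sketch of what such a unit test could look like. It is written in Elixir's ExUnit purely as an illustration; `Contacts.validate_email/1` is a hypothetical function standing in for the rule under test, and a real Salesforce validation rule would typically be exercised from Apex tests instead.

```elixir
defmodule Contacts.ValidateEmailTest do
  use ExUnit.Case, async: true

  # `Contacts.validate_email/1` is a hypothetical stand-in for the validation
  # rule under test; it is assumed to return :ok or {:error, message}.
  test "accepts a well-formed email address" do
    assert :ok == Contacts.validate_email("user@example.com")
  end

  test "rejects an address without an @ symbol" do
    assert {:error, "Please enter a valid email address"} ==
             Contacts.validate_email("userexample.com")
  end
end
```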
Integration Tests
What they test: How components interact with each other and with external systems.
Example: Salesforce Validation Rule Activation
A validation rule must integrate with Salesforce’s object lifecycle. An integration test verifies:
- The rule activates when saving a Contact record
- The rule integrates with the page layout correctly
- Error handling works with the UI framework
- The rule doesn’t conflict with other validation rules
When to use: Validating connections between systems, API integrations, data flows, or multi-component workflows.
End-to-End Tests
What they test: Complete user workflows from start to finish across the entire system.
Example: Complete Contact Creation Workflow
A sales representative creates a new contact in Salesforce. An end-to-end test validates:
- User navigates to Contacts tab
- User clicks “New Contact” button
- User fills in contact form (including email)
- Validation rules trigger on invalid data
- User corrects validation errors
- Contact saves successfully
- Contact appears in contact list
- Notification email is sent to the contact owner
When to use: Validating complete user journeys, business processes, or cross-system workflows that span multiple requirements.
Manual Tests
What they test: Aspects requiring human judgment, user experience validation, or scenarios difficult to automate.
Example: User Interface Verification
A business analyst verifies the contact form user experience:
- Error messages are clear and actionable
- Field placement is intuitive for sales users
- Required field indicators are visible
- Tab order follows natural workflow
- Help text provides sufficient guidance
- Visual design meets brand standards
When to use: User experience validation, visual design checks, exploratory testing, accessibility verification, or complex judgment-based scenarios.
Components of a Test Case
Every test case in Catalio includes:
Core Identification
Title: Brief, descriptive name that clearly indicates what is being tested
- ✅ Good: “Validate email format on contact creation”
- ❌ Poor: “Test email”
Description: Detailed explanation of what this test case validates and why it matters. Include context about the business requirement being verified.
Test Definition
Test Type: Specify whether this is a unit, integration, end-to-end, or manual test. This helps teams organize test execution and understand the validation scope.
Expected Result: Clear statement of what should happen when the test passes. This must be specific, observable, and measurable.
Test Steps (optional): Ordered sequence of actions to execute the test. Particularly valuable for manual tests and complex scenarios.
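Taken together, these fields form a small, self-describing record. As a rough sketch (the field names below are inferred from the descriptions above, not the exact Catalio schema), a test case might be captured like this:

```elixir
# Illustrative shape of a test case; field names are assumptions, not the Catalio schema.
test_case = %{
  title: "Validate email format on contact creation",
  description: "Verifies the Contact email validation rule enforces the local@domain pattern.",
  test_type: :unit,  # one of :unit, :integration, :end_to_end, :manual
  expected_result: "Invalid emails are rejected with: \"Please enter a valid email address\".",
  test_steps: [
    "Attempt to save contact with valid email: user@example.com",
    "Verify save succeeds",
    "Attempt to save contact with invalid email: userexample.com",
    "Verify validation error appears"
  ]
}
```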
Execution Tracking
When test cases are executed, Catalio records:
- Executed At: Timestamp when the test ran
- Executed By: User who executed the test
- Execution Time: How long the test took (in milliseconds)
- Environment: Where the test ran (staging, production, development)
- Notes: Additional observations, issues, or context from execution
This metadata provides audit trails and helps identify patterns in test failures.
Linking Test Cases
Test cases don’t exist in isolation—they’re connected to the artifacts they validate.
Linking to Requirements
Direct Validation: Link test cases directly to requirements for straightforward validation.
Example:
- Requirement: “Email addresses must be validated on contact creation”
- Test Case: “Validate email format accepts valid addresses”
- Test Case: “Validate email format rejects invalid addresses”
- Test Case: “Validate email format shows clear error messages”
Direct linking works well for requirements with clear, testable acceptance criteria.
Linking to Use Cases
Scenario-Based Testing: Link test cases to use cases when validating specific user scenarios.
Example:
- Use Case: “Sales rep creates contact with invalid email”
- Test Case: “Verify validation rule blocks save with invalid email”
- Test Case: “Verify error message guides user to correction”
- Test Case: “Verify successful save after email correction”
Scenario-based testing validates complete workflows and edge cases described in use cases.
Flexible Linking
Catalio requires each test case to link to either a requirement or a use case (or both). This flexibility supports different testing strategies:
- Requirement-focused: Test individual requirements in isolation
- Use case-focused: Test complete scenarios and workflows
- Hybrid: Link to both when a test validates a requirement within a specific scenario
Choose the linking strategy that best fits your validation needs.
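The "link to a requirement, a use case, or both" rule is easy to express in code. The sketch below shows one way it could be enforced; the module and field names are illustrative, not the Catalio implementation.

```elixir
defmodule TestCaseLinks do
  # Illustrative only: a test case is valid when it links to a requirement,
  # a use case, or both; it is rejected when it links to neither.
  def validate(%{requirement_id: nil, use_case_id: nil}),
    do: {:error, "test case must link to a requirement or a use case"}

  def validate(_test_case), do: :ok
end

TestCaseLinks.validate(%{requirement_id: "REQ-42", use_case_id: nil})
#=> :ok
TestCaseLinks.validate(%{requirement_id: nil, use_case_id: nil})
#=> {:error, "test case must link to a requirement or a use case"}
```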
Test Results
A Test Result is a first-class record in Catalio that captures the outcome of executing a test case. Every time you run a test — whether manually or through an automated process — you create a test result. Over time, these records build an evidence base for the quality of your requirements.
Purpose
Test results serve several complementary purposes:
- Validation evidence: Prove that a requirement has been tested and passed
- Regression detection: Spot when a previously-passing test starts failing after changes
- Trend analysis: Track test stability and failure patterns over time
- Audit trail: Provide a timestamped record of every execution for compliance
- Quality metrics: Feed dashboards showing pass rates across requirements and use cases
Status Values
Every test result carries one of four status values, matching the Catalio.Documentation.TestResult
resource:
| Status | Meaning |
|---|---|
| Passed | Test executed successfully; expected result was achieved |
| Failed | Test ran but did not produce the expected result |
| Skipped | Test was intentionally not executed during this run |
| Blocked | Test could not run — environment issues, missing data, or dependencies |
Use notes to explain the reason for any non-passing result. A well-documented failure is far more
useful to the team than a bare status code.
Key Fields
| Field | Description |
|---|---|
| `status` | One of `:passed`, `:failed`, `:skipped`, `:blocked` |
| `executed_at` | Timestamp (UTC) of when the test ran; defaults to now |
| `executed_by` | The user who ran the test; automatically set from the authenticated actor |
| `execution_time` | How long the test took, in milliseconds |
| `environment` | Where the test ran, e.g. `"staging"`, `"production"`, `"uat"` |
| `notes` | Free-text observations, failure details, or context about the execution |
`executed_at` has a database-level default (`now() AT TIME ZONE 'utc'`), so results created via the `:record_result` action automatically capture the correct timestamp even when callers omit it.
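For readers who want to picture how these fields hang together, the sketch below approximates the resource in the style of an Ash Framework definition. It is inferred from the field list above and is not the actual `Catalio.Documentation.TestResult` source; exact types, options, and the `:record_result` action are omitted or simplified.

```elixir
defmodule Catalio.Documentation.TestResult do
  # Approximate sketch only; the real resource and its options may differ.
  use Ash.Resource,
    domain: Catalio.Documentation,
    data_layer: AshPostgres.DataLayer

  attributes do
    uuid_primary_key :id  # assumed primary key

    attribute :status, :atom,
      constraints: [one_of: [:passed, :failed, :skipped, :blocked]],
      allow_nil?: false

    attribute :executed_at, :utc_datetime_usec  # database default: now() AT TIME ZONE 'utc'
    attribute :execution_time, :integer         # milliseconds
    attribute :environment, :string             # e.g. "staging", "production", "uat"
    attribute :notes, :string
    # executed_by is set from the authenticated actor; shown only as a comment here
    # because it is most likely a relationship to a user resource.
  end

  relationships do
    # Each result belongs to exactly one test case (see the next section).
    belongs_to :test_case, Catalio.Documentation.TestCase, allow_nil?: false
  end
end
```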
Relationship to Test Cases
The relationship between test cases and test results is one-to-many: a single test case can have
many results over its lifetime, one per execution run. Each result holds a test_case belongs-to
relationship pointing back to its parent, and the test case exposes a test_results has-many
relationship to all its execution history.
When you delete a test case, Catalio cascades the deletion to all its associated test results, keeping the database clean without leaving orphaned records.
The Test Case detail page shows the full result history for that case, sorted with the most recent execution first. The page surfaces the pass/fail trend, the last executor, and any notes attached to recent runs so you can see at a glance whether the test is healthy or has started failing.
Execution Lifecycle
Test results flow through a clear lifecycle based on what happens during execution:
[Test Case Created]
↓
[Ready for Execution]
↓
[Execute Test] → :passed → [Requirement Validated]
↓
├→ :failed → [Investigate] → [Fix] → [Re-execute]
├→ :skipped → [Document reason in notes] → [Schedule future run]
└→ :blocked → [Remove blockers] → [Re-execute]
When a test fails: Record detailed notes with the error message, unexpected behavior, and any screenshots. Identify whether the root cause is a requirement defect, an implementation bug, or a flaw in the test case itself, then create a work item in your development workflow to track the fix (and update the test case when the test itself was at fault). Re-execute after the fix and record a new result; do not overwrite the original failure. Failed tests are valuable: they catch defects before they reach production.
When a test is blocked: Document what is preventing execution (missing test data, environment outage, unresolved dependency), identify what must happen to unblock it, and assign an owner and deadline for removing the blocker. Blocked tests represent hidden validation gaps; the longer they stay blocked, the larger the unknown risk.
Quality Dashboards and Statistics
The test result history for each test case is accessible from the test case detail page in Catalio. The application computes summary statistics across results including:
- Pass/fail/skip/blocked counts by test case
- Pass rate trends over time (is the test becoming more stable or less?)
- Execution frequency (how often is this test being run?)
- Environment coverage (are you testing in staging and production?)
Use these metrics to identify chronically failing tests, which often signal either a poorly-worded requirement or an implementation that has quietly drifted from its specification.
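As an illustration of how such summaries can be derived, the plain-Elixir sketch below computes status counts and a pass rate from a list of result records. The data shape (`%{status: ...}` maps) is assumed for the example and is not the Catalio API.

```elixir
defmodule ResultStats do
  # `results` is assumed to be a list of maps such as %{status: :passed, executed_at: ~U[...]}.
  def status_counts(results), do: Enum.frequencies_by(results, & &1.status)

  # Pass rate over executed runs only (skipped and blocked runs are excluded).
  def pass_rate(results) do
    executed = Enum.count(results, &(&1.status in [:passed, :failed]))
    passed = Enum.count(results, &(&1.status == :passed))
    if executed == 0, do: nil, else: passed / executed
  end
end

results = [%{status: :passed}, %{status: :passed}, %{status: :failed}, %{status: :skipped}]

ResultStats.status_counts(results)  #=> %{passed: 2, failed: 1, skipped: 1}
ResultStats.pass_rate(results)      #=> 0.6666666666666666
```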
Best Practices for Capturing Results
- Record every execution, including failures. A failed result recorded is a defect caught before production. An unrecorded execution is invisible to your team.
- Fill in the `environment` field. A test that passes in staging but fails in production points to an environment configuration issue, but only if you recorded which environment each result came from.
- Use `execution_time` for performance regression detection. If a test that ran in 200ms now takes 2,000ms, that is a signal worth investigating even if the result is still `:passed`.
- Keep notes concise but specific. “Failed: null pointer on line 42 of invoice_validator.rb” is useful. “It didn’t work” is not.
- Re-test promptly after fixes. A `:failed` result sitting unchallenged gives stakeholders a false picture of requirement health.
Best Practices for Effective Test Cases
Writing Testable Requirements
Good test cases start with good requirements. Ensure requirements include:
- Clear, measurable acceptance criteria
- Specific expected behaviors
- Edge cases and error scenarios
- Performance expectations (where relevant)
Test Case Design
Do:
- ✅ Focus each test case on one specific aspect
- ✅ Write clear, specific expected results
- ✅ Include test steps for manual tests
- ✅ Test both happy path and error scenarios
- ✅ Update test cases when requirements change
- ✅ Link test cases to the artifacts they validate
Don’t:
- ❌ Create vague expected results like “it should work”
- ❌ Combine multiple unrelated validations in one test
- ❌ Assume everyone knows the test steps
- ❌ Forget to test error handling
- ❌ Let test cases drift from requirements
- ❌ Create orphan test cases not linked to requirements
Test Coverage Strategy
Comprehensive Coverage:
- Test all acceptance criteria for each requirement
- Include happy path, error scenarios, and edge cases
- Cover different user roles and permissions
- Validate data integrity and business rules
- Test performance requirements
Risk-Based Prioritization:
- Test critical requirements thoroughly
- Test high-risk areas more extensively
- Ensure compliance requirements have complete coverage
- Balance coverage with resource constraints
Test Maintenance
Test cases require ongoing maintenance:
- Review regularly: Ensure tests remain relevant as requirements evolve
- Update after changes: Synchronize test cases with requirement updates
- Remove obsolete tests: Archive test cases for deprecated requirements
- Refactor duplicates: Consolidate redundant test cases
- Improve clarity: Enhance test case descriptions based on execution feedback
Execution Discipline
Consistent Execution:
- Run tests at appropriate points in development lifecycle
- Record all test executions (even failures)
- Document environment details for reproducibility
- Track execution time to identify performance issues
- Maintain test result history for trend analysis
Quality Metrics:
- Track pass/fail rates across requirements
- Monitor test coverage (% of requirements with tests)
- Identify frequently failing tests (potential requirement issues)
- Measure blocked test resolution time
- Analyze test execution frequency
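The coverage figure above (% of requirements with tests) can be computed directly from the links between requirements and test cases. A small sketch, assuming simple lists of ids and link fields rather than the Catalio API:

```elixir
defmodule CoverageMetric do
  # requirement_ids: list of requirement identifiers
  # test_cases: maps with a :requirement_id field (nil when linked to a use case only)
  def percent_with_tests([], _test_cases), do: 0.0

  def percent_with_tests(requirement_ids, test_cases) do
    covered =
      test_cases
      |> Enum.map(& &1.requirement_id)
      |> Enum.reject(&is_nil/1)
      |> MapSet.new()

    Enum.count(requirement_ids, &MapSet.member?(covered, &1)) / length(requirement_ids) * 100
  end
end

CoverageMetric.percent_with_tests(
  ["REQ-1", "REQ-2", "REQ-3"],
  [%{requirement_id: "REQ-1"}, %{requirement_id: "REQ-1"}, %{requirement_id: nil}]
)
#=> 33.33333333333333
```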
Example Test Case Scenarios
Example 1: Unit Test for Validation Rule
Title: Email address format validation
Description: Verify that the Contact email validation rule correctly identifies valid and invalid email addresses according to RFC 5322 simplified format (local@domain pattern).
Test Type: Unit
Expected Result:
- Valid emails (containing @ symbol with text before and after) pass validation
- Invalid emails (missing @, empty local/domain parts) trigger validation error
- Error message states: “Please enter a valid email address”
Test Steps:
- Attempt to save contact with valid email: user@example.com
- Verify save succeeds
- Attempt to save contact with invalid email: userexample.com
- Verify validation error appears
- Verify error message content
Linked to: Requirement “Email address validation on contact creation”
Example 2: Integration Test for Validation Rule Activation
Title: Salesforce validation rule activation during save
Description: Verify that custom validation rules integrate correctly with Salesforce’s record save operation and properly block invalid data from being persisted.
Test Type: Integration
Expected Result:
- Validation rule evaluates during before-save trigger
- Invalid data prevents record save
- Page layout displays error message correctly
- No record is created in database when validation fails
- Record saves successfully when validation passes
Test Steps:
- Open Contact creation form
- Fill all required fields with valid data except email
- Enter invalid email format
- Click Save button
- Verify error message appears on form
- Verify no Contact record created (query database)
- Correct email format
- Click Save button again
- Verify Contact record created successfully
Linked to: Use Case “Sales rep creates new contact with data validation”
Example 3: Manual Test for User Experience
Title: Contact form error message clarity and usability
Description: Verify that validation error messages on the Contact form are clear, actionable, and support user success. This manual test evaluates user experience aspects that automated tests cannot validate.
Test Type: Manual
Expected Result:
- Error messages are displayed near the relevant field
- Error text clearly explains what went wrong
- Error text provides guidance on how to fix the issue
- Error styling makes messages visually prominent but not alarming
- Multiple errors are displayed simultaneously
- Error messages disappear when fields are corrected
- Overall error experience does not frustrate users
Test Steps:
- Open Contact creation form
- Intentionally enter invalid data in multiple fields (email, phone, etc.)
- Click Save and observe error display
- Evaluate error message placement (near relevant fields?)
- Evaluate error message clarity (understandable?)
- Evaluate error message actionability (provides guidance?)
- Evaluate visual design (appropriately styled?)
- Correct one error and verify behavior
- Evaluate whether corrected field error disappears
- Correct remaining errors and complete save
- Document overall user experience observations
Linked to: Use Case “Sales rep corrects validation errors during contact creation”
Next Steps
Now that you understand test cases and results, explore how to:
- Define Requirements - Create requirements that are testable and clear
- Create Use Cases - Build scenarios that test cases can validate
- Create an Initiative - Track the work that makes failing tests pass
- Upload Artifacts - Attach test evidence to requirements for audit purposes
Pro Tip: Start with test cases for critical requirements and high-risk scenarios. Don’t try to achieve 100% test coverage immediately—focus on the validations that matter most to your business. Record every execution — even failures — so your quality picture is honest. As you build confidence in your testing process, expand coverage strategically.
Support
Questions about test cases? We’re here to help:
- Documentation: Continue reading about Requirements and Use Cases
- In-App Help: Look for the 🤖 AI assistant in the test case interface
- Email: support@catalio.ai
- Community: Share testing best practices with other Catalio users