Core Concepts

Validating Requirements with Test Cases

In Catalio, a Test Case is a structured validation plan that verifies a requirement or use case works correctly. Test cases are your quality assurance strategy—they define what success looks like and provide repeatable validation processes to ensure your product delivers what was specified.

Unlike informal testing approaches, Catalio test cases are explicit, traceable, and linked directly to the requirements they validate. This creates a clear chain from business need to implementation to verification.

Why Testing Matters

Testing isn’t just about finding bugs—it’s about confidence. Well-designed test cases provide:

  1. Validation: Confirm requirements are implemented correctly
  2. Regression Prevention: Detect when changes break existing functionality
  3. Documentation: Test cases serve as executable specifications
  4. Risk Management: Identify failure scenarios before production
  5. Stakeholder Communication: Demonstrate how quality is verified
  6. Traceability: Link validation directly to business requirements

When requirements change, test cases must change too. This bidirectional relationship ensures your validation strategy stays synchronized with your specifications.

Test Case Types

Catalio supports four distinct test case types, each serving different validation purposes:

Unit Tests

What they test: Individual components, functions, or isolated pieces of functionality.

Example: Validation Rule Syntax Check

A Salesforce administrator creates a validation rule to ensure email addresses contain the “@” symbol. A unit test verifies:

  • The validation rule expression syntax is correct
  • The rule triggers on invalid email formats
  • The rule passes on valid email formats
  • Error messages display properly

When to use: Validating individual rules, formulas, components, or functions in isolation.
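
For teams that automate unit tests outside Catalio, a check like this fits in a few lines. The snippet below is a minimal sketch, not Catalio or Salesforce code: it assumes a hypothetical is_valid_email helper that mirrors the validation rule's logic.

```python
import re
import unittest

def is_valid_email(value: str) -> bool:
    """Hypothetical helper mirroring the rule: text, '@', text."""
    return bool(re.fullmatch(r"[^@\s]+@[^@\s]+", value))

class EmailValidationRuleTest(unittest.TestCase):
    def test_valid_email_passes(self):
        self.assertTrue(is_valid_email("user@example.com"))

    def test_missing_at_symbol_fails(self):
        self.assertFalse(is_valid_email("userexample.com"))

    def test_empty_local_or_domain_fails(self):
        self.assertFalse(is_valid_email("@example.com"))
        self.assertFalse(is_valid_email("user@"))

if __name__ == "__main__":
    unittest.main()
```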

Integration Tests

What they test: How components interact with each other and with external systems.

Example: Salesforce Validation Rule Activation

A validation rule must integrate with Salesforce’s object lifecycle. An integration test verifies:

  • The rule activates when saving a Contact record
  • The rule integrates with the page layout correctly
  • Error handling works with the UI framework
  • The rule doesn’t conflict with other validation rules

When to use: Validating connections between systems, API integrations, data flows, or multi-component workflows.
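
An integration test exercises the real save path rather than an isolated function. The sketch below shows one possible shape using plain HTTP calls; the base URL, endpoint, payload fields, and status codes are assumptions standing in for whatever record-save interface your implementation exposes, not Catalio or Salesforce APIs.

```python
import os
import requests

BASE_URL = os.environ.get("CRM_BASE_URL", "https://example.invalid")  # hypothetical endpoint
TOKEN = os.environ.get("CRM_API_TOKEN", "")

def create_contact(payload: dict) -> requests.Response:
    # Hypothetical REST call standing in for the platform's record-save operation.
    return requests.post(
        f"{BASE_URL}/api/contacts",
        json=payload,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )

def test_validation_rule_blocks_invalid_email():
    response = create_contact({"LastName": "Doe", "Email": "userexample.com"})
    # Expect the save to be rejected and the validation message surfaced.
    assert response.status_code == 400
    assert "valid email" in response.text.lower()

def test_validation_rule_allows_valid_email():
    response = create_contact({"LastName": "Doe", "Email": "user@example.com"})
    assert response.status_code in (200, 201)
```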

End-to-End Tests

What they test: Complete user workflows from start to finish across the entire system.

Example: Complete Contact Creation Workflow

A sales representative creates a new contact in Salesforce. An end-to-end test validates:

  • User navigates to Contacts tab
  • User clicks “New Contact” button
  • User fills in contact form (including email)
  • Validation rules trigger on invalid data
  • User corrects validation errors
  • Contact saves successfully
  • Contact appears in contact list
  • Notification email is sent to the contact owner

When to use: Validating complete user journeys, business processes, or cross-system workflows that span multiple requirements.
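
End-to-end tests like this one are typically driven through a browser automation tool. The sketch below uses Playwright's Python API as one possible approach; the URL, selectors, and error text are placeholders, since the real locators depend on your org's page layout.

```python
from playwright.sync_api import sync_playwright

def test_contact_creation_with_email_validation():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()

        # Placeholder URL and selectors; substitute your org's real locators.
        page.goto("https://example.invalid/contacts")
        page.click("text=New Contact")
        page.fill("[name='lastName']", "Doe")
        page.fill("[name='email']", "userexample.com")  # invalid on purpose
        page.click("button:has-text('Save')")
        page.wait_for_selector("text=Please enter a valid email address")

        # Correct the error and confirm the save completes.
        page.fill("[name='email']", "user@example.com")
        page.click("button:has-text('Save')")
        page.wait_for_selector("text=Doe")

        browser.close()
```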

Manual Tests

What they test: Aspects requiring human judgment, user experience validation, or scenarios difficult to automate.

Example: User Interface Verification

A business analyst verifies the contact form user experience:

  • Error messages are clear and actionable
  • Field placement is intuitive for sales users
  • Required field indicators are visible
  • Tab order follows natural workflow
  • Help text provides sufficient guidance
  • Visual design meets brand standards

When to use: User experience validation, visual design checks, exploratory testing, accessibility verification, or complex judgment-based scenarios.

Components of a Test Case

Every test case in Catalio includes:

Core Identification

Title: Brief, descriptive name that clearly indicates what is being tested

  • ✅ Good: “Validate email format on contact creation”
  • ❌ Poor: “Test email”

Description: Detailed explanation of what this test case validates and why it matters. Include context about the business requirement being verified.

Test Definition

Test Type: Specify whether this is a unit, integration, end-to-end, or manual test. This helps teams organize test execution and understand the validation scope.

Expected Result: Clear statement of what should happen when the test passes. This must be specific, observable, and measurable.

Test Steps (optional): Ordered sequence of actions to execute the test. Particularly valuable for manual tests and complex scenarios.

Execution Tracking

When test cases are executed, Catalio records:

  • Executed At: Timestamp when the test ran
  • Executed By: User who executed the test
  • Execution Time: How long the test took (in milliseconds)
  • Environment: Where the test ran (staging, production, development)
  • Notes: Additional observations, issues, or context from execution

This metadata provides audit trails and helps identify patterns in test failures.
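
Conceptually, a test case can be pictured as a single record carrying these fields. The dataclass below only illustrates the fields described above; it is not Catalio's actual data model or API.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class TestCase:
    # Core identification
    title: str
    description: str
    # Test definition
    test_type: str                      # "unit" | "integration" | "end_to_end" | "manual"
    expected_result: str
    test_steps: list[str] = field(default_factory=list)  # optional ordered steps
    # Execution tracking (populated each time the test runs)
    executed_at: Optional[datetime] = None
    executed_by: Optional[str] = None
    execution_time_ms: Optional[int] = None
    environment: Optional[str] = None   # e.g. "staging", "production", "development"
    notes: Optional[str] = None
```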

Linking Test Cases

Test cases don’t exist in isolation—they’re connected to the artifacts they validate.

Linking to Requirements

Direct Validation: Link test cases directly to requirements for straightforward validation.

Example:

  • Requirement: “Email addresses must be validated on contact creation”
  • Test Case: “Validate email format accepts valid addresses”
  • Test Case: “Validate email format rejects invalid addresses”
  • Test Case: “Validate email format shows clear error messages”

Direct linking works well for requirements with clear, testable acceptance criteria.

Linking to Use Cases

Scenario-Based Testing: Link test cases to use cases when validating specific user scenarios.

Example:

  • Use Case: “Sales rep creates contact with invalid email”
  • Test Case: “Verify validation rule blocks save with invalid email”
  • Test Case: “Verify error message guides user to correction”
  • Test Case: “Verify successful save after email correction”

Scenario-based testing validates complete workflows and edge cases described in use cases.

Flexible Linking

Catalio requires each test case to link to either a requirement or a use case (or both). This flexibility supports different testing strategies:

  • Requirement-focused: Test individual requirements in isolation
  • Use case-focused: Test complete scenarios and workflows
  • Hybrid: Link to both when a test validates a requirement within a specific scenario

Choose the linking strategy that best fits your validation needs.
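
In data terms, the linking rule reduces to a simple constraint: a test case carries an optional requirement reference and an optional use case reference, and at least one must be set. A minimal sketch of that check, using hypothetical identifiers and field names:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TestCaseLinks:
    requirement_id: Optional[str] = None  # hypothetical identifiers
    use_case_id: Optional[str] = None

    def validate(self) -> None:
        # At least one link is required; setting both is the "hybrid" strategy.
        if self.requirement_id is None and self.use_case_id is None:
            raise ValueError("A test case must link to a requirement, a use case, or both.")

# Requirement-focused, use case-focused, and hybrid links are all valid:
TestCaseLinks(requirement_id="REQ-42").validate()
TestCaseLinks(use_case_id="UC-7").validate()
TestCaseLinks(requirement_id="REQ-42", use_case_id="UC-7").validate()
```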

Recording Test Results

Each test execution generates a Test Result record that captures:

Test Status

Test results use four status values:

Passed ✅: Test executed successfully, expected result achieved, requirement validated.

Failed ❌: Test executed but did not produce expected result. Record failure details in notes.

Skipped ⏭️: Test was not executed during this test run. Document reason in notes.

Blocked 🚫: Test could not be executed due to blockers (environment issues, dependencies, data unavailable). Identify blockers in notes.
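
If you mirror test results in your own tooling, the four statuses map naturally onto an enumeration. This is an illustrative sketch, not Catalio's API:

```python
from enum import Enum

class TestStatus(Enum):
    PASSED = "passed"    # expected result achieved, requirement validated
    FAILED = "failed"    # executed, but the expected result was not produced
    SKIPPED = "skipped"  # not executed in this run; document the reason
    BLOCKED = "blocked"  # could not be executed; identify the blockers
```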

Execution Metadata

  • Status: Current outcome (passed/failed/skipped/blocked)
  • Executed At: When the test ran
  • Executed By: Who ran the test
  • Execution Time: Duration in milliseconds
  • Environment: Test environment (staging, production, etc.)
  • Notes: Additional context, failure reasons, or observations

Test Result History

Catalio maintains complete test execution history, enabling:

  • Trend analysis (test stability over time)
  • Failure pattern identification
  • Performance tracking (execution time trends)
  • Audit trail for compliance
  • Regression detection

Access test result history from the test case detail page, sorted by execution date.
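
As one example of what the history enables, a pass-rate trend can be computed by grouping results by execution date. The snippet below assumes result records shaped like the metadata fields above; it is an illustration, not a Catalio API.

```python
from collections import defaultdict
from datetime import date

def pass_rate_by_day(results: list[dict]) -> dict[date, float]:
    """results: [{"status": "passed", "executed_at": date(...)}, ...] (illustrative shape)."""
    totals: dict[date, int] = defaultdict(int)
    passes: dict[date, int] = defaultdict(int)
    for r in results:
        day = r["executed_at"]
        totals[day] += 1
        if r["status"] == "passed":
            passes[day] += 1
    return {day: passes[day] / totals[day] for day in sorted(totals)}

history = [
    {"status": "passed", "executed_at": date(2024, 5, 1)},
    {"status": "failed", "executed_at": date(2024, 5, 1)},
    {"status": "passed", "executed_at": date(2024, 5, 2)},
]
print(pass_rate_by_day(history))  # {2024-05-01: 0.5, 2024-05-02: 1.0}
```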

Test Status Lifecycle

Test results flow through a clear lifecycle:

[Test Created] → [Ready for Execution] → [Execute Test]
  ├→ Passed ✅ → [Requirement Validated]
  ├→ Failed ❌ → [Investigate Failure] → [Fix] → [Re-test]
  ├→ Skipped ⏭️ → [Document Reason] → [Schedule Future Execution]
  └→ Blocked 🚫 → [Remove Blockers] → [Re-test]

Handling Failed Tests

When tests fail:

  1. Record detailed notes: Capture error messages, unexpected behavior, screenshots
  2. Identify root cause: Is it a requirement defect, implementation bug, or test issue?
  3. Create work items: Track fixes in your development workflow
  4. Re-test after fixes: Execute test again after resolution
  5. Update test case: If the test itself was flawed, update the test case

Failed tests are valuable—they prevent defects from reaching production.

Blocked Test Resolution

When tests are blocked:

  1. Document blockers: What’s preventing execution?
  2. Identify resolution path: What needs to happen to unblock?
  3. Track blocker resolution: Assign ownership and deadlines
  4. Prioritize unblocking: Blocked tests hide requirement validation gaps
  5. Execute when unblocked: Don’t let blocked tests linger

Blocked tests represent validation gaps—resolve them quickly to maintain confidence in your requirements.

Best Practices for Effective Test Cases

Writing Testable Requirements

Good test cases start with good requirements. Ensure requirements include:

  • Clear, measurable acceptance criteria
  • Specific expected behaviors
  • Edge cases and error scenarios
  • Performance expectations (where relevant)

Test Case Design

Do:

  • ✅ Focus each test case on one specific aspect
  • ✅ Write clear, specific expected results
  • ✅ Include test steps for manual tests
  • ✅ Test both happy path and error scenarios
  • ✅ Update test cases when requirements change
  • ✅ Link test cases to the artifacts they validate

Don’t:

  • ❌ Create vague expected results like “it should work”
  • ❌ Combine multiple unrelated validations in one test
  • ❌ Assume everyone knows the test steps
  • ❌ Forget to test error handling
  • ❌ Let test cases drift from requirements
  • ❌ Create orphan test cases not linked to any requirement or use case

Test Coverage Strategy

Comprehensive Coverage:

  • Test all acceptance criteria for each requirement
  • Include happy path, error scenarios, and edge cases
  • Cover different user roles and permissions
  • Validate data integrity and business rules
  • Test performance requirements

Risk-Based Prioritization:

  • Test critical requirements thoroughly
  • Test high-risk areas more extensively
  • Ensure compliance requirements have complete coverage
  • Balance coverage with resource constraints

Test Maintenance

Test cases require ongoing maintenance:

  • Review regularly: Ensure tests remain relevant as requirements evolve
  • Update after changes: Synchronize test cases with requirement updates
  • Remove obsolete tests: Archive test cases for deprecated requirements
  • Refactor duplicates: Consolidate redundant test cases
  • Improve clarity: Enhance test case descriptions based on execution feedback

Execution Discipline

Consistent Execution:

  • Run tests at appropriate points in the development lifecycle
  • Record all test executions (even failures)
  • Document environment details for reproducibility
  • Track execution time to identify performance issues
  • Maintain test result history for trend analysis

Quality Metrics:

  • Track pass/fail rates across requirements
  • Monitor test coverage (% of requirements with tests)
  • Identify frequently failing tests (potential requirement issues)
  • Measure blocked test resolution time
  • Analyze test execution frequency
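
Two of these metrics, pass rate and requirement coverage, reduce to simple ratios. A minimal sketch, assuming you can export test statuses and the set of requirements that have at least one linked test case:

```python
def pass_rate(statuses: list[str]) -> float:
    """Share of executed tests (excluding skipped/blocked) that passed."""
    executed = [s for s in statuses if s in ("passed", "failed")]
    return sum(s == "passed" for s in executed) / len(executed) if executed else 0.0

def requirement_coverage(all_requirements: set[str], tested_requirements: set[str]) -> float:
    """Fraction of requirements that have at least one linked test case."""
    return len(all_requirements & tested_requirements) / len(all_requirements) if all_requirements else 0.0

print(pass_rate(["passed", "failed", "passed", "skipped"]))          # 0.666...
print(requirement_coverage({"REQ-1", "REQ-2", "REQ-3"}, {"REQ-1"}))  # 0.333...
```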

Example Test Case Scenarios

Example 1: Unit Test for Validation Rule

Title: Email address format validation

Description: Verify that the Contact email validation rule correctly identifies valid and invalid email addresses according to RFC 5322 simplified format (local@domain pattern).

Test Type: Unit

Expected Result:

  • Valid emails (containing @ symbol with text before and after) pass validation
  • Invalid emails (missing @, empty local/domain parts) trigger validation error
  • Error message states: “Please enter a valid email address”

Test Steps:

  1. Attempt to save contact with valid email: user@example.com
  2. Verify save succeeds
  3. Attempt to save contact with invalid email: “userexample.com”
  4. Verify validation error appears
  5. Verify error message content

Linked to: Requirement “Email address validation on contact creation”

Example 2: Integration Test for Salesforce Record Save

Title: Salesforce validation rule activation during save

Description: Verify that custom validation rules integrate correctly with Salesforce’s record save operation and properly block invalid data from being persisted.

Test Type: Integration

Expected Result:

  • Validation rule evaluates during before-save trigger
  • Invalid data prevents record save
  • Page layout displays error message correctly
  • No record is created in database when validation fails
  • Record saves successfully when validation passes

Test Steps:

  1. Open Contact creation form
  2. Fill all required fields with valid data except email
  3. Enter invalid email format
  4. Click Save button
  5. Verify error message appears on form
  6. Verify no Contact record created (query database)
  7. Correct email format
  8. Click Save button again
  9. Verify Contact record created successfully

Linked to: Use Case “Sales rep creates new contact with data validation”

Example 3: Manual Test for User Experience

Title: Contact form error message clarity and usability

Description: Verify that validation error messages on the Contact form are clear, actionable, and support user success. This manual test evaluates user experience aspects that automated tests cannot validate.

Test Type: Manual

Expected Result:

  • Error messages are displayed near the relevant field
  • Error text clearly explains what went wrong
  • Error text provides guidance on how to fix the issue
  • Error styling makes messages visually prominent but not alarming
  • Multiple errors are displayed simultaneously
  • Error messages disappear when fields are corrected
  • Overall error experience does not frustrate users

Test Steps:

  1. Open Contact creation form
  2. Intentionally enter invalid data in multiple fields (email, phone, etc.)
  3. Click Save and observe error display
  4. Evaluate error message placement (near relevant fields?)
  5. Evaluate error message clarity (understandable?)
  6. Evaluate error message actionability (provides guidance?)
  7. Evaluate visual design (appropriately styled?)
  8. Correct one error and verify behavior
  9. Evaluate whether corrected field error disappears
  10. Correct remaining errors and complete save
  11. Document overall user experience observations

Linked to: Use Case “Sales rep corrects validation errors during contact creation”

Next Steps

Now that you understand test cases, explore how to:

  • Define Requirements - Create requirements that are testable and clear
  • Create Use Cases - Build scenarios that test cases can validate
  • Track Test Results - Monitor test execution and quality metrics

Pro Tip: Start with test cases for critical requirements and high-risk scenarios. Don’t try to achieve 100% test coverage immediately—focus on the validations that matter most to your business. As you build confidence in your testing process, expand coverage strategically.

Support

Questions about test cases? We’re here to help:

  • Documentation: Continue reading about Requirements and Use Cases
  • In-App Help: Look for the 🤖 AI assistant in the test case interface
  • Email: support@catalio.com
  • Community: Share testing best practices with other Catalio users