Integrations

Setting Up LLM API Keys for AI Features

Unlock Catalio’s full AI-powered capabilities by connecting Large Language Model (LLM) providers like OpenAI and Anthropic. This integration enables intelligent requirement analysis, automated refinement suggestions, semantic search, and AI-assisted writing that transforms how your team creates and manages product specifications.

Overview

Why LLM Integration Matters

Catalio’s AI features go far beyond simple text generation. By connecting enterprise-grade LLM providers, you enable sophisticated analysis that helps your team:

Write Better Requirements

  • AI suggests improvements to clarity, completeness, and testability
  • Automatically identifies missing acceptance criteria
  • Generates user story components from natural language descriptions
  • Detects ambiguous language and offers alternatives

Accelerate Requirement Creation

  • Convert meeting notes into structured requirements
  • Generate acceptance criteria from requirement descriptions
  • Suggest relevant personas based on requirement content
  • Auto-create related use cases and test scenarios

Ensure Quality and Consistency

  • Analyze sentiment to detect stakeholder concerns
  • Score requirements on completeness and clarity metrics
  • Suggest appropriate priority and complexity levels
  • Identify duplicate or conflicting requirements

Enable Intelligent Search

  • Semantic search understands meaning, not just keywords
  • Find related requirements using natural language queries
  • Discover dependencies through content analysis
  • Group requirements by conceptual similarity

Provide Business Insights

  • Summarize requirements for executive dashboards
  • Extract key themes across requirement portfolios
  • Identify gaps in requirement coverage
  • Generate release notes from completed requirements

Supported LLM Providers

Catalio integrates with the leading LLM platforms:

OpenAI (GPT-4/GPT-3.5)

  • Industry-leading text generation and analysis
  • Excellent for complex reasoning and refinement suggestions
  • Strong performance on technical documentation
  • Broad availability and competitive pricing

Anthropic (Claude)

  • Industry-leading context windows (200K tokens)
  • Exceptional at analyzing large requirement sets
  • Strong safety features and alignment
  • Excellent for nuanced business reasoning

Azure OpenAI Service

  • Enterprise security and compliance (SOC 2, HIPAA, FedRAMP)
  • Deployed in your Azure tenant for data residency
  • Integration with Microsoft enterprise identity
  • Guaranteed uptime SLAs and dedicated capacity

Future Support

  • Google Vertex AI (Gemini) - Coming Q2 2025
  • AWS Bedrock (Claude, Titan) - Coming Q2 2025
  • Self-hosted models (Llama, Mistral) - Coming Q3 2025

Data Privacy and Security

Critical Understanding: When you enable LLM integration, requirement content is sent to the provider’s API for processing. Understanding data handling is essential for compliance:

Data Transmission:

  • Requirement text, personas, use cases sent to LLM provider
  • Transmitted over encrypted HTTPS connections
  • API requests include only necessary context
  • User credentials never transmitted to LLM providers

Provider Data Policies:

Review each provider’s current data-usage terms before enabling integration:

  • OpenAI API: Inputs and outputs are not used for model training by default; data may be retained for a limited period for abuse monitoring
  • Anthropic API: Inputs and outputs are not used for model training by default
  • Azure OpenAI: Data stays within your Azure tenant and is not shared with OpenAI or used for model training

Catalio Safeguards:

  • AI Accessible Toggle: Mark sensitive requirements as non-AI-accessible
  • Selective Processing: Choose which features use AI
  • Audit Logging: Track all AI API calls for compliance
  • Data Residency: Azure OpenAI keeps data in your region

Compliance Considerations:

  • GDPR: Ensure provider has adequate safeguards (Azure recommended for EU)
  • HIPAA: Use Azure OpenAI with BAA for healthcare applications
  • SOC 2: All providers support enterprise security standards
  • Export Control: Verify provider availability in your jurisdiction

Prerequisites

Account Setup with LLM Providers

Before configuring Catalio, establish accounts with your chosen LLM providers.

OpenAI Prerequisites

For OpenAI Platform (Recommended for Most Teams):

  1. Create OpenAI Account: Sign up at platform.openai.com
  2. Verify Account: Complete email verification and phone verification
  3. Add Payment Method: Navigate to Billing → Payment methods → Add payment
  4. Review Pricing: Understand token-based pricing for your expected usage
    • GPT-4: ~$0.03-0.06 per 1,000 tokens (varies by model)
    • GPT-3.5-Turbo: ~$0.002 per 1,000 tokens
  5. Set Usage Limits: Configure hard spending limits to control costs
    • Settings → Billing → Usage limits
    • Recommended: Start with $100/month limit, adjust based on usage

For OpenAI Enterprise (Large Organizations):

  1. Contact OpenAI sales for enterprise agreements
  2. Negotiate volume pricing and dedicated capacity
  3. Enterprise features include:
    • No training on your data by default
    • Extended context windows
    • Priority access during high-demand periods
    • Dedicated account management

Anthropic Prerequisites

For Anthropic API (Claude):

  1. Request Access: Sign up at console.anthropic.com
  2. Account Approval: Anthropic reviews applications (typically 1-2 business days)
  3. Organization Setup: Configure your workspace and team members
  4. Add Payment: Navigate to Settings → Billing → Add payment method
  5. Understand Pricing: Claude pricing varies by model
    • Claude 3.5 Sonnet: ~$0.003-0.015 per 1,000 tokens
    • Claude 3 Opus: ~$0.015-0.075 per 1,000 tokens
    • Claude 3 Haiku: ~$0.00025-0.00125 per 1,000 tokens
  6. Set Budget Alerts: Configure notifications for usage thresholds

For Anthropic Enterprise:

  1. Contact Anthropic for enterprise licensing
  2. Features include:
    • Guaranteed capacity and uptime SLAs
    • Extended rate limits
    • Priority support
    • Custom deployment options

Azure OpenAI Prerequisites

For Azure OpenAI Service:

  1. Azure Subscription: Active Azure subscription required
  2. Request Access: Apply for Azure OpenAI access
    • Navigate to Azure portal
    • Search for “Azure OpenAI Service”
    • Complete access application (approval typically 1-5 days)
  3. Create Resource: After approval, create Azure OpenAI resource
    • Choose region for data residency requirements
    • Select pricing tier (S0 standard for most use cases)
  4. Deploy Models: Deploy specific model versions
    • GPT-4, GPT-3.5-Turbo, or text-embedding-ada-002
    • Choose deployment name and capacity
  5. Configure Networking: Set up virtual network or private endpoints if required
  6. Set Up Managed Identity: For secure authentication without keys

Azure Enterprise Considerations:

  • Integration with Azure Active Directory for SSO
  • Private Link for network isolation
  • Customer-managed keys for data encryption
  • Compliance certifications (HIPAA, FedRAMP, etc.)

Network and Security Requirements

Network Access:

  • Outbound HTTPS access to LLM provider endpoints
  • OpenAI: api.openai.com on port 443
  • Anthropic: api.anthropic.com on port 443
  • Azure: {your-resource}.openai.azure.com on port 443
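
To verify outbound access from the host that runs Catalio, a short connectivity check helps; this Python sketch (assuming direct outbound access rather than a proxy) opens a connection to each provider endpoint on port 443:

```python
import socket

# Quick reachability check for each provider endpoint.
# Adjust the host list if you use Azure OpenAI or route through a proxy.
for host in ("api.openai.com", "api.anthropic.com"):
    try:
        socket.create_connection((host, 443), timeout=5).close()
        print(f"{host}: reachable on port 443")
    except OSError as err:
        print(f"{host}: blocked or unreachable ({err})")
```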

Firewall Configuration:

  • Whitelist provider domains in corporate firewalls
  • Allow DNS resolution for API endpoints
  • Configure proxy settings if required

Secrets Management:

  • Use secure storage for API keys (environment variables, vault services)
  • Rotate keys regularly (recommended: every 90 days)
  • Restrict key access to authorized personnel only
  • Never commit API keys to source control
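
For example, a minimal pattern for reading a key from an environment variable in Python (the variable name OPENAI_API_KEY is a common convention, not a Catalio requirement):

```python
import os

# Read the key from the environment rather than hard-coding it in source.
api_key = os.environ.get("OPENAI_API_KEY")
if api_key is None:
    raise RuntimeError("OPENAI_API_KEY is not set; configure your secrets store")
```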

OpenAI Setup

Creating API Keys

Step 1: Navigate to API Keys Section

  1. Log into platform.openai.com
  2. Click your profile icon in top-right corner
  3. Select API keys from dropdown menu
  4. Click + Create new secret key

Step 2: Configure Key Settings

  1. Key Name: Provide descriptive name
    • Good: “Catalio Production - AI Features”
    • Good: “Catalio Development Environment”
    • Bad: “Key 1”, “Test”
  2. Permissions (if available): Select appropriate scopes
    • Model access: Enable GPT-4 and GPT-3.5-Turbo
    • Usage: Set permission level (recommended: “All” for simplicity)
  3. Project (if using projects): Assign to appropriate project
  4. Click Create secret key

Step 3: Secure Your Key

  1. Copy Immediately: Key shown only once—copy to secure location
  2. Test Key: Verify key works before leaving the page
  3. Store Securely: Use password manager or secrets vault
  4. Document: Record key name, creation date, and purpose

Example API Key Format:

sk-proj-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
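
To verify the key outside Catalio first, a minimal smoke test with the official openai Python SDK might look like this (the key and organization ID placeholders are illustrative):

```python
from openai import OpenAI

# Listing models is a cheap call that validates the key
# (and the organization ID, if you set one).
client = OpenAI(api_key="sk-proj-...", organization="org-...")  # org is optional
print([model.id for model in client.models.list().data][:5])
```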

Organization Configuration

Organization ID:

OpenAI uses organizations to group API usage and billing:

  1. Navigate to Settings → Organization
  2. Note your Organization ID (starts with org-)
  3. Configure organization settings:
    • Name: Your company or project name
    • Team members: Add authorized users
    • Roles: Assign appropriate permissions

Usage Tracking:

Set up monitoring to track API consumption:

  1. Navigate to Usage → Activity
  2. View real-time usage by:
    • Model (GPT-4, GPT-3.5, embeddings)
    • Date range
    • API key
  3. Export usage reports for cost analysis

Setting Usage Limits

Hard Limits (Spending Caps):

Prevent unexpected costs with hard spending limits:

  1. Navigate to Settings → Billing → Usage limits
  2. Set Hard limit (maximum monthly spend)
    • Recommended starting point: $100/month
    • API calls rejected when limit reached
  3. Set Soft limit (notification threshold)
    • Example: $75/month (75% of hard limit)
    • Receive email when threshold crossed

Rate Limits:

OpenAI enforces rate limits based on account tier:

Free Tier:

  • 3 requests per minute (RPM)
  • 40,000 tokens per minute (TPM)
  • Insufficient for production use

Pay-as-you-go (Tier 1):

  • 3,500 RPM for GPT-3.5-Turbo
  • 200 RPM for GPT-4
  • 350,000 TPM across models

Tier 2+ (Higher Usage):

  • Automatically upgraded based on usage history
  • Up to 10,000 RPM for GPT-4 at highest tier
  • Check current tier: Settings → Organization → Limits

Best Practices:

  • Start conservative, increase limits as needed
  • Monitor usage daily during first week
  • Set Slack/email alerts for 50%, 75%, 90% thresholds
  • Review and adjust monthly based on actual usage
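
When requests do exceed a limit, the API returns HTTP 429. A common client-side mitigation is retrying with exponential backoff; a minimal sketch using the official openai Python SDK (model choice and retry count are illustrative):

```python
import time

from openai import OpenAI, RateLimitError

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def complete_with_backoff(prompt: str, retries: int = 5):
    """Retry on HTTP 429 with exponential backoff (1s, 2s, 4s, ...)."""
    for attempt in range(retries):
        try:
            return client.chat.completions.create(
                model="gpt-3.5-turbo",
                messages=[{"role": "user", "content": prompt}],
            )
        except RateLimitError:
            time.sleep(2 ** attempt)
    raise RuntimeError("rate limit still exceeded after retries")
```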

Model Selection

Choose appropriate models for different use cases:

GPT-4 (Recommended for Core Features):

  • Use for: Requirement analysis, quality scoring, complex reasoning
  • Strengths: Superior reasoning, fewer hallucinations, better instruction following
  • Cost: Higher (~10-20x GPT-3.5)
  • Speed: Slower (3-10 seconds per request)

GPT-3.5-Turbo (Recommended for High-Volume):

  • Use for: Semantic tagging, summaries, high-volume tasks
  • Strengths: Fast, cost-effective, good for straightforward tasks
  • Cost: Very low (~$0.002 per 1K tokens)
  • Speed: Fast (1-3 seconds per request)

Text-Embedding-3-Small (Semantic Search):

  • Use for: Vector embeddings for semantic search
  • Strengths: Improved performance over ada-002, better multilingual support
  • Cost: ~$0.00002 per 1K tokens (80% cheaper than ada-002)
  • Speed: Very fast (<1 second)

Catalio Default Configuration:

  • Analysis and quality scoring: GPT-4
  • Summaries and tagging: GPT-3.5-Turbo
  • Semantic search embeddings: text-embedding-3-small

Anthropic Setup

Creating API Keys

Step 1: Access API Console

  1. Log into console.anthropic.com
  2. Navigate to Settings → API Keys
  3. Click Create Key

Step 2: Configure Key

  1. Key Name: Descriptive identifier
    • Example: “Catalio Production Environment”
  2. Workspace: Select appropriate workspace (if using multiple)
  3. Permissions: Configure access levels
    • Model Access: Enable Claude 3.5 Sonnet (recommended)
    • Rate Limits: Organization-level (cannot be set per-key)
  4. Click Create Key

Step 3: Save and Secure

  1. Copy Key: Shown only once, starts with sk-ant-
  2. Store Securely: Password manager or secrets vault
  3. Test Immediately: Verify key works before leaving page

Example API Key Format:

sk-ant-api03-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
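
A matching smoke test with the official anthropic Python SDK (the model ID is an example snapshot and may differ from what your account offers):

```python
import anthropic

# Minimal smoke test for a new Anthropic key.
client = anthropic.Anthropic(api_key="sk-ant-api03-...")
message = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # example model snapshot
    max_tokens=32,
    messages=[{"role": "user", "content": "Reply with OK if you can read this."}],
)
print(message.content[0].text)
```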

Workspace Configuration

Organization Settings:

  1. Navigate to Settings → Organization
  2. Configure:
    • Organization Name: Your company or team name
    • Team Members: Add authorized users with roles
    • Default Model: Claude 3.5 Sonnet (recommended for Catalio)

Billing Setup:

  1. Navigate to Settings → Billing
  2. Add payment method (credit card or invoicing for enterprise)
  3. Configure billing alerts:
    • Set alert thresholds: $50, $100, $200, etc.
    • Configure notification emails
  4. Review pricing tiers and volume discounts

Model Selection

Choose the right Claude model for your needs:

Claude 3.5 Sonnet (Recommended for Most Teams):

  • Use for: General requirement analysis, quality scoring, refinement
  • Strengths: Best balance of capability, speed, and cost
  • Context Window: 200,000 tokens (analyze entire requirement portfolios)
  • Cost: ~$0.003-0.015 per 1K tokens
  • Speed: Fast (2-5 seconds per request)

Claude 3 Opus (Premium Analysis):

  • Use for: Complex reasoning, highest-quality analysis
  • Strengths: Most capable model, best for nuanced business logic
  • Context Window: 200,000 tokens
  • Cost: Higher (~$0.015-0.075 per 1K tokens)
  • Speed: Slower (5-15 seconds per request)

Claude 3 Haiku (High-Volume Tasks):

  • Use for: Simple tagging, summaries, classification
  • Strengths: Extremely fast and cost-effective
  • Context Window: 200,000 tokens
  • Cost: Very low (~$0.00025-0.00125 per 1K tokens)
  • Speed: Very fast (1-2 seconds per request)

Catalio Recommendation:

  • Default: Claude 3.5 Sonnet for all features (best balance)
  • Cost Optimization: Claude 3 Haiku for tagging/summaries, Sonnet for analysis
  • Premium: Claude 3 Opus for mission-critical requirement validation

Rate Limits and Quotas

Anthropic Rate Limits:

Limits based on account tier:

Tier 1 (New Accounts):

  • 50 requests per minute (RPM)
  • 40,000 tokens per minute (TPM)
  • $100 maximum monthly spend

Tier 2 (Established Use):

  • 1,000 RPM
  • 100,000 TPM
  • $500 maximum monthly spend

Tier 3+ (Higher Tiers):

  • Automatically upgraded based on:
    • Account age (typically 7-14 days active use)
    • Payment history (consistent on-time payments)
    • Usage patterns (higher spend increases limits)
  • Up to 5,000 RPM at highest tier

Monitoring Usage:

  1. Navigate to Usage → API Activity
  2. View metrics:
    • Requests per day/week/month
    • Token consumption by model
    • Cost breakdown
    • Rate limit utilization
  3. Export reports for analysis

Azure OpenAI Setup

Creating Azure OpenAI Resource

Step 1: Navigate to Azure Portal

  1. Log into portal.azure.com
  2. Click + Create a resource
  3. Search for “Azure OpenAI”
  4. Click Create on Azure OpenAI Service

Step 2: Configure Resource

  1. Basics:

    • Subscription: Select your Azure subscription
    • Resource Group: Create new or select existing
    • Region: Choose based on data residency requirements
      • US: East US, South Central US, West US
      • EU: West Europe, Sweden Central (GDPR compliance)
      • Asia: Japan East
    • Name: Unique name (e.g., catalio-openai-prod)
    • Pricing Tier: S0 Standard (most common)
  2. Network:

    • Connectivity: Public or Private Endpoint
      • Public: Accessible from internet (use API keys)
      • Private: Azure Virtual Network only (highest security)
    • Network Security: Configure firewall rules if needed
  3. Tags:

    • Add organizational tags (Environment: Production, Owner: IT, etc.)
  4. Review + Create:

    • Verify configuration
    • Click Create (deployment takes 2-5 minutes)

Step 3: Deploy Models

After resource creation:

  1. Navigate to your Azure OpenAI resource
  2. Click Model deployments → Manage Deployments (opens Azure OpenAI Studio)
  3. Click + Create new deployment
  4. Configure deployment:
    • Model: Select GPT-4 or GPT-3.5-Turbo
    • Model version: Choose latest stable version
    • Deployment name: Descriptive name (e.g., gpt-4-catalio)
    • Content filter: Default or configure custom filters
    • Rate limit (TPM): Tokens per minute capacity
      • Start with 10K TPM, increase based on usage
  5. Click Create
  6. Repeat for additional models (embeddings, GPT-3.5, etc.)

Endpoint Configuration

Retrieve Connection Information:

  1. Navigate to your Azure OpenAI resource
  2. Click Keys and Endpoint in left menu
  3. Copy required information:
    • Endpoint: https://{your-resource}.openai.azure.com
    • Key 1 or Key 2: API key for authentication
    • Location: Azure region

Example Configuration:

Endpoint: https://{your-resource}.openai.azure.com
Key: YOUR_AZURE_OPENAI_API_KEY
Region: eastus
Deployment: gpt-4-catalio
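
To confirm these values outside Catalio, a minimal test with the openai Python SDK’s Azure client, using the example endpoint, key placeholder, and deployment name above:

```python
from openai import AzureOpenAI

# Values mirror the example configuration above; the deployment name
# must match what you created in Azure OpenAI Studio exactly.
client = AzureOpenAI(
    azure_endpoint="https://{your-resource}.openai.azure.com",
    api_key="YOUR_AZURE_OPENAI_API_KEY",
    api_version="2024-02-01",
)
response = client.chat.completions.create(
    model="gpt-4-catalio",  # deployment name, not the base model name
    messages=[{"role": "user", "content": "ping"}],
    max_tokens=8,
)
print(response.choices[0].message.content)
```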

Managed Identity Setup

For enhanced security, use managed identities instead of API keys:

Step 1: Enable Managed Identity

  1. Navigate to Azure OpenAI resource
  2. Click Identity in left menu
  3. System assigned tab → Toggle Status to On
  4. Click Save
  5. Copy Object (principal) ID

Step 2: Grant Permissions

  1. Navigate to Access control (IAM)
  2. Click + Add → Add role assignment
  3. Select role:
    • Cognitive Services User: Read access to models
    • Cognitive Services OpenAI User: Use models via API
  4. Assign to system-assigned managed identity
  5. Click Save

Step 3: Configure Catalio

In Catalio, use managed identity authentication:

  • No API key required
  • Authentication via Azure AD tokens
  • Catalio must be deployed in Azure for this approach

Benefits:

  • No API keys to manage or rotate
  • Integration with Azure AD for access control
  • Audit logs in Azure AD
  • Higher security posture
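
For reference, managed-identity authentication in code uses the azure-identity package’s token provider instead of a key; a sketch with the openai Python SDK (the endpoint is the example value from above):

```python
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

# Token-based auth: no API key is stored anywhere. The calling identity
# must hold the "Cognitive Services OpenAI User" role on the resource.
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)
client = AzureOpenAI(
    azure_endpoint="https://{your-resource}.openai.azure.com",
    azure_ad_token_provider=token_provider,
    api_version="2024-02-01",
)
```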

Configuring in Catalio

Adding API Keys

Step 1: Navigate to Integrations Settings

  1. Log into Catalio as an administrator
  2. Navigate to Settings → Integrations → AI & LLM
  3. Click Configure LLM Providers

Step 2: Add OpenAI Configuration

  1. Click + Add Provider → OpenAI
  2. Fill in configuration:
    • Provider Name: Descriptive name (e.g., “OpenAI Production”)
    • API Key: Paste your OpenAI API key (starts with sk-)
    • Organization ID (optional): Your org ID (starts with org-)
    • Default Model: Select default model
      • GPT-4 (best quality)
      • GPT-3.5-Turbo (cost-effective)
    • Enable for Features: Select which features use this provider
      • ☑️ Requirement Analysis
      • ☑️ Quality Scoring
      • ☑️ Semantic Search
      • ☑️ Auto-Summarization
      • ☑️ Sentiment Analysis
  3. Click Test Connection
  4. Verify test succeeds (sample request sent to OpenAI)
  5. Click Save Configuration

Step 3: Add Anthropic Configuration

  1. Click + Add Provider → Anthropic (Claude)
  2. Fill in configuration:
    • Provider Name: “Anthropic Production”
    • API Key: Paste your Anthropic API key (starts with sk-ant-)
    • Default Model: Claude 3.5 Sonnet (recommended)
    • Enable for Features: Select features
      • ☑️ Requirement Analysis
      • ☑️ Quality Scoring
      • ☑️ Long-Context Analysis (uses Claude’s 200K context)
  3. Click Test Connection
  4. Verify test succeeds
  5. Click Save Configuration

Step 4: Add Azure OpenAI Configuration

  1. Click + Add Provider → Azure OpenAI
  2. Fill in configuration:
    • Provider Name: “Azure OpenAI Production”
    • Endpoint: Your Azure OpenAI endpoint URL
    • API Key: Key 1 or Key 2 from Azure portal
    • Deployment Names: Map model types to deployments
      • GPT-4 Deployment: gpt-4-catalio
      • GPT-3.5 Deployment: gpt-35-turbo-catalio
      • Embeddings Deployment: text-embedding-3-small
    • API Version: 2024-02-01 (or latest available)
    • Enable for Features: Select features
  3. Click Test Connection
  4. Verify all deployments accessible
  5. Click Save Configuration

Model Selection

Feature-to-Model Mapping:

Configure which models power specific features:

Requirement Analysis:

  • Recommended: GPT-4 or Claude 3.5 Sonnet
  • Why: Complex reasoning requires most capable models
  • Fallback: GPT-3.5-Turbo (if cost-sensitive)

Quality Scoring:

  • Recommended: GPT-4 or Claude 3 Opus
  • Why: Nuanced evaluation benefits from advanced reasoning
  • Fallback: Claude 3.5 Sonnet

Semantic Search (Embeddings):

  • Recommended: OpenAI text-embedding-3-small
  • Why: Strong embedding quality at very low cost (supersedes the older ada-002)
  • Alternative: Azure OpenAI embeddings (for data residency)

Auto-Summarization:

  • Recommended: GPT-3.5-Turbo or Claude 3 Haiku
  • Why: Simple task, cost-effectiveness matters
  • Upgrade: GPT-4 for critical summaries

Sentiment Analysis:

  • Recommended: GPT-3.5-Turbo or Claude 3 Haiku
  • Why: Classification task, speed and cost matter
  • Upgrade: Rarely needed

Tag Generation:

  • Recommended: GPT-3.5-Turbo
  • Why: Pattern recognition, high-volume task
  • Upgrade: GPT-4 for more nuanced categorization

Example Configuration Strategy:

| Feature Type | Recommended Model | Provider |
| --- | --- | --- |
| Requirement Analysis | GPT-4 | OpenAI |
| Quality Scoring | GPT-4 | OpenAI |
| Semantic Search (Embeddings) | text-embedding-3-small | OpenAI |
| Long-Context Analysis | Claude 3.5 Sonnet (200K) | Anthropic |
| Summarization | GPT-3.5-Turbo | Azure OpenAI |
| Sentiment Analysis | GPT-3.5-Turbo | Azure OpenAI |
| Tag Generation | GPT-3.5-Turbo | Azure OpenAI |

Cost Optimization Strategy:

  1. Start with GPT-4 for all features to establish quality baseline
  2. Monitor usage and costs for 1 week
  3. Identify high-volume, simple tasks (summaries, tags, sentiment)
  4. Downgrade those to GPT-3.5-Turbo or Claude Haiku
  5. Keep GPT-4 for core analysis and quality scoring
  6. Review monthly and adjust based on budget

Testing Connection

Automated Connection Test:

After entering configuration, click Test Connection:

Test Steps:

  1. Authentication: Verify API key is valid
  2. Model Access: Confirm default model is accessible
  3. Rate Limits: Check rate limit headroom
  4. Sample Request: Send test prompt and verify response
  5. Latency: Measure response time

Success Criteria:

  • ✅ Authentication successful
  • ✅ Model accessible
  • ✅ Response received within 10 seconds
  • ✅ No rate limit errors

Troubleshooting Failed Tests:

“Invalid API Key”

  • Verify key copied correctly (no extra spaces)
  • Check key hasn’t been revoked
  • Ensure key has appropriate permissions

“Model Not Found”

  • For Azure: Verify deployment name matches exactly
  • For OpenAI/Anthropic: Ensure model is available to your account
  • Check for typos in model name

“Rate Limit Exceeded”

  • Wait 60 seconds and retry
  • Check whether other systems are using the same API key
  • Verify account tier supports expected usage

“Connection Timeout”

  • Check network connectivity to provider endpoint
  • Verify firewall allows outbound HTTPS to provider
  • Confirm DNS resolution working

Manual Verification:

Test with sample requirement:

  1. Navigate to any requirement in Catalio
  2. Click AI Actions → Analyze Requirement
  3. Wait for analysis to complete (5-15 seconds)
  4. Verify:
    • Quality score displayed
    • Improvement suggestions shown
    • No error messages
  5. Check Integration Logs for API call details

AI Features Enabled

Once LLM providers are configured, these powerful features become available:

AI-Assisted Requirement Writing

How It Works:

As you write requirements, AI provides real-time assistance:

  1. Natural Language Input: Start typing in plain English
    • Example: “Users need to export their data”
  2. Structure Suggestion: AI converts to user story format
    • As a User
    • I want to export my personal data in CSV format
    • So that I can backup my information locally
  3. Acceptance Criteria: AI suggests testable criteria
    • User can click “Export Data” button on profile page
    • System generates CSV with all profile fields
    • Download begins within 5 seconds
  4. Refinement: AI identifies missing details and suggests improvements

Activation:

  • Navigate to Requirements → New Requirement
  • Click AI Assistant button (🤖)
  • Enter natural language description
  • Review and refine AI suggestions
  • Accept to populate requirement fields

Best Practices:

  • Provide context about the persona and benefit
  • Be specific about the capability needed
  • Review AI suggestions carefully—they’re starting points, not final requirements
  • Iterate: Use AI suggestions, refine, then re-analyze for quality

Automatic Requirement Refinement

Quality Analysis:

AI evaluates requirements across multiple dimensions:

Completeness Score (0-100%):

  • Checks for required user story components
  • Identifies missing acceptance criteria
  • Flags absent assumptions or constraints
  • Verifies persona linkage

Clarity Score (0-100%):

  • Detects ambiguous language (“good,” “fast,” “user-friendly”)
  • Identifies vague pronouns (“it,” “that,” “this”)
  • Flags undefined technical terms
  • Suggests more specific alternatives

Testability Score (0-100%):

  • Evaluates if acceptance criteria are measurable
  • Identifies subjective criteria without metrics
  • Suggests quantifiable alternatives

Overall Quality Score:

  • Composite score combining all dimensions
  • Red (0-60%): Needs significant work
  • Yellow (61-80%): Good but improvable
  • Green (81-100%): Excellent quality

Improvement Suggestions:

AI provides specific, actionable recommendations:

Quality Score: 72% (Good)

Completeness: 80%
  ✓ User story format present
  ✓ Acceptance criteria defined
  ⚠️ Missing: Assumptions section
  ⚠️ Missing: Technical constraints

Clarity: 65%
  ⚠️ Ambiguous term: "quickly" (line 3)
     Suggestion: Replace with specific time (e.g., "within 3 seconds")
  ⚠️ Vague pronoun: "it" (line 5)
     Suggestion: Replace with "the export file"

Testability: 70%
  ⚠️ Non-measurable criterion: "easy to use"
     Suggestion: Replace with "completable in 3 clicks or less"

Recommended Actions:
  1. Add assumptions about data format and size limits
  2. Specify performance metric instead of "quickly"
  3. Replace "easy to use" with measurable usability metric
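
Under the hood, this style of analysis is typically a structured prompt that asks the model to return scores and issues as JSON. A simplified sketch of the idea (not Catalio’s actual prompt or schema) using the openai Python SDK:

```python
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def score_requirement(text: str) -> dict:
    """Ask the model to score a requirement and return the result as a dict."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": (
                "Score the following requirement from 0-100 on completeness, "
                "clarity, and testability, and list concrete issues. Respond "
                'with JSON only: {"completeness": int, "clarity": int, '
                '"testability": int, "issues": [str]}'
            )},
            {"role": "user", "content": text},
        ],
    )
    # In production the parse should be guarded against malformed output.
    return json.loads(response.choices[0].message.content)
```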

Activation:

  • Open any requirement
  • Click AI Actions → Analyze Quality
  • Review quality scores and suggestions
  • Click Apply Suggestion to auto-fix issues
  • Re-run analysis to verify improvements

Use Case Generation

From Requirement to Scenarios:

AI generates concrete use cases from requirement descriptions:

Input Requirement:

As a Business Analyst
I want to export requirements data as CSV
So that I can analyze trends in Excel

AI-Generated Use Cases:

Use Case 1: Standard Export

  • Actor: Business Analyst
  • Precondition: User logged in, requirements exist in project
  • Main Flow:
    1. User navigates to Requirements page
    2. User clicks “Export” button
    3. User selects “CSV format”
    4. System generates CSV with all fields
    5. Browser downloads file “requirements-2025-01-15.csv”
  • Postcondition: User has CSV file for analysis
  • Success Criteria: CSV file downloads within 5 seconds

Use Case 2: Filtered Export

  • Actor: Business Analyst
  • Main Flow:
    1. User applies filters (status: Active, priority: High)
    2. User clicks “Export Filtered”
    3. System generates CSV with only filtered requirements
  • Success Criteria: CSV contains only filtered subset

Use Case 3: Scheduled Export

  • Actor: Business Analyst
  • Main Flow:
    1. User navigates to Settings → Exports
    2. User creates scheduled export (daily at 8am)
    3. System emails CSV daily
  • Success Criteria: User receives email with CSV attachment every morning

Activation:

  • Open requirement
  • Click AI Actions → Generate Use Cases
  • Review generated scenarios
  • Edit for accuracy and organizational context
  • Click Save Use Cases to create linked use case records

Acceptance Criteria Suggestions

Smart Criteria Generation:

AI analyzes requirement and suggests comprehensive acceptance criteria:

Functional Criteria:

  • Core capability checks
  • User interface requirements
  • Data validation rules
  • Error handling scenarios

Non-Functional Criteria:

  • Performance metrics (response time, throughput)
  • Scalability requirements
  • Reliability and availability targets
  • Security and privacy requirements

Edge Cases:

  • Boundary conditions
  • Error scenarios
  • Concurrent usage
  • Data integrity checks

Example:

Requirement: OAuth authentication integration

AI-Generated Acceptance Criteria:

## Functional Criteria
- [ ] User can click "Sign in with Google" button on login page
- [ ] System redirects to Google OAuth consent screen
- [ ] After authorization, user redirected back to Catalio dashboard
- [ ] User profile populated with name and email from Google account
- [ ] System creates user record if first-time login
- [ ] System associates existing account if email matches
## Non-Functional Criteria
- [ ] OAuth redirect completes within 3 seconds
- [ ] System handles 1,000 concurrent OAuth flows
- [ ] Failed authentication shows clear error message
- [ ] OAuth tokens stored encrypted (AES-256)
- [ ] Session expires after 24 hours of inactivity
## Security Criteria
- [ ] OAuth state parameter prevents CSRF attacks
- [ ] Access tokens never logged or exposed
- [ ] Token refresh handled automatically
- [ ] Revoked tokens result in logout
## Edge Cases
- [ ] User denies OAuth consent → returns to login with message
- [ ] User already logged in → option to link Google account
- [ ] Email domain not in allowlist → access denied with explanation
- [ ] Google API unavailable → fallback to password login

Activation:

  • Open requirement
  • Scroll to Acceptance Criteria section
  • Click AI Suggest Criteria
  • Review suggestions
  • Check boxes for criteria to add
  • Click Add Selected

Requirement Quality Analysis

Continuous Quality Monitoring:

AI continuously monitors requirement quality across your portfolio:

Project Dashboard:

Requirement Quality Overview:
  Total Requirements: 247
  Average Quality Score: 78% (Good)

  Distribution:
    Excellent (81-100%): 89 requirements (36%)
    Good (61-80%): 124 requirements (50%)
    Needs Work (0-60%): 34 requirements (14%)

  Common Issues:
    1. Missing assumptions: 67 requirements
    2. Ambiguous language: 45 requirements
    3. Non-measurable criteria: 34 requirements
    4. Missing persona links: 23 requirements

Quality Trends:

  • Track quality improvement over time
  • Identify teams or categories with quality issues
  • Celebrate quality improvements

Quality Gates:

Configure quality thresholds for workflow transitions in Settings → Workflows → Quality Gates:

Example Quality Gate Configuration:

| Setting | Value | Purpose |
| --- | --- | --- |
| Minimum Quality Score | 75% | Overall threshold to approve requirements |
| Required Completeness | ≥ 80% | Ensures all key sections are filled |
| Required Clarity | ≥ 70% | Validates readability and structure |
| Required Testability | ≥ 70% | Confirms acceptance criteria are clear |

How It Works:

  • When a user attempts to mark a requirement as “Approved”, the quality gate is checked
  • If the quality score is below the threshold, the transition is blocked
  • The requirement owner receives a notification with AI-generated improvement suggestions
  • Once the requirement meets the quality threshold, it can be approved

Semantic Search:

Find requirements by meaning, not just keywords:

Traditional Keyword Search:

  • Query: “user authentication”
  • Finds: Requirements containing exact phrase “user authentication”
  • Misses: Requirements about OAuth, SSO, login, credentials

Semantic Search with AI:

  • Query: “user authentication”
  • Finds requirements about:
    • OAuth2 integration
    • Single sign-on (SSO)
    • Password reset flows
    • Multi-factor authentication (MFA)
    • Session management
    • Login rate limiting

How It Works:

  1. AI generates vector embeddings for all requirements (one-time)
  2. User query converted to vector embedding
  3. System finds requirements with similar vector representations
  4. Results ranked by semantic similarity
  5. Even different terminology matches conceptually related content
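
Conceptually the pipeline is only a few lines of code. This sketch uses the openai Python SDK and numpy, with made-up requirement text, to embed a tiny corpus and rank it against a query by cosine similarity:

```python
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(texts):
    """Embed a batch of strings with text-embedding-3-small."""
    result = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in result.data])

requirements = ["Users can export data as CSV", "Support OAuth single sign-on"]
corpus = embed(requirements)
query = embed(["user authentication"])[0]

# Cosine similarity ranks requirements by semantic closeness to the query.
scores = corpus @ query / (np.linalg.norm(corpus, axis=1) * np.linalg.norm(query))
for text, score in sorted(zip(requirements, scores), key=lambda pair: -pair[1]):
    print(f"{score:.3f}  {text}")
```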

Advanced Search Features:

Natural Language Queries:

  • “Requirements about exporting data”
  • “Features for business analysts”
  • “Performance and scalability requirements”
  • “Security-related features added in last 3 months”

Conceptual Grouping:

  • “Show me requirements similar to REQ-123”
  • “Find duplicates or overlapping requirements”
  • “Group requirements by conceptual theme”

Activation:

  • Navigate to Requirements page
  • Toggle Semantic Search (🔍🧠)
  • Enter natural language query
  • View results ranked by relevance
  • Click requirement to view details

Usage Management

Monitoring API Usage

Real-Time Usage Dashboard:

Catalio provides comprehensive usage monitoring:

  1. Navigate to Settings → Integrations → AI & LLM → Usage
  2. View metrics:
    • Requests Today: API calls by feature
    • Tokens Consumed: Input and output tokens
    • Cost Estimate: Approximate spend based on provider pricing
    • Rate Limit Utilization: Percentage of limits used
    • Average Latency: Response time by model

Example Dashboard:

AI Usage - Last 30 Days

OpenAI (GPT-4):
  Requests: 1,247
  Tokens: 2,456,789 (Input: 1,234,567 | Output: 1,222,222)
  Estimated Cost: $147.41
  Features:
    - Requirement Analysis: 456 requests ($89.21)
    - Quality Scoring: 678 requests ($52.14)
    - Use Case Generation: 113 requests ($6.06)

Anthropic (Claude 3.5 Sonnet):
  Requests: 89
  Tokens: 876,543
  Estimated Cost: $8.77
  Features:
    - Long-Context Analysis: 89 requests ($8.77)

OpenAI (Embeddings):
  Requests: 12,456
  Tokens: 34,567,890
  Estimated Cost: $3.46
  Features:
    - Semantic Search: 12,456 requests ($3.46)

Total Estimated Cost: $159.64
Projected Monthly Cost: $163.42 (based on trend)

Export Reports:

  • Click Export Usage Report
  • Select date range
  • Choose format: CSV, JSON, PDF
  • Use for billing reconciliation or cost allocation

Setting Budgets

Monthly Budget Limits:

Configure spending limits to control costs:

  1. Navigate to Settings → Integrations → AI & LLM → Budget
  2. Click Set Budget Limit
  3. Configure:
    • Monthly Budget: $200 (example)
    • Alert Thresholds:
      • 50% ($100): Email notification
      • 75% ($150): Email + Slack notification
      • 90% ($180): Email + Slack + warn users in UI
    • Budget Exceeded Action:
      • Warn: Continue but notify admins
      • Limit: Disable AI features until next month
      • Prompt: Ask user to confirm before expensive operations

Per-Feature Budgets:

Allocate budget across features in Settings → AI & LLM → Budget Management:

Example Monthly Budget Allocation ($200 total):

| Feature | Monthly Budget | Percentage | Priority |
| --- | --- | --- | --- |
| Requirement Analysis (GPT-4) | $100 | 50% | High |
| Quality Scoring (GPT-4) | $50 | 25% | High |
| Semantic Search (Embeddings) | $20 | 10% | Critical |
| Summaries & Tags (GPT-3.5) | $20 | 10% | Medium |
| Buffer/Overrun | $10 | 5% | - |

How Budget Enforcement Works:

  • Each feature has its own individual quota
  • When a feature exceeds its quota, only that feature is disabled
  • Critical features (like semantic search) continue with a warning to admins
  • The system prevents total budget overrun across all features
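
Conceptually, enforcement amounts to tracking spend against per-feature quotas; a toy Python sketch of the logic described above (feature names and amounts are illustrative, not Catalio internals):

```python
# Conceptual sketch of per-feature budget enforcement.
budgets = {"analysis": 100.0, "scoring": 50.0, "search": 20.0, "summaries": 20.0}
spent = {feature: 0.0 for feature in budgets}
CRITICAL = {"search"}  # critical features warn instead of shutting off

def charge(feature: str, cost: float) -> None:
    """Record spend and enforce the feature's quota."""
    spent[feature] += cost
    if spent[feature] > budgets[feature]:
        if feature in CRITICAL:
            print(f"warning: {feature} is over budget, continuing")
        else:
            raise RuntimeError(f"{feature} disabled until next budget reset")
```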

Managing Costs

Cost Optimization Strategies:

1. Model Tiering

Use appropriate models for each task to balance cost and quality:

Cost-Optimized Model Selection:

| Feature Type | Recommended Model | Rationale |
| --- | --- | --- |
| Requirement Analysis | GPT-4 | Complex reasoning worth the cost |
| Quality Scoring | Claude 3.5 Sonnet | Nuanced evaluation requires advanced AI |
| Summaries | GPT-3.5-Turbo | Simple task, high frequency |
| Tags | GPT-3.5-Turbo | Pattern recognition, predictable |
| Sentiment | Claude 3 Haiku | Fast and affordable classification |

Expected Savings: This configuration typically saves 60% compared to using GPT-4 for all features.

2. Caching and Deduplication

Catalio automatically caches AI responses:

  • Identical requests return cached results
  • Cache TTL: 24 hours (configurable)
  • Estimated savings: 15-30% of API calls
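
Conceptually, such a cache keys responses on a hash of the model and prompt and expires them after the TTL; a minimal Python sketch of the idea (not Catalio’s implementation):

```python
import hashlib
import time

_cache = {}  # prompt hash -> (expiry timestamp, response)
TTL_SECONDS = 24 * 3600  # mirrors the 24-hour default mentioned above

def cached_call(model: str, prompt: str, call_fn):
    """Return a cached response for identical (model, prompt) pairs."""
    key = hashlib.sha256(f"{model}:{prompt}".encode()).hexdigest()
    entry = _cache.get(key)
    if entry and entry[0] > time.time():
        return entry[1]                    # cache hit: no API cost
    response = call_fn(model, prompt)      # cache miss: pay for the call
    _cache[key] = (time.time() + TTL_SECONDS, response)
    return response
```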

3. Batch Processing

Process multiple requirements in one request:

  • Single API call for multiple summaries
  • Reduced overhead and latency
  • Up to 50% cost reduction for bulk operations
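
One way to batch is to pack several requirements into a single prompt and ask for one summary per item; a sketch with the openai Python SDK (prompt wording and model are illustrative):

```python
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_batch(requirements: list[str]) -> list[str]:
    """One API call that returns a one-sentence summary per requirement."""
    numbered = "\n".join(f"{i + 1}. {req}" for i, req in enumerate(requirements))
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": (
            "Summarize each requirement in one sentence. "
            "Return a JSON array of strings, one per requirement.\n" + numbered
        )}],
    )
    return json.loads(response.choices[0].message.content)
```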

4. User Controls

Empower users to manage costs:

  • Manual Trigger: AI features on-demand, not automatic
  • Bulk Actions: “Analyze Selected” for multiple requirements
  • Preview Mode: Show estimated cost before running analysis

5. Scheduled Optimization

Run expensive operations during off-peak:

  • Nightly batch: Generate embeddings for new requirements
  • Weekly: Re-analyze all requirements for quality trends
  • Monthly: Comprehensive portfolio analysis

Example Cost Trajectory:

Month 1 (Unoptimized): $487
- All features use GPT-4
- Auto-analysis on every save
- No caching
Month 2 (Optimized): $186
- Model tiering implemented
- Manual trigger for analysis
- Caching enabled
- Savings: 62%
Month 3 (Fully Optimized): $143
- Batch processing
- Scheduled operations
- User education on costs
- Savings: 71% vs. Month 1

Security Best Practices

API Key Rotation

Regular Key Rotation Schedule:

Recommended Rotation Frequency:

  • Production: Every 90 days
  • Development/Staging: Every 180 days
  • Compromised Keys: Immediately

Rotation Process:

Step 1: Generate New Key

  1. Log into provider console
  2. Create new API key with same permissions
  3. Test new key before replacing old key

Step 2: Update Catalio Configuration

  1. Navigate to Settings → Integrations → AI & LLM
  2. Edit provider configuration
  3. Add new key alongside old key (dual-key period)
  4. Click Save

Step 3: Monitor

  1. Verify all features work with new key
  2. Monitor for errors (keep old key active during verification)
  3. Wait 24-48 hours to ensure stability

Step 4: Revoke Old Key

  1. After verification period, log into provider console
  2. Revoke old API key
  3. Remove old key from Catalio configuration

Automation:

  • Set calendar reminders for rotation schedule
  • Use secrets management tools (HashiCorp Vault, Azure Key Vault)
  • Implement automated rotation with terraform or scripts

Access Control

Principle of Least Privilege:

Restrict AI feature access based on user roles. Configure permissions in Settings → Roles & Permissions → AI Features:

Role-Based AI Feature Permissions (example defaults):

| Permission | Viewer | Contributor | Admin |
| --- | --- | --- | --- |
| Use semantic search | ✓ | ✓ | ✓ |
| View AI summaries | ✓ | ✓ | ✓ |
| Trigger requirement analysis | | ✓ (own) | ✓ (all) |
| Generate use cases | | ✓ | ✓ |
| Generate acceptance criteria | | ✓ | ✓ |
| Configure AI providers | | | ✓ |
| Manage API keys | | | ✓ |
| Set budgets and limits | | | ✓ |
| Enable/disable features | | | ✓ |
| View personal usage | ✓ | ✓ | ✓ |
| View org-wide usage | | | ✓ |
| View costs | | | ✓ |

Catalio Configuration:

  1. Navigate to Settings → Roles & Permissions
  2. Edit role → AI Features section
  3. Toggle permissions per role
  4. Click Save

API Key Permissions:

Use provider-level permissions to restrict key capabilities:

OpenAI:

  • Create separate keys for production vs. development
  • Use project-scoped keys to isolate usage
  • Enable only required models per key

Azure OpenAI:

  • Use Azure RBAC for fine-grained control
  • Assign “Cognitive Services OpenAI User” role to service principals
  • Configure network restrictions (private endpoints, firewalls)

Data Privacy

PII and Sensitive Data:

Protect personally identifiable information (PII):

AI Accessible Toggle:

Mark requirements containing PII as non-AI-accessible:

  1. Open requirement with sensitive data
  2. Toggle AI Accessible to OFF
  3. AI features disabled for this requirement
  4. Requirement excluded from semantic search embeddings

Automatic PII Detection:

Catalio can detect potential PII:

  • Email addresses
  • Phone numbers
  • Social security numbers
  • Credit card numbers

Enable: Settings → AI & LLM → Privacy → PII Detection

  • When detected, prompt user to mark requirement as non-AI-accessible
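
For reference, this kind of detection is conceptually simple pattern matching; an illustrative Python sketch covering the four categories above (real PII detection needs broader, locale-aware rules):

```python
import re

# Illustrative patterns only; not Catalio's detection rules.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b(?:\+?1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def detect_pii(text: str) -> list[str]:
    """Return the PII categories found in the text."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]
```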

Data Residency:

For compliance requirements (GDPR, HIPAA):

European Customers (GDPR):

  • Recommended: Azure OpenAI with EU region (West Europe)
  • Alternative: Anthropic (data processing agreement available)
  • Avoid: OpenAI direct API (data processed in US)

Healthcare (HIPAA):

  • Required: Azure OpenAI with Business Associate Agreement (BAA)
  • Configure Azure OpenAI with PHI safeguards
  • Document data handling in compliance reports

Financial Services:

  • Use Azure OpenAI in appropriate region
  • Enable customer-managed keys (CMK) for encryption
  • Audit all AI API calls

Data Minimization:

Send only necessary context to AI:

  • Exclude user names and email from AI requests
  • Redact sensitive fields before sending
  • Use anonymized examples in prompts

Configuration:

  • Settings → AI & LLM → Privacy → Data Minimization
  • Select fields to exclude: Created By, Updated By, Internal Notes

Troubleshooting

Common Errors and Solutions

Error: “API Key Invalid or Expired”

Symptoms:

  • AI features return authentication errors
  • “401 Unauthorized” in integration logs

Solutions:

  1. Verify API key in provider console (not revoked)
  2. Check for extra spaces when copying key
  3. Regenerate key if expired
  4. Ensure key has appropriate permissions
  5. For Azure: Verify endpoint and deployment names match exactly

Error: “Rate Limit Exceeded”

Symptoms:

  • “429 Too Many Requests” errors
  • AI features slow or unavailable intermittently

Solutions:

  1. Check current usage in provider console
  2. Verify account tier and rate limits
  3. Implement request queuing in high-usage periods
  4. Upgrade provider tier if limits consistently hit
  5. Distribute load: Use multiple API keys or accounts
  6. Enable caching to reduce duplicate requests

Error: “Model Not Available”

Symptoms:

  • “Model not found” errors
  • Specific features fail while others work

Solutions:

  1. Verify model name spelling (GPT-4 vs gpt-4 vs GPT4)
  2. Check model access in provider console
  3. For Azure: Confirm deployment created and active
  4. Ensure account has access to requested model
  5. Try alternative model as fallback

Error: “Request Timeout”

Symptoms:

  • AI features hang, then fail after 30-60 seconds
  • Inconsistent availability

Solutions:

  1. Check network connectivity to provider endpoint
  2. Verify firewall allows outbound HTTPS
  3. Test provider API directly with curl/Postman
  4. Increase timeout in Catalio settings (Settings → AI → Advanced → Timeout)
  5. Check provider status page for outages

Error: “Budget Limit Exceeded”

Symptoms:

  • AI features disabled with budget message
  • Users cannot trigger analysis

Solutions:

  1. Review current budget: Settings → AI & LLM → Budget
  2. Increase budget limit if appropriate
  3. Check usage breakdown for unexpected consumption
  4. Temporarily increase limit for critical needs
  5. Wait until next monthly budget reset

Error: “Quality Score Calculation Failed”

Symptoms:

  • AI analysis starts but returns no results
  • Generic error message without details

Solutions:

  1. Check requirement content isn’t empty
  2. Verify requirement has minimum required fields
  3. Try analyzing different requirement (isolate issue)
  4. Review integration logs for detailed error
  5. Contact support if persistent

Debugging Integration Issues

Enable Debug Logging:

  1. Navigate to Settings → AI & LLM → Advanced
  2. Enable Debug Logging
  3. Set Log Level: Debug
  4. Click Save

Viewing Integration Logs:

  1. Navigate to Settings → AI & LLM → Logs
  2. Filter by:
    • Date range
    • Provider (OpenAI, Anthropic, Azure)
    • Feature (Analysis, Search, etc.)
    • Status (Success, Error)
  3. Click log entry to view details:
    • Request payload (redacted sensitive data)
    • Response data
    • Latency
    • Error messages
    • Token usage

Example Log Entry:

{
  "timestamp": "2025-01-15T14:23:45Z",
  "provider": "OpenAI",
  "model": "gpt-4",
  "feature": "requirement_analysis",
  "requirement_id": "REQ-123",
  "request": {
    "prompt": "Analyze requirement quality...",
    "max_tokens": 1000
  },
  "response": {
    "status": "success",
    "latency_ms": 3456,
    "tokens_used": 892,
    "cost_estimate": "$0.0268"
  }
}

Testing with Sample Requirement:

Isolate issues with controlled test:

  1. Create new requirement: “Test Requirement for AI Integration”
  2. Add minimal content:
    • As a Test User
    • I want to test AI features
    • So that I can verify configuration
  3. Click AI Actions → Analyze Quality
  4. Note exact error message
  5. Check logs for detailed error

Provider Status Pages:

Check for service outages:

  • OpenAI: status.openai.com
  • Anthropic: status.anthropic.com
  • Azure: status.azure.com

Contacting Support:

If issues persist, contact Catalio support with:

  • Detailed error message
  • Screenshots
  • Steps to reproduce
  • Integration logs (export from Logs page)
  • Expected vs. actual behavior

Best Practices Summary

Configuration:

  • ✅ Start with one provider, expand as needed
  • ✅ Test thoroughly before enabling for all users
  • ✅ Set conservative budget limits initially
  • ✅ Use model tiering for cost optimization
  • ✅ Enable caching to reduce duplicate requests

Security:

  • ✅ Rotate API keys every 90 days
  • ✅ Use managed identities when possible (Azure)
  • ✅ Mark sensitive requirements as non-AI-accessible
  • ✅ Review and audit AI API usage monthly
  • ✅ Configure appropriate data residency for compliance

Cost Management:

  • ✅ Monitor usage daily during first week
  • ✅ Use GPT-3.5-Turbo/Claude Haiku for high-volume tasks
  • ✅ Reserve GPT-4/Claude Opus for complex analysis
  • ✅ Enable budget alerts at 50%, 75%, 90%
  • ✅ Review and optimize monthly

User Education:

  • ✅ Train team on AI feature costs and best practices
  • ✅ Document when to use AI features vs. manual work
  • ✅ Share cost dashboard with stakeholders
  • ✅ Celebrate quality improvements from AI insights

Quality Assurance:

  • ✅ Always review AI suggestions before accepting
  • ✅ AI augments human judgment, doesn’t replace it
  • ✅ Use quality scores as guidance, not absolute truth
  • ✅ Iterate: AI suggestion → human refinement → re-analysis

Next Steps

Now that LLM integration is configured, explore the support resources below and gradually roll AI features out to your team.

Support Resources

Support:

  • Email: support@catalio.com
  • Chat: Available in-app (bottom right)
  • Enterprise Support: Dedicated account managers for enterprise plans

Ready to Get Started?

  1. Choose your LLM provider (OpenAI recommended for most teams)
  2. Create API account and generate key
  3. Configure in Catalio: Settings → Integrations → AI & LLM
  4. Test with sample requirement
  5. Gradually enable features for your team
  6. Monitor usage and costs
  7. Optimize based on actual usage patterns

Last Updated: January 11, 2025 Applies to: Catalio v1.2+, OpenAI API v1, Anthropic API v2, Azure OpenAI API 2024-02-01