Unlock Catalio’s full AI-powered capabilities by connecting Large Language Model (LLM) providers like OpenAI and Anthropic. This integration enables intelligent requirement analysis, automated refinement suggestions, semantic search, and AI-assisted writing that transforms how your team creates and manages product specifications.
Overview
Why LLM Integration Matters
Catalio’s AI features go far beyond simple text generation. By connecting enterprise-grade LLM providers, you enable sophisticated analysis that helps your team:
Write Better Requirements
- AI suggests improvements to clarity, completeness, and testability
- Automatically identifies missing acceptance criteria
- Generates user story components from natural language descriptions
- Detects ambiguous language and offers alternatives
Accelerate Requirement Creation
- Convert meeting notes into structured requirements
- Generate acceptance criteria from requirement descriptions
- Suggest relevant personas based on requirement content
- Auto-create related use cases and test scenarios
Ensure Quality and Consistency
- Analyze sentiment to detect stakeholder concerns
- Score requirements on completeness and clarity metrics
- Suggest appropriate priority and complexity levels
- Identify duplicate or conflicting requirements
Enable Intelligent Search
- Semantic search understands meaning, not just keywords
- Find related requirements using natural language queries
- Discover dependencies through content analysis
- Group requirements by conceptual similarity
Provide Business Insights
- Summarize requirements for executive dashboards
- Extract key themes across requirement portfolios
- Identify gaps in requirement coverage
- Generate release notes from completed requirements
Supported LLM Providers
Catalio integrates with the leading LLM platforms:
OpenAI (GPT-4/GPT-3.5)
- Industry-leading text generation and analysis
- Excellent for complex reasoning and refinement suggestions
- Strong performance on technical documentation
- Broad availability and competitive pricing
Anthropic (Claude)
- Industry-leading context windows (200K tokens)
- Exceptional at analyzing large requirement sets
- Strong safety features and alignment
- Excellent for nuanced business reasoning
Azure OpenAI Service
- Enterprise security and compliance (SOC 2, HIPAA, FedRAMP)
- Deployed in your Azure tenant for data residency
- Integration with Microsoft enterprise identity
- Guaranteed uptime SLAs and dedicated capacity
Future Support
- Google Vertex AI (Gemini) - Coming Q2 2025
- AWS Bedrock (Claude, Titan) - Coming Q2 2025
- Self-hosted models (Llama, Mistral) - Coming Q3 2025
Data Privacy and Security
Critical Understanding: When you enable LLM integration, requirement content is sent to the provider’s API for processing, so understanding how each provider handles that data is essential for compliance:
Data Transmission:
- Requirement text, personas, use cases sent to LLM provider
- Transmitted over encrypted HTTPS connections
- API requests include only necessary context
- User credentials never transmitted to LLM providers
Provider Data Policies:
- OpenAI: Enterprise API calls not used for model training (OpenAI API Data Usage Policies)
- Anthropic: No training on customer data by default (Anthropic Privacy Policy)
- Azure OpenAI: Data remains in your Azure tenant, full control (Azure OpenAI Data Privacy)
Catalio Safeguards:
- AI Accessible Toggle: Mark sensitive requirements as non-AI-accessible
- Selective Processing: Choose which features use AI
- Audit Logging: Track all AI API calls for compliance
- Data Residency: Azure OpenAI keeps data in your region
Compliance Considerations:
- GDPR: Ensure provider has adequate safeguards (Azure recommended for EU)
- HIPAA: Use Azure OpenAI with BAA for healthcare applications
- SOC 2: All providers support enterprise security standards
- Export Control: Verify provider availability in your jurisdiction
Prerequisites
Account Setup with LLM Providers
Before configuring Catalio, establish accounts with your chosen LLM providers.
OpenAI Prerequisites
For OpenAI Platform (Recommended for Most Teams):
- Create OpenAI Account: Sign up at platform.openai.com
- Verify Account: Complete email verification and phone verification
- Add Payment Method: Navigate to Billing → Payment methods → Add payment
- Review Pricing: Understand token-based pricing for your expected usage
- GPT-4: ~$0.03-0.06 per 1,000 tokens (varies by model)
- GPT-3.5-Turbo: ~$0.002 per 1,000 tokens
- Set Usage Limits: Configure hard spending limits to control costs
- Settings → Billing → Usage limits
- Recommended: Start with $100/month limit, adjust based on usage
For OpenAI Enterprise (Large Organizations):
- Contact OpenAI sales for enterprise agreements
- Negotiate volume pricing and dedicated capacity
- Enterprise features include:
- No training on your data by default
- Extended context windows
- Priority access during high-demand periods
- Dedicated account management
Anthropic Prerequisites
For Anthropic API (Claude):
- Request Access: Sign up at console.anthropic.com
- Account Approval: Anthropic reviews applications (typically 1-2 business days)
- Organization Setup: Configure your workspace and team members
- Add Payment: Navigate to Settings → Billing → Add payment method
- Understand Pricing: Claude pricing varies by model
- Claude 3.5 Sonnet: ~$0.003-0.015 per 1,000 tokens
- Claude 3 Opus: ~$0.015-0.075 per 1,000 tokens
- Claude 3 Haiku: ~$0.00025-0.00125 per 1,000 tokens
- Set Budget Alerts: Configure notifications for usage thresholds
For Anthropic Enterprise:
- Contact Anthropic for enterprise licensing
- Features include:
- Guaranteed capacity and uptime SLAs
- Extended rate limits
- Priority support
- Custom deployment options
Azure OpenAI Prerequisites
For Azure OpenAI Service:
- Azure Subscription: Active Azure subscription required
- Request Access: Apply for Azure OpenAI access
- Navigate to Azure portal
- Search for “Azure OpenAI Service”
- Complete access application (approval typically 1-5 days)
- Create Resource: After approval, create Azure OpenAI resource
- Choose region for data residency requirements
- Select pricing tier (S0 standard for most use cases)
- Deploy Models: Deploy specific model versions
- GPT-4, GPT-3.5-Turbo, or text-embedding-ada-002
- Choose deployment name and capacity
- Configure Networking: Set up virtual network or private endpoints if required
- Set Up Managed Identity: For secure authentication without keys
Azure Enterprise Considerations:
- Integration with Azure Active Directory for SSO
- Private Link for network isolation
- Customer-managed keys for data encryption
- Compliance certifications (HIPAA, FedRAMP, etc.)
Network and Security Requirements
Network Access:
- Outbound HTTPS access to LLM provider endpoints
- OpenAI: api.openai.com on port 443
- Anthropic: api.anthropic.com on port 443
- Azure: {your-resource}.openai.azure.com on port 443
Firewall Configuration:
- Whitelist provider domains in corporate firewalls
- Allow DNS resolution for API endpoints
- Configure proxy settings if required
Secrets Management:
- Use secure storage for API keys (environment variables, vault services)
- Rotate keys regularly (recommended: every 90 days)
- Restrict key access to authorized personnel only
- Never commit API keys to source control
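In application code, this usually means reading keys from the environment (or a vault client) and failing fast when they are missing. A minimal sketch — the variable name `OPENAI_API_KEY` is a common convention, not a Catalio requirement:

```python
import os

def load_api_key(env_var: str = "OPENAI_API_KEY") -> str:
    """Fetch an API key from the environment, failing fast if absent."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(
            f"{env_var} is not set; store keys in a vault or the environment, "
            "never in source control."
        )
    return key
```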
OpenAI Setup
Creating API Keys
Step 1: Navigate to API Keys Section
- Log into platform.openai.com
- Click your profile icon in top-right corner
- Select API keys from dropdown menu
- Click + Create new secret key
Step 2: Configure Key Settings
- Key Name: Provide descriptive name
- Good: “Catalio Production - AI Features”
- Good: “Catalio Development Environment”
- Bad: “Key 1”, “Test”
- Permissions (if available): Select appropriate scopes
- Model access: Enable GPT-4 and GPT-3.5-Turbo
- Usage: Set permission level (recommended: “All” for simplicity)
- Project (if using projects): Assign to appropriate project
- Click Create secret key
Step 3: Secure Your Key
- Copy Immediately: Key shown only once—copy to secure location
- Test Key: Verify key works before leaving the page
- Store Securely: Use password manager or secrets vault
- Document: Record key name, creation date, and purpose
Example API Key Format:
sk-proj-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
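To “test the key before leaving the page,” a cheap check is to call OpenAI’s read-only model-listing endpoint (`GET /v1/models`). A standard-library sketch:

```python
import urllib.error
import urllib.request

OPENAI_MODELS_URL = "https://api.openai.com/v1/models"

def build_models_request(api_key: str) -> urllib.request.Request:
    # Listing models is a cheap, read-only call that exercises authentication.
    return urllib.request.Request(
        OPENAI_MODELS_URL,
        headers={"Authorization": f"Bearer {api_key}"},
    )

def key_is_valid(api_key: str) -> bool:
    try:
        with urllib.request.urlopen(build_models_request(api_key), timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        # A 401 here almost always means a bad or revoked key.
        return False
```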
Organization Configuration
Organization ID:
OpenAI uses organizations to group API usage and billing:
- Navigate to Settings → Organization
- Note your Organization ID (starts with org-)
- Configure organization settings:
- Name: Your company or project name
- Team members: Add authorized users
- Roles: Assign appropriate permissions
Usage Tracking:
Set up monitoring to track API consumption:
- Navigate to Usage → Activity
- View real-time usage by:
- Model (GPT-4, GPT-3.5, embeddings)
- Date range
- API key
- Export usage reports for cost analysis
Setting Usage Limits
Hard Limits (Spending Caps):
Prevent unexpected costs with hard spending limits:
- Navigate to Settings → Billing → Usage limits
- Set Hard limit (maximum monthly spend)
- Recommended starting point: $100/month
- API calls rejected when limit reached
- Set Soft limit (notification threshold)
- Example: $75/month (75% of hard limit)
- Receive email when threshold crossed
Rate Limits:
OpenAI enforces rate limits based on account tier:
Free Tier:
- 3 requests per minute (RPM)
- 40,000 tokens per minute (TPM)
- Insufficient for production use
Pay-as-you-go (Tier 1):
- 3,500 RPM for GPT-3.5-Turbo
- 200 RPM for GPT-4
- 350,000 TPM across models
Tier 2+ (Higher Usage):
- Automatically upgraded based on usage history
- Up to 10,000 RPM for GPT-4 at highest tier
- Check current tier: Settings → Organization → Limits
Best Practices:
- Start conservative, increase limits as needed
- Monitor usage daily during first week
- Set Slack/email alerts for 50%, 75%, 90% thresholds
- Review and adjust monthly based on actual usage
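When a rate limit is hit, the standard remedy is exponential backoff with jitter. A generic sketch — `RuntimeError` stands in for the provider SDK’s rate-limit exception (e.g., swap in `openai.RateLimitError` in real code):

```python
import random
import time

def call_with_backoff(fn, max_retries: int = 5, base_delay: float = 1.0,
                      sleep=time.sleep):
    """Call fn(), retrying on rate-limit errors with exponential backoff."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RuntimeError:  # stand-in for the SDK's rate-limit error
            if attempt == max_retries - 1:
                raise  # out of retries, surface the error
            # 1s, 2s, 4s, ... plus jitter so clients don't retry in lockstep
            sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.25))
```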
Model Selection
Choose appropriate models for different use cases:
GPT-4 (Recommended for Core Features):
- Use for: Requirement analysis, quality scoring, complex reasoning
- Strengths: Superior reasoning, fewer hallucinations, better instruction following
- Cost: Higher (~10-20x GPT-3.5)
- Speed: Slower (3-10 seconds per request)
GPT-3.5-Turbo (Recommended for High-Volume):
- Use for: Semantic tagging, summaries, high-volume tasks
- Strengths: Fast, cost-effective, good for straightforward tasks
- Cost: Very low (~$0.002 per 1K tokens)
- Speed: Fast (1-3 seconds per request)
Text-Embedding-3-Small (Semantic Search):
- Use for: Vector embeddings for semantic search
- Strengths: Improved performance over ada-002, better multilingual support
- Cost: ~$0.00002 per 1K tokens (80% cheaper than ada-002)
- Speed: Very fast (<1 second)
Catalio Default Configuration:
- Analysis and quality scoring: GPT-4
- Summaries and tagging: GPT-3.5-Turbo
- Semantic search embeddings: text-embedding-3-small
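Internally, this kind of feature-to-model routing amounts to a lookup table. A sketch mirroring the defaults above — the feature names are illustrative, not Catalio’s actual configuration keys:

```python
# Hypothetical feature-to-model routing table mirroring the defaults above.
DEFAULT_MODELS = {
    "analysis": "gpt-4",
    "quality_scoring": "gpt-4",
    "summary": "gpt-3.5-turbo",
    "tagging": "gpt-3.5-turbo",
    "embeddings": "text-embedding-3-small",
}

def model_for(feature: str) -> str:
    """Resolve the model for a feature, defaulting to the cheap tier."""
    return DEFAULT_MODELS.get(feature, "gpt-3.5-turbo")
```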
Anthropic Setup
Creating API Keys
Step 1: Access API Console
- Log into console.anthropic.com
- Navigate to Settings → API Keys
- Click Create Key
Step 2: Configure Key
- Key Name: Descriptive identifier
- Example: “Catalio Production Environment”
- Workspace: Select appropriate workspace (if using multiple)
- Permissions: Configure access levels
- Model Access: Enable Claude 3.5 Sonnet (recommended)
- Rate Limits: Organization-level (cannot be set per-key)
- Click Create Key
Step 3: Save and Secure
- Copy Key: Shown only once, starts with sk-ant-
- Store Securely: Password manager or secrets vault
- Test Immediately: Verify key works before leaving page
Example API Key Format:
sk-ant-api03-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Workspace Configuration
Organization Settings:
- Navigate to Settings → Organization
- Configure:
- Organization Name: Your company or team name
- Team Members: Add authorized users with roles
- Default Model: Claude 3.5 Sonnet (recommended for Catalio)
Billing Setup:
- Navigate to Settings → Billing
- Add payment method (credit card or invoicing for enterprise)
- Configure billing alerts:
- Set alert thresholds: $50, $100, $200, etc.
- Configure notification emails
- Review pricing tiers and volume discounts
Model Selection
Choose the right Claude model for your needs:
Claude 3.5 Sonnet (Recommended for Most Teams):
- Use for: General requirement analysis, quality scoring, refinement
- Strengths: Best balance of capability, speed, and cost
- Context Window: 200,000 tokens (analyze entire requirement portfolios)
- Cost: ~$0.003-0.015 per 1K tokens
- Speed: Fast (2-5 seconds per request)
Claude 3 Opus (Premium Analysis):
- Use for: Complex reasoning, highest-quality analysis
- Strengths: Most capable model, best for nuanced business logic
- Context Window: 200,000 tokens
- Cost: Higher (~$0.015-0.075 per 1K tokens)
- Speed: Slower (5-15 seconds per request)
Claude 3 Haiku (High-Volume Tasks):
- Use for: Simple tagging, summaries, classification
- Strengths: Extremely fast and cost-effective
- Context Window: 200,000 tokens
- Cost: Very low (~$0.00025-0.00125 per 1K tokens)
- Speed: Very fast (1-2 seconds per request)
Catalio Recommendation:
- Default: Claude 3.5 Sonnet for all features (best balance)
- Cost Optimization: Claude 3 Haiku for tagging/summaries, Sonnet for analysis
- Premium: Claude 3 Opus for mission-critical requirement validation
Rate Limits and Quotas
Anthropic Rate Limits:
Limits based on account tier:
Tier 1 (New Accounts):
- 50 requests per minute (RPM)
- 40,000 tokens per minute (TPM)
- $100 maximum monthly spend
Tier 2 (Established Use):
- 1,000 RPM
- 100,000 TPM
- $500 maximum monthly spend
Tier 3+ (Higher Tiers):
- Automatically upgraded based on:
- Account age (typically 7-14 days active use)
- Payment history (consistent on-time payments)
- Usage patterns (higher spend increases limits)
- Up to 5,000 RPM at highest tier
Monitoring Usage:
- Navigate to Usage → API Activity
- View metrics:
- Requests per day/week/month
- Token consumption by model
- Cost breakdown
- Rate limit utilization
- Export reports for analysis
Azure OpenAI Setup
Creating Azure OpenAI Resource
Step 1: Navigate to Azure Portal
- Log into portal.azure.com
- Click + Create a resource
- Search for “Azure OpenAI”
- Click Create on Azure OpenAI Service
Step 2: Configure Resource
- Basics:
  - Subscription: Select your Azure subscription
  - Resource Group: Create new or select existing
  - Region: Choose based on data residency requirements
    - US: East US, South Central US
    - EU: West Europe, Sweden Central (GDPR compliance)
    - Asia: Japan East
  - Name: Unique name (e.g., catalio-openai-prod)
  - Pricing Tier: S0 Standard (most common)
- Network:
  - Connectivity: Public or Private Endpoint
    - Public: Accessible from the internet (use API keys)
    - Private: Azure Virtual Network only (highest security)
  - Network Security: Configure firewall rules if needed
- Tags:
  - Add organizational tags (Environment: Production, Owner: IT, etc.)
- Review + Create:
  - Verify configuration
  - Click Create (deployment takes 2-5 minutes)
Step 3: Deploy Models
After resource creation:
- Navigate to your Azure OpenAI resource
- Click Model deployments → Manage Deployments (opens Azure OpenAI Studio)
- Click + Create new deployment
- Configure deployment:
- Model: Select GPT-4 or GPT-3.5-Turbo
- Model version: Choose latest stable version
- Deployment name: Descriptive name (e.g., gpt-4-catalio)
- Content filter: Default or configure custom filters
- Rate limit (TPM): Tokens per minute capacity
- Start with 10K TPM, increase based on usage
- Click Create
- Repeat for additional models (embeddings, GPT-3.5, etc.)
Endpoint Configuration
Retrieve Connection Information:
- Navigate to your Azure OpenAI resource
- Click Keys and Endpoint in left menu
- Copy required information:
- Endpoint: https://{your-resource}.openai.azure.com
- Key 1 or Key 2: API key for authentication
- Location: Azure region
Example Configuration:
Endpoint: https://{your-resource}.openai.azure.com
Key: YOUR_AZURE_OPENAI_API_KEY
Region: eastus
Deployment: gpt-4-catalio
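Unlike the public OpenAI API, Azure OpenAI routes requests by deployment name rather than model name. A sketch of the request-URL shape (the `api-key` header is sent separately; the resource name is illustrative):

```python
def azure_chat_url(endpoint: str, deployment: str,
                   api_version: str = "2024-02-01") -> str:
    """Build the chat-completions URL for an Azure OpenAI deployment.

    Azure addresses requests to a deployment you created, not a model name.
    """
    return (f"{endpoint.rstrip('/')}/openai/deployments/{deployment}"
            f"/chat/completions?api-version={api_version}")
```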
Managed Identity Setup
For enhanced security, use managed identities instead of API keys:
Step 1: Enable Managed Identity
- Navigate to Azure OpenAI resource
- Click Identity in left menu
- System assigned tab → Toggle Status to On
- Click Save
- Copy Object (principal) ID
Step 2: Grant Permissions
- Navigate to Access control (IAM)
- Click + Add → Add role assignment
- Select role:
- Cognitive Services User: Read access to models
- Cognitive Services OpenAI User: Use models via API
- Assign to system-assigned managed identity
- Click Save
Step 3: Configure Catalio
In Catalio, use managed identity authentication:
- No API key required
- Authentication via Azure AD tokens
- Catalio must be deployed in Azure for this approach
Benefits:
- No API keys to manage or rotate
- Integration with Azure AD for access control
- Audit logs in Azure AD
- Higher security posture
Configuring in Catalio
Adding API Keys
Step 1: Navigate to Integrations Settings
- Log into Catalio as an administrator
- Navigate to Settings → Integrations → AI & LLM
- Click Configure LLM Providers
Step 2: Add OpenAI Configuration
- Click + Add Provider → OpenAI
- Fill in configuration:
- Provider Name: Descriptive name (e.g., “OpenAI Production”)
- API Key: Paste your OpenAI API key (starts with sk-)
- Organization ID (optional): Your org ID (starts with org-)
- Default Model: Select default model
- GPT-4 (best quality)
- GPT-3.5-Turbo (cost-effective)
- Enable for Features: Select which features use this provider
- ☑️ Requirement Analysis
- ☑️ Quality Scoring
- ☑️ Semantic Search
- ☑️ Auto-Summarization
- ☑️ Sentiment Analysis
- Click Test Connection
- Verify test succeeds (sample request sent to OpenAI)
- Click Save Configuration
Step 3: Add Anthropic Configuration
- Click + Add Provider → Anthropic (Claude)
- Fill in configuration:
- Provider Name: “Anthropic Production”
- API Key: Paste your Anthropic API key (starts with sk-ant-)
- Default Model: Claude 3.5 Sonnet (recommended)
- Enable for Features: Select features
- ☑️ Requirement Analysis
- ☑️ Quality Scoring
- ☑️ Long-Context Analysis (uses Claude’s 200K context)
- Click Test Connection
- Verify test succeeds
- Click Save Configuration
Step 4: Add Azure OpenAI Configuration
- Click + Add Provider → Azure OpenAI
- Fill in configuration:
- Provider Name: “Azure OpenAI Production”
- Endpoint: Your Azure OpenAI endpoint URL
- API Key: Key 1 or Key 2 from Azure portal
- Deployment Names: Map model types to deployments
  - GPT-4 Deployment: gpt-4-catalio
  - GPT-3.5 Deployment: gpt-35-turbo-catalio
  - Embeddings Deployment: text-embedding-3-small
- API Version: 2024-02-01 (or latest available)
- Enable for Features: Select features
- Click Test Connection
- Verify all deployments accessible
- Click Save Configuration
Model Selection
Feature-to-Model Mapping:
Configure which models power specific features:
Requirement Analysis:
- Recommended: GPT-4 or Claude 3.5 Sonnet
- Why: Complex reasoning requires most capable models
- Fallback: GPT-3.5-Turbo (if cost-sensitive)
Quality Scoring:
- Recommended: GPT-4 or Claude 3 Opus
- Why: Nuanced evaluation benefits from advanced reasoning
- Fallback: Claude 3.5 Sonnet
Semantic Search (Embeddings):
- Recommended: OpenAI text-embedding-3-small
- Why: Strong embedding quality at very low cost (substantially cheaper than ada-002)
- Alternative: Azure OpenAI embeddings (for data residency)
Auto-Summarization:
- Recommended: GPT-3.5-Turbo or Claude 3 Haiku
- Why: Simple task, cost-effectiveness matters
- Upgrade: GPT-4 for critical summaries
Sentiment Analysis:
- Recommended: GPT-3.5-Turbo or Claude 3 Haiku
- Why: Classification task, speed and cost matter
- Upgrade: Rarely needed
Tag Generation:
- Recommended: GPT-3.5-Turbo
- Why: Pattern recognition, high-volume task
- Upgrade: GPT-4 for more nuanced categorization
Example Configuration Strategy:
| Feature Type | Recommended Model | Provider |
|---|---|---|
| Requirement Analysis | GPT-4 | OpenAI |
| Quality Scoring | GPT-4 | OpenAI |
| Semantic Search (Embeddings) | text-embedding-3-small | OpenAI |
| Long-Context Analysis | Claude 3.5 Sonnet (200K) | Anthropic |
| Summarization | GPT-3.5-Turbo | Azure OpenAI |
| Sentiment Analysis | GPT-3.5-Turbo | Azure OpenAI |
| Tag Generation | GPT-3.5-Turbo | Azure OpenAI |
Cost Optimization Strategy:
- Start with GPT-4 for all features to establish quality baseline
- Monitor usage and costs for 1 week
- Identify high-volume, simple tasks (summaries, tags, sentiment)
- Downgrade those to GPT-3.5-Turbo or Claude Haiku
- Keep GPT-4 for core analysis and quality scoring
- Review monthly and adjust based on budget
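Estimating spend per request is simple arithmetic over token counts. A sketch using the approximate prices quoted earlier in this page (illustrative only — always check the provider’s current price list):

```python
# Illustrative per-1K-token prices (USD), matching the approximate
# figures quoted above; verify against the provider's current pricing.
PRICE_PER_1K = {
    "gpt-4": {"input": 0.03, "output": 0.06},
    "gpt-3.5-turbo": {"input": 0.002, "output": 0.002},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Rough spend estimate for one request, in USD."""
    p = PRICE_PER_1K[model]
    return (input_tokens / 1000) * p["input"] + (output_tokens / 1000) * p["output"]
```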
Testing Connection
Automated Connection Test:
After entering configuration, click Test Connection:
Test Steps:
- Authentication: Verify API key is valid
- Model Access: Confirm default model is accessible
- Rate Limits: Check rate limit headroom
- Sample Request: Send test prompt and verify response
- Latency: Measure response time
Success Criteria:
- ✅ Authentication successful
- ✅ Model accessible
- ✅ Response received within 10 seconds
- ✅ No rate limit errors
Troubleshooting Failed Tests:
“Invalid API Key”
- Verify key copied correctly (no extra spaces)
- Check key hasn’t been revoked
- Ensure key has appropriate permissions
“Model Not Found”
- For Azure: Verify deployment name matches exactly
- For OpenAI/Anthropic: Ensure model is available to your account
- Check for typos in model name
“Rate Limit Exceeded”
- Wait 60 seconds and retry
- Check if other systems using same API key
- Verify account tier supports expected usage
“Connection Timeout”
- Check network connectivity to provider endpoint
- Verify firewall allows outbound HTTPS to provider
- Confirm DNS resolution working
Manual Verification:
Test with sample requirement:
- Navigate to any requirement in Catalio
- Click AI Actions → Analyze Requirement
- Wait for analysis to complete (5-15 seconds)
- Verify:
- Quality score displayed
- Improvement suggestions shown
- No error messages
- Check Integration Logs for API call details
AI Features Enabled
Once LLM providers are configured, these powerful features become available:
AI-Assisted Requirement Writing
How It Works:
As you write requirements, AI provides real-time assistance:
- Natural Language Input: Start typing in plain English
- Example: “Users need to export their data”
- Structure Suggestion: AI converts to user story format
- As a User
- I want to export my personal data in CSV format
- So that I can backup my information locally
- Acceptance Criteria: AI suggests testable criteria
- User can click “Export Data” button on profile page
- System generates CSV with all profile fields
- Download begins within 5 seconds
- Refinement: AI identifies missing details and suggests improvements
Activation:
- Navigate to Requirements → New Requirement
- Click AI Assistant button (🤖)
- Enter natural language description
- Review and refine AI suggestions
- Accept to populate requirement fields
Best Practices:
- Provide context about the persona and benefit
- Be specific about the capability needed
- Review AI suggestions carefully—they’re starting points, not final requirements
- Iterate: Use AI suggestions, refine, then re-analyze for quality
Automatic Requirement Refinement
Quality Analysis:
AI evaluates requirements across multiple dimensions:
Completeness Score (0-100%):
- Checks for required user story components
- Identifies missing acceptance criteria
- Flags absent assumptions or constraints
- Verifies persona linkage
Clarity Score (0-100%):
- Detects ambiguous language (“good,” “fast,” “user-friendly”)
- Identifies vague pronouns (“it,” “that,” “this”)
- Flags undefined technical terms
- Suggests more specific alternatives
Testability Score (0-100%):
- Evaluates if acceptance criteria are measurable
- Identifies subjective criteria without metrics
- Suggests quantifiable alternatives
Overall Quality Score:
- Composite score combining all dimensions
- Red (0-60%): Needs significant work
- Yellow (61-80%): Good but improvable
- Green (81-100%): Excellent quality
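The banding above is a straightforward threshold mapping; a sketch of how the traffic-light label could be derived from the composite score:

```python
def quality_band(score: int) -> str:
    """Map a 0-100 composite quality score to the traffic-light bands above."""
    if score <= 60:
        return "Red"     # needs significant work
    if score <= 80:
        return "Yellow"  # good but improvable
    return "Green"       # excellent quality
```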
Improvement Suggestions:
AI provides specific, actionable recommendations:
Quality Score: 72% (Good)
Completeness: 80%
✓ User story format present
✓ Acceptance criteria defined
⚠️ Missing: Assumptions section
⚠️ Missing: Technical constraints

Clarity: 65%
⚠️ Ambiguous term: "quickly" (line 3)
   Suggestion: Replace with specific time (e.g., "within 3 seconds")
⚠️ Vague pronoun: "it" (line 5)
   Suggestion: Replace with "the export file"

Testability: 70%
⚠️ Non-measurable criterion: "easy to use"
   Suggestion: Replace with "completable in 3 clicks or less"
Recommended Actions:
1. Add assumptions about data format and size limits
2. Specify performance metric instead of "quickly"
3. Replace "easy to use" with measurable usability metric
Activation:
- Open any requirement
- Click AI Actions → Analyze Quality
- Review quality scores and suggestions
- Click Apply Suggestion to auto-fix issues
- Re-run analysis to verify improvements
Use Case Generation
From Requirement to Scenarios:
AI generates concrete use cases from requirement descriptions:
Input Requirement:
As a Business Analyst
I want to export requirements data as CSV
So that I can analyze trends in Excel
AI-Generated Use Cases:
Use Case 1: Standard Export
- Actor: Business Analyst
- Precondition: User logged in, requirements exist in project
- Main Flow:
- User navigates to Requirements page
- User clicks “Export” button
- User selects “CSV format”
- System generates CSV with all fields
- Browser downloads file “requirements-2025-01-15.csv”
- Postcondition: User has CSV file for analysis
- Success Criteria: CSV file downloads within 5 seconds
Use Case 2: Filtered Export
- Actor: Business Analyst
- Main Flow:
- User applies filters (status: Active, priority: High)
- User clicks “Export Filtered”
- System generates CSV with only filtered requirements
- Success Criteria: CSV contains only filtered subset
Use Case 3: Scheduled Export
- Actor: Business Analyst
- Main Flow:
- User navigates to Settings → Exports
- User creates scheduled export (daily at 8am)
- System emails CSV daily
- Success Criteria: User receives email with CSV attachment every morning
Activation:
- Open requirement
- Click AI Actions → Generate Use Cases
- Review generated scenarios
- Edit for accuracy and organizational context
- Click Save Use Cases to create linked use case records
Acceptance Criteria Suggestions
Smart Criteria Generation:
AI analyzes requirement and suggests comprehensive acceptance criteria:
Functional Criteria:
- Core capability checks
- User interface requirements
- Data validation rules
- Error handling scenarios
Non-Functional Criteria:
- Performance metrics (response time, throughput)
- Scalability requirements
- Reliability and availability targets
- Security and privacy requirements
Edge Cases:
- Boundary conditions
- Error scenarios
- Concurrent usage
- Data integrity checks
Example:
Requirement: OAuth authentication integration
AI-Generated Acceptance Criteria:
## Functional Criteria
- [ ] User can click "Sign in with Google" button on login page
- [ ] System redirects to Google OAuth consent screen
- [ ] After authorization, user redirected back to Catalio dashboard
- [ ] User profile populated with name and email from Google account
- [ ] System creates user record if first-time login
- [ ] System associates existing account if email matches
## Non-Functional Criteria
- [ ] OAuth redirect completes within 3 seconds
- [ ] System handles 1,000 concurrent OAuth flows
- [ ] Failed authentication shows clear error message
- [ ] OAuth tokens stored encrypted (AES-256)
- [ ] Session expires after 24 hours of inactivity
## Security Criteria
- [ ] OAuth state parameter prevents CSRF attacks
- [ ] Access tokens never logged or exposed
- [ ] Token refresh handled automatically
- [ ] Revoked tokens result in logout
## Edge Cases
- [ ] User denies OAuth consent → returns to login with message
- [ ] User already logged in → option to link Google account
- [ ] Email domain not in allowlist → access denied with explanation
- [ ] Google API unavailable → fallback to password login
Activation:
- Open requirement
- Scroll to Acceptance Criteria section
- Click AI Suggest Criteria
- Review suggestions
- Check boxes for criteria to add
- Click Add Selected
Requirement Quality Analysis
Continuous Quality Monitoring:
AI continuously monitors requirement quality across your portfolio:
Project Dashboard:
Requirement Quality Overview:
Total Requirements: 247
Average Quality Score: 78% (Good)
Distribution:
Excellent (81-100%): 89 requirements (36%)
Good (61-80%): 124 requirements (50%)
Needs Work (0-60%): 34 requirements (14%)
Common Issues:
1. Missing assumptions: 67 requirements
2. Ambiguous language: 45 requirements
3. Non-measurable criteria: 34 requirements
4. Missing persona links: 23 requirements
Quality Trends:
- Track quality improvement over time
- Identify teams or categories with quality issues
- Celebrate quality improvements
Quality Gates:
Configure quality thresholds for workflow transitions in Settings → Workflows → Quality Gates:
Example Quality Gate Configuration:
| Setting | Value | Purpose |
|---|---|---|
| Minimum Quality Score | 75% | Overall threshold to approve requirements |
| Required Completeness | ≥ 80% | Ensures all key sections are filled |
| Required Clarity | ≥ 70% | Validates readability and structure |
| Required Testability | ≥ 70% | Confirms acceptance criteria are clear |
How It Works:
- When a user attempts to mark a requirement as “Approved”, the quality gate is checked
- If the quality score is below the threshold, the transition is blocked
- The requirement owner receives a notification with AI-generated improvement suggestions
- Once the requirement meets the quality threshold, it can be approved
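Conceptually, the gate is a per-dimension threshold check. A sketch using the example thresholds from the table above (the dimension names are illustrative):

```python
# Thresholds mirroring the example gate configuration above (illustrative).
GATE = {"overall": 75, "completeness": 80, "clarity": 70, "testability": 70}

def gate_failures(scores: dict) -> list:
    """Return the dimensions that would block an 'Approved' transition."""
    return [dim for dim, minimum in GATE.items()
            if scores.get(dim, 0) < minimum]
```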
Natural Language Search
Semantic Search:
Find requirements by meaning, not just keywords:
Traditional Keyword Search:
- Query: “user authentication”
- Finds: Requirements containing exact phrase “user authentication”
- Misses: Requirements about OAuth, SSO, login, credentials
Semantic Search with AI:
- Query: “user authentication”
- Finds requirements about:
- OAuth2 integration
- Single sign-on (SSO)
- Password reset flows
- Multi-factor authentication (MFA)
- Session management
- Login rate limiting
How It Works:
- AI generates vector embeddings for all requirements (one-time)
- User query converted to vector embedding
- System finds requirements with similar vector representations
- Results ranked by semantic similarity
- Even different terminology matches conceptually related content
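The ranking step boils down to cosine similarity between the query embedding and each stored requirement embedding. A toy sketch with two-dimensional vectors (real embeddings have hundreds or thousands of dimensions):

```python
import math

def cosine(a, b) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def rank_by_similarity(query_vec, docs):
    """Rank (id, vector) pairs by similarity to the query embedding."""
    return sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
```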
Advanced Search Features:
Natural Language Queries:
- “Requirements about exporting data”
- “Features for business analysts”
- “Performance and scalability requirements”
- “Security-related features added in last 3 months”
Conceptual Grouping:
- “Show me requirements similar to REQ-123”
- “Find duplicates or overlapping requirements”
- “Group requirements by conceptual theme”
Activation:
- Navigate to Requirements page
- Toggle Semantic Search (🔍🧠)
- Enter natural language query
- View results ranked by relevance
- Click requirement to view details
Usage Management
Monitoring API Usage
Real-Time Usage Dashboard:
Catalio provides comprehensive usage monitoring:
- Navigate to Settings → Integrations → AI & LLM → Usage
- View metrics:
- Requests Today: API calls by feature
- Tokens Consumed: Input and output tokens
- Cost Estimate: Approximate spend based on provider pricing
- Rate Limit Utilization: Percentage of limits used
- Average Latency: Response time by model
Example Dashboard:
AI Usage - Last 30 Days
OpenAI (GPT-4):
Requests: 1,247
Tokens: 2,456,789 (Input: 1,234,567 | Output: 1,222,222)
Estimated Cost: $147.41
Features:
- Requirement Analysis: 456 requests ($89.21)
- Quality Scoring: 678 requests ($52.14)
- Use Case Generation: 113 requests ($6.06)
Anthropic (Claude 3.5 Sonnet):
Requests: 89
Tokens: 876,543
Estimated Cost: $8.77
Features:
- Long-Context Analysis: 89 requests ($8.77)
OpenAI (Embeddings):
Requests: 12,456
Tokens: 34,567,890
Estimated Cost: $3.46
Features:
- Semantic Search: 12,456 requests ($3.46)
Total Estimated Cost: $159.64
Projected Monthly Cost: $163.42 (based on trend)
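The dashboard's cost estimates are plain token arithmetic: tokens divided by 1,000, multiplied by the per-1K price. A sketch using assumed, illustrative prices (real prices vary by provider, model, and date; always check your provider's pricing page):

```python
# Illustrative per-1K-token prices only -- not current provider pricing.
PRICING = {
    "gpt-4": {"input": 0.03, "output": 0.06},
    "text-embedding-3-small": {"input": 0.00002, "output": 0.0},
}

def estimate_cost(model, input_tokens, output_tokens):
    """Estimated spend in dollars for one model's token usage."""
    price = PRICING[model]
    return ((input_tokens / 1000) * price["input"]
            + (output_tokens / 1000) * price["output"])

cost = estimate_cost("gpt-4", input_tokens=1_234_567, output_tokens=1_222_222)
print(f"${cost:.2f}")  # -> $110.37 at these assumed prices
```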
Export Reports:
- Click Export Usage Report
- Select date range
- Choose format: CSV, JSON, PDF
- Use for billing reconciliation or cost allocation
Setting Budgets
Monthly Budget Limits:
Configure spending limits to control costs:
- Navigate to Settings → Integrations → AI & LLM → Budget
- Click Set Budget Limit
- Configure:
- Monthly Budget: $200 (example)
- Alert Thresholds:
- 50% ($100): Email notification
- 75% ($150): Email + Slack notification
- 90% ($180): Email + Slack + warn users in UI
- Budget Exceeded Action:
- Warn: Continue but notify admins
- Limit: Disable AI features until next month
- Prompt: Ask user to confirm before expensive operations
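The tiered alerting above is simple threshold logic: each fraction of the monthly budget that spend reaches triggers the corresponding notification. A minimal sketch (the function name and return shape are illustrative, not Catalio internals):

```python
def budget_alerts(spend, budget, thresholds=(0.5, 0.75, 0.9)):
    """Return the alert levels crossed by current spend, mirroring the
    50% / 75% / 90% tiers configured above."""
    return [t for t in thresholds if spend >= t * budget]

# $200 monthly budget, $155 spent so far: 50% and 75% crossed, 90% not.
print(budget_alerts(155, 200))  # -> [0.5, 0.75]
```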
Per-Feature Budgets:
Allocate budget across features in Settings → AI & LLM → Budget Management:
Example Monthly Budget Allocation ($200 total):
| Feature | Monthly Budget | Percentage | Priority |
|---|---|---|---|
| Requirement Analysis (GPT-4) | $100 | 50% | High |
| Quality Scoring (GPT-4) | $50 | 25% | High |
| Semantic Search (Embeddings) | $20 | 10% | Critical |
| Summaries & Tags (GPT-3.5) | $20 | 10% | Medium |
| Buffer/Overrun | $10 | 5% | - |
How Budget Enforcement Works:
- Each feature has its own quota
- When a feature exceeds its quota, only that feature is disabled
- Features marked Critical (such as semantic search) keep running past their quota, with a warning sent to admins
- The total monthly budget remains a hard limit across all features
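The enforcement rules above can be sketched as a simple gate: non-critical features stop at their own quota, critical features may overrun theirs, and the total budget stops everything. This is an illustrative model, not Catalio's actual implementation:

```python
def allowed(feature, spend, quota, critical, total_spend, total_budget):
    """Decide whether a feature may make an AI call under the budget
    rules: per-feature quotas are soft for critical features, and the
    overall monthly budget is a hard stop for all features."""
    if total_spend >= total_budget:
        return False      # hard stop across all features
    if spend >= quota:
        return critical   # only critical features may overrun their quota
    return True

# Quality scoring is over its $50 quota -> blocked.
print(allowed("quality_scoring", 52, 50, False, 170, 200))  # False
# Semantic search is over its $20 quota but critical -> still allowed.
print(allowed("semantic_search", 22, 20, True, 170, 200))   # True
```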
Managing Costs
Cost Optimization Strategies:
1. Model Tiering
Use appropriate models for each task to balance cost and quality:
Cost-Optimized Model Selection:
| Feature Type | Recommended Model | Rationale |
|---|---|---|
| Requirement Analysis | GPT-4 | Complex reasoning worth the cost |
| Quality Scoring | Claude 3.5 Sonnet | Nuanced evaluation requires advanced AI |
| Summaries | GPT-3.5-Turbo | Simple task, high frequency |
| Tags | GPT-3.5-Turbo | Pattern recognition, predictable |
| Sentiment | Claude 3 Haiku | Fast and affordable classification |
Expected Savings: This configuration typically saves 60% compared to using GPT-4 for all features.
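In practice, model tiering is just a routing table from feature to model. A sketch of the table above (the model identifier strings are assumptions; use whatever deployment names your provider configuration defines):

```python
# Assumed feature-to-model routing implementing the tiering table above.
MODEL_TIERS = {
    "requirement_analysis": "gpt-4",
    "quality_scoring": "claude-3-5-sonnet",
    "summaries": "gpt-3.5-turbo",
    "tags": "gpt-3.5-turbo",
    "sentiment": "claude-3-haiku",
}

def model_for(feature, default="gpt-3.5-turbo"):
    """Route a feature to its configured model, falling back to a
    cheap default for anything unmapped."""
    return MODEL_TIERS.get(feature, default)

print(model_for("summaries"))              # gpt-3.5-turbo
print(model_for("requirement_analysis"))   # gpt-4
```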
2. Caching and Deduplication
Catalio automatically caches AI responses:
- Identical requests return cached results
- Cache TTL: 24 hours (configurable)
- Estimated savings: 15-30% of API calls
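The caching behavior can be sketched as a hash of the request payload with a TTL check, assuming a simple in-memory store (Catalio's actual cache implementation is not documented here):

```python
import hashlib
import json
import time

CACHE_TTL_SECONDS = 24 * 60 * 60  # 24 hours, matching the default above
_cache = {}

def _key(request):
    """Stable hash of the request payload so identical requests collide."""
    payload = json.dumps(request, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def cached_call(request, call_provider, now=None):
    """Return a cached response if still fresh; otherwise call the provider."""
    now = time.time() if now is None else now
    key = _key(request)
    entry = _cache.get(key)
    if entry and now - entry["at"] < CACHE_TTL_SECONDS:
        return entry["response"]
    response = call_provider(request)
    _cache[key] = {"at": now, "response": response}
    return response

calls = []
def fake_provider(req):
    calls.append(req)
    return {"summary": "..."}

req = {"feature": "summary", "text": "As a user I want..."}
cached_call(req, fake_provider, now=0)
cached_call(req, fake_provider, now=100)    # identical request: cache hit
cached_call(req, fake_provider, now=90000)  # past 24h TTL: provider re-called
print(len(calls))  # 2
```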
3. Batch Processing
Process multiple requirements in one request:
- Single API call for multiple summaries
- Reduced overhead and latency
- Up to 50% cost reduction for bulk operations
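Batching amounts to grouping requirements so each group fits in one request. A minimal chunking helper (batch size 10 is an arbitrary example; the right size depends on token limits):

```python
def batches(items, size):
    """Split a list of requirement IDs into fixed-size batches so each
    batch can be summarized in a single provider request."""
    return [items[i:i + size] for i in range(0, len(items), size)]

ids = [f"REQ-{n}" for n in range(1, 24)]  # 23 requirements
groups = batches(ids, size=10)
print([len(g) for g in groups])  # [10, 10, 3]
```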
4. User Controls
Empower users to manage costs:
- Manual Trigger: AI features on-demand, not automatic
- Bulk Actions: “Analyze Selected” for multiple requirements
- Preview Mode: Show estimated cost before running analysis
5. Scheduled Optimization
Run expensive operations during off-peak:
- Nightly batch: Generate embeddings for new requirements
- Weekly: Re-analyze all requirements for quality trends
- Monthly: Comprehensive portfolio analysis
Example Cost Trajectory:
Month 1 (Unoptimized): $487
- All features use GPT-4
- Auto-analysis on every save
- No caching
Month 2 (Optimized): $186
- Model tiering implemented
- Manual trigger for analysis
- Caching enabled
- Savings: 62%
Month 3 (Fully Optimized): $143
- Batch processing
- Scheduled operations
- User education on costs
- Savings: 71% vs. Month 1
Security Best Practices
API Key Rotation
Regular Key Rotation Schedule:
Recommended Rotation Frequency:
- Production: Every 90 days
- Development/Staging: Every 180 days
- Compromised Keys: Immediately
Rotation Process:
Step 1: Generate New Key
- Log into provider console
- Create new API key with same permissions
- Test new key before replacing old key
Step 2: Update Catalio Configuration
- Navigate to Settings → Integrations → AI & LLM
- Edit provider configuration
- Add new key alongside old key (dual-key period)
- Click Save
Step 3: Monitor
- Verify all features work with new key
- Monitor for errors (keep old key active during verification)
- Wait 24-48 hours to ensure stability
Step 4: Revoke Old Key
- After verification period, log into provider console
- Revoke old API key
- Remove old key from Catalio configuration
Automation:
- Set calendar reminders for rotation schedule
- Use secrets management tools (HashiCorp Vault, Azure Key Vault)
- Implement automated rotation with Terraform or scripts
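The dual-key window in Steps 2–3 can be sketched as a client that tries one key and falls back to the other on an authorization failure. The `send` callable and its `PermissionError` convention are assumptions for illustration, not a real provider SDK:

```python
class DualKeyClient:
    """Keep two API keys active during a rotation window and fall back
    to the second key if the first is rejected."""

    def __init__(self, primary_key, fallback_key, send):
        self.keys = [primary_key, fallback_key]
        self.send = send

    def request(self, payload):
        last_error = None
        for key in self.keys:
            try:
                return self.send(key, payload)
            except PermissionError as err:
                last_error = err  # key rejected; try the remaining key
        raise last_error

# Simulate a provider that has already revoked the old key.
def fake_send(key, payload):
    if key == "sk-old-revoked":
        raise PermissionError("401 Unauthorized")
    return {"status": "ok", "key_used": key}

client = DualKeyClient("sk-old-revoked", "sk-new", fake_send)
print(client.request({"q": "test"}))  # -> {'status': 'ok', 'key_used': 'sk-new'}
```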
Access Control
Principle of Least Privilege:
Restrict AI feature access based on user roles. Configure permissions in Settings → Roles & Permissions → AI Features:
Role-Based AI Feature Permissions:
| Permission | Viewer | Contributor | Admin |
|---|---|---|---|
| Use semantic search | ✓ | ✓ | ✓ |
| View AI summaries | ✓ | ✓ | ✓ |
| Trigger requirement analysis | ✗ | ✓ (own) | ✓ (all) |
| Generate use cases | ✗ | ✓ | ✓ |
| Generate acceptance criteria | ✗ | ✓ | ✓ |
| Configure AI providers | ✗ | ✗ | ✓ |
| Manage API keys | ✗ | ✗ | ✓ |
| Set budgets and limits | ✗ | ✗ | ✓ |
| Enable/disable features | ✗ | ✗ | ✓ |
| View personal usage | ✗ | ✓ | ✓ |
| View org-wide usage | ✗ | ✗ | ✓ |
| View costs | ✗ | ✗ | ✓ |
Catalio Configuration:
- Navigate to Settings → Roles & Permissions
- Edit role → AI Features section
- Toggle permissions per role
- Click Save
API Key Permissions:
Use provider-level permissions to restrict key capabilities:
OpenAI:
- Create separate keys for production vs. development
- Use project-scoped keys to isolate usage
- Enable only required models per key
Azure OpenAI:
- Use Azure RBAC for fine-grained control
- Assign “Cognitive Services OpenAI User” role to service principals
- Configure network restrictions (private endpoints, firewalls)
Data Privacy
PII and Sensitive Data:
Protect personally identifiable information (PII):
AI Accessible Toggle:
Mark requirements containing PII as non-AI-accessible:
- Open requirement with sensitive data
- Toggle AI Accessible to OFF
- AI features disabled for this requirement
- Requirement excluded from semantic search embeddings
Automatic PII Detection:
Catalio can detect potential PII:
- Email addresses
- Phone numbers
- Social security numbers
- Credit card numbers
Enable: Settings → AI & LLM → Privacy → PII Detection
- When detected, prompt user to mark requirement as non-AI-accessible
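Detection of the categories listed above is typically pattern-based. A simplified sketch with naive regular expressions (production PII detection needs stricter patterns and validation, such as Luhn checks for card numbers; this is illustrative only):

```python
import re

# Naive patterns for the PII categories listed above -- illustrative only.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[-\s]?){3}\d{4}\b"),
}

def detect_pii(text):
    """Return the PII categories found in a requirement's text."""
    return sorted(name for name, pattern in PII_PATTERNS.items()
                  if pattern.search(text))

text = "Contact jane.doe@example.com or 555-123-4567 about SSN 123-45-6789."
print(detect_pii(text))  # -> ['email', 'phone', 'ssn']
```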
Data Residency:
For compliance requirements (GDPR, HIPAA):
European Customers (GDPR):
- Recommended: Azure OpenAI with EU region (West Europe)
- Alternative: Anthropic (data processing agreement available)
- Avoid: OpenAI direct API (data processed in US)
Healthcare (HIPAA):
- Required: Azure OpenAI with Business Associate Agreement (BAA)
- Configure Azure OpenAI with PHI safeguards
- Document data handling in compliance reports
Financial Services:
- Use Azure OpenAI in appropriate region
- Enable customer-managed keys (CMK) for encryption
- Audit all AI API calls
Data Minimization:
Send only necessary context to AI:
- Exclude user names and email from AI requests
- Redact sensitive fields before sending
- Use anonymized examples in prompts
Configuration:
- Settings → AI & LLM → Privacy → Data Minimization
- Select fields to exclude: Created By, Updated By, Internal Notes
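Conceptually, data minimization strips the excluded fields from the payload before it leaves Catalio. A sketch with an assumed payload shape (the field names follow the settings above, but the actual request format is not documented here):

```python
# Fields excluded per the Data Minimization settings above (assumed
# snake_case field names; the real payload shape may differ).
EXCLUDED_FIELDS = {"created_by", "updated_by", "internal_notes"}

def minimize(requirement):
    """Drop excluded fields before sending a requirement to the AI provider."""
    return {k: v for k, v in requirement.items() if k not in EXCLUDED_FIELDS}

requirement = {
    "id": "REQ-123",
    "title": "Password reset flow",
    "description": "As a user, I want to reset my password...",
    "created_by": "jane.doe@example.com",
    "internal_notes": "Legal review pending",
}
print(sorted(minimize(requirement)))  # -> ['description', 'id', 'title']
```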
Troubleshooting
Common Errors and Solutions
Error: “API Key Invalid or Expired”
Symptoms:
- AI features return authentication errors
- “401 Unauthorized” in integration logs
Solutions:
- Verify API key in provider console (not revoked)
- Check for extra spaces when copying key
- Regenerate key if expired
- Ensure key has appropriate permissions
- For Azure: Verify that the endpoint URL and deployment names match exactly
Error: “Rate Limit Exceeded”
Symptoms:
- “429 Too Many Requests” errors
- AI features slow or unavailable intermittently
Solutions:
- Check current usage in provider console
- Verify account tier and rate limits
- Implement request queuing in high-usage periods
- Upgrade provider tier if limits consistently hit
- Distribute load: Use multiple API keys or accounts
- Enable caching to reduce duplicate requests
Error: “Model Not Available”
Symptoms:
- “Model not found” errors
- Specific features fail while others work
Solutions:
- Verify model name spelling (GPT-4 vs gpt-4 vs GPT4)
- Check model access in provider console
- For Azure: Confirm deployment created and active
- Ensure account has access to requested model
- Try alternative model as fallback
Error: “Request Timeout”
Symptoms:
- AI features hang, then fail after 30-60 seconds
- Inconsistent availability
Solutions:
- Check network connectivity to provider endpoint
- Verify firewall allows outbound HTTPS
- Test provider API directly with curl/Postman
- Increase timeout in Catalio settings (Settings → AI → Advanced → Timeout)
- Check provider status page for outages
Error: “Budget Limit Exceeded”
Symptoms:
- AI features disabled with budget message
- Users cannot trigger analysis
Solutions:
- Review current budget: Settings → AI & LLM → Budget
- Increase budget limit if appropriate
- Check usage breakdown for unexpected consumption
- Temporarily increase limit for critical needs
- Wait until next monthly budget reset
Error: “Quality Score Calculation Failed”
Symptoms:
- AI analysis starts but returns no results
- Generic error message without details
Solutions:
- Check requirement content isn’t empty
- Verify requirement has minimum required fields
- Try analyzing different requirement (isolate issue)
- Review integration logs for detailed error
- Contact support if persistent
Debugging Integration Issues
Enable Debug Logging:
- Navigate to Settings → AI & LLM → Advanced
- Enable Debug Logging
- Set Log Level: Debug
- Click Save
Viewing Integration Logs:
- Navigate to Settings → AI & LLM → Logs
- Filter by:
- Date range
- Provider (OpenAI, Anthropic, Azure)
- Feature (Analysis, Search, etc.)
- Status (Success, Error)
- Click log entry to view details:
- Request payload (redacted sensitive data)
- Response data
- Latency
- Error messages
- Token usage
Example Log Entry:
{
  "timestamp": "2025-01-15T14:23:45Z",
  "provider": "OpenAI",
  "model": "gpt-4",
  "feature": "requirement_analysis",
  "requirement_id": "REQ-123",
  "request": {
    "prompt": "Analyze requirement quality...",
    "max_tokens": 1000
  },
  "response": {
    "status": "success",
    "latency_ms": 3456,
    "tokens_used": 892,
    "cost_estimate": "$0.0268"
  }
}
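Exported logs with this shape can be post-processed for cost allocation or latency tracking. A sketch that aggregates entries matching the example above (the export being a JSON array is an assumption; adjust for the CSV format if you export that instead):

```python
import json

def summarize_logs(entries):
    """Aggregate token usage and latency from exported integration logs,
    assuming each entry has the 'response' shape shown above."""
    total_tokens = sum(e["response"].get("tokens_used", 0) for e in entries)
    latencies = [e["response"]["latency_ms"] for e in entries
                 if "latency_ms" in e["response"]]
    avg_latency = sum(latencies) / len(latencies) if latencies else 0
    return {"requests": len(entries),
            "total_tokens": total_tokens,
            "avg_latency_ms": avg_latency}

log_export = '''[
  {"provider": "OpenAI", "model": "gpt-4",
   "response": {"status": "success", "latency_ms": 3456, "tokens_used": 892}},
  {"provider": "OpenAI", "model": "gpt-4",
   "response": {"status": "success", "latency_ms": 2544, "tokens_used": 708}}
]'''
print(summarize_logs(json.loads(log_export)))
# -> {'requests': 2, 'total_tokens': 1600, 'avg_latency_ms': 3000.0}
```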
Testing with Sample Requirement:
Isolate issues with controlled test:
- Create new requirement: “Test Requirement for AI Integration”
- Add minimal content:
- As a Test User
- I want to test AI features
- So that I can verify configuration
- Click AI Actions → Analyze Quality
- Note exact error message
- Check logs for detailed error
Provider Status Pages:
Check for service outages:
- OpenAI: status.openai.com
- Anthropic: status.anthropic.com
- Azure: status.azure.com
Contacting Support:
If issues persist, contact Catalio support with:
- Detailed error message
- Screenshots
- Steps to reproduce
- Integration logs (export from Logs page)
- Expected vs. actual behavior
Best Practices Summary
Configuration:
- ✅ Start with one provider, expand as needed
- ✅ Test thoroughly before enabling for all users
- ✅ Set conservative budget limits initially
- ✅ Use model tiering for cost optimization
- ✅ Enable caching to reduce duplicate requests
Security:
- ✅ Rotate API keys every 90 days
- ✅ Use managed identities when possible (Azure)
- ✅ Mark sensitive requirements as non-AI-accessible
- ✅ Review and audit AI API usage monthly
- ✅ Configure appropriate data residency for compliance
Cost Management:
- ✅ Monitor usage daily during first week
- ✅ Use GPT-3.5-Turbo/Claude Haiku for high-volume tasks
- ✅ Reserve GPT-4/Claude Opus for complex analysis
- ✅ Enable budget alerts at 50%, 75%, 90%
- ✅ Review and optimize monthly
User Education:
- ✅ Train team on AI feature costs and best practices
- ✅ Document when to use AI features vs. manual work
- ✅ Share cost dashboard with stakeholders
- ✅ Celebrate quality improvements from AI insights
Quality Assurance:
- ✅ Always review AI suggestions before accepting
- ✅ AI augments human judgment, doesn’t replace it
- ✅ Use quality scores as guidance, not absolute truth
- ✅ Iterate: AI suggestion → human refinement → re-analysis
Next Steps
Now that LLM integration is configured, explore these advanced topics:
- Semantic Search Guide - Master AI-powered requirement discovery
- Quality Metrics - Understand AI quality scoring
- AI Workflows - Automate requirement refinement
- API Documentation - Integrate AI features programmatically
Support Resources
Video Tutorials:
- Setting Up OpenAI (10 min)
- Cost Optimization Strategies (15 min)
- Azure OpenAI Enterprise Setup (20 min)
Support:
- Email: support@catalio.com
- Chat: Available in-app (bottom right)
- Enterprise Support: Dedicated account managers for enterprise plans
Ready to Get Started?
- Choose your LLM provider (OpenAI recommended for most teams)
- Create API account and generate key
- Configure in Catalio: Settings → Integrations → AI & LLM
- Test with sample requirement
- Gradually enable features for your team
- Monitor usage and costs
- Optimize based on actual usage patterns
Last Updated: January 11, 2025 Applies to: Catalio v1.2+, OpenAI API v1, Anthropic API v2, Azure OpenAI API 2024-02-01