
Metrics

Define quantified success criteria with baseline, target, and actual values to make Initiative outcomes measurable and communicable


A Metric in Catalio is a structured success measurement for an Initiative — a specific, quantified indicator that defines what “success” looks like and tracks progress toward that definition. Metrics make the difference between “we modernized the system” and “we reduced quote-to-cash cycle time from 12 days to 3 days, a 75% improvement.”

Why Metrics Matter

Modernization initiatives without quantified success criteria are almost impossible to evaluate. Stakeholders may feel good or bad about the outcome, but cannot assess it objectively. Metrics create the shared language of success:

  • Executives understand ROI
  • Project sponsors can confirm value delivery
  • Teams know when they are done

Key Fields

Field              Purpose
name               Metric name, e.g. “Quote-to-Cash Cycle Time”
target             Target value as free text, e.g. “< 48h”, “99.5%”, “< 5 clicks”
baseline           Starting value (set during Approval), e.g. “5.2 days”
current            Current measured value (tracked during Build/Release/Retro)
unit               Unit of measurement: “hours”, “percent”, “NPS points”, etc.
measurement_plan   How this metric will be measured (methodology, data source)
metric_type        Category of metric (see below)
status             Progress status (see below)
progress           Percentage progress toward target (0-100)
sort_order         Display ordering
initiative_id      Parent Initiative
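The fields above can be sketched as a simple record. The dataclass below is illustrative only — it mirrors the field names in the table but is not Catalio’s actual schema, and the example ID `init-001` is made up:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Metric:
    """Illustrative record mirroring the fields above; not Catalio's real schema."""
    name: str                         # e.g. "Quote-to-Cash Cycle Time"
    target: str                       # free text, e.g. "< 48h"
    initiative_id: str                # parent Initiative
    metric_type: str = "performance"  # performance | quality | adoption | parity
    status: str = "not_started"       # see Status Tracking below
    baseline: Optional[str] = None    # set during Approval
    current: Optional[str] = None     # tracked during Build/Release/Retro
    unit: str = ""                    # "hours", "percent", "NPS points", ...
    measurement_plan: str = ""        # methodology and data source
    progress: int = 0                 # 0-100
    sort_order: int = 0               # display ordering

# A metric as defined during Planning: target committed, nothing measured yet.
m = Metric(name="Quote-to-Cash Cycle Time", target="< 48h",
           initiative_id="init-001", unit="hours")
```

Note that `baseline` and `current` start empty: they are filled in during later phases, as described below.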

Metric Types

Type          What It Measures
performance   Speed, throughput, latency (e.g., “cycle time”, “processing speed”)
quality       Accuracy, error rates, compliance scores (e.g., “invoice accuracy rate”)
adoption      User uptake, feature utilization (e.g., “% users using new portal”)
parity        Feature completeness relative to legacy (e.g., “% feature parity achieved”)

Status Tracking

Status        Meaning
not_started   Metric defined but not yet being measured
on_track      Progress is meeting expectations
at_risk       Progress is falling behind but recoverable
behind        Progress is significantly behind target
achieved      Target has been met or exceeded
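Whether a metric is on_track depends on schedule as well as absolute progress — 52% progress can be healthy at the halfway point of an initiative. One way to suggest a status is to compare progress against elapsed schedule; the helper and thresholds below are illustrative assumptions, not Catalio behavior:

```python
def suggest_status(progress: float, pct_schedule_elapsed: float) -> str:
    """Suggest a status label by comparing progress to elapsed schedule.

    Thresholds are illustrative assumptions: a metric reads as on_track
    while its progress roughly keeps pace with the calendar.
    """
    if progress >= 100:
        return "achieved"
    if pct_schedule_elapsed <= 0:
        return "not_started"
    pace = progress / pct_schedule_elapsed
    if pace >= 0.9:
        return "on_track"
    if pace >= 0.6:
        return "at_risk"
    return "behind"
```

For example, 52% progress at the halfway mark keeps pace (52/50 ≈ 1.04) and reads as on_track, while 25% progress at the same point reads as behind.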

The Three Value Phases

Metrics evolve across the Initiative lifecycle:

Planning phase — Set targets only. You may not have baseline data yet, but you should commit to what success looks like: “Our target is invoice processing cycle time under 48 hours.”

Approval phase — Confirm baselines. Before approving the Initiative for build, document the current state: “Our current invoice processing cycle time is 5.2 days.” This baseline is the “before” in your before/after story.

Build, Release, Retro phases — Track actuals. As the new system is built and deployed, measure the current value and update progress: “Current cycle time: 3.8 days. Progress: 44% of the way from the 5.2-day baseline to the 2-day target.”
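The progress figure can be computed as the fraction of the distance from baseline to target covered so far. A minimal sketch — the helper name `progress_pct` is ours for illustration, not a Catalio API:

```python
def progress_pct(baseline: float, current: float, target: float) -> int:
    """Percent of the distance from baseline to target covered so far,
    clamped to 0-100. Works whether the target is below the baseline
    (e.g. cycle time) or above it (e.g. accuracy, adoption)."""
    span = target - baseline
    if span == 0:
        return 100  # baseline already equals target
    pct = (current - baseline) / span * 100
    return max(0, min(100, round(pct)))

# Cycle time: baseline 5.2 days, current 3.8 days, target 2 days (48h)
cycle = progress_pct(5.2, 3.8, 2.0)    # -> 44
# Accuracy: baseline 74%, current 87%, target 99.5%
accuracy = progress_pct(74, 87, 99.5)  # -> 51
```

Clamping keeps the value in the 0-100 range the progress field expects, even if a metric temporarily regresses below its baseline.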

Writing Effective Metrics

Good metrics are:

Specific — “Reduce AP invoice processing time” is vague. “Reduce average time from invoice receipt to payment approval from 5.2 days to 2 days” is specific.

Measurable — The metric must be something that can actually be measured. Define the data source and measurement method in measurement_plan.

Time-bound — When will the target be achieved? Set a timeframe expectation even if it is approximate.

Attributable to the Initiative — The metric should be something the Initiative can plausibly affect. Don’t set metrics that depend on external factors entirely outside the team’s control.

Examples by Metric Type

Performance:

Plaintext
Name: Invoice Processing Cycle Time
Target: < 48 hours from receipt to approval
Baseline: 5.2 days (measured Feb 2026)
Current: 3.8 days (measured Apr 2026)
Progress: 44%
Status: on_track

Quality:

Plaintext
Name: Invoice Matching Accuracy
Target: 99.5% auto-match rate
Baseline: 74% auto-match rate
Current: 87% auto-match rate
Progress: 51%
Status: on_track

Adoption:

Plaintext
Name: Portal Active User Rate
Target: 85% of AP clerks using new portal within 90 days
Baseline: 0% (legacy system only)
Current: 62% at day 45
Progress: 73%
Status: on_track

Parity:

Plaintext
Name: Feature Parity
Target: 100% of in-scope features implemented
Baseline: 0%
Current: 78%
Progress: 78%
Status: on_track

Best Practices

Define metrics before Approval, not after.

The Approval stage’s purpose is stakeholder sign-off. If you don’t have measurable success criteria before approval, stakeholders are approving a vague mandate, not a defined engagement.

Confirm baselines before Build.

You cannot calculate improvement without a baseline. Prioritize baseline measurement during Planning and Approval. If the baseline data doesn’t exist, making it exist is a pre-Build task.

Update metrics at regular intervals.

Set a cadence: monthly for long-running initiatives, weekly during Release. Stale metrics erode trust in the Initiative’s measurement discipline.

Use measurement_plan to prevent disputes.

When you define a metric, document exactly how it will be measured: which system is the data source, who pulls the data, and what calculation methodology is used. This prevents disputes about whether targets were met.
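To make this concrete, a measurement_plan for the invoice cycle-time example above might read as follows — every system name, owner, and exclusion rule here is hypothetical:

Plaintext
Data source: AP module of the ERP (invoice event log)
Who pulls it: Finance analytics team, first business day of each month
Calculation: average of (approval timestamp - receipt timestamp)
             across all invoices approved in the reporting month
Exclusions: invoices under formal dispute for more than 30 days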

Celebrate achieved metrics.

When a metric reaches achieved, make it visible. This validates the Initiative’s value and builds organizational appetite for future modernization investment.

Relationships at a Glance

Related Concept   Relationship
Initiative        Metrics belong to an Initiative

Pro Tip: If your sponsor asks “when will we be done?”, a Metric with a clear target and a current value gives you a principled answer. Without metrics, “done” is a subjective judgment. With metrics, it is an observable fact.

Support

  • Documentation: Continue reading about Initiatives and Risks
  • Email: support@catalio.ai
  • Community: Share metric design patterns with other Catalio users