Every health plan has built value-based care reporting dashboards. The investment typically runs into millions of dollars — data infrastructure, measurement logic, provider portals, attribution algorithms, benchmark calculations. The dashboards get built, rolled out to contracted providers, and — in the majority of cases — are ignored.
Ask a primary care physician in a value-based arrangement about the plan's performance dashboard and you'll hear some version of: "I check it once a year to see if I earned a bonus. I don't use it during the year." Ask the physician's practice manager and the answer is more specific: "The data is three months old, the attribution is wrong, it doesn't tell me what to do, and it doesn't match what our internal reports show."
The dashboards fail not because plans don't care but because plans build them from the measurement perspective (what does the plan need to measure to settle contracts?) rather than from the workflow perspective (what does the provider need to know to change what they do?). These are different information products. The plans whose dashboards actually get used have usually rebuilt the function around the provider workflow.
Why plan-built dashboards typically fail
The failure patterns are consistent:
- Data latency. Claims-based metrics lag care by 30-90 days; by the time the dashboard is current, the period it shows closed weeks ago. Providers can't act on data about care that has already happened.
- Attribution problems. Members are attributed to providers under rules the providers don't fully understand or trust. When providers see patients on their attribution list they don't recognize, they stop trusting the data overall.
- Metric overload. Dashboards with dozens of metrics, benchmarks, and trend charts overwhelm rather than inform. Providers don't know which metrics to prioritize.
- Lack of actionability. Performance metrics without actionable next steps produce awareness but not change. "Your diabetes care score is 68%" doesn't help; "these 47 patients have specific open gaps" does.
- Disconnection from workflow. Dashboards that live in plan portals providers must log into separately sit outside the workflow. Dashboards that integrate with the EHR workflow get used.
- Inconsistent data across plans. Providers with multiple payer contracts see different metrics, different attributions, and different benchmarks from each plan. The noise makes any individual plan's dashboard less useful.
- Benchmarks that don't resonate. Network average benchmarks feel abstract. Comparisons to specific peer groups that the provider recognizes are more meaningful.
What providers actually need
The information needs of a provider in a value-based arrangement are specific and different from what plan performance dashboards typically deliver.
| What plans typically show | What providers need |
|---|---|
| Quality measure scores | Specific open care gaps with patient-level detail |
| Cost benchmarks | Cost drivers the provider can influence |
| Attribution counts | Patient list with risk stratification |
| Risk adjustment factors | Undocumented conditions affecting risk scores |
| Utilization metrics | Specific high-utilization patients needing intervention |
| Trend charts | Current month status and actions due |
| Network benchmarks | Comparison to clinically similar practices |
| Year-end projections | Month-over-month progress toward thresholds |
The patient-level breakdown
The single most valuable element of provider-facing analytics is patient-level specificity. Instead of "your diabetes care score is 68%," the actionable version is "these 47 patients have not had an HbA1c in the past 12 months, of whom these 12 are in your upcoming schedule this month." This is operationally useful; the former is just a grade.
Building patient-level specificity into dashboards requires:
- Specific gap identification. Not just "gap exists" but what specifically is needed and when.
- Schedule integration. Identifying which patients are already scheduled (so the gap can be addressed at that visit) vs. which need outreach.
- Clinical context. Relevant recent encounters, current medications, specialist involvement.
- Action suggestions. Specific recommended actions — close the gap at the next visit, order the test, request the record, initiate outreach.
- Action tracking. Which actions have been taken, which are pending, which are blocked.
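As a minimal sketch of the schedule-integration step, a dashboard backend can split open gaps into "addressable at an already-booked visit" vs. "needs outreach." The `CareGap` record, field names, and `partition_gaps` helper below are illustrative, not any plan's actual schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CareGap:
    patient_id: str
    measure: str          # e.g. "HbA1c overdue"
    action: str           # specific recommended next step
    due_by: date
    status: str = "open"  # open / pending / blocked / closed

def partition_gaps(gaps, scheduled_patient_ids):
    """Split open gaps into those closable at an upcoming visit
    vs. those requiring proactive outreach."""
    at_visit, needs_outreach = [], []
    for gap in gaps:
        if gap.status != "open":
            continue  # pending/blocked/closed gaps are tracked elsewhere
        target = at_visit if gap.patient_id in scheduled_patient_ids else needs_outreach
        target.append(gap)
    return at_visit, needs_outreach

gaps = [
    CareGap("p1", "HbA1c overdue", "order lab at next visit", date(2024, 6, 30)),
    CareGap("p2", "HbA1c overdue", "initiate outreach", date(2024, 6, 30)),
    CareGap("p3", "eye exam overdue", "request record", date(2024, 6, 30), status="closed"),
]
at_visit, outreach = partition_gaps(gaps, scheduled_patient_ids={"p1"})
```

The same join against the practice schedule drives pre-visit planning: the subset already on the calendar needs no outreach cost at all.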
The latency problem, addressed
The claims-based latency problem doesn't have a single solution, but can be substantially mitigated:
- Pharmacy data (typically near-real-time) for medication adherence and pharmacy-driven measures
- Clinical data integration (FHIR-based) for measures that depend on clinical documentation
- Provider-submitted data for self-reported activities
- Pre-adjudication claims feeds for provisional utilization visibility
- Authorization data for upcoming care that will affect metrics
- Member engagement data for outreach that's already occurred
The mature dashboard combines data sources with clear freshness labeling — the provider sees what's current vs. what's lagged, rather than one blended average.
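A sketch of that freshness labeling, with hypothetical feed names and "as of" dates standing in for real source timestamps:

```python
from datetime import date

# Hypothetical feeds: each source carries its own "as of" date.
feeds = {
    "claims (adjudicated)": date(2024, 3, 15),
    "pharmacy":             date(2024, 6, 9),
    "clinical (FHIR)":      date(2024, 6, 8),
    "authorizations":       date(2024, 6, 10),
}

def freshness_labels(feeds, today, current_within_days=14):
    """Label each feed current or lagged, rather than blending
    all sources into one undated average."""
    labels = {}
    for name, as_of in feeds.items():
        lag = (today - as_of).days
        labels[name] = (
            f"current (as of {as_of})"
            if lag <= current_within_days
            else f"lagged {lag} days (as of {as_of})"
        )
    return labels

labels = freshness_labels(feeds, today=date(2024, 6, 10))
```

The 14-day "current" window is an arbitrary illustration; the point is that the lag is computed and shown per source.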
The attribution complexity
Attribution — which patients count as the provider's for value-based measurement — is both critically important and commonly opaque. Providers see patients in their attribution list they don't recognize, and don't see patients they do consider theirs, with no visible logic explaining why.
The practical improvements:
- Transparent attribution rules. Document the specific rules that determine attribution — which visits count, what time windows apply, how primary care designation is determined.
- Visible reasoning per patient. For each attributed patient, show why — what encounters created the attribution, what the last determining visit was.
- Dispute mechanisms. Providers should be able to dispute attributions they disagree with, with review and adjustment processes.
- Prospective attribution. Where possible, attribute prospectively based on patient designation rather than retrospectively based on utilization, to give providers forward visibility.
- Attribution stability. Attribution lists that change significantly between dashboard refreshes are unusable. Stability over the performance period matters.
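The "visible reasoning per patient" item can be sketched as an attribution function that returns its explanation alongside its answer. The rule here (most recent qualifying primary-care visit within a 24-month lookback) is one common pattern, used purely as an assumed example:

```python
from datetime import date

# Assumed rule: attribute to the provider of the most recent
# qualifying primary-care visit within the lookback window.
QUALIFYING = {"office_visit", "annual_wellness"}

def attribute(visits, as_of, lookback_days=730):
    """Return (provider_id, reason) so every determination is explainable,
    or (None, reason) when no visit qualifies."""
    eligible = [
        v for v in visits
        if v["type"] in QUALIFYING and (as_of - v["date"]).days <= lookback_days
    ]
    if not eligible:
        return None, "no qualifying primary-care visit in lookback window"
    last = max(eligible, key=lambda v: v["date"])
    reason = (
        f"most recent qualifying visit ({last['type']}) on {last['date']} "
        f"with provider {last['provider_id']}"
    )
    return last["provider_id"], reason

visits = [
    {"provider_id": "drA", "type": "office_visit", "date": date(2023, 11, 2)},
    {"provider_id": "drB", "type": "office_visit", "date": date(2024, 2, 14)},
    {"provider_id": "drC", "type": "er_visit",     "date": date(2024, 5, 1)},
]
provider, reason = attribute(visits, as_of=date(2024, 6, 1))
```

Because the ER visit is not a qualifying encounter type, it never drives attribution, and the returned reason string is exactly what a dispute-review screen would display.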
The EHR integration path
The dashboards that get used consistently are the ones that appear in the provider's existing workflow rather than in a separate plan portal. EHR integration for care gap flagging, risk adjustment prompts, and patient-level alerts moves value-based information into the moment of care.
- Care gap flags at point of care. When the patient is in the exam room, the EHR shows the open gaps the provider can address during the visit.
- Risk adjustment prompts. Documentation opportunities — conditions the patient has but that weren't documented in the current year — surfaced at the encounter.
- Pre-visit planning. The morning schedule identifies patients with open gaps, upcoming screenings, or specific intervention opportunities for the day's visits.
- Post-visit follow-up. After the visit, specific follow-up actions — schedule the specialist referral, arrange the lab, ensure the medication is filled — get tracked.
- Panel management. Outside of individual visits, the practice runs panel management workflows to identify and outreach to patients with open needs.
The contract alignment question
Dashboards exist in the context of specific value-based contracts, and the dashboard has to align with the contract structure. Misalignment between dashboard metrics and contract terms creates significant friction.
- Metric definitions must match. If the dashboard shows "diabetes care composite" and the contract pays on a specific measure, the dashboard metric has to reflect the contract metric precisely.
- Threshold visibility. Thresholds that determine payment should be visible — not just the metric but the specific number that triggers performance payment.
- Year-to-date tracking. For annual measures, showing progress against the annual threshold — what's needed in the remaining time — is more useful than current-period scores.
- Multiple contract types. A provider with multiple contracts (MSSP, MA, commercial) sees different metrics for different populations. The dashboard should support this segmentation.
- Projected settlement. Providers want to know what they're going to earn, not just how they're performing. Settlement projections based on current performance are valuable.
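The year-to-date and settlement items reduce to simple arithmetic that is worth showing explicitly. This sketch assumes an annual rate measure with a known year-end denominator and an all-or-nothing bonus at the threshold, which is a deliberate simplification of real contract terms:

```python
import math

def closures_needed(numerator, denominator, threshold):
    """For an annual rate measure with a fixed year-end denominator,
    how many more gap closures reach the payment threshold."""
    return max(0, math.ceil(threshold * denominator) - numerator)

def projected_settlement(rate, threshold, bonus_if_met):
    """Toy projection: all-or-nothing bonus at the contract threshold.
    Real contracts often use tiered or shared-savings formulas instead."""
    return bonus_if_met if rate >= threshold else 0.0
```

So a practice at 68/100 against a 75% threshold sees "7 more closures needed," which is more actionable than a current-period score of 68%.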
The peer comparison that matters
Benchmarks matter, but network-wide averages are often unhelpful. A pediatric practice benchmarked against all primary care practices sees averages that don't reflect its population. A large urban practice benchmarked against rural practices sees comparisons that feel unfair.
The benchmarks that resonate are peer-group specific — similar specialty, similar panel size, similar population mix. These require more sophisticated analytics but produce comparisons that providers accept as meaningful.
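A minimal version of peer-group benchmarking groups practices by specialty and a panel-size band before averaging. The band cutoffs and practice records below are invented for illustration; real peer grouping would also fold in population mix:

```python
from collections import defaultdict
from statistics import mean

def panel_band(size):
    # Illustrative cutoffs, not an industry standard.
    return "small" if size < 1000 else "medium" if size < 5000 else "large"

def peer_benchmarks(practices):
    """Average a score within (specialty, panel-size band) peer groups
    instead of across the whole network."""
    groups = defaultdict(list)
    for p in practices:
        groups[(p["specialty"], panel_band(p["panel_size"]))].append(p["score"])
    return {key: round(mean(scores), 3) for key, scores in groups.items()}

practices = [
    {"specialty": "pediatrics", "panel_size": 800,  "score": 0.81},
    {"specialty": "pediatrics", "panel_size": 900,  "score": 0.77},
    {"specialty": "family_med", "panel_size": 6000, "score": 0.62},
]
benchmarks = peer_benchmarks(practices)
```

The pediatric practice is now compared to other small pediatric panels rather than to the network average that includes large family-medicine practices.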
The two-sided feedback loop
The best provider-facing analytics aren't one-way — plan shows provider how they're performing. They're two-way — provider shows plan what's happening at the patient level that the plan can't see from claims. Providers know which patients are challenging to engage, which are non-compliant, which have barriers the plan can help address. Capturing this back into the plan's operational systems makes the whole value-based function work better.
Value-based care is only as effective as the operational systems that support it. Sophisticated contract structures paired with unsophisticated provider-facing analytics disappoint both sides. The plans building analytics that providers actually use — patient-level, actionable, in-workflow, contract-aligned — are getting the engagement that value-based economics require. For leadership teams assessing where value-based analytics, provider engagement, and performance reporting fit within the broader health plan operating model, the Health Insurance Capability Model maps the capabilities — provider analytics, attribution management, clinical data integration, EHR-based workflows — that determine whether value-based programs deliver the behavior change the contracts assume.