Health Insurance — Article 5 of 12

Member 360: Unifying Clinical and Administrative Data Without the Data Lake Fantasy


Every health plan has attempted a Member 360 initiative. The pitch is always the same — a single unified view of the member that combines eligibility, claims, clinical data, care management notes, call center interactions, digital engagement, and external data sources. Once the unified view exists, the thinking goes, every downstream function gets better. Care management identifies risk earlier. Member services resolves faster. Analytics produces sharper insights. Product design gets smarter.

The reality in most plans: a multi-year, multi-million-dollar program that produced an impressive-looking data lake, a dashboard no one uses, and operational teams still working from the same transactional systems they started with. Some combination of data quality, latency, governance, and user adoption problems meant the unified view wasn't actually used for the decisions it was supposed to improve.

The plans that have gotten this right have abandoned the universal-view ambition in favor of use-case-specific data unification. They're not building one Member 360. They're building several, each instrumented for specific decisions, and maintaining them as production systems rather than analytics exhibits.

The Member 360 that works is the one a care manager actually opens before a call. Everything else is data lake theater.

Why the generic Member 360 fails

The failure mode is consistent across the plans that have tried this:

  • Latency kills utility. Claims data typically lags service by 30-90 days. Clinical data arrives when providers send it, which is inconsistent. Call center data is near-real-time. Mixing these timescales without specific design decisions produces a view where the freshness of any given element is unknown.
  • Data quality compounds. Each source system has its own data quality issues. Aggregating them without remediation multiplies rather than resolves problems.
  • Governance becomes a blocker. Once data is unified, access controls, HIPAA compliance, and minimum necessary principles have to be reimplemented across the unified view. This is often underestimated.
  • The use cases weren't specified. Without specific decisions the view needs to support, the data model becomes a generic "everything we have" structure that's optimized for nothing.
  • Operational systems still own the workflow. The care manager works in the care management system, not the data lake. The unified view is something to look at, not something that drives decisions.

The use-case-first approach

The alternative is to identify specific decisions and instrument data unification for those decisions. This changes the engineering problem dramatically.

| Dimension | Generic Member 360 | Use-case-specific unified view |
| --- | --- | --- |
| Scope | All data about all members | Specific data elements for a specific decision |
| Latency target | Undefined, inherited from sources | Defined by decision urgency |
| Data quality work | Generic, often incomplete | Targeted to fields that matter |
| Users | Analysts, variable adoption | Operational users, measurable adoption |
| Governance | Complex, often over-permissive | Scoped to the use case |
| Success metric | Data coverage percentage | Decision quality improvement |
| Maintenance model | Periodic, analytics-team-owned | Continuous, product-team-owned |

The use cases that justify the investment

Not every decision needs a unified view. The ones that typically justify the engineering investment:

  • High-cost member identification. Identifying members on trajectories toward high-cost events (admissions, ER visits, medication adherence failures) in time to intervene. Requires claims, clinical data, pharmacy data, and increasingly social determinants.
  • Care gap closure. Identifying specific clinical gaps for specific members that can be closed through outreach. Requires claims, clinical data, and HEDIS specifications.
  • Member services resolution. Giving call center reps the context to resolve issues without transfers. Requires eligibility, claims, recent interactions, and authorization status.
  • Care management caseload. Focusing care manager caseloads on the members where intervention is likely to matter. Requires risk scoring, recent clinical events, engagement history, and attribution to programs.
  • Provider performance. Assessing provider performance on specific populations. Requires attribution, clinical outcomes, cost data, and member experience data.
  • Network adequacy analytics. Identifying access gaps that affect members. Requires utilization patterns, provider directories, and member geography.
  • Plan selection guidance. Helping members select plans at open enrollment. Requires benefits, utilization history, cost projections, and provider preferences.
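The shift from "one view of everything" to decision-scoped views can be made concrete as a per-use-case specification: the decision, the latency it tolerates, the sources it needs, and who owns it. A minimal sketch in Python — the class, field names, and example values are illustrative, not a standard:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UnifiedViewSpec:
    """Specification for one use-case-specific unified view (illustrative)."""
    decision: str                 # the operational decision the view supports
    max_latency_hours: float      # freshness the decision can tolerate
    sources: tuple               # data domains the view draws on
    owner: str                    # operational team accountable for the view

# Hypothetical specs for two of the use cases above.
MEMBER_SERVICES = UnifiedViewSpec(
    decision="resolve member call without transfer",
    max_latency_hours=0.25,       # near-real-time
    sources=("eligibility", "claims", "interactions", "authorizations"),
    owner="member services product team",
)

CARE_GAPS = UnifiedViewSpec(
    decision="select members for gap-closure outreach",
    max_latency_hours=168,        # weekly is acceptable
    sources=("claims", "clinical", "hedis_specs"),
    owner="quality product team",
)
```

Writing the spec down forces the scoping conversation the generic Member 360 skips: if no one can name the decision or the latency target, the view probably isn't worth building.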

Data latency as a first-class design decision

Each use case has a latency requirement. Member services needs real-time or near-real-time. Care management needs daily at worst. Care gap closure can tolerate weekly. Provider performance can be monthly.

Designing for the latency requirement — rather than inheriting whatever latency the source systems naturally provide — is where the engineering gets specific.

  • Real-time sources. Eligibility checks, recent claims submissions, call center interactions. These need event-driven ingestion or API access at point of use.
  • Daily sources. Paid claims, pharmacy fills, authorization decisions. Typically batch-loaded overnight and refreshed each morning.
  • Weekly sources. Clinical data from provider EHRs via interoperability feeds. Arrives on varying schedules depending on provider connectivity.
  • Monthly sources. Risk adjustment scoring, HEDIS calculations, provider attribution updates. Computed periodically and attached to the member record.
  • Derived elements. Predictive scores, care gap flags, risk trajectories. Recomputed on a cadence matched to how often inputs change.

Mixing these in a single view requires explicit decisions about which element a given user is seeing, how fresh it is, and what to display when the underlying source hasn't provided an update. These are product decisions, not engineering decisions, and they're where generic Member 360 projects usually fall down.
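One way to make those freshness decisions explicit is to attach a per-source SLA and classify each element against it before display, so users always know how current what they're seeing is. A toy sketch, assuming hypothetical source names and SLA values:

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Hypothetical freshness SLAs per source tier, in hours (illustrative values).
FRESHNESS_SLA = {
    "eligibility": 1,       # real-time tier
    "paid_claims": 24,      # daily batch
    "clinical": 24 * 7,     # weekly interoperability feeds
    "risk_score": 24 * 31,  # monthly recompute
}

def freshness_label(source: str, as_of: datetime,
                    now: Optional[datetime] = None) -> str:
    """Classify an element as fresh or stale against its tier's SLA,
    so the UI can surface staleness instead of hiding it."""
    now = now or datetime.now(timezone.utc)
    sla = timedelta(hours=FRESHNESS_SLA[source])
    return "fresh" if now - as_of <= sla else "stale"
```

The point is the product behavior, not the function: a stale clinical element rendered identically to a fresh eligibility check is exactly the ambiguity that erodes user trust in generic unified views.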

The identity problem

Member 360 assumes the member can be identified consistently across sources. In practice, this is often the hardest part.

Member identity challenges that derail unified views:

  • The same person appears multiple times with different member IDs (prior coverage periods, dependent vs. subscriber, name changes)
  • Clinical data arrives identified by provider identifiers that don't map cleanly to member IDs
  • Household members get confused when data is identified at the household level
  • External data sources (consumer data, social determinants) use different identity schemes entirely
  • Privacy rules constrain how identity resolution can be performed

Master Data Management (MDM) capabilities matter enormously. The plans that invested in MDM before attempting Member 360 got dramatically better outcomes than those that tried to resolve identity within the unified view.
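The core of identity resolution can be sketched as a match cascade: strong keys merge automatically, weaker agreement routes to human review, and everything else stays separate. A toy deterministic version — real MDM products use probabilistic scoring, and the field names here are illustrative:

```python
def match_identity(rec_a: dict, rec_b: dict) -> str:
    """Toy deterministic match cascade. Strong-key agreement auto-merges;
    weaker agreement is routed to a steward for review. Illustrative only."""
    # Strong key: exact match on a unique identifier, when both records have one.
    if rec_a.get("ssn") and rec_a.get("ssn") == rec_b.get("ssn"):
        return "auto-merge"
    # Weaker composite key: likely the same person, but confirm manually.
    composite = ("last_name", "dob", "zip")
    if all(rec_a.get(k) and rec_a.get(k) == rec_b.get(k) for k in composite):
        return "review"
    return "no-match"

a = {"ssn": None, "last_name": "Diaz", "dob": "1980-04-02", "zip": "60601"}
b = {"ssn": None, "last_name": "Diaz", "dob": "1980-04-02", "zip": "60601"}
```

The review queue is the operationally important part: plans that auto-merge on weak keys create cross-member data leakage, which is far worse than a duplicate record.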

Clinical data integration specifically

The clinical data piece is where most Member 360 efforts stumble hardest. Claims data, while imperfect, has consistent structure and arrives on predictable schedules. Clinical data from provider EHRs is structurally heterogeneous, arrives inconsistently, and contains free text that requires processing to extract structured elements.

  • FHIR-based integration. The regulatory push toward FHIR APIs is finally making structured clinical data extraction viable for the plans that build the integration capability.
  • Clinical vocabulary mapping. Even structured clinical data uses different vocabularies (SNOMED, LOINC, RxNorm, ICD-10). Mapping between these is work, and getting it wrong makes the clinical view unreliable.
  • Free text processing. The rich clinical information sits in notes, not in structured fields. NLP on clinical notes is now viable but requires significant tuning per specialty.
  • Provider variability. Different providers document differently. Some produce complete structured data; others produce minimal structured data and everything else in notes.
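Vocabulary mapping, in its simplest form, is a crosswalk from (vocabulary, code) pairs to a canonical concept, with unmapped codes surfaced rather than guessed. A minimal sketch — the two diabetes codes are real ICD-10-CM and SNOMED CT identifiers, but the crosswalk itself is illustrative; production mapping relies on licensed terminology services:

```python
# Toy crosswalk: (vocabulary, code) -> canonical concept key.
# Entries are illustrative; real mappings come from terminology services.
CROSSWALK = {
    ("ICD10CM", "E11.9"): "type_2_diabetes",
    ("SNOMED", "44054006"): "type_2_diabetes",
}

def canonical_concept(vocab: str, code: str):
    """Map a source-coded clinical fact to a canonical concept.
    Returns None for unmapped codes so gaps are visible, not silently wrong."""
    return CROSSWALK.get((vocab, code))
```

Getting the "return None" behavior right matters more than coverage: a wrong mapping silently corrupts the clinical view, while a visible gap can be triaged.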

The governance dimension

A unified view of clinical and administrative data creates privacy and access control challenges that don't exist in source systems. In source systems, access control is implemented per system. In a unified view, it has to be reimplemented consistently, and the minimum necessary principle has to be enforced.

The specific issues that come up:

  • Behavioral health information often has elevated protection (42 CFR Part 2) that must persist in the unified view
  • HIV, genetic, and reproductive health information may have state-specific protections
  • Substance use treatment information has specific consent requirements
  • Minors' information, particularly adolescent-confidential information, has complex rules
  • Legal holds and litigation may require specific data preservation and access rules
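Enforcing minimum necessary in the unified view usually means tagging each element with a sensitivity category and filtering by role entitlement, with withhold-by-default for anything unmapped. A toy sketch, assuming hypothetical roles and category names that mirror the protections listed above:

```python
# Hypothetical role entitlements over sensitivity categories (illustrative).
ROLE_ENTITLEMENTS = {
    "care_manager": {"general", "behavioral_health"},
    "member_services": {"general"},
}

def minimum_necessary(elements: list, role: str) -> list:
    """Filter unified-view elements to the categories a role may see.
    Unknown roles and untagged categories are withheld by default."""
    allowed = ROLE_ENTITLEMENTS.get(role, set())
    return [e for e in elements if e["sensitivity"] in allowed]

view = [
    {"field": "last_er_visit", "sensitivity": "general"},
    {"field": "sud_treatment_episode", "sensitivity": "substance_use"},
]
```

Note that even the care manager role above cannot see the substance use element: 42 CFR Part 2 consent handling needs its own gate, not a blanket entitlement.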

The operational ownership model

Member 360 implementations often fail because they're owned by data or analytics teams that don't have a stake in how operational users work with the data. The unified view gets built, handed to operations, and then deteriorates as source systems change and no one owns maintaining the integration.

The successful pattern: treat unified views as production products with product managers, engineering teams, SLAs, and user-facing quality metrics. Care management's unified view is owned by care management's product team, with engineering support for the underlying data platform. When the care management system changes, the unified view is updated as part of that change, not as a separate project.

Member 360 as a universal ambition is mostly a distraction — the plans that have invested in it have generally gotten poor returns. Member 360 as a set of specific, use-case-driven unified views that serve operational decisions is producing real value: faster member service resolution, sharper care management targeting, better care gap closure. For leadership teams assessing where member data strategy, clinical integration, and operational analytics capabilities fit within the broader health plan operating model, the Data Architecture Capability Model maps the data foundation capabilities — MDM, integration, governance, latency management — that determine whether unified data views become operational assets or expensive data lake exhibits.

Frequently Asked Questions

Should we build Member 360 on a data lake or a data warehouse architecture?

Neither is the right starting point. Start with the use case and let the use case inform the architecture. Use cases with real-time requirements need event-driven architectures, not data lakes. Use cases with analytical requirements need warehouse-style modeling. Generic data lakes that are supposed to support every use case typically support none of them well. A modern cloud data platform can accommodate multiple architectural patterns — the question is what pattern fits the specific use case.

How long should a Member 360 initiative take before producing value?

If a Member 360 program has gone 18 months without producing operational value that users actively rely on, it's failing. Use-case-specific unified views should produce usable value in 6-9 months per use case, with the first 2-3 establishing the identity resolution and data foundation that subsequent use cases can build on. Programs that take 3+ years to produce value almost always fail to produce meaningful value at all.

What's the role of Master Data Management (MDM) in Member 360?

MDM is foundational and usually underestimated. Without solid member identity resolution, every unified view has accuracy problems that compound across use cases. Investing in MDM before attempting operational unified views produces dramatically better outcomes than trying to resolve identity within each unified view. The plans with mature MDM can build unified views on a trustworthy identity foundation; the plans without it keep rebuilding identity resolution for each new use case.