Health Insurance — Article 6 of 12

Predictive Care Gap Closure: Beyond the Annual Outreach Campaign


Care gap closure is one of the health plan functions where the gap between what's operationally typical and what's technically possible is widest. Most plans run annual outreach campaigns — identify members with open HEDIS gaps, send letters, make calls, hope for response. Response rates are low and have been declining. The same members who didn't respond last year don't respond this year. The plans run the campaign anyway because the regulatory and quality requirements demand it.

The fundamental problem is that care gap closure has been designed around the gap — the member has a mammography gap, send a mammography reminder — rather than around the member. A member has a life, a schedule, preferences, constraints, a relationship with providers, specific barriers to getting care, and a history of how they have or haven't engaged with the plan. None of that information typically makes it into a mammography reminder letter.

AI is making it viable to shift this. Not by generating more personalized letters — that's cosmetic — but by rethinking what the engagement is, when it happens, through what channel, and what it's actually asking the member to do. Some plans are getting to care gap closure rates that would have been considered impossible five years ago. Most are not, because the technical capability is necessary but not sufficient.

A care gap reminder that arrives three weeks before the member's already-scheduled appointment isn't helpful. It's noise that teaches the member to ignore future communications.

What the predictive models actually predict

The phrase "predictive care gap closure" gets used loosely. The models that matter predict specific things:

  • Likelihood of gap closure without intervention. Some gaps close themselves — the member already has an appointment, or routinely gets care, or is in active treatment. Predicting this avoids wasting outreach on members who'll close the gap anyway.
  • Responsiveness to specific channels. Different members respond to different channels. Predicting which channel will work for a specific member improves yield dramatically over blanket multichannel approaches.
  • Timing sensitivity. For some gaps, timing matters — sending a flu shot reminder in April is pointless. For others, the window is longer. Predicting the right timing window improves conversion.
  • Barrier type. Members who don't close gaps have reasons — cost, transportation, scheduling, language, trust. Predicting the specific barrier pattern for a member informs what intervention to offer.
  • Provider relationship state. Members with active PCP relationships respond to different interventions than members without. Predicting this determines whether outreach should route through the provider or around the provider.
  • Risk of adverse outcome if gap persists. Not all gaps carry the same clinical risk. A diabetes management gap in a high-A1C member has different urgency than in a controlled member.
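These predictions combine into an intervention decision. As a minimal sketch — with hypothetical score names and thresholds, not any particular vendor's model — the logic might look like:

```python
from dataclasses import dataclass

@dataclass
class GapScores:
    """Hypothetical model outputs for one member/gap pair (all 0-1)."""
    self_closure_prob: float   # likelihood the gap closes without intervention
    adverse_risk: float        # clinical risk if the gap persists
    channel_response: dict     # predicted response rate per channel

def plan_intervention(scores: GapScores,
                      self_close_threshold: float = 0.7) -> dict:
    """Decide whether to intervene and through which channel."""
    if scores.self_closure_prob >= self_close_threshold:
        # Member will likely close the gap anyway; save the outreach budget.
        return {"intervene": False, "reason": "likely self-closure"}
    # Pick the channel with the highest predicted response for this member.
    channel = max(scores.channel_response, key=scores.channel_response.get)
    # Priority combines closure need with clinical urgency.
    priority = (1 - scores.self_closure_prob) * scores.adverse_risk
    return {"intervene": True, "channel": channel,
            "priority": round(priority, 3)}

member = GapScores(self_closure_prob=0.2, adverse_risk=0.8,
                   channel_response={"sms": 0.22, "mail": 0.03, "provider": 0.55})
decision = plan_intervention(member)
```

The key design point is the first branch: suppressing outreach to likely self-closers is where much of the predicted yield comes from.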

The channel mix reality

The industry has spent a decade learning that members don't respond to mail reminders the way they did 15 years ago. Response rates have declined consistently. Adding email, SMS, and phone to the mix improves response modestly but doesn't solve the fundamental problem — the member doesn't recognize the communication as relevant to them.

| Channel | Typical response rate | Best for | Fails when |
| --- | --- | --- | --- |
| Mail | 2-5% | Older members, formal reminders | Member already ignores mail |
| Phone (live) | 8-15% | Complex gaps, relationship building | Call avoidance, unknown number |
| IVR/automated | 3-8% | Simple confirmations, appointment scheduling | Member perceives as spam |
| Email | 4-10% | Members who engage digitally | Spam filters, low salience |
| SMS | 15-25% | Simple actions, appointment reminders | Overused, desensitization |
| App push | 10-20% | Active app users | Members who don't open the app |
| In-office (provider) | 40-60% | Members with active PCP relationships | Member doesn't have active relationship |
| Community health worker | 30-50% | Complex barriers, high-trust scenarios | Scale and cost |
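Response rate alone doesn't determine channel choice — cost per contact matters too. A rough efficiency ranking, using midpoint response rates from the table and per-contact costs that are purely illustrative assumptions:

```python
# Midpoint response rates from the table above; per-contact costs are
# illustrative assumptions, not figures from the article.
CHANNELS = {
    "mail":       {"response": 0.035, "cost": 0.75},
    "sms":        {"response": 0.20,  "cost": 0.05},
    "phone_live": {"response": 0.115, "cost": 8.00},
    "chw":        {"response": 0.40,  "cost": 60.00},
}

def closures_per_dollar(channel: dict) -> float:
    """Expected gap closures per dollar of outreach spend."""
    return channel["response"] / channel["cost"]

ranked = sorted(CHANNELS, key=lambda c: closures_per_dollar(CHANNELS[c]),
                reverse=True)
```

Under these assumptions SMS dominates on efficiency while community health workers dominate on raw conversion — which is exactly why the high-cost channels get reserved for high-barrier, high-risk members rather than applied broadly.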

The provider-mediated approach

The response rates above show a clear pattern — interventions routed through providers convert dramatically better than interventions sent directly to members. This isn't news, but it's operationally complicated to do at scale.

Provider-mediated care gap closure requires:

  • Accurate care gap information at the provider. The provider needs to see the gap, ideally in their EHR workflow, not in a separate plan portal they don't check.
  • Clinical validation of the gap. Gaps identified from claims data are sometimes wrong — the care was provided but coded differently, or the gap doesn't apply to this patient. Providers need to be able to dispute gaps easily.
  • Documentation pathways. If the care was provided, the provider needs a way to close the gap with appropriate documentation.
  • Pay-for-performance alignment. The economics of closing gaps need to work for the provider, either through direct payment or through value-based contract performance.
  • Panel management support. Large providers manage panels of patients and need tools to identify and work through gaps systematically, not one patient at a time.

The barriers problem

The members who don't close care gaps aren't failing to comply. They have reasons. Understanding the barrier pattern changes the intervention design.

Barrier patterns that appear in analysis of non-responsive members:

  • Cost concerns — copays, deductibles, or expected out-of-pocket costs
  • Transportation — no reliable way to get to the appointment
  • Scheduling — work hours conflict with available appointments
  • Language or literacy — communication wasn't understood
  • Trust — prior negative experience with healthcare
  • Competing demands — caregiving, multiple jobs, chronic conditions
  • Anxiety or fear — specific to the recommended service
  • Provider access — no available PCP, long wait times

Interventions that address the actual barrier outperform interventions that just repeat the reminder at higher volume.
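In practice this becomes a mapping from predicted barrier to intervention offer. A sketch, where the barrier labels follow the list above and the intervention descriptions are assumptions about what a plan might offer:

```python
# Illustrative barrier -> intervention mapping; the offers are assumptions
# about plan capabilities, not a prescribed program design.
BARRIER_INTERVENTIONS = {
    "cost": "zero-copay confirmation and cost estimate in the outreach",
    "transportation": "ride benefit with a booking link",
    "scheduling": "evening/weekend slot offer with direct scheduling",
    "language": "outreach in preferred language via interpreter line",
    "trust": "community health worker contact",
    "anxiety": "provider-initiated conversation rather than a plan letter",
    "provider_access": "PCP assignment and next-available routing",
}

def choose_intervention(predicted_barrier: str) -> str:
    # Fall back to a plain reminder only when no barrier is predicted.
    return BARRIER_INTERVENTIONS.get(predicted_barrier, "standard reminder")
```

The fallback is deliberate: the standard reminder becomes the exception case, used only when the model has no barrier signal, rather than the default for everyone.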

The HEDIS measure dimension

HEDIS specifications drive significant effort, and the specifications are specific. A gap closure program has to understand not just "the member had a mammogram" but "the member had a qualifying mammogram within the measurement window documented in a way that's captured in the data." This sounds pedantic but drives real work.

  • Supplemental data collection. Services provided but not captured in claims (lab results from non-participating labs, clinical data from non-participating providers) have to be collected and loaded in ways HEDIS accepts.
  • Chart review operations. For hybrid measures, chart review at year-end captures gap closures that claims don't show. This is significant annual operational cost.
  • Digital quality measures transition. The industry is transitioning from traditional HEDIS to digital quality measures, which changes data requirements. Plans that invest in FHIR-based clinical data integration get ahead of this.
  • Measure year timing. Closure has to happen within measurement windows. A gap closed after the window closes doesn't count for the measurement year. This creates timing pressure in Q4 that drives much of the inefficiency in care gap programs.
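The measure-year timing rule is simple to state but worth making concrete, since it drives the Q4 scramble. A sketch using a calendar-year window — real HEDIS specifications define windows per measure, so this is a simplifying assumption:

```python
from datetime import date

def counts_for_measure_year(closure_date: date, measure_year: int) -> bool:
    """True if a closure lands inside the measurement window.
    Assumes a calendar-year window; actual HEDIS specs vary by measure."""
    window_start = date(measure_year, 1, 1)
    window_end = date(measure_year, 12, 31)
    return window_start <= closure_date <= window_end

# A closure in November counts for 2024; the same service delivered in
# January 2025 does nothing for the 2024 measurement year.
nov_counts = counts_for_measure_year(date(2024, 11, 3), 2024)
jan_counts = counts_for_measure_year(date(2025, 1, 10), 2024)
```

This is why predicted timing windows matter: outreach that converts in Q1 is worth the same to the measure as outreach that converts in Q4, but without the year-end congestion.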

The member engagement platform question

Many plans have invested in member engagement platforms — mobile apps, portals, wellness programs — expecting these to drive care gap closure. The results have been mixed.

Members who actively use plan apps do close more gaps — but the population that actively uses plan apps is small and not representative. The members who are hardest to engage through traditional outreach are often also the hardest to engage through digital channels.

  • For digitally active members. App-based care gap reminders, appointment scheduling integration, and digital health content drive meaningful closure improvement.
  • For occasionally active members. Email and SMS work, particularly when combined with clear action steps and scheduling support.
  • For digitally inactive members. Traditional channels (phone, provider-mediated, community health workers) remain essential. Forcing digital engagement rarely works.
  • For high-barrier members. Community health workers, care coordinators, and member advocates with real problem-solving authority produce outcomes that digital can't match.

The mature strategy is segmented — different engagement approaches for different member populations, chosen based on what actually works for each segment.
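The segmentation above can start as simple rules before any model is involved. A rule-of-thumb sketch mirroring the four groups — the field names and thresholds are illustrative assumptions:

```python
def engagement_segment(member: dict) -> str:
    """Assign a member to one of the four engagement segments above.
    Field names and thresholds are illustrative, not a validated scheme."""
    # High-barrier members get human outreach regardless of digital activity.
    if member.get("barrier_count", 0) >= 2:
        return "high-barrier"          # CHW / care coordinator
    logins = member.get("app_logins_90d", 0)
    if logins >= 6:
        return "digitally-active"      # app push, scheduling integration
    if logins >= 1 or member.get("email_opens_90d", 0) >= 1:
        return "occasionally-active"   # email/SMS with clear action steps
    return "digitally-inactive"        # phone or provider-mediated
```

Note the ordering: the high-barrier check comes first, so a member with heavy app usage but multiple barriers still routes to human outreach rather than push notifications.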

The measurement complexity

Care gap closure is straightforward to measure in aggregate (what percentage of members with the gap closed it) but hard to measure at the intervention level (did this specific intervention cause this specific closure). Many closures happen anyway — the member had an appointment, the provider was going to identify the gap, the member was going to get the care. Attributing closure to the plan's outreach requires careful measurement design.

The plans doing this well use control groups. Not every identified member receives outreach for every intervention. Holding out a control population allows measurement of the actual lift from each intervention type, which in turn allows resource allocation toward interventions that work and away from those that don't.
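The arithmetic of holdout measurement is worth spelling out, because the raw program number always overstates impact. A minimal sketch with made-up illustrative counts:

```python
def closure_lift(treated_closed: int, treated_n: int,
                 control_closed: int, control_n: int) -> float:
    """Absolute lift in closure rate attributable to the intervention,
    measured against a held-out control group."""
    treated_rate = treated_closed / treated_n
    control_rate = control_closed / control_n
    return treated_rate - control_rate

# Illustrative: 32% closure with outreach vs 25% in the holdout means
# 7 points of true lift, even though the program would report 32%.
lift = closure_lift(320, 1000, 250, 1000)
```

Resource allocation then follows lift per dollar by intervention type, not raw closure counts — which is what lets a plan retire interventions that merely take credit for closures that would have happened anyway.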

Predictive care gap closure, done well, is meaningful — better health outcomes for members, better quality scores for the plan, better cost management through prevention. Done poorly, it's an expensive illusion of action that doesn't move the underlying numbers. The difference usually comes down to whether the program was designed around the gap or around the member. For leadership teams assessing where care gap management, population health analytics, and member engagement sit within the broader health plan operating model, the Population Health Capability Model maps the capabilities — predictive analytics, channel orchestration, provider integration, barrier resolution — that determine whether care gap programs drive outcomes or just generate unopened mail.

Frequently Asked Questions

How much improvement in care gap closure is realistic with AI-enhanced programs?

Plans starting from traditional batch outreach approaches typically see 15-30% improvement in gap closure rates when they shift to predictive, segmented engagement. The improvement isn't uniform across gap types — some gaps (like diabetic eye exams) respond dramatically to targeted engagement, while others (like colorectal screening) remain challenging regardless of approach. Plans already operating modern engagement programs see more modest improvements, typically 5-15%, from AI enhancement.

How should we think about the build vs. buy decision for care gap analytics?

The gap identification and predictive scoring capabilities are increasingly commoditized — multiple vendors offer capable platforms. Where build vs. buy matters more is in the integration layer: how gap information gets into provider workflows, how member outreach gets triggered across channels, how responses feed back into models. These integrations tend to be plan-specific and hard to buy off the shelf. The mature approach is typically buy the analytics, build the integration.

What's the relationship between care gap closure and Stars ratings?

For Medicare Advantage plans, several Stars measures are directly driven by gap closure performance — medication adherence, statin use, breast cancer screening, colorectal screening, diabetes care measures. The financial leverage of Stars improvement is substantial (rebate percentages and quality bonuses can shift tens of millions of dollars per 100,000 members). This makes Stars-relevant gap closure one of the highest-leverage investment areas in health plans, and explains why plans with mature programs invest significantly more than those focused purely on HEDIS.