The Accenture HealthTech Innovation Challenge is a big deal. Run globally, the program attracted more than 1,200 applications this year. The world is divided into three regions: EMEA, Asia and the Americas. A lengthy application process sees the Accenture team and their stellar lineup of judges winnow each region down to 12 finalists. Those finalists then compete in what Accenture calls the Demolition Derby, where each company presents to 6-8 judges in 15-minute blocks. Given there are 8 blocks, it is both a sprint (15 minutes is not a lot of time to tell your story) and a marathon.
We were humbled to make the cut for the Boston round, where we competed against an extraordinary group of companies. We were even more humbled to be selected as a finalist for San Francisco, where we will go up against our fellow finalists from Boston, plus six teams from the Tokyo, Sydney and Dublin rounds.
One of the reasons we made it to the final is that we are attacking a massive problem in healthcare – clinical variation management. When we say massive, we really mean massive – $812B annually in the US alone. Add roughly another $1T for the rest of the world (assuming healthcare at 5% of GDP versus the 18% in the US). That’s closing in on $2 trillion per year for labs, tests, diagnostics, medications and other care that didn’t improve patient outcomes – and in many cases diminished them.
Managing clinical variation is notoriously complex. How else could an $812B-a-year problem persist for the better part of three decades?
That’s right, we have been working the clinical variation problem for almost 30 years – ever since the AMA first began to aggressively advocate for evidence-based care guidelines. The effectiveness of evidence-based care is not in question; it has been proven out in hundreds of peer-reviewed studies. Better outcomes, lower costs. The pillars of the value-based care movement.
So if we know how to do it, why aren’t we doing it?
The answer lies in scale. The cost, time and effort to produce a care process model manually is extremely high (which is why the refresh cycle runs around 4 to 7 years!). Furthermore, acceptance of those care process models (what we call adherence) is quite low, further diminishing the utility of the effort.
Let’s examine both of these separately:
The Complexity of the Problem
Any healthcare episode can be broken down into component parts. How granular you go is a function of what you are looking for. For the purposes of this post – let’s keep it at a high level:
- Events (every lab, test, order, incision and suture, done inpatient or outpatient)
- Sequences (you don’t put the new knee in until you have removed the old one)
- Timing (you don’t administer the painkiller the day before the surgery, you do it 1 hour and 40 minutes prior)
When you combine the three, the result is a picture of extraordinary complexity, where the events, the sequence of those events and the timing of that sequence all conspire to confuse even the most competent clinician.
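To make the scale of that complexity concrete, here is a minimal sketch. The numbers are purely illustrative (not drawn from any real episode data), and it counts only orderings – timing is ignored entirely, which would multiply the space further:

```python
from math import perm

# Illustrative assumption: 20 distinct event types (labs, orders, meds)
# and an episode that involves 10 of them, in some order.
event_types = 20
steps = 10

# Ordered sequences of 10 events drawn from 20 types (order matters,
# no repeats) -- before timing is even considered.
sequences = perm(event_types, steps)
print(f"{sequences:,} possible orderings")  # 670,442,572,800 possible orderings
```

Even with these toy numbers, the sequence space alone runs into the hundreds of billions, which is why no clinician – and no manual committee process – can enumerate it.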
The result is that providers (and payers, as seen here) generally develop something rather general – a high-level model, often using the national literature as a guide.
The Challenge of Acceptance and its Impact on Adherence
Care path adherence is a particularly thorny problem. Doctors own the patient relationship; the care path doesn’t own it, and neither does the machine. Doctors know what is best for their individual patients. They understand the exceptions, the co-morbidities, the family history, the patient’s preferences.
As a result, they often deviate from the care process model. Sometimes the deviation is warranted, and at times it is even innovative. Often, though, it is not warranted – the product of habit or, in rare cases, financial gain.
- When a care path is too broad it will get dismissed.
- When a care path is based on dissimilar cohorts it will get dismissed.
- When a care path is based on too small a sample size it will get dismissed.
- When a care path looks more like the opinion of a single doctor it will get dismissed.
- When a care path doesn’t reflect the objectives of the organization it will get dismissed.
Only when a care path avoids all of these dismissal triggers (plus some others that I have likely missed) will it be accepted.
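The dismissal triggers above amount to a checklist. As a purely hypothetical sketch – the field names and thresholds here are ours, not from any real product – it might look like:

```python
from dataclasses import dataclass

@dataclass
class CarePath:
    cohort_similarity: float   # 0..1: how alike the underlying patients are
    sample_size: int           # encounters behind the model
    contributing_doctors: int  # distinct physicians represented in the data
    matches_org_goals: bool    # reflects the organization's mission
    step_count: int            # granularity proxy: too few steps = too broad

def dismissal_reasons(path: CarePath) -> list[str]:
    """Return every grounds a physician would have to dismiss this path."""
    reasons = []
    if path.step_count < 10:
        reasons.append("too broad")
    if path.cohort_similarity < 0.8:
        reasons.append("built on dissimilar cohorts")
    if path.sample_size < 100:
        reasons.append("sample size too small")
    if path.contributing_doctors < 5:
        reasons.append("looks like a single doctor's opinion")
    if not path.matches_org_goals:
        reasons.append("doesn't reflect organizational objectives")
    return reasons  # empty list => no grounds for dismissal
```

The point of the sketch is the conjunction: a path that clears four of the five checks still gets dismissed, because any one reason is sufficient.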
The Opportunity: Building Acceptable Care Pathways at Scale
To create acceptable care process models at scale requires a few key elements:
- Use the hospital’s own data. By using the hospital’s own data one interacts with that hospital’s patient population – not some amorphous national standard. Building care process models with the hospital’s data also incorporates the work of the doctors at the hospital – ensuring that how they practice medicine gets built into the care process models.
- Incorporate everything. Generating a great care path requires multiple databases: the EMR, billing, pharmacy and more. This demands interoperability technology (FHIR, for example), but the results have higher resolution and superior explainability.
- Transparency. Simply presenting a care process model, no matter how granular, without showing what produced it will result in dismissal. Every step of every care process model must be open to inspection – why is it here, what are the stats, what is an acceptable substitute?
- Model the mission. Every provider and payer has a slightly different set of objectives that can result in a large variance in how they practice medicine. Consider knee replacement surgery. A teaching hospital will approach it far differently than a for-profit hospital and different again from a faith-based hospital. Presenting candidate care paths that reflect the various missions puts the physician in charge of making the right decisions.
- Make it granular. Care process models are difficult and time-consuming to build, and as a result most are “one size fits all.” That is not inherently problematic, but it would be far better to have one care process model for active women 55 and older getting a total knee replacement and another for inactive, overweight men 70 and older.
The product we demonstrated in Boston delivers on much of what is outlined here – compressing man-years of work into hours and uncovering good variation (innovation) alongside bad. Just as importantly, all of it is packed into a single application interface designed to be used by doctors, not data scientists (although they love it too).
Achieving the elements outlined above can deliver exceptional results. At Flagler Hospital in St. Augustine, Florida, our software was applied to pneumonia, producing a care path that saved $1,350 per patient ($850K per year), reduced length of stay by 2.5 days and cut readmissions by 7X. The result was a 22X ROI for the hospital. By rolling out one care pathway per month over the next 18 months, Flagler expects to save over $20M while simultaneously improving the quality of care it delivers to patients.
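A quick back-of-envelope on those figures – only the published numbers ($1,350 per patient, $850K per year, 22X ROI) come from the Flagler story; the implied patient volume and cost basis are our own inference:

```python
# Published figures from the Flagler pneumonia results.
savings_per_patient = 1_350   # dollars
annual_savings = 850_000      # dollars per year
roi_multiple = 22             # 22X ROI

# Implied pneumonia volume (our inference, not a published number).
patients_per_year = annual_savings / savings_per_patient
print(round(patients_per_year))   # ~630 patients per year

# Implied cost basis behind a 22X return (again, our inference).
implied_cost = annual_savings / roi_multiple
print(round(implied_cost))        # ~$38.6K
```

In other words, the published numbers hang together: a mid-sized hospital's pneumonia volume and a modest software cost are enough to produce the stated return from a single pathway.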
It is an amazing story that underscores the size of the opportunity. If a 335-bed hospital can save $20M, what could NewYork-Presbyterian do?
So wish us luck as we prep for the SF round. They say it is an honor just to be nominated, but we would like to win – and use the platform it provides to make healthcare better, across the board.