Kaiser Permanente, one of the largest healthcare providers in the United States, wanted to use big data more effectively to deliver the best care at a more affordable cost. Specifically, it wanted to better determine readmission risk at patient discharge and predictively classify readmissions as potentially preventable or not preventable.
Before the data lake: The company could not seamlessly access high volumes of new and existing data from disparate sources, nor analyze that data for patterns and trends.
With the data lake: The data lake housed more than 60 million historical patient records. Kaiser had a clear view of what data the lake contained; could track each dataset's source, format, and lineage; and could let users more efficiently search, browse, and find the data needed for analytics.
Results: Kaiser reduced costs and improved outcomes by ultimately lowering readmission rates, thanks to 1) a significantly improved understanding of potentially preventable readmissions; 2) the ability to develop better algorithms that more accurately predict readmissions; and 3) a new ability to identify patients at the highest risk of readmission early in their initial hospitalization and proactively adjust treatment plans to account for that risk.
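The case study does not describe Kaiser's actual models. As a purely illustrative sketch, a readmission-risk classifier of the kind mentioned above could be as simple as a logistic regression trained on discharge features; the feature names, scaling, and tiny synthetic cohort below are all hypothetical assumptions, not Kaiser's data or method.

```python
import math

def sigmoid(z):
    """Map a real-valued score to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(rows, labels, lr=0.1, epochs=500):
    """Fit logistic-regression weights with plain stochastic gradient descent.

    rows   -- list of feature vectors (already scaled to roughly 0..1)
    labels -- 1 if the patient was readmitted within 30 days, else 0
    """
    n_features = len(rows[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            grad = p - y  # gradient of the log-loss w.r.t. the score
            w = [wi - lr * grad * xi for wi, xi in zip(w, x)]
            b -= lr * grad
    return w, b

def readmission_risk(w, b, x):
    """Predicted probability of readmission for one patient's features."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Hypothetical synthetic cohort: (length_of_stay / 10 days, prior_admissions / 5)
X = [(0.2, 0.0), (0.3, 0.2), (0.9, 0.8), (1.0, 1.0), (0.1, 0.0), (0.8, 0.6)]
y = [0, 0, 1, 1, 0, 1]

w, b = train_logistic(X, y)
high = readmission_risk(w, b, (0.95, 0.9))  # long stay, many prior admissions
low = readmission_risk(w, b, (0.15, 0.0))   # short stay, no prior admissions
```

In practice such a score would be computed early in the initial hospitalization, so care teams can flag high-risk patients (here, `high` scores above `low`) and adjust treatment plans before discharge.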