
Personalized Medicine in 3 – 5 Years?

March 23, 2012

Healthcare Delivery Symposium

The optimism was overflowing at the Aspinwall Symposium at WPI yesterday. The healthcare transformation is underway, with distributed care, quality payment models, analytics, and big data all on display.

Most interesting were the panel presentations and discussions. Recently I wrote about the need to accelerate building our health information infrastructure nationwide. This, I argued, would enable a faster transition to digitized health records. The news for me at this conference is that an interoperable infrastructure in the traditional "plumbing" sense may become less important.

Distributed Care Models

David Dimon at EMC spoke about the growth of distributed care models, which rely on an "exostructure". Virtual data centers enable caregivers to connect with patients in a variety of settings outside the doctor's office or hospital. Monitoring patients' vital signs at home is certainly less expensive than at the hospital and may change some treatment protocols around length of stay.

Dale Wiggins, CTO of Philips Patient Care and Clinical Informatics, shared an example of how one distributed care model works. Philips eICU is a virtual Intensive Care Unit. Its aim is to reduce the need for multiple highly trained, highly paid Intensivists – the physicians who specialize in the care of ICU patients.

The center of the eICU is a cockpit where an Intensivist works, monitoring patients' vital signs and live video in real time, over distance. This model can dramatically improve outcomes, particularly among patients in more rural settings. The eICU is now deployed across the UMass Memorial hospital system.

The next logical step is to extend the ICU to the home, reducing readmissions during the critical 30-day period after release from the hospital.

Analytics, Predictive Informatics and Big Data

Robert Friedlander, IBM Master Inventor, spoke about how breakthroughs in distributed data processing create massive computing power. Google's MapReduce framework and its open-source implementation, Hadoop, have given researchers in biotech and pharma an unprecedented ability to query very large data sets quickly and with great flexibility.

Prior to Hadoop, a large data set like the human genome was impractical to analyze and query in depth. Relational databases are efficient with tabular data, but medical research requires analysis of very large "heterogeneous" data sets which don't fit into tables.

Medical breakthroughs can happen by applying predictive analytics, which involves many complex queries of these massive data sets. All this happens relatively easily and inexpensively through massively parallel distributed processing powered by Hadoop's MapReduce framework.
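To make the MapReduce idea concrete, here is a minimal sketch of the pattern in plain Python. The input is a hypothetical set of short DNA reads standing in for a large, heterogeneous data set; the map step counts short substrings (k-mers) in each record independently, and the reduce step merges the partial counts. On a Hadoop cluster, the map phase would run in parallel across many machines – here it runs sequentially just to show the shape of the computation.

```python
from collections import Counter
from functools import reduce

# Toy input: short DNA reads standing in for a large, heterogeneous
# data set. On a real cluster these would be split across many nodes.
reads = ["ACGTACGT", "TTACGGAC", "ACGTTTAC", "GGACACGT"]

def map_kmers(read, k=3):
    """Map step: count every length-k substring in one record."""
    return Counter(read[i:i + k] for i in range(len(read) - k + 1))

def reduce_counts(a, b):
    """Reduce step: merge two partial count tables."""
    a.update(b)
    return a

# Map phase: each record is processed independently, which is exactly
# what lets Hadoop distribute the work across a cluster.
partials = map(map_kmers, reads)

# Reduce phase: combine all partial results into one table.
totals = reduce(reduce_counts, partials, Counter())

print(totals.most_common(3))  # prints [('ACG', 5), ('CGT', 4), ('TAC', 3)]
```

Because the map step has no shared state, throwing more machines at the problem scales the map phase almost linearly – which is why queries that were once impractical on a single database server become routine.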

Bob predicted that these technologies may radically change the way healthcare is delivered, and in the foreseeable future: "The promise of truly personalized medicine may become a reality in the next three to five years."

Can the healthcare system handle that kind of disruption? Let me know your thoughts.

