Archive for the ‘BioInformatics’ Category

The Healthcare IT market is expected to grow at a CAGR of about 24% from 2012 to 2014. What other sector can boast such a rate in this economy? It is anticipated to be a $40 billion industry by the end of 2011. Why?

In a time when most spending is being frozen or sharply reduced, there can be only one reason for increased investment in HIT. It’s the same reason that has always driven strong spending: the expectation of a significant return on investment.

The urgency for significant HIT adoption is unavoidably clear:

  • The cost of healthcare is rising too fast for traditional containment approaches.
  • Even with healthcare reforms, too many people cannot afford health insurance.
  • Federal and State programs cannot absorb the increased cost of the uninsured in their existing aid programs.
  • Hospitalization costs continue to skyrocket.
  • While costs are rising, the quality of care is not.

While many debate how to implement HIT, virtually no one debates the need for its adoption throughout the health sector. The current system has been broken for quite some time and is still hemorrhaging billions of tax dollars.

The latest health information technologies hold the promise of a truly transformed future…a future that was impossible just a decade ago:

  • EHRs can reduce the costs of information management
    • Current, uniform patient data becomes accessible in real time wherever the patient is being treated.
    • Duplicate testing is minimized.
    • Analytics on data-rich patient information yields
      • better-informed care decisions
      • improved outcomes
      • lower treatment costs
    • Sophisticated analysis of massive patient databases yields
      • superior management of drug safety and effectiveness
      • rapid identification of expensive, less-likely-to-succeed treatment techniques, and more

These are just a few areas of improved care and reduced cost.

It’s not a question of whether we adopt state-of-the-art HIT, but of how aggressively we pursue deployment now.

Faculty researchers at Harvard Medical School (HMS) practicing in the Brigham & Women’s Hospital (BWH) Division of Pharmacoepidemiology and Pharmacoeconomics have selected Netezza’s TwinFin™ data warehouse appliance as their platform for advanced analytics. Their choice of this technology is especially important at a time when many other stakeholders in drug safety and effectiveness (DSE) are planning to upgrade technology. Harvard and BWH have been leaders in pharmacoepidemiological and pharmacoeconomic research since the 1990s. The lab chief, Dr. Jerry Avorn, is the well-known author of “Powerful Medicines: The Benefits, Risks and Costs of Prescription Drugs”.

Dr. Sebastian Schneeweiss is the Director for Drug Evaluation and Outcomes Research and Vice Chief of the Division of Pharmacoepidemiology and Pharmacoeconomics at Brigham and Women’s Hospital. The Harvard team of researchers is considered an industry bellwether. Here are some of the needs evaluated by the technical lead, Dr. Jeremy Rassen, and the sophisticated data-mining faculty:

  • Computationally intense, rapid analysis of claims data (and, in the future, EHR data) that keeps pace with expanding data input
  • Capabilities for in-database analytics
  • Ability for accelerated testing of new algorithms
  • A system that facilitates automation of continuous drug safety and effectiveness monitoring
  • Simplicity of use that minimizes the need for IT support and database administration, which often becomes a bottleneck

Simplicity of use is especially critical since other technologies often require significant setup and technical support time, both of which can seriously delay the outflow of much-needed DSE information to groups involved in Health Economics and Outcomes Research (HEOR), Pharmacovigilance, and Epidemiology.
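To make the in-database analytics requirement above a bit more concrete, here is a minimal sketch of the idea, not of the Netezza implementation itself: sqlite3 stands in for an analytic appliance, and the claims table and its columns are hypothetical. The point is that the aggregation runs where the data lives, and only a small summary result travels back to the researcher.

```python
# A minimal sketch of in-database analytics: push the heavy aggregation into
# the database instead of pulling raw claims rows out to the analysis tool.
# sqlite3 stands in for an analytic appliance; the claims table and its
# columns (patient_id, drug, adverse_event) are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claims (patient_id INTEGER, drug TEXT, adverse_event INTEGER)")
conn.executemany(
    "INSERT INTO claims VALUES (?, ?, ?)",
    [(1, "drug_a", 0), (2, "drug_a", 1), (3, "drug_b", 0), (4, "drug_b", 0)],
)

# The event rate per drug is computed inside the database; only this small
# summary crosses the wire.
query = """
    SELECT drug,
           COUNT(*)           AS exposed_patients,
           AVG(adverse_event) AS event_rate
    FROM claims
    GROUP BY drug
"""
for drug, exposed, rate in conn.execute(query):
    print(f"{drug}: n={exposed}, event rate={rate:.2f}")
```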

Dr. Schneeweiss is particularly interested in:

“…comparative safety and effectiveness of pharmaceuticals and biotech products, drug policy and risk management program evaluation, and epidemiologic methods using electronic healthcare databases.”

As such, he expects the use of Netezza technology will help expedite the delivery of timely DSE data and ultimately enhance the ability of care providers to act more quickly and effectively on behalf of patients.

We at Netezza are excited that our collaboration with these notable HMS faculty researchers has already led to leveraging IBM research and development efforts and existing products toward revolutionizing computational pharmacoepidemiology. Advanced research tools for pharmacoepidemiology carry with them the prospect of improved drug safety and effectiveness on a global scale.

The nascent trend of five years ago is rapidly becoming the model of today. More and more pharma research is focused on joining forces with universities. The rationale is simple and brilliant: with the ever-escalating costs of R&D and the ‘patent cliff’ fast approaching, the merger of resources is a natural wellspring of mutual benefits.

Big pharma needs new drug discoveries and more cost-effective ways of discovering them. According to the 2010 Kaiser Foundation report on prescription drug trends, 80% of all FDA-approved drugs have generic counterparts. Add to this the fast-approaching edge of the “patent cliff” (2011-2015), when dozens of brand-name drugs go off patent, including six of the ten largest medicines in the U.S., and you get a good idea of the challenges facing the ‘business’ of big pharma. Viagra, Actos, Symbicort, Crestor and Avandia are just a few major brands to face generic competition soon.

Partnering with universities will offer big pharma alternate approaches to their ongoing research…access to new and experimental technologies, creative thought processes indigenous to the university environment…more cost-efficient continued development of in-licensed drug candidates…fresh stimulus for stalled projects…the potential of discovering multiple new applications for existing drugs…all in an arena that could offer new, more rapid research platforms for the discovery and release of better medicines.

In this win-win collaboration, universities will be able to analyze pharma’s extensive and diverse data…and data is what it’s all about in the research and development of new drugs. An example of this trend is Sanofi’s recent announcement of a collaboration with Harvard in diabetes and cancer research. As pharma gleans new and improved information from institutional partners, so too do those institutions gain precious access to pharma’s previously locked treasure chest of health science research.

It’s a natural collaboration, taking place on a global scale. A marriage of necessity expected to bring forth a new generation of blockbuster progeny.

The most populated country in the world is fully engaging the issues of healthcare reform. The cornerstone of the newly emerging system is patient data. The organization, storage and management of health records will ultimately maximize the benefits of patient care and cost efficiency.
The WSJ underscores the importance: “China’s health-care IT market will see remarkable growth in the next five years, triggered partly by China’s three-year health-care reform program,” said Janet Chiew, analyst for research firm IDC. IDC estimates the market will reach $2.4 billion in 2013 and grow at an average of 19.9% per year.

Data storage and analysis capabilities will be further driven as the influx of new medical devices and diagnostic equipment continues to surge ahead. According to one report, China’s overall medical equipment market is expected to double between now and 2015 to reach over $53 billion.  This steady increase of new medical equipment will generate massive amounts of new patient data and a concurrent need for real-time access. It is not inconceivable that patient data could more than double in a decade.
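As a rough, purely illustrative back-of-the-envelope check on how quickly volumes compound, consider a steady growth rate near the ~20% figure IDC cites for the market; applying that rate to patient-data volume is our own assumption, not a forecast.

```python
# Back-of-the-envelope compounding at a steady annual growth rate. The ~20%
# rate echoes the IDC market estimate above; applying it to patient-data
# volume is purely illustrative.
def compound(initial, annual_rate, years):
    return initial * (1 + annual_rate) ** years

base = 1.0  # normalize today's data volume to 1
for years in (5, 10):
    print(f"After {years} years at 20%/yr: {compound(base, 0.20, years):.1f}x")
# After 5 years: ~2.5x; after 10 years: ~6.2x, comfortably "more than double".
```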

Before the information from new diagnostic equipment can be transformed into cost-saving health-care analytics, the existing systems of paper-based patient records must become electronic records. This is the first step toward eliminating expensive redundancies and delays in gathering patient data. Database technology and storage solutions from IBM and others are already being deployed throughout China, and have been for some time.

In Guangdong province, a group of high-volume hospitals is implementing a program called CHAS, or Clinical and Health Records Analytics and Sharing. One such hospital, focused on traditional Chinese medicine, has more than 10,000 patient visits per day. Deployment of the new health-care analytics technology in this hospital is expected by year-end.

China’s health-care reform offers data storage and analysis technology its largest opportunity yet for developing and deploying solutions…solutions that will have global impact on reducing costs and improving patient care.

Recently, the Proceedings of the National Academy of Sciences (PNAS) published an article describing IBM researchers’ successful mapping of the neural pathways of a macaque monkey. IBM’s interest? One of IBM’s chief interests is designing substrates that can generate lots of intelligence in small spaces. Being smart about squeezing lots of intelligence into physically small spaces is useful when designing next-generation computer chips. Indeed, one of the findings IBM was interested in is how architecting intelligence in a space-limited network (like the brain, which is bounded by the skull on the upper plane and by quantum physics at the lower plane) differs from the unlimited space of a social network.

The PNAS article included this comment (emphasis ours):

“We derive a unique network incorporating 410 anatomical tracing studies of the macaque brain from the Collation of Connectivity data on the Macaque brain (CoCoMac) neuroinformatic database. Our network consists of 383 hierarchically organized regions spanning cortex, thalamus, and basal ganglia; models the presence of 6,602 directed long-distance connections; is three times larger than any previously derived brain network; and contains subnetworks corresponding to classic corticocortical, corticosubcortical, and subcortico-subcortical fiber systems.”
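The network the authors describe is, at its core, a directed graph of brain regions. As a toy illustration only (the regions and edges below are invented, and the real network has 383 regions and 6,602 connections), such a structure can be represented and queried like this:

```python
# A toy directed graph in the spirit of the CoCoMac-derived network: nodes
# are brain regions, edges are directed long-distance connections. These
# particular regions and edges are invented for illustration.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("V1", "V2"),        # corticocortical
    ("V2", "thalamus"),  # corticosubcortical
    ("thalamus", "V1"),  # subcortical feedback
])

print(G.number_of_nodes(), "regions,", G.number_of_edges(), "directed connections")
print("Path from V1 to thalamus:", nx.shortest_path(G, "V1", "thalamus"))
```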

Why is deeper analysis of brain networks important for the genetic prediction of Alzheimer’s? Genetic testing for Alzheimer’s risk is at a standstill of sorts. Consider this snippet from a widely read article on the limitations of genetic screening for Alzheimer’s:

“For the majority of people who are at risk for the late-onset form of Alzheimer’s disease, the most important factors are age, female gender, family history, and presence of the gene APOE4. Even though people who have the APOE4 gene are more likely to develop Alzheimer’s, genetic testing is not very useful because so many people who have APOE4 don’t go on to develop Alzheimer’s, and there are plenty of people who don’t have APOE4 that do develop Alzheimer’s.”

It seems likely that APOE4 alone is not a good indicator. A much better indicator will be multiple markers. One approach is to keep looking for single markers like APOE4; a better approach is to look down neural pathways to find where APOE4 acts together with other genes, whose joint activity, decay, presence, or absence is a much stronger correlate. This is impossible without a much deeper understanding of where the brain’s rural routes, highways, freeways, and superhighways exist.
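As a minimal sketch of the multiple-marker idea (the companion genes, weights, and carrier profiles below are invented; this is not a validated risk model), combining APOE4 with hypothetical co-acting genes along a pathway into a single logistic score shows how joint information can separate risk better than one marker alone:

```python
# A minimal sketch of the "multiple markers beat a single marker" idea:
# combine APOE4 with hypothetical co-acting pathway genes into one logistic
# risk score. The companion genes, weights, and carrier profiles are
# invented for illustration only.
import math

def risk_score(markers, weights, bias=-2.0):
    """Logistic combination of binary marker indicators."""
    z = bias + sum(weights[g] * markers.get(g, 0) for g in weights)
    return 1.0 / (1.0 + math.exp(-z))

weights = {"APOE4": 1.2, "GENE_B": 0.8, "GENE_C": 0.6}  # hypothetical pathway partners

apoe4_only   = {"APOE4": 1}
full_pathway = {"APOE4": 1, "GENE_B": 1, "GENE_C": 1}

print(f"APOE4 alone:       {risk_score(apoe4_only, weights):.2f}")
print(f"APOE4 + co-actors: {risk_score(full_pathway, weights):.2f}")
```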

In Next Generation Gene Sequencing, Don’t Forget the Data…and the Answers

In the next wave of gene sequencing techniques, the focus is mostly on the inputs. Take, for example, a new nanopore approach from a computational physicist at the University of Illinois Urbana-Champaign. By pulsing an electric field on and off around a strand of DNA, they can induce the DNA to expand and relax as it threads through the nanopore…just the behavior needed to read each base. So much innovation on the front end. What about the outputs?

In a recent press release, one industry guru wants us to spend more time thinking about what to do with the data than how to generate it:

“[The] difficult challenge is accurately estimating what researchers are going to do with the data downstream. Collaborative research efforts, clever data mash-ups and near-constant slicing and dicing of NGS datasets are driving capacity and capability requirements in ways that are difficult to predict,” said Chris Dagdigian, principal consultant at BioTeam, an independent consulting firm that specialises in high performance IT for research. “Users today need to consider a much broader spectrum of requirements when investing in storage solutions.”

Unfortunately, one of today’s myths is that storage solutions are prepared to do the ‘near-constant slicing and dicing’ Mr. Dagdigian mentions. Too often, high-performance computing (née supercomputing) installations just stick a big storage system on the end and dump data into it. The problem is that without industry-leading tools to get data back out of the storage system, the real challenge doesn’t end with the sequencing…it’s just beginning.
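As one small, hypothetical example of the downstream ‘slicing and dicing’ Mr. Dagdigian describes, here is a sketch that streams reads out of a gzipped FASTQ file and summarizes GC content using only the Python standard library; the file name is made up.

```python
# One small example of downstream "slicing and dicing": stream reads out of
# a gzipped FASTQ file and summarize GC content without loading the whole
# dataset into memory. The file path is hypothetical.
import gzip

def fastq_sequences(path):
    """Yield the sequence line of each 4-line FASTQ record."""
    with gzip.open(path, "rt") as handle:
        for i, line in enumerate(handle):
            if i % 4 == 1:           # line 2 of every record holds the bases
                yield line.strip()

def gc_fraction(seq):
    return (seq.count("G") + seq.count("C")) / len(seq) if seq else 0.0

if __name__ == "__main__":
    path = "sample_reads.fastq.gz"   # hypothetical input file
    fractions = [gc_fraction(s) for s in fastq_sequences(path)]
    if fractions:
        print(f"{len(fractions)} reads, mean GC = {sum(fractions)/len(fractions):.3f}")
```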

Is this a new problem? Some think so. For example, George Magklaras, senior engineer at the University of Oslo, says: “The distribution and post-processing of large data-sets is also an important issue. Initial raw data and resulting post-processing files need to be accessed (and perhaps replicated), analyzed and annotated by various scientific communities at regional, national and international levels. This is purely a technological problem for which clear answers do not exist, despite the fact that large-scale cyber infrastructures exist in other scientific fields, such as particle physics. However, genome sequence data have slightly different requirements from particle physics data, and thus the process of distributing and making sense of large data-sets for Genome Assembly and annotation requires different technological approaches at the data-network and middleware/software layers.”

New problems need new solutions.