Archive for July, 2010

It’s an amazing day, the beginning of something great! Netezza Corporation’s (www.netezza.com) (NYSE:NZ) new Health & Life Sciences division is launching our voice on Facebook, Twitter, LinkedIn, and more. As the premier analysis engine for all healthcare-oriented data, Netezza will be a leading voice in what is now possible as we move toward truly predictive drugs, care, and outcomes.

My friend and colleague Bill Zanine, our Business Solutions Executive for Health & Life Sciences, will be the broadcast presence where technology meets science meets business in his blog, Rx for Analytics. Bill has spent years both as feet on the street and in the elite ivory towers of global healthcare data vision, provision, and utilization.

Where will you find Netezza this year, next year, and beyond?

– When insurance payers are trying to identify fraud and abuse before the fraudster is paid, not after, and pass those savings on to you, the consumer, you’ll find Netezza

– When drug companies look to post-processing analysis of gene sequencing to identify which drugs your personalized medicine profile contraindicates, you’ll find Netezza

– When providers want to query a centralized electronic health record for aggregate analytics on a vector, symptom, drug, or outcome, you’ll find Netezza

– When pharmaceutical companies want to write smarter contracts, distribute more effectively, and penetrate the opinion ‘cloud’ of influential doctors and academicians to get better drugs to you, you’ll find Netezza

We can make your doctor smarter.  We can make the next drug better.  We can make your insurance cheaper.  We can help them cure cancer faster.  Netezza can do all this.  Netezza can.

In Next-Generation Gene Sequencing, Don’t Forget the Data…and the Answers

In the next wave of gene sequencing techniques, the focus is mostly on the inputs. Take the new nanopore approach from a computational physicist at the University of Illinois at Urbana-Champaign: by pulsing an electric field on and off around a strand of DNA, researchers can induce the DNA to expand and relax as it fits through the nanopore…just the behavior needed to read each base. So much innovation on the front end. What about the outputs?

In a recent press release, one industry guru wants us to spend more time thinking about what to do with the data than how to generate it:

“[The] difficult challenge is accurately estimating what researchers are going to do with the data downstream. Collaborative research efforts, clever data mash-ups and near-constant slicing and dicing of NGS datasets are driving capacity and capability requirements in ways that are difficult to predict,” said Chris Dagdigian, principal consultant at BioTeam, an independent consulting firm that specialises in high performance IT for research. “Users today need to consider a much broader spectrum of requirements when investing in storage solutions.”

Unfortunately, one of today’s myths is that storage solutions are prepared to do the ‘near-constant slicing and dicing’ Mr. Dagdigian mentions. Too often, high performance computing (née supercomputing) shops are used to sticking a big storage system on the end of the pipeline and dumping data into it. The problem is that without industry-leading tools to get data back out of the storage system, the real challenge doesn’t end with the sequencing…it’s just beginning.
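
To make the ‘slicing and dicing’ point concrete, here is a minimal sketch of the two patterns. Everything in it is hypothetical: the variant-call records and schema are invented for illustration, and SQLite stands in only as a generic query engine, not as Netezza’s actual interface.

```python
import csv
import io
import sqlite3

# Hypothetical variant-call records: sample, chromosome, position, gene, quality.
# In a real pipeline this would be millions of rows dumped to bulk storage.
RAW = """sample,chrom,pos,gene,qual
S1,chr7,55249071,EGFR,61
S2,chr7,55249071,EGFR,48
S1,chr17,7577120,TP53,72
S3,chr17,7577120,TP53,33
S2,chr12,25398284,KRAS,55
"""

def file_scan_count(min_qual=50):
    """Pattern 1: data dumped to storage as flat files.
    Every new question means another full scan in application code."""
    counts = {}
    for row in csv.DictReader(io.StringIO(RAW)):
        if int(row["qual"]) >= min_qual:
            counts[row["gene"]] = counts.get(row["gene"], 0) + 1
    return counts

def in_database_count(min_qual=50):
    """Pattern 2: the same question pushed down to a query engine
    (SQLite here, purely as a stand-in for an analytic database)."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE variants (sample TEXT, chrom TEXT, pos INTEGER, gene TEXT, qual INTEGER)")
    rows = [tuple(r.values()) for r in csv.DictReader(io.StringIO(RAW))]
    con.executemany("INSERT INTO variants VALUES (?, ?, ?, ?, ?)", rows)
    return dict(con.execute(
        "SELECT gene, COUNT(*) FROM variants WHERE qual >= ? GROUP BY gene", (min_qual,)
    ).fetchall())

if __name__ == "__main__":
    print(file_scan_count())    # {'EGFR': 1, 'TP53': 1, 'KRAS': 1}
    print(in_database_count())  # same answer, but the engine does the work
```

The contrast is not about a few lines of code; it is about who does the work. In the first pattern, every new question re-reads the raw dump in application code; in the second, the filtering and aggregation are pushed down to the engine that sits with the data, which is what near-constant slicing and dicing actually requires.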

Is this a new problem? Some think so. For example, George Magklaras, senior engineer at the University of Oslo, says: “The distribution and post-processing of large data-sets is also an important issue. Initial raw data and resulting post-processing files need to be accessed (and perhaps replicated), analyzed and annotated by various scientific communities at regional, national and international levels. This is purely a technological problem for which clear answers do not exist, despite the fact that large-scale cyber infrastructures exist in other scientific fields, such as particle physics. However, genome sequence data have slightly different requirements from particle physics data and thus the process of distributing and making sense of large data-sets for genome assembly and annotation requires different technological approaches at the data-network and middleware/software layers.”

New problems need new solutions.