Archive for August, 2009

Neural marketing, or neuromarketing, is the emerging practice of scanning the brains of focus group members as they watch advertisements, images, and slogans in order to understand their propensity to buy.  Without going further, what you picture in your mind's eye (no pun intended) is pretty much what happens: a bunch of folks hooked up to electrodes watching movies on a big screen, white-lab-coated scientists scurrying around, and likely some marketers watching from behind a two-way mirror.  While the concept gained ground in the early 2000s, neural marketing is hitting the mainstream with the publication of the first handbook on the topic, Buyology by Martin Lindstrom.  The book has received some poor reviews, but the first mainstream book of a new movement or technique is rarely especially good; others like The One to One Future by Peppers and Rogers or Clicking by Faith Popcorn weren't accused of being perfectly written either.  We can't all be Gladwell or Vollmann.  Whitepapers and blogs abound, but the real story here is what happens in the future.

Today, neural marketing is expensive, and the data is very hard to collect, manage, and analyze.  A full brain scan covering just a few seconds might represent 10 gigabytes of data.  One could imagine collecting a terabyte of data from one subject for a single pair of television commercials.  Getting an n of 20 for both the control and experimental groups would then represent 40 terabytes of data, for a single experiment.  Fortunately, innovation and price decreases continue for systems that can handle the loading throughput, storage, and mining of data at this scale, something that was previously cost prohibitive with typical database and hardware configurations.  Still, it will be some time before any of this is available at a non-stellar cost.

One complication is that medicine keeps driving toward greater specificity and speed, which creates ever more data.  Techniques like echo-planar imaging sequences have pushed recording times down to fractions of a second over brain structures measured in microns.  Clinicians, however, work with one subject at a time (the patient) and want to take a single picture and review it; the computer's job is to render that image and then dispose of the enormous underlying data file as quickly as possible.  Marketers want something more akin to taking a video (think sleep lab), but with less raw detail, more aggregation, and, most importantly, an objective way to score the scan numerically rather than relying on the visual comparison that comes out of the medical tradition.
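
To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python; the per-subject figure is the rough number imagined above, not a measured value.

```python
# Back-of-the-envelope estimate of neuromarketing scan data volume.
# All inputs are the rough assumptions from the discussion above, not measured values.

tb_per_subject = 1.0        # ~1 TB per subject for a pair of TV commercials
subjects_per_group = 20     # n = 20
groups = 2                  # control + experimental

total_tb = tb_per_subject * subjects_per_group * groups
print(f"Data for a single experiment: ~{total_tb:.0f} TB")   # -> ~40 TB
```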

So is this going to be real?  Nestle was reported in 2008 to be noodling around just a step away, working to replicate the nose, and eventually the brain, of its taste testers.  And Microsoft seems to think so: the company has patented an approach to measuring focus group members' reactions to graphical user interfaces.

Recently, lawmakers agreed on the who, where, and how of a national requirement for restaurants to label the calorie content of their foods.  A number of locations, including New York City, already have these regulations.  Calorie counts are a great example of a brand new source of data not only being unleashed on the public but, more importantly, influencing their purchasing behavior.  A.G. Lafley, the CEO of P&G, called the point of purchase, where a consumer makes the buying decision, the 'moment of truth'.  More and more, the calorie count will be front and center at this moment of truth.  And purchasing behavior maps to profitability, brand loyalty and brand switching, market share, and all the wonderful things that businesses need to constantly measure, mine, and act on to stay competitive and meet their performance expectations.

Consider this thought exercise.  There are approximately 300 million (300M) people in the US, so assume 300M meals are eaten a day (the babies who eat no prepared meals roughly offset the people who eat more than one meal a day).  According to NPD, one in five meals is served in a restaurant, so 20% of 300M is 60M.  Although I can't source it, somewhere over 50% of restaurant meals in the US are served by chain restaurants, which gives 30M meals a day from chains.  Very soon, each of these 30M meals, when ordered, will need to be accompanied by a calorie count, if not more information such as fat content.  A study cited in a July 8th Wall Street Journal article found that a third of Subway customers noticed the nutrition information for their order, although other chains showed much lower rates.  If we assume just 5% of patrons notice the information and are influenced by it, that's at least 1M meals a day where the consumer is influenced by the calorie count.  Another way to validate these numbers: McDonalds serves 47M customers a day worldwide, so let's assume 20M in the US.  If 2.5% of those customers notice the calorie counts, that equals half a million customers; assuming each eats only one McDonalds meal a day, there is half of your one million meals from a single chain.  If an average meal costs $6, then the total economic impact of changing preferences can be conservatively estimated at $6M per day, or roughly $2B per year.
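
The same thought exercise as a small Python sketch; every number in it is one of the assumptions above (the NPD ratio, the unsourced chain share, the 5% notice rate, the $6 average meal), not measured data.

```python
# Back-of-the-envelope economic impact of calorie labeling on purchasing.
# Every input below is an assumption from the thought exercise above.

us_meals_per_day = 300e6        # ~300M people, ~300M meals a day
restaurant_share = 0.20         # 1 in 5 meals served in restaurants (NPD)
chain_share = 0.50              # >50% of restaurant meals from chains (unsourced)
influenced_share = 0.05         # assume 5% of patrons notice and are influenced
avg_meal_price = 6.00           # assumed average meal price, in dollars

chain_meals = us_meals_per_day * restaurant_share * chain_share   # 30M per day
influenced_meals = chain_meals * influenced_share                 # 1.5M per day
conservative_meals = 1e6                                          # rounded down, as above

daily_impact = conservative_meals * avg_meal_price                # $6M per day
annual_impact = daily_impact * 365                                # ~$2.2B per year

print(f"Chain meals per day: {chain_meals:,.0f}")
print(f"Influenced meals per day (conservative): {conservative_meals:,.0f}")
print(f"Impact: ${daily_impact / 1e6:.0f}M/day, ~${annual_impact / 1e9:.1f}B/year")
```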

Many of today's sizable data mining businesses and technologies were founded on addressing market pain of less than $2B.  This new data phenomenon is real.  We can expect to see a growing prevalence of consumer-facing data mining solutions; popular ones abound on the web and are now hitting the phone.  But the bigger business opportunity here is selling data mining applications to the chain restaurants.  The meal providers should want to know:

  • If we want to look at the price elasticity of meals, shouldn't we also want to look at calorie elasticity? (A sketch of one way to estimate this follows after the list.)
  • How does calorie responsiveness vary geographically, demographically, by time of day, by size of store, and against a number of other independent variables (or dimensional data, if you're a techie)?
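
As flagged in the first bullet, here is a minimal sketch of what a calorie elasticity estimate might look like, treating calories the way an analyst would treat price in a log-log demand regression. The toy numbers and the two-regressor setup are illustrative assumptions, not a real menu or schema.

```python
# Hypothetical sketch: estimating "calorie elasticity" of demand the same way
# one estimates price elasticity, via a log-log ordinary least squares fit.
# The data below is invented for illustration only.
import numpy as np

# One observation per menu item: calorie count, price, and units sold.
calories = np.array([250, 400, 550, 700, 850, 1000], dtype=float)
price    = np.array([3.5, 5.5, 4.0, 6.5, 5.0, 7.0])
units    = np.array([1200, 1100, 900, 700, 520, 400], dtype=float)

# log(units) = b0 + b1*log(calories) + b2*log(price); b1 is the calorie elasticity.
X = np.column_stack([np.ones_like(units), np.log(calories), np.log(price)])
coef, *_ = np.linalg.lstsq(X, np.log(units), rcond=None)
print(f"Estimated calorie elasticity: {coef[1]:.2f}")
```

In practice one would slice the same regression by region, daypart, store size, and the other dimensions in the second bullet, which is exactly what turns this into a large-scale data mining problem.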

All these tie back to promotions and new items.  If a restaurant is looking at new item profitability, how to grow a segment of its category, or where to get the most bang for the promotional buck, maybe calories are where to start.  And if we're talking numbers, with millions of consumers and hundreds of thousands of stores, take a guess at how much data that is.  Clearly, mining it with any kind of speed is not something a business can do by running statistical functions against modest, general-purpose database deployments like MySQL or Oracle.
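
For a rough sense of the scale involved, here is a tiny illustrative estimate; the items-per-order and record-size figures are assumptions, not industry numbers.

```python
# Illustrative scale estimate for chain-restaurant transaction data.
# All inputs are assumptions made for the sake of the exercise.

chain_meals_per_day = 30e6      # from the thought exercise above
line_items_per_meal = 4         # assumed items per order
bytes_per_line_item = 200       # assumed POS record size, including calorie fields

rows_per_year = chain_meals_per_day * line_items_per_meal * 365
tb_per_year = rows_per_year * bytes_per_line_item / 1e12

print(f"~{rows_per_year / 1e9:.0f}B rows/year, ~{tb_per_year:.0f} TB/year")
```

Even at these conservative assumptions, that is tens of billions of detail rows a year before any consumer-level or loyalty data is joined in.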

Traditional BI Speeds Even More Crippling Than We Thought

A recent article from Stephen Swoyer of The Data Warehousing Institute implies that the negative impact on end users from poor query performance is even worse than originally thought.  Swoyer's key quote:

“BI vendors like to talk up a 20/80 split — i.e., in any given organization, only 20 percent of users are actually consuming BI technologies; the remaining 80 percent are disenfranchised. According to BI Survey 8, however, most shops clock in at far below the 20 percent rate. In any given BI-using organization, notes Nigel Pendse, a principal with BARC and the primary architect of BI Survey, just over 8 percent of employees are actually using BI tools. Even in industries that have aggressively adopted BI tools (e.g., wholesale, banking, and retail), usage barely exceeds 11 percent.”

So the most recent study says 8%+ (let's call it 10%) of end users actually use what's been rolled out, and the rest are 'disenfranchised'.  One thrust of the article is that BI vendors actually inflate apparent usage by relying on the colloquial '80/20' description of the problem.  This is a bit like my favorite story about Warren Buffett.  To paraphrase, an interviewer once asked him, "Mr. Buffett, I learned that you don't spend most of your time talking, negotiating, or in meetings, but that you spend 80% of your work time reading.  Is this true?"  Mr. Buffett replied, "Actually, it's more like 90%."  We can agree that there is a huge group of disenfranchised business users, or we can violently agree.  One study author called BI firms' penchant for citing an 80/20 split 'vendor's optimism'.  This is splitting hairs.  Most business people won't care whether 80% or 90% of their people are not using a decision support system that has been paid for and installed.  Both numbers are equally alarming, and both mean failure.

Why did the Business Application Research Center study show such low usage?  It found three reasons: "security limitations, user scalability, and slow query performance".  Of these, the last two can be bucketed under 'system performance'.  One could interpret this to mean that if a BI application gets security right and follows the typical usage pattern, in which only a minority of potential users engage, the cause comes down to one thing: performance.

It's unlikely that the very low usage of five years ago and the same very low usage today have different root causes.  What's more likely is that the typical end user has very real limits on how much time they can spend on BI, and if system performance can't meet their requirements, they don't engage with it.