
What Did We Really Learn From the BBC Brain-Training Software Study?

Ever since I saw the press releases yesterday announcing a new Nature article showing that brain-training software was ineffective, I knew a storm was brewing.  The paper was still under embargo at that point, so I was anxiously awaiting its release today.  Slowly but surely, the mainstream media got wind of the paper, running headlines like “Brain Games Don’t Make You Smarter”.  Then the blogosphere lit up, with chatter about this controversial paper continuing throughout the day.  I was stuck in the lab all day and couldn’t put a post together, so I’m a little late to the party.  But I wanted to give you a rundown of what exactly the study found, and point out a few intricacies of its findings.

When I began graduate school, there was a savvy postdoc in our lab who showed the newbies the ropes.  One of the best pieces of advice he offered was, “Don’t believe everything you read, and always check who did the study.”  I try to live by these words every time I read a study.

The group that submitted the Nature paper was led by a researcher named Adrian Owen, a professor at the MRC Cognition and Brain Sciences Unit in Cambridge, UK.  Owen developed this brain-training program and study in collaboration with the BBC.  A quick look at Owen’s PubMed listing shows he’s primarily known for using fMRI to show that some people in a persistent vegetative/minimally conscious state are, in fact, self-aware (a controversial field and a bold claim, which I’m not going to get into right now).  But the point is: Owen isn’t an expert in brain plasticity or in behavioral training-induced cognitive changes.

Making brain-training software isn’t a task you just jump into, and experts spend years validating and refining approaches in animal models.  But it appears that Owen woke up one day and suddenly decided he had the insight to figure out whether the cognitive benefits claimed by brain-training software were real.

Even if we give Owen the benefit of the doubt and assume he knows what he’s doing, not all brain-training programs are created equal.  I try, whenever possible, to refrain from using the term “brain games”, because when training modules are created from sound preclinical and clinical research, they’re really much more than games.  Owen and the BBC tested only their own program, so the results simply say that their program doesn’t work.  This finding does not generalize across the industry.

SharpBrains has the best rundown I’ve seen of what’s wrong with this report, including the nitty-gritty details showing that participants in the Owen/BBC study used the brain-training software for considerably less time than is typical for such programs.  The training sessions were also unsupervised, so participants may have been prone to distraction.

While I’m moderately annoyed with the overreaching conclusions the authors drew, I’m even more ticked off at the mainstream media headlines.  We spend billions of dollars bringing drugs to market, and things often go wrong during drug trials.  Companies miss clinical endpoints, or worse, someone has an adverse event.  Yet when this happens, I have to scour the net just to find a mention of the problem.  The brain-training software industry is still in its infancy, and there will inevitably be bumps in the road.  But the truth is, these studies cost a fraction of what it takes to bring a drug to market, and despite what this rogue Nature paper says, brain training has huge potential to help millions of people.