Thomas Goetz

Why The Debate Over Personal Genomics Is a False One

I appeared on KQED's Forum show this morning to discuss the whole Walgreens/Pathway Genomics fallout. Here's a link to the show. And here are some quick thoughts:

The controversy seems to have stirred the FDA to assert its authority - and that of physicians - over any and all medical metrics. As readers of The Decision Tree know, I have little patience for the argument that we need doctors as gatekeepers of our genetic information. This isn't a drug, and this isn't a device - it's information about ourselves, as ordinary as our hair color or our waist size or our blood pressure - all things that we can measure and consider without a doctor's permission.

I'm amazed, in many ways, that this discussion continues to be framed as "can people handle the truth?" - because that line of argument is flawed in so many ways. I'll offer a few: 1) People are more capable of handling genetic information (and other health information) than they're given credit for. 2) Most doctors aren't experts in genetics anyway. 3) If we wait for doctors to give us this information, we'll be waiting for something like 17 years. 4) This is our information, about us, and we own it as much as we own our thoughts and our values. 5) We may want to ask doctors or genetic counselors what our DNA means - I'm not saying it's easy to understand - but that's entirely our choice.

I'm sincerely fearful that, with Congress now deciding it wants to inspect this stuff, the FDA will feel obligated to regulate and shut us off from what is rightfully ours. To me, getting access to this information is a civil rights issue. It's our data.

Some in the government see things clearly here. Donald Berwick, President Obama's nominee to run CMS - the agency that oversees Medicare and Medicaid - has defended the rights of patients to own their information. The FDA is now run by the well-regarded Peggy Hamburg, about whom I have heard only great things; in a brief conversation with her last year, I was struck by her fair-mindedness and belief in the ideals of transparency and greater consumer empowerment. My hope is that she sees the light here. She's written about how the FDA is a public-health agency, particularly in terms of "risk communication"; well, one of the reasons we communicate risks is to allow people to take responsibility and act in ways that minimize those risks. It's the basis of preventive health. That's precisely the potential of personal genomics, and to squash that would have a net effect of undermining the public's health.

The FDA doesn't have to use regulation like a hammer, squashing innovation and the opportunity for people to use genetics to take control of their health. It can instead help foster innovation and issue some basic guidelines that recognize information as a powerful tool - guidelines that reject intermediation and paternalism.

I'm crossing my fingers.

Brian Mossop

More On Intelligence

On the heels of a post I did at The Scientist (“Amazing Rats”), where I proposed a new model of intelligence based on an animal’s ability to solve problems rather than its communication skills, I read a blog post by Jonah Lehrer at The Frontal Cortex where he gives his take on what intelligence really means. Rather than defining smarts merely by how many facts someone can cram into their head, Lehrer argues that a better measure of intelligence is how well people (or animals) can shift their selective attention. Facts are just facts, but the intelligent being can manipulate and organize information for the task at hand, which places a high demand on the attention circuits of the brain.

Lehrer cites a classic study by Walter Mischel, who placed a marshmallow in front of a group of children. The kids were told that they could have the marshmallow now, or wait 15 minutes and get two marshmallows. According to Lehrer, the children who were able to wait for the bigger payout had better SAT scores, were better behaved, and were less stressed than their impulsive counterparts.

Mischel’s study is a classic test of delayed-reward discounting, and I’m not surprised that the kids who were able to wait a short period of time for the bigger reward ultimately did better than those who couldn’t. But are the kids who could wait more intelligent, or is there something else going on with the kids who cracked?

Numerous studies have shown that people prone to addiction steeply discount delayed rewards. They’re impulsive. They can’t see past the immediate pleasure of the reward.

Defects in the dopamine reward system (e.g. addiction) interfere with other circuits in the brain.  Therefore, it doesn’t surprise me that very intelligent addicts make unwise decisions while seeking their next high.

I’m not saying the kids in Mischel’s study who impulsively took the first marshmallow were all addicts. But as Lehrer pointed out, these kids ended up more stressed out and with more behavioral problems than other children, which suggests there’s more to the story than a simple difference in intelligence.

Brian Mossop

C-reactive Protein: The Good, The Bad, and The Ugly

When something’s wrong with the body, the innate immune system kicks into high gear, sending inflammatory molecules through the body, which help recruit macrophages – the cellular garbage collectors – to the scene. Recent publications show systemic inflammation goes hand-in-hand with cardiovascular disease (CVD) and atherosclerotic vessels. Researchers have been trying to pinpoint which inflammatory markers could potentially be used as biomarkers for CVD risk or progression. Current efforts have zeroed in on one marker in particular, the C-reactive protein, in the hopes of finding a way to assess a person’s risk for CVD both non-invasively and well before a cardiovascular event occurs. Preliminary evidence has shown that in the normal population, the higher the C-reactive protein level, the higher the risk for CVD. But what exactly is a normal population? These days, a full serving of heart disease often comes with a heaping side of Type II diabetes, rheumatoid arthritis, or chronic kidney disease, creating a so-called “co-morbidity” of chronic diseases. Not surprisingly, these secondary disease states also affect the levels of C-reactive protein in the blood. So when a patient has more than one chronic condition, how useful is measuring the C-reactive protein level in predicting CVD risk?

A new study published this week in PLoS One by a group at King's College London took a look at people with rheumatoid arthritis (RA), an inflammatory joint condition that also coincides with remarkably elevated C-reactive protein levels. According to the authors, along with swollen joints, sufferers of rheumatoid arthritis are also twice as likely to have a heart attack.

The researchers looked at three subclinical measures of CVD: flow mediated dilation (measures endothelial cell function), intima-medial thickness (measures arterial wall thickness), and pulse wave velocity (measures large artery stiffness), in people with RA and healthy control subjects. The RA group was further subdivided into three tiers according to how much C-reactive protein was circulating in the patient’s blood during a baseline reading.

If C-reactive protein were in fact causing CVD in rheumatoid arthritis patients, the subclinical CVD measures should change incrementally as the level of C-reactive protein increases. However, the researchers found that two of the subclinical measures didn’t change at all as the level of C-reactive protein increased in those with rheumatoid arthritis. The third subclinical measure – the flow mediated dilation value, which measures how responsive endothelial cells are – actually improved as C-reactive protein levels rose, suggesting that the protein may offer a protective function in a chronic state of inflammation.
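To make that study design concrete, here's a minimal sketch of this kind of stratified comparison: patients are binned into tiers by baseline C-reactive protein (CRP), and a subclinical measure (here, flow-mediated dilation, FMD) is averaged within each tier. The cutoffs and all patient numbers below are invented for illustration; they are not the study's data.

```python
def crp_tier(crp_mg_per_l, cutoffs=(10.0, 30.0)):
    """Assign a patient to a low/mid/high CRP tier (cutoffs are illustrative)."""
    if crp_mg_per_l < cutoffs[0]:
        return "low"
    elif crp_mg_per_l < cutoffs[1]:
        return "mid"
    return "high"

def mean_fmd_by_tier(patients):
    """Average flow-mediated dilation (%) within each CRP tier."""
    sums, counts = {}, {}
    for crp, fmd in patients:
        tier = crp_tier(crp)
        sums[tier] = sums.get(tier, 0.0) + fmd
        counts[tier] = counts.get(tier, 0) + 1
    return {tier: sums[tier] / counts[tier] for tier in sums}

# Invented example data: (baseline CRP in mg/L, FMD in %)
patients = [(5, 6.0), (8, 6.5), (15, 7.0), (22, 7.0), (40, 7.5), (55, 7.5)]
averages = mean_fmd_by_tier(patients)
```

A causal story would predict a monotonic worsening across tiers; the surprise in the paper is that the FMD trend ran the other way.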

I’m always scouring scientific papers looking for the next great thing in biomarkers. After all, if a simple blood test can tell us who’s at risk for certain diseases, we could make great strides in diagnosis and treatment of affected people. But I think this paper shows that biomarker readings are not so straightforward. In the complicated web of chronic disease we’re now spinning, we need to better understand how cardiovascular disease biomarkers – particularly inflammatory markers – change when people have more than one chronic medical condition.

http://www.plosone.org/article/info:doi/10.1371/journal.pone.0010242#pone-0010242-t001

Brian Mossop

What Did We Really Learn From the BBC Brain-Training Software Study?

Ever since I saw the press releases yesterday telling of a new article to be released in Nature showing that brain-training software was ineffective, I knew a storm was brewing. The paper was still under embargo at that point, so I was anxiously awaiting its release today. Slowly but surely, the mainstream media got wind of the paper, running headlines like “Brain Games Don’t Make You Smarter”. Then the blogosphere lit up, with ongoing chatter throughout the day on this controversial paper. I was stuck in the lab all day, and couldn’t put a post together, so I’m a little late to the party. But I wanted to give you a rundown of what exactly the study found, and point out a few intricacies of their findings.

When I began graduate school, there was a savvy postdoc in our lab who showed the newbies the ropes.  One of the best pieces of advice he offered was, “Don’t believe everything you read, and always check who did the study.”  I try to live by these words every time I read a study.

The group that submitted the Nature paper was led by a researcher named Adrian Owen, a professor at the MRC Cognition and Brain Sciences Unit in Cambridge, UK. Owen developed this brain-training program and study in collaboration with the BBC. A quick look at Owen’s PubMed listing shows he’s primarily known for using fMRI to prove that people in a persistent vegetative/minimally conscious state are, in fact, self-aware (a controversial field and a bold claim, which I’m not going to get into right now). But the point is: Owen isn’t an expert in brain plasticity or behavioral training-induced cognitive changes.

Making brain-training software isn’t a task you just jump into, and experts spend years proving and refining approaches in animal models.  But it appears that Owen woke up one day and suddenly decided he had the insight to figure out whether the cognitive benefits claimed by brain training software were true.

Even if we give Owen the benefit of the doubt, and assume he knows what he’s doing, not all brain-training programs are created equal. I try, whenever possible, to refrain from using the term “brain games”, because when training modules are created from sound preclinical and clinical research, they’re really much more than games. Owen and the BBC tested only their program, so the results simply say that their program doesn’t work. This finding does not generalize across the industry.

SharpBrains has the best rundown I’ve seen of what’s wrong with this report, including the nitty-gritty details showing that the participants in the Owen/BBC study used the brain training software for considerably less time than most programs.  Also, the training sessions were unsupervised, hence the participants were possibly prone to distraction.

While I’m moderately annoyed with the overreaching conclusions the authors made, I’m even more ticked at the mainstream media headlines.  We spend billions of dollars bringing drugs to market, and often things go wrong during drug trials.  Companies miss clinical endpoints, or worse, someone has an adverse event.  Yet, when this happens, I have to scour the net just to find a mention of the problem.  The brain training software industry is still in its infancy, and there will inevitably be bumps in the road.  But the truth is, these studies cost a fraction of what it takes to bring a drug to market, and despite what this rogue Nature paper says, have a huge potential to help millions of people.

Brian Mossop

New Biomarkers for Diabetes

Obesity (determined by BMI) and blood glucose levels are by far the best predictors of whether a person will develop diabetes. Yet doctors are always on high alert for new biomarkers that may be more sensitive indicators of which patients will develop diabetes in the near future. The idea of using biomarkers to predict diabetes is not entirely new. Glycated hemoglobin (HbA1C) values are now routinely being monitored to screen for at-risk patients. However, a new study in PLoS One shows that several new biomarkers in the blood may further our understanding of exactly who’s at risk for diabetes, and increase our knowledge of the etiology of the disease.

Veikko Salomaa and colleagues from the Department of Chronic Disease Prevention at the National Institute for Health and Welfare in Helsinki, Finland, tested nearly 13,000 people and found almost 600 cases of diabetes during routine follow-up exams.

According to the study, low levels of adiponectin, and high levels of apoB, C-reactive protein (CRP), and insulin, increase the chance that a woman will develop diabetes. When these factors were measured, correct diabetes prediction increased by 14% compared to when only classic risk factors, such as BMI and blood glucose levels, were used to predict disease.

The biomarkers that best predicted diabetes in men were low adiponectin, and high levels of CRP, interleukin-1 receptor antagonist, and ferritin. Accounting for these biomarkers led to a 25% increase in correct diabetes detection in the cohort.
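As a toy illustration of the underlying idea, here's a sketch in which a single biomarker cutoff catches cases that classic risk factors miss. All cutoffs and patient data below are invented, and the real study's 14% and 25% improvements came from proper statistical models, not simple threshold rules like these.

```python
# Each hypothetical patient: (BMI, fasting glucose mg/dL, CRP mg/L, developed_diabetes)

def classic_flag(bmi, glucose):
    """Flag high risk from classic factors alone (illustrative cutoffs)."""
    return bmi >= 30 or glucose >= 110

def extended_flag(bmi, glucose, crp):
    """Classic factors plus one biomarker, CRP (illustrative cutoff)."""
    return classic_flag(bmi, glucose) or crp >= 5.0

patients = [
    (32, 115, 2.0, True),   # caught by classic factors
    (27, 100, 6.5, True),   # missed by classic factors, caught by CRP
    (24,  90, 1.0, False),  # stayed healthy, flagged by neither
    (29, 105, 5.5, True),   # missed by classic factors, caught by CRP
]

cases = [p for p in patients if p[3]]  # those who actually developed diabetes
classic_hits = sum(classic_flag(b, g) for b, g, c, d in cases)
extended_hits = sum(extended_flag(b, g, c) for b, g, c, d in cases)
```

In this contrived cohort the biomarker rule catches two extra true cases, which is the shape of the improvement the authors report.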

Read the study here.

Thomas Goetz

When Less Care Is Better

David Leonhardt has a smart column in today's NYT that takes on the "more care is better" idea with some cold, hard facts. Leonhardt frames his story on the idea that we need to say "no" a lot more, starting with CT scans.

It’s not just CT scans. Caesarean births have become more common, with little benefit to babies and significant burden to mothers. Men who would never have died from prostate cancer have been treated for it and left incontinent or impotent. Cardiac stenting and bypasses, with all their side effects, have become popular partly because people believe they reduce heart attacks. For many patients, the evidence suggests, that’s not true.

Advocates for less intensive medicine have been too timid about all this. They often come across as bean counters, while the try-anything crowd occupies the moral high ground. The reality, though, is that unnecessary care causes a lot of pain and even death.

As the economics writer, Leonhardt focuses on keeping costs down, but there are benefits for individual patients here, too. It's worth remembering that a lot of this care comes in late-stage disease, when the margin for improving lives is slim - in fact, when late-stage interventions can often be detrimental to life.

We tend to think that throwing more resources at a problem can solve it, but just as that doesn't work in technology - see the myth of the man-month - so it can be ineffective in healthcare. But actually convincing people that they're wrong on this point - which we feel in our bones, despite the evidence - will take some work.

UPDATE: A nifty comment on the same article from Health Beat: "...An increased focus on learning to communicate risk and benefit effectively and by ramping up the patient’s role in decision-making will be far more important in reducing health care costs than learning to “say no.”"

Brian Mossop

Amazing Rats

I had the opportunity to write a post at the new blog of The Scientist magazine, "Naturally Selected".  The post is not about preventive medicine.  Rather, it taps into my neuroscience roots, and discusses the basis of intelligence in animals.  Here's an excerpt:

Ever since the size of our brains outgrew our closest animal relatives, we humans have declared ourselves far smarter than any other creatures in the animal kingdom.  But our big brains, and bigger egos, may underestimate the intelligence of other critters, simply because we’ve been asking the wrong questions. A study published in January in PLoS One shows that if we define intelligence not in terms of communication but in terms of problem-solving, then our animal brethren may be a lot smarter than we’ve given them credit for – starting with the rat.

Read the entire post here.

Brian Mossop

Slate Reports on Different Types of LDLs

As a follow-up to my post "The Truth About Cholesterol", here's a report from Slate showing that not all LDLs are created equal - some types are more dangerous than others.  Moreover, the article discusses how America's "War on Fat" steered us away from butter and lard but led us to an arguably more dangerous food, the refined carbohydrate.  Post your thoughts!

Brian Mossop

Screening HPV at Home

In Chapter 6 of The Decision Tree, "Screening for Everything", Thomas talks about the human papillomavirus (HPV), the virus that causes cervical cancer. Traditionally, doctors detected HPV by looking for irregular cells in the Pap smear. But now a cheap ($5) test can detect and analyze the DNA of the virus, determining whether it is the high- or low-risk type and thus the likelihood of a patient developing cervical cancer. One problem remains: you still have to get women into the clinic to be tested. However, a new study in the British Medical Journal shows that home testing is not only a reality, but may actually boost compliance rates. Roughly 28% of women using the home testing kit, which consisted of a simple cervicovaginal lavage, effectively screened themselves, while only about 17% of women required to go into the doctor's office for screening showed up.

The HPV DNA test primarily looks for the high-risk virus serotype, and the authors of this study claim that home screening kits have the same sensitivity as the doctor's protocol when specifically looking for the aggressive virus.
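Restating the compliance figures as a quick calculation shows how large the gap really is - the absolute and relative gains from home testing, and what they would mean for a hypothetical 1,000 invited women (the cohort size is an assumption for illustration, not the study's):

```python
home_uptake = 0.28      # fraction screened via the mailed home kit
clinic_uptake = 0.17    # fraction screened when a clinic visit was required

absolute_gain = home_uptake - clinic_uptake   # extra fraction of women screened
relative_gain = home_uptake / clinic_uptake   # home testing reached ~1.6x as many

# For a hypothetical 1,000 invited women, expected numbers screened:
home_screened = round(1000 * home_uptake)
clinic_screened = round(1000 * clinic_uptake)
```

That's 110 additional women screened per 1,000 invited, a roughly 65% relative increase in uptake.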

Special thanks to Lindsay Crouse for bringing this to my attention. In her email to me, she brilliantly summed up the significance of home HPV testing:

While screening has been tremendously successful in Western countries at reducing cervical cancer cases and deaths, the obstacle of reaching all women through screening remains. Currently, if a woman is to be screened for cervical cancer, she must visit a health care provider for a gynecological exam. If she is unable or reluctant to do that, whether due to transportation, cost, or comfort issues, she is less likely to get screened at all, and is consequently at increased risk for developing cervical cancer. More than half of such cancers are typically diagnosed in women who do not get screened regularly.

Thomas Goetz

The Paradox of Technology in Healthcare

One of the great humdingers in the current debate over healthcare reform is the double-edged role of technology in increasing costs. Sophisticated medical technologies save thousands of lives every year, giving us scans that spot tumors early and devices that keep our hearts beating and our blood flowing. But these miracle technologies come with a paradox. In nearly every sector of the economy, technology drives costs down - just as your digital camera gets cheaper and better every year, so technology drives down the cost of manufacturing, the cost of retailing, the cost of research. But for some reason, in healthcare, technology has the opposite effect; it doesn't cut costs, it raises them. In fact, medical technologies - from CT scans to stents to biologics - are a significant factor in the 10% annual growth rate of healthcare spending, a rate that's nearly triple the pace of inflation. (Overall, the US is now estimated to spend a stunning $2.7 trillion on healthcare in 2010.)

This was made clear once again last week, when a Massachusetts state audit found that healthcare costs rose 20% from 2006 to 2008, largely because of new imaging technologies. The single largest increase was for digital mammography, a new - and expensive - way to screen for breast cancer.

What's going on here? Why can't technology work its magic in healthcare, the way it does in the rest of the economy?

The answer boils down to what's called "scale" - the notion that technology, thanks to Moore's law and other exponential improvements, gets progressively cheaper, better and thus more accessible. Cheaper and faster chips, sensors and storage mean that digital technology is constantly scaling up and out, touching the lives of more people. These improvements in cost and power are the democratizing force that has propelled GPS from a military technology to a cellphone feature, and they're what helps Apple convince us to buy a new iPod every 18 months. Scalability is the secret sauce of the digital revolution.
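To put a rough number on that scaling argument: under Moore's-law-style improvement, capability per dollar doubles each period, so the cost of a fixed amount of computing halves on the same schedule. The two-year doubling period and dollar figures below are illustrative assumptions, not numbers from this post.

```python
def cost_after(years, initial_cost, doubling_period_years=2.0):
    """Cost of a fixed amount of computing after `years` of exponential improvement."""
    return initial_cost * 0.5 ** (years / doubling_period_years)

# A $1,000 workload after a decade of doubling every two years (5 halvings):
decade_cost = cost_after(10, 1000.0)
```

Five halvings take $1,000 of computing down to about $31 - the kind of collapse in cost that moved GPS from military hardware to a phone feature.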

Except in healthcare. In healthcare, technologies that scale are suspiciously hard to find. There's no lack of technology; it just doesn't seem to get cheaper and better at the same exponential rate as in the rest of the universe. This is especially strange because CT scans and pacemakers - to take two frequently blamed cost-generators - rely on the same digital technologies that are getting cheaper outside of healthcare.

There are a couple of reasons for this. For one thing, there's far too little price transparency in the medical technology market. Without an open marketplace of prices and services, it's difficult for hospitals and clinics to know whether there's a better deal elsewhere, and manufacturers can keep prices high. Second, and perhaps more significantly, medical technologies still tend to rely on an expert class to actually deploy them. GPS may have turned us all into amateur navigators, but CT scans haven't turned us into hobbyist radiologists. Those highly trained and expensive experts are still needed to put the technology to work, making it impossible to fully automate the process. The result is that the technology stays expensive to use, and costs keep going up.

At long last, though, that's changing, and scalable technologies are coming to healthcare. But there's a twist: instead of coming from your doctor or hospital, they're going straight to consumers. Digital monitoring tools like the Nike+ system, which uses a little accelerometer sensor in your running shoe, let people make more informed choices and pursue better health behaviors. And new online decision tools like LifeMath.net, a project of Harvard University's Laboratory for Quantitative Medicine, take advantage of cheap processing power to crunch data into personalized medical recommendations, making it far more relevant than generic advice (and thus much more likely to result in lasting change, addressing what doctors call "the compliance problem"). These and other tools use technology for what it's good at. They put the tools directly in our hands, and get us engaged in our health before we need the expertise of specialists.

In the world of insurance and care providers, some folks already understand this and are way ahead of Washington policymakers in tapping cheap technologies to improve healthcare. In Hawaii, Kaiser Permanente has started a pilot project that churns through its patient database to predict which patients might need which tests - and then sends individuals email alerts suggesting they come in for a test or checkup. It's the same sort of technology that Netflix uses to recommend movies. And the Cleveland Clinic has teamed up with Microsoft to bring self-monitoring tools to patients managing chronic diseases, successfully engaging them in better health behaviors without expensive visits to the hospital.

In the last century, medical technologies ably did their part to extend the life expectancy of the average American to nearly 80 years. It's time to reassess how we deploy technology in healthcare, and put the digital revolution to work not just for our entertainment, but for our health, too.

This is a cross-post from The Huffington Post.
