The New York Times had a story on Friday criticizing the National Cancer Institute for its new decision tool on colon and rectal cancers. The problem, the reporter says, is that the tool - an interactive questionnaire that creates a risk estimate for developing colon cancer - only works for white people. African-Americans or Hispanics who try the tool get a message that says: "At this time the risk calculations and results provided by this tool are only accurate for non-Hispanic white men and women ages 50 to 85." It's an odd story for a couple of reasons. First, the reporter seems to have created the controversy on her own - she quotes only one critic of the tool, and that critic is described as reacting negatively "after being referred to the site by a reporter." The same reporter, I assume, who's writing the story. Hmm.
But the real problem with the critique is that it barely acknowledges the reason the tool only works for whites: the risk data built into the site come from research that studied only whites. In other words, the NCI used the existing epidemiology - the data that exists - which is based on a Caucasian population study. More research is being done on risks in other populations, but it's not yet substantial enough to support a valid decision tool.
I'm all for calling on the NCI to extend the tool and fund the science that will make it relevant to more people. But this is the way of all research - you take certain populations, which correlate in various ways to larger populations, and try to ascertain risk. Rare is the study that's so well funded and so well managed that it can handle the full spectrum of people in the US. So the science evolves slowly, piece by piece, and over time the broader population is covered. Yes, there is such a thing as disparities in health research - certain populations are regrettably understudied. But there's no indication in the Times story that that's the case with colon cancer.
So the Times story ends up criticizing the NCI for creating the tool at all because it's incomplete, entirely missing the forest for the trees: the great thing here is that such a tool exists in the first place. This is the sort of thing we should be encouraging the NCI and other health entities to do - show us the science, and show us how it's relevant, *as it emerges and as soon as it emerges*. These sorts of risk assessment tools are incredibly powerful ways for individuals to think about their health. They help us understand the great body of science in immediately personal terms, giving us perspective on how our decisions - in this case, how much exercise we get or how many vegetables we eat - affect our risk for developing cancer. This should be applauded and encouraged, not criticized for failing to emerge in an all-at-once exhaustive form.
Indeed, the one critique that I have about the NCI's tool - which you can see here - is that it doesn't make plain how your risk stands up against other people's, nor does it make plain what sort of changes could reduce your risk. When I played around with a worst-case scenario for me - only some exercise and not many vegetables - it gave me a lifetime risk of about 6%. But without the context of a general population, I have no idea whether that's high or low. And when I fiddle with the numbers and say I take aspirin and eat lots of vegetables and get lots of exercise, my lifetime risk drops to 1.6%. Much better, but I had to guess at which variables to change - meaning I had to guess at what changes to make to my life to improve my odds. If the NCI automated these functions and let me know where I stood and what I might consider changing, the tool would be a lot more potent.
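The kind of automation I'm imagining is simple to sketch. Here's a toy version in Python; every number and factor name below is hypothetical, invented purely for illustration - none of it comes from the NCI's actual model. The point is the shape of the feature: compare the user's risk to a population baseline, then run a quick sensitivity check over the modifiable factors and report which change would help most.

```python
# A toy sketch of what an automated "where do I stand?" feature could do.
# All numbers and factor names are hypothetical, NOT the NCI's actual model.

POPULATION_AVG_RISK = 0.045  # hypothetical lifetime average for the cohort

# Hypothetical multiplicative effects of modifiable factors on baseline risk:
# adopting a factor multiplies your risk by the value shown.
FACTOR_EFFECTS = {
    "regular_exercise": 0.75,
    "high_vegetable_diet": 0.85,
    "daily_aspirin": 0.80,
}

def lifetime_risk(baseline, factors):
    """Apply the multiplier of each adopted factor to the baseline risk."""
    risk = baseline
    for name, adopted in factors.items():
        if adopted:
            risk *= FACTOR_EFFECTS[name]
    return risk

def report(baseline, factors):
    """Show risk vs. the population average, then rank single changes."""
    current = lifetime_risk(baseline, factors)
    print(f"Your risk: {current:.1%} (population average: {POPULATION_AVG_RISK:.1%})")
    # Sensitivity analysis: for each factor not yet adopted,
    # how much would adopting it alone reduce the user's risk?
    for name, adopted in factors.items():
        if not adopted:
            changed = dict(factors, **{name: True})
            saved = current - lifetime_risk(baseline, changed)
            print(f"  Adopting {name} would cut your risk by {saved:.2%}")

# A user like my worst-case scenario: 6% lifetime risk, nothing adopted yet.
report(0.06, {"regular_exercise": False,
              "high_vegetable_diet": False,
              "daily_aspirin": False})
```

That second loop is the piece the real tool lacks: instead of my fiddling with inputs by hand, it tells me directly which lever moves my number the most.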