So I was speaking with some people in the genomics-information business this week (will disclose who in due time), and I asked them how they dealt with the constant onslaught of genetic discovery research, specifically the dozens of associations with disease that are published every week. (For instance: a quick Google news search turns up this one for Alzheimer's, this one for gastric cancer (warning: video plays), this one for "suicidal thoughts," this one for polycystic ovary syndrome, and so forth.) If you're hoping to make sense of all of this, I asked these fellows, how do you track it? Tagging? Filtering? Interns?
The answer: We don't bother. Their feeling was that most of these discoveries are one-offs, not scientifically significant (at least until they've been replicated by other research). So rather than try to sort the wheat from the chaff based on study parameters and so forth, they just ignore it all and wait for consensus to emerge. And here was the remarkable part: Of the thousands (two thousand? three thousand?) of genetic associations that have been published, they figure about 50 are actually established. Fifty. That's it. Not a great hit rate for science, really. But it makes sense - waiting for a finding to be reproduced before taking it seriously is basic science. And unless you have massive studies like this one in the new issue of Nature, which pulls together great arsenals of data (rather than looking at smaller cohorts), well, it's probably right to applaud the effort but ignore the results.