Even so, I will still post from time to time.
Here's a piece from Mark's Daily Apple on 15 reasons not to trust the latest nutritional studies.
In my opinion, it's not just the newest nutritional studies we should take with a grain of salt; it's every kind of new study, as well as the latest political polls. They usually have a hidden agenda and some kind of axe to grind. Take it away, Mark:
Today, I’m going to discuss many of the reasons you shouldn’t trust the latest nutritional study without looking past the headlines.

1. Industry distorts the research. Last year, Marion Nestle looked at 152 industry-funded nutrition studies. Of those 152, 140 reported results favorable to the company that funded them. An earlier analysis of milk, soda, and fruit juice nutrition studies found that those sponsored by milk, soda, and juice companies were far more likely to report favorable results than independent studies. The same thing happens in cardiovascular disease trials and orthopedics trials.

2. Ego distorts the research. People become wedded to their theories. Imagine spending 30 years conducting research to support your idea that saturated fat causes heart disease. How hard will you hold on to that hypothesis? How devastating would opposing evidence be to your sense of self-worth? Your research is your identity. It’s what you do. It’s how you respond when chitchatting at cocktail parties. You’re the “saturated fat” guy. Everything’s riding on it being true. Scientists are used to being the smartest person in their respective rooms. It’s not easy to relinquish that or admit mistakes. Heck, that goes for everyone in the world. Scientists are not immune.

3. Correlation masquerading as causation. In day-to-day life, correlated events imply causation. You cut someone off, they honk at you. A man holds a door open, you thank him. You flip a light switch, the light turns on. We’re used to causation explaining correlations. So when two variables are presented together in a nutritional study, especially when the link seems plausible (meat causes colon cancer) or reaffirms popular advice (saturated fat causes heart disease), we’re likely to assume the relationship is causal. Correlations provoke interesting hypotheses and tests of those hypotheses, but they’re very often spurious. Everything we eat is associated with cancer if we look hard enough. Does that actually tell us anything useful? (The first little simulation after this list shows how easily chance alone produces “strong” correlations.)

4. No control group. If you want to know the effects of an experimental intervention, you need a group of people who don’t receive the intervention. That’s the control group. Without a control group to compare against the group that received the experimental intervention, a clinical trial doesn’t mean much. You can’t truly know that the experimental variable caused the change without it.

5. Fake controls. The presence of a control group doesn’t make it a good study. The control group has to be a real control. Take this paper from last year claiming that oatmeal for breakfast promotes satiety. Sure, when you’re comparing oatmeal to a cornflake breakfast. It doesn’t take much to beat the satiating (non)effects of cornflakes. How would oatmeal compare to bacon and eggs, or a big-ass salad, or sweet potato hash? This study doesn’t tell you that.

6. Small sample size. The smaller the sample size, the less impressive the results. The larger the sample size, the more meaningful the results and the more likely they are to apply to the larger population. That’s why the results of n=1 self-experiments are mostly useful for the person running the experiment on themselves and less useful for others; a sample size of one isn’t enough to generalize the results. (The second sketch after this list shows how noisy tiny samples really are.)

... Read the whole thing.
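For the statistically inclined, here is a minimal Python sketch of point 3. All the numbers in it (20 people, 200 food variables, the |r| > 0.4 cutoff) are made up for illustration, not taken from any real study. The idea is simply that if you screen enough unrelated "foods" against an unrelated "outcome," some of them will correlate strongly by pure chance, which is why a lone correlation headline tells you very little.

```python
import random
import statistics

# Toy illustration of point 3: screen many unrelated "food" variables
# against an unrelated "outcome" and some correlate by chance alone.
# All sample sizes and thresholds here are hypothetical.

random.seed(7)

n_people = 20   # made-up size of a small questionnaire sub-sample
n_foods = 200   # made-up number of food variables screened

outcome = [random.random() for _ in range(n_people)]  # pure noise

def correlation(xs, ys):
    """Plain Pearson correlation, no third-party libraries."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

strong = 0
for _ in range(n_foods):
    food = [random.random() for _ in range(n_people)]  # no real link to outcome
    if abs(correlation(food, outcome)) > 0.4:
        strong += 1

print(f"{strong} of {n_foods} random 'foods' correlate with the outcome at |r| > 0.4")
```

Run it and a handful of those 200 random variables will clear the cutoff, even though none of them has anything to do with the outcome.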
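And a second minimal sketch, again in Python with made-up numbers, for point 6. Both groups are drawn from the exact same population, so any "difference" between them is noise. With a handful of people per group, large apparent effects show up all the time; with hundreds per group, they almost never do.

```python
import random
import statistics

# Toy illustration of point 6: two groups drawn from the SAME population
# (no real effect), compared at different sample sizes. The population
# mean/SD and the 5-point threshold are hypothetical.

random.seed(42)

def fake_trial(n_per_group):
    """Return the apparent difference in means between two identical groups."""
    population_mean, population_sd = 100.0, 15.0
    group_a = [random.gauss(population_mean, population_sd) for _ in range(n_per_group)]
    group_b = [random.gauss(population_mean, population_sd) for _ in range(n_per_group)]
    return statistics.mean(group_a) - statistics.mean(group_b)

for n in (5, 50, 500):
    diffs = [abs(fake_trial(n)) for _ in range(1000)]
    big = sum(d > 5 for d in diffs)  # "differences" larger than 5 points
    print(f"n={n:3d} per group: {big}/1000 trials showed a difference > 5 by chance alone")
```

The tiny groups produce big spurious differences in a large share of trials, while the 500-per-group runs essentially never do, which is the whole argument for distrusting dramatic results from small studies.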
1 comment:
Very interesting read on how the unproven link between dietary cholesterol and heart disease became part of the nutritional culture.
http://www.theguardian.com/society/2016/apr/07/the-sugar-conspiracy-robert-lustig-john-yudkin?