September 5, 2025
Common Pitfalls in the Results Section and How to Prevent Them
The path to completing your dissertation's results chapter is paved with potential missteps that can weaken months of meticulous research. Even with a solid plan, it is surprisingly easy to fall into habits that reduce the impact of your findings or, worse, render them invalid. Many of these mistakes are not technical but rhetorical in nature, stemming from an unclear grasp of the chapter's core purpose. This guide details the most frequent blunders students encounter and provides a clear strategy for navigating around them, so that your analysis is clear, credible, and methodologically sound.
1. The Cardinal Sin: Mixing Results with Discussion
This is, without a doubt, the most frequent mistake in dissertation writing. The Results chapter and the Discussion chapter serve distinctly different purposes.
The Pitfall: Interpreting your findings in the Results chapter, using language like "This suggests that..." or "This surprising finding is probably because..."
Why It's a Problem: It confuses the reader and weakens your argument by failing to present a clean separation between objective data and author analysis.
The Prevention Strategy: Adopt a "just the facts" mentality. Your Results chapter should only answer "What did I find?" Use neutral reporting verbs like "the results indicated," "the data showed," or "a significant difference was observed." Save the "What does this mean?" for the Discussion chapter.
2. The Kitchen Sink Approach: Including Everything
Another common error is to report every single piece of output you generated, whether it answers a research question or not.
The Pitfall: Dumping pages of exploratory analyses that do not speak to your research questions or hypotheses.
Why It's a Problem: It overwhelms and bores the reader, obscuring the truly important findings. It shows a lack of editing and can make it seem like you are fishing for significance rather than testing a hypothesis.
The Prevention Strategy: Let your research questions be your filter. Before including any result, ask: "Does this directly help answer one of my research questions?" If the answer is no, exclude it.
3. Ignoring the Null: Hiding Non-Significant Results
The pressure to find significant results is immense, but science requires full transparency.
The Pitfall: Omitting tests that yielded non-significant results. This mirrors the "file drawer problem," in which only studies with positive results are published, distorting the scientific record.
Why It's a Problem: It is methodologically dishonest and presents an inaccurate picture of your research process. A non-significant result is still a valuable finding that tells you something important (e.g., "there is no evidence of a relationship between X and Y").
The Prevention Strategy: Report all tests related to your hypotheses. State non-significant results in the same neutral tone as significant ones. Example: "The independent-samples t-test revealed no statistically significant difference in scores between the control and experimental groups (t(58) = 1.45, p = .154)."
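If you run your analyses in software such as Python, this habit of neutral reporting can be built into your workflow. The sketch below is purely illustrative and uses made-up data: it runs an independent-samples t-test with SciPy and prints the result in the same reporting format whether or not p falls below .05.

```python
# Illustrative sketch only: hypothetical data, neutral reporting either way.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=50, scale=10, size=30)        # hypothetical control-group scores
experimental = rng.normal(loc=53, scale=10, size=30)   # hypothetical experimental-group scores

t_stat, p_value = stats.ttest_ind(control, experimental)
df = len(control) + len(experimental) - 2

# Report the same way regardless of whether the result is significant.
print(f"t({df}) = {t_stat:.2f}, p = {p_value:.3f}")
```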
4. Misinterpreting Correlation and Causation
This is a fundamental error of data interpretation that can completely invalidate your conclusions.
The Pitfall: Assuming that because two variables are correlated, one must cause the other. For example, "The study found that ice cream sales cause drownings" (when in reality, both are caused by a third variable: hot weather).
Why It's a Problem: It reveals a critical misunderstanding of research logic. Causation can only be confidently inferred from an experimental design with random assignment.
The Prevention Strategy: Always use precise wording. Use phrases like "associated with," "linked to," "correlated with," or "predicted." Only use "cause" or "effect" if your study design was a true experiment.
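To make the wording concrete, here is a purely illustrative sketch using invented monthly figures for the ice cream example above: it computes a Pearson correlation with SciPy and phrases the finding as an association, not a causal claim.

```python
# Illustrative sketch only: invented figures for the ice cream example.
from scipy import stats

ice_cream_sales = [120, 150, 200, 260, 310, 400]   # hypothetical monthly sales
drowning_incidents = [2, 3, 4, 6, 7, 9]            # hypothetical monthly counts

r, p = stats.pearsonr(ice_cream_sales, drowning_incidents)

# Correct phrasing: "associated with", never "causes".
print(f"Ice cream sales were positively associated with drowning incidents, "
      f"r = {r:.2f}, p = {p:.3f}")
```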
5. Lack of Connection: Failing to Link Back to Your Literature Review
Your dissertation is a unified narrative, not a series of disconnected chapters.
The Pitfall: Presenting your results as a set of disconnected facts without connecting them back to the previous studies you outlined in your theoretical framework.
Why It's a Problem: It fails to establish the context needed to start building your argument. The reader is left wondering how your results confirm, extend, or contradict the existing body of knowledge.
The Prevention Strategy: While full interpretation is for the Discussion chapter, you can still make a connection in the Results. Use comparative language like:
"Consistent with the work of Smith (2020), the results showed..."
"Contrary to the hypothesis derived from Theory X, the analysis revealed..."
"This finding aligns with the proposed model..."
This sets the stage for the deeper discussion to come.
6. Ineffective Tables and Figures
Badly designed tables and figures can make even the clearest results incomprehensible.
The Pitfall: Overly complex tables and cluttered figures that obscure or distort the data.
Why It's a Problem: Visuals should aid understanding, not create more work. Poor visuals frustrate the reader and can lead to confusion.
The Prevention Strategy:
Ensure every visual is numbered and has a concise caption.
Keep tables and graphs minimalist. Avoid unnecessary colors.
Choose the correct visual for the message (e.g., bar charts for comparisons, line graphs for trends over time); a brief sketch follows this list.
Always refer to the visual in the text before it appears.
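For illustration only, the sketch below (with hypothetical group means and a made-up file name) shows the kind of minimalist, numbered, captioned bar chart this advice points to, produced here with Matplotlib.

```python
# Illustrative sketch only: hypothetical group means and standard errors.
import matplotlib.pyplot as plt

groups = ["Control", "Experimental"]
means = [68.2, 74.5]    # hypothetical group means
errors = [2.1, 2.4]     # hypothetical standard errors

fig, ax = plt.subplots(figsize=(4, 3))
ax.bar(groups, means, yerr=errors, color="grey", capsize=4)   # no decorative colour
ax.set_ylabel("Mean test score")
ax.set_title("Figure 1. Mean test scores by group")           # numbered, concise caption
fig.tight_layout()
fig.savefig("figure1_group_means.png", dpi=300)               # hypothetical file name
```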
7. Violating Test Assumptions
Every statistical test comes with a set of underlying assumptions that must be met for its results to be valid.
The Pitfall: Running a regression without first checking that your data meet the necessary assumptions (e.g., linearity, normality of residuals, homoscedasticity).
Why It's a Problem: If the assumptions are not met, the results of the test are potentially misleading. Your p-values and confidence intervals cannot be trusted.
The Prevention Strategy: Before running any primary test, conduct assumption checks. This is a critical part of your analysis. If assumptions are violated, use an alternative test (e.g., Mann-Whitney U test instead of an independent-samples t-test) or transform your data.
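As one illustrative way to build this check into an analysis (again with invented data), the sketch below tests normality with a Shapiro-Wilk test and falls back to the Mann-Whitney U test when the assumption is violated.

```python
# Illustrative sketch only: invented data; assumption check before the main test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
control = rng.normal(loc=50, scale=10, size=30)
experimental = rng.exponential(scale=10, size=30) + 45   # deliberately skewed

def is_normal(sample, alpha=0.05):
    # Shapiro-Wilk test of normality: a p-value above alpha means
    # no evidence against normality.
    _, p = stats.shapiro(sample)
    return p > alpha

if is_normal(control) and is_normal(experimental):
    t_stat, p_value = stats.ttest_ind(control, experimental)
    print(f"Independent-samples t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
else:
    u_stat, p_value = stats.mannwhitneyu(control, experimental)
    print(f"Mann-Whitney U test: U = {u_stat:.1f}, p = {p_value:.3f}")
```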
Final Thoughts
Avoiding these common pitfalls is not about memorizing rules but about adopting a mindset of precision, objectivity, and intellectual honesty. Your data analysis chapter is the evidentiary core of your dissertation; its credibility is paramount. By focusing on relevant findings, reporting all outcomes, connecting to your literature, and respecting statistical assumptions, you transform the chapter from a simple report of numbers into a compelling, convincing, and scholarly presentation of your research. This meticulous approach pays immense dividends in the overall impact of your work.