Asked 2 years ago in Data collection, Qualitative Methods by KevinDoump
In our team-based qualitative project, we're facing the classic challenge of inter-coder reliability. We've developed a codebook, but initial passes show worrying discrepancies. For those who have managed this process, what are the specific, actionable steps you take beyond just training to calibrate your team's interpretive lens, resolve disagreements, and ensure a consistent analytical framework is applied throughout the entire dataset?
All Answers (1)
By Tanya, answered 2 months ago
This is a process, not a one-time training event. I would recommend starting with a collaborative "calibration session": take a small data sample, have everyone code it independently, then meet to compare. Don't just vote; discuss why each coder applied their codes, using the data itself as evidence. That debate refines your codebook definitions with concrete examples. I have seen teams create an "anchor document" with prototypical excerpts for each code. Repeat this process periodically to correct for drift. Finally, use a portion of the data for a formal reliability check (e.g., Cohen's Kappa), but treat disagreements as fruitful opportunities to strengthen your shared analytical framework, not just as errors to be corrected.
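To make the formal reliability check concrete, here is a minimal sketch of computing Cohen's Kappa for two coders with scikit-learn. The coder names, code labels, and the agreement threshold mentioned in the comments are illustrative assumptions, not part of the answer above.

```python
# Minimal sketch: checking inter-coder agreement with Cohen's Kappa.
# Assumes two coders have each assigned one code per excerpt, recorded
# in the same order; the code labels below are hypothetical.
from sklearn.metrics import cohen_kappa_score

# Hypothetical codes applied by each coder to the same 10 excerpts
coder_a = ["barrier", "support", "support", "barrier", "coping",
           "coping", "barrier", "support", "coping", "barrier"]
coder_b = ["barrier", "support", "coping", "barrier", "coping",
           "coping", "support", "support", "coping", "barrier"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's Kappa: {kappa:.2f}")

# A common (though debated) rule of thumb treats values above roughly
# 0.6-0.8 as substantial agreement. Flag the excerpts where codes diverge
# and bring them to the next calibration session.
disagreements = [i for i, (a, b) in enumerate(zip(coder_a, coder_b)) if a != b]
print("Excerpts to discuss:", disagreements)
```

Note that Cohen's Kappa compares exactly two coders; teams with more coders often use Fleiss' Kappa or Krippendorff's alpha instead.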