I think one of the most subtly important parts of a journal paper is the methods section. Too many people fill up a methods section with content that is technically appropriate, but doesn’t consider the audience and doesn’t provide much valuable insight into the way the study was completed. Spurious content that tends to fill up methods sections:

  • The fact that the study was IRB approved – Fine to include if a reviewer insists, but this is standard practice for every study, so we would just assume it is true already.
  • Gift card reimbursement process and dollar amount – If anyone is really all that interested in this, a simple statement that gift card reimbursement was used is sufficient; no one cares about the details.
  • Checkbox discussions around quality, trustworthiness, credibility – This is probably a more controversial take, but I think these types of quality criteria are important for your own internal process check and relatively boring for a reader if they become long-winded. Discussing the quality criteria is a performative act to show you understand and are in the know about research quality; it doesn't, in and of itself, create high-quality research.
  • Filler standard content about semi-structured interviews and axial coding – At least in engineering education this sort of process is now so standard it doesn’t need more than a few sentences and indications of process.
  • Overly long citations of methodologies and explanations of the rules of methodologies that are already well known to the community – If no one is second guessing the appropriateness of the method, you can simply state the methodology and move on to what you really did.
  • Stock limitations section content – If what you write for limitations could be copy-pasted from any other qualitative study, it probably doesn't need to be there.

Again, I think this sort of content creates methods sections that are technically appropriate but do not actually explain the study. Instead, consider how you might explain the study if you were in conversation with a typical reader of your journal. Where would you start?

I came up with this approach of an "honest methods section" because I often felt my methods were emergent and iteratively constructed rather than following rules and lists of criteria. To explain something that draws on a few methodological traditions, breaks a few rules of those traditions and/or of the discipline, and tries to argue for the importance of a new way of doing things, I think it's better to just explain what I did and why, rather than resting on lofty vocabulary or filler content to appear smart and methods-y.

An example study where I felt I did an "honest methods section" was Supporting the Narrative Agency of a Marginalized Engineering Student. I describe continuity with narrative methods and bell hooks' critical theorizing, and then I describe how the study came to be. I recall the reviewers had a lot of methods questions asking how exactly the study "worked," so each time I came back for a round of revisions I would write an additional methods clarification as if I were explaining what I did in conversation. When asked to explain the process of establishing rapport across gender and racial difference, I did so in the discussion in a limited way that stayed true to how I thought the process had worked: "Their identity and position differences were neither a panacea nor a barrier—there were moments when this identity difference made Emilia self-conscious about something she was sharing, and there were times when the identity and power differential led to feeling validated and listened to by someone in a more dominant group" (p. 211). The paper shifts between authorial voices depending on the content (third-person description of myself and my coauthor, Emilia, in that passage; elsewhere, first-person discussion of positionality and impact).

Some suggestions for creating an honest methods section:

  • You want to moderate your use of specialized methods vocabulary and think about whether any key terms are important enough to need a definition. There is a fine line between education and alienation—if you teach a reader some new terms and concepts, they will appreciate you; if you use foreign terms and concepts too liberally without explanation, they may stop reading, give up, or feel alienated.
  • This first point doesn’t mean you dumb down your method. You want to stay true to the method such that experts in the method recognize and don’t disagree with what you’ve written. But that should still be possible within a spirit of clarity, transparency, and invitation toward the full range of readers.
  • Be detailed but purposeful with the details. If it’s good enough to say the interviews ranged from 60-90 minutes, don’t bore everyone with exhaustive details for details’ sake: “the first two interviews were more than 90 minutes, the next interview was only 60 minutes due to schedule constraints, and the final interview was 75 minutes.” Think about what details you are including and why they might matter. And if possible say why they matter directly: “The 60-90 minute interviews allowed for the flexibility to build on the participants’ ideas and establish rapport, while respecting and being efficient with their time.”

And here’s some suggested content or reflection questions for the honest methods section:

  • Methodological traditions – Who or what prior work inspired the study? What methodological traditions does the work resonate with? Have others in the field traditionally used that methodology for similar purposes and with similar research design? If not, how did you think about drawing on the tradition and adapting it? To what extent and in what ways do you say you are conducting research within the methodology versus inspired by it?
  • Emergence of the research purpose / design – Oftentimes in my work there is some sort of emergent research purpose. I do not usually know exactly what I will find on day 1 or the best way to find it. I often like to explain the emergence of the research purpose rather than simply stating research questions that may not represent questions I had in mind on day 1. Now that I write a lot of NSF grants, this is still true. I have acquiesced to basically always making sure there are clear research questions in the NSF grant, but there may then be a much more emergent process for finding the research question of a paper that comes out of it. Getting to write a paragraph explaining that process helps me feel I’m being honest and clear regarding the study design.
  • Positionality – I’m always a fan of including some sort of positionality reflection, though having written a lot on positionality to date, I would now usually try to trim it to the content that is most relevant or that seems to clear things up the most for readers and reviewers. If I am fairly confident my identity as a white man (etc.) has influenced what I saw in an observation or how my participants related to me, I try to identify this and say so. I don’t overly apologize for or celebrate this; it’s just a matter-of-fact acknowledgement to help give the reader context.
  • Data collection details – Fine to list standard rules, but think about listing out data collection details with an eye towards reproducibility of the study, or translation of the research method to new contexts. What would a reader need to know to try to repeat your study or to conduct something similar? What were some key decisions you made? Maybe it was a study where a key nuance was how you built on the ideas that students brought up in interviews—what type of strategy or talk moves did you use? Maybe it was a study with an observation protocol or video analysis—how did you decide what was noteworthy or where to look when? These are just snapshots of your entire method, of course, but they help your work come alive and help it have educational value to the reader.
  • Analysis details – I think this is so often done poorly (e.g., stating you did axial coding to look for emergent themes: a technical answer that illuminates nothing). Instead, help us understand the actual intellectual work that led to the findings section. If you went from many participants’ data down to a focus on a few, how did you decide that? When looking at example data, what did you look for? Why did you decide to organize the findings the way you did? What should readers expect as they read the findings?
  • Not limitations / Rationale throughout – Instead of a limitations section, I would suggest you have a careful and thoughtful rationale throughout each of the above sections. You can imagine talking to someone (a skeptic, a person unfamiliar with the method, a reviewer) and them interrupting you to ask a question (it could be a more skeptical devil’s advocate question, or it could just be a point of confusion). How would you pause, acknowledge the limitation or question, and answer or clarify it in a way that supports the rationale and case for the study? I think cautious qualification and embedded rationale are more helpful than broad overreach followed by a limitations section of caveats.

That’s my approach. You can read some of my papers for more examples.

Are methods sections that don’t follow this more conversational approach dishonest? In some ways I think they are—I think they pretend that the content shared is actually the way the study was conducted when it is not. They pretend to be divulging the important intellectual work of a study, when they are not. They give an appearance of certainty and quality assurance, when their own cautious rationale would better represent the reality of their intellectual work.

A friend of mine once said to me that all qualitative research analysis sections really amount to “I thought about it a while and this is what I found.” Although you’ll want to write more details and surface those implicit decisions and thinking processes, remember that at its heart your method is probably fairly simple and isn’t that hard to explain to a newcomer without jargon.