InterviewPrep

Top 20 Qualitative Research Interview Questions & Answers

Master your responses to Qualitative Research-related interview questions with our example questions and answers. Boost your chances of landing the job by learning how to effectively communicate your Qualitative Research capabilities.


Diving into the intricacies of human behavior, thoughts, and experiences is the lifeblood of qualitative research. As a professional in this nuanced field, you are well-versed in the art of gathering rich, descriptive data that can provide deep insights into complex issues. Now, as you prepare to take on new challenges in your career, it’s time to demonstrate not only your expertise in qualitative methodologies but also your ability to think critically and adapt to various research contexts.

Whether you’re interviewing for an academic position, a role within a market research firm, or any other setting where qualitative skills are prized, being prepared with thoughtful responses to potential interview questions can set you apart from other candidates. In this article, we will discuss some of the most common questions asked during interviews for qualitative research roles, offering guidance on how best to articulate your experience and approach to prospective employers.

Common Qualitative Research Interview Questions

1. How do you ensure the credibility of your data in qualitative research?

Ensuring credibility in qualitative research is crucial for the trustworthiness of the findings. By asking about methodological rigor, the interviewer is assessing a candidate’s understanding of strategies such as triangulation, member checking, and maintaining a detailed audit trail, which are essential for substantiating the integrity of qualitative data.

When responding to this question, you should articulate a multi-faceted approach to establishing credibility. Begin by highlighting your understanding of the importance of a well-defined research design and data collection strategy. Explain how you incorporate methods like triangulation, using multiple data sources or perspectives to confirm the consistency of the information obtained. Discuss your process for member checking—obtaining feedback on your findings from the participants themselves—to add another layer of validation. Mention your dedication to keeping a comprehensive audit trail, documenting all stages of the research process, which enables peer scrutiny and adds to the transparency of the study. Emphasize your ongoing commitment to reflexivity, where you continually examine your biases and influence on the research. Through this detailed explanation, you demonstrate a conscientious and systematic approach to safeguarding the credibility of your qualitative research.

Example: “To ensure the credibility of data in qualitative research, I employ a rigorous research design that is both systematic and reflective. Initially, I establish clear protocols for data collection, which includes in-depth interviews, focus groups, and observations, ensuring that each method is well-suited to the research questions. To enhance the validity of the findings, I apply triangulation, drawing on various data sources, theoretical frameworks, and methodologies to cross-verify the information and interpretations.

During the analysis phase, member checking is a critical step, where I return to participants with a summary of the findings to validate the accuracy and resonance of the interpreted data with their experiences. This not only strengthens the credibility of the results but also enriches the data by incorporating participant insights. Furthermore, I maintain a comprehensive audit trail, meticulously documenting the research process, decisions made, and data transformations. This transparency allows for peer review and ensures that the research can be followed and critiqued by others in the field.

Lastly, reflexivity is integral to my practice. I continuously engage in self-reflection to understand and articulate my biases and assumptions and how they may influence the research process. By doing so, I can mitigate potential impacts on the data and interpretations, ensuring that the findings are a credible representation of the phenomenon under investigation.”

2. Describe a situation where you had to adapt your research methodology due to unforeseen challenges.

When unexpected variables arise, adaptability in research design is vital to maintain the integrity and validity of the study. This question seeks to assess a candidate’s problem-solving skills, flexibility, and resilience in the face of research challenges.

When responding, share a specific instance where you encountered a challenge that impacted your research methodology. Detail the nature of the challenge, the thought process behind your decision to adapt, the steps you took to revise your approach, and the outcome of those changes. Emphasize your critical thinking, your ability to consult relevant literature or peers if necessary, and how your adaptability contributed to the overall success or learning experience of the research project.

Example: “In a recent qualitative study on community health practices, I encountered a significant challenge when the planned in-person interviews became unfeasible due to a sudden public health concern. The initial methodology was designed around face-to-face interactions to capture rich, detailed narratives. However, with participant safety as a priority, I quickly pivoted to remote data collection methods. After reviewing relevant literature on virtual qualitative research, I adapted the protocol to include video conferencing and phone interviews, ensuring I could still engage deeply with participants. This adaptation required a reevaluation of our ethical considerations, particularly around confidentiality and informed consent in digital formats.

The shift to remote interviews introduced concerns about potential biases, as the change might exclude individuals without access to the necessary technology. To mitigate this, I also offered the option of asynchronous voice recordings or email responses as a means to participate. This inclusive approach not only preserved the integrity of the study but also revealed an unexpected layer of data regarding digital literacy and access in the community. The study’s findings were robust, and the methodology adaptation was reflected upon in the final report, contributing to the discourse on the flexibility and resilience of qualitative research in dynamic contexts.”

3. What strategies do you employ for effective participant observation?

For effective participant observation, a balance between immersion and detachment is necessary to gather in-depth understanding without influencing the natural setting. This method allows the researcher to collect rich, contextual data that surveys or structured interviews might miss.

When responding to this question, highlight your ability to blend in with the participant group to minimize your impact on their behavior. Discuss your skills in active listening, detailed note-taking, and ethical considerations such as informed consent and maintaining confidentiality. Mention any techniques you use to reflect on your observations critically and how you ensure that your presence does not alter the dynamics of the group you are studying. It’s also effective to provide examples from past research where your participant observation led to valuable insights that informed your study’s findings.

Example: “In participant observation, my primary strategy is to achieve a balance between immersion and detachment. I immerse myself in the environment to gain a deep understanding of the context and participants’ perspectives, while remaining sufficiently detached to observe and analyze behaviors and interactions objectively. To blend in, I adapt to the cultural norms and social cues of the group, which often involves a period of learning and adjustment to minimize my impact on their behavior.

Active listening is central to my approach, allowing me to capture the subtleties of communication beyond verbal exchanges. I complement this with meticulous note-taking, often employing a system of shorthand that enables me to record details without disrupting the flow of interaction. Ethically, I prioritize informed consent and confidentiality, ensuring participants are aware of my role and the study’s purpose. After observations, I engage in reflexive practice, critically examining my own biases and influence on the research setting. This reflexivity was instrumental in a past project where my awareness of my impact on group dynamics led to the discovery of underlying power structures that were not immediately apparent, significantly enriching the study’s findings.”

4. In what ways do you maintain ethical standards while conducting in-depth interviews?

Maintaining ethical standards during in-depth interviews involves respecting participant confidentiality, ensuring informed consent, and being sensitive to power dynamics. Ethical practice in this context is not only about adhering to institutional guidelines but also about fostering an environment where interviewees feel respected and understood.

When responding to this question, it’s vital to articulate a clear understanding of ethical frameworks such as confidentiality and informed consent. Describe specific strategies you employ, such as anonymizing data, obtaining consent through clear communication about the study’s purpose and the participant’s role, and ensuring the interviewee’s comfort and safety during the conversation. Highlight any training or certifications you’ve received in ethical research practices and give examples from past research experiences where you navigated ethical dilemmas successfully. This approach demonstrates your commitment to integrity in the research process and your ability to protect the well-being of your subjects.

Example: “Maintaining ethical standards during in-depth interviews is paramount to the integrity of the research process. I ensure that all participants are fully aware of the study’s purpose, their role within it, and the ways in which their data will be used. This is achieved through a clear and comprehensive informed consent process. I always provide participants with the option to withdraw from the study at any point without penalty.

To safeguard confidentiality, I employ strategies such as anonymizing data and using secure storage methods. I am also attentive to the comfort and safety of interviewees, creating a respectful and non-threatening interview environment. In situations where sensitive topics may arise, I am trained to handle these with the necessary care and professionalism. For instance, in a past study involving vulnerable populations, I implemented additional privacy measures and worked closely with an ethics review board to navigate the complexities of the research context. My approach is always to prioritize the dignity and rights of the participants, adhering to ethical guidelines and best practices established in the field.”

5. How do you approach coding textual data without personal biases influencing outcomes?

When an interviewer poses a question about coding textual data free from personal biases, they are probing your ability to maintain objectivity and adhere to methodological rigor. This question tests your understanding of qualitative analysis techniques and your awareness of the researcher’s potential to skew data interpretation.

When responding, it’s essential to articulate your familiarity with established coding procedures such as open, axial, or thematic coding. Emphasize your systematic approach to data analysis, which might include multiple rounds of coding, peer debriefing, and maintaining a reflexive journal. Discuss the importance of bracketing your preconceptions during data analysis and how you would seek to validate your coding through methods such as triangulation or member checking. Your answer should convey a balance between a structured approach to coding and an openness to the data’s nuances, demonstrating your commitment to producing unbiased and trustworthy qualitative research findings.

Example: “In approaching textual data coding, I adhere to a structured yet flexible methodology that mitigates personal bias. Initially, I engage in open coding to categorize data based on its manifest content, allowing patterns to emerge organically. This is followed by axial coding, where I explore connections between categories, and if applicable, thematic coding to identify overarching themes. Throughout this process, I maintain a reflexive journal to document my thought process and potential biases, ensuring transparency and self-awareness.

To ensure the reliability of my coding, I employ peer debriefing sessions, where colleagues scrutinize my coding decisions, challenging assumptions and offering alternative interpretations. This collaborative scrutiny helps to counteract any personal biases that might have crept into the analysis. Additionally, I utilize methods such as triangulation, comparing data across different sources, and member checking, soliciting feedback from participants on the accuracy of the coded data. These strategies collectively serve to validate the coding process and ensure that the findings are a credible representation of the data, rather than a reflection of my preconceptions.”
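Peer debriefing of the kind described above is sometimes paired with a simple quantitative check of intercoder agreement. The sketch below is a hypothetical illustration, not part of the answer itself: it computes Cohen’s kappa, a standard agreement statistic, over invented code labels that two coders assigned to the same six excerpts.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: chance-corrected agreement between two coders
    who each assigned one code label to the same sequence of excerpts."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed proportion of excerpts where the coders agree
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Agreement expected by chance, from each coder's label frequencies
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Invented example labels for six interview excerpts
a = ["coping", "stigma", "coping", "support", "stigma", "coping"]
b = ["coping", "stigma", "support", "support", "stigma", "coping"]
print(round(cohens_kappa(a, b), 2))  # → 0.75
```

A kappa well below the raw percent agreement signals that much of the apparent consensus could arise by chance, which is exactly the kind of finding worth raising in a debriefing session.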

6. What is your experience with utilizing grounded theory in qualitative studies?

Grounded theory is a systematic methodology that operates almost in reverse of traditional hypothesis-driven research: rather than testing a predetermined theory, the researcher lets a theory emerge from iterative cycles of data collection and analysis. Employers ask about your experience with grounded theory to assess your ability to conduct research that is flexible and adaptable to the data.

When responding, you should outline specific studies or projects where you’ve applied grounded theory. Discuss the nature of the data you worked with, the process of iterative data collection and analysis, and how you developed a theoretical framework as a result. Highlight any challenges you faced and how you overcame them, as well as the outcomes of your research. This will show your practical experience and your ability to engage deeply with qualitative data to extract meaningful theories and conclusions.

Example: “In applying grounded theory to my qualitative studies, I have embraced its iterative approach to develop a theoretical framework grounded in empirical data. For instance, in a project exploring the coping mechanisms of individuals with chronic illnesses, I conducted in-depth interviews and focus groups, allowing the data to guide the research process. Through constant comparative analysis, I coded the data, identifying core categories and the relationships between them. This emergent coding process was central to refining and saturating the categories, ensuring the development of a robust theory that encapsulated the lived experiences of the participants.

Challenges such as data saturation and ensuring theoretical sensitivity were navigated by maintaining a balance between openness to the data and guiding research questions. The iterative nature of grounded theory facilitated the identification of nuanced coping strategies that were not initially apparent, leading to a theory that emphasized the dynamic interplay between personal agency and social support. The outcome was a substantive theory that not only provided a deeper understanding of the participants’ experiences but also had practical implications for designing support systems for individuals with chronic conditions.”

7. Outline the steps you take when conducting a thematic analysis.

Thematic analysis is a method used to identify, analyze, and report patterns within data, and it requires a systematic approach to ensure validity and reliability. This question assesses whether a candidate can articulate a clear, methodical process that will yield insightful findings from qualitative data.

When responding, you should outline a step-by-step process that begins with familiarization with the data, whereby you immerse yourself in the details, taking notes and highlighting initial ideas. Proceed to generating initial codes across the entire dataset, which involves organizing data into meaningful groups. Then, search for themes by collating codes into potential themes and gathering all data relevant to each potential theme. Review these themes to ensure they work in relation to the coded extracts and the entire dataset, refining them as necessary. Define and name themes, which entails developing a detailed analysis of each theme and determining the essence of what each theme is about. Finally, report the findings, weaving the analytic narrative with vivid examples, within the context of existing literature and the research questions. This methodical response not only showcases your technical knowledge but also demonstrates an organized thought process and the ability to communicate complex procedures clearly.

Example: “In conducting a thematic analysis, I begin by thoroughly immersing myself in the data, which involves meticulously reading and re-reading the content to gain a deep understanding of its breadth and depth. During this stage, I make extensive notes and begin to mark initial ideas that strike me as potentially significant.

Following familiarization, I generate initial codes systematically across the entire dataset. This coding process is both reflective and interpretative, as it requires me to identify and categorize data segments that are pertinent to the research questions. These codes are then used to organize the data into meaningful groups.

Next, I search for themes by examining the codes and considering how they may combine to form overarching themes. This involves collating all the coded data relevant to each potential theme and considering the interrelationships between codes, themes, and different levels of themes, which may include sub-themes.

The subsequent step is to review these themes, checking them against the dataset to ensure they accurately represent the data. This may involve collapsing some themes into each other, splitting others, and refining the specifics of each theme. The essence of this iterative process is to refine the themes so that they tell a coherent story about the data.

Once the themes are satisfactorily developed, I define and name them. This involves a detailed analysis of each theme and determining what aspect of the data each theme captures. I aim to articulate the nuances within each theme, identifying the story that each tells about the data, and considering how this relates to the broader research questions and literature.

Lastly, I report the findings, weaving together the thematic analysis narrative. This includes selecting vivid examples that compellingly illustrate each theme, discussing how the themes interconnect, and situating them within the context of existing literature and the research questions. This final write-up is not merely about summarizing the data but about telling a story that provides insights into the research topic.”
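For researchers who keep their codebook in software, the collation step described above — gathering coded extracts under candidate themes — can be sketched as a simple mapping from codes to themes. The snippet below is a minimal, hypothetical illustration; every code name, theme, and extract is invented for the example.

```python
from collections import defaultdict

# "Search for themes" step: related codes are collated under a candidate theme
code_to_theme = {
    "informal support": "community resilience",
    "access barriers": "structural constraints",
    "cost of transport": "structural constraints",
}

# Each pair is (data extract, code assigned during initial coding)
coded_extracts = [
    ("support from neighbours", "informal support"),
    ("clinic waiting times", "access barriers"),
    ("relying on siblings", "informal support"),
    ("bus fares too high", "cost of transport"),
]

# Collate all extracts relevant to each potential theme
themes = defaultdict(list)
for extract, code in coded_extracts:
    themes[code_to_theme[code]].append(extract)

for theme, extracts in themes.items():
    print(f"{theme}: {extracts}")
```

Reviewing the collated extracts under each theme against the full dataset — and reassigning codes when a theme is split or collapsed — is then a matter of editing the `code_to_theme` mapping and re-running the collation.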

8. When is it appropriate to use focus groups rather than individual interviews, and why?

Choosing between focus groups and individual interviews depends on the research goals and the nature of the information sought. Focus groups excel in exploring complex behaviors, attitudes, and experiences through the dynamic interaction of participants.

When responding to this question, articulate the strengths of both methods, matching them to specific research scenarios. For focus groups, emphasize your ability to facilitate lively, guided discussions that leverage group dynamics to elicit a breadth of perspectives. For individual interviews, highlight your skill in creating a safe, confidential space where participants can share detailed, personal experiences. Demonstrate strategic thinking by discussing how you would decide on the most suitable method based on the research question, participant characteristics, and the type of data needed to achieve your research objectives.

Example: “Focus groups are particularly apt when the research question benefits from the interaction among participants, as the group dynamics can stimulate memories, ideas, and experiences that might not surface in one-on-one interviews. They are valuable for exploring the range of opinions or feelings about a topic, allowing researchers to observe consensus formation, the diversity of perspectives, and the reasoning behind attitudes. This method is also efficient for gathering a breadth of data in a limited timeframe. However, it’s crucial to ensure that the topic is suitable for discussion in a group setting and that participants are comfortable speaking in front of others.

Conversely, individual interviews are more appropriate when the subject matter is sensitive or requires deep exploration of personal experiences. They provide a private space for participants to share detailed and nuanced insights without the influence of others, which can be particularly important when discussing topics that may not be openly talked about in a group. The method allows for a tailored approach, where the interviewer can adapt questions based on the participant’s responses, facilitating a depth of understanding that is harder to achieve in a group setting. The decision between the two methods ultimately hinges on the specific needs of the research, the nature of the topic, and the goals of the study.”

9. Detail how you would validate findings from a case study research design.

In case study research, validation is paramount to ensure that interpretations and conclusions are credible. A well-validated case study reinforces the rigor of the research method and bolsters the transferability of its findings to other contexts.

When responding to this question, detail your process, which might include triangulation, where you corroborate findings with multiple data sources or perspectives; member checking, which involves sharing your interpretations with participants for their input; and seeking peer debriefing, where colleagues critique the process and findings. Explain how these methods contribute to the dependability and confirmability of your research, showing that you are not just collecting data but actively engaging with it to construct a solid, defensible narrative.

Example: “In validating findings from a case study research design, I employ a multi-faceted approach to ensure the dependability and confirmability of the research. Triangulation is a cornerstone of my validation process, where I corroborate evidence from various data sources, such as interviews, observations, and documents. This method allows for cross-validation and helps in constructing a robust narrative by revealing consistencies and discrepancies in the data.

Member checking is another essential step in my process. By sharing my interpretations with participants, I not only honor their perspectives but also enhance the credibility of the findings. This iterative process ensures that the conclusions drawn are reflective of the participants’ experiences and not solely based on my own interpretations.

Lastly, peer debriefing serves as a critical checkpoint. By engaging colleagues who critique the research process and findings, I open the study to external scrutiny, which helps in mitigating any potential biases and enhances the study’s rigor. These colleagues act as devil’s advocates, challenging assumptions and conclusions, thereby strengthening the study’s validity. Collectively, these strategies form a comprehensive approach to validating case study research, ensuring that the findings are well-substantiated and trustworthy.”

10. What measures do you take to ensure the transferability of your qualitative research findings?

When asked about ensuring transferability, the interviewer is assessing your ability to articulate the relevance of your findings beyond the specific context of your study. They want to know if you can critically appraise your research design and methodology.

To respond effectively, you should discuss the thoroughness of your data collection methods, such as purposive sampling, to gather diverse perspectives that enhance the depth of the data. Explain your engagement with participants and the setting to ensure a rich understanding of the phenomenon under study. Highlight your detailed documentation of the research process, including your reflexivity, so that others can retrace your analytic steps. Finally, speak about how you communicate the boundaries of your research’s applicability and how you encourage readers to consider the transferability of findings to their own contexts through clear and comprehensive descriptions of your study’s context, participants, and assumptions.

Example: “In ensuring the transferability of my qualitative research findings, I prioritize a robust and purposive sampling strategy that captures a wide range of perspectives relevant to the research question. This approach not only enriches the data but also provides a comprehensive understanding of the phenomenon across varied contexts. By doing so, I lay a foundation for the findings to resonate with similar situations, allowing others to judge the applicability of the results to their own contexts.

I meticulously document the research process, including the setting, participant interactions, and my own reflexivity, to provide a transparent and detailed account of how conclusions were reached. This level of documentation serves as a roadmap for other researchers or practitioners to understand the intricacies of the study and evaluate the potential for transferability. Furthermore, I ensure that my findings are presented with a clear delineation of the context, including any cultural, temporal, or geographic nuances, and discuss the assumptions underpinning the study. By offering this rich, contextualized description, I invite readers to engage critically with the findings and assess their relevance to other settings, thus facilitating a responsible and informed application of the research outcomes.”

11. How do you determine when data saturation has been reached in your study?

Determining data saturation is crucial because it signals when additional data does not yield new insights, ensuring efficient use of resources without compromising the depth of understanding. This question is posed to assess a candidate’s experience and judgment in qualitative research.

When responding to this question, one should highlight their systematic approach to data collection and analysis. Discuss the iterative process of engaging with the data, constantly comparing new information with existing codes and themes. Explain how you monitor for emerging patterns and at what point these patterns become consistent and repeatable, indicating saturation. Mention any specific techniques or criteria you employ, such as the use of thematic analysis or constant comparison methods, and how you document the decision-making process to ensure transparency and validity in your research findings.

Example: “In determining data saturation, I employ a rigorous and iterative approach to data collection and analysis. As I engage with the data, I continuously compare new information against existing codes and themes, carefully monitoring for the emergence of new patterns or insights. Saturation is reached when the data begins to yield redundant information, and no new themes or codes are emerging from the analysis.

I utilize techniques such as thematic analysis and constant comparison methods to ensure a systematic examination of the data. I document each step of the decision-making process, noting when additional data does not lead to new theme identification or when existing themes are fully fleshed out. This documentation not only serves as a checkpoint for determining saturation but also enhances the transparency and validity of the research findings. Through this meticulous process, I can confidently assert that data saturation has been achieved when the collected data offers a comprehensive understanding of the research phenomenon, with a rich and well-developed thematic structure that accurately reflects the research scope.”
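The “no new themes or codes emerging” criterion described above can be approximated programmatically when codes are logged per interview. The following sketch is a hypothetical illustration, assuming each interview is recorded as a set of code labels: it flags the interview after which a chosen number of consecutive interviews contributed no previously unseen codes. (Such a check supports, rather than replaces, the researcher’s judgment about whether themes are fully fleshed out.)

```python
def saturation_point(coded_interviews, window=3):
    """Return the index of the interview at which `window` consecutive
    interviews have added no previously unseen codes, or None if the
    dataset never reaches that point."""
    seen = set()        # all codes observed so far
    no_new_streak = 0   # consecutive interviews with no new codes
    for i, codes in enumerate(coded_interviews):
        new = set(codes) - seen
        seen |= set(codes)
        no_new_streak = 0 if new else no_new_streak + 1
        if no_new_streak >= window:
            return i
    return None

# Invented example: codes assigned to five interviews, in order
interviews = [
    {"coping", "family support"},
    {"coping", "stigma"},
    {"stigma", "family support"},
    {"coping"},
    {"stigma", "coping"},
]
print(saturation_point(interviews, window=3))  # → 4
```

Here interviews 3 through 5 introduce nothing new, so the heuristic flags index 4 (the fifth interview) as the point at which collection could reasonably stop, pending the qualitative judgment described above.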

12. Relate an instance where member checking significantly altered your research conclusions.

Member checking serves as a vital checkpoint to ensure accuracy, credibility, and resonance of the data with those it represents. It can reveal misunderstandings or even introduce new insights that substantially shift the study’s trajectory or outcomes.

When responding, candidates should recount a specific project where member checking made a pivotal difference in their findings. They should detail the initial conclusions, how the process of member checking was integrated, what feedback was received, and how it led to a re-evaluation or refinement of the research outcomes. This response showcases the candidate’s methodological rigor, flexibility in incorporating feedback, and dedication to producing research that authentically reflects the voices and experiences of the study’s participants.

Example: “In a recent qualitative study on community responses to urban redevelopment, initial findings suggested broad support for the initiatives among residents. However, during the member checking phase, when participants reviewed and commented on the findings, a nuanced perspective emerged. Several participants highlighted that their apparent support was, in fact, resignation due to a lack of viable alternatives, rather than genuine enthusiasm for the redevelopment plans.

This feedback prompted a deeper dive into the data, revealing a pattern of resigned acceptance across a significant portion of the interviews. The conclusion was substantially revised to reflect this sentiment, emphasizing the complexity of community responses to redevelopment, which included both cautious optimism and skeptical resignation. This critical insight not only enriched the study’s validity but also had profound implications for policymakers interested in understanding the true sentiment of the affected communities.”

13. What are the key considerations when selecting a sample for phenomenological research?

The selection of a sample in phenomenological research is not about quantity but about the richness and relevance of the data that participants can provide. It requires an intimate knowledge of the research question and a deliberate choice to include participants who have experienced the phenomenon in question.

When responding to this question, it’s essential to emphasize the need for a purposeful sampling strategy that aims to capture a broad spectrum of perspectives on the phenomenon under study. Discuss the importance of sample diversity to ensure the findings are robust and reflect varied experiences. Mention the necessity of establishing clear criteria for participant selection and the willingness to adapt as the research progresses. Highlighting your commitment to ethical considerations, such as informed consent and the respectful treatment of participants’ information, will also demonstrate your thorough understanding of the nuances in qualitative sampling.

Example: “In phenomenological research, the primary goal is to understand the essence of experiences concerning a particular phenomenon. Therefore, the key considerations for sample selection revolve around identifying individuals who have experienced the phenomenon of interest and can articulate their lived experiences. Purposeful sampling is essential to ensure that the participants chosen can provide rich, detailed accounts that contribute to a deep understanding of the phenomenon.

The diversity of the sample is also crucial. It is important to select participants who represent a range of perspectives within the phenomenon, not just a homogenous group. This might involve considering factors such as age, gender, socio-economic status, or other relevant characteristics that could influence their experiences. While the sample size in phenomenological studies is often small to allow for in-depth analysis, it is vital to ensure that the sample is varied enough to uncover a comprehensive understanding of the phenomenon.

Lastly, ethical considerations are paramount. Participants must give informed consent, understanding the nature of the study and their role in it. The researcher must also be prepared to handle sensitive information with confidentiality and respect, ensuring the participants’ well-being is prioritized throughout the study. Adapting the sample selection criteria as the study progresses is also important, as initial interviews may reveal additional nuances that require the inclusion of further varied perspectives to fully grasp the phenomenon.”

14. Which software tools do you prefer for qualitative data analysis, and for what reasons?

The choice of software tools for qualitative data analysis reflects a researcher’s approach to data synthesis and interpretation. It also indicates their proficiency with technology and their ability to leverage sophisticated features to deepen insights.

When responding, it’s essential to discuss specific features of the software tools you prefer, such as coding capabilities, ease of data management, collaborative features, or the ability to handle large datasets. Explain how these features have enhanced your research outcomes in the past. For example, you might highlight the use of NVivo for its robust coding structure that helped you organize complex data efficiently or Atlas.ti for its intuitive interface and visualization tools that made it easier to detect emerging patterns. Your response should demonstrate your analytical thought process and your commitment to rigorous qualitative analysis.

Example: “ In my qualitative research endeavors, I have found NVivo to be an invaluable tool, primarily due to its advanced coding capabilities and its ability to manage large and complex datasets effectively. The node structure in NVivo facilitates a hierarchical organization of themes, which streamlines the coding process and enhances the reliability of the data analysis. This feature was particularly beneficial in a recent project where the depth and volume of textual data required a robust system to ensure consistency and comprehensiveness in theme development.

Another tool I frequently utilize is Atlas.ti, which stands out for its user-friendly interface and powerful visualization tools. These features are instrumental in identifying and illustrating relationships between themes, thereby enriching the interpretive depth of the analysis. The network views in Atlas.ti have enabled me to construct clear visual representations of the data interconnections, which not only supported my analytical narrative but also facilitated stakeholder understanding and engagement. The combination of these tools, leveraging their respective strengths, has consistently augmented the quality and impact of my qualitative research outcomes.”

15. How do you handle discrepancies between participants’ words and actions in ethnographic research?

Ethnographic research hinges on the researcher’s ability to interpret both verbal and non-verbal data to draw meaningful conclusions. This question allows the interviewer to assess a candidate’s methodological rigor and analytical skills.

When responding, it’s essential to emphasize your systematic approach to reconciling such discrepancies. Discuss the importance of context, the use of triangulation to corroborate findings through multiple data sources, and the strategies you employ to interpret and integrate conflicting information. Highlight your commitment to ethical research practices, the ways you ensure participant understanding and consent, and your experience with reflective practice to mitigate researcher bias. Showcasing your ability to remain flexible and responsive to the data, while maintaining a clear analytical framework, will demonstrate your proficiency in qualitative research.

Example: “ In ethnographic research, discrepancies between participants’ words and actions are not only common but also a valuable source of insight. When I encounter such discrepancies, I first consider the context in which they occur, as it often holds the key to understanding the divergence. Cultural norms, social pressures, or even the presence of the researcher can influence participants’ behaviors and self-reporting. I employ triangulation, utilizing multiple data sources such as interviews, observations, and relevant documents to construct a more comprehensive understanding of the phenomena at hand.

I also engage in reflective practice to examine my own biases and assumptions that might influence data interpretation. By maintaining a stance of cultural humility and being open to the participants’ perspectives, I can better understand the reasons behind their actions and words. When integrating conflicting information, I look for patterns and themes that can reconcile the differences, often finding that they reveal deeper complexities within the social context being studied. Ethical research practices, including ensuring participant understanding and consent, are paramount throughout this process, as they help maintain the integrity of both the data and the relationships with participants.”

16. What role does reflexivity play in your research process?

Reflexivity is an ongoing self-assessment that ensures research findings are not merely a reflection of the researcher’s preconceptions, thereby increasing the credibility and authenticity of the work.

When responding, illustrate your understanding of reflexivity with examples from past research experiences. Discuss how you have actively engaged in reflexivity by questioning your assumptions, how this shaped your research design, and the methods you employed to ensure that your findings were informed by the data rather than your personal beliefs. Demonstrate your commitment to ethical research practice by highlighting how you’ve maintained an open dialogue with your participants and peers to challenge and refine your interpretations.

Example: “ Reflexivity is a cornerstone of my qualitative research methodology, as it allows me to critically examine my own influence on the research process and outcomes. In practice, I maintain a reflexive journal throughout the research process, documenting my preconceptions, emotional responses, and decision-making rationales. This ongoing self-analysis ensures that I remain aware of my potential biases and the ways in which my background and perspectives might shape the data collection and analysis.

For instance, in a recent ethnographic study, I recognized my own cultural assumptions could affect participant interactions. To mitigate this, I incorporated member checking and peer debriefing as integral parts of the research cycle. By actively seeking feedback on my interpretations from both participants and fellow researchers, I was able to challenge my initial readings of the data and uncover deeper, more nuanced insights. This reflexive approach not only enriched the research findings but also upheld the integrity and credibility of the study, fostering a more authentic and ethical representation of the participants’ experiences.”

17. Describe a complex qualitative dataset you’ve managed and how you navigated its challenges.

Managing a complex qualitative dataset requires meticulous organization, a strong grasp of research methods, and the ability to discern patterns and themes amidst a sea of words and narratives. This question evaluates the candidate’s analytical and critical thinking skills.

When responding to this question, you should focus on a specific project that exemplifies your experience with complex qualitative data. Outline the scope of the data, the methods you used for organization and analysis, and the challenges you encountered—such as data coding, thematic saturation, or ensuring reliability and validity. Discuss the strategies you implemented to address these challenges, such as iterative coding, member checking, or triangulation. By providing concrete examples, you demonstrate not only your technical ability but also your methodological rigor and dedication to producing insightful, credible research findings.

Example: “ In a recent project, I managed a complex qualitative dataset that comprised over 50 in-depth interviews, several focus groups, and field notes from participant observation. The data was rich with nuanced perspectives on community health practices, but it presented challenges in ensuring thematic saturation and maintaining a systematic approach to coding across multiple researchers.

To navigate these challenges, I employed a rigorous iterative coding process, utilizing NVivo software to facilitate organization and analysis. Initially, I conducted a round of open coding to identify preliminary themes, followed by axial coding to explore the relationships between these themes. As the dataset was extensive, I also implemented a strategy of constant comparison to refine and merge codes, ensuring thematic saturation was achieved. To enhance the reliability and validity of our findings, I organized regular peer debriefing sessions, where the research team could discuss and resolve discrepancies in coding and interpretation. Additionally, I conducted member checks with a subset of participants, which not only enriched the data but also validated our thematic constructs. This meticulous approach enabled us to develop a robust thematic framework that accurately reflected the complexity of the community’s health practices and informed subsequent policy recommendations.”

18. How do you integrate quantitative data to enhance the richness of a primarily qualitative study?

Integrating quantitative data with qualitative research can add a layer of objectivity, enhance validity, and offer a scalable dimension to the findings. This mixed-methods approach can help in identifying outliers or anomalies in qualitative data.

When responding to this question, a candidate should articulate their understanding of both qualitative and quantitative research methodologies. They should discuss specific techniques such as triangulation, where quantitative data serves as a corroborative tool for qualitative findings, or embedded analysis, where quantitative data provides a backdrop for deep qualitative exploration. The response should also include practical examples of past research scenarios where the candidate successfully merged both data types to strengthen their study, highlighting their ability to create a symbiotic relationship between numbers and narratives for richer, more robust research outcomes.

Example: “ Integrating quantitative data into a qualitative study can significantly enhance the depth and credibility of the research findings. In my experience, I employ triangulation to ensure that themes emerging from qualitative data are not only rich in context but also empirically grounded. For instance, in a study exploring patient satisfaction, while qualitative interviews might reveal nuanced patient experiences, quantitative satisfaction scores can be used to validate and quantify the prevalence of these experiences across a larger population.

Furthermore, I often use quantitative data as a formative tool to guide the qualitative inquiry. By initially analyzing patterns in quantitative data, I can identify areas that require a deeper understanding through qualitative methods. For example, if a survey indicates a trend in consumer behavior, follow-up interviews or focus groups can explore the motivations behind that trend. This embedded analysis approach ensures that qualitative findings are not only contextually informed but also quantitatively relevant, leading to a more comprehensive understanding of the research question.”

19. What is your rationale for choosing narrative inquiry over other qualitative methods in storytelling contexts?

Narrative inquiry delves into individual stories to find broader truths and patterns. This method captures the richness of how people perceive and make sense of their lives, revealing the interplay of various factors in shaping narratives.

When responding, articulate your understanding of narrative inquiry, emphasizing its strengths in capturing lived experiences and its ability to provide a detailed, insider’s view of a phenomenon. Highlight your knowledge of how narrative inquiry can uncover the nuances of storytelling, such as the role of language, emotions, and context, which are essential for a deep understanding of the subject matter. Demonstrate your ability to choose an appropriate research method based on the research question, objectives, and the nature of the data you aim to collect.

Example: “ Narrative inquiry is a powerful qualitative method that aligns exceptionally well with the exploration of storytelling contexts due to its focus on the richness of personal experience and the construction of meaning. By delving into individuals’ stories, narrative inquiry allows researchers to capture the complexities of lived experiences, which are often embedded with emotions, cultural values, and temporal elements that other methods may not fully grasp. The longitudinal nature of narrative inquiry, where stories can be collected and analyzed over time, also offers a dynamic perspective on how narratives evolve, intersect, and influence the storyteller’s identity and worldview.

In choosing narrative inquiry, one is committing to a methodological approach that honors the subjectivity and co-construction of knowledge between the researcher and participants. This approach is particularly adept at uncovering the layers of language use, symbolism, and the interplay of narratives with broader societal discourses. It is this depth and nuance that makes narrative inquiry the method of choice when the research aim is not just to catalog events but to understand the profound implications of storytelling on individual and collective levels. The method’s flexibility in accommodating different narrative forms – be it oral, written, or visual – further underscores its suitability for research that seeks to holistically capture the essence of storytelling within its natural context.”

20. How do you address potential power dynamics that may influence a participant’s responses during interviews?

Recognizing and mitigating the influence of power dynamics is essential to maintain the integrity of the data collected in qualitative research, ensuring that findings reflect the participants’ genuine perspectives.

When responding to this question, one should emphasize their awareness of such dynamics and articulate strategies to minimize their impact. This could include techniques like establishing rapport, using neutral language, ensuring confidentiality, and employing reflexivity—being mindful of one’s own influence on the conversation. Furthermore, demonstrating an understanding of how to create a safe space for open dialogue and acknowledging the importance of participant empowerment can convey a commitment to ethical and effective qualitative research practices.

Example: “ In addressing potential power dynamics, my approach begins with the conscious effort to create an environment of trust and safety. I employ active listening and empathetic engagement to establish rapport, which helps to level the conversational field. I am meticulous in using neutral, non-leading language to avoid inadvertently imposing my own assumptions or perspectives on participants. This is complemented by an emphasis on the voluntary nature of participation and the assurance of confidentiality, which together foster a space where participants feel secure in sharing their authentic experiences.

Reflexivity is a cornerstone of my practice; I continuously self-assess and acknowledge my positionality and its potential influence on the research process. By engaging in this critical self-reflection, I am better equipped to recognize and mitigate any power imbalances that may arise. Moreover, I strive to empower participants by validating their narratives and ensuring that the interview process is not just extractive but also offers them a platform to be heard and to contribute meaningfully to the research. This balanced approach not only enriches the data quality but also adheres to the ethical standards that underpin responsible qualitative research.”


83 Qualitative Research Questions & Examples


Qualitative research questions help you understand consumer sentiment. They’re strategically designed to show organizations how and why people feel the way they do about a brand, product, or service. This type of research looks beyond the numbers and is one of the most telling kinds of market research a company can do.

The UK Data Service describes this perfectly, saying, “The value of qualitative research is that it gives a voice to the lived experience.”

Read on to see seven use cases and 83 qualitative research questions, with the added bonus of examples that show how to get similar insights faster with Similarweb Research Intelligence.


What is a qualitative research question?

A qualitative research question explores a topic in-depth, aiming to better understand the subject through interviews, observations, and other non-numerical data. Qualitative research questions are open-ended, helping to uncover a target audience’s opinions, beliefs, and motivations.

How to choose qualitative research questions?

Choosing the right qualitative research questions can be instrumental to the success of your research and the findings you uncover. Here’s my six-step process for choosing the best qualitative research questions.

  • Start by understanding the purpose of your research. What do you want to learn? What outcome are you hoping to achieve?
  • Consider who you are researching. What are their experiences, attitudes, and beliefs? How can you best capture these in your research questions?
  • Keep your questions open-ended. Qualitative research questions should not be too narrow or too broad. Aim for questions specific enough to produce meaningful answers but broad enough to allow for exploration.
  • Balance your research questions. You don’t want all of your questions to be the same type. Aim to mix up your questions to get a variety of answers.
  • Ensure your research questions are ethical and free from bias. Always have a second (and third) person check for unconscious bias.
  • Consider the language you use. Your questions should be written in a way that is clear and easy to understand. Avoid using jargon, acronyms, or overly technical language.


Types of qualitative research questions

For a question to be considered qualitative, it usually needs to be open-ended. However, as I’ll explain, there can sometimes be a slight cross-over between quantitative and qualitative research questions.

Open-ended questions

These allow for a wide range of responses and can be formatted with multiple-choice answers or a free-text box to collect additional details. The next two types of qualitative questions are considered open questions, but each has its own style and purpose.

  • Probing questions are used to delve deeper into a respondent’s thoughts, such as “Can you tell me more about why you feel that way?”
  • Comparative questions ask people to compare two or more items, such as “Which product do you prefer and why?” These qualitative questions are highly useful for understanding brand awareness, competitive analysis, and more.

Closed-ended questions

These ask respondents to choose from a predetermined set of responses, such as “On a scale of 1-5, how satisfied are you with the new product?” While they’re traditionally quantitative, adding a free-text box that asks for extra comments on why a specific rating was chosen will provide qualitative insights alongside the quantitative responses.

  • Ranking questions get people to rank items in order of preference, such as “Please rank these products in terms of quality.” They’re advantageous in many scenarios, like product development, competitive analysis, and brand awareness.
  • Likert scale questions ask people to rate items on a scale, such as “On a scale of 1-5, how satisfied are you with the new product?” Ideal for placement on websites and emails to gather quick, snappy feedback.

Qualitative research question examples

There are many applications of qualitative research and lots of ways you can put your findings to work for the success of your business. Here’s a summary of the most common use cases for qualitative questions and examples to ask.

Qualitative questions for identifying customer needs and motivations

These types of questions help you find out why customers choose products or services and what they are looking for when making a purchase.

  • What factors do you consider when deciding to buy a product?
  • What would make you choose one product or service over another?
  • What are the most important elements of a product that you would buy?
  • What features do you look for when purchasing a product?
  • What qualities do you look for in a company’s products?
  • Do you prefer localized or global brands when making a purchase?
  • How do you determine the value of a product?
  • What do you think is the most important factor when choosing a product?
  • How do you decide if a product or service is worth the money?
  • Do you have any specific expectations when purchasing a product?
  • Do you prefer to purchase products or services online or in person?
  • What kind of customer service do you expect when buying a product?
  • How do you decide when it is time to switch to a different product?
  • Where do you research products before you decide to buy?
  • What do you think is the most important customer value when making a purchase?

Qualitative research questions to enhance customer experience

Use these questions to reveal insights into how customers interact with a company’s products or services and how those experiences can be improved.

  • What aspects of our product or service do customers find most valuable?
  • How do customers perceive our customer service?
  • What factors are most important to customers when purchasing?
  • What do customers think of our brand?
  • What do customers think of our current marketing efforts?
  • How do customers feel about the features and benefits of our product?
  • How do customers feel about the price of our product or service?
  • How could we improve the customer experience?
  • What do customers think of our website or app?
  • What do customers think of our customer support?
  • What could we do to make our product or service easier to use?
  • What do customers think of our competitors?
  • What is your preferred way to access our site?
  • How do customers feel about our delivery/shipping times?
  • What do customers think of our loyalty programs?

Qualitative research question example for customer experience

  • Question: What is your preferred way to access our site?
  • Insight sought: How mobile-dominant are consumers? Should you invest more in mobile optimization or mobile marketing?
  • Challenges with traditional qualitative research methods: While this type of question works well when placed on a site or sent to a customer list, it only gives you a point-in-time perspective from a limited group of people.
  • A new approach: You can get better, broader insights quicker with Similarweb Digital Research Intelligence. To fully inform your research, you need to know preferences at the industry or market level.
  • ⏰ Time to insight: 30 seconds
  • ✅ How it’s done: Similarweb offers multiple ways to answer this question without going through a lengthy qualitative research process. 

First, I’m going to do a website market analysis of the banking, credit, and lending market in the finance sector to get a clearer picture of industry benchmarks.

Here, I can view device preferences across any industry or market instantly. It shows me the device distribution for any country across any period. This clearly answers the question of how mobile-dominant my target audience is, with 59.79% opting to access sites via desktop vs. 40.21% via mobile.

I then use the trends section to show me the exact split between mobile and web traffic for each key player in my space. Let’s say I’m about to embark on a competitive campaign that targets customers of Chase and Bank of America; I can see both their audiences are highly desktop-dominant compared with others in their space.

Qualitative question examples for developing new products or services

Research questions like this can help you understand customer pain points and give you insights to develop products that meet those needs.

  • What is the primary reason you would choose to purchase a product from our company?
  • How do you currently use products or services that are similar to ours?
  • Is there anything that could be improved with products currently on the market?
  • What features would you like to see added to our products?
  • How do you prefer to contact a customer service team?
  • What do you think sets our company apart from our competitors?
  • What other product or service offerings would you like to see us offer?
  • What type of information would help you make decisions about buying a product?
  • What type of advertising methods are most effective in getting your attention?
  • What is the biggest deterrent to purchasing products from us?

Qualitative research question example for service development

  • Question: What type of advertising methods are most effective in getting your attention?
  • Insight sought: The marketing channels and/or content that performs best with a target audience .
  • Challenges with traditional qualitative research methods: When using qualitative research surveys to answer questions like this, the sample size is limited, and bias could be at play.
  • A better approach: The most authentic insights come from viewing real actions and results that take place in the digital world. No questions or answers are needed to uncover this intel, and the information you seek is readily available in less than a minute.
  • ⏰ Time to insight: 5 minutes
  • ✅ How it’s done: There are a few ways to approach this. You can either take an industry-wide perspective or home in on specific competitors to unpack their individual successes. Here, I’ll quickly show a snapshot with a whole-market perspective.


Using the market analysis element of Similarweb Digital Intelligence, I select my industry or market, which I’ve kept as banking and credit. A quick click into marketing channels shows me which channels drive the highest traffic in my market. Taking direct traffic out of the equation for now, I can see that referrals and organic traffic are the two highest-performing channels in this market.

Similarweb allows me to view the specific referral partners and pages across these channels. 


Looking closely at referrals in this market, I’ve chosen chase.com and its five closest rivals. I select referrals in the channel traffic element of marketing channels. I see that Capital One is a clear winner, gaining almost 25 million visits due to referral partnerships.


Next, I get to see exactly who is referring traffic to Capital One and the total traffic share for each referrer. I can see the growth as a percentage and how that has changed, along with an engagement score that rates the average engagement level of that audience segment. This is particularly useful when deciding on which new referral partnerships to pursue.  

Once I’ve identified the channels and campaigns that yield the best results, I can then use Similarweb to dive into the various ad creatives and content that have the greatest impact.


These ads are just a few of those listed in the creatives section from my competitive website analysis of Capital One. You can filter this list by specific campaigns, publishers, and ad networks to view those that matter most to you. You can also discover video ad creatives in the same place.

In just five minutes ⏰ 

  • I’ve captured audience loyalty statistics across my market
  • Spotted the most competitive players
  • Identified the marketing channels my audience is most responsive to
  • I know which content and campaigns are driving the highest traffic volume
  • I’ve created a target list for new referral partners and have been able to prioritize this based on results and engagement figures from my rivals
  • I can see the types of creatives that my target audience is responding to, giving me ideas for ways to generate effective copy for future campaigns

Qualitative questions to determine pricing strategies

Companies need to make sure pricing stays relevant and competitive. Use these questions to determine customer perceptions of pricing and develop pricing strategies to maximize profits and reduce churn.

  • How do you feel about our pricing structure?
  • How does our pricing compare to other similar products?
  • What value do you feel you get from our pricing?
  • How could we make our pricing more attractive?
  • What would be an ideal price for our product?
  • Which features of our product would you like to see priced differently?
  • What discounts or deals would you like to see us offer?
  • How do you feel about the amount you have to pay for our product?


Qualitative research question example for determining pricing strategies

  • Question: What discounts or deals would you like to see us offer?
  • Insight sought: The promotions or campaigns that resonate with your target audience.
  • Challenges with traditional qualitative research methods: Consumers don’t always recall the types of ads or campaigns they respond to. Over time, their needs and habits change. Your sample size is limited to those you ask, leaving a huge pool of unknowns at play.
  • A better approach: While qualitative insights are good to know, you get the most accurate picture of the highest-performing promotion and campaigns by looking at data collected directly from the web. These analytics are real-world, real-time, and based on the collective actions of many, instead of the limited survey group you approach. By getting a complete picture across an entire market, your decisions are better informed and more aligned with current market trends and behaviors.
  • ✅ How it’s done: Similarweb’s Popular Pages feature shows the content, products, campaigns, and pages with the highest growth for any website. So, if you’re trying to unpack the successes of others in your space and find out what content resonates with a target audience, there’s a far quicker way to get answers to these questions with Similarweb.

Qualitative research example

Here, I’m using Capital One as an example site. I can see trending pages on their site showing the largest increase in page views. Other filters include campaign, best-performing, and new, each of which shows you page URLs, share of traffic, and growth as a percentage. This page is particularly useful for staying on top of trending topics, campaigns, and new content being pushed out in a market by key competitors.

Qualitative research questions for product development teams

It’s vital to stay in touch with changing consumer needs. These questions can also be used for new product or service development, but this time, it’s from the perspective of a product manager or development team. 

  • What are customers’ primary needs and wants for this product?
  • What do customers think of our current product offerings?
  • What is the most important feature or benefit of our product?
  • How can we improve our product to meet customers’ needs better?
  • What do customers like or dislike about our competitors’ products?
  • What do customers look for when deciding between our product and a competitor’s?
  • How have customer needs and wants for this product changed over time?
  • What motivates customers to purchase this product?
  • What is the most important thing customers want from this product?
  • What features or benefits are most important when selecting a product?
  • What do customers perceive to be our product’s pros and cons?
  • What would make customers switch from a competitor’s product to ours?
  • How do customers perceive our product in comparison to similar products?
  • What do customers think of our pricing and value proposition?
  • What do customers think of our product’s design, usability, and aesthetics?

Qualitative questions examples to understand customer segments

Market segmentation seeks to create groups of consumers with shared characteristics. Use these questions to learn more about different customer segments and how to target them with tailored messaging.

  • What motivates customers to make a purchase?
  • How do customers perceive our brand in comparison to our competitors?
  • How do customers feel about our product quality?
  • How do customers define quality in our products?
  • What factors influence customers’ purchasing decisions?
  • What are the most important aspects of customer service?
  • What do customers think of our customer service?
  • What do customers think of our pricing?
  • How do customers rate our product offerings?
  • How do customers prefer to make purchases (online, in-store, etc.)?

Qualitative research question example for understanding customer segments

  • Question: Which social media channels are you most active on?
  • Insight sought: Formulate a social media strategy, specifically identifying the social media channels most likely to succeed with a target audience.
  • Challenges with traditional qualitative research methods: Qualitative research question responses are limited to those you ask, giving you a limited sample size. Questions like this are usually at risk of some bias, and this may not be reflective of real-world actions.
  • A better approach: Get a complete picture of social media preferences for an entire market or specific audience belonging to rival firms. Insights are available in real-time, and are based on the actions of many, not a select group of participants. Data is readily available, easy to understand, and expandable at a moment’s notice.
  • ✅ How it’s done: Using Similarweb’s website analysis feature, you can get a clear breakdown of social media stats for your audience using the marketing channels element. It shows the percentage of visits from each channel to your site, respective growth, and specific referral pages by each platform. All data is expandable, meaning you can select any platform, period, and region to drill down and get more accurate intel, instantly.

Qualitative question example social media

This example shows me Bank of America’s social media distribution, with YouTube, LinkedIn, and Facebook taking the top three spots and accounting for almost 80% of traffic driven from social media.

When doing any type of market research, it’s important to benchmark performance against industry averages and perform a social media competitive analysis to verify rival performance across the same channels.

Qualitative questions to inform competitive analysis

Organizations must assess market sentiment toward other players to compete and beat rival firms. Whether you want to increase market share, challenge industry leaders, or reduce churn, understanding how people view you vs. the competition is key.

  • What is the overall perception of our competitors’ product offerings in the market?
  • What attributes do our competitors prioritize in their customer experience?
  • What strategies do our competitors use to differentiate their products from ours?
  • How do our competitors position their products in relation to ours?
  • How do our competitors’ pricing models compare to ours?
  • What do consumers think of our competitors’ product quality?
  • What do consumers think of our competitors’ customer service?
  • What are the key drivers of purchase decisions in our market?
  • What is the impact of our competitors’ marketing campaigns on our market share?
  • How do our competitors leverage social media to promote their products?

Qualitative research question example for competitive analysis

  • Question: What other companies do you shop with for x?
  • Insight sought: Who are your competitors? Which of your rivals’ sites do your customers visit? How loyal are consumers in your market?
  • Challenges with traditional qualitative research methods: Sample size is limited, and customers could be unwilling to reveal which competitors they shop with, or how often they shop around. Where finances are involved, people can act with reluctance or bias and be unwilling to reveal other suppliers they do business with.
  • A better approach: Get a complete picture of your audience’s loyalty, see who else they shop with, and how many other sites they visit in your competitive group. Find out the size of the untapped opportunity and which players are doing a better job at attracting unique visitors – without having to ask people to reveal their preferences.
  • ✅ How it’s done: Similarweb website analysis shows you the competitive sites your audience visits, giving you access to data that shows cross-visitation habits, audience loyalty, and untapped potential in a matter of minutes.

Qualitative research example for audience analysis

Using the audience interests element of Similarweb website analysis, you can view the cross-browsing behaviors of a website’s audience instantly. You can see a matrix that shows the percentage of visitors on a target site and any rival site they may have visited.

Qualitative research question example for competitive analysis

With the Similarweb audience overlap feature, view the cross-visitation habits of an audience across specific websites. In this example, I chose chase.com and its four closest competitors to review. For each intersection, you see the number of unique visitors and the overall proportion of each site’s audience it represents. It also shows the volume of unreached potential visitors.

qualitative question example for audience loyalty

Here, you can see a direct comparison of the audience loyalty represented in a bar graph. It shows a breakdown of each site’s audience based on how many other sites they have visited. Those sites with the highest loyalty show fewer additional sites visited.

From the perspective of chase.com, I can see 47% of their visitors do not visit rival sites. 33% of their audience visited 1 or more sites in this group, 14% visited 2 or more sites, 4% visited 3 or more sites, and just 0.8% viewed all sites in this comparison. 
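The loyalty breakdown above is straightforward arithmetic over visitor counts. As a minimal sketch (using made-up visitor data that merely mimics the proportions described, not Similarweb’s actual figures), the same kind of distribution can be computed like this:

```python
from collections import Counter

def loyalty_breakdown(extra_sites_per_visitor):
    """Share of visitors grouped by how many rival sites they also visited.

    Each list element is one visitor's count of competitor sites viewed
    in addition to the target site (0 = loyal to the target site only).
    """
    total = len(extra_sites_per_visitor)
    counts = Counter(extra_sites_per_visitor)
    return {k: counts[k] / total for k in sorted(counts)}

# Hypothetical sample of 100 visitors: most stick to the target site.
visits = [0] * 47 + [1] * 33 + [2] * 14 + [3] * 5 + [4] * 1
print(loyalty_breakdown(visits))  # {0: 0.47, 1: 0.33, 2: 0.14, 3: 0.05, 4: 0.01}
```

The higher the share at key 0, the more loyal the site’s audience; long tails at higher keys indicate heavy cross-shopping.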

How to answer qualitative research questions with Similarweb

Similarweb Research Intelligence drastically improves market research efficiency and time to insight. Both of these can impact the bottom line and the pace at which organizations can adapt and flex when markets shift, and rivals change tactics.

Outdated practices, while still useful, take time. And with a quicker, more efficient way to garner similar insights, opting for the fast lane puts you at a competitive advantage.

With a bird’s-eye view of the actions and behaviors of companies and consumers across a market, you can answer certain research questions without the need to plan, do, and review extensive qualitative market research.

Wrapping up

Qualitative research methods have been around for centuries. From designing the questions to finding the best distribution channels, collecting and analyzing findings takes time to get the insights you need. Similarweb Digital Research Intelligence drastically improves efficiency and time to insight. Both impact the bottom line and the pace at which organizations can adapt and flex when markets shift.

Similarweb’s suite of digital intelligence solutions offers unbiased, accurate, honest insights you can trust for analyzing any industry, market, or audience.

  • Methodologies used for data collection are robust, transparent, and trustworthy.
  • Clear presentation of data via an easy-to-use, intuitive platform.
  • It updates dynamically, giving you the freshest data about an industry or market.
  • Data is available via an API – so you can plug into platforms like Tableau or PowerBI to streamline your analyses.
  • Filter and refine results according to your needs.
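Because the data is exposed through an API, it can be pulled into BI tools like Tableau or PowerBI programmatically. The sketch below only assembles a request URL; the base URL, path, and parameter names are hypothetical placeholders, not the vendor’s documented API (consult the official API docs for real endpoints and authentication):

```python
from urllib.parse import urlencode

# Hypothetical base URL -- not a documented endpoint.
BASE_URL = "https://api.example.com/v1/website"

def build_traffic_query(domain, start, end, granularity="monthly", api_key="YOUR_KEY"):
    """Assemble a query URL for a (hypothetical) traffic-report endpoint."""
    params = {
        "start_date": start,
        "end_date": end,
        "granularity": granularity,
        "api_key": api_key,
    }
    return f"{BASE_URL}/{domain}/traffic?{urlencode(params)}"

print(build_traffic_query("chase.com", "2023-01", "2023-06"))
```

A scheduled job could fetch this URL and land the JSON response in a warehouse table that Tableau or PowerBI reads directly.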

Are quantitative or qualitative research questions best?

Both have their place and purpose in market research. Qualitative research questions seek to provide details, whereas quantitative market research gives you numerical statistics that are easier and quicker to analyze. You get more flexibility with qualitative questions, and they’re non-directional.

What are the advantages of qualitative research?

Qualitative research is advantageous because it allows researchers to better understand their subject matter by exploring people’s attitudes, behaviors, and motivations in a particular context. It also allows researchers to uncover new insights that may not have been discovered with quantitative research methods.

What are some of the challenges of qualitative research?

Qualitative research can be time-consuming and costly, typically involving in-depth interviews and focus groups. Additionally, there are challenges associated with the reliability and validity of the collected data, as there is no universal standard for interpreting the results.

author-photo

by Liz March

Digital Research Specialist

Liz March has 15 years of experience in content creation. She enjoys the outdoors, F1, and reading, and is pursuing a BSc in Environmental Science.

Related Posts

Importance of Market Research: 9 Reasons Why It’s Crucial for Your Business

Audience Segmentation: Definition, Importance & Types

Geographic Segmentation: Definition, Pros & Cons, Examples, and More

Demographic Segmentation: The Key To Transforming Your Marketing Strategy

Unlocking Consumer Behavior: What Makes Your Customers Tick?

Customer Segmentation: Expert Tips on Understanding Your Audience



Qualitative Research Questions: Gain Powerful Insights + 25 Examples

We review the basics of qualitative research questions, including their key components, how to craft them effectively, & 25 example questions.

Einstein was many things—a physicist, a philosopher, and, undoubtedly, a mastermind. He also had an incredible way with words. His quote, "Everything that can be counted does not necessarily count; everything that counts cannot necessarily be counted," is particularly poignant when it comes to research. 

Some inquiries call for a quantitative approach, for counting and measuring data in order to arrive at general conclusions. Other investigations, like qualitative research, rely on deep exploration and understanding of individual cases in order to develop a greater understanding of the whole. That’s what we’re going to focus on today.

Qualitative research questions focus on the "how" and "why" of things, rather than the "what". They ask about people's experiences and perceptions , and can be used to explore a wide range of topics.

The following article will discuss the basics of qualitative research questions, including their key components, and how to craft them effectively. You'll also find 25 examples of effective qualitative research questions you can use as inspiration for your own studies.

Let’s get started!

What are qualitative research questions, and when are they used?

When researchers set out to conduct a study on a certain topic, their research is chiefly directed by an overarching question. This question provides focus for the study and helps determine what kind of data will be collected.

By starting with a question, we gain parameters and objectives for our line of research. What are we studying? For what purpose? How will we know when we’ve achieved our goals?

Of course, some of these questions can be described as quantitative in nature. When a research question is quantitative, it usually seeks to measure or calculate something in a systematic way.

For example:

  • How many people in our town use the library?
  • What is the average income of families in our city?
  • How much does the average person weigh?

Other research questions, however—and the ones we will be focusing on in this article—are qualitative in nature. Qualitative research questions are open-ended and seek to explore a given topic in-depth.

According to the Australian & New Zealand Journal of Psychiatry, “Qualitative research aims to address questions concerned with developing an understanding of the meaning and experience dimensions of humans’ lives and social worlds.”

This type of research can be used to gain a better understanding of people’s thoughts, feelings and experiences by “addressing questions beyond ‘what works’, towards ‘what works for whom when, how and why’, and focusing on intervention improvement rather than accreditation,” states one paper in Neurological Research and Practice.

Qualitative questions often produce rich data that can help researchers develop hypotheses for further quantitative study. For example:

  • What are people’s thoughts on the new library?
  • How does it feel to be a first-generation student at our school?
  • How do people feel about the changes taking place in our town?

As stated by a paper in Human Reproduction, “...‘qualitative’ methods are used to answer questions about experience, meaning, and perspective, most often from the standpoint of the participant. These data are usually not amenable to counting or measuring.”

Both quantitative and qualitative questions have their uses; in fact, they often complement each other. A well-designed research study will include a mix of both types of questions in order to gain a fuller understanding of the topic at hand.

If you would like to recruit unlimited participants for qualitative research for free and only pay for the interviews you conduct, try using Respondent today.

Crafting qualitative research questions for powerful insights

Now that we have a basic understanding of what qualitative research questions are and when they are used, let’s take a look at how you can begin crafting your own.

According to a study in the International Journal of Qualitative Studies in Education, there is a certain process researchers should follow when crafting their questions, which we’ll explore in more depth.

1. Beginning the process 

Start with a point of interest or curiosity, and pose a draft question or ‘self-question’. What do you want to know about the topic at hand? What is your specific curiosity? You may find it helpful to begin by writing several questions.

For example, if you’re interested in understanding how your customer base feels about a recent change to your product, you might ask: 

  • What made you decide to try the new product?
  • How do you feel about the change?
  • What do you think of the new design/functionality?
  • What benefits do you see in the change?

2. Create one overarching, guiding question 

At this point, narrow down the draft questions into one specific question. “Sometimes, these broader research questions are not stated as questions, but rather as goals for the study.”

As an example of this, you might narrow down the draft questions above into the following guiding question:

  • What are our customers’ thoughts on the recent change to our product?

3. Theoretical framing 

As you read the relevant literature and apply theory to your research, the question should be altered to achieve better outcomes. Experts agree that pursuing a qualitative line of inquiry should open up the possibility for questioning your original theories and altering the conceptual framework with which the research began.

If we continue with the current example, it’s possible you may uncover new data that informs your research and changes your question. For instance, you may discover that customers’ feelings about the change are not just a reaction to the change itself, but also to how it was implemented. In this case, your question would need to reflect this new information: 

  • How did customers react to the process of the change, as well as the change itself?

4. Ethical considerations 

A study in the International Journal of Qualitative Studies in Education stresses that ethics are “a central issue when a researcher proposes to study the lives of others, especially marginalized populations.” Consider how your question or inquiry will affect the people it relates to—their lives and their safety. Shape your question to avoid physical, emotional, or mental upset for the focus group.

In analyzing your question from this perspective, if you feel that it may cause harm, you should consider changing the question or ending your research project. Perhaps you’ve discovered that your question encourages harmful or invasive questioning, in which case you should reformulate it.

5. Writing the question 

The actual process of writing the question comes only after considering the above points. The purpose of crafting your research questions is to delve into what your study is specifically about. Remember that qualitative research questions are not trying to find the cause of an effect, but rather to explore the effect itself.

Your questions should be clear, concise, and understandable to those outside of your field. In addition, they should generate rich data. The questions you choose will also depend on the type of research you are conducting: 

  • If you’re doing a phenomenological study, your questions might be open-ended, in order to allow participants to share their experiences in their own words.
  • If you’re doing a grounded-theory study, your questions might be focused on generating a list of categories or themes.
  • If you’re doing ethnography, your questions might be about understanding the culture you’re studying.

When you have well-written questions, it is much easier to develop your research design and collect data that accurately reflects your inquiry.

In writing your questions, it may help to follow a simple step-by-step process: start from a point of curiosity, draft several questions, narrow them to one guiding question, refine it through theoretical framing, check it against ethical considerations, and then write the final version.


25 examples of expertly crafted qualitative research questions

It's easy enough to cover the theory of writing a qualitative research question, but sometimes it's best if you can see the process in practice. In this section, we'll list 25 examples of B2B and B2C-related qualitative questions.

Let's begin with five questions. We'll show you the question, explain why it's considered qualitative, and then give you an example of how it can be used in research.

1. What is the customer's perception of our company's brand?

Qualitative research questions are often open-ended and invite respondents to share their thoughts and feelings on a subject. This question is qualitative because it seeks customer feedback on the company's brand. 

This question can be used in research to understand how customers feel about the company's branding, what they like and don't like about it, and whether they would recommend it to others.

2. Why do customers buy our product?

This question is also qualitative because it seeks to understand the customer's motivations for purchasing a product. It can be used in research to identify the reasons  customers buy a certain product, what needs or desires the product fulfills for them, and how they feel about the purchase after using the product.

3. How do our customers interact with our products?

Again, this question is qualitative because it seeks to understand customer behavior. In this case, it can be used in research to see how customers use the product, how they interact with it, and what emotions or thoughts the product evokes in them.

4. What are our customers' biggest frustrations with our products?

By seeking to understand customer frustrations, this question is qualitative and can provide valuable insights. It can be used in research to help identify areas in which the company needs to make improvements with its products.

5. How do our customers feel about our customer service?

Rather than asking why customers like or dislike something, this question asks how they feel. This qualitative question can provide insights into customer satisfaction or dissatisfaction with a company. 

This type of question can be used in research to understand what customers think of the company's customer service and whether they feel it meets their needs.

20 more examples to refer to when writing your question

Now that you’re aware of what makes certain questions qualitative, let's move into 20 more examples of qualitative research questions:

  • How do your customers react when updates are made to your app interface?
  • How do customers feel when they complete their purchase through your ecommerce site?
  • What are your customers' main frustrations with your service?
  • How do people feel about the quality of your products compared to those of your competitors?
  • What motivates customers to refer their friends and family members to your product or service?
  • What are the main benefits your customers receive from using your product or service?
  • How do people feel when they finish a purchase on your website?
  • What are the main motivations behind customer loyalty to your brand?
  • How does your app make people feel emotionally?
  • For younger generations using your app, how does it make them feel about themselves?
  • What reputation do people associate with your brand?
  • How inclusive do people find your app?
  • In what ways are your customers' experiences unique to them?
  • What are the main areas of improvement your customers would like to see in your product or service?
  • How do people feel about their interactions with your tech team?
  • What are the top five reasons people use your online marketplace?
  • How does using your app make people feel in terms of connectedness?
  • What emotions do people experience when they're using your product or service?
  • Aside from the features of your product, what else about it attracts customers?
  • How does your company culture make people feel?

As you can see, these kinds of questions are completely open-ended. In a way, they allow the research and discoveries made along the way to direct the research. The questions are merely a starting point from which to explore.

This video, produced by qualitative research expert Kimberly Baker, offers tips on how to write good qualitative research questions.

Wrap-up: crafting your own qualitative research questions

Over the course of this article, we've explored what qualitative research questions are, why they matter, and how they should be written. Hopefully you now have a clear understanding of how to craft your own.

Remember, qualitative research questions should always be designed to explore a certain experience or phenomena in-depth, in order to generate powerful insights. As you write your questions, be sure to keep the following in mind:

  • Are you being inclusive of all relevant perspectives?
  • Are your questions specific enough to generate clear answers?
  • Will your questions allow for an in-depth exploration of the topic at hand?
  • Do the questions reflect your research goals and objectives?

If you can answer "yes" to all of the questions above, and you've followed the tips for writing qualitative research questions we shared in this article, then you're well on your way to crafting powerful queries that will yield valuable insights.


Recommended Resources:

  • How to Recruit Participants for Qualitative Research
  • The Best UX Research Tools of 2022
  • 10 Smart Tips for Conducting Better User Interviews
  • 50 Powerful Questions You Should Ask In Your Next User Interview
  • How To Find Participants For User Research: 13 Ways To Make It Happen
  • UX Diary Study: 5 Essential Tips For Conducting Better Studies
  • User Testing Recruitment: 10 Smart Tips To Find Participants Fast
  • Qualitative Research Questions: Gain Powerful Insights + 25 Examples
  • How To Successfully Recruit Participants for A Study (2022 Edition)
  • How To Properly Recruit Focus Group Participants (2022 Edition)
  • The Best Unmoderated Usability Testing Tools of 2022



Chapter 11. Interviewing

Introduction

Interviewing people is at the heart of qualitative research. It is not merely a way to collect data but an intrinsically rewarding activity—an interaction between two people that holds the potential for greater understanding and interpersonal development. Unlike many of our daily interactions with others that are fairly shallow and mundane, sitting down with a person for an hour or two and really listening to what they have to say is a profound and deep enterprise, one that can provide not only “data” for you, the interviewer, but also self-understanding and a feeling of being heard for the interviewee. I always approach interviewing with a deep appreciation for the opportunity it gives me to understand how other people experience the world. That said, there is not one kind of interview but many, and some of these are shallower than others. This chapter will provide you with an overview of interview techniques but with a special focus on the in-depth semistructured interview guide approach, which is the approach most widely used in social science research.

An interview can be variously defined as “a conversation with a purpose” (Lune and Berg 2018) and an attempt to understand the world from the point of view of the person being interviewed: “to unfold the meaning of peoples’ experiences, to uncover their lived world prior to scientific explanations” (Kvale 2007). It is a form of active listening in which the interviewer steers the conversation to subjects and topics of interest to their research but also manages to leave enough space for those interviewed to say surprising things. Achieving that balance is a tricky thing, which is why most practitioners believe interviewing is both an art and a science. In my experience as a teacher, there are some students who are “natural” interviewers (often they are introverts), but anyone can learn to conduct interviews, and everyone, even those of us who have been doing this for years, can improve their interviewing skills. This might be a good time to highlight the fact that the interview is a product between interviewer and interviewee and that this product is only as good as the rapport established between the two participants. Active listening is the key to establishing this necessary rapport.

Patton (2002) makes the argument that we use interviews because there are certain things that are not observable. In particular, “we cannot observe feelings, thoughts, and intentions. We cannot observe behaviors that took place at some previous point in time. We cannot observe situations that preclude the presence of an observer. We cannot observe how people have organized the world and the meanings they attach to what goes on in the world. We have to ask people questions about those things” (341).

Types of Interviews

There are several distinct types of interviews. Imagine a continuum (figure 11.1). On one side are unstructured conversations—the kind you have with your friends. No one is in control of those conversations, and what you talk about is often random—whatever pops into your head. There is no secret, underlying purpose to your talking—if anything, the purpose is to talk to and engage with each other, and the words you use and the things you talk about are a little beside the point. An unstructured interview is a little like this informal conversation, except that one of the parties to the conversation (you, the researcher) does have an underlying purpose, and that is to understand the other person. You are not friends speaking for no purpose, but it might feel just as unstructured to the “interviewee” in this scenario. That is one side of the continuum. On the other side are fully structured and standardized survey-type questions asked face-to-face. Here it is very clear who is asking the questions and who is answering them. This doesn’t feel like a conversation at all! A lot of people new to interviewing have this (erroneously!) in mind when they think about interviews as data collection. Somewhere in the middle of these two extreme cases is the “semistructured” interview, in which the researcher uses an “interview guide” to gently move the conversation to certain topics and issues. This is the primary form of interviewing for qualitative social scientists and will be what I refer to as interviewing for the rest of this chapter, unless otherwise specified.

Figure 11.1. Types of interviews along a continuum: unstructured conversations, semistructured interview, structured interview, survey questions.

Informal (unstructured conversations). This is the most “open-ended” approach to interviewing. It is particularly useful in conjunction with observational methods (see chapters 13 and 14). There are no predetermined questions. Each interview will be different. Imagine you are researching the Oregon Country Fair, an annual event in Veneta, Oregon, that includes live music, artisan craft booths, face painting, and a lot of people walking through forest paths. It’s unlikely that you will be able to get a person to sit down with you and talk intensely about a set of questions for an hour and a half. But you might be able to sidle up to several people and engage with them about their experiences at the fair. You might have a general interest in what attracts people to these events, so you could start a conversation by asking strangers why they are here or why they come back every year. That’s it. Then you have a conversation that may lead you anywhere. Maybe one person tells a long story about how their parents brought them here when they were a kid. A second person talks about how this is better than Burning Man. A third person shares their favorite traveling band. And yet another enthuses about the public library in the woods. During your conversations, you also talk about a lot of other things—the weather, the utilikilts for sale, the fact that a favorite food booth has disappeared. It’s all good. You may not be able to record these conversations. Instead, you might jot down notes on the spot and then, when you have the time, write down as much as you can remember about the conversations in long fieldnotes. Later, you will have to sit down with these fieldnotes and try to make sense of all the information (see chapters 18 and 19).

Interview guide (semistructured interview). This is the primary type employed by social science qualitative researchers. The researcher creates an “interview guide” in advance, which she uses in every interview. In theory, every person interviewed is asked the same questions. In practice, every person interviewed is asked mostly the same topics but not always the same questions, as the whole point of a “guide” is that it guides the direction of the conversation but does not command it. The guide is typically between five and ten questions or question areas, sometimes with suggested follow-ups or prompts. For example, one question might be “What was it like growing up in Eastern Oregon?” with prompts such as “Did you live in a rural area? What kind of high school did you attend?” to help the conversation develop. These interviews generally take place in a quiet place (not a busy walkway during a festival) and are recorded. The recordings are transcribed, and those transcriptions then become the “data” that is analyzed (see chapters 18 and 19). The conventional length of one of these types of interviews is between one hour and two hours, optimally ninety minutes. Less than one hour doesn’t allow for much development of questions and thoughts, and two hours (or more) is a lot of time to ask someone to sit still and answer questions. If you have a lot of ground to cover, and the person is willing, I highly recommend two separate interview sessions, with the second session being slightly shorter than the first (e.g., ninety minutes the first day, sixty minutes the second). There are lots of good reasons for this, but the most compelling one is that this allows you to listen to the first day’s recording and catch anything interesting you might have missed in the moment and so develop follow-up questions that can probe further.
This also allows the person being interviewed to have some time to think about the issues raised in the interview and go a little deeper with their answers.

Standardized questionnaire with open responses (structured interview). This is the type of interview a lot of people have in mind when they hear “interview”: a researcher comes to your door with a clipboard and proceeds to ask you a series of questions. These questions are all the same whoever answers the door; they are “standardized.” Both the wording and the exact order are important, as people’s responses may vary depending on how and when a question is asked. These are qualitative only in that the questions allow for “open-ended responses”: people can say whatever they want rather than select from a predetermined menu of responses. For example, a survey I collaborated on included this open-ended response question: “How does class affect one’s career success in sociology?” Some of the answers were simply one word long (e.g., “debt”), and others were long statements with stories and personal anecdotes. It is possible to be surprised by the responses. Although it’s a stretch to call this kind of questioning a conversation, it does allow the person answering the question some degree of freedom in how they answer.

Survey questionnaire with closed responses (not an interview!). Standardized survey questions with specific answer options (e.g., closed responses) are not really interviews at all, and they do not generate qualitative data. For example, if we included five options for the question “How does class affect one’s career success in sociology?”—(1) debt, (2) social networks, (3) alienation, (4) family doesn’t understand, (5) type of grad program—we leave no room for surprises at all. Instead, we would most likely look at patterns around these responses, thinking quantitatively rather than qualitatively (e.g., using regression analysis techniques, we might find that working-class sociologists were twice as likely to bring up alienation). It can sometimes be confusing for new students because the very same survey can include both closed-ended and open-ended questions. The key is to think about how these will be analyzed and to what level surprises are possible. If your plan is to turn all responses into a number and make predictions about correlations and relationships, you are no longer conducting qualitative research. This is true even if you are conducting this survey face-to-face with a real live human. Closed-response questions are not conversations of any kind, purposeful or not.

In summary, the semistructured interview guide approach is the predominant form of interviewing for social science qualitative researchers because it allows a high degree of freedom of responses from those interviewed (thus allowing for novel discoveries) while still maintaining some connection to a research question area or topic of interest. The rest of the chapter assumes the employment of this form.

Creating an Interview Guide

Your interview guide is the instrument used to bridge your research question(s) and what the people you are interviewing want to tell you. Unlike a standardized questionnaire, the questions actually asked do not need to be exactly what you have written down in your guide. The guide is meant to create space for those you are interviewing to talk about the phenomenon of interest, but sometimes you are not even sure what that phenomenon is until you start asking questions. A priority in creating an interview guide is to ensure it offers space. One of the worst mistakes is to create questions that are so specific that the person answering them will not stray. Relatedly, questions that sound “academic” will shut down a lot of respondents. A good interview guide invites respondents to talk about what is important to them rather than making them feel they are performing or being evaluated by you.

Good interview questions should not sound like your “research question” at all. For example, let’s say your research question is “How do patriarchal assumptions influence men’s understanding of climate change and responses to climate change?” It would be worse than unhelpful to ask a respondent, “How do your assumptions about the role of men affect your understanding of climate change?” You need to unpack this into manageable nuggets that pull your respondent into the area of interest without leading him anywhere. You could start by asking him what he thinks about climate change in general. Or, even better, whether he has any concerns about heatwaves or increased tornadoes or polar icecaps melting. Once he starts talking about that, you can ask follow-up questions that bring in issues around gendered roles, perhaps asking if he is married (to a woman) and whether his wife shares his thoughts and, if not, how they negotiate that difference. The fact is, you won’t really know the right questions to ask until he starts talking.

There are several distinct types of questions that can be used in your interview guide, either as main questions or as follow-up probes. If you remember that the point is to leave space for the respondent, you will craft a much more effective interview guide! You will also want to think about the place of time in both the questions themselves (past, present, future orientations) and the sequencing of the questions.

Researcher Note

Suggestion : As you read the next three sections (types of questions, temporality, question sequence), have in mind a particular research question, and try to draft questions and sequence them in a way that opens space for a discussion that helps you answer your research question.

Type of Questions

Experience and behavior questions ask about what a respondent does regularly (their behavior) or has done (their experience). These are relatively easy questions for people to answer because they appear more “factual” and less subjective. This makes them good opening questions. For the study on climate change above, you might ask, “Have you ever experienced an unusual weather event? What happened?” Or “You said you work outside? What is a typical summer workday like for you? How do you protect yourself from the heat?”

Opinion and values questions , in contrast, ask questions that get inside the minds of those you are interviewing. “Do you think climate change is real? Who or what is responsible for it?” are two such questions. Note that you don’t have to literally ask, “What is your opinion of X?” but you can find a way to ask the specific question relevant to the conversation you are having. These questions are a bit trickier to ask because the answers you get may depend in part on how your respondent perceives you and whether they want to please you or not. We’ve talked a fair amount about being reflective. Here is another place where this comes into play. You need to be aware of the effect your presence might have on the answers you are receiving and adjust accordingly. If you are a woman who is perceived as liberal asking a man who identifies as conservative about climate change, there is a lot of subtext that can be going on in the interview. There is no one right way to resolve this, but you must at least be aware of it.

Feeling questions are questions that ask respondents to draw on their emotional responses. It’s pretty common for academic researchers to forget that we have bodies and emotions, but people’s understandings of the world often operate at this affective level, sometimes unconsciously or barely consciously. It is a good idea to include questions that leave space for respondents to remember, imagine, or relive emotional responses to particular phenomena. “What was it like when you heard your cousin’s house burned down in that wildfire?” doesn’t explicitly use any emotion words, but it allows your respondent to remember what was probably a pretty emotional day. And if they respond with emotional neutrality, that is pretty interesting data too. Note that asking someone “How do you feel about X?” is not always going to evoke an emotional response, as they might simply turn around and respond with “I think that…” It is better to craft a question that actually pushes the respondent into the affective category. This might be a specific follow-up to an experience and behavior question—for example, “You just told me about your daily routine during the summer heat. Do you worry it is going to get worse?” or “Have you ever been afraid it will be too hot to get your work accomplished?”

Knowledge questions ask respondents what they actually know about something factual. We have to be careful when we ask these types of questions so that respondents do not feel like we are evaluating them (which would shut them down), but, for example, it is helpful to know when you are having a conversation about climate change that your respondent does in fact know that unusual weather events have increased and that these have been attributed to climate change! Asking these questions can set the stage for deeper questions and can ensure that the conversation makes the same kind of sense to both participants. For example, a conversation about political polarization can be put back on track once you realize that the respondent doesn’t really have a clear understanding that there are two parties in the US. Instead of asking a series of questions about Republicans and Democrats, you might shift your questions to talk more generally about political disagreements (e.g., “people against abortion”). And sometimes what you do want to know is the level of knowledge about a particular program or event (e.g., “Are you aware you can discharge your student loans through the Public Service Loan Forgiveness program?”).

Sensory questions call on all senses of the respondent to capture deeper responses. These are particularly helpful in sparking memory. “Think back to your childhood in Eastern Oregon. Describe the smells, the sounds…” Or you could use these questions to help a person access the full experience of a setting they customarily inhabit: “When you walk through the doors to your office building, what do you see? Hear? Smell?” As with feeling questions , these questions often supplement experience and behavior questions . They are another way of allowing your respondent to report fully and deeply rather than remain on the surface.

Creative questions employ illustrative examples, suggested scenarios, or simulations to get respondents to think more deeply about an issue, topic, or experience. There are many options here. In The Trouble with Passion, Erin Cech (2021) provides a scenario in which “Joe” is trying to decide whether to stay at his decent but boring computer job or follow his passion by opening a restaurant. She asks respondents, “What should Joe do?” Their answers illuminate the attraction of “passion” in job selection. In my own work, I have used a news story about an upwardly mobile young man who no longer has time to see his mother and sisters to probe respondents’ feelings about the costs of social mobility. Jessi Streib and Betsy Leondar-Wright have used single-page cartoon “scenes” to elicit evaluations of potential racial discrimination, sexual harassment, and classism. Barbara Sutton (2010) has employed lists of words (“strong,” “mother,” “victim”) on notecards that she fans out and asks her female respondents to select and discuss.

Background/Demographic Questions

You most definitely will want to know more about the person you are interviewing in terms of conventional demographic information, such as age, race, gender identity, occupation, and educational attainment. These are not questions that normally open up inquiry. [1] For this reason, my practice has been to include a separate “demographic questionnaire” sheet that I ask each respondent to fill out at the conclusion of the interview. Only include those aspects that are relevant to your study. For example, if you are not exploring religion or religious affiliation, do not include questions about a person’s religion on the demographic sheet. See the example provided at the end of this chapter.

Temporality

Any type of question can have a past, present, or future orientation. For example, if you are asking a behavior question about workplace routine, you might ask the respondent to talk about past work, present work, and ideal (future) work. Similarly, if you want to understand how people cope with natural disasters, you might ask your respondent how they felt then during the wildfire and now in retrospect and whether and to what extent they have concerns for future wildfire disasters. It’s a relatively simple suggestion—don’t forget to ask about past, present, and future—but it can have a big impact on the quality of the responses you receive.

Question Sequence

Having a list of good questions or good question areas is not enough to make a good interview guide. You will want to pay attention to the order in which you ask your questions. Even though any one respondent can derail this order (perhaps by jumping to answer a question you haven’t yet asked), a good advance plan is always helpful. When thinking about sequence, remember that your goal is to get your respondent to open up to you and to say things that might surprise you. To establish rapport, it is best to start with nonthreatening questions. Asking about the present is often the safest place to begin, followed by the past (they have to know you a little bit to get there), and lastly, the future (talking about hopes and fears requires the most rapport). To allow for surprises, it is best to move from very general questions to more particular questions only later in the interview. This ensures that respondents have the freedom to bring up the topics that are relevant to them rather than feel like they are constrained to answer you narrowly. For example, refrain from asking about particular emotions until these have come up previously—don’t lead with them. Often, your more particular questions will emerge only during the course of the interview, tailored to what is emerging in conversation.

Once you have a set of questions, read through them aloud and imagine you are being asked the same questions. Does the set of questions have a natural flow? Would you be willing to answer the very first question to a total stranger? Does your sequence establish facts and experiences before moving on to opinions and values? Did you include prefatory statements, where necessary; transitions; and other announcements? These can be as simple as “Hey, we talked a lot about your experiences as a barista while in college.… Now I am turning to something completely different: how you managed friendships in college.” That is an abrupt transition, but it has been softened by your acknowledgment of that.

Probes and Flexibility

Once you have the interview guide, you will also want to leave room for probes and follow-up questions. As in the sample probe included here, you can write out the obvious probes and follow-up questions in advance. You might not need them, as your respondent might anticipate them and include full responses to the original question. Or you might need to tailor them to how your respondent answered the question. Some common probes and follow-up questions include asking for more details (When did that happen? Who else was there?), asking for elaboration (Could you say more about that?), asking for clarification (Does that mean what I think it means or something else? I understand what you mean, but someone else reading the transcript might not), and asking for contrast or comparison (How did this experience compare with last year’s event?). “Probing is a skill that comes from knowing what to look for in the interview, listening carefully to what is being said and what is not said, and being sensitive to the feedback needs of the person being interviewed” (Patton 2002:374). It takes work! And energy. Many interviewers, myself included, report feeling emotionally and even physically drained after conducting an interview. You are tasked with active listening and rearranging your interview guide as needed on the fly. If you only ask the questions written down in your interview guide with no deviations, you are doing it wrong. [2]

The Final Question

Every interview guide should include a very open-ended final question that allows for the respondent to say whatever it is they have been dying to tell you but you’ve forgotten to ask. About half the time they are tired too and will tell you they have nothing else to say. But incredibly, some of the most honest and complete responses take place here, at the end of a long interview. You have to realize that the person being interviewed is often discovering things about themselves as they talk to you and that this process of discovery can lead to new insights for them. Making space at the end is therefore crucial. Be sure you convey that you actually do want them to tell you more, that the offer of “anything else?” is not read as an empty convention where the polite response is no. Here is where you can pull from that active listening and tailor the final question to the particular person. For example, “I’ve asked you a lot of questions about what it was like to live through that wildfire. I’m wondering if there is anything I’ve forgotten to ask, especially because I haven’t had that experience myself” is a much more inviting final question than “Great. Anything you want to add?” It’s also helpful to convey to the person that you have the time to listen to their full answer, even if the allotted time is nearly at an end. After all, there are no more questions to ask, so the respondent knows exactly how much time is left. Do them the courtesy of listening to them!

Conducting the Interview

Once you have your interview guide, you are on your way to conducting your first interview. I always practice my interview guide with a friend or family member. I do this even when the questions don’t make perfect sense for them, as it still helps me realize which questions make no sense, are poorly worded (too academic), or don’t follow sequentially. I also practice the routine I will use for interviewing, which goes something like this:

  • Introduce myself and reintroduce the study
  • Provide consent form and ask them to sign and retain/return copy
  • Ask if they have any questions about the study before we begin
  • Ask if I can begin recording
  • Ask questions (from interview guide)
  • Turn off the recording device
  • Ask if they are willing to fill out my demographic questionnaire
  • Collect questionnaire and, without looking at the answers, place in same folder as signed consent form
  • Thank them and depart

A note on remote interviewing: Interviews have traditionally been conducted face-to-face in a private or quiet public setting. You don’t want a lot of background noise, as this will make transcriptions difficult. During the recent global pandemic, many interviewers, myself included, learned the benefits of interviewing remotely. Although face-to-face is still preferable for many reasons, Zoom interviewing is not a bad alternative, and it does allow more interviews across great distances. Zoom also includes automatic transcription, which significantly cuts down on the time it normally takes to convert our conversations into “data” to be analyzed. These automatic transcriptions are not perfect, however, and you will still need to listen to the recording and clarify and clean up the transcription. Nor do automatic transcriptions include notations of body language or change of tone, which you may want to include. When interviewing remotely, you will want to collect the consent form before you meet: ask them to read, sign, and return it as an email attachment. I think it is better to ask for the demographic questionnaire after the interview, but because some respondents may never return it then, it is probably best to ask for this at the same time as the consent form, in advance of the interview.

What should you bring to the interview? I would recommend bringing two copies of the consent form (one for you and one for the respondent), a demographic questionnaire, a manila folder in which to place the signed consent form and filled-out demographic questionnaire, a printed copy of your interview guide (I print with three-inch right margins so I can jot down notes on the page next to relevant questions), a pen, a recording device, and water.

After the interview, you will want to secure the signed consent form in a locked filing cabinet (if in print) or a password-protected folder on your computer. Using Excel or a similar program that allows tables/spreadsheets, create an identifying number for your interview that links to the consent form without using the name of your respondent. For example, let’s say that I conduct interviews with US politicians, and the first person I meet with is George W. Bush. I will assign the transcription the number “INT#001” and add it to the signed consent form. [3] The signed consent form goes into a locked filing cabinet, and I never use the name “George W. Bush” again. I take the information from the demographic sheet, open my Excel spreadsheet, and add the relevant information in separate columns for the row INT#001: White, male, Republican. When I interview Bill Clinton as my second interview, I include a second row: INT#002: White, male, Democrat. And so on. The only link to the actual name of the respondent and this information is the fact that the consent form (unavailable to anyone but me) has stamped on it the interview number.

Many students get very nervous before their first interview. Actually, many of us are always nervous before the interview! But do not worry—this is normal, and it does pass. Chances are, you will be pleasantly surprised at how comfortable it begins to feel. These “purposeful conversations” are often a delight for both participants. This is not to say that things never go wrong. I often have my students practice several “bad scenarios” (e.g., a respondent that you cannot get to open up; a respondent who is too talkative and dominates the conversation, steering it away from the topics you are interested in; emotions that completely take over; or shocking disclosures you are ill-prepared to handle), but most of the time, things go quite well. Be prepared for the unexpected, but know that the reason interviews are so popular as a technique of data collection is that they are usually richly rewarding for both participants.

One thing that I stress to my methods students and remind myself about is that interviews are still conversations between people. If there’s something you might feel uncomfortable asking someone about in a “normal” conversation, you will likely also feel a bit of discomfort asking it in an interview. Maybe more importantly, your respondent may feel uncomfortable. Social research—especially about inequality—can be uncomfortable. And it’s easy to slip into an abstract, intellectualized, or removed perspective as an interviewer. This is one reason trying out interview questions is important. Another is that sometimes the question sounds good in your head but doesn’t work as well out loud in practice. I learned this the hard way when a respondent asked me how I would answer the question I had just posed, and I realized that not only did I not really know how I would answer it, but I also wasn’t quite as sure I knew what I was asking as I had thought.

—Elizabeth M. Lee, Associate Professor of Sociology at Saint Joseph’s University, author of Class and Campus Life , and co-author of Geographies of Campus Inequality

How Many Interviews?

Your research design has included a targeted number of interviews and a recruitment plan (see chapter 5). Follow your plan, but remember that “ saturation ” is your goal. You interview as many people as you can until you reach a point at which you are no longer surprised by what they tell you. This means not that no one after your first twenty interviews will have surprising, interesting stories to tell you but rather that the picture you are forming about the phenomenon of interest to you from a research perspective has come into focus, and none of the interviews are substantially refocusing that picture. That is when you should stop collecting interviews. Note that to know when you have reached this, you will need to read your transcripts as you go. More about this in chapters 18 and 19.

Your Final Product: The Ideal Interview Transcript

A good interview transcript will demonstrate a subtly controlled conversation by the skillful interviewer. In general, you want to see replies that are about one paragraph long, not short sentences and not running on for several pages. Although it is sometimes necessary to follow respondents down tangents, it is also often necessary to pull them back to the questions that form the basis of your research study. This is not really a free conversation, although it may feel like that to the person you are interviewing.

Final Tips from an Interview Master

Annette Lareau is arguably one of the masters of the trade. In Listening to People , she provides several guidelines for good interviews and then offers a detailed example of an interview gone wrong and how it could be addressed (please see the “Further Readings” at the end of this chapter). Here is an abbreviated version of her set of guidelines: (1) interview respondents who are experts on the subjects of most interest to you (as a corollary, don’t ask people about things they don’t know); (2) listen carefully and talk as little as possible; (3) keep in mind what you want to know and why you want to know it; (4) be a proactive interviewer (subtly guide the conversation); (5) assure respondents that there aren’t any right or wrong answers; (6) use the respondent’s own words to probe further (this both allows you to accurately identify what you heard and pushes the respondent to explain further); (7) reuse effective probes (don’t reinvent the wheel as you go—if repeating the words back works, do it again and again); (8) focus on learning the subjective meanings that events or experiences have for a respondent; (9) don’t be afraid to ask a question that draws on your own knowledge (unlike trial lawyers who are trained never to ask a question for which they don’t already know the answer, sometimes it’s worth it to ask risky questions based on your hypotheses or just plain hunches); (10) keep thinking while you are listening (so difficult…and important); (11) return to a theme raised by a respondent if you want further information; (12) be mindful of power inequalities (and never ever coerce a respondent to continue the interview if they want out); (13) take control with overly talkative respondents; (14) expect overly succinct responses, and develop strategies for probing further; (15) balance digging deep and moving on; (16) develop a plan to deflect questions (e.g., let them know you are happy to answer any questions at the end of the interview, but you don’t want to take time away from them now); and at the end, (17) check to see whether you have asked all your questions. You don’t always have to ask everyone the same set of questions, but if there is a big area you have forgotten to cover, now is the time to recover (Lareau 2021:93–103).

Sample: Demographic Questionnaire

ASA Taskforce on First-Generation and Working-Class Persons in Sociology – Class Effects on Career Success

Supplementary Demographic Questionnaire

Thank you for your participation in this interview project. We would like to collect a few pieces of key demographic information from you to supplement our analyses. Your answers to these questions will be kept confidential and stored by ID number. All of your responses here are entirely voluntary!

What best captures your race/ethnicity? (please check any/all that apply)

  • White (Non Hispanic/Latina/o/x)
  • Black or African American
  • Hispanic, Latino/a/x, or Spanish origin
  • Asian or Asian American
  • American Indian or Alaska Native
  • Middle Eastern or North African
  • Native Hawaiian or Pacific Islander
  • Other: (please write in: ________________)

What is your current position?

  • Grad Student
  • Full Professor

Please check any and all of the following that apply to you:

  • I identify as a working-class academic
  • I was the first in my family to graduate from college
  • I grew up poor

What best reflects your gender?

  • Transgender female/Transgender woman
  • Transgender male/Transgender man
  • Genderqueer/Gender nonconforming

Anything else you would like us to know about you?

Example: Interview Guide

In this example, follow-up prompts are italicized. Note the sequence of questions. The second question often elicits an entire life history, answering several later questions in advance.

Introduction Script/Question

Thank you for participating in our survey of ASA members who identify as first-generation or working-class. As you may have heard, ASA has sponsored a taskforce on first-generation and working-class persons in sociology, and we are interested in hearing from those who so identify. Your participation in this interview will help advance our knowledge in this area.

  • The first thing we would like to ask you is why you have volunteered to be part of this study. What does it mean to you to be first-gen or working class? Why were you willing to be interviewed?
  • How did you decide to become a sociologist?
  • Can you tell me a little bit about where you grew up? ( prompts: what did your parent(s) do for a living?  What kind of high school did you attend?)
  • Has this identity been salient to your experience? (How? How much?)
  • How welcoming was your grad program? Your first academic employer?
  • Why did you decide to pursue sociology at the graduate level?
  • Did you experience culture shock in college? In graduate school?
  • Has your FGWC status shaped how you’ve thought about where you went to school? debt? etc?
  • Were you mentored? How did this work (not work)?  How might it?
  • What did you consider when deciding where to go to grad school? Where to apply for your first position?
  • What, to you, is a mark of career success? Have you achieved that success?  What has helped or hindered your pursuit of success?
  • Do you think sociology, as a field, cares about prestige?
  • Let’s talk a little bit about intersectionality. How does being first-gen/working class work alongside other identities that are important to you?
  • What do your friends and family think about your career? Have you had any difficulty relating to family members or past friends since becoming highly educated?
  • Do you have any debt from college/grad school? Are you concerned about this?  Could you explain more about how you paid for college/grad school?  (here, include assistance from family, fellowships, scholarships, etc.)
  • (You’ve mentioned issues or obstacles you had because of your background.) What could have helped?  Or, who or what did? Can you think of fortuitous moments in your career?
  • Do you have any regrets about the path you took?
  • Is there anything else you would like to add? Anything that the Taskforce should take note of, that we did not ask you about here?

Further Readings

Britten, Nicky. 1995. “Qualitative Interviews in Medical Research.” BMJ: British Medical Journal 311(6999):251–253. A good basic overview of interviewing, particularly useful for students of public health and medical research generally.

Corbin, Juliet, and Janice M. Morse. 2003. “The Unstructured Interactive Interview: Issues of Reciprocity and Risks When Dealing with Sensitive Topics.” Qualitative Inquiry 9(3):335–354. Weighs the potential benefits and harms of conducting interviews on topics that may cause emotional distress. Argues that the researcher’s skills and code of ethics should ensure that the interviewing process provides more of a benefit to both participant and researcher than a harm to the former.

Gerson, Kathleen, and Sarah Damaske. 2020. The Science and Art of Interviewing . New York: Oxford University Press. A useful guidebook/textbook for both undergraduates and graduate students, written by sociologists.

Kvale, Steinar. 2007. Doing Interviews . London: SAGE. An easy-to-follow guide to conducting and analyzing interviews by a psychologist.

Lamont, Michèle, and Ann Swidler. 2014. “Methodological Pluralism and the Possibilities and Limits of Interviewing.” Qualitative Sociology 37(2):153–171. Written as a response to debates over the relative value of interview-based and ethnographic studies, defending the particular strengths of interviewing. This is a must-read article for anyone seriously engaging in qualitative research!

Pugh, Allison J. 2013. “What Good Are Interviews for Thinking about Culture? Demystifying Interpretive Analysis.” American Journal of Cultural Sociology 1(1):42–68. Another defense of interviewing written against those who champion ethnographic methods as superior, particularly in the area of studying culture. A classic.

Rapley, Timothy John. 2001. “The ‘Artfulness’ of Open-Ended Interviewing: Some considerations in analyzing interviews.” Qualitative Research 1(3):303–323. Argues for the importance of “local context” of data production (the relationship built between interviewer and interviewee, for example) in properly analyzing interview data.

Weiss, Robert S. 1995. Learning from Strangers: The Art and Method of Qualitative Interview Studies . New York: Simon and Schuster. A classic and well-regarded textbook on interviewing. Because Weiss has extensive experience conducting surveys, he contrasts the qualitative interview with the survey questionnaire well; particularly useful for those trained in the latter.

  • I say “normally” because how people understand their various identities can itself be an expansive topic of inquiry. Here, I am merely talking about collecting otherwise unexamined demographic data, similar to how we ask people to check boxes on surveys. ↵
  • Again, this applies to “semistructured in-depth interviewing.” When conducting standardized questionnaires, you will want to ask each question exactly as written, without deviations! ↵
  • I always include “INT” in the number because I sometimes have other kinds of data with their own numbering: FG#001 would mean the first focus group, for example. I also always include three-digit spaces, as this allows for up to 999 interviews (or, more realistically, allows for me to interview up to one hundred persons without having to reset my numbering system). ↵
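As a small illustration of the numbering convention described in the note above, identifiers like INT#001 can be generated with a few lines of code. This is a hypothetical sketch: only the INT/FG prefixes and the three-digit padding come from the note; the function name is invented for illustration.

```python
def make_id(kind: str, n: int) -> str:
    """Build a data-source identifier like INT#001 or FG#012.

    kind: a short prefix for the data type ("INT" for interviews,
          "FG" for focus groups, per the convention described above).
    n:    the sequence number, zero-padded to three digits, which
          allows up to 999 records without renumbering.
    """
    if not 1 <= n <= 999:
        raise ValueError("three-digit scheme supports 1-999")
    return f"{kind}#{n:03d}"

print(make_id("INT", 1))   # INT#001
print(make_id("FG", 12))   # FG#012
```

The zero-padding is what keeps transcript filenames sorting correctly in a folder once you pass your tenth interview.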

A method of data collection in which the researcher asks the participant questions; the answers to these questions are often recorded and transcribed verbatim. There are many different kinds of interviews - see also semistructured interview , structured interview , and unstructured interview .

A document listing key questions and question areas for use during an interview.  It is used most often for semi-structured interviews.  A good interview guide may have no more than ten primary questions for two hours of interviewing, but these ten questions will be supplemented by probes and relevant follow-ups throughout the interview.  Most IRBs require the inclusion of the interview guide in applications for review.  See also interview and  semi-structured interview .

A data-collection method that relies on casual, conversational, and informal interviewing.  Despite its apparent conversational nature, the researcher usually has a set of particular questions or question areas in mind but allows the interview to unfold spontaneously.  This is a common data-collection technique among ethnographers.  Compare to the semi-structured or in-depth interview .

A form of interview that follows a standard guide of questions asked, although the order of the questions may change to match the particular needs of each individual interview subject, and probing “follow-up” questions are often added during the course of the interview.  The semi-structured interview is the primary form of interviewing used by qualitative researchers in the social sciences.  It is sometimes referred to as an “in-depth” interview.  See also interview and  interview guide .

The cluster of data-collection tools and techniques that involve observing interactions between people, the behaviors and practices of individuals (sometimes in contrast to what they say about how they act and behave), and cultures in context.  Observational methods are the key tools employed by ethnographers and practitioners of Grounded Theory.

Follow-up questions used in a semi-structured interview  to elicit further elaboration.  Suggested prompts can be included in the interview guide  to be used/deployed depending on how the initial question was answered or if the topic of the prompt does not emerge spontaneously.

A form of interview that follows a strict set of questions, asked in a particular order, for all interview subjects.  The questions are also the kind that elicit short answers, and the data are more “informative” than probing.  This is often used in mixed-methods studies, accompanying a survey instrument.  Because there is no room for nuance or the exploration of meaning in structured interviews, qualitative researchers tend to employ semi-structured interviews instead.  See also interview.

The point at which you can conclude data collection because every person you are interviewing, the interaction you are observing, or content you are analyzing merely confirms what you have already noted.  Achieving saturation is often used as the justification for the final sample size.

An interview variant in which a person’s life story is elicited in a narrative form.  Turning points and key themes are established by the researcher and used as data points for further analysis.

Introduction to Qualitative Research Methods Copyright © 2023 by Allison Hurst is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License , except where otherwise noted.

How to conduct qualitative interviews (tips and best practices)

Last updated: 18 May 2023

Reviewed by: Miroslav Damyanov

Conducting qualitative interviews can be challenging, even for seasoned researchers. Poorly conducted interviews can lead to inaccurate or incomplete data, significantly compromising the validity and reliability of your research findings.

When planning to conduct qualitative interviews, you must adequately prepare yourself to get the most out of your data. Fortunately, there are specific tips and best practices that can help you conduct qualitative interviews effectively.

  • What is a qualitative interview?

A qualitative interview is a research technique used to gather in-depth information about people's experiences, attitudes, beliefs, and perceptions. Unlike a structured questionnaire or survey, a qualitative interview is a flexible, conversational approach that allows the interviewer to delve into the interviewee's responses and explore their insights and experiences.

In a qualitative interview, the researcher typically develops a set of open-ended questions that provide a framework for the conversation. However, the interviewer can also adapt to the interviewee's responses and ask follow-up questions to understand their experiences and views better.

  • How to conduct interviews in qualitative research

Conducting interviews involves a well-planned and deliberate process to collect accurate and valid data. 

Here’s a step-by-step guide on how to conduct interviews in qualitative research, broken down into three stages:

1. Before the interview

The first step in conducting a qualitative interview is determining your research question. This will help you identify the type of participants you need to recruit. Once you have your research question, you can start recruiting participants by identifying potential candidates and contacting them to gauge their interest in participating in the study.

After that, it's time to develop your interview questions. These should be open-ended questions that will elicit detailed responses from participants. You'll also need to get consent from the participants, ideally in writing, to ensure that they understand the purpose of the study and their rights as participants. Finally, choose a comfortable and private location to conduct the interview and prepare the interview guide.

2. During the interview

Start by introducing yourself and explaining the purpose of the study. Establish a rapport by putting the participants at ease and making them feel comfortable. Use the interview guide to ask the questions, but be flexible and ask follow-up questions to gain more insight into the participants' responses. 

Take notes during the interview, and ask permission to record the interview for transcription purposes. Be mindful of the time, and cover all the questions in the interview guide.

3. After the interview

Once the interview is over, transcribe the interview if you recorded it. If you took notes, review and organize them to make sure you capture all the important information. Then, analyze the data you collected by identifying common themes and patterns. Use the findings to answer your research question. 

Finally, debrief with the participants to thank them for their time, provide feedback on the study, and answer any questions they may have.


  • What kinds of questions should you ask in a qualitative interview?

Qualitative interviews involve asking questions that encourage participants to share their experiences, opinions, and perspectives on a particular topic. These questions are designed to elicit detailed and nuanced responses rather than simple yes or no answers.

Effective questions in a qualitative interview are generally open-ended and non-leading. They avoid presuppositions or assumptions about the participant's experience and allow them to share their views in their own words. 

In customer research , you might ask questions such as:

What motivated you to choose our product/service over our competitors?

How did you first learn about our product/service?

Can you walk me through your experience with our product/service?

What improvements or changes would you suggest for our product/service?

Have you recommended our product/service to others, and if so, why?

The key is to ask questions relevant to the research topic and allow participants to share their experiences meaningfully and informally. 

  • How to determine the right qualitative interview participants

Choosing the right participants for a qualitative interview is a crucial step in ensuring the success and validity of the research. You need to consider several factors to determine the right participants for a qualitative interview. These may include:

Relevant experiences: Participants should have experiences related to the research topic that can provide valuable insights.

Diversity: Aim to include diverse participants to ensure the study's findings are representative and inclusive.

Access: Identify participants who are accessible and willing to participate in the study.

Informed consent: Participants should be fully informed about the study's purpose, methods, and potential risks and benefits and be allowed to provide informed consent.

You can use various recruitment methods, such as posting ads in relevant forums, contacting community organizations or social media groups, or using purposive sampling to identify participants who meet specific criteria.

  • How to make qualitative interview subjects comfortable

Making participants comfortable during a qualitative interview is essential to obtain rich, detailed data. Participants are more likely to share their experiences openly when they feel at ease and not judged. 

Here are some ways to make interview subjects comfortable:

Explain the purpose of the study

Start the interview by explaining the research topic and its importance. The goal is to give participants a sense of what to expect.

Create a comfortable environment

Conduct the interview in a quiet, private space where the participant feels comfortable. Turn off any unnecessary electronics that can create distractions. Ensure your equipment works well ahead of time. Arrive at the interview on time. If you conduct a remote interview, turn on your camera and mute all notetakers and observers.

Build rapport

Greet the participant warmly and introduce yourself. Show interest in their responses and thank them for their time.

Use open-ended questions

Ask questions that encourage participants to elaborate on their thoughts and experiences.

Listen attentively

Resist the urge to multitask. Pay attention to the participant's responses, nod your head, or make supportive comments to show you’re interested in their answers. Avoid interrupting them.

Avoid judgment

Show respect and don't judge the participant's views or experiences. Allow the participant to speak freely without feeling judged or ridiculed.

Offer breaks

If needed, offer breaks during the interview, especially if the topic is sensitive or emotional.

Creating a comfortable environment and establishing rapport with the participant fosters an atmosphere of trust and encourages open communication. This helps participants feel at ease and willing to share their experiences.

  • How to analyze a qualitative interview

Analyzing a qualitative interview involves a systematic process of examining the data collected to identify patterns, themes, and meanings that emerge from the responses. 

Here are some steps on how to analyze a qualitative interview:

1. Transcription

The first step is transcribing the interview into text format to have a written record of the conversation. This step is essential to ensure that you can refer back to the interview data and identify the important aspects of the interview.

2. Data reduction

Once you’ve transcribed the interview, read through it to identify key themes, patterns, and phrases emerging from the data. This process involves reducing the data into more manageable pieces you can easily analyze.

3. Coding

The next step is to code the data by labeling sections of the text with descriptive words or phrases that reflect the data's content. Coding helps identify key themes and patterns from the interview data.

4. Categorization

After coding, you should group the codes into categories based on their similarities. This process helps to identify overarching themes or sub-themes that emerge from the data.
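To make the coding and categorization steps described above concrete, here is a minimal, hypothetical sketch. The excerpts, codes, and categories are invented for illustration; real qualitative coding is interpretive work, usually done in dedicated QDA software rather than a script.

```python
# Minimal sketch: label transcript excerpts with codes, then group
# codes into broader categories to surface candidate themes.
excerpts = {
    "I never felt like I belonged in grad school.": ["belonging"],
    "My advisor checked in on me every week.": ["mentoring", "support"],
    "I worried constantly about my loans.": ["debt"],
    "A professor helped me navigate applications.": ["mentoring"],
}

# Grouping similar codes into categories (the categorization step).
categories = {
    "social integration": {"belonging", "support"},
    "guidance": {"mentoring"},
    "financial strain": {"debt"},
}

# Count how many excerpts touch each category, as a rough first
# signal of which themes dominate the data.
counts = {cat: 0 for cat in categories}
for codes in excerpts.values():
    for cat, members in categories.items():
        if any(c in members for c in codes):
            counts[cat] += 1

print(counts)  # {'social integration': 2, 'guidance': 2, 'financial strain': 1}
```

The counts here are only a starting point for interpretation; the analytic value lies in reading the excerpts within each category, not in the tallies themselves.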

5. Interpretation

You should then interpret the themes and sub-themes by identifying relationships, contradictions, and meanings that emerge from the data. Interpretation involves analyzing the themes in the context of the research question.

6. Comparison

The next step is comparing the data across participants or groups to identify similarities and differences. This step helps to ensure that the findings aren’t specific to just one participant but recur across your sample.

7. Triangulation

To ensure the findings are valid and reliable, you should use triangulation by comparing the findings with other sources, such as observations or interview data.

8. Synthesis

The final step is synthesizing the findings by summarizing the key themes and presenting them clearly and concisely. This step involves writing a report that presents the findings in a way that is easy to understand, using quotes and examples from the interview data to illustrate the themes.

  • Tips for transcribing a qualitative interview

Transcribing a qualitative interview is a crucial step in the research process. It involves converting the audio or video recording of the interview into written text. 

Here are some tips for transcribing a qualitative interview:

Use transcription software

Transcription software can save time and increase accuracy by automatically transcribing audio or video recordings.

Listen carefully

When manually transcribing, listen carefully to the recording to ensure clarity. Pause and rewind the recording as necessary.

Use appropriate formatting

Use a consistent format for transcribing, such as marking pauses, overlaps, and interruptions. Indicate non-verbal cues such as laughter, sighs, or changes in tone.

Edit for clarity

Edit the transcription to ensure clarity and readability. Use standard grammar and punctuation, correct misspellings, and remove filler words like "um" and "ah."

Proofread and edit

Verify the accuracy of the transcription by listening to the recording again and reviewing the notes taken during the interview.

Use timestamps

Add timestamps to the transcription to reference specific interview sections.

Transcribing a qualitative interview can be time-consuming, but it’s essential to ensure the accuracy of the data collected. Following these tips can produce high-quality transcriptions useful for analysis and reporting.
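As one concrete illustration of the timestamp tip above, offsets recorded in seconds can be formatted into consistent [hh:mm:ss] markers. This is a hypothetical sketch, not tied to any particular transcription tool; the segment data is invented.

```python
def timestamp(seconds: float) -> str:
    """Format an audio offset in seconds as an [hh:mm:ss] marker."""
    s = int(seconds)
    h, rem = divmod(s, 3600)
    m, sec = divmod(rem, 60)
    return f"[{h:02d}:{m:02d}:{sec:02d}]"

# Attach markers to transcript segments (speaker, start time, text).
segments = [
    ("Interviewer", 0, "Can you tell me about where you grew up?"),
    ("Participant", 4.5, "Sure. I grew up in a small farming town..."),
    ("Participant", 3725, "Looking back, that move changed everything."),
]

for speaker, start, text in segments:
    print(f"{timestamp(start)} {speaker}: {text}")
```

Consistent markers like these make it easy to jump back to the exact point in the recording when a quote needs to be verified during analysis.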

  • Why are interview techniques in qualitative research effective?

Unlike quantitative research methods, which rely on numerical data, qualitative research seeks to understand the richness and complexity of human experiences and perspectives. 

Interview techniques involve asking open-ended questions that allow participants to express their views and share their stories in their own words. This approach can help researchers to uncover unexpected or surprising insights that may not have been discovered through other research methods.

Interview techniques also allow researchers to establish rapport with participants, creating a comfortable and safe space for them to share their experiences. This can lead to a deeper level of trust and candor, leading to more honest and authentic responses.

  • What are the weaknesses of qualitative interviews?

Qualitative interviews are an excellent research approach when used properly, but they have their drawbacks. 

The weaknesses of qualitative interviews include the following:

Subjectivity and personal biases

Qualitative interviews rely on the researcher's interpretation of the interviewee's responses. The researcher's biases or preconceptions can affect how the questions are framed and how the responses are interpreted, which can influence results.

Small sample size

The sample size in qualitative interviews is often small, which can limit the generalizability of the results to the larger population.

Data quality

The quality of data collected during interviews can be affected by various factors, such as the interviewee's mood, the setting of the interview, and the interviewer's skills and experience.

Socially desirable responses

Interviewees may provide responses that they believe are socially acceptable rather than truthful or genuine.

Cost

Conducting qualitative interviews can be expensive, especially if the researcher must travel to different locations to conduct the interviews.

Time-consuming

The data analysis process can be time-consuming and labor-intensive, as researchers need to transcribe and analyze the data manually.

Despite these weaknesses, qualitative interviews remain a valuable research tool. You can take steps to mitigate the impact of these weaknesses by incorporating the perspectives of other researchers or participants in the analysis process, using multiple data sources, and critically analyzing your biases and assumptions.

Mastering the art of qualitative interviews is an essential skill for businesses looking to gain deep insights into their customers' needs , preferences, and behaviors. By following the tips and best practices outlined in this article, you can conduct interviews that provide you with rich data that you can use to make informed decisions about your products, services, and marketing strategies. 

Remember that effective communication, active listening, and proper analysis are critical components of successful qualitative interviews. By incorporating these practices into your customer research, you can gain a competitive edge and build stronger customer relationships.


Qualitative Research 101: Interviewing

5 Common Mistakes To Avoid When Undertaking Interviews

By: David Phair (PhD) and Kerryn Warren (PhD) | March 2022

Undertaking interviews is potentially the most important step in the qualitative research process. If you don’t collect useful, usable data in your interviews, you’ll struggle through the rest of your dissertation or thesis. Having helped numerous students with their research over the years, we’ve noticed some common interviewing mistakes that first-time researchers make. In this post, we’ll discuss five costly interview-related mistakes and outline useful strategies to avoid them.

Overview: 5 Interviewing Mistakes

  • Not having a clear interview strategy/plan
  • Not having good interview techniques/skills
  • Not securing a suitable location and equipment
  • Not having a basic risk management plan
  • Not keeping your “golden thread” front of mind

1. Not having a clear interview strategy

The first common mistake that we’ll look at is that of starting the interviewing process without having first come up with a clear interview strategy or plan of action. While it’s natural to be keen to get started engaging with your interviewees, a lack of planning can result in a mess of data and inconsistency between interviews.

There are several design choices to decide on and plan for before you start interviewing anyone. Some of the most important questions you need to ask yourself before conducting interviews include:

  • What are the guiding research aims and research questions of my study?
  • Will I use a structured, semi-structured or unstructured interview approach?
  • How will I record the interviews (audio or video)?
  • Who will be interviewed and by whom ?
  • What ethics and data law considerations do I need to adhere to?
  • How will I analyze my data? 

Let’s take a quick look at some of these.

The core objective of the interviewing process is to generate useful data that will help you address your overall research aims. Therefore, your interviews need to be conducted in a way that directly links to your research aims, objectives and research questions (i.e. your “golden thread”). This means that you need to carefully consider the questions you’ll ask to ensure that they align with and feed into your golden thread. If any question doesn’t align with this, you may want to consider scrapping it.

Another important design choice is whether you’ll use an unstructured, semi-structured or structured interview approach. For semi-structured interviews, you will have a list of questions that you plan to ask and these questions will be open-ended in nature. You’ll also allow the discussion to digress from the core question set if something interesting comes up. This means that the type of information generated might differ a fair amount between interviews.

Contrasted to this, a structured approach to interviews is more rigid, where a specific set of closed questions is developed and asked of each interviewee in exactly the same order. Closed questions have a limited set of possible answers, often single-word answers. Therefore, you need to think about what you’re trying to achieve with your research project (i.e. your research aims) and decide which approach would be best suited in your case.

It is also important to plan ahead with regards to who will be interviewed and how. You need to think about how you will approach the possible interviewees to get their cooperation, who will conduct the interviews, when to conduct the interviews and how to record the interviews. For each of these decisions, it’s also essential to make sure that all ethical considerations and data protection laws are taken into account.

Finally, you should think through how you plan to analyze the data (i.e., your qualitative analysis method) generated by the interviews. Different types of analysis rely on different types of data, so you need to ensure you’re asking the right types of questions and correctly guiding your respondents.

Simply put, you need to have a plan of action regarding the specifics of your interview approach before you start collecting data. If not, you’ll end up drifting in your approach from interview to interview, which will result in inconsistent, unusable data.

Your interview questions need to directly link to your research aims, objectives and research questions - your “golden thread”.

2. Not having good interview technique

While you’re generally not expected to be an expert interviewer for a dissertation or thesis, it is important to practice good interview technique and develop basic interviewing skills.

Let’s go through some basics that will help the process along.

Firstly, before the interview, make sure you know your interview questions well and have a clear idea of what you want from the interview. Naturally, the specificity of your questions will depend on whether you’re taking a structured, semi-structured or unstructured approach, but you still need a consistent starting point. Ideally, you should develop an interview guide beforehand (more on this later) that details your core questions and links these to the research aims, objectives and research questions.

Before you undertake any interviews, it’s a good idea to do a few mock interviews with friends or family members. This will help you get comfortable with the interviewer role, prepare for potentially unexpected answers and give you a good idea of how long the interview will take to conduct. In the interviewing process, you’re likely to encounter two kinds of challenging interviewees: the two-word respondent and the respondent who meanders and babbles. Therefore, you should prepare yourself for both and come up with a plan to respond to each in a way that will allow the interview to continue productively.

To begin the formal interview, provide the person you are interviewing with an overview of your research. This will help to calm their nerves (and yours) and contextualize the interaction. Ultimately, you want the interviewee to feel comfortable and be willing to be open and honest with you, so it’s useful to start in a more casual, relaxed fashion and allow them to ask any questions they may have. From there, you can ease them into the rest of the questions.

As the interview progresses, avoid asking leading questions (i.e., questions that assume something about the interviewee or their response). Make sure that you speak clearly and slowly, using plain language and being ready to paraphrase questions if the person you are interviewing misunderstands. Be particularly careful when interviewing speakers of English as a second language to ensure that you’re both on the same page.

Engage with the interviewee by listening to them carefully and acknowledging that you are listening to them by smiling or nodding. Show them that you’re interested in what they’re saying and thank them for their openness as appropriate. This will also encourage your interviewee to respond openly.


3. Not securing a suitable location and quality equipment

Where you conduct your interviews and the equipment you use to record them both play an important role in how the process unfolds. Therefore, you need to think carefully about each of these variables before you start interviewing.

Poor location: A bad location can result in the quality of your interviews being compromised, interrupted, or cancelled. If you are conducting physical interviews, you’ll need a location that is quiet, safe, and welcoming. It’s very important that your location of choice is not prone to interruptions (the workplace office is generally problematic, for example) and has suitable facilities (such as water, a bathroom, and snacks).

If you are conducting online interviews, you need to consider a few other factors. Importantly, you need to make sure that both you and your respondent have access to a good, stable internet connection and electricity. Check ahead of time that both of you know how to use the relevant software and that it’s accessible (sometimes meeting platforms are blocked by workplace policies or firewalls). It’s also good to have alternatives in place (such as WhatsApp, Zoom, or Teams) to cater for these types of issues.

Poor equipment: Using poor-quality recording equipment or using equipment incorrectly means that you will have trouble transcribing, coding, and analyzing your interviews. This can be a major issue, as some of your interview data may go completely to waste if not recorded well. So, make sure that you use good-quality recording equipment and that you know how to use it correctly.

To avoid issues, you should always conduct test recordings before every interview to ensure that you can use the relevant equipment properly. It’s also a good idea to spot check each recording afterwards, just to make sure it was recorded as planned. If your equipment uses batteries, be sure to always carry a spare set.


4. Not having a basic risk management plan

Many possible issues can arise during the interview process. Not planning for these issues can leave you with compromised data that might not be useful to you. Therefore, it's important to map out some sort of risk management plan ahead of time, considering the potential risks, how you'll minimize their probability, and how you'll manage them if they materialize.

Common potential issues related to the actual interview include cancellations (people pulling out), delays (such as getting stuck in traffic), language and accent differences (made worse by poor internet connections), and problems with internet connectivity or power supply. Other issues can also occur in the interview itself. For example, the interviewee could drift off-topic, or you might encounter an interviewee who does not say much at all.

You can prepare for these potential issues by considering possible worst-case scenarios and preparing a response for each scenario. For instance, it is important to plan a backup date just in case your interviewee cannot make it to the first meeting you scheduled with them. It’s also a good idea to factor in a 30-minute gap between your interviews for the instances where someone might be late, or an interview runs overtime for other reasons. Make sure that you also plan backup questions that could be used to bring a respondent back on topic if they start rambling, or questions to encourage those who are saying too little.

In general, it's best practice to plan to conduct more interviews than you think you need (this is called oversampling). Doing so will allow you some room for error if there are interviews that don't go as planned, or if some interviewees withdraw. If you need 10 interviews, it is a good idea to plan for 15, as a few will likely cancel, be delayed, or not produce useful data.
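To make the oversampling arithmetic concrete, here is a minimal sketch (in Python, not part of the original post) that computes how many interviews to schedule given a target number of usable interviews and an assumed attrition rate. The function name and the default 30% attrition figure are illustrative assumptions, not recommendations from the post.

```python
import math

def interviews_to_schedule(target_usable, attrition_rate=0.3):
    """Return how many interviews to schedule so that, after an assumed
    attrition_rate of cancellations and unusable recordings, you still
    expect at least target_usable usable interviews."""
    if not 0 <= attrition_rate < 1:
        raise ValueError("attrition_rate must be in [0, 1)")
    return math.ceil(target_usable / (1 - attrition_rate))

# With the post's example: needing 10 usable interviews and assuming
# roughly 1 in 3 is lost, you would schedule 15.
print(interviews_to_schedule(10, 1/3))  # prints 15
```

Treat the attrition rate as a rough planning figure; in practice, past experience with similar participant groups is a better guide than any fixed percentage.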


5. Not keeping your golden thread front of mind

We touched on this a little earlier, but it is a key point that should be central to your entire research process. You don't want to end up with pages and pages of data after conducting your interviews, only to realize that it is not useful to your research aims. Your research aims, objectives and research questions – i.e., your golden thread – should influence every design decision and should guide the interview process at all times.

A useful way to avoid this mistake is by developing an interview guide before you begin interviewing your respondents. An interview guide is a document that contains all of your questions with notes on how each of the interview questions is linked to the research question(s) of your study. You can also include your research aims and objectives here for a more comprehensive linkage. 

You can easily create an interview guide by drawing up a table with one column containing your core interview questions. Then add another column with your research questions, another with expectations that you may have in light of the relevant literature, and another with backup or follow-up questions. As mentioned, you can also bring in your research aims and objectives to help you connect them all together. If you'd like, you can download a copy of our free interview guide here.
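The table described above can be sketched as rows linking each core question back to a research question, an expectation from the literature, and a follow-up. All of the questions below are hypothetical placeholders (not from the post), and exporting to CSV is just one convenient way to produce a printable guide.

```python
import csv
import io

# Each row ties a core interview question to the study's golden thread.
# All entries are hypothetical examples for illustration only.
guide = [
    {
        "core_question": "How would you describe a typical workday?",
        "research_question": "RQ1: How do remote workers structure their time?",
        "expectation": "Literature suggests blurred work/home boundaries.",
        "follow_up": "Can you walk me through yesterday morning?",
    },
    {
        "core_question": "What helps you stay focused at home?",
        "research_question": "RQ2: Which strategies support remote productivity?",
        "expectation": "A dedicated workspace is expected to recur as a theme.",
        "follow_up": "Tell me about a day when that didn't work.",
    },
]

# Write the guide out as a CSV you can print and annotate during interviews.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(guide[0].keys()))
writer.writeheader()
writer.writerows(guide)
print(buf.getvalue())
```

A spreadsheet works just as well; the point is that every interview question is explicitly traceable to a research question before you sit down with a respondent.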

Recap: Qualitative Interview Mistakes

In this post, we’ve discussed 5 common costly mistakes that are easy to make in the process of planning and conducting qualitative interviews.

To recap, these include:

If you have any questions about these interviewing mistakes, drop a comment below. Alternatively, if you're interested in getting 1-on-1 help with your thesis or dissertation, check out our dissertation coaching service or book a free initial consultation with one of our friendly Grad Coaches.


Introduction to Research Methods

6 Qualitative Research and Interviews

So we've described doing a survey and collecting quantitative data. But not all questions can best be answered by a survey. A survey is great for understanding what people think (for example), but not why they think what they do. If your research intends to understand the underlying motivations or reasons behind people's actions, or to build a deeper understanding of the background of a subject, an interview may be the more appropriate data collection method.

Interviews are a method of data collection in which two or more people exchange information through a structured process of questions and answers. Questions are designed by the researcher to thoughtfully collect in-depth information on a topic or set of topics as related to the central research question. Interviews typically occur in person, although good interviews can also be conducted remotely via the phone or video conferencing. Unlike surveys, interviews give the opportunity to ask follow-up questions and thoughtfully engage with participants on the spot (rather than the anonymous and impersonal format of survey research).

Interviews can be used in qualitative or quantitative research – though they're more typically a qualitative technique. In-depth interviews contain open-ended questions and are structured by an interview guide. One can also do a standardized interview with closed-ended questions (i.e., answer options) structured by an interview schedule as part of quantitative research. While these are called interviews, they're far closer to surveys, so we won't cover them again in this chapter. We'll cover the terms used for in-depth interviews in the next section.

6.1 Interviews

In-depth interviews allow participants to describe experiences in their own words (a primary strength of the interview format). Strong in-depth interviews will include many open-ended questions that allow participants to respond in their own words, share new ideas, and lead the conversation in different directions. The purpose of open-ended questions and in-depth interviews is to hear as much as possible in the person’s own voice, to collect new information and ideas, and to achieve a level of depth not possible in surveys or most other forms of data collection.

Typically, an interview guide is used to create a soft structure for the conversation and is an important preparation tool for the researcher. You cannot go into an interview unprepared and just "wing it". The interview guide lets you map out a framework and an order of topics, and it may include specific questions to use during the interview. Generally, the interview guide is thought of as just that – a guide to keep the interview focused. It is not set in stone, and a skilled researcher can change the order of questions or topics in an interview based on the organic conversation flow.

Depending on the experience and skill level of the researcher, an interview guide can be as simple as a list of topics to cover. However, for consistency and quality of research, the interviewer may want to at least practice writing out questions in advance to ensure that phrasing and word choices are as clear, objective, and focused as possible. Working out the wording of questions in advance also allows researchers to ensure more consistency across interviews. The interview guide below, taken from the wonderful and free textbook Principles of Sociological Inquiry, shows a guide that contains just topics.

[Interview guide example: topics only]

Alternatively, you can use a more detailed guide that lists out possible questions, as shown below. A more detailed guide is probably better for an interviewer who has less experience or who is just beginning to work on a given topic.

[Interview guide example: detailed questions]

The purpose of an interview guide is to help ask effective questions and to support the process of acquiring the best possible data for your research. Topics and questions should be organized thematically, and in a natural progression that will allow the conversation to flow and deepen throughout the course of the interview. Often, researchers will attempt to memorize or partially memorize the interview guide, in order to be more fully present with the participant during the conversation.

6.2 Asking Good Questions

Remember, the purpose of interviews is to go more in-depth with an individual than is possible with a generalized survey. For this reason, it is important to use the guide as a starting point but not to be overly tethered to it during the actual interview process. You may get stuck when respondents give you shorter answers than you expect, or don't provide the type of depth that you need for your research. Often, you may want to probe for more specifics. Think about using follow-up questions like "How does/did that affect you?", "How does X make you feel?", and "Tell me about a time where X…"

For example, if I was researching the relationship between pets and mental health, some strong open-ended questions might be:

  • How does your pet typically make you feel when you wake up in the morning?
  • How does your pet generally affect your mood when you arrive home in the evening?
  • Tell me about a time when your pet had a significant impact on your emotional state.

Questions framed in this manner leave plenty of room for the respondent to answer in their own words, as opposed to leading and/or truncated questions, such as:

  • Does being with your pet make you happy?
  • After a bad day, how much does seeing your pet improve your mood?
  • Tell me about how important your pet is to your mental health.

These questions assume outcomes and will not result in high-quality research. Researchers should always avoid asking leading questions that give away an expected answer or suggest particular responses. For instance, if I ask "we need to spend more on public schools, don't you think?" the respondent is more likely to agree regardless of their own thoughts. Some won't, but humans generally have a strong natural desire to be agreeable. That's why leaving your questions neutral and open, so that respondents can speak to their own experiences and views, is critical.

6.3 Analyzing Interview Data

Writing good questions and interviewing respondents are just the first steps of the interview process. After these stages, the researcher still has a lot of work to do to collect usable data from the interview: they must spend time coding and analyzing the transcript. Just doing an interview won't produce data. Think about how many conversations you have every day – none of those leave you swimming in data.

Hopefully you can record your interviews. Recording your interviews will allow you the opportunity to transcribe them word for word later. If you can't record the interview, you'll need to take detailed notes so that you can reconstruct what you heard later. Do not trust yourself to "just remember" the conversation. You're collecting data – precious data that you're spending time and energy to collect – so treat it as important and valuable. Remember our description of the methodology section from Chapter 2: you need to maintain a chain of custody on your data. If you just remembered the interview, you could be accused of making up the results. Your interview notes and the recording become part of that chain of custody, proving to others that your interviews were real and that your results are accurate.

Assuming you recorded your interview, the first step in the analysis process is transcribing the interview. A transcription is a written record of every word in an interview. Transcriptions can either be completed by the researcher or by a hired worker, though it is good practice for researchers to transcribe interviews themselves. Researchers should keep the following points in mind regarding transcriptions:

  • The interview should take place in a quiet location with minimal background noise to produce a clear recording.
  • Transcribing interviews is a time-consuming process and may take two to three times longer than the actual interview.
  • Transcriptions provide a more precise record of the interview than handwritten notes and allow the interviewer to focus during the interview.

After transcribing the interview, the next step is to analyze the responses. Coding is the main form of analysis used for interviews and involves studying a transcription to identify important themes. These themes are categorized into codes, which are words or phrases that denote an idea.

You'll typically begin with several codes in mind, generated by the key ideas you were seeking in the questions, but you can also begin by using open coding to understand the results. An open coding process involves reading through the transcript multiple times and paying close attention to each line of the text to discover noteworthy concepts. During the open coding process, the researcher keeps an open mind to find any codes that may be relevant to the research topic.

After the open coding process is complete, focused coding can begin. Focused coding takes a closer look at the notes compiled during the open coding stage to merge common codes and define what the codes mean in the context of the research project.
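Once a code book exists, the bookkeeping side of coding can be sketched programmatically. The snippet below matches hypothetical keywords against a transcript; this is only a rough first pass to see where codes might apply, not a substitute for the interpretive open and focused coding described above, which requires human judgment.

```python
import re
from collections import Counter

# Hypothetical code book: each code maps to keywords that signal it.
# Real code books emerge from open and focused coding, not from guesses.
CODE_BOOK = {
    "family": ["mother", "father", "sister", "brother", "family"],
    "neighborhood": ["street", "neighbors", "block", "neighborhood"],
    "food": ["cooking", "meal", "dinner", "recipe"],
}

def tally_codes(transcript, code_book=CODE_BOOK):
    """Count how often each code's keywords appear in a transcript."""
    words = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter()
    for code, keywords in code_book.items():
        counts[code] = sum(words.count(k) for k in keywords)
    return counts

transcript = ("My mother did the cooking every night and the whole "
              "family ate dinner together, then we played in the street "
              "with the neighbors.")
print(tally_codes(transcript))
```

A tally like this can help you see which parts of a transcript deserve a closer read, but the meaning of each code still has to be defined and defended by the researcher.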

Imagine a researcher is conducting interviews to learn about various people’s experiences of childhood in New Orleans. The following example shows several codes that this researcher extrapolated from an interview with one of their subjects.

[Example: codes extrapolated from an interview about childhood in New Orleans]

6.4 Using Interview Data

The next chapter will address ways to identify people to interview, but most of the remainder of the book will address how to analyze quantitative data. That shouldn't be taken as a sign that quantitative data is better, or that it's easier to use interview data. Because the researcher must interpret the words of others, it is often more challenging with interview data to identify your findings and clearly answer your research question. However, quantitative data is more common, and there are more things you can do with it, so we spend much of the textbook focusing on it.

I'll work through one more example of using interview data, though. It takes a lot of practice to become a skilled interviewer. What I show below is a brief excerpt of an interview I did, and how that data was used in a resulting paper I wrote. This isn't the only way you can use interview data, but it's an example of what the intermediary and final products might look like.

The overall project these are drawn from was concerned with minor league baseball stadiums, but the specific part I’m pulling from here was studying the decline and rejuvenation of downtown around those stadiums in several cities. You’ll see that I’m using the words of the respondent fairly directly, because that’s my data. But I’m not just relying on one respondent and trusting them, I did a few dozen interviews in order to understand the commonalities in people’s perspectives to build a narrative around my research question.

Excerpt from Notes


Excerpt from Resulting Paper


How many interviews are necessary? It actually doesn't take many. What you want to observe in your interviews is theoretical saturation, where the codes you use in the transcripts begin to appear across conversations and groups. If different people disagree, that's fine; what you want to understand is the commonalities across people's perspectives. Most research on the subject says that with 8 interviews you'll typically start to see a decline in new information gathered. That doesn't mean you won't hear new words, but you'll stop hearing completely unique perspectives or gaining novel insights. At that point, when you've 'heard it all before', you can stop, because you've probably identified the answer to the questions you were trying to research.
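The idea of saturation can be sketched as simple bookkeeping: for each successive interview, count how many codes haven't appeared in any earlier one. The codes below are hypothetical; a run of zeros toward the end is the "heard it all before" signal described above.

```python
# Hypothetical codes identified in each successive interview.
codes_per_interview = [
    {"family", "work", "housing"},    # interview 1
    {"family", "commute", "work"},    # interview 2
    {"housing", "health", "work"},    # interview 3
    {"family", "commute", "health"},  # interview 4
    {"work", "housing", "family"},    # interview 5
]

def new_codes_by_interview(code_sets):
    """For each interview, count codes not seen in any earlier interview.
    A run of zeros suggests you are approaching theoretical saturation."""
    seen, new_counts = set(), []
    for codes in code_sets:
        fresh = codes - seen
        new_counts.append(len(fresh))
        seen |= codes
    return new_counts

print(new_codes_by_interview(codes_per_interview))  # prints [3, 1, 1, 0, 0]
```

In practice, saturation is judged on the substance of what participants say, not just the count of code labels, but tracking new codes per interview is a useful sanity check on when to stop recruiting.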

6.5 Ensuring Anonymity

One significant ethical concern with interviews, which also applies to surveys, is making sure that respondents maintain anonymity. In either form of data collection you may be asking respondents deeply personal questions that, if exposed, may cause legal, personal, or professional harm. Notice that in the excerpt of the paper above, the respondents are only identified by an ID I assigned (Louisville D) and their career, rather than their name. I can only include the excerpt of the interview notes above because there are no details that might lead to the respondent being identified.

You may want to report details about a person to contextualize the data you gathered, but you should always ensure that no one can be identified from your research. For instance, if you were doing research on racism at large companies, you might want to preface people's comments with their race, as there is a good chance that white and minority employees would feel differently about the issues. However, if you preface someone's comments by saying they're a minority manager, that may violate their anonymity. Even if you don't state which company you interviewed at, that may be enough detail for co-workers to identify them if there are few minority managers at the company. As such, always think long and hard about whether there is any way that the participation of respondents may be exposed.
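The ID-assignment step described above can be sketched as a small helper: map each respondent to a pseudonymous site-plus-letter ID (mirroring the "Louisville D" pattern from the paper excerpt) and keep the name-to-ID key stored separately from notes and transcripts. The names and function here are hypothetical.

```python
import string

def assign_ids(names, site):
    """Map each respondent name to a pseudonymous ID like 'Louisville A'.
    The returned key should be stored separately and securely; only the
    IDs appear in notes, transcripts, and publications."""
    letters = string.ascii_uppercase
    if len(names) > len(letters):
        raise ValueError("extend the ID scheme for more than 26 respondents")
    return {name: f"{site} {letters[i]}" for i, name in enumerate(names)}

# Hypothetical respondents at one research site.
key = assign_ids(["Alex Smith", "Jordan Lee"], "Louisville")
print(key["Jordan Lee"])  # prints Louisville B
```

Whatever scheme you use, the essential point is the separation: the document linking real names to IDs never travels with the data it protects.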

6.6 Why not both?


We’ve discussed surveys and interviews as different methods the last two chapters, but they can also complement each other.

For instance, let's say you're curious to study people who change opinions on abortion, either going from support to opposition or vice versa. You could use a survey to understand the prevalence of changing opinions, i.e., what percentage of people in your city have changed their views. That would help to establish whether this is a prominent issue or a rare phenomenon. But it would be difficult to understand from the survey what makes people change their views. You could add an open-ended question for anyone who said they changed their opinion, but many people won't respond and few will provide the level of detail necessary to understand their motivations. Interviews with people who have changed their opinions would give you an opportunity to explore how their experiences and beliefs have changed in combination with their views towards abortion.

6.7 Summary

In the last two chapters we've discussed the two most prominent methods of data collection in the social sciences: surveys and interviews. What we haven't discussed is how to identify the people you'll collect data from; that's called a sampling strategy, and it's the subject of the next chapter.


Types of Interviews in Research | Guide & Examples

Published on March 10, 2022 by Tegan George . Revised on June 22, 2023.

An interview is a qualitative research method that relies on asking questions in order to collect data . Interviews involve two or more people, one of whom is the interviewer asking the questions.

There are several types of interviews, often differentiated by their level of structure.

  • Structured interviews have predetermined questions asked in a predetermined order.
  • Unstructured interviews are more free-flowing.
  • Semi-structured interviews fall in between.

Interviews are commonly used in market research, social science, and ethnographic research .

Table of contents

  • What is a structured interview?
  • What is a semi-structured interview?
  • What is an unstructured interview?
  • What is a focus group?
  • Examples of interview questions
  • Advantages and disadvantages of interviews
  • Other interesting articles
  • Frequently asked questions about types of interviews

Structured interviews have predetermined questions in a set order. They are often closed-ended, featuring dichotomous (yes/no) or multiple-choice questions. While open-ended structured interviews exist, they are much less common. The types of questions asked make structured interviews a predominantly quantitative tool.

Asking set questions in a set order can help you see patterns among responses, and it allows you to easily compare responses between participants while keeping other factors constant. This can mitigate research biases and lead to higher reliability and validity. However, structured interviews can be overly formal, as well as limited in scope and flexibility.

A structured interview is the right choice when:

  • You feel very comfortable with your topic. This will help you formulate your questions most effectively.
  • You have limited time or resources. Structured interviews are a bit more straightforward to analyze because of their closed-ended nature, and can be a doable undertaking for an individual.
  • Your research question depends on holding environmental conditions between participants constant.


Semi-structured interviews are a blend of structured and unstructured interviews. While the interviewer has a general plan for what they want to ask, the questions do not have to follow a particular phrasing or order.

Semi-structured interviews are often open-ended, allowing for flexibility, but follow a predetermined thematic framework, giving a sense of order. For this reason, they are often considered “the best of both worlds.”

However, if the questions differ substantially between participants, it can be challenging to look for patterns, lessening the generalizability and validity of your results.

A semi-structured interview is a good choice when:

  • You have prior interview experience. It's easier than you think to accidentally ask a leading question when coming up with questions on the fly. Overall, spontaneous questions are much more difficult than they may seem.
  • Your research question is exploratory in nature. The answers you receive can help guide your future research.

An unstructured interview is the most flexible type of interview. The questions and the order in which they are asked are not set. Instead, the interview can proceed more spontaneously, based on the participant’s previous answers.

Unstructured interviews are by definition open-ended. This flexibility can help you gather detailed information on your topic, while still allowing you to observe patterns between participants.

However, so much flexibility means that they can be very challenging to conduct properly. You must be very careful not to ask leading questions, as biased responses can lead to lower reliability or even invalidate your research.

An unstructured interview is a good choice when:

  • You have a solid background in your research topic and have conducted interviews before.
  • Your research question is exploratory in nature, and you are seeking descriptive data that will deepen and contextualize your initial hypotheses.
  • Your research necessitates forming a deeper connection with your participants, encouraging them to feel comfortable revealing their true opinions and emotions.

A focus group brings together a group of participants to answer questions on a topic of interest in a moderated setting. Focus groups are qualitative in nature and often study the group’s dynamic and body language in addition to their answers. Responses can guide future research on consumer products and services, human behavior, or controversial topics.

Focus groups can provide more nuanced and unfiltered feedback than individual interviews and are easier to organize than experiments or large surveys . However, their small size leads to low external validity and the temptation as a researcher to “cherry-pick” responses that fit your hypotheses.

A focus group is a good choice when:

  • Your research focuses on the dynamics of group discussion or real-time responses to your topic.
  • Your questions are complex and rooted in feelings, opinions, and perceptions that cannot be answered with a "yes" or "no."
  • Your topic is exploratory in nature, and you are seeking information that will help you uncover new questions or future research ideas.


Depending on the type of interview you are conducting, your questions will differ in style, phrasing, and intention. Structured interview questions are set and precise, while the other types of interviews allow for more open-endedness and flexibility.

Here are some examples.

  • Do you like dogs? Yes/No
  • Do you associate dogs with feeling: happy; somewhat happy; neutral; somewhat unhappy; unhappy
  • If yes, name one attribute of dogs that you like.
  • If no, name one attribute of dogs that you don’t like.
  • What feelings do dogs bring out in you?
  • When you think more deeply about this, what experiences would you say your feelings are rooted in?

Interviews are a great research tool. They allow you to gather rich information and draw more detailed conclusions than other research methods, taking into consideration nonverbal cues, off-the-cuff reactions, and emotional responses.

However, they can also be time-consuming and deceptively challenging to conduct properly. Smaller sample sizes can cause their validity and reliability to suffer, and there is an inherent risk of interviewer effect arising from accidentally asking leading questions.

Here are some advantages and disadvantages of each type of interview that can help you decide if you’d like to utilize this research method.

Advantages and disadvantages of interviews

[Table: advantages and disadvantages of each interview type – structured, semi-structured, unstructured, and focus group]

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.


The four most common types of interviews are:

  • Structured interviews: The questions are predetermined in both topic and order.
  • Semi-structured interviews: A few questions are predetermined, but other questions aren't planned.
  • Unstructured interviews: None of the questions are predetermined.
  • Focus group interviews: The questions are presented to a group instead of one individual.

The interviewer effect is a type of bias that emerges when a characteristic of an interviewer (race, age, gender identity, etc.) influences the responses given by the interviewee.

There is a risk of an interviewer effect in all types of interviews, but it can be mitigated by writing really high-quality interview questions.

Social desirability bias is the tendency for interview participants to give responses that will be viewed favorably by the interviewer or other participants. It occurs in all types of interviews and surveys, but is most common in semi-structured interviews, unstructured interviews, and focus groups.

Social desirability bias can be mitigated by ensuring participants feel at ease and comfortable sharing their views. Make sure to pay attention to your own body language and any physical or verbal cues, such as nodding or widening your eyes.

This type of bias can also occur in observations if the participants know they’re being observed. They might alter their behavior accordingly.

A focus group is a research method that brings together a small group of people to answer questions in a moderated setting. The group is chosen due to predefined demographic traits, and the questions are designed to shed light on a topic of interest. It is one of 4 types of interviews .

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.

Cite this Scribbr article


George, T. (2023, June 22). Types of Interviews in Research | Guide & Examples. Scribbr. Retrieved September 3, 2024, from https://www.scribbr.com/methodology/interviews-research/


Chapter 13: Interviews

Danielle Berkovic

Learning outcomes

Upon completion of this chapter, you should be able to:

  • Understand when to use interviews in qualitative research.
  • Develop interview questions for an interview guide.
  • Understand how to conduct an interview.

What are interviews?

Interviewing is the most commonly used data collection technique in qualitative research. 1 The purpose of an interview is to explore the experiences, understandings, opinions and motivations of research participants. 2 Interviews are conducted one-on-one between the researcher and the participant. Interviews are most appropriate when seeking to understand a participant's subjective view of an experience, and they are also considered suitable for the exploration of sensitive topics.

What are the different types of interviews?

There are four main types of interviews:

  • Key stakeholder: A key stakeholder interview aims to explore one issue in detail with a person of interest or importance concerning the research topic. 3 Key stakeholder interviews seek the views of experts on some cultural, political or health aspects of the community, beyond their personal beliefs or actions. An example of a key stakeholder is the Chief Health Officer of Victoria (Australia’s second-most populous state) who oversaw the world’s longest lockdowns in response to the COVID-19 pandemic.
  • Dyad: A dyad interview aims to explore one issue in detail with a dyad (two people). This form of interviewing is used when one member of the dyad may need some support or is not wholly able to articulate themselves (e.g. people with cognitive impairment, or children). Each member’s independence is acknowledged, and the interview is analysed as a unit. 4
  • Narrative: A narrative interview helps individuals tell their stories, and prioritises their own perspectives and experiences using the language that they prefer. 5 This type of interview has been widely used in social research but is gaining prominence in health research to better understand person-centred care, for example, negotiating exercise and food abstinence whilst living with Type 2 diabetes. 6,7
  • Life history: A life history interview allows the researcher to explore a person’s individual and subjective experiences within a history of the time framework. 8 Life history interviews challenge the researcher to understand how people’s current attitudes, behaviours and choices are influenced by previous experiences or trauma. Life history interviews have been conducted with Holocaust survivors 9 and youth who have been forcibly recruited to war. 10

Table 13.4 provides a summary of four studies, each adopting one of these types of interviews.

Interviewing techniques

There are two main interview techniques:

  • Semi-structured: Semi-structured interviewing aims to explore a few issues in moderate detail, to expand the researcher’s knowledge at some level. 11 Semi-structured interviews give the researcher the advantage of remaining reasonably objective while enabling participants to share their perspectives and opinions. The researcher should create an interview guide with targeted open questions to direct the interview. As examples, semi-structured interviews have been used to extend knowledge of why women might gain excess weight during pregnancy, 12 and to update guidelines for statin uptake. 13
  • In-depth: In-depth interviewing aims to explore a person’s subjective experiences and feelings about a particular topic. 14 In-depth interviews are often used to explore emotive (e.g. end-of-life care) 15 and complex (e.g. adolescent pregnancy) topics. 16 The researcher should create an interview guide with selected open questions to ask of the participant, but the participant should guide the direction of the interview more than in a semi-structured setting. In-depth interviews value participants’ lived experiences and are frequently used in phenomenology studies (as described in Chapter 6) .

When to use the different types of interviews

The type of interview a researcher uses should be determined by the study design, the research aims and objectives, and participant demographics. For example, if conducting a descriptive study, semi-structured interviews may be the best method of data collection. As explained in Chapter 5 , descriptive studies seek to describe phenomena, rather than to explain or interpret the data. A semi-structured interview, which seeks to expand upon some level of existing knowledge, will likely best facilitate this.

Similarly, if conducting a phenomenological study, in-depth interviews may be the best method of data collection. As described in Chapter 6 , the key concept of phenomenology is the individual. The emphasis is on the lived experience of that individual and the person’s sense-making of those experiences. Therefore, an in-depth interview is likely best placed to elicit that rich data.

While some interview types are better suited to certain study designs, there are no restrictions on the type of interview that may be used. For example, semi-structured interviews provide an excellent accompaniment to trial participation (see Chapter 11 about mixed methods), and key stakeholder interviews, as part of an action research study, can be used to define priorities, barriers and enablers to implementation.

How do I write my interview questions?

An interview aims to explore the experiences, understandings, opinions and motivations of research participants. The general rule is that the interviewee should speak for about 80 per cent of the interview, with the interviewer asking questions and clarifying responses for the remaining 20 per cent. This ratio may differ depending on the interview type; for example, a semi-structured interview involves the researcher asking more questions than an in-depth interview does. To facilitate free-flowing responses, it is important to use open-ended language that encourages participants to be expansive in their responses. Examples of open-ended terms include questions that start with ‘who’, ‘how’ and ‘where’.
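The 80/20 guideline can be roughly checked against a finished transcript. Below is a minimal illustrative sketch (not from this chapter): it assumes a transcript with ‘Interviewer:’/‘Participant:’ speaker labels, and uses word counts as a crude proxy for speaking time.

```python
# Illustrative sketch: estimate each speaker's share of talk from a
# labelled transcript. The speaker-label format is an assumption.
from collections import Counter

def talk_share(transcript: str) -> dict:
    """Return each speaker's share of the total words spoken."""
    words = Counter()
    speaker = None
    for line in transcript.splitlines():
        line = line.strip()
        if not line:
            continue
        # A line like "Interviewer: ..." starts a new speaker turn;
        # lines without a colon continue the current speaker's turn.
        if ":" in line:
            label, _, rest = line.partition(":")
            speaker = label.strip()
            line = rest
        if speaker:
            words[speaker] += len(line.split())
    total = sum(words.values()) or 1
    return {s: n / total for s, n in words.items()}

sample = """\
Interviewer: Tell me about your experience of the procedure.
Participant: It went better than I expected. At first I was nervous,
but the staff explained every step and I felt supported throughout.
Interviewer: How did you feel one week later?
Participant: Tired, but relieved. I could do light housework again.
"""
shares = talk_share(sample)
```

On this toy transcript the participant’s share comes out well above the interviewer’s; in practice the proportions would be computed over the full transcript, and word share is only an approximation of speaking time.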

The researcher should avoid closed-ended questions that can be answered with yes or no, and limit conversation. For example, asking a participant ‘Did you have this experience?’ can elicit a simple ‘yes’, whereas asking them to ‘Describe your experience’, will likely encourage a narrative response. Table 13.1 provides examples of terminology to include and avoid in developing interview questions.

Table 13.1. Interview question formats to use and avoid

Use:
  • Tell me about…
  • What happened when…
  • Why is this important?
  • How did you feel when…
  • How do you…
  • What are the…
  • What does…

Avoid:
  • Do you think that…
  • Will you do this…
  • Did you believe that…
  • Were there issues from your perspective…

How long should my interview be?

There is no rule about how long an interview should take. Different types of interviews will likely run for different periods of time, but this also depends on the research question/s and the type of participant. For example, given that a semi-structured interview is seeking to expand on some previous knowledge, the interview may need no longer than 30 minutes, or up to one hour. An in-depth interview seeks to explore a topic in a greater level of detail and therefore, at a minimum, would be expected to last an hour. A dyad interview may be as short as 15 minutes (e.g. if the dyad is a person with dementia and a family member or caregiver) or longer, depending on the pairing.

Designing your interview guide

To decide what questions to ask in an interview guide, the researcher may consult the literature, speak to experts (including people with lived experience) about the research, and draw on their current knowledge. The topics and questions should be mapped to the research question/s, and the interview guide should be developed well in advance of commencing data collection. This enables time and opportunity to pilot-test the interview guide. The pilot interview provides an opportunity to explore the language and clarity of questions and the order and flow of the guide, and to determine whether the instructions are clear to participants both before and after the interview. It can be beneficial to pilot-test the interview guide with someone who is not familiar with the research topic, to make sure that the language used is easily understood (and will be by participants, too).

The study design should determine the number of questions asked, and the expected duration of the interview should guide the length of the interview guide. The participant type may also shape the guide; for example, clinicians tend to be time-poor, so shorter, focused interviews are optimal. An interview guide is also likely to be shorter for a descriptive study than for a phenomenological or ethnographic study, given the level of detail required. Chapter 5 outlined a descriptive study in which participants who had undergone percutaneous coronary intervention were interviewed. The interview guide consisted of four main questions and subsequent probing questions, linked to the research questions (see Table 13.2). 17

Table 13.2. Interview guide for a descriptive study

Research question: How does the patient feel, physically and psychologically, after their procedure?
  • Open questions: From your perspective, what would be considered a successful outcome of the procedure? How did you feel after the procedure? How did you feel one week after the procedure, and how does that compare with how you feel now?
  • Probing questions and topics: Did the procedure meet your expectations? How do you define whether the procedure was successful?

Research question: How does the patient function after their procedure?
  • Open questions: After your procedure, tell me about your ability to do your daily activities. Did you attend cardiac rehabilitation? Can you tell us about your experience of cardiac rehabilitation?
  • Probing questions and topics: Prompt for activities including gardening, housework, personal care, and work-related and family-related tasks. What effect has medication had on your recovery?

Research question: What are the long-term effects of the procedure?
  • Open questions: What, if any, lifestyle changes have you made since your procedure?

Table 13.3 is an example of a larger and more detailed interview guide, designed for the qualitative component of a mixed-methods study aiming to examine the work and financial effects of living with arthritis as a younger person. The questions are mapped to the World Health Organization’s International Classification of Functioning, Disability, and Health, which measures health and disability at individual and population levels. 18

Table 13.3. Detailed interview guide

Research question: How do young people experience their arthritis diagnosis?
  • Open questions: Tell me about your experience of being diagnosed with arthritis. How did being diagnosed with arthritis make you feel? Tell me about your experience of arthritis flare-ups: what do they feel like? What impacts arthritis flare-ups, or feeling like your arthritis is worse? What circumstances lead to these feelings? Based on your experience, what do you think causes symptoms of arthritis to become worse?
  • Probing questions: When were you diagnosed with arthritis? What type of arthritis were you diagnosed with? Does anyone else in your family have arthritis? What relation are they to you?

Research question: What are the work impacts of arthritis on younger people?
  • Open questions: What is your field of work, and how long have you been in this role? How frequently do you work (full-time/part-time/casual)?
  • Probing questions: How has arthritis affected your work-related demands or career? How so? Has arthritis led you to reconsider your career? How so? Has arthritis affected your usual working hours each week? How so? How have changes to work or career because of your arthritis impacted other areas of life (e.g. mental health or family role)?

Research question: What are the financial impacts of living with arthritis as a younger person?
  • Open questions: Has your arthritis led to any financial concerns?
  • Probing questions: Financial concerns pertaining to:
      • Direct costs: rheumatologist, prescribed and non-prescribed medications (as well as supplements), allied health costs (rheumatology, physiotherapy, chiropractic, osteopathy, myotherapy), Pilates, gym/personal trainer fees, and complementary therapies.
      • Indirect costs: workplace absenteeism, productivity, loss of wages, informal care, and the cost of different types of insurance, e.g. health insurance (joint replacements).

It is important to create an interview guide, for the following reasons:

  • The researcher should be familiar with their research questions.
  • Using an interview guide will enable the incorporation of feedback from the piloting process.
  • It is difficult to predict how participants will respond to interview questions. They may answer in a way that is anticipated or they may provide unanticipated insights that warrant follow-up. An interview guide (a physical or digital copy) enables the researcher to note these answers and follow-up with appropriate inquiry.
  • Participants will likely have provided heterogeneous answers to certain questions. The interview guide enables the researcher to note similarities and differences across various interviews, which may be important in data analysis.
  • Even experienced qualitative researchers get nervous before an interview! The interview guide provides a safety net if the researcher forgets their questions or needs to anticipate the next question.

Setting up the interview

In the past, most interviews were conducted in person or by telephone. Emerging technologies promote easier access to research participation (e.g. by people living in rural or remote communities, or for people with mobility limitations). Even in metropolitan settings, many interviews are now conducted electronically (e.g. using videoconferencing platforms). Regardless of your interview setting, it is essential that the interview environment is comfortable for the participant. This process can begin as soon as potential participants express interest in your research. Following are some tips from the literature and our own experiences of leading interviews:

  • Answer questions and set clear expectations. Participating in research is not an everyday task. People do not necessarily know what to expect during a research interview, and this can be daunting. Give people as much information as possible, answer their questions about the research and set clear expectations about what the interview will entail and how long it is expected to last. Let them know that the interview will be recorded for transcription and analysis purposes. Consider sending the interview questions a few days before the interview. This gives people time and space to reflect on their experiences, consider their responses to questions and to provide informed consent for their participation.
  • Consider your setting. If conducting the interview in person, consider the location and room in which the interview will be held. For example, if in a participant’s home, be mindful of their private space. Ask if you should remove your shoes before entering their home. If they offer refreshments (which in our experience many participants do), accept them with gratitude if possible. These considerations apply beyond the participant’s home; if using a room in an office setting, consider privacy and confidentiality, accessibility and the potential for disruption. Consider the temperature and the furniture in the room, who may be able to overhear conversations and who may walk past. Similarly, if interviewing by phone or online, take time to assess the space; if the house or office is not quiet or private, use headphones as needed.
  • Build rapport. The research topic may be important to participants from a professional perspective, or they may have deep emotional connections to the topic of interest. Regardless of the nature of the interview, it is important to remember that participants are being asked to open up to an interviewer who is likely to be a stranger. Spend some time with participants before the interview, to make sure that they are comfortable. Engage in some general conversation, and ask if they have any questions before you start. Remember that it is not a normal part of someone’s day to participate in research. Make it an enjoyable and/or meaningful experience for them, and it will enhance the data that you collect.
  • Let participants guide you. Oftentimes, the ways in which researchers and participants describe the same phenomena are different. In the interview, reflect the participant’s language. Make sure they feel heard and that they are willing and comfortable to speak openly about their experiences. For example, our research involves talking to older adults about their experience of falls. We noticed early in this research that participants did not use the word ‘fall’ but would rather use terms such as ‘trip’, ‘went over’ and ‘stumbled’. As interviewers we adopted the participant’s language into our questions.
  • Listen consistently and express interest. An interview is more complex than a simple question-and-answer format. The best interview data comes from participants feeling comfortable and confident to share their stories. By the time you are completing the 20th interview, it can be difficult to maintain the same level of concentration as with the first interview. Try to stay engaged: nod along with your participants, maintain eye contact, murmur in agreement and sympathise where warranted.
  • The interviewer is both the data collector and the data collection instrument. The data received is only as good as the questions asked. In qualitative research, the researcher influences how participants answer questions. It is important to remain reflexive and aware of how your language, body language and attitude might influence the interview. Being rested and prepared will enhance the quality of the questions asked and hence the data collected.
  • Avoid excessive use of ‘why’. It can be challenging for participants to recall why they felt a certain way or acted in a particular manner. Try to avoid asking ‘why’ questions too often, and instead adopt some of the open language described earlier in the chapter.

After your interview

When you have completed your interview, thank the participant and let them know they can contact you if they have any questions or follow-up information they would like to provide. If the interview has covered sensitive topics or the participant has become distressed throughout the interview, make sure that appropriate referrals and follow-up are provided (see section 6).

Download the recording from your device and make sure it is saved in a secure location that can only be accessed by people on the approved research team (see Chapters 35 and 36).

It is important to know what to do immediately after each interview is completed. Interviews should be transcribed – that is, reproduced verbatim for data analysis. Transcribing data is an important step in the process of analysis, but it is very time-consuming; transcribing a 60-minute interview can take up to 8 hours. Data analysis is discussed in Section 4.

Table 13.4. Examples of the four types of interviews

Cuthbertson, 2019 (key stakeholder interview)

  • Study design: Convergent mixed-methods study
  • Number of participants: 30. Key stakeholders were emergency management or disaster healthcare practitioners, academics specialising in disaster management in the Oceania region, and policy managers.
  • Aim: ‘To investigate threats to the health and well-being of societies associated with disaster impact in Oceania.’ [abstract]
  • Country: Australia, Fiji, Indonesia, Aotearoa New Zealand, Timor Leste and Tonga
  • Length of interview: 45–60 minutes
  • Interview guide: Appendix A
  • Sample of interview questions [Appendix A]:
      1. What do you believe are the top five disaster risks or threats in the Oceania region today?
      2. What disaster risks do you believe are emerging in the Oceania region over the next decade?
      3. Why do you think these are risks?
      4. What are the drivers of these risks?
      5. Do you have any suggestions on how we can improve disaster risk assessment?
      6. Are the current disaster risk plans and practices suited to the future disaster risks? If not, why? If not, what do you think needs to be done to improve them?
      7. What are the key areas of disaster practice that can enhance future community resilience to disaster risk?
      8. What are the barriers or inhibitors to facilitating this practice?
      9. What are the solutions or facilitators to enhancing community resilience?
  • Analysis: Thematic analysis guided by the Hazard and Peril Glossary for describing and categorising disasters, applied by the Centre for Research on the Epidemiology of Disasters Emergency Events Database
  • Main themes [Results, Box 1]:
      1. Climate change is observed as a contemporary and emerging disaster risk.
      2. Risk is contextual to the different countries, communities and individuals in Oceania.
      3. Human development trajectories and their impact, along with perceptions of a changing world, are viewed as drivers of current and emerging risks.
      4. Current disaster risk plans and practices are not suited to future disaster risks.
      5. Increased education about risk and risk assessment at a local level, to empower community risk ownership.

Bannon, 2021 (dyad interview)

  • Study design: Qualitative dyadic study
  • Number of participants: 23 dyads
  • Aim: ‘To explore the lived experiences of couples managing young-onset dementia using an integrated dyadic coping model.’ [abstract]
  • Country: United States
  • Length of interview: 60 minutes
  • Interview guide: eAppendix Supplement
  • Sample of interview questions [eAppendix Supplement]:
      1. We like to start by learning more about what you each first noticed that prompted the evaluations you went through to get to the diagnosis. Can you each tell me about the earliest symptoms you noticed?
      2. What are the most noticeable or troubling symptoms that you have experienced since the time of diagnosis? How have your changes in functioning impacted you? Emotionally, how do you feel about your symptoms and the changes in functioning you are experiencing?
      3. Are you open with your friends and family about the diagnosis? Have you experienced any stigma related to your diagnosis?
      4. What is your understanding of the diagnosis? What is your understanding about how this condition will affect you both in the future? How are you getting information about this diagnosis?
  • Analysis: Thematic analysis guided by the Dyadic Coping Theoretical Framework
  • Main themes [abstract]:
      1. Stress communication
      2. Positive individual dyadic coping
      3. Positive conjoint dyadic coping
      4. Negative individual dyadic coping
      5. Negative conjoint dyadic coping

McGranahan, 2020 (narrative interview)

  • Study design: Narrative interview study
  • Number of participants: 28
  • Aim: ‘To explore the experiences and views of people with psychotic experiences who have not received any treatment or other support from mental health services for the past 5 years.’ [abstract]
  • Country: England
  • Length of interview: 40–120 minutes
  • Interview guide: Not provided, but the text states that ‘qualitative semi-structured narrative interviews’ were conducted. [methods]
  • Sample of interview questions: Not provided.
  • Analysis: Inductive thematic analysis as outlined by Braun and Clarke
  • Main themes [abstract]:
      1. Perceiving psychosis as positive
      2. Making sense of psychotic experiences
      3. Finding sources of strength
      4. Negative past experiences of mental health services
      5. Positive past experiences with individual clinicians

Gutierrez-Garcia, 2021 (life history interview)

  • Study design: Life history and lifeline techniques
  • Number of participants: 7
  • Aim: ‘To analyse the use of life histories and lifelines in the study of female genital mutilation in the context of cross-cultural research in participants with different languages.’ [abstract]
  • Country: Spain
  • Length of interview: Three sessions. Session 1: life history interview (40–60 minutes). Session 2: lifeline activity, in which participants used drawings to complement or enhance their interview. Session 3: the researchers and participants worked together to finalise the lifeline. The timing for sessions 2 and 3 is not provided.
  • Interview guide: Not provided, but the text states that ‘an open and semi-structured question guide was designed for use’. [methods]
  • Sample of interview questions: Not provided.
  • Analysis: Phenomenological method proposed by Giorgi (sense of the whole):
      1. Reading the entire description to obtain a general sense of the discourse.
      2. The researcher goes back to the beginning and reads the text again, with the aim of distinguishing the meaning units by separating the perspective of the phenomenon of interest.
      3. The researcher expresses the contents of the units of meaning more clearly by creating categories.
      4. The researcher synthesises the units and categories of meaning into a consistent statement that takes into account the participant’s experience and language.
  • Main themes:
      1. Important moments and their relationship with female genital mutilation
      2. The ritual knife: how sharp or blunt it is at different stages, where and how women are subsequently held as a result
      3. Changing relationships with family: how being subject to female genital mutilation changed relationships with mothers
      4. Female genital mutilation increases the risk of future childbirth complications, which change relationships with family and healthcare systems
      5. Managing experiences with early exposure to physical and sexual violence across the lifespan
Interviews are the most common data collection technique in qualitative research. There are four main types of interviews; the one you choose will depend on your research question, aims and objectives. It is important to formulate open-ended interview questions that are understandable and easy for participants to answer. Key considerations in setting up the interview will enhance the quality of the data obtained and the experience of the interview for the participant and the researcher.

  • Gill P, Stewart K, Treasure E, Chadwick B. Methods of data collection in qualitative research: interviews and focus groups. Br Dent J . 2008;204(6):291-295. doi:10.1038/bdj.2008.192
  • DeJonckheere M, Vaughn LM. Semistructured interviewing in primary care research: a balance of relationship and rigour. Fam Med Community Health . 2019;7(2):e000057. doi:10.1136/fmch-2018-000057
  • Nyanchoka L, Tudur-Smith C, Porcher R, Hren D. Key stakeholders’ perspectives and experiences with defining, identifying and displaying gaps in health research: a qualitative study. BMJ Open . 2020;10(11):e039932. doi:10.1136/bmjopen-2020-039932
  • Morgan DL, Ataie J, Carder P, Hoffman K. Introducing dyadic interviews as a method for collecting qualitative data. Qual Health Res .  2013;23(9):1276-84. doi:10.1177/1049732313501889
  • Picchi S, Bonapitacola C, Borghi E, et al. The narrative interview in therapeutic education. The diabetic patients’ point of view. Acta Biomed . Jul 18 2018;89(6-S):43-50. doi:10.23750/abm.v89i6-S.7488
  • Stuij M, Elling A, Abma T. Negotiating exercise as medicine: Narratives from people with type 2 diabetes. Health (London) . 2021;25(1):86-102. doi:10.1177/1363459319851545
  • Buchmann M, Wermeling M, Lucius-Hoene G, Himmel W. Experiences of food abstinence in patients with type 2 diabetes: a qualitative study. BMJ Open .  2016;6(1):e008907. doi:10.1136/bmjopen-2015-008907
  • Jessee E. The Life History Interview. In: Handbook of Research Methods in Health Social Sciences. 2018:1-17 (Chapter 80-1).
  • Sheftel A, Zembrzycki S. Only Human: A Reflection on the Ethical and Methodological Challenges of Working with “Difficult” Stories. The Oral History Review . 2019;37(2):191-214. doi:10.1093/ohr/ohq050
  • Harnisch H, Montgomery E. “What kept me going”: A qualitative study of avoidant responses to war-related adversity and perpetration of violence by former forcibly recruited children and youth in the Acholi region of northern Uganda. Soc Sci Med .  2017;188:100-108. doi:10.1016/j.socscimed.2017.07.007
  • Ruslin, Mashuri S, Rasak MSA, Alhabsyi M, Alhabsyi F, Syam H. Semi-structured Interview: A Methodological Reflection on the Development of a Qualitative Research Instrument in Educational Studies. IOSR-JRME . 2022;12(1):22-29. doi:10.9790/7388-1201052229
  • Chang T, Llanes M, Gold KJ, Fetters MD. Perspectives about and approaches to weight gain in pregnancy: a qualitative study of physicians and nurse midwives. BMC Pregnancy & Childbirth . 2013;13(47). doi:10.1186/1471-2393-13-47
  • DeJonckheere M, Robinson CH, Evans L, et al. Designing for Clinical Change: Creating an Intervention to Implement New Statin Guidelines in a Primary Care Clinic. JMIR Hum Factors . 2018;5(2):e19. doi:10.2196/humanfactors.9030
  • Knott E, Rao AH, Summers K, Teeger C. Interviews in the social sciences. Nature Reviews Methods Primers . 2022;2(1). doi:10.1038/s43586-022-00150-6
  • Bergenholtz H, Missel M, Timm H. Talking about death and dying in a hospital setting – a qualitative study of the wishes for end-of-life conversations from the perspective of patients and spouses. BMC Palliat Care . 2020;19(1):168. doi:10.1186/s12904-020-00675-1
  • Olorunsaiye CZ, Degge HM, Ubanyi TO, Achema TA, Yaya S. “It’s like being involved in a car crash”: teen pregnancy narratives of adolescents and young adults in Jos, Nigeria. Int Health . 2022;14(6):562-571. doi:10.1093/inthealth/ihab069
  • Ayton DR, Barker AL, Peeters G, et al. Exploring patient-reported outcomes following percutaneous coronary intervention: A qualitative study. Health Expect .  2018;21(2):457-465. doi:10.1111/hex.12636
  • World Health Organization. International Classification of Functioning, Disability and Health (ICF). WHO. https://www.who.int/standards/classifications/international-classification-of-functioning-disability-and-health
  • Cuthbertson J, Rodriguez-Llanes JM, Robertson A, Archer F. Current and Emerging Disaster Risks Perceptions in Oceania: Key Stakeholders Recommendations for Disaster Management and Resilience Building. Int J Environ Res Public Health . 2019;16(3). doi:10.3390/ijerph16030460
  • Bannon SM, Grunberg VA, Reichman M, et al. Thematic Analysis of Dyadic Coping in Couples With Young-Onset Dementia. JAMA Netw Open . 2021;4(4):e216111. doi:10.1001/jamanetworkopen.2021.6111
  • McGranahan R, Jakaite Z, Edwards A, Rennick-Egglestone S, Slade M, Priebe S. Living with Psychosis without Mental Health Services: A Narrative Interview Study. BMJ Open . 2021;11(7):e045661. doi:10.1136/bmjopen-2020-045661
  • Gutiérrez-García AI, Solano-Ruíz C, Siles-González J, Perpiñá-Galvañ J. Life Histories and Lifelines: A Methodological Symbiosis for the Study of Female Genital Mutilation. Int J Qual Methods . 2021;20. doi:10.1177/16094069211040969

Qualitative Research – a practical guide for health and social care researchers and practitioners Copyright © 2023 by Danielle Berkovic is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License , except where otherwise noted.



Library Support for Qualitative Research


General Handbooks and Overviews

  • Interviews as a Method for Qualitative Research (video) This short video summarizes why interviews can serve as useful data in qualitative research.  
  • InterViews by Steinar Kvale  Interviewing is an essential tool in qualitative research and this introduction to interviewing outlines both the theoretical underpinnings and the practical aspects of the process. After examining the role of the interview in the research process, Steinar Kvale considers some of the key philosophical issues relating to interviewing: the interview as conversation, hermeneutics, phenomenology, concerns about ethics as well as validity, and postmodernism. Having established this framework, the author then analyzes the seven stages of the interview process - from designing a study to writing it up.  
  • Practical Evaluation by Michael Quinn Patton  Surveys different interviewing strategies, from (a) informal/conversational, to (b) the interview guide approach, to (c) standardized and open-ended, to (d) closed/quantitative. Also discusses strategies for wording questions that are open-ended, clear, sensitive, and neutral, while supporting the speaker. Provides suggestions for probing and maintaining control of the interview process, as well as suggestions for recording and transcription.
  • The SAGE Handbook of Interview Research by Amir B. Marvasti (Editor); James A. Holstein (Editor); Jaber F. Gubrium (Editor); Karyn D. McKinney (Editor)  The new edition of this landmark volume emphasizes the dynamic, interactional, and reflexive dimensions of the research interview. Contributors highlight the myriad dimensions of complexity that are emerging as researchers increasingly frame the interview as a communicative opportunity as much as a data-gathering format. The book begins with the history and conceptual transformations of the interview, which is followed by chapters that discuss the main components of interview practice. Taken together, the contributions to The SAGE Handbook of Interview Research: The Complexity of the Craft encourage readers simultaneously to learn the frameworks and technologies of interviewing and to reflect on the epistemological foundations of the interview craft.
  • International Congress of Qualitative Inquiry They host an annual conference at the University of Illinois at Urbana-Champaign, which aims to facilitate the development of qualitative research methods across a wide variety of academic disciplines, among other initiatives.
  • METHODSPACE An online home of the research methods community, where practicing researchers share how to make research easier.
  • Social Research Association, UK The SRA is the membership organisation for social researchers in the UK and beyond. It supports researchers via training, guidance, publications, research ethics, events, branches, and careers.
  • Social Science Research Council The SSRC administers fellowships and research grants that support the innovation and evaluation of new policy solutions. They convene researchers and stakeholders to share evidence-based policy solutions and incubate new research agendas, produce online knowledge platforms and technical reports that catalog research-based policy solutions, and support mentoring programs that broaden problem-solving research opportunities.

Except where otherwise noted, this work is subject to a Creative Commons Attribution 4.0 International License , which allows anyone to share and adapt our material as long as proper attribution is given. For details and exceptions, see the Harvard Library Copyright Policy ©2021 Presidents and Fellows of Harvard College.


How to Write Qualitative Research Questions: Types & Examples


Market Research Specialist

Emma David, a seasoned market research professional, specializes in employee engagement, survey administration, and data management. Her expertise in leveraging data for informed decisions has positively impacted several brands, enhancing their market position.


Qualitative research questions focus on depth and quality, exploring the “why and how” behind decisions, without relying on statistical tools.

Unlike quantitative research, which aims to collect tangible, measurable data from a broader demographic, qualitative analysis involves smaller, focused datasets, identifying patterns for insights.

The information collected by qualitative surveys can vary from text to images, demanding a deep understanding of the subject, and therefore, crafting precise qualitative research questions is crucial for success.

In this guide, we’ll discuss how to write effective qualitative research questions, explore various types, and highlight characteristics of good qualitative research questions.

Let’s dive in!

What Are Qualitative Research Questions?

Qualitative questions aim to understand the depth and nuances of a phenomenon, focusing on “why” and “how” rather than quantifiable measures.

They explore subjective experiences, perspectives, and behaviors, often using open-ended inquiries to gather rich, descriptive data.

Unlike quantitative questions, which seek numerical data, qualitative questions seek to uncover the meanings, patterns, and underlying processes at work within a specific context.

These questions are essential for exploring complex issues, generating hypotheses, and gaining deeper insights into human behavior and phenomena.

Here’s an example of a qualitative research question:

“How do you perceive and navigate organizational culture within a tech startup environment?”


This question asks about the respondent’s subjective interpretations and experiences of organizational culture within a specific context, such as a tech startup.

It seeks to uncover insights into the values, norms, and practices that shape workplace dynamics and employee behaviors, providing qualitative data for analysis and understanding.

When Should We Use Qualitative Research Questions?

Qualitative research questions typically aim to open up conversations, encourage detailed narratives, and foster a deep understanding of the subject matter. Here are some scenarios they are best suited for:

  • Exploring Complex Phenomena : When the research topic involves understanding complex processes, behaviors, or interactions that cannot be quantified easily, qualitative questions help delve into these intricate details.
  • Understanding Contexts and Cultures : To grasp the nuances of different social contexts, cultures, or subcultures, qualitative research questions allow for an in-depth exploration of these environments and how they influence individuals and groups.
  • Exploring Perceptions and Experiences : When the aim is to understand people’s perceptions, experiences, or feelings about a particular subject, qualitative questions facilitate capturing the depth and variety of these perspectives.
  • Developing Concepts or Theories : In the early stages of research, where concepts or theories are not yet well-developed, qualitative questions can help generate hypotheses, identify variables, and develop theoretical frameworks based on observations and interpretations.
  • Investigating Processes : To understand how processes unfold over time and the factors that influence these processes, qualitative questions are useful for capturing the dynamics and complexities involved.
  • Seeking to Understand Change : When researching how individuals or groups experience change, adapt to new circumstances, or make decisions, qualitative research questions can provide insights into the motivations, challenges, and strategies involved.
  • Studying Phenomena Not Easily Quantified : For phenomena that are not easily captured through quantitative measures, such as emotions, beliefs, or motivations, qualitative questions can probe these abstract concepts more effectively.
  • Addressing Sensitive or Taboo Topics : In studies where topics may be sensitive, controversial, or taboo, qualitative research questions allow for a respectful and empathetic exploration of these subjects, providing space for participants to share their experiences in their own words.

How to Write Qualitative Research Questions?

Read this guide to learn how you can craft well-thought-out qualitative research questions:

1. Begin with Your Research Goals

The first step in formulating qualitative research questions is to have a clear understanding of what you aim to discover or understand through your research. Qualitative research questions generally fall into two broad categories – Ontological (concerned with the nature of reality) and Epistemological (concerned with the nature of knowledge).

The nature of your research influences all aspects of your research design, including the formulation of research questions.

Subsequently:

  • Identify your main objective : Consider the broader context of your study. Are you trying to explore a phenomenon, understand a process, or interpret the meanings behind behaviors? Your main objective should guide the formulation of your questions, ensuring they are aligned with what you seek to achieve.
  • Focus on the ‘how’ and ‘why’ : Qualitative research is inherently exploratory and aims to understand the nuances of human behavior and experience. Starting your questions with “how” or “why” encourages a deeper investigation into the motivations, processes, and contexts underlying the subject matter. This approach facilitates an open-ended exploration, allowing participants to provide rich, detailed responses that illuminate their perspectives and experiences.

Take a quick look at the following visual for a better understanding:


So, if you are doing Ontological research, ensure that your questions focus on the “what” aspects of reality (the premise of your research); for Epistemological research, focus your questions on the nature of knowledge and how we come to know it.

2. Choose the Right Structure

The structure of your research questions significantly impacts the depth and quality of data you collect. Opting for an open-ended format allows respondents the flexibility to express themselves freely, providing insights that pre-defined answers might miss.

  • Open-ended format : These questions do not constrain respondents to a set of predetermined answers, unlike closed-ended questions. By allowing participants to articulate their thoughts in their own words, you can uncover nuances and complexities in their responses that might otherwise be overlooked.
  • Avoid yes/no questions : Yes/no questions tend to limit the depth of responses. While they might be useful for gathering straightforward factual information, they are not conducive to exploring the depths and nuances that qualitative research seeks to uncover. Encouraging participants to elaborate on their experiences and perspectives leads to richer, more informative data.

For example, take a look at some qualitative questions examples shown in the following image:


3. Be Clear and Specific

Clarity and specificity in your questions are crucial to ensure that participants understand what is being asked and that their responses are relevant to your research objectives.

  • Use clear language : Use straightforward, understandable language in your questions. Avoid jargon, acronyms, or overly technical terms that might confuse participants or lead to misinterpretation. The goal is to make your questions accessible to everyone involved in your study.
  • Be specific : While maintaining the open-ended nature of qualitative questions, it’s important to narrow down your focus to specific aspects of the phenomenon you’re studying. This specificity helps guide participants’ responses and ensures that the data you collect directly relates to your research objectives.

4. Ensure Relevance and Feasibility

Each question should be carefully considered for its relevance to your research goals and its feasibility, given the constraints of your study.

  • Relevance : Questions should be crafted to address the core objectives of your research directly. They should probe areas that are essential to understanding the phenomenon under investigation and should align with your theoretical framework or literature review findings.
  • Feasibility : Consider the practical aspects of your research, including the time available for data collection and analysis, resources, and access to participants. Questions should be designed to elicit meaningful responses within the constraints of your study, ensuring that you can gather and analyze data effectively.

5. Focus on a Single Concept or Theme per Question

To ensure clarity and depth, each question should concentrate on a single idea or theme. However, if your main qualitative research question is difficult to understand or has a complex structure, you can create a limited number of sub-questions with a “ladder structure”.

This helps your respondents keep the overall research objective in mind and makes your research easier to execute.

For example, suppose your main question is – “What is the current state of illiteracy in your state?”

Then, you can create the following subquestions: 

“How does illiteracy block progress in your state?”

“How would you best describe the feelings you have about illiteracy in your state?”

For an even better understanding, you can see the various qualitative research question examples in the following image:


Tip : Test your questions with a small group similar to your study population to ensure they are understood as intended and elicit the kind of responses you are seeking.

Tip : Be prepared to refine your questions based on pilot feedback or as your understanding of the topic deepens.

Types of Qualitative Research Questions With Examples

Qualitative survey questions primarily focus on a specific group of respondents participating in case studies, surveys, ethnography studies, and the like, rather than on numbers or statistics.

As a result, the questions are mostly open-ended and can be subdivided into the following types as discussed below:

1. Descriptive Questions

Descriptive research questions aim to detail the “what” of a phenomenon, providing a comprehensive overview of the context, individuals, or situations under study. These questions are foundational, helping to establish a baseline understanding of the research topic.

  • What are the daily experiences of teachers in urban elementary schools?
  • What strategies do small businesses employ to adapt to rapid technological changes?
  • How do young adults describe their transition from college to the workforce?
  • What are the coping mechanisms of families with members suffering from chronic illnesses?
  • How do community leaders perceive the impact of gentrification in their neighborhoods?

2. Interpretive Questions

Interpretive questions seek to understand the “how” and “why” behind a phenomenon, focusing on the meanings people attach to their experiences. These questions delve into the subjective interpretations and perceptions of participants.

  • How do survivors of natural disasters interpret their experiences of recovery and rebuilding?
  • Why do individuals engage in voluntary work within their communities?
  • How do parents interpret and navigate the challenges of remote schooling for their children?
  • Why do consumers prefer local products over global brands in certain markets?
  • How do artists interpret the influence of digital media on traditional art forms?

3. Comparative Questions

Comparative research questions are designed to explore differences and similarities between groups, settings, or time periods. These questions can help to highlight the impact of specific variables on the phenomenon under study.

  • How do the strategies for managing work-life balance compare between remote and office workers?
  • What are the differences in consumer behavior towards sustainable products in urban versus rural areas?
  • How do parenting styles in single-parent households compare to those in dual-parent households?
  • What are the similarities and differences in leadership styles across different cultures?
  • How has the perception of online privacy changed among teenagers over the past decade?

4. Process-oriented Questions

These questions focus on understanding the processes or sequences of events over time. They aim to uncover the “how” of a phenomenon, tracing the development, changes, or evolution of specific situations or behaviors.

  • How do non-profit organizations develop and implement community outreach programs?
  • What is the process of decision-making in high-stakes business environments?
  • How do individuals navigate the process of career transition after significant industry changes?
  • What are the stages of adaptation for immigrants in a new country?
  • How do social movements evolve from inception to national recognition?

5. Evaluative Questions

Evaluative questions aim to assess the effectiveness, value, or impact of a program, policy, or phenomenon. These questions are critical for understanding the outcomes and implications of various initiatives or situations.

  • How effective are online therapy sessions compared to in-person sessions in treating anxiety?
  • What is the impact of community gardening programs on neighborhood cohesion?
  • How do participants evaluate the outcomes of leadership training programs in their professional development?
  • What are the perceived benefits and drawbacks of telecommuting for employees and employers?
  • How do residents evaluate the effectiveness of local government policies on waste management?

6. One-on-One Questions

One-on-one questions are asked of a single person and can be thought of as individual interviews, which you can also conduct online via phone or video chat.

The main aim of such questions is to ask your customers, or people in a focus group, a series of questions about their purchase motivations. These questions may also come with follow-ups; if a customer responds with an interesting fact or detail, dig deeper and explore the finding as much as you need.

  • What makes you happy in regard to [your research topic]?
  • If I could make a wish of yours come true, what do you desire the most?
  • What do you still find hard to come to terms with?
  • Have you bought [your product] before?
  • If so, what was your initial motivation behind the purchase?

7. Exploratory Questions

These questions are designed to enhance your understanding of a particular topic. When asking exploratory questions, however, you must ensure that they carry no preconceived notions or biases. The more transparent and bias-free your questions are, the fairer and more useful your results will be.

  • What is the effect of personal smart devices on today’s youth?
  • Do you feel that smart devices have positively or negatively impacted you?
  • How do your kids spend their weekends?
  • What do you do on a typical weekend morning?

8. Predictive Questions

Predictive questions are used in qualitative research that focuses on the future outcomes of an action or a series of actions. You use past information to predict respondents’ reactions to hypothetical events that may or may not happen in the future.

These questions come in extremely handy for identifying your customers’ current brand expectations, pain points, and purchase motivation.

  • Are you more likely to buy a product when a celebrity promotes it?
  • Would you ever try a new product because one of your favorite celebs claims that it actually worked for them?
  • Would people in your neighborhood enjoy a park with rides and exercise options?
  • How often would you go to a park with your kids if it had free rides?

9. Focus Groups

These questions are mostly asked in person to the customer or respondent groups. The in-person nature of these surveys or studies ensures that the group members get a safe and comfortable environment to express their thoughts and feelings about your brand or services.

  • How would you describe your ease of using our product?
  • How well do you think you were able to do this task before you started using our product?
  • What do you like about our promotional campaigns?
  • How well do you think our ads convey the meaning?

10. In-Home Videos

Collecting video feedback from customers in their comfortable, natural settings offers a unique perspective. At home, customers are more relaxed and less concerned about their mannerisms, posture, and choice of words when responding.

This approach is partly why Vogue’s 73 Questions Series is highly popular among celebrities and viewers alike. In-home videos provide insights into customers in a relaxed environment, encouraging them to be honest and share genuine experiences.

  • What was your first reaction when you used our product for the first time?
  • How well do you think our product performed compared to your expectations?
  • What was your worst experience with our product?
  • What made you switch to our brand?

11. Online Focus Groups

Online focus groups mirror the traditional, in-person format but are conducted virtually, offering a more cost-effective and efficient approach to gathering data. This digital format extends your reach and allows a rapid collection of responses from a broader audience through online platforms.

You can utilize social media and other digital forums to create communities of respondents and initiate meaningful discussions. Once you have them started, you can simply observe the exchange of thoughts and gather massive amounts of interesting insights!

  • What do you like best about our product?
  • How familiar are you with this particular service or product we offer?
  • What are your concerns with our product?
  • What changes can we make to make our product better?

Ask the Right Qualitative Research Questions for Meaningful Insights From Your Respondents


By now, you might have realized that manually creating a list of qualitative research questions is a daunting task. With numerous considerations to keep in mind, it’s easy to run out of ideas while crafting qualitative survey questions.

However, investing in smart survey tools, like ProProfs Survey Maker, can significantly streamline this process, allowing you to create various types of surveys in minutes.

With this survey tool, you can generate forms, NPS surveys, tests, quizzes, and assessments.

It’s also useful for conducting polls, sidebar surveys, and in-app surveys. Offering over 100 templates and more than 1,000,000 ready-to-use questions, this software simplifies the task immensely.

Equipped with the right tools and the professional tips shared here, you’re well-prepared to conduct thorough research studies and obtain valuable insights that drive impactful results.

Frequently Asked Questions on Qualitative Research Questions

1. How do you choose qualitative research questions?

To choose qualitative research questions, identify your main research goal, focus on exploring ‘how’ and ‘why’ aspects, ensure questions are open-ended, and align them with your theoretical framework and methodology.

2. Why are good qualitative research questions important?

Good qualitative research questions are important because they guide the research focus, enable the exploration of depth and complexity, and facilitate the gathering of rich, detailed insights into human experiences and behaviors.

Emma David

About the author

Emma David is a seasoned market research professional with 8+ years of experience. Having kick-started her journey in research, she has developed rich expertise in employee engagement, survey creation and administration, and data management. Emma believes in the power of data to shape business performance positively. She continues to help brands and businesses make strategic decisions and improve their market standing through her understanding of research methodologies.


IndianScribes

Research, Record, and Transcribe Better

Preparing Questions for a Qualitative Research Interview

Updated on: June 22, 2024


A qualitative research interview is an invaluable tool for researchers. Whether one’s studying social phenomena, exploring personal narratives, or investigating complex issues, interviews offer a means to gain unique insights. 

“The quality of the data collected in a qualitative research interview is highly dependent on the quality and appropriateness of the questions asked.”

But how do you prepare the right questions to ensure your interviews yield rich data? In this guide, we’ll explore the types of qualitative research interviews and provide tips for crafting effective questions.


Types of Qualitative Research Interviews

Before diving into question preparation, it’s important to select the type of qualitative research interview that’s best suited for the study at hand.

There are three types of qualitative research interviews:

Structured Interviews 

Structured interviews involve asking the same set of pre-written questions to every participant. This approach ensures consistency, making it easier to compare data between participants or groups later.

When conducting structured interviews, keep these guidelines in mind:

  • Pre-written Questions : All questions, including probes, should be meticulously written in advance.
  • Detailed Questions : Questions should be detailed enough to be used verbatim during interviews.
  • Consistent Sequence : The sequence of questions should be pre-decided and consistent across interviews.

Example of a Structured Interview Question

Question : Thinking back to your childhood days in Chelsea, can you remember what kind of local music was popular at the time?

  • Why do you think it was so popular?
  • Where was it played?
  • Were there other popular genres?

Structured interviews are ideal when you need uniform data collection across all participants. They are common in large-scale studies or when comparing responses quantitatively.

Read more: Advantages & Disadvantages of Structured Interviews

Semi-structured Interviews 

The second type of qualitative interview is the semi-structured interview. In these interviews, the interview guide outlines the topics to be explored, but the actual questions are not pre-written.

This approach allows interviewers the freedom to phrase questions spontaneously and explore topics in more depth.

Example of a Semi-Structured Interview Question

Question : What problems did the participant face growing up in the community?

  • Education-related.
  • Related to their immediate family.
  • Related to the community in general.

Semi-structured interviews strike a balance between flexibility and structure. They offer a framework within which interviewers can adapt questions to participants’ responses, making them suitable for in-depth exploration.

Unstructured Interviews 

Unstructured interviews, often referred to as informal conversational interviews, are characterized by a lack of formal guidelines, predefined questions, or sequencing.

Questions emerge during the interview based on the conversation’s flow and the interviewee’s observations. Consequently, each unstructured interview is unique, and questions may evolve over time.

Unstructured interviews are highly exploratory and can lead to unexpected insights. They are particularly valuable when studying complex or novel phenomena where predefined questions may limit understanding.

Deciding What Information You Need

Once you’ve chosen the type of interview that suits your research study, the next step is to decide what information you need to collect.

Patton’s six types of questions offer a framework for shaping your inquiries:

  • Behavior or Experience : Explore participants’ actions and experiences.
  • Opinion or Belief : Probe participants’ beliefs, attitudes, and opinions.
  • Feelings : Delve into the emotional aspects of participants’ experiences.
  • Knowledge : Assess participants’ understanding and awareness of a topic.
  • Sensory : Investigate how participants perceive and interact with their environment.
  • Background or Demographic : Collect information about participants’ personal characteristics and histories.

Based on these categories, create a list of the specific information you aim to collect through the interview. This step ensures that your questions align with your research objectives.

Writing the Qualitative Research Interview Questions

After deciding the type of interview and nature of information you’d like to gather, the next step is to write the actual questions. 

Using Open-Ended Questions

Open-ended questions are the backbone of qualitative research interviews. They encourage participants to share their experiences and thoughts in-depth, providing rich, detailed data.

Avoid ‘yes’ or ‘no’ questions, as they limit responses. Instead, use open-ended questions that grant participants the freedom to express themselves. Here are some examples – 

Examples of Open-Ended Questions

How do you feel about working at ABC Corp. during your initial years there?

  • Encourages participants to share their emotions and experiences.

Can you describe the attitudes and approach to work of the other people working with you at the time?

  • Invites participants to reflect on their colleagues’ behaviors and attitudes.

Tell me more about your relationship with your peers.

  • Encourages participants to provide narrative insights into their relationships.

Read More: 100 Open-Ended Qualitative Interview Questions

Going from Unstructured to Structured Questions

Unstructured questions allow the interviewee to guide the conversation, letting them focus on what they think is most important.

These questions make the interview longer, but also provide richer and deeper insight.

Examples of Unstructured Questions

  • Tell me about your experience working at [xxx].
  • What did it feel like to live in that neighborhood?
  • What stood out to you as the defining characteristic of that neighborhood?

Examples of Structured Questions

  • What are some ways people dealt with the health issues caused by excessive chemical industries in the neighborhood?
  • As an employee at ABC Corp. during the time, did you observe any specific actions taken by the employers to address the issue?

Probing Questions

Probing questions are used to get more information about an answer or clarify something. They help interviewers dig deeper, clarify responses, and gain a more comprehensive understanding.

Examples of Probing Questions

Tell me more about that.

  • Encourages participants to elaborate on their previous response.

And how did you feel about that?

  • Invites participants to share their emotional reactions.

What do you mean when you say [xxx]?

  • Seeks clarification on ambiguous or complex statements.

Probing questions enhance the depth and clarity of the data collected; however, they should be used judiciously to avoid overwhelming participants.

A General Last Question

As your interview approaches its conclusion, it’s beneficial to have a general last question that allows the interviewee to share any additional thoughts or opinions they feel are relevant.

For instance, you might ask:

Thank you for all that valuable information. Is there anything else you’d like to add before we end?

This open-ended question provides participants with a final opportunity to express themselves fully, ensuring that no critical insights are left unshared.

Preparing questions for qualitative research interviews requires a thoughtful approach that considers the interview type, desired information, and the balance between structured and unstructured questioning.

Here’s a great guide from Harvard University on the subject.

Read More: How to Transcribe an Interview – A Complete Guide

  • Choosing the Right Setting for a Qualitative Research Interview
  • 5 Ways Researchers can Transcribe from Audio to Text



How to carry out great interviews in qualitative research.

An interview is one of the most versatile methods used in qualitative research. Here’s what you need to know about conducting great qualitative interviews.

What is a qualitative research interview?

Qualitative research interviews are a mainstay among qualitative research techniques, and have been in use for decades, either as a primary data collection method or as an adjunct to a wider research process. A qualitative research interview is a one-to-one data collection session between a researcher and a participant. Interviews may be carried out face-to-face, over the phone, or via video call using a service like Skype or Zoom.

There are three main types of qualitative research interview: structured, unstructured, and semi-structured.

  • Structured interviews Structured interviews are based around a schedule of predetermined questions and talking points that the researcher has developed. At their most rigid, structured interviews may have a precise wording and question order, meaning that they can be replicated across many different interviewers and participants with relatively consistent results.
  • Unstructured interviews Unstructured interviews have no predetermined format, although that doesn’t mean they’re ad hoc or unplanned. An unstructured interview may outwardly resemble a normal conversation, but the interviewer will in fact be working carefully to make sure the right topics are addressed during the interaction while putting the participant at ease with a natural manner.
  • Semi-structured interviews Semi-structured interviews are the most common type of qualitative research interview, combining the informality and rapport of an unstructured interview with the consistency and replicability of a structured interview. The researcher will come prepared with questions and topics, but will not need to stick to precise wording. This blended approach can work well for in-depth interviews.


What are the pros and cons of interviews in qualitative research?

As a qualitative research method, interviewing is hard to beat, with applications in social research, market research, and even basic and clinical pharmacy. But like any aspect of the research process, it’s not without its limitations. Before choosing qualitative interviewing as your research method, it’s worth weighing up the pros and cons.

Pros of qualitative interviews:

  • provide in-depth information and context
  • can be used effectively when there are low numbers of participants
  • provide an opportunity to discuss and explain questions
  • useful for complex topics
  • rich in data – in the case of in-person or video interviews, the researcher can observe body language and facial expression as well as the answers to questions

Cons of qualitative interviews:

  • can be time-consuming to carry out
  • costly when compared to some other research methods
  • because of time and cost constraints, they often limit you to a small number of participants
  • difficult to standardize your data across different researchers and participants unless the interviews are very tightly structured
  • can take an emotional toll on interviewers, as the Open University of Hong Kong notes

Qualitative interview guides

Semi-structured interviews are based on a qualitative interview guide, which acts as a road map for the researcher. While conducting interviews, the researcher can use the interview guide to help them stay focused on their research questions and make sure they cover all the topics they intend to.

An interview guide may include a list of questions written out in full, or it may be a set of bullet points grouped around particular topics. It can prompt the interviewer to dig deeper and ask probing questions during the interview if appropriate.

Consider writing out the project’s research question at the top of your interview guide, ahead of the interview questions. This may help you steer the interview in the right direction if it threatens to head off on a tangent.
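The guide-as-road-map idea can be kept literal by storing the guide as structured data, with the research question written at the top. Below is a minimal sketch in Python; the research question, topic names, and prompts are invented purely for illustration:

```python
# A qualitative interview guide kept as structured data: the research
# question sits at the top, and prompts are grouped under topics.
interview_guide = {
    "research_question": "How do remote workers experience team cohesion?",
    "topics": {
        "Daily routines": [
            "Walk me through a typical working day.",
            "When do you interact with colleagues?",
        ],
        "Team cohesion": [
            "How connected do you feel to your team?",
            "Can you tell me more about a moment you felt part of the team?",
        ],
    },
}

def render_guide(guide):
    """Flatten the guide into printable lines, research question first."""
    lines = ["RQ: " + guide["research_question"]]
    for topic, prompts in guide["topics"].items():
        lines.append("-- " + topic + " --")
        lines.extend("  * " + p for p in prompts)
    return lines

for line in render_guide(interview_guide):
    print(line)
```

Printing the rendered guide before each session is a quick way to re-anchor yourself to the research question if the conversation threatens to drift.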


Avoid bias in qualitative research interviews

According to Duke University, bias can create significant problems in your qualitative interview.

  • Acquiescence bias is common to many qualitative methods, including focus groups. It occurs when the participant feels obliged to say what they think the researcher wants to hear. This can be especially problematic when there is a perceived power imbalance between participant and interviewer. To counteract this, Duke University’s experts recommend emphasizing the participant’s expertise in the subject being discussed, and the value of their contributions.
  • Interviewer bias is when the interviewer’s own feelings about the topic come to light through hand gestures, facial expressions or turns of phrase. Duke’s recommendation is to stick to scripted phrases where this is an issue, and to make sure researchers become very familiar with the interview guide or script before conducting interviews, so that they can hone their delivery.

What kinds of questions should you ask in a qualitative interview?

The interview questions you ask need to be carefully considered both before and during the data collection process. As well as considering the topics you’ll cover, you will need to think carefully about the way you ask questions.

Open-ended interview questions – which cannot be answered with a ‘yes’, ‘no’ or ‘maybe’ – are recommended by many researchers as a way to pursue in-depth information.

An example of an open-ended question is “What made you want to move to the East Coast?” This will prompt the participant to consider different factors and select at least one. Having thought about it carefully, they may give you more detailed information about their reasoning.

A closed-ended question, such as “Would you recommend your neighborhood to a friend?” can be answered without too much deliberation, and without giving much information about personal thoughts, opinions and feelings.

Follow-up questions can be used to delve deeper into the research topic and to get more detail from open-ended questions. Examples of follow-up questions include:

  • What makes you say that?
  • What do you mean by that?
  • Can you tell me more about X?
  • What did/does that mean to you?

As well as avoiding closed-ended questions, be wary of leading questions. As with other research techniques such as surveys or focus groups, these can introduce bias into your data. Leading questions presume a certain point of view shared by the interviewer and participant, and may even suggest a foregone conclusion.

An example of a leading question might be: “You moved to New York in 1990, didn’t you?” In answering the question, the participant is much more likely to agree than disagree. This may be down to acquiescence bias or a belief that the interviewer has checked the information and already knows the correct answer.

Other leading questions involve adjectival phrases or other wording that introduces negative or positive connotations about a particular topic. An example of this kind of leading question is: “Many employees dislike wearing masks to work. How do you feel about this?” It presumes a negative opinion, and the participant may be swayed by it or may not want to contradict the interviewer.

Harvard University’s guidelines for qualitative interview research add that you shouldn’t be afraid to ask embarrassing questions – “if you don’t ask, they won’t tell.” Bear in mind, though, that too much probing around sensitive topics may cause the interview participant to withdraw. The Harvard guidelines recommend leaving sensitive questions until the later stages of the interview, when a rapport has been established.

More tips for conducting qualitative interviews

Observing a participant’s body language can give you important data about their thoughts and feelings. It can also help you decide when to broach a topic, and whether to use a follow-up question or return to the subject later in the interview.

Be conscious that the participant may regard you as the expert, not themselves. In order to make sure they express their opinions openly, use active listening skills like verbal encouragement and paraphrasing and clarifying their meaning to show how much you value what they are saying.

Remember that part of the goal is to leave the interview participant feeling good about volunteering their time and their thought process to your research. Aim to make them feel empowered, respected and heard.

Unstructured interviews can demand a lot of a researcher, both cognitively and emotionally. Be sure to leave time in between in-depth interviews when scheduling your data collection to make sure you maintain the quality of your data, as well as your own well-being .

Recording and transcribing interviews

Historically, recording qualitative research interviews and then transcribing the conversation manually would have represented a significant part of the cost and time involved in research projects that collect qualitative data.

Fortunately, researchers now have access to digital recording tools, and even speech-to-text technology that can automatically transcribe interview data using AI and machine learning. This type of tool can also be used to capture qualitative data from other qualitative methods (focus groups, etc.), making this kind of social research or market research much less time-consuming.


Data analysis

Qualitative interview data is unstructured, rich in content and difficult to analyze without the appropriate tools. Fortunately, machine learning and AI can once again make things faster and easier when you use qualitative methods like the research interview.

Text analysis tools and natural language processing software can ‘read’ your transcripts and voice data and identify patterns and trends across large volumes of text or speech. They can also perform sentiment analysis, which assesses overall trends in opinion and provides an unbiased overall summary of how participants are feeling.


Another feature of text analysis tools is their ability to categorize information by topic, sorting it into groupings that help you organize your data according to the topic discussed.
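The pattern-finding and topic-sorting described above can be illustrated with a deliberately simple sketch: a keyword-based sentiment score and topic tagger. Real text analysis tools use trained NLP models; the word lists and topics below are invented stand-ins, not any vendor’s actual method:

```python
# Toy transcript analysis: score sentiment by keyword counts and tag
# each response with the topics it mentions. Illustrative only -- real
# tools use trained models rather than hand-written word lists.
POSITIVE = {"love", "great", "easy", "helpful"}
NEGATIVE = {"hate", "confusing", "slow", "frustrating"}
TOPICS = {
    "pricing": {"price", "cost"},
    "usability": {"easy", "confusing", "interface"},
}

def analyze(response):
    """Return a crude sentiment score and the topics a response touches."""
    words = set(response.lower().replace(".", "").split())
    sentiment = len(words & POSITIVE) - len(words & NEGATIVE)
    topics = [t for t, kws in TOPICS.items() if words & kws]
    return {"sentiment": sentiment, "topics": topics}

transcript = [
    "The interface is easy and helpful.",
    "The price is frustrating and the checkout is slow.",
]
results = [analyze(r) for r in transcript]
```

Even this toy version shows the shape of the output a researcher works with: per-response scores and topic labels that can be aggregated across an entire study.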

All in all, interviews are a valuable technique for qualitative research in business, yielding rich and detailed unstructured data. Historically, they have only been limited by the human capacity to interpret and communicate results and conclusions, which demands considerable time and skill.

When you combine this data with AI tools that can interpret it quickly and automatically, it becomes easy to analyze and structure, dovetailing perfectly with your other business data. An additional benefit of natural language analysis tools is that they are free of subjective biases, and can replicate the same approach across as much data as you choose. By combining human research skills with machine analysis, qualitative research methods such as interviews are more valuable than ever to your business.

A Step-by-Step Guide for a Successful Qualitative Interview


Key Takeaways: 

  • Qualitative interviews provide in-depth insights from individual respondents, and are useful when follow-up or clarification is needed
  • Clarity of objectives and audience is essential to gathering actionable insights from your qualitative research project
  • Build a strong researcher-respondent relationship to elicit honest and engaged responses

Qualitative research uses in-depth interviews to gain rich non-numerical data from individuals. This data helps researchers understand concepts, opinions, and personal experiences. Interviews are an excellent method to discover the “why” behind people’s preferences or behaviors, but they require a thoughtful approach.

Continue reading as we explore use cases and define the steps to follow for a successful qualitative interview.


When Should I Use Qualitative Interviews?

Qualitative research is used to obtain context and describe underlying factors. It describes “how” and “why.”

Perhaps a business wants to understand what product features are most or least important to each target segment. They could ask:

“Between product A and product B, how would the features in each product influence your buying decision?”

This creates an opportunity for the respondent to reveal what features are personally important and unimportant for them. In an interview setting, researchers can go deeper into why these features are important, and how important each feature is in comparison to others.

Qualitative interviews are best when:

  • You need in-depth insights
  • You want answers to a range of follow-up questions, building on prior responses
  • Your questions require significant explanation and reasoning
  • You explore complex and confusing topics with respondents
  • You want to understand what drives consumer decisions
  • You want to hear the unique voice of your audience first-hand

Conducting a Successful Qualitative Interview – Step by Step Guide

Knowing when to use a qualitative interview is a great first step, but now you need to understand how best to conduct one. Our experts share a range of steps to follow as you embark on a qualitative interview and best practices for each.

1. Determine Your Objective

What are you trying to understand? The answer to this is critical in guiding your qualitative research process.

Some common examples:

  • Understand consumer perceptions of products, services, or brand
  • Reveal strengths and weaknesses in product or service portfolios
  • Understand consumer buying behaviors
  • Test the usability of a website or digital service
  • Gauge emotional reactions to packaging design and marketing assets

2. Understand Your Audience

Who is your target audience for this project? Have a clear understanding of who you need to hear from to meet your research objective.

Here are some examples of objectives, and the sample that is most suited to each:

  • If you want to understand how existing customers perceive the quality of your products, you need a sample of existing customers.
  • If you want to understand why consumers choose competitor products over yours, you need a sample of non-customers who buy products from your primary competitor.
  • If you want to understand how the average person perceives your brand, you need a combination of existing customers, non-customers with awareness of your brand, and unaware non-customers.

3. Design Appropriate Questions

The questions you ask must align with the objectives of your research without being leading or introducing bias.

Here are some best practices when designing research questions:

  • Keep questions open-ended. This increases the depth of insight obtained.
  • Follow a structure. For instance, a tree diagram where every question has pre-determined follow-up questions based on anticipated answers. A planned structure increases the quality and validity of responses and reduces distractions.
  • Design questions that simplify data collection and analysis. Format the responses collected to be compatible with your tools during data ingestion.
  • Keep it simple. Focus on clarity when designing research questions to improve respondent understanding and engagement.
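The tree-diagram structure mentioned above can be sketched as nested data, where each question carries pre-planned follow-ups keyed by the broad kind of answer the researcher anticipates. The questions and answer categories here are invented examples:

```python
# A question tree for a semi-structured guide: each node holds the
# question plus pre-determined follow-ups keyed by anticipated answer.
question_tree = {
    "question": "How do you usually shop for groceries?",
    "follow_ups": {
        "online": {
            "question": "What made you switch to ordering online?",
            "follow_ups": {},
        },
        "in_store": {
            "question": "What do you enjoy about visiting the store?",
            "follow_ups": {},
        },
    },
}

def next_question(node, answer_kind):
    """Return the planned follow-up for an anticipated answer kind, or None."""
    follow_up = node["follow_ups"].get(answer_kind)
    return follow_up["question"] if follow_up else None
```

Answers that fall outside the anticipated kinds simply return no planned follow-up, which is the researcher’s cue to improvise a probe while staying within the topic.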

4. Organize and Prepare for the Interview

Relationships are essential to the interview process. Preparation beforehand helps build the respondent-researcher relationship. This relationship creates trust and elicits more honest and in-depth answers from participants. Here are some ways to prepare for an interview:

  • Give respondents as much information as possible—such as question lists and question intent. Put this into an interview handbook to improve engagement and effectiveness.
  • Conduct the interview in a suitable environment with minimal distractions and stressors.
  • Have the necessary materials to record information.
  • Interview yourself to identify and fix problems before you start interviewing others.

5. Conduct the Interview

With a structure in place, researchers have a clear plan of action throughout the interview.

During the interview, stay attuned to emotional reactions and body language with the following techniques:

  • Create a relaxed atmosphere. Ask respondents about their lives, work, and passions to establish a connection.
  • Give respondents your full attention. An engaged researcher encourages an engaged respondent. Plus, they gave up their personal time to help you out.
  • Read body language. Is the respondent crossing their arms, looking down to the floor, or not making eye contact? These reactions may signal discomfort or anxiety, offering an opportunity to build rapport.
  • Follow the questions but be flexible when listening. Deviations from the script may lead to unexpected and valuable insights.

6. Transcribe and Analyze Responses

Convert recorded audio responses to text. Decide early which tool or solution will work best for your needs.

Similarly, researchers may need to annotate video responses to describe behaviors and surrounding context before analysis; e.g., this person gritted their teeth during that response, that person’s vocal tone was anxious and uncertain, etc.

Transcribe responses into a format ready for analysis upon ingestion into your business intelligence tools.
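One way to make transcripts “ready for analysis upon ingestion” is to store each utterance as a uniform record, with behavioral annotations kept alongside the text rather than buried inline. A minimal sketch using Python’s standard library; the field names are an assumption, not a standard:

```python
import csv
import io

# Each utterance becomes one record: speaker, timestamp, text, and any
# behavioral annotation the researcher added while reviewing the video.
utterances = [
    {"speaker": "R1", "time": "00:01:12",
     "text": "I avoid self-checkout.",
     "annotation": "gritted teeth; tense tone"},
    {"speaker": "R1", "time": "00:02:05",
     "text": "Staff are always kind.",
     "annotation": ""},
]

def to_csv(rows):
    """Serialize utterance records to CSV for ingestion into BI tools."""
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf, fieldnames=["speaker", "time", "text", "annotation"]
    )
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```

Keeping annotations in their own column means sentiment in the spoken words and sentiment in the observed behavior can later be compared rather than conflated.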

7. Learn, Adapt, and Evolve Your Interviews

Each interview is an opportunity to improve the process. Take time after a project to evaluate how it went.

What did you learn about the process? Was it easy or confusing? Was the respondent comfortable or on edge? Did you get the responses you needed?

Scrutinize your interview approach. Look for ways to improve and innovate the process for better outcomes next time.

Now, you should have a good idea of when to use and how to approach qualitative interviews.

Sago has decades of experience across both quantitative and qualitative research. Our experts find interviews ideal for in-depth qualitative insights that guide new product and service development or improve market positioning for existing offerings. We offer both in-person facilities and online spaces to conduct qualitative interviews.

If you still have questions, get in touch with Sago for help with your next research project.


Sample of Document Analysis for Qualitative Studies


Qualitative Document Analysis plays a crucial role in understanding complex human interactions and social phenomena. This method allows researchers to delve into various textual materials, such as interviews, reports, or articles, to extract meaningful insights and patterns. By systematically examining these documents, analysts can uncover underlying themes that inform both theory and practice in qualitative studies.

The significance of Qualitative Document Analysis lies in its ability to reveal insights that traditional quantitative methods often overlook. As researchers engage with textual data, they develop a richer understanding of the context surrounding the information presented. This process not only enhances the quality of research but also fosters a deeper connection between the data and its implications for real-world applications.

The Importance of Qualitative Document Analysis in Research

Qualitative Document Analysis plays a pivotal role in research by enabling scholars to delve into the intricate meanings behind textual data. This method focuses on the nuances of language, context, and subtext, providing valuable insights that quantitative analysis might overlook. By examining documents critically, researchers can uncover trends, patterns, and connections that enrich their understanding of social phenomena.

To grasp the importance of this analysis, consider a few key aspects: First, it allows for a deep exploration of subjects, leading to a more holistic understanding of the research topic. Second, it enhances the credibility of findings, as the evidence drawn from documents is often rooted in real-world experiences and contexts. Finally, using qualitative document analysis fosters an interactive dialogue with the data, ultimately stimulating innovative ideas and approaches. This method not only strengthens a study but also contributes to the broader academic discourse.

Defining Qualitative Document Analysis

Qualitative Document Analysis involves examining various types of documents to extract meaningful insights. This method is especially useful for understanding complex social phenomena. Through careful reading and interpretation, researchers can uncover themes, patterns, and contexts that inform their study. The process often begins with selecting relevant documents that reflect the experience or phenomenon being studied.

This analysis can be broken down into several key steps. First, researchers identify their objectives and the specific documents they wish to analyze. Next, they thoroughly review the material, noting recurring themes or significant quotes. Finally, researchers interpret these findings within the context of their research questions and broader social implications. By systematically engaging with the document, researchers can extract valuable insights that enhance understanding and inform future practices. Qualitative Document Analysis is not just about the content of the documents; it is about the narratives and meanings derived from them.

Role in Uncovering Insights

Qualitative Document Analysis plays a pivotal role in uncovering insights that can shape understandings and decisions. By systematically reviewing and interpreting documents, researchers extract meaningful information about sentiments, patterns, and trends. This process often involves identifying specific themes that reflect participants’ experiences, challenges, and desires, leading to a richer comprehension of the studied subject.

In practice, document analysis allows researchers to delve deeper into data by examining quotes and context from interviews. For example, analyzing customer feedback can highlight common pain points, enabling organizations to address specific needs effectively. Moreover, aggregating insights across multiple documents enhances the overall findings, providing a comprehensive view of the data landscape. Integrating this methodology expands the potential for actionable insights and deeper understanding, making qualitative document analysis an essential tool for qualitative studies.

Core Steps in Qualitative Document Analysis

Qualitative Document Analysis involves a systematic approach to understanding and interpreting various text-based materials. Initially, one must begin by selecting relevant documents that align with the research question. This critical first step sets the foundation for meaningful analysis. Once documents are gathered, developing a coding framework is essential. This framework helps to categorize and identify key themes, patterns, and insights within the text.

Next, engage in a thorough reading of the documents, highlighting important sections that may contribute to your analysis. It's beneficial to take notes and reflect on how these insights relate to your research objectives. Afterward, the process of extracting meaning comes into play by synthesizing the identified themes. Finally, articulate the findings clearly, ensuring they address the initial research questions while providing deeper insights into the content analyzed. This structured approach to qualitative document analysis not only enhances understanding but also adds rigor to the research process.

Data Collection and Preparation

Data collection and preparation are crucial steps in qualitative document analysis. This phase involves systematically gathering relevant documents that provide insights into the phenomena being studied. For effective data collection, researchers must clearly define the types of documents required and establish the criteria for selection. This may include reports, emails, memos, or any textual evidence that sheds light on the research questions.

Once the documents are collected, preparation entails organizing, categorizing, and formatting them for analysis. Researchers should thoroughly review each document for context and relevance. In some cases, it is helpful to transcribe or annotate the text for clarity. Ensuring accuracy in this preparation phase lays the foundation for insightful analysis. By meticulously approaching the data collection and preparation process, researchers can enhance the reliability and depth of qualitative findings, allowing them to draw meaningful conclusions from the studied documents.
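The selection and preparation steps above can be sketched in code. This is only an illustrative sketch: the document types, fields, and inclusion criteria below are invented, and real selection decisions are made by the researcher, not a filter.

```python
from dataclasses import dataclass

# Hypothetical document record; fields are assumptions for illustration.
@dataclass
class Document:
    doc_id: str
    doc_type: str   # e.g., "report", "email", "memo"
    year: int
    text: str

# Invented inclusion criteria: allowed document types and a study window.
ALLOWED_TYPES = {"report", "email", "memo"}

def select_documents(docs, min_year=2020):
    """Keep only documents matching the (assumed) inclusion criteria."""
    return [d for d in docs if d.doc_type in ALLOWED_TYPES and d.year >= min_year]

def prepare(doc):
    """Normalize whitespace so later coding works on clean text."""
    return " ".join(doc.text.split())

corpus = [
    Document("d1", "report", 2022, "Quarterly   feedback  summary."),
    Document("d2", "tweet", 2023, "Outside the allowed document types."),
    Document("d3", "memo", 2018, "Old memo, outside the study window."),
]
selected = select_documents(corpus)
print([(d.doc_id, prepare(d)) for d in selected])  # only d1 survives
```

Making the criteria explicit in one place, as here, doubles as documentation of the selection rules, which supports the audit trail of the study.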

Coding and Categorization

In qualitative document analysis, coding and categorization serve as essential techniques to guide researchers in organizing their findings. Coding involves identifying key themes and concepts within the text, which can be achieved through both open and axial coding methods. Once significant codes are established, categorization allows researchers to group these codes into broader themes. This structured approach not only simplifies the data analysis process but also highlights patterns that might otherwise go unnoticed.

For effective coding and categorization, consider the following steps:

  • Familiarization with the Document: Thoroughly read the material to grasp its context and nuances.
  • Initial Coding: Begin tagging relevant sections of text that correspond to potential themes or topics.
  • Reviewing Codes: Examine initial codes to assess their relevance and ensure consistency.
  • Developing Categories: Group similar codes into overarching categories that capture the essence of the data.
  • Refining Themes: Adjust categories and codes as new insights emerge, ensuring they accurately reflect the material.

By following these steps, researchers can create a meaningful framework for analyzing qualitative findings, leading to richer insights and informed conclusions.
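The bookkeeping behind initial coding and categorization can be sketched as follows. The codebook, keywords, and categories are invented for illustration; real qualitative coding is interpretive, and keyword matching is only a rough mechanical stand-in.

```python
# Hypothetical codebook: code -> keywords that suggest it (an assumption,
# not a real instrument). In practice, codes are assigned by a human analyst.
CODEBOOK = {
    "pricing_concern": ["price", "cost", "expensive"],
    "usability_issue": ["confusing", "hard to use", "unclear"],
    "positive_support": ["helpful", "responsive"],
}

# Broader categories grouping related codes (also illustrative).
CATEGORIES = {
    "pain_points": ["pricing_concern", "usability_issue"],
    "strengths": ["positive_support"],
}

def code_segment(segment: str) -> list[str]:
    """Return all codes whose keywords appear in the segment."""
    text = segment.lower()
    return [code for code, kws in CODEBOOK.items()
            if any(kw in text for kw in kws)]

def categorize(codes: list[str]) -> list[str]:
    """Map codes to the broader categories they belong to."""
    return sorted({cat for cat, members in CATEGORIES.items()
                   if any(c in members for c in codes)})

for seg in ["The dashboard is confusing and the price is too high.",
            "Support was helpful and responsive."]:
    codes = code_segment(seg)
    print(codes, "->", categorize(codes))
```

The two-level structure (codes, then categories) mirrors the open and axial coding distinction mentioned above: codes stay close to the text, while categories group them into broader themes.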

Techniques for Effective Qualitative Document Analysis

Effective qualitative document analysis requires careful planning and systematic approaches to interpret complex information. To begin with, researchers should clearly define their research questions and objectives, guiding their analysis toward relevant insights. This initial focus allows for a more structured examination of documents, whether they be interviews, reports, or other textual materials.

One essential technique is to use coding frameworks that categorize data into themes. This process helps in identifying patterns and discrepancies across various documents. Additionally, conducting a context analysis ensures that the social, cultural, and temporal background influencing the documents is considered. It adds depth to the interpretation, revealing layers of meaning that might otherwise go unnoticed. By integrating these techniques, researchers can enhance the reliability and validity of qualitative document analysis, ensuring that results are both meaningful and actionable.

Content Analysis

Content analysis serves as a cornerstone in qualitative document analysis, facilitating the examination of textual data. This method breaks down content into manageable themes and patterns, allowing researchers to uncover deeper meanings and insights. By systematically categorizing information, researchers can identify key narratives and trends that emerge from qualitative data sources, such as interviews or open-ended survey responses.

To effectively conduct content analysis, consider the following steps:

  • Define the Research Questions: Establish clear objectives to guide the analysis process. These questions should reflect the core themes you aim to explore.
  • Select Content Sources: Decide on the documents or materials you wish to analyze, ensuring they align with your research goals.
  • Develop a Coding Scheme: Create a framework for categorizing themes, concepts, and patterns you observe in the data.
  • Code the Content: Systematically apply your coding scheme, highlighting relevant passages and tagging them according to your predefined categories.
  • Interpret the Findings: Analyze the coded data to draw conclusions that address your research questions, leading to actionable insights.

Employing content analysis allows qualitative insights to shape strategic decisions effectively, shedding light on complex narratives.
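Interpreting coded data often begins with simple aggregation: how often does each code occur, and in how many documents? A minimal sketch, assuming the segments have already been coded (the document IDs and codes below are invented):

```python
from collections import Counter

# Hypothetical coded documents: document ID -> codes assigned during coding.
coded_docs = {
    "interview_01": ["pricing_concern", "usability_issue", "pricing_concern"],
    "interview_02": ["usability_issue"],
    "survey_open_ended": ["pricing_concern", "positive_support"],
}

# Overall frequency of each code across all documents.
overall = Counter(code for codes in coded_docs.values() for code in codes)

# Prevalence: in how many distinct documents does each code appear?
prevalence = Counter(code for codes in coded_docs.values() for code in set(codes))

print(overall.most_common())
print(prevalence.most_common())
```

Distinguishing raw frequency from document prevalence matters: a code repeated many times by one participant is not the same evidence as a code appearing across many sources, which is the cross-document aggregation the section above describes.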

Thematic Analysis

Thematic analysis serves as a crucial method in qualitative document analysis, allowing researchers to identify recurring themes across various texts. By examining patterns within the data, analysts can extract significant insights that reflect the underlying meanings present in the documents studied. This systematic approach helps to organize the information into coherent categories, enhancing the clarity and focus of the analysis.

To conduct effective thematic analysis, consider the following key steps:

  • Familiarization with Data: Begin by thoroughly reading the documents to immerse yourself in the content.
  • Initial Coding: Identify notable features of the data and develop preliminary codes that capture essential elements.
  • Theme Development: Group the initial codes into broader themes that represent the key ideas found in the documents.
  • Reviewing Themes: Critically evaluate the identified themes, ensuring they accurately reflect the data.
  • Defining and Naming Themes: Clearly articulate the essence of each theme to encapsulate the core insights derived from qualitative document analysis.

By following these steps, researchers can create a structured framework that guides their interpretation of qualitative data, ultimately leading to meaningful conclusions.
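The theme-development and review steps can be sketched as code-to-theme grouping followed by a support check. The codes, theme names, extracts, and the support threshold are all illustrative assumptions; reviewing themes is ultimately a judgment call, not a count.

```python
# Hypothetical mapping from initial codes to candidate themes.
THEME_OF_CODE = {
    "pricing_concern": "Perceived value",
    "discount_request": "Perceived value",
    "usability_issue": "Friction in daily use",
}

# (code, extract) pairs produced during initial coding (invented examples).
coded_extracts = [
    ("pricing_concern", "It costs more than we budgeted."),
    ("usability_issue", "I never found the export button."),
    ("discount_request", "Is there an education discount?"),
]

def develop_themes(pairs):
    """Group coded extracts under their candidate themes."""
    themes = {}
    for code, extract in pairs:
        theme = THEME_OF_CODE.get(code, "Uncategorized")
        themes.setdefault(theme, []).append(extract)
    return themes

def review_themes(themes, min_support=2):
    """Flag candidate themes with fewer supporting extracts than min_support."""
    return [t for t, extracts in themes.items() if len(extracts) < min_support]

themes = develop_themes(coded_extracts)
print(review_themes(themes))  # themes that may need merging or more evidence
```

A thinly supported theme is a prompt to re-read the data, not an automatic deletion: it may need merging into a broader theme, renaming, or more targeted coding.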

Conclusion: Implementing Qualitative Document Analysis for Robust Findings

Implementing qualitative document analysis can greatly enhance research findings. This method allows for the structured examination of various texts, providing a deeper understanding of participant perspectives. By systematically organizing and interpreting data, researchers can uncover patterns that support robust conclusions.

Moreover, qualitative document analysis encourages collaboration among team members, streamlining the synthesis of insights from scattered data sources. This approach not only saves time but also enriches the analysis, making it a vital tool for researchers seeking reliable and actionable outcomes. Embracing this method can ultimately lead to more informed decision-making and successful project outcomes.

  • Open access
  • Published: 22 August 2024

To share or not to share, that is the question: a qualitative study of Chinese astronomers’ perceptions, practices, and hesitations about open data sharing

  • Jinya Liu   ORCID: orcid.org/0000-0002-9804-8752 1 ,
  • Kunhua Zhao 2 , 3 ,
  • Liping Gu 2 , 3 &
  • Huichuan Xia   ORCID: orcid.org/0000-0002-0838-7452 1  

Humanities and Social Sciences Communications volume 11, Article number: 1063 (2024)


  • Science, technology and society
  • Social policy

Many astronomers in Western countries may have taken open data sharing (ODS) for granted to enhance astronomical discoveries and productivity. However, how strong such an assumption holds among Chinese astronomers has not been investigated or deliberated extensively. This may hinder international ODS with Chinese astronomers and lead to a misunderstanding of Chinese astronomers’ perceptions and practices of ODS. To fill this gap, we conducted a qualitative study comprising 14 semi-structured interviews and 136 open-ended survey responses with Chinese astronomers to understand their choices and concerns regarding ODS. We found that many Chinese astronomers conducted ODS to promote research outputs and respected it as a tradition. Some Chinese astronomers have advocated for data rights protection and data infrastructure’s further improvement in usability and availability to guarantee their ODS practices. Still, some Chinese astronomers agonized about ODS regarding the validity of oral commitment with international research groups and the choices between international traditions and domestic customs in ODS. We discovered two dimensions in Chinese astronomers’ action strategies and choices of ODS and discussed their descriptions and consequences. We also proposed the implications of our research for enhancing international ODS in future work.


Introduction

Open data sharing (ODS) emphasizes scientific data’s availability to the public beyond its usability and distribution within academic communities (UNESCO, 2021 ). ODS has become increasingly significant since the Big Data era has engendered a paradigm shift towards data-intensive science (Tolle et al., 2011 ), and ODS has promoted data-intensive science to incorporate all stakeholders, such as researchers, policymakers, and system designers to address data processing and utilization issues collectively (Kurata et al., 2017 ; Zuiderwijk et al., 2024 ). Meanwhile, ODS has improved scientific discovery and productivity since different governments and funding agencies have endorsed ODS and published policies to facilitate it (Lamprecht et al., 2020 ). For example, the UK Research and Innovation (UKRI) issued the “Concordat on open research data” in 2016 to ensure that research data gathered and generated by the UK research community must be openly available to the public (UK Research and Innovation, 2016 ). The Chinese government published a “Scientific Data Management Methods” policy in 2018, requiring government-funded research to share its data with the public (General Office of the State Council of China, 2018 ). Besides such government initiatives, the scientific community has also proposed guiding principles for ODS, such as the “FAIR principles” to facilitate data sharing in respect of Findability, Accessibility, Interoperability, and Reuse (Wilkinson et al., 2016 ).

Astronomy is data-intensive and has long been regarded as a prime model of ODS for other scientific fields. For example, the famous Large Synoptic Survey Telescope (LSST) project has committed to real-time ODS after its start-up in 2025 and has released early survey data since June 2021 (Guy et al., 2023 ). Scholars have conducted a few studies to identify good practices of ODS in astronomy and found that ODS has a long tradition in astronomy supported by its well-established knowledge infrastructure and data policies (Zuiderwijk and Spiers, 2019 ; Borgman et al., 2021 ). Still, scholars found that some astronomers were hesitant to conduct ODS due to the high reward expectations (e.g., acknowledgment, institutional yearly evaluation, extra citation) and extra efforts (e.g., additional data description) required in ODS practices (Zuiderwijk and Spiers, 2019 ; Kim and Zhang, 2015 ); some astronomers also raised concerns about the usability and availability of data infrastructure to support ODS practices (Pepe et al., 2014 ).

Despite the ODS tradition in astronomy, researchers’ motivations and barriers to ODS may differ based on their cultural contexts. Most empirical studies of ODS have been conducted in Western and developed countries (Genova, 2018 ). Whether these findings hold in non-Western cultures deserves further exploration. Chinese culture and customs differ from Western ones, which may impose distinctive influences on Chinese people’s perspectives and behaviors. For example, Confucianism often renders Chinese individual researchers stick to collectivism or the societal roles assigned to them (Jin and Peng, 2021 ), which is less common in Western culture or academia to our knowledge. Also, scientific research paradigms have originated from and situated in Western culture for a long time. They call for critical examinations and alternative perspectives at the individual and societal or cultural levels, and ODS has been regarded as an essential lens to deliberate it (Serwadda et al., 2018 ; Bezuidenhout and Chakauya, 2018 ; Zuiderwijk et al., 2024 ).

Besides our concerns about cultural and research paradigm differences, Chinese astronomers’ distinctive characteristics have also motivated us to conduct this study. First, based on our prior experience with some Chinese astronomers, we have observed that Chinese astronomers follow enclosed or independent data-sharing norms that are uncommon among researchers in other disciplines. Their research seems to be more international than domestic. Since a slogan from the Chinese government has influenced many research disciplines (including ours) in China, advocating that Chinese scholars “Write your paper on the motherland” (Wang et al., 2024 ), we wondered how such propaganda would impact Chinese astronomers’ attitudes and behaviors. Second, a recent study has revealed that some Chinese astronomers struggled with ODS because they respected it as a tradition on the one hand and desired to gain career advantages (e.g., more data citations) on the other (Liu, 2021 ). This finding contrasts with another recent study’s conclusion that Chinese early career researchers (ECRs) (in non-astronomy disciplines) would only welcome ODS if the evaluation system rewarded them (Xu et al., 2020 ). Hence, we wanted to further investigate Chinese astronomers’ motivations and barriers regarding ODS.

Finally, though ODS has been well-acknowledged internationally, it has not been studied or implemented extensively in most research disciplines in China, with astronomy as a rare exception. Hence, we posited that research about ODS in astronomy might shed light on other research disciplines’ popularization of ODS in China. In addition, previous studies on ODS in China have primarily focused on the Chinese government’s open data policies, infrastructure conditions, and management practices (Zhang, et al., 2022 ; Huang et al., 2021 ). To the best of our knowledge, little attention has been paid to Chinese researchers’ perceptions and practices. Thus, we wanted to conduct an exploratory investigation with Chinese astronomers to fill this gap and foster international ODS and research collaboration in Chinese astronomy and other research disciplines more broadly.

With these motivations in mind, we proposed the following research questions.

How do Chinese astronomers perceive and practice open data sharing?

Why do some Chinese astronomers hesitate over the issue of open data sharing?

To address those research questions, we conducted a qualitative study comprising 14 semi-structured interviews and 136 open-ended survey responses with Chinese astronomers to understand their practices and concerns regarding ODS. We found that many Chinese astronomers conducted ODS to promote research outputs and respected it as a tradition. Some Chinese astronomers have advocated for data rights protection and data infrastructure’s further improvement in usability and availability to guarantee their ODS practices. Still, some Chinese astronomers agonized about ODS regarding the validity of oral commitment with international research groups and the choices between international traditions and domestic customs in ODS. We discovered two dimensions in Chinese astronomers’ action strategies and choices of ODS and discussed these findings and implications. This study makes the following contributions. First, it provides a non-Western viewpoint for global ODS in astronomy and recommendations for advancing global and Chinese ODS policies and practices. Second, it reveals Chinese astronomers’ concerns, motivations, and barriers to conducting ODS. This may inspire domestic government, international research policymakers, and ODS platforms and practitioners to empathize with and support Chinese astronomers. Finally, this study may shed light on implementing ODS in other research disciplines in China, which has not been popular.

Literature review

The background of ODS in science

The open data movement in scientific communities was initiated at the beginning of the 21st century (e.g., Max Planck Society, 2003) (Tu and Shen, 2023 ). ODS, also known as open research data, advocates that the openness of scientific data to the public is imperative to science (UNESCO, 2021 ; Fox et al., 2021 ). Prior research has inquired about researchers’ intrinsic and extrinsic motivations for ODS. Intrinsic motivations include personal background and ethical perspectives. For example, a researcher’s personal background (research experience, gender, position, age, etc.) has been found to affect their ODS preferences, and significant differences have been observed in research experience (Zuiderwijk and Spiers, 2019 ; Digital Science et al., 2024 ). Also, a researcher’s ethical stance influences their ODS practices. Some researchers conduct ODS because they want to benefit the research community and promote reciprocity among data stakeholders, such as data producers, funders, and data users (Lee et al., 2014 ; Ju and Kim, 2019 ). Extrinsic motivations for ODS include incentive policies, data infrastructure, and external pressures from funders, journals, or community rules. Incentive policies, such as the promise of data citation and rewarding credit from their institutions, effectively enhance ODS (Dorch et al., 2015 ; Popkin, 2019 ). Also, a well-established infrastructure could facilitate ODS by reducing its cost (Kim and Zhang, 2015 ). Moreover, regulations from researchers’ stakeholders (e.g., journals and funders) also exert pressure on their ODS practices. One example is developing data policies. Kim and Stanton proposed that journal regulative pressure has significantly positive relationships with ODS behaviors (Kim and Stanton, 2016 ).

Despite the motivations, researchers in ODS still have valid justifications for not conducting such practices (Zuiderwijk et al., 2024 ; Boeckhout et al., 2018 ). Sayogo and Pardo categorized those barriers into (1) technological barriers, (2) social, organizational, and economic barriers, and (3) legal and policy barriers (Sayogo and Pardo, 2013 ). More specifically, at the individual level, Houtkoop et al. found that ODS was uncommon in psychology due to psychologists’ insufficient training and extra workload (Houtkoop et al., 2018 ). Meanwhile, Banks et al. indicated that researchers in organizational research were afraid of exposing the quality of their data (Banks et al., 2022 ). In addition, researchers’ ethical concerns also influence their ODS practices, primarily privacy and fairness issues. Walsh et al. identified the privacy risks related to identity, attribute, and membership disclosure as the main ethical concerns about ODS (Walsh et al., 2018 ). Anane et al. worried that ODS could compromise fairness because some new or busy researchers might lose their data rights during the critical post‐first‐publication period (Anane-Sarpong et al., 2020 ). At the societal level, inadequate data policies have failed to guarantee researchers’ data rights, and property rights are unclear. Enwald et al. proposed that researchers in physics and technology, arts and humanities, social sciences, and health sciences were concerned about legal issues (e.g., confidentiality and intellectual property rights), misuse or misinterpretation of data, and loss of authorship (Enwald et al., 2022 ). Anane et al. found that data ownership was a crucial barrier affecting public health researchers’ willingness to share data openly (Anane-Sarpong et al., 2018 ).

The factors that influence astronomical ODS practices

Astronomy has been a prime example of ODS practices in scientific communities (Koribalski, 2019 ). For example, in gamma-ray astronomy, astronomers have explored how to render high-level data formats and software openly accessible and sharable for the astronomical community (Deil et al., 2017 ). In space-based astronomy, ODS has been an established norm in its research community for a long history (Harris and Baumann, 2015 ). In the interdisciplinary field of astrophysics, evidence has shown that papers with links to data, which also represent an approach of ODS, have a citation advantage over papers that did not link the data (Dorch et al., 2015 ). Additionally, many data archives in astronomy have been openly accessible to the public to increase their reusable value and potential for rediscovery (Rebull, 2022 ).

Prior studies have examined the socio-technical factors fostering ODS. Data policies support ODS implementations, and existing data infrastructure plays an essential role in ODS practices in astronomy (Pasquetto et al., 2016 ; Genova, 2018 ). For example, Reichman et al. attributed astronomy’s long tradition of ODS to its extensive and collaborative infrastructure (e.g., software and data centers) (Reichman et al., 2011 ). In practice, some famous astronomy organizations have built solid data infrastructures to support ODS, such as the NASA Astrophysics Data System (ADS) and the International Virtual Observatory Alliance (IVOA) (Kurtz et al., 2004 ; Genova, 2018 ). Astronomy’s integrated knowledge infrastructure spanning decades and countries, encompassing observational data, catalogs, bibliographic records, archives, thesauri, and software, prompts global ODS among astronomers (Borgman et al., 2021 ). Many astronomers have a strong sense of duty to their research communities and the public. Thus, they would accept requests for data to assist colleagues and facilitate new scientific discoveries, which enhances ODS (Stahlman, 2022 ). Besides, astronomers’ perceived reciprocity influences their ODS practices. They aspire to improve their research outputs’ visibility and contribute to new, innovative, or high-quality research via ODS (Zuiderwijk and Spiers, 2019 ).

Still, some factors may hinder astronomers’ ODS practices. At the individual level, ODS may bring them extra learning load and academic reputation risks. For example, if astronomers perceive challenges in ODS or feel they need to acquire further knowledge, they may be less inclined to engage in such practices (Gray et al., 2011 ). Additionally, astronomers expressed concerns about the possibility of others discovering mistakes in the data (Zuiderwijk and Spiers, 2019 ). Pepe et al. also showed that the difficulty of sharing large data sets and the overreliance on non-robust, non-reproducible mechanisms for sharing data (e.g., via email) were the main hindrances to astronomers’ ODS practices (Pepe et al., 2014 ). At the societal level, an exponential increase in astronomical data volume has led to a continuous enrichment of utilization scenarios. ODS may involve data privacy or national security issues, especially when such data is integrated with other datasets. Thus, Harris and Baumann regarded the primary concern in global ODS as safeguarding national security and establishing appropriate licensing mechanisms (Harris and Baumann, 2015 ).

The development of ODS in China

The Chinese government has recognized ODS as a national strategy in both scientific and public service domains. They issued the “Scientific Data Management Methods” in 2018 and “Opinions on Building a More Perfect System and Mechanism for the Market-oriented Allocation of Factors” in 2022. These policies require that data from government-funded research projects must be shared with the public according to the principle of “openness as the norm and non-openness as the exception” (General Office of the State Council of China, 2018 ; General Office of the State Council of China, 2024 ). The Chinese government applied the “hierarchical management, safety, and control” concept as ODS arrangements to realize a dynamic ordered open research data at the social level (Li et al., 2022 ).

At the institutional level, the Chinese Academy of Sciences (CAS) has been actively promoting infrastructure construction and institutional repositories to support ODS. For example, CAS has affiliated eleven out of twenty national-level data centers that are foundational for ODS in China since 2019. Meanwhile, many Chinese journals have published data policies requesting that researchers append their papers with open-access data. The National Natural Science Foundation of China (NSFC) has funded over 6000 data-intensive research programs, encouraging ODS among them in compliance with the NSFC’s mandate (Zhang et al., 2021 ). Regarding Chinese researchers’ attitudes and practices toward ODS, Zhang et al. have observed that Chinese data policies have shifted from focusing on data management to encompassing both data governance and ODS. This shift has shrunk the gap between Chinese researchers’ positive attitudes toward ODS and their less active ODS behaviors (Zhang et al., 2021 ). Driven by journal policies, Chinese researchers’ ODS behaviors have been encouraged. For example, Li et al. found that more than 90% of the published dataset of ScienceDB is also paper-related data and proposed that the pressure from journals has been the main driving force for researchers to conduct ODS (Li et al., 2022 ). ScienceDB (Science Data Bank) is a general-purpose repository in China that publishes scientific research data from various disciplines (Science Data Bank, 2024 ).

Methodology

We conducted a qualitative study comprising 14 interviews and 136 open-ended survey responses with Chinese astronomers from 12 institutions. Our interview questions were semi-structured. Some were framed from the existing literature, and others were generated during the interviews based on the interviewees’ responses. Our open-ended questions were extended from a recent survey on data management services in Chinese astronomy (Liu, 2021 ). Table 1 depicts the formation of our interview questions, which served as the major source of our research data. We acknowledge that the interviewees’ responses could be influenced by questions and context during the interview and tried to avoid such biases with the following strategies. First, although Chinese astronomers were hard to contact and recruit, we did our best to diversify our interview sample. Our interviewed Chinese astronomers included researchers and practitioners in observatories, scholars and Ph.D. students in astronomy at top universities in China, and researchers in astronomical research centers. Second, we conducted our interviews in different contexts, such as on campus, in observatories, at research centers, and over the phone. Thus, we tried to de-contextualize our interview questions to reduce potential biases. Finally, our qualitative data and analysis came not only from interviews but also from our previous survey. We used the interview and survey data to corroborate and complement each other.

Data collection and analysis

Our interviews were conducted in person or via WeChat video. They lasted 30–45 min and were recorded and fully transcribed. Our recruitment was challenging and time-consuming due to COVID-19 and the limited number of Chinese astronomers available for the interview. We have obtained their informed consent and have followed strict institutional rules to protect their privacy and data confidentiality. In addition, we conducted a survey using the online platform ‘Survey Star’ and obtained responses from 136 Chinese astronomers. For the scope of this paper, we focus on reporting qualitative data.

We kept our first round of data analysis, including notetaking and transcription, simultaneous with the interview progress. Meanwhile, we fully transcribed the interview recordings in Chinese and translated them verbatim into English. For the data analysis, we employed the thematic analysis technique to extract and analyze themes from the interview transcripts (interviewees are numbered with the letter P) and open-ended survey responses (survey responses are numbered with the letter Q). Thematic analysis is well-suited for analyzing interview transcripts and open-ended survey responses (Braun and Clarke, 2006 ). We referenced Braun and Clarke’s recommended phases and stages of the analysis process (Braun and Clarke, 2006 ). First, we read through the transcriptions and highlighted meaning units. Simultaneously, we conducted coding and identified participants’ accounts, which were recorded in the form of notes. Second, we categorized the codes and subsequently attributed them to themes that corresponded to ethical concerns. Third, we verified the themes by having them reviewed by two additional authors to ensure high accuracy in our analysis. Finally, we linked our themes with existing literature to provide a more comprehensive narrative of our findings. Table 2 lists the demographic information of the interviewees.

We referenced Stamm et al.’s work to categorize the career stages of the Chinese astronomers we interviewed (Stamm et al., 2017 ). As shown in Table 2, most interviewees fall into the senior-career stage because they have rich research experience and resources in ODS.

Three types of Chinese astronomers’ behaviors at different ODS stages

We categorize the Chinese astronomers’ ODS behaviors into three types at different stages of ODS. First, Chinese astronomers mentioned that one type of ODS behavior is making the data publicly available on a popular platform (e.g., GitHub, NASA ADS, arXiv) or in data centers after the proprietary data period has expired. The proprietary data period, or exclusive data period, refers to the time between researchers first accessing the data and publishing their findings. In astronomy, this period typically ranges from one to two years, which aims to cover a normal and complete astronomical research cycle. P13 explained:

The data is not in our hands. After we use the telescope to complete the observations, the data will be stored in the telescope’s database. During the proprietary period (12 months), only you can view it. After the proprietary data period has passed, anyone can view it. (P13)

She meant that the raw data produced by astronomers were stored by the builders, who were also responsible for making those data visible to the public when the proprietary data period had expired. Zuiderwijk and Spiers’s survey has also revealed that astronomers seldom store raw data due to their inability to build a data center. Consequently, astronomers often do not influence data-sharing decisions directly but only propose data collection ideas (Zuiderwijk and Spiers, 2019 ).

Secondly, Chinese astronomers regarded sharing data with research teams or individuals upon request during the proprietary data period as another feasible type of ODS behavior. For example, P5 said:

I published one paper using research data whose proprietary period hasn’t expired. If someone emailed me to inquire whether they could obtain the data for “Figure 2” [here P5 referred to an exemplary figure in her previous publication]. I usually send the data to them. It is common [in astronomy] to communicate with the author via email to consult their willingness toward ODS. (P5)

P5 assumed that sharing data privately was allowed and common among astronomers while the proprietary data period had not yet expired. To some extent, P5 also transformed this private approach into a visible one by making her processed data public on open platforms.

P11 added the reason why astronomers used this private approach:

The data is not immediately made available. There is a proprietary data period of one or two years. Priority is given to the direct contributors to use the data and produce the first batch of scientific results. After the proprietary data period has expired, others were allowed to discover the value of the data jointly…Other astronomers may also be interested in the data during the proprietary data period. After all, during this period, others were unable to conduct observations and produce data. (P11)

P11 explained that during the period for which he had applied for observation, others could not produce the same data using the same telescope. However, they might still be interested in such data. Thus, he might share his research data privately with other astronomers if he deemed it necessary for their research.

Finally, besides the open sharing of research data, two astronomers introduced a third type of ODS behavior: the open sharing of research software, tools, and code. P12 explained:

When the project was completed, project funders required all the research data to be submitted to a certain location for public use. We also needed to submit the software, tools, and related codes developed by astronomers. (P12)

According to P12, ODS is not merely about the data per se but also about its associated processing tools and accompanying materials.

Another astronomer, P10, mentioned that astronomers may also share their software openly to enhance their research influence. P10 said:

Astronomers may openly share their programs in theoretical research and data simulation, particularly simulation programs or source files. They create open-source materials related to their articles and then make their software or related models available online. They also require acknowledgment if someone uses them later. Nowadays, many astronomers use this method for ODS. (P10)

Individual factors concerning Chinese astronomers’ motivations for ODS

ODS is a tradition and duty

Twelve Chinese astronomers mentioned that ODS was a traditional norm in astronomy and that they had been following it since they entered the field. P11 said:

We have known a traditional norm since we started working in this field. That is, every time you apply for telescope observations and obtain data, this data must be made public one year later. Even if you have not completed your research or published a paper by then, the data will still be made public. For us astronomers, ODS is a natural practice and meaningful endeavor. We believe that astronomy is a role model of ODS for other research fields to follow. (P11)

Four Chinese astronomers also described how the tradition of ODS shaped their motivations for it. For example, P10 said:

In the past, I have obtained data of my interest from other astronomers by emailing them. Therefore, if someone approaches me for data, I would also be willing to provide it. (P10)

Another two astronomers elaborated that they acknowledged the ODS tradition because it benefits both astronomers and telescopes. P1 said:

According to the international convention, to promote the influence of the telescope and enrich its research outputs, the data is released to the public based on different proprietary data periods. Each data release includes not only raw data but also data products generated by technical personnel processing the raw data. (P1)
I do not process raw data; instead, I typically utilize data products generated by telescopes. These data products, which are openly available in the public domain, assist individuals like me who lack technical expertise in processing raw data to conduct scientific research. Thus, we must also acknowledge the telescope’s contribution when publishing our findings. This is the norm in astronomy. (P13)

P1’s and P13’s opinions were common: telescopes have offered astronomers different kinds of data, enhancing their potential research outputs. In return, when researchers utilize the data generated by telescopes, they also contribute to the telescopes’ influence and reputation.

It is worth noting that this tradition is also embedded in telescopes’ data policies, which influences how Chinese telescopes set their proprietary data periods. For example, the Chinese astronomy projects LAMOST and FAST have released data policies that specify proprietary data periods following international conventions. As indicated by P6, the international convention typically sets the proprietary data period at six months to one and a half years.

Six Chinese astronomers believed that ODS is an established tradition in astronomy that ought to be respected and enacted as a duty, regardless of external factors or consequences. For example, P8 mentioned that:

Astronomy is a very pure discipline without economic benefit, and we have the tradition of ODS. Therefore, they state their data source or post a link to their data directly. My willingness to conduct ODS is also influenced by this atmosphere. Besides that, I regard ODS as a basic requirement because data should be tested [via ODS]. (P8)

Another two astronomers considered ODS part of the nature of science in astronomy, which motivated them to pursue the goal of openness persistently. For example, P11 said:

Astronomy exemplifies a characteristic of being borderless, where there is a strong inclination towards open academic exchange and sharing of resources and tools. Additionally, astronomy is pure due to its non-profit nature. Thus, astronomers have always maintained simplicity, leading to a culture of openness. (P11)

ODS brings beneficial consequences

Four Chinese astronomers also hoped to improve their research influence and citations through ODS, especially for the research to which they had devoted the most effort. For example, P10 said:

Astronomers not only release their data but also the software or code to process it. This is because if other astronomers use my software and code to process the data, they would also cite the papers with my shared software and code. This will increase the influence of my papers and software or code. (P10)

A similar perspective came from survey responses Q19, Q22, Q34, and Q47, whose respondents also perceived that ODS could improve the research impact of their papers and data. For example, Q22 stated:

I have encountered situations where other researchers requested access to my data. One of the reasons I am willing to share data [with them] is to increase my paper citations. (Q22)

Additionally, some Chinese astronomers practiced ODS to enable replication and validation of their research. For example, Q26 said:

The primary reason I endorse ODS is to replicate my data analysis by peers and enable independent verification of my research outputs. (Q26)

ODS engenders reciprocity and collaboration opportunities

Fourteen Chinese astronomers mentioned that ODS could increase their research outputs and offer access to other astronomers’ data, thereby promoting the prosperity of research outputs across the entire astronomy community. More importantly, they have established a new type of collaborative opportunity through ODS when data are abundant but the resources and capacities to utilize them are limited. For example, P12 expressed that ODS had a positive impact on the research outputs of the scientific community:

An astronomer I respect once stated that initially, they wanted to conceal all research data, but this proved impossible due to the vast amount of data produced by the telescope. As a result, they released all the data from their large-scale projects. The outcome of this ODS behavior rendered explosive growth in research outputs. (P12)

Another two astronomers noted that ODS was essential for bringing more astronomers into collaborative efforts that increase research outputs in the scientific community. P6 said:

The data generated by telescopes used to observe transient events have not been subject to the proprietary data period. Once I observe such events, I will encourage other researchers to join in and rapidly identify these unexpected phenomena, facilitating subsequent observations using various telescopes to maximize scientific output as quickly as possible. (P6)

P6 elaborated that astronomers rely on collaborative efforts for special observations, such as discovering new stars, which maximizes the utilization of global telescope resources. This motivation strengthens collaborations among astronomers from different research teams. P14 added:

New events [e.g., new star discoveries] in astronomy often occur in transience. If I do not share information about these events, other astronomers will not know about them. With limited resources, I may be unable to observe them through other telescopes. However, sharing preliminary data about these events can maximize global resources. This allows for a collaborative effort to observe the event using resources from around the world. (P14)

P14 stated that ODS has the potential to attract more astronomers to contribute to research through subsequent, collective efforts built on the initial observation. P14’s opinion echoed Reichman et al.’s findings, which revealed that extensive and collaborative infrastructure was the primary driver behind the adoption of ODS (Reichman et al., 2011).

Prior research also indicated that limited resources and capacities would increase collaboration among astronomers in astrophysics research (Zuiderwijk and Spiers, 2019). A similar opinion also arose from our survey responses Q18, Q30, and Q52. For example, Q30 said:

I am good at processing data instead of writing papers. ODS can allow me to collaborate with someone who is good at writing papers to co-produce the research output. (Q30)

Societal factors concerning Chinese astronomers’ barriers to ODS

The limitations of verbal agreements in international collaboration

Although most Chinese astronomers endorsed ODS, three were concerned about other astronomers who might have violated their initial commitments to using data for scientific purposes. For example, P7 commented:

I used to have experiences with foreign collaborators who violated their initial commitments, resulting in unpleasant consequences. Specifically, they promised in emails that they would process the data using a different approach from ours. However, they ended up using the same method and perspective as ours. There was not much to be said about it, as it was not illegal or against data policies’ regulations. It is a matter of trust and promises, and all I can do is not share data with them in the future. (P7)

P10 added that commitments made in email correspondence often relied on astronomers’ self-discipline to be honored:

If the proprietary data period has not expired and you share the data with others, you have no control over what they do with it except to trust their promise in the email. This situation relies on the self-discipline of astronomers. (P10)

Three astronomers were also concerned about the validity of oral agreements about ODS. They referred to them as “gentlemen’s agreements.” For example, P14 explained:

In principle, data can be shared with others without a signed contract between us but based on the so-called gentleman’s agreement. Thus, some Chinese astronomers may not be willing to make their research data public because they must assume that everyone is a gentleman [to keep their promise], which may not always be the case as there are also scientists who are not accountable due to a highly competitive environment [in science]. (P14)

P14 regarded “gentlemen’s agreements” as effective only for those who acted in good faith in fulfilling their commitments; such agreements impose no “ethical” constraints on collaborators. Hence, he noted that some astronomers were unwilling to share data openly within the proprietary data period because they did not trust other astronomers to honor their “gentlemen’s agreements.” Besides that, P6 explained why some astronomers have broken their commitments. He said:

In astronomy, some data policies have not been effectively constrained because it is impossible to encompass all subsequent data usage and collaboration situations at first…Also, there are many astronomy alliances. If you are not part of our alliance, you are not bound to commitments, which may lead to disputable issues. (P6)

Data is too dear to share immediately

Ten Chinese astronomers considered that the data they obtained possessed unique scientific value that could contribute to their publication priority and productivity. Given that publication priority, authorship order, and quantity remain the most important and prevalent factors in evaluating a scholar in China, it is understandable that these astronomers expressed concerns about the risk of losing the ‘right of first publication’ if they openly shared their processed data too soon. For example, P9 confessed:

I am unwilling to conduct ODS primarily because my research findings have not been published yet. I am concerned that ODS might lead to someone else publishing related findings before I do. (P9)

Similar concerns were also expressed in our survey responses Q42, Q46, and Q53. Q53 provided a more detailed explanation:

The individuals or organizations that produce data should have the right to use it first and only make it publicly available after a round of exploration and the publication of relevant research results. If the data is shared openly and completely from the outset, the number of people or organizations willing to invest time and money in obtaining data in the future will decrease since they can use data obtained by others instead of acquiring it by themselves. (Q53)

Another astronomer, P12, held a negative attitude toward ODS at the early stage of research because he was concerned that his data processing capacity was slower than that of the other research groups with whom the data might be shared:

I put a lot of effort into processing data, and if my research findings have not been published but I release my data in three months [some international rules recommend astronomers to open their data as soon as possible], then someone with a more sophisticated data processing software may be able to write and analyze their research paper within a week because they already have the complete workflow prepared. This may upset the sharers who intended to publish a similar finding, as their work has been done so quickly [sooner than the sharer]. (P12)

A similar opinion could be seen in our survey response Q46:

The scientific community should ensure that those who have worked hard to produce the data also have the priority to publish their research findings before the data has been made publicly available. (Q46)

The disparities between the Chinese and foreign research infrastructures

Five Chinese astronomers expressed their concerns about the disparities between the Chinese and foreign research infrastructures. For example, P9 expressed his concern that adhering to international rules in astronomy might contradict the domestic rules in China due to national security and data confidentiality considerations. He said:

International organizations hope our country will lead in ODS, which may sometimes harm our interests. This is especially the case for the data produced through Chinese telescopes, which are published in international academic journals upon the international journal publishers’ requests because this data may involve confidential engineering tasks in Chinese telescopes that are subject to national security purposes. (P9)

Another astronomer, P4, also mentioned that astronomical data may include equipment parameters that could trigger national security concerns. Hence, her data underwent desensitization before she conducted ODS:

Astronomical raw data are generated by the equipment directly and are categorized as first-level data [machine-generated data] in the data policies. More importantly, raw astronomical data should be processed before being opened to the public because the raw data may raise [national] security concerns and leakage equipment parameters. (P4)

P4’s concerns about national security are also reflected in China’s national data policies. For example, the Chinese government mandates the “hierarchical management, safety, and control” policy to supervise ODS, balancing its order and dynamism (Li et al., 2022).

P8 added that Chinese astronomers are sometimes limited by national rules and domestic data infrastructure usability and accessibility. P8 said:

In some Chinese astronomical projects, only certain frequency bands are internationally permitted, and the first to occupy them claims ownership. Moreover, our data storage and ODS are limited by technical difficulties. We don’t have ODS platforms like NASA ADS. Even if there are, these platforms are currently not as recognized internationally as those abroad. Therefore, when astronomers publish papers or data, they default to submitting them to international platforms. (P8)

Societal factors concerning Chinese astronomers’ hesitations for ODS

The pressure from domestic data policies

Five Chinese astronomers mentioned that ODS is subject to the requirements of domestic data policies; thus, they feel pressure to conduct ODS. For example, P6 indicated that many astronomy projects in China were government-funded and required data sharing and submission to conform to government regulations as a priority.

Chinese telescopes are primarily funded by the government, as researchers have not yet had the ability to build a telescope on their own. The entire Chinese population is considered one collective, while those non-Chinese are another. The Chinese government aims to promote ODS to data generated by projects funded by public funds. If researchers have not submitted research data to the government-delegated data center, it could potentially impact their subsequent research project approval. By contrast, some foreign telescopes are built by private institutions and may not have the option for ODS. (P6)

Another astronomer, P3, noted that China’s mandatory data policies have expanded the scale of ODS, though complicated problems remain.

Our data policies are mandatory, especially for projects funded by national grants. That is, if you don’t conduct ODS, your projects may not be accepted. The volume of ODS is rising consequently. However, the issues related to ODS still need to improve, such as the Chinese astronomers’ initiative willing to ODS is weak, and [sometimes] their open data cannot be reused. There is a need further to investigate Chinese researchers’ [ODS] behaviors, particularly to find the stimulations for them to conduct ODS proactively. (P3)

In addition, three Chinese astronomers shared that the traditional funding source in astronomy also motivated their ODS. P8 explained:

In China, astronomical data [from national telescopes] is mostly institutional and collective. One can apply to use a telescope at a particular institution to obtain astronomical data. The applications may receive different priorities, but the data is not privately owned. (P8)

P8 meant that Chinese astronomers relied on large telescope projects funded by the government. Consequently, the ownership of their observed data belongs to the collective astronomical community in China rather than individual astronomers or research teams.

The language prerequisite in astronomy

Three astronomers also raised the issue of a language prerequisite in scientific communication. For example, P12 explained:

[Modern] astronomy predominantly originated from developed nations. Consequently, our conferences, data, and textbooks are primarily in English. However, this can be a barrier for young Chinese astronomers who are not proficient in English. At least among the researchers around me, everyone contends that English is a necessary prerequisite for entering the field of astronomy. That is to say, the entry barrier for astronomy is very high. I termed it “aristocratic science” because it is difficult to conduct astronomical research without good equipment, proficient English, or substantial funding. (P12)

Another astronomer, P9, dismissed astronomical journals in Chinese because these journals would not be acknowledged in the international astronomy community:

I believe English is a strict prerequisite in astronomy. If your English is poor, you may be restricted from engaging in ODS communication. I support [the slogan] publishing in Chinese to enhance Chinese scholars’ international influence, but most astronomical research originates from the West and is primarily dominated by Western institutions. Besides that, domestic journals are not valuable enough for academic evaluation or promotion due to their low impact factor. (P9)

Finally, P13 added that if Chinese astronomers always use English in ODS, it might clash with the academic discourse system in China.

Some people may wonder why, as Chinese researchers, we need to use English to communicate our work. From my personal perspective, of course, I fully support promoting our research discourse system using Chinese as the primary language. However, from a [scientific] communication standpoint, there are times when we need to collaborate with foreign astronomers or improve communication efficiency [in English]. (P13)

The awareness of a competitive environment

Four Chinese astronomers have expressed concerns about ODS due to the highly competitive scientific community to which they belong. For example, P14 stated:

The field we are currently working in is highly competitive, so we need to consider protecting our team’s efforts. If we release the data, there is a possibility that other researchers using more advanced software tools could publish their findings before us. (P14)

Another astronomer, P12, remarked that this competitive atmosphere varies depending on the research directions. He said:

Competition is inevitable but varies across research areas. I engaged in two research areas. One is characterized by intense competition, but the other is more friendly. The highly competitive research area has many researchers pursuing high-quality data and tackling cutting-edge topics. Sometimes, competing with those who publish first or faster becomes necessary. In addition, one kind of “Nei Juan” may exist, which is competing to see who can open data faster. Because the faster your proposal is promised, the sooner your observation project will be approved. (P12)

“Nei Juan” (a.k.a. involution) denotes a fierce but often unfruitful competition to catch up with colleagues, peers, and generations (Li, 2021). P12 acknowledged that the competitive environment would push him to publish first or faster, but he also regarded “Nei Juan” as not always bad for ODS. Still, P9 considered that the “Nei Juan” issue may arise because Chinese astronomers want to catch up with the international pace of astronomical development.

Generally speaking, astronomy is relatively less “Nei Juan” compared to other disciplines. However, its rapid development has begun to become more intense. Particularly, Chinese astronomy is in a phase of catching up, characterized by a collaborative yet competitive atmosphere with the international community. Our national astronomical teams, as a collective, are exerting great efforts to excel in some major projects compared to their foreign counterparts, engaging in strenuous research endeavors. (P9)

However, another astronomer, P11, argued that ODS does not mean “the sooner, the better”:

Some data may have been obtained through instrument testing, and its quality is not particularly high, resulting in lower reliability. If it is made openly accessible immediately, users may not obtain accurate results. Besides, the raw data may contain variances or noises originating from different instruments, requiring standardized processing through software to transform it into [reliable] data products. Only then can scientific users and the public truly benefit from this data. (P11)

The interpretation of Chinese astronomers’ ODS motivations and behaviors

Chinese astronomers’ motivations and behaviors in ODS can be interpreted in three ways. First, a few Chinese astronomers’ adherence to ODS is rooted in tradition. They value the tradition of ODS in astronomy and contend that it should be respected and obeyed as an intrinsic duty (Heuritsch, 2023). They also acknowledge the value of astronomical ODS practices for scientific research and the whole scientific community, which makes them devote themselves to such practices (e.g., P8, P12). Hence, for them, extrinsic principles (e.g., FAIR), policies (e.g., those from the Chinese government), and individual research outputs do not determine their ODS decisions and behaviors. As P11 said, he had learned and obeyed this tradition since he entered the field of astronomy. This finding corroborates Stahlman’s prior research, indicating that astronomers have a strong sense of duty to their research communities and the public (Stahlman, 2022). Still, we found it striking that these Chinese astronomers adhere to ODS traditions even over the government slogan “Write your paper on the motherland,” which is rare in other research disciplines in China (including ours).

Second, many Chinese astronomers evaluate the consequences of ODS. One evaluation lens is self-interest. For example, several Chinese astronomers (e.g., P6, P12) pointed out that ODS can potentially increase their individual research outputs and academic reputation, which motivates them to practice it. Notably, while some Chinese astronomers increase research outputs through ODS both for themselves and for the entire astronomy community, their evaluation prioritizes their own data and paper citations over ODS practices. Another evaluation lens is reciprocity. Some Chinese astronomers (e.g., P1, P10) perceive that the data sharer and user roles in ODS are interchangeable: an open data sharer can become a user, and vice versa, across different research projects and times. As P10 mentioned, many Chinese astronomers have benefited from other astronomers’ ODS when they lacked data or resources. As a result, they aspire to give back to the community by providing opportunities and resources for fellow astronomers facing challenges similar to their own earlier ones. Thus, they adopt ODS in a reciprocal manner, hoping to receive the same treatment in the future. Abele-Brehm et al.’s study revealed that researchers tended to conduct ODS out of promised rewards (Abele-Brehm et al., 2019). Our findings complement it by differentiating self-interest-oriented from reciprocity-oriented rewards of ODS.

Third, some Chinese astronomers’ choice of ODS can be interpreted as contractual. Without ODS, they cannot receive government funding or get their research proposals accepted, which may impede their research progress and contributions. This finding corroborates Zuiderwijk and Spiers’ research, which highlighted resource constraints and individually expected benefits, such as extra citations or potential collaboration opportunities, as essential motivators for ODS in astronomy (Zuiderwijk and Spiers, 2019). Furthermore, the development of modern astronomy in China lags behind that of its U.S. and European counterparts. The Chinese government sponsors most astronomical projects with public funding, hoping to advance Chinese astronomy through centralized power and resources. For example, in 2018, the Chinese government implemented a scientific data management policy mandating the sharing of research data generated with public funding (General Office of the State Council of China, 2018). Thus, Chinese astronomers under contract with government-funded telescopes must enact ODS.

The societal barriers to Chinese astronomers’ ODS practices

We identified a few societal barriers to Chinese astronomers’ ODS practices. First, insufficient data rights protection during ODS may hinder Chinese astronomers’ enthusiasm for, or trust in, conducting ODS. For example, P6 raised the concern that some astronomical data policies are typically formulated by scientific alliances and bind only members within the project teams; astronomers outside these alliances need not obey them. Moreover, P10 and P14 both complained that, even though they had contributed much data, time, and effort, some global ODS practices relied on verbal agreements, which often lacked enforcement and easily compromised their data rights in international projects. This insufficient protection of data rights may give rise to conflicts of interest among collaborating parties, discouraging subsequent data-sharing practices among Chinese astronomers.

Second, a data infrastructure that is weak in usability and accessibility may deter some Chinese astronomers from choosing ODS. As P8 remarked, Chinese open research data infrastructures have not been well developed in terms of data usability and accessibility, which pushes domestic astronomers to publish data via foreign open research platforms. This concern partly reflects the underdevelopment of data infrastructure in China: most of China’s domestic research data repositories have yet to establish license, privacy, and copyright guidelines (Li et al., 2022).

Additionally, we found that a highly competitive environment could trigger “Nei Juan” around competing for publication priority, which could also affect Chinese astronomers’ ODS attitudes and behaviors. Specifically, the increasing emphasis on academic performance has driven many Chinese researchers into a vicious circle of self-imposed pressure to publish papers continuously. This phenomenon is exacerbated by the tenure system in top Chinese universities, which has significantly shaped researchers’ academic work and day-to-day practices (Xu and Poole, 2023). Thus, within this intensely competitive scientific landscape and the dominant publication-based evaluation system, Chinese astronomers may prioritize rapid paper publication over ODS: when scientific resources and academic promotions are scarce, data is invaluable to a researcher. As implied in P14’s quote, some Chinese astronomers may delay or opt out of ODS unless their data rights and research benefits can be ensured.

Two dimensions of the action strategies in Chinese astronomers’ choices for ODS

Apart from the individual and societal factors that motivate or deter Chinese astronomers’ ODS behaviors, we have identified two dimensions in the action strategies that influence their choice of ODS. These two dimensions are presented and interpreted in Table 3.

First, some Chinese astronomers hesitated to conduct ODS because they had to choose between domestic customs and international traditions in astronomy, a choice that might influence or even determine their ODS behaviors. For example, several Chinese astronomers (e.g., P11, P13) prioritized compliance with domestic policies over international ones in deciding where and how to implement ODS (Zhang et al., 2023). Moreover, as P4 explained, almost all Chinese astronomers receive national funding, which shapes their ODS behaviors through national funding agencies’ requirements for project commitments and applications. China’s “dual track” approach, which emphasizes data openness and national security simultaneously, requires researchers to follow the principle of “openness as the normal and non-openness as the exception” (Li et al., 2022). Meanwhile, open data governance and the open data movement have gradually influenced government policies as various national security and personal privacy issues emerge (Arzberger et al., 2004). However, ODS policies grounded in national security or personal privacy concerns may not suit astronomy, which rarely involves security and privacy issues (as highlighted by P9 and P12). As the discrepancy between domestic and international policy environments widens, choosing between different norms may put pressure on Chinese astronomers’ ODS behaviors.

Second, we found some ethical problems related to ODS arising from the language prerequisite, or preference, in Chinese astronomy. As P12 mentioned, language has become an entry bar in Chinese astronomy: astronomy is a sort of “aristocratic science” in the sense that English proficiency is a prerequisite for anyone or any institution seeking to participate seriously in astronomy research and practice. Consequently, China has no citizen science project comparable to Galaxy Zoo or Zooniverse in the U.S., and local or private colleges in China cannot afford to establish astronomy as a scientific discipline because many participants in Chinese citizen science projects and below-the-top institutions are not proficient in English. Relatedly, as P9 mentioned, domestic astronomy journals in China are unanimously regarded as inferior and not valuable enough for academic evaluation or promotion. This sets Chinese astronomy apart from other research disciplines in China, where domestic journals are not “biased” against based on publication language.

Third, domestic astronomy projects that follow international proprietary data period policies may exert extra pressure or constraints on Chinese astronomers conducting ODS. For example, the LAMOST and FAST projects in China follow international conventions in setting their proprietary data periods and publishing their ODS policies in English. As a result, Chinese astronomers with limited English proficiency would confront logistical hindrances in using these domestic astronomy projects to share their data, ideas, and publications in Chinese. If they want to implement international ODS via LAMOST or FAST, they must spend extra time, effort, or funding translating their data and ideas into English, which may affect the time and resources they can allocate to other research activities within the proprietary data period, including ODS itself. Hence, we surmise that this language obstacle could demotivate or discourage some Chinese astronomers from ODS.

Fourth, some Chinese astronomers may have to choose between personal development and scientific advancement regarding ODS. This may be due, first, to the adverse effects of the Chinese academic promotion system on some astronomers. In China, universities and research institutions typically use publication lists to evaluate academic performance and promotion (Cyranoski, 2018). As P14 mentioned, competition for research publication has been growing in some areas of astronomy (e.g., burst sources). Some Chinese astronomers may therefore withhold ODS to protect their data rights and ensure timely publication. It may also be explained by a phenomenon now prevalent in Chinese academia called “Nei Juan.” Consequently, some Chinese scholars, including astronomers, are pushed to be competitive or “selfish” to increase their research publications, citation metrics, funding opportunities, and data rights. Prior work has found that researchers’ willingness to share data tends to be low when perceived competition is high (Acciai et al., 2023; Thursby et al., 2018), and that researchers’ intrinsic motivation gradually weakens when their organizations implement accountability measures (such as contract signing) and increasingly pursue performance-oriented academic research (Gu and Levin, 2021). These findings may also explain some Chinese astronomers’ hesitation about ODS.

Last but not least, astronomy is highly international, and ODS can encourage collaboration among astronomers from different countries. Through joint observations with multiple telescopes, astronomers can collectively identify the underlying causes of astronomical phenomena and thereby promote scientific advancement. Nevertheless, as P7 mentioned, some collaborators may break their promises about data use, which disincentivizes data sharers from continuing ODS. Under the influence of “Nei Juan” and the limitations of verbal commitments, some Chinese astronomers may find it challenging to choose between ODS and prioritizing their own academic interests.

Conclusion and implications for future research

Many astronomers in Western countries may take ODS for granted as a way to enhance astronomical discoveries and productivity. However, how strongly this assumption holds among Chinese astronomers has not been investigated or deliberated extensively, which may hinder international ODS with Chinese astronomers and lead to misunderstandings of their perceptions and practices of ODS. Thus, in this paper, we reported findings from 14 semi-structured interviews and 136 open-ended survey responses with Chinese astronomers about their motivations and hesitations regarding ODS. We found that many Chinese astronomers regarded ODS as an established international duty to obey or a form of reciprocity to harness. However, some Chinese astronomers also agonized over ODS because of data rights concerns, preferences for usable and accessible data infrastructure, and “Nei Juan” or academic promotion pressures. Synthesizing these findings, we characterize them as Chinese astronomers’ concerns and choices between domestic customs and international traditions in ODS. Our research has several limitations. First, more data are needed to test and generalize our findings about ODS to Chinese scholars in other disciplines. Second, we have not conducted a comparative analysis of the perceptions, concerns, and behavioral differences of astronomers in other countries. In the future, we intend to address this gap by conducting a global study to provide a more comprehensive understanding of ODS in science.

Our research has several implications for future work. First, we advocate for empathy and compromise between domestic customs and international traditions in Chinese astronomy. Undoubtedly, developed and English-speaking countries have long dominated science and research paradigms. On the positive side, such dominance has established various traditions, such as ODS in astronomy, that are respected and followed by many scholars worldwide, including many astronomers in China. On the negative side, such long-standing scientific dominance may trigger a developing country’s domestic countermeasures or competing policies, which can agonize some domestic researchers and impede global ODS. For example, as we revealed, some Chinese astronomers regarded astronomy as an “aristocratic science” that screens out Chinese astronomers and citizen science participants who are not proficient in English. Future research can further investigate the power dynamics between international traditions and domestic customs in other cultures or research disciplines beyond ODS in astronomy.

Second, we suggest that the international astronomy community publish more inclusive ODS rules that consider the societal contexts of researchers from different countries with different cultural or language backgrounds. Efforts should be made to minimize the reinforcement of dominant positions in scientific research through ODS, and to develop more inclusive, sustainable, and equitable rules that appeal to more countries to join. This may be achieved by offering ODS platforms in multiple languages, providing translation assistance for drafting collaboration agreements, and supporting multiple options for international collaboration and communication among astronomers from different countries. In this regard, the CARE (Collective benefit, Authority to control, Responsibility, and Ethics) principles serve as a good example (Global Indigenous Data Alliance, 2019). We also propose that the Chinese government, academic institutions, and funding agencies be more globally minded and open in stimulating ODS, not merely within their borders, but endeavoring to become a global leader, or at least an essential stakeholder, in promoting knowledge sharing and scientific collaboration.

Third, our findings indicate that astronomers’ individual ethical perspectives play a significant role in guiding their ODS practices. To start, reciprocity effectively enhances ODS regardless of established or domestic research policies. We therefore suggest that policymakers in China place more emphasis on the benefits of reciprocity and build a collaborative effort across the scientific community. As our qualitative data revealed, the collaboration benefits of ODS are highly motivating for Chinese astronomers. Still, we identified concerns among Chinese astronomers: for instance, they highlighted the limitations of verbal commitments for ODS within the proprietary data period, which may engender “free riders” in research. Further, we noticed that some Chinese astronomers conduct ODS out of respect for the tradition and obey it as a duty, without considering external factors such as individual interests or community benefits. We posit that this ethical perspective aligns with deontology. Therefore, we suggest that ODS stakeholders, such as the scientific community, research institutions and organizations, and ODS platform developers, propose specific norms or mottos around the ODS tradition in astronomy to stimulate astronomers’ voluntary sense of duty to conduct it.

Finally, since we found that some astronomers conducted ODS primarily out of self-interest in academia, efforts should be made to ensure that the rights of researchers in astronomy are protected and that they do not bear risks caused by others (e.g., data misuse or breach of verbal agreements). Future research can administer surveys or experiments to explore how significantly these individual factors impact astronomers’ ODS behaviors.

Data availability

The complete translated and transcribed data from our study is available at Peking University Open Research Data (https://doi.org/10.18170/DVN/JLJGPF).

Abele-Brehm AE, Gollwitzer M, Steinberg U, Schönbrodt FD (2019) Attitudes toward open science and public data sharing. Soc Psychol 50(4):252–260. https://doi.org/10.1027/1864-9335/a000384

Acciai C, Schneider JW, Nielsen MW (2023) Estimating social bias in data sharing behaviours: an open science experiment. Sci Data 10(1):233. https://doi.org/10.1038/s41597-023-02129-8

Anane-Sarpong E, Wangmo T, Tanner M (2020) Ethical principles for promoting health research data sharing with sub-Saharan Africa. Dev World Bioeth 20(2):86–95. https://doi.org/10.1111/dewb.12233

Anane‐Sarpong E, Wangmo T, Ward CL, Sankoh O, Tanner M, Elger BS (2018) “You cannot collect data using your own resources and put it on open access”: perspectives from Africa about public health data‐sharing. Dev World Bioeth 18(4):394–405. https://doi.org/10.1111/dewb.12159

Arzberger P, Schroeder P, Beaulieu A, Bowker G, Casey K, Laaksonen L, Moorman D, Uhlir P, Wouters P (2004) Promoting access to public research data for scientific, economic, and social development. Data Sci J 3:135–152. https://doi.org/10.2481/dsj.3.135

Banks GC, Field JG, Oswald FL, O'Boyle EH, Landis RS, Rupp DE, Rogelberg SG (2019) Answers to 18 questions about open science practices. J Bus Psychol 34:257–270. https://doi.org/10.1007/s10869-018-9547-8

Bezuidenhout L, Chakauya E (2018) Hidden concerns of sharing research data by low/middle-income country scientists. Glob Bioeth 29(1):39–54. https://doi.org/10.1080/11287462.2018.1441780

Boeckhout M, Zielhuis GA, Bredenoord AL (2018) The FAIR guiding principles for data stewardship: fair enough? Eur J Hum Genet 26(7):931–936. https://doi.org/10.1038/s41431-018-0160-0

Borgman CL, Wofford MF, Golshan MS, Darch PT (2021) Collaborative qualitative research at scale: Reflections on 20 years of acquiring global data and making data global. J Assoc Inf Sci Technol 72(6):667–682. https://doi.org/10.1002/asi.24439

Braun V, Clarke V (2006) Using thematic analysis in psychology. Qual Res Psychol 3(2):77–101. https://doi.org/10.1191/1478088706qp063oa

Cyranoski D (2018) China awaits controversial blacklist of ‘poor quality’ journals. Nature 562(7728):471–472. https://doi.org/10.1038/d41586-018-07025-5

Deil C, Boisson C, Kosack K, Perkins J, King J, Eger P, Lombardi S et al. (2017) Open high-level data formats and software for gamma-ray astronomy. AIP Conf Proc 1792(1). AIP Publishing

Digital Science, Hahnel M, Smith G, Schoenenberger H, Scaplehorn N, Day L (2024) The State of Open Data 2023 (Version 1). Digital Science. Available at: https://doi.org/10.6084/m9.figshare.24428194.v1 Accessed 10 March 2024

Dorch BF, Drachen TM, Ellegaard O (2015) The data sharing advantage in astrophysics. Proc Int Astron Union 11(A29A):172–175. https://doi.org/10.1017/S1743921316002696

Enwald H, Grigas V, Rudzioniene J, Kortelainen T (2022) Data sharing practices in open access mode: a study of the willingness to share data in different disciplines. Inform Res Int Electron J 27. https://doi.org/10.47989/irpaper932

Fox J, Pearce KE, Massanari AL, Riles JM, Szulc Ł, Ranjit YS, Gonzales LA (2021) Open science, closed doors? Countering marginalization through an agenda for ethical, inclusive research in communication. J Commun 71(5):764–784. https://doi.org/10.1093/joc/jqab029

General Office of the State Council of China (2018) Scientific data management measures. Available at: http://www.gov.cn/home/2018-04/02/content_5279296.htm Accessed 11 June 2023

General Office of the State Council of China (2024) Opinions on building a more perfect system and mechanism for the market-oriented allocation of factors. Available at: https://www.gov.cn/xinwen/2022-12/21/content_5732906.htm Accessed 15 March 2024

Genova F (2018) Data as a research infrastructure CDS, the Virtual Observatory, astronomy, and beyond. EPJ Web Conf 186:01001. https://doi.org/10.1051/epjconf/201818601001

Global Indigenous Data Alliance (2019) CARE Principles for Indigenous Data Governance. Available at: https://www.gida-global.org/care . Accessed 14 June 2024

Gray N, Mann RG, Morris D, Holliman M, Noddle K (2012) AstroDAbis: Annotations and cross-matches for remote catalogues. ASP Conf Ser 461:351–354. https://doi.org/10.48550/arXiv.1111.6116

Gu J, Levin JS (2021) Tournament in academia: a comparative analysis of faculty evaluation systems in research universities in China and the USA. High Educ 81:897–915. https://doi.org/10.1007/s10734-020-00585-4

Guy LP, Bechtol K, Bellm E, Blum B, Graham ML, Ivezić Ž, Strauss M et al. (2023) Rubin Observatory plans for an early science program. Available at: https://rtn-011.lsst.io/RTN-011.pdf Accessed 15 March 2024

Harris R, Baumann I (2015) Open data policies and satellite Earth observation. Space Policy 32:44–53. https://doi.org/10.1016/j.spacepol.2015.01.001

Heuritsch J (2023) The evaluation gap in astronomy—explained through a rational choice framework. Publications 11(2):33. https://doi.org/10.3390/publications11020033

Houtkoop BL, Chambers C, Macleod M, Bishop DVM, Nichols TE, Wagenmakers E-J (2018) Data sharing in psychology: a survey on barriers and preconditions. Adv Methods Pract Psychol Sci 1(1):70–85. https://doi.org/10.1177/2515245917751886

Huang Y, Cox AM, Sbaffi L (2021) Research data management policy and practice in Chinese university libraries. J Assoc Inf Sci Technol 72:493–506. https://doi.org/10.1002/asi.24413

Jin WY, Peng M (2021) The effects of social perception on moral judgment. Front Psychol 11:557216. https://doi.org/10.3389/fpsyg.2020.557216

Ju B, Kim Y (2019) The formation of research ethics for data sharing by biological scientists: an empirical analysis. Aslib J Inf Manag 71(5):583–600. https://doi.org/10.1108/AJIM-12-2018-0296

Kim Y, Zhang P (2015) Understanding data sharing behaviors of STEM researchers: The roles of attitudes, norms, and data repositories. Libr Inf Sci Res 37(3):189–200. https://doi.org/10.1016/j.lisr.2015.04.006

Kim Y, Stanton JM (2016) Institutional and individual factors affecting scientists’ data-sharing behaviors: a multilevel analysis. J Assoc Inf Sci Technol 67(4):776–799. https://doi.org/10.1002/asi.23424

Koribalski BS (2019) Open astronomy and big data science. Proc Int Astron Union 15(S367):227–230. https://doi.org/10.1017/S1743921321000879

Kurata K, Matsubayashi M, Mine S (2017) Identifying the complex position of research data and data sharing among researchers in natural science. Sage Open 7(3):2158244017717301. https://doi.org/10.1177/21582440177173

Kurtz MJ, Eichhorn G, Accomazzi A, Grant C, Demleitner M, Murray SS (2004) Worldwide use and impact of the NASA Astrophysics Data System digital library. J Am Soc Inf Sci Technol 56(1):36–45. https://doi.org/10.1002/asi.20095

Lamprecht AL, Garcia L, Kuzak M, Martinez C, Arcila R, Martin Del Pico E, Dominguez Del Angel V, Van De Sandt S, Ison J, Martinez PA (2020) Towards FAIR principles for research software. Data Sci 3(1):37–59. https://doi.org/10.3233/DS-190026

Lee H, Reid E, Kim WG (2014) Understanding knowledge sharing in online travel communities: antecedents and the moderating effects of interaction modes. J Hospit Tour Res 38(2):222–242. https://doi.org/10.1177/1096348012451454

Lester DG, Martani A, Elger BS, Wangmo T (2021) Individual notions of fair data sharing from the perspectives of Swiss stakeholders. BMC Health Serv Res 21:1–12. https://doi.org/10.1186/s12913-021-06906-2

Li M (2021) “Nei Juan” in exam-oriented education in China. J Lit Art Stud 11(12):1028–1033. https://doi.org/10.17265/2159-5836/2021.12.015

Li C, Zhou Y, Zheng X, Zhang Z, Jiang L, Li Z, Wang P, Li J, Xu S, Wang Z (2022) Tracing the footsteps of open research data in China. Learn Publ 35:46–55. https://doi.org/10.1002/leap.1439

Liu J (2021) Data Ethics Behaviors and Norms of Researchers [Master, University of Chinese Academy of Sciences]. Available at: https://d.wanfangdata.com.cn/thesis/ChJUaGVzaXNOZXdTMjAyMzAxMTISCFkzODY0MDg4GghidjYyajZyNQ%3D%3D Accessed 11 March 2024

Pasquetto IV, Sands AE, Darch PT, Borgman CL (2016) Open data in scientific settings: from policy to practice. In: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, San Jose, California, USA. https://doi.org/10.1145/2858036.2858543

Pepe A, Goodman A, Muench A, Crosas M, Erdmann C (2014) How do astronomers share data? Reliability and persistence of datasets linked in AAS publications and a qualitative study of data practices among US astronomers. PLoS One 9(8):e104798. https://doi.org/10.1371/journal.pone.0104798

Popkin G (2019) Data sharing and how it can benefit your scientific career. Nature 569(7756):445–447. https://doi.org/10.1038/d41586-019-01506-x

Rebull LM (2022) Real astronomy data for anyone: explore NASA’s IRSA. Phys Teach 60(1):72–73. https://doi.org/10.1119/10.0009117

Reichman OJ, Jones MB, Schildhauer MP (2011) Challenges and opportunities of open data in ecology. Science 331(6018):703–705. https://doi.org/10.1126/science.1197962 . PMID: 21311007

Sayogo DS, Pardo TA (2013) Exploring the determinants of scientific data sharing: Understanding the motivation to publish research data. Gov Inf Q 30:S19–S31. https://doi.org/10.1016/j.giq.2012.06.011

Science Data Bank (2024) Subject distribution of datasets published in Science Data Bank. Available at: https://www.scidb.cn/en/list?searchList/ordernum=1 Accessed 24 March 2024

Serwadda D, Ndebele P, Grabowski MK, Bajunirwe F, Wanyenze RK (2018) Open data sharing and the Global South—Who benefits? Science 359(6376):642–643. https://doi.org/10.1126/science.aap8395

Stamm K, Lin L, Christidis P (2017) Career stages of health service psychologists. American Psychological Association Center for Workforce, Washington, DC. Available at: https://www.apa.org/workforce/publications/15-health-service-career/ Accessed 24 March 2024

Stahlman GR (2022) From nostalgia to knowledge: considering the personal dimensions of data lifecycles. J Assoc Inf Sci Technol 73(12):1692–1705. https://doi.org/10.1002/asi.24687

Tolle KM, Tansley DSW, Hey AJ (2011) The fourth paradigm: data-intensive scientific discovery. Proc IEEE 99(8):1334–1337. https://doi.org/10.1109/JPROC.2011.2155130

Thursby JG, Haeussler C, Thursby MC, Jiang L (2018) Prepublication disclosure of scientific results: Norms, competition, and commercial orientation. Sci Adv 4(5):eaar2133. https://doi.org/10.1126/sciadv.aar2133

Tu Z, Shen J (2023) Value of open research data: a systematic evaluation framework based on multi-stakeholder survey. Libr Inf Sci Res 45(4):101269. https://doi.org/10.1016/j.lisr.2023.101269

UNESCO (2021) UNESCO Recommendation on Open Science. UNESCO General Conference, France. https://doi.org/10.54677/MNMH8546

UK Research and Innovation (2016) Concordat on open research data. Available at: https://www.ukri.org/wp-content/uploads/2020/10/UKRI-020920-ConcordatonOpenResearchData.pdf . Accessed 23 March 2024

van Gend T, Zuiderwijk A (2023) Open research data: a case study into institutional and infrastructural arrangements to stimulate open research data sharing and reuse. J Librariansh Inf Sci 55(3):782–797. https://doi.org/10.1177/09610006221101200

Walsh CG, Xia W, Li M, Denny JC, Harris PA, Malin BA (2018) Enabling open-science initiatives in clinical psychology and psychiatry without sacrificing patients’ privacy: current practices and future challenges. Adv Methods Pract Psychol Sci 1(1):104–114. https://doi.org/10.1177/2515245917749652

Wang S, Kinoshita S, Yokoyama HM (2024) Write your paper on the motherland? Account Res 1–3. https://doi.org/10.1080/08989621.2024.2347398

Wilkinson MD, Dumontier M, Aalbersberg IJ, Appleton G, Axton M, Baak A, Blomberg N, Boiten J-W, da Silva Santos LB, Bourne PE (2016) The FAIR Guiding Principles for scientific data management and stewardship. Sci Data 3(1):1–9. https://doi.org/10.1038/sdata.2016.18

Xu J, Chen D, He C, Zeng Y, Nicholas D, Wang Z (2020) How are the new wave of Chinese researchers shaping up in scholarly communication terms? Malays J Libr Inf Sci 25(3):49–70. https://doi.org/10.22452/mjlis.vol25no3.4

Xu W, Poole A (2023) ‘Academics without publications are just like imperial concubines without sons’: the ‘new times’ of Chinese higher education. J Educ Policy 1–18. https://doi.org/10.1080/02680939.2023.2288339

Zhang X, Reindl S, Tian H, Gou M, Song R, Zhao T, Jandrić P (2022) Open science in China: openness, economy, freedom & innovation. Educ Philos Theory 55(4):432–445. https://doi.org/10.1080/00131857.2022.2122440

Zhang L, Downs RR, Li J, Wen L, Li C (2021) A review of open research data policies and practices in China. Data Sci J. https://doi.org/10.5334/dsj-2021-003

Zuiderwijk A, Spiers H (2019) Sharing and re-using open data: a case study of motivations in astrophysics. Int J Inf Manag 49:228–241. https://doi.org/10.1016/j.ijinfomgt.2019.05.024

Zuiderwijk A, Türk BO, Brazier F (2024) Identifying the most important facilitators of open research data sharing and reuse in Epidemiology: a mixed-methods study. PloS One 19(2):e0297969. https://doi.org/10.1371/journal.pone.0297969

Acknowledgements

The authors acknowledge the support of the Beijing Municipal Social Science Foundation under Grant [No. 22ZXC008].

Author information

Authors and Affiliations

Department of Information Management, Peking University, Beijing, China

Jinya Liu & Huichuan Xia

National Science Library, Chinese Academy of Sciences, Beijing, China

Kunhua Zhao & Liping Gu

Department of Information Resource Management, School of Economics and Management, University of Chinese Academy of Sciences, Beijing, China

Contributions

JL: conceptualization, methodology, data collection, formal analysis, original draft, writing, and editing. KZ: review, data collection, and editing. LG: data collection and editing. HX: conceptualization, methodology, formal analysis, writing, editing, and paper finalization.

Corresponding author

Correspondence to Huichuan Xia .

Ethics declarations

Competing interests

The authors declare no competing interests.

Ethical approval

This study was reviewed and approved by the Institutional Review Board of the Institute of Psychology, Chinese Academy of Sciences. All methods were carried out following the relevant guidelines and regulations. The ethical approval number of this study is H23162.

Informed consent

Informed consent ensures that participants are fully aware of the nature of the research and their involvement in it. Our informed consent form therefore provided participants with adequate information about the purpose of the research, how they would be involved, the intended use of the results, their rights as participants, and any potential risks. Before beginning the interviews, we clearly explained the content of the informed consent form to participants, gave them ample time to read it, and thoroughly addressed any questions they had about it. All participants carefully read and agreed to the informed consent form.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Data set 1109, Data set 1352, Data set 37510

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article

Liu, J., Zhao, K., Gu, L. et al. To share or not to share, that is the question: a qualitative study of Chinese astronomers’ perceptions, practices, and hesitations about open data sharing. Humanit Soc Sci Commun 11, 1063 (2024). https://doi.org/10.1057/s41599-024-03570-9

Received: 16 November 2023

Accepted: 09 August 2024

Published: 22 August 2024

DOI: https://doi.org/10.1057/s41599-024-03570-9


  • Open access
  • Published: 04 September 2024

Exploring the feasibility and acceptability of community paramedicine programs in achieving vaccination equity: a qualitative study

  • Monica L. Kasting 1,2,
  • Alfu Laily 1,
  • Sidney J. Smith 1,
  • Sathveka Sembian 3,
  • Katharine J. Head 4,
  • Bukola Usidame 1,
  • Gregory D. Zimet 5 &
  • Laura M. Schwab-Reese 1

BMC Health Services Research volume 24, Article number: 1022 (2024)

Mobile Integrated Health-Community Paramedicine (MIH-CP) is a novel approach that may reduce the rural-urban disparity in vaccination uptake in the United States. MIH-CP providers, as physician extenders, offer clinical follow-up and wrap-around services in homes and communities, uniquely positioning them as trusted messengers and vaccine providers. This study explores stakeholder perspectives on feasibility and acceptability of community paramedicine vaccination programs.

We conducted semi-structured qualitative interviews with leaders of paramedicine agencies with MIH-CP, without MIH-CP, and state/regional leaders in Indiana. Interviews were audio recorded, transcribed verbatim, and analyzed using content analysis.

We interviewed 24 individuals who represented EMS organizations with MIH-CP programs (MIH-CP; n = 10), EMS organizations without MIH-CP programs (non-MIH-CP; n = 9), and state/regional administrators (SRA; n = 5). Overall, the sample included professionals with an average of 19.6 years in the field (range: 1–42 years). Approximately 75% (n = 14) were male, and all identified as non-Hispanic white. MIH-CPs reported they initiated a vaccine program to reach underserved areas, operating as a health department extension. Some MIH-CPs integrated existing services, such as food banks, with vaccine clinics, while other MIH-CPs focused on providing vaccinations as standalone initiatives. Key barriers to vaccination program initiation included funding and vaccinations being a low priority for MIH-CP programs. However, participants reported support for vaccine programs, particularly as they provided an opportunity to alleviate health disparities and improve community health. MIH-CPs reported low vaccine hesitancy in the community when community paramedics administered vaccines. Non-MIH-CP agencies expressed interest in launching vaccine programs given clear guidance, sustainable funding, and adequate personnel.

Conclusions

Our study provides important context on the feasibility and acceptability of implementing an MIH-CP program. Findings offer valuable insights into using community paramedics, a novel and innovative approach, to reduce the health disparities seen in vaccine uptake in rural communities.

Introduction

Mobile integrated health-community paramedicine (MIH-CP) is a rapidly evolving patient-centered healthcare delivery model within the domain of emergency medical services (EMS) [1, 2]. Community paramedics (CPs), who make up a large portion of the MIH-CP workforce, expand the traditional role of EMS personnel to act as physician extenders, delivering non-urgent but key medical services such as vaccinations. This is particularly important given existing vaccination inequities.

The COVID-19 pandemic highlighted these systemic health inequities, exacerbated existing health disparities, and broadened the gap in access to care. For example, many healthcare visits during the pandemic took place virtually using telemedicine, and research shows that low socioeconomic status (SES), rural, and minority populations had less access to telehealth during this period [ 3 , 4 ]. Furthermore, a study of Rural Health Clinics (RHCs) found that the clinics reported high levels of financial concern and challenges obtaining personal protective equipment, resulting in fewer preventive services being provided during the pandemic [ 5 ].

Vaccination rates also dropped during the pandemic, with early reports suggesting that some childhood vaccination rates fell by as much as 70% at the beginning of the pandemic [ 6 , 7 ]. These reductions in vaccine uptake are multifactorial and are associated not only with lack of access to care, but also with higher levels of mistrust in the medical establishment among underrepresented minorities and people living in rural areas [ 7 ]. A potential solution to address these disparities is through trusted messengers, who have the opportunity to change previously held beliefs and increase awareness and acceptability of vaccinations [ 8 ]. One example of a trusted messenger is a CP.

CPs are evolving to be a blend of community health workers, social workers, and non-emergency health care providers [ 1 , 2 , 9 ]. Approximately 18 states in the United States (U.S.) have CPs, but the roles vary in scope, training, and authority [ 1 , 10 ]. Studies have shown that CPs are positively received across the quadruple aim framework used to assess the effectiveness of a health care system (i.e., improved patient satisfaction, improved provider satisfaction, reduced healthcare costs, and improved population health outcomes) [ 1 , 11 ]. While few studies encompass all four of the quadruple aims, review papers have shown that MIH-CP programs are generally perceived positively as a means of bridging the healthcare delivery gap, especially within communities with healthcare shortages, such as rural areas, and can potentially reduce existing disparities [ 1 , 2 , 9 , 10 , 12 , 13 , 14 , 15 ].

This positive reception and patient satisfaction suggests MIH-CP may be a novel approach to address health disparities and improve uptake of preventive health services, including vaccinations [ 1 , 14 , 15 ]. MIH-CP programs are often able to administer vaccines [ 16 , 17 , 18 , 19 , 20 , 21 , 22 ], but there are few studies specifically examining the impact of this service. Currently, Indiana has more than a dozen MIH-CP programs [ 23 ], including many that provide vaccination services. More research is needed to understand program effectiveness and the potential usefulness in improving health equity through these programs. In addition, it is unclear whether community paramedics are receptive to including vaccine administration in their scope of care, which may cause implementation challenges. Therefore, the aim of this study was to determine perceived barriers, facilitators, attitudes, and beliefs of relevant stakeholders (i.e., MIH-CP/EMS providers, leaders, and administrators) regarding implementation of MIH-CP-based adult vaccination services in the state of Indiana.

Methods

This study was reviewed and approved as exempt by the Institutional Review Boards at both Purdue University and Indiana University.

Setting, design, sample, and recruitment

We conducted one-time interviews with three groups of participants: leaders of paramedicine agencies with registered MIH-CP programs (10 interviews), leaders of paramedicine agencies without MIH-CP programs (9 interviews), and state/regional administrators (SRA; 5 interviews). Below we describe how each group was identified and recruited.

The Indiana Department of Homeland Security (DHS), which oversees EMS in Indiana, provided a list of registered MIH-CP programs as of January 2023. Team members contacted the administrators of these programs via email or phone to confirm whether the MIH-CP program was active. Of the 16 registered agencies with an active MIH-CP program, 10 (63%) completed interviews.

To identify non-MIH-CP providers, we used targeted recruitment and identified counties that were demographically similar to the counties served by the MIH-CP interviewees, specifically focusing on rurality (within 4% rurality of MIH-CP interviewees) and average resident age (mean age within 2 years of MIH-CP interviewees). Then, we identified the hospital-based, governmental, paid fire, and private paramedicine agencies in those counties from a registry of ambulance service providers from DHS. We excluded volunteer organizations, as they are quite different in scope and function than organizations with employees. Of the 18 programs identified and contacted, 9 (50%) completed an interview.

Finally, we contacted the state MIH-CP administrators and regional EMS administrators, based on the contact information provided on the DHS website [ 24 ]. Approximately half (i.e., 5 of 9) of the state and regional administrators contacted by the team completed an interview.

All interviews were conducted by study team members (MLK, AL, SJS) trained in qualitative interviewing and were recorded via Zoom. One interviewer (MLK) is a faculty member with a PhD and two are graduate students (AL, SJS). All interviewers are female. No one else was present in the interviews besides the participants and research team members. The interview guides are included as Additional File 1 (leaders of paramedicine agencies) and Additional File 2 (state/regional administrators). The audio files were transcribed in three rounds: first by an artificial-intelligence transcriber (happyscribe.com), then verified through two rounds of manual transcription carried out by team members with training in qualitative methods (AL, SJS, SS). Following the interview, participants completed an anonymous survey about their demographic characteristics and beliefs about the COVID-19 vaccine. Survey items were adapted from previously validated surveys [ 25 , 26 , 27 ], where applicable. The survey codebook is included as Additional File 3. All participants were offered a $50 gift card in appreciation of their time, although many declined due to agency restrictions on accepting gifts.

Interviews started with introductory questions about the participants’ roles within their agency to build rapport, better understand the participants’ experience in EMS, and describe the goals of the research project. The rest of the interview questions focused on MIH-CP program history and functions. However, the questions were tailored to the participants’ experiences with MIH-CP and whether they had vaccination programs. For MIH-CP interviews, the subsequent questions focused on their overall MIH-CP programs as well as their vaccination programs, emphasizing how the programs started, barriers to implementation, operational barriers, and lessons learned. Non-MIH-CP interviews emphasized similar topics except that the questions were framed around their opinions and perspectives on MIH-CP as someone without a program. State/regional administrator interviews focused on higher-level administration of MIH-CP programs.

Data analysis

We used qualitative content analysis, as described by Schreier, to analyze the transcripts [ 28 ]. First, two authors (LMSR, AL) completed an exhaustive and comprehensive review of the transcripts to ensure a thorough understanding of all the data. During these reviews, they took notes on content that was repeated across interviews and areas that were unique to each interview. After gaining familiarity with the material, each author reviewed the transcripts for a second time, specifically focusing on content that was not noted in the first review. Then, each author organized their notes into a first draft of a codebook. This approach is most similar to the codebook development strategy described by Schreier as summarization. As part of our note-taking process, we paraphrased relevant passages. As we developed the codebook, we deleted paraphrases that were superfluous and combined related paraphrases. Then, we used the paraphrases to generate the main category and subcategory names. Although we did not generate the main categories prior to codebook development, our draft codebooks were closely aligned with our objectives because the semi-structured interview guide used to collect data was aligned with our objectives.

Based on these initial drafts, two members of the research team (LMSR, MLK) reviewed the draft codebooks, combined the codebook drafts into a single comprehensive codebook (Additional File 4), and pilot-coded a transcript together. Then, one member of the team (LMSR) applied the codebook to the transcripts. Finally, two members of the team (LMSR, MLK) met to review the coded materials and assess for disagreement in the code application. However, the codebook is quite straightforward and descriptive, so there were no disagreements.

Saturation has multiple meanings in qualitative methods. In qualitative content analysis, as described by Schreier, saturation occurs when each subcategory has at least one code segment (i.e., no subcategory is ‘empty’). Because we used a data-driven approach to develop our codebook, we automatically met the criterion of saturation. That is, if the content was not present in the data, it was not present in our coding framework. Data analyses were conducted using MAXQDA.

Results

Sample characteristics

We interviewed 24 individuals who represented EMS organizations with MIH-CP programs (MIH-CP; n  = 10), EMS organizations without MIH-CP programs (non-MIH-CP; n  = 9), and state/regional administrators (SRA; n  = 5). Interviews lasted an average of 41 min (range: 14–75 min). Of the 24 interviewees, 19 responded to the survey provided at the end of the interview. Overall, the sample included highly experienced EMS professionals with an average of 19.6 years in the field (range: 1–42 years). Approximately 75% ( n  = 14) of respondents were male, and all identified as non-Hispanic white. Nearly two-thirds of respondents were fully vaccinated for COVID-19 and had received at least one booster shot ( n  = 12). Another quarter were fully vaccinated without a booster shot ( n  = 5). One respondent received one dose of the COVID-19 vaccine, and one was not vaccinated.

When asked if their programs ever distributed vaccines, more than 75% of agencies reported doing so. This was significantly different between MIH-CP (10 out of 10 distributed vaccines) and non-MIH-CP (5 out of 9 distributed vaccines) programs ( p  < 0.05). Most programs ( n  = 11) discussed distributing COVID-19 vaccines during the pandemic. Flu vaccines were the second most commonly administered vaccine ( n  = 7). Other vaccines included Tetanus, Hepatitis A, and childhood vaccines. Some agencies partnered with other organizations (i.e., primary care providers, health departments, and schools) and were willing to give any vaccines requested by these partners. These partnerships and structures are further discussed in the next section.
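The text does not name the statistical test behind the reported p-value; for a 2×2 table with cells this small (10 of 10 vs. 5 of 9), Fisher's exact test is the conventional choice. A minimal, standard-library-only sketch under that assumption:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the probabilities of all tables with the same margins whose
    hypergeometric probability is no greater than the observed table's.
    """
    row1 = a + b          # size of the first group
    col1 = a + c          # total "successes" across both groups
    n = a + b + c + d     # grand total

    def p_table(x):
        # Hypergeometric probability of x successes in the first group
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = p_table(a)
    lo = max(0, col1 - (n - row1))   # smallest feasible x
    hi = min(row1, col1)             # largest feasible x
    # Small tolerance guards against floating-point ties
    return sum(p for p in (p_table(x) for x in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-9))

# Observed counts: 10/10 MIH-CP vs. 5/9 non-MIH-CP agencies distributed vaccines
p = fisher_exact_two_sided(10, 0, 5, 4)
print(f"{p:.3f}")  # -> 0.033, consistent with the reported p < 0.05
```

SciPy's `scipy.stats.fisher_exact` implements the same minimum-likelihood two-sided convention, so it should produce a matching p-value for this table.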

Vaccine program structure and organization

All vaccine programs fit into one of three structures: outreach for a separate agency, extension of existing MIH-CP services, or standalone programs focused on vaccine distribution.

Outreach for Separate Agencies

Most vaccine programs were outreach for a separate agency, generally the county health department during the COVID-19 pandemic. In Indiana, many county health departments sponsored mass vaccine clinics and/or provided in-home vaccines for individuals unable to leave their homes. EMS agencies provided staffing for both approaches. One individual shared that during the COVID-19 mass clinics “ the state said that anywhere they were administering vaccines , they had to have a paramedic on site.” (Non-MIH-CP-15). Some agencies allowed their staff to go during normal work hours, while others treated it as volunteer/non-work time. Generally, the MIH-CP programs were more focused on providing in-home services, although a couple of non-MIH-CP programs also provided these services. As one participant explained, “ Let’s do what paramedicine’s meant to do , and it’s to be mobile… ” (MIH-CP-08). Generally, these programs followed the same administrative processes:

“So basically, county health nurse will identify Mrs. Smith at 1234 North Main Street, needs a vaccine. Can you do it on this date? Sure, we’ll do it. We’ll send all the information back to the county health nurse and then she’ll enter it in [the state vaccine registry]. And that’s kind of the partnership we have is we’re the boots on the ground and they’re the paperwork side of things, which is obviously the least fun part.” (MIH-CP-05).

At least one program continued partnering with the county health department beyond the COVID-19 vaccine clinics, including providing vaccines to students in schools and routine vaccines in people’s homes. These arrangements had several benefits for EMS agencies: reduced administrative burden, financial compensation, and relationship building with community organizations. As discussed above, the health department was responsible for procuring and storing the vaccines, managing the schedule, and documenting the distribution with the state vaccine registry. This administrative oversight was particularly helpful when the storage and maintenance of multiple COVID-19 vaccines became complicated. As one participant explained:

“It got crazy. Like you had to order your patients in such a way to where your vaccines weren’t expiring. So, we had a fridge inside of the vehicle, but it does not get to cold storage temperatures. So, it’s only maintaining. So yeah, you had to schedule your Johnson and Johnson’s first and then your Modernas and then your Pfizers…” (MIH-CP-10).

Providing vaccines as an extension of the health department was also financially beneficial for some agencies. All agencies were eligible for reimbursement for vaccine administration as part of a state-wide program. One individual explained, “ We got compensated for all those. I think we got like seventy-five dollars- seventy five to one hundred dollars for- per dose.” (MIH-CP-04). However, some agencies preferred to use the opportunity to build relationships. One individual described their motivation as “ just to help the health department.” (MIH-CP-09). For many agencies, these programs ended when the mass COVID-19 vaccine clinics ended. Some, including non-MIH-CP programs, used the existing processes and relationships as an opportunity to continue the partnerships, including “ a vaccine clinic at our school.” (Non-MIH-CP-12).

Extension of Current Services

Some MIH-CP programs also provided vaccines as an extension of their current services. Several programs offered vaccines to all existing MIH-CP patients. A primary care provider or health department was responsible for vaccine storage and documentation in these instances. Other extensions reflected the uniqueness of the MIH-CP programs. For example, one MIH-CP program operated out of a community center that hosted a weekly food bank. As demand for the food bank increased, MIH-CP personnel decided to pilot a vaccine clinic, which became the basis for mass vaccine drive-through clinics in the state:

“We tied it into the food distribution. So, people were already here, they were already in line. They would get their food, and as they drove through, we flagged the ones that would like- you know, they said, ‘yeah, we’ll do a flu shot as well.’ It was a simple- it started off with a post-it note on their windshield. And as they came through the food distribution, we’d flag them into the other part of the parking lot, and they would stay in their car, roll down their window, we would vaccinate them, and then we’d move them off just to the side to stay there for their 15 minutes to make sure that they weren’t having a reaction. Their instructions were, if you start feeling funny or ill in any way, honk your horn, turn on your flashers, we’ll be right there.” (MIH-CP-06).

Another MIH-CP program was integrated into an occupational health program and had provided vaccines to their patients since 2013. Generally, this consisted of on-site vaccine clinics, particularly for employers who mandated the vaccines. For other employers, program staff “ just made ourselves available. ” (MIH-CP-07). During the pandemic, this program expanded its vaccine services to other MIH-CP programs. For example, they regularly held clinics at Salvation Army and transitional housing centers. During these events, they started “ providing vaccines at every single one of those community events. And that was just simple walk up.” (MIH-CP-07).

Because these programs were unique, the relative benefits and challenges were also unique. Some agencies acted as independent vaccine providers, while others’ administrative structure was more similar to that of the agencies acting as outreach (i.e., purchasing, storage, and documentation were managed by a separate agency). Agencies acting as independent vaccine providers did not frame purchasing, storage, or documentation as challenging. However, these agencies had a history of vaccine administration before the COVID-19 pandemic, meaning they had built sufficient infrastructure (e.g., staff, space, and financial resources) for their day-to-day operations.

Standalone Program

Only one MIH-CP program had a standalone program focused exclusively on vaccines, which started in 2020. The goal was to provide vaccines in schools for staff and students, with a particular emphasis on vaccines required to attend school. During the pandemic, the program shifted to “ a lot more work with COVID vaccines and testing ” (MIH-CP-08). After schools began reopening, the team learned that a local hospital had started providing a traveling nurse to schools to provide vaccines, which duplicated their service. They decided to shift the focus to “ really just finding those gaps and needs.” (MIH-CP-08). For this community, that looked like:

“Let’s do what paramedicine’s meant to do, and it’s to be mobile, right, to go out and fill that fill that gap. So if we have students that are getting to that point where school is going to kick them out because they haven’t met their mandated vaccines, we’ll go out and do that. We’ll put clinics together and fill that piece….We have some vaccinations- for HPV and meningitis I believe - that we needed to- we knew that was the right age, so we connected with the school nurse there and did clinics for the [the local college] students.” (MIH-CP-08).

For this program, vaccine storage and documentation were not reported as challenges. The primary challenge was finding the right partnerships and gaps, although there were also financial challenges. Because they operated as a standalone program, they also managed purchasing the vaccines. The administrator described one related issue as, “ You have to be very strategic about it. And we run into that. You know , there are a few where we’ve had some expire because we haven’t got shots in arms and you eat that cost. ” (MIH-CP-08).

Vaccine program challenges

When discussing vaccine provision through an MIH-CP program, participants reported a range of challenges, including concerns about funding, vaccine hesitancy in their communities, and the low priority of vaccines within MIH-CP programs.

One participant described the funding issue as “ Vaccines aren’t sexy. It’s not a big money maker. It’s just- It’s one of those things that has to be done .” (MIH-CP-08). During the COVID-19 pandemic, the state had a program for reimbursing vaccines. Since that program expired, there has been no funding for vaccine distribution through MIH-CP. Without this funding, the biggest barrier for many agencies was “ really just having the money to cover the supplies and the uh cost of actually getting the money out there to do it.” (MIH-CP-01). Even if funding was available, the administrative burden can be overwhelming. One participant described their program’s decision to stop providing vaccines as:

“But there’s just too much already on a day-to-day basis to where even just that minor ask that they’re trying to ask for it’s becoming too burdensome….it would be fantastic if everyone in our organization also had a secretary, right? I mean, that would be- just someone to help. I’m talking about interns or whatever….you’re sacrificing a lot of your personal time in order to do that, because it’s just not- the reimbursement is just not there to really build up the workforce how it needs to be.” (MIH-CP-10).

The lack of established funding mechanisms was perceived by some participants as evidence that vaccination was not a high priority for the state. As one participant said, “ I’m here to serve my community. So , I don’t mind going out there and helping somebody and administering that. But if that was something the health care field thought we should do all the time , then there have to be some kind of funding mechanism for that. ” (Non-MIH-CP-16).

In discussing funding challenges, a few other participants described how the lack of MIH-CP infrastructure and state policies impeded reimbursement and billing mechanisms. A state/regional administrator explained that many state agencies oversee vaccine regulations: the “ Department of Health , because they regulate vaccines. They have reimbursed for some of it” , “ the FFSA [Family and Social Services Administration] was covering [MIH-CP vaccines] for Medicaid” , and the Department of Homeland Security all play roles in determining who is allowed to administer vaccines and in reimbursement (SRA-21). Some MIH-CP administrators believed that the lack of policies governing MIH-CP contributed to the limited reimbursement opportunities. One participant said,

“We just don’t have a standard documented [reimbursement policy] in the state of Indiana….I mean, there are other states that have it out there. I think Minnesota is a prime example, but yeah. What does that look like for the state of Indiana? And let’s get it written into policy, and it’s been talked about for the last several years, and it’s supposed to be coming up, but it’s just nature of how that works.” (MIH-CP-08).

Vaccine Hesitancy

Because most of the programs provided vaccines to individuals who requested them, vaccine hesitancy was not a primary challenge. As one participant described, “ I mean , because we’re not beating their door down and jabbing them without their permission , right? So if we’re there for a service that they’ve requested , uh , I don’t see there being any divide. Uh , I don’t see there being any issue.” (Non-MIH-CP-15). However, many of the participants described widespread vaccine hesitancy in their communities. One explained that, “ Yeah , there’s always hesitant , not because we’re doing it. The hesitancy exists because of the vaccine , the misinformation from the vaccines. Um , when the vaccines became a political issue and a political fireball to use , that created a hesitancy . ” (Non-MIH-CP-13).

Some people saw addressing vaccine hesitancy as within the scope of paramedicine. Many felt “ comfortable communicating with people” about vaccines (MIH-CP-01). One went further and said that to address vaccine hesitancy, “ I mean , what do you do? You know , you can talk to individuals. ” (MIH-CP-06). Others wanted to avoid the “ political involvement ” with vaccines (Non-MIH-CP-18). One participant further explained that “ We see them when they’re sick , whether they’re vaccinated or not with COVID or whatever…If they don’t want it , they don’t want it. We as an agency , don’t push that to outside people.” (Non-MIH-CP-13).

These differences in opinions may be related to participants’ own feelings about vaccines. In the open-ended question at the end of the post-interview survey, one participant said that,

“Combating misinformation has been required in our vaccination program. Not only with patients but with healthcare providers. More information to healthcare workers delivered in a manner they will digest such as 1-to-2-minute videos would be beneficial. So much information was given, but ultimately ignored during COVID, and I believe the delivery of the information could have been improved. Asking how do we get all our Healthcare providers speaking comfortably, confidently and competently while delivering the same talking points I believe will be critical to build public trust.” (Post-Interview Survey, anonymous).

This view was also shared by another participant who said, “ And even amongst healthcare workers , the number of them that just outright refuse for whatever reason is pretty , pretty impressive. ” (Non-MIH-CP-17). This division was evident in our post-interview survey questions about vaccine hesitancy. We asked participants how strongly they agreed or disagreed with 12 statements about vaccine attitudes, such as “Getting a COVID-19 vaccine is a good way to protect me from coronavirus disease.” and “I think COVID-19 vaccines might cause lasting health problems for me.” For every statement, at least one participant answered “Strongly Agree” and at least one answered “Strongly Disagree.” The overall mean score of the 12 items on a scale of 1–5 (with higher numbers indicating more confidence in vaccines) was 3.6 out of 5. However, the anonymous nature of the survey precluded us from connecting participants’ interview data with their survey responses.
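The exact items and scoring rules are given in Additional File 3, not here; purely as an illustration, computing a per-respondent confidence score from 12 Likert items, with negatively worded (hesitancy) items reverse-scored so that higher always means more confidence, might look like the sketch below. The item positions and responses are hypothetical:

```python
def confidence_score(responses, reverse_items):
    """Mean of Likert responses (1-5) after reverse-scoring flagged items.

    Reverse-scoring maps 1<->5 and 2<->4 via (6 - r), so every item
    points in the same direction: higher = more vaccine confidence.
    """
    adjusted = [(6 - r) if i in reverse_items else r
                for i, r in enumerate(responses)]
    return sum(adjusted) / len(adjusted)

# Hypothetical respondent: items 6-11 are the negatively worded ones
# (e.g., "...might cause lasting health problems for me").
responses = [4, 5, 4, 3, 4, 5, 2, 1, 2, 3, 2, 1]
reverse_items = set(range(6, 12))
print(round(confidence_score(responses, reverse_items), 2))  # -> 4.17
```

Averaging each respondent's score across the sample would then yield the overall mean of 3.6 reported in the text.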

Vaccines as a Low Priority for MIH-CP

A few participants with MIH-CP programs thought vaccines could be a component of their services, but the other services were more critical: “ We were very protective of our Medics because they see only chronic disease patients , right - the highest risk patients…So we didn’t- we don’t really do , we’re not high-volume vaccines comparatively to some of our other peer programs .” (MIH-CP-02). Although this view was less commonly described in the interviews, several voiced it in the post-interview survey. One said:

“We should be asking if this is the best way to utilize community paramedics. There are much more beneficial tasks (fall prevention, home modification, collection of health information in case of emergency, risk mitigation) that should be prioritized over vaccines. The vaccinations could be a portion of a holistic health picture but is relatively low priority when it comes to the numbers and severity of those impacted.” (Post-Interview Survey, anonymous).

Some participants perceived vaccines to be a low priority for MIH-CP because EMTs could provide the same service. One participant stated, “ I guess when I think community paramedicine , I think more of an advanced scope than vaccine distribution. Um , and here in the state of Indiana , EMT Basics are eligible to distribute vaccinations. ” (Non-MIH-CP-11). However, one of the state administrators clarified that EMTs can “ do influenza and COVID. We added that to their scope of practice. Anything else would have to be a paramedic for vaccination.” (SRA-21).

Vaccine program opportunities

Despite the challenges, many participants felt there were benefits to providing vaccines through MIH-CP and that their programs were successful. Many viewed vaccines as “ beneficial ” (MIH-CP-01) and believed MIH-CP could be an important part of reducing health disparities, saying, “ I think that leans into a large ability to see the patient as a whole. And certainly , vaccines are within that ability to go into the home and do and make sure that everybody has equal access. ” (Non-MIH-CP-18). Agencies that had COVID-19 programs reported that they were successful. One said that, “ we ended up doing hundreds of vaccines. I can’t remember how many , but it was a lot of them .” (MIH-CP-10). Another described the response to their services as:

“Oh, incredibly successful. You know, the whole concept was it wasn’t just the clinics that were successful. I say clinic and that’s kind of a broad term ….And so part of these clinics were us going to these individuals homes and giving them these vaccinations on site in their own homes. And that part of this was just, you know, I thought, wildly successful too. Because, you know, here we are taking care to people who otherwise wouldn’t have a means of getting there. And I think that that’s the kind of health care system we need to start moving towards in a lot of respects, not just in vaccinations.” (MIH-CP-07).

Discussion

In this qualitative study examining implementation of MIH-CP vaccination programs, participants reported a wide variety of vaccination program structures and functions. Most participants described their vaccination programs as very successful and as a way to reach people who were homebound or otherwise unable to access vaccines within their communities, suggesting these programs can be an effective means of improving access in underserved areas. The largest overall challenge reported was funding, and the lack of funding had a ripple effect across multiple functions within the organization, resulting in a lack of dedicated staff for vaccines and a perception that vaccinations were a low priority for the organization. Some participants commented on upstream causes of the lack of funding, including the absence of state-wide and federal policies governing MIH-CP, which limits reimbursement opportunities and the implementation of broader vaccination programs. The overall sentiment was that while vaccine hesitancy was not a barrier with the patient population they were serving, participants did express discomfort at the prospect of being perceived as “pushing” or advocating for vaccines.

When discussing the feasibility of implementing an MIH-CP vaccine program, the main barrier described was funding. This was described as a barrier at multiple levels, including gaining initial funding, maintaining funding, and having dedicated staffing when sustained funding is not guaranteed. This same barrier has been reported in the literature for community health workers (CHWs), with one study also conducted in Indiana specifically reporting on the difficulties maintaining personnel with uncertain funding mechanisms and a cumbersome and confusing structure to apply for Medicaid reimbursement [ 29 ]. Like our findings, the CHW study reported that inconsistent funding jeopardizes CHW programs and recommended clarifying the existing Medicaid reimbursement policies. Recently, Indiana county health departments have received an influx of public health funding from the state that increased funding for public health in the state by 1500% [ 30 ]. Some of these funds are being used to expand the geographic reach of existing MIH-CP programs. This increased funding should alleviate the barriers discussed by our participants. Future work should examine the effects of this funding on alleviating disparities in the expanded areas.

Overall, CP vaccination programs were perceived as acceptable across EMS organizations. Our participants also reported they believed community members would be supportive of receiving vaccines from a CP. However, there have been limited studies examining patient perceptions of the acceptability of CP vaccine provision, particularly in the U.S. [ 2 ]. Similar studies examining patient acceptability of the CHW model have shown overall positive perceptions and high acceptability both in the U.S. [ 31 ] and abroad [ 32 , 33 ]. Future studies should examine community perceptions of CP acceptability to determine whether this model could be implemented more broadly to address health disparities.

While CPs in our study did report feeling comfortable giving vaccines, most also expressed that they would not want to advocate for vaccines or be seen as “pushing” vaccines on their patients. Even though they did not report any personal vaccination hesitancy in the qualitative interviews, the answers on the anonymous survey did indicate a significant level of vaccination hesitancy among this group of providers. This sentiment is seen across health professionals, with one publication finding that nearly one-third of U.S. healthcare providers were hesitant about vaccinations [ 34 ]. This is not a new phenomenon: vaccine-hesitant providers existed before the COVID-19 pandemic and continue to exist after it [ 35 ]. Thus, there is a pressing need not only to educate healthcare providers to reduce vaccination hesitancy within this group, but also to give providers across the spectrum adequate training to communicate effectively with patients, so that they feel comfortable combating existing misinformation and improving vaccination uptake.

This study is among the first to examine feasibility and acceptability of implementing vaccination programs within MIH-CP programs. The findings can be used to inform implementation of other programs and to improve existing programs. However, the results should be interpreted in light of several limitations. First, the participants in this study are from a single state and the findings may be different in other geographic locations. Second, there were counties within the state that had no EMS services, and we were not able to gain perspectives of professionals working in those counties. Third, while the qualitative nature of our study allowed us to gain an in-depth understanding of the existing programs, we did not have quantitative data assessing program effectiveness and we are unable to determine if the implemented programs have had an impact on the health of the community.

This study provides important context on the feasibility and acceptability of implementing an MIH-CP vaccination program. Major barriers to implementing and maintaining these programs are lack of sustained funding and unclear policies governing the programs. While participants in our study did not describe vaccine hesitancy as a major problem in their communities, they also expressed discomfort in advocating for vaccines, should people express hesitancy. They also described vaccines as a lower priority for their agencies than other services they provide, like managing chronic diseases. However, many did describe vaccines as beneficial and an important part of reducing health disparities in their communities. Future research should conduct rigorous evaluations of MIH-CP programs to determine program effectiveness and examine patient perceptions of the acceptability of receiving a vaccine from a CP. Using CPs to deliver vaccinations to underserved communities has the potential to reduce health disparities and improve health outcomes for these communities.

Data availability

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

CHW: Community health worker

CP: Community paramedics

DHS: Department of Homeland Security

EMS: Emergency Medical Services

MIH-CP: Mobile integrated health-community paramedicine

RHC: Rural health clinics

SES: Socioeconomic Status

State/regional administrators

Gregg A, Tutek J, Leatherwood MD, Crawford W, Friend R, Crowther M, McKinney R. Systematic Review of Community Paramedicine and EMS Mobile Integrated Health Care Interventions in the United States. Popul Health Manage. 2019;22(3):213–22. https://doi.org/10.1089/pop.2018.0114 .


Spelten E, Thomas B, Van Vuuren J, Hardman R, Burns D, O’Meara P, Reynolds L. Implementing community paramedicine: a known player in a new role. A narrative review. Australasian Emerg Care. 2024;27(1):21–5. https://doi.org/10.1016/j.auec.2023.07.003 .

Nana-Sinkam P, Kraschnewski J, Sacco R, Chavez J, Fouad M, Gal T, Behar-Zusman V. Health disparities and equity in the era of COVID-19. J Clin Translational Sci. 2021;5(1):e99. https://doi.org/10.1017/cts.2021.23 .


Braswell M, Wally MK, Kempton LB, Seymour RB, Hsu JR, Karunakar M, Wohler AD. Age and socioeconomic status affect access to telemedicine at an urban level 1 trauma center. OTA International: Open Access J Orthop Trauma. 2021;4(4):e155. https://doi.org/10.1097/OI9.0000000000000155 .

Zahnd WE, Silverman AF, Self S, Hung P, Natafgi N, Adams SA, Eberth JM. The COVID-19 pandemic impact on independent and provider‐based rural health clinics’ operations and cancer prevention and screening provision in the United States. J Rural Health. 2023;39(4):765–71. https://doi.org/10.1111/jrh.12753 .


Hart C. The Effect of COVID-19 on Immunization Rates. Confessions of a Pediatric Practice Management Consultant.com. April 6, 2020. Retrieved from https://chipsblog.pcc.com/#blog

Gilkey MB, Bednarczyk RA, Gerend MA, Kornides ML, Perkins RB, Saslow D, Brewer NT. Getting human papillomavirus vaccination back on Track: protecting our National Investment in Human Papillomavirus Vaccination in the COVID-19 era. J Adolesc Health. 2020;67(5):633–4. https://doi.org/10.1016/j.jadohealth.2020.08.013 .

Shen AK, Browne S, Srivastava T, Kornides ML, Tan ASL. Trusted messengers and trusted messages: the role for community-based organizations in promoting COVID-19 and routine immunizations. Vaccine. 2023;41(12):1994–2002. https://doi.org/10.1016/j.vaccine.2023.02.045 .

Bigham BL, Kennedy SM, Drennan I, Morrison LJ. Expanding paramedic scope of practice in the community: a systematic review of the literature. Prehospital Emerg Care. 2013;17(3):361–72. https://doi.org/10.3109/10903127.2013.792890 .

Chan J, Griffith LE, Costa AP, Leyenaar MS, Agarwal G. Community paramedicine: a systematic review of program descriptions and training. CJEM. 2019;21(6):749–61. https://doi.org/10.1017/cem.2019.14 .

Bodenheimer T, Sinsky C. From Triple to Quadruple Aim: care of the patient requires care of the provider. Annals Family Med. 2014;12(6):573–6. https://doi.org/10.1370/afm.1713 .

O’Meara P. Community paramedics: a scoping review of their emergence and potential impact. Int Paramedic Pract. 2014;4(1):5–12. https://doi.org/10.12968/ippr.2014.4.1.5 .

Pang PS, Litzau M, Liao M, Herron J, Weinstein E, Weaver C, Miramonti C. Limited data to support improved outcomes after community paramedicine intervention: a systematic review. Am J Emerg Med. 2019;37(5):960–4. https://doi.org/10.1016/j.ajem.2019.02.036 .

Coffman J, Blash L. Utilization of Community Paramedics to Respond to the COVID-19 Pandemic. UCSF Health Workforce Research Center on Long-Term Care. 2022. Retrieved from https://healthworkforce.ucsf.edu/sites/healthworkforce.ucsf.edu/files/HWRC_Report_Utilization%20of%20Community%20Paramedics%20Pandemic.pdf

McAuslan C, Roll J, McAuslan M. Goals, services, and Target Patient Populations of Community Paramedicine in Rural United States: A literature review. Int J Paramedicine. 2023;(3):322–33. https://doi.org/10.56068/FSCK6274 .

O’Meara P, Wingrove G, Nolan M. Frontier and remote paramedicine practitioner models. Rural Remote Health. 2018;18(3):4550. https://doi.org/10.22605/RRH4550 .

Rural Health Information Hub (n.d.). Community paramedicine models for improving access to primary care. Retrieved February 14, 2024, from https://www.ruralhealthinfo.org/toolkits/community-paramedicine/2/improving-access-to-primary-care

Shannon B, Eaton G, Lanos C, Leyenaar M, Nolan M, Bowles K, Batt A. The development of community paramedicine; a restricted review. Health Soc Care Community. 2022;30(6). https://doi.org/10.1111/hsc.13985

Eagle County Paramedic Services. (n.d.). Current ECPS Community Paramedic Service Areas. Retrieved February 14, 2024, from https://www.eaglecountyparamedics.com/services-8590d33

McGuire-Wolfe C, Reardon T. Provision of Hepatitis a vaccine by paramedics during noncritical patient interactions: lessons learned. Infect Control Hosp Epidemiol. 2020;41(S1):s362–362. https://doi.org/10.1017/ice.2020.986 .

Blake B. State program opening up COVID-19 vaccines to homebound Hoosiers. WRTV Indianapolis . February 25, 2021. Retrieved from https://www.wrtv.com/news/coronavirus/state-program-opening-up-covid-19-vaccines-to-homebound-hoosiers

Collins L. MedStar Launches New Mobile Flu Shot Service. November 28, 2018. Retrieved from https://www.nbcdfw.com/news/local/medstar-launches-new-mobile-flu-shot-service/261248/

Indiana Department of Homeland Security. (n.d.). Mobile Integrated Health. Retrieved from https://www.in.gov/dhs/ems/mobile-integrated-health/

Indiana Department of Homeland Security. (n.d.). Indiana Department of Homeland Security: Contact Us . Retrieved from https://www.in.gov/dhs/contact-us/

Larson HJ, Jarrett C, Schulz WS, Chaudhuri M, Zhou Y, Dube E, Wilson R. Measuring vaccine hesitancy: The development of a survey tool. Vaccine. 2015;33(34):4165–75. https://doi.org/10.1016/j.vaccine.2015.04.037

Quinn SC, Jamison AM, An J, Hancock GR, Freimuth VS. Measuring vaccine hesitancy, confidence, trust and flu vaccine uptake: results of a national survey of White and African American adults. Vaccine. 2019;37(9):1168–73. https://doi.org/10.1016/j.vaccine.2019.01.033 .

Helmkamp LJ, Szilagyi PG, Zimet G, Saville AW, Gurfinkel D, Albertin C, Kempe A. A validated modification of the vaccine hesitancy scale for childhood, influenza and HPV vaccines. Vaccine. 2021;39(13):1831–9. https://doi.org/10.1016/j.vaccine.2021.02.039

Schreier M. Qualitative content analysis in practice. Los Angeles: SAGE; 2012.


Rodriguez NM, Ruiz Y, Meredith AH, Kimiecik C, Adeoye-Olatunde OA, Kimera LF, Gonzalvo JD. Indiana community health workers: challenges and opportunities for workforce development. BMC Health Serv Res. 2022;22(1):117. https://doi.org/10.1186/s12913-022-07469-6 .

State of Indiana. (2024). Health First Indiana. Retrieved February 20, 2024, from https://www.in.gov/healthfirstindiana/

Chang W, Oo M, Rojas A, Damian AJ. Patients’ perspectives on the feasibility, acceptability, and impact of a Community Health Worker Program: a qualitative study. Health Equity. 2021;5(1):160–8. https://doi.org/10.1089/heq.2020.0159 .

Muhumuza G, Mutesi C, Mutamba F, Ampuriire P, Nangai C. Acceptability and Utilization of Community Health Workers after the adoption of the Integrated Community Case Management Policy in Kabarole District in Uganda. Health Syst Policy Res. 2015;2(1):13.


Stoutenberg M, Crouch SH, McNulty LK, Kolkenbeck-Ruh A, Torres G, Gradidge PJL, Ware LJ. Acceptability and feasibility of home-based hypertension and physical activity screening by community health workers in an under-resourced community in South Africa. J. Public Health. 2023;32:1011–22. https://doi.org/10.1007/s10389-023-01873-w

Browne SK, Feemster KA, Shen AK, Green-McKenzie J, Momplaisir FM, Faig W, Kuter B J. Coronavirus disease 2019 (COVID-19) vaccine hesitancy among physicians, physician assistants, nurse practitioners, and nurses in two academic hospitals in Philadelphia. Infect Control Hosp Epidemiol. 2022;43(10):1424–32. https://doi.org/10.1017/ice.2021.410

Peterson CJ, Lee B, Nugent K. COVID-19 vaccination hesitancy among Healthcare Workers—A review. Vaccines. 2022;10(6):948. https://doi.org/10.3390/vaccines10060948 .



Acknowledgements

Not applicable.

Funding

This project was supported in part by a research grant from the Investigator-Initiated Studies Program of Merck Sharp & Dohme LLC. The opinions expressed in this paper are those of the authors and do not necessarily represent those of Merck Sharp & Dohme LLC.

Author information

Authors and affiliations.

Department of Public Health, Purdue University, 812 W. State Street, West Lafayette, IN, 47907, USA

Monica L. Kasting, Alfu Laily, Sidney J. Smith, Bukola Usidame & Laura M. Schwab-Reese

Cancer Prevention and Control Program, Indiana University Simon Comprehensive Cancer Center, Indianapolis, IN, USA

Monica L. Kasting

Wheldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA

Sathveka Sembian

Department of Communication Studies, Indiana University, Indianapolis, IN, USA

Katharine J. Head

Emeritus, Department of Pediatrics, Indiana University School of Medicine, Indianapolis, IN, USA

Gregory D. Zimet


Contributions

MK, KH, GZ, and LSR contributed to the study conception and design. Data collection was performed by MK, AL, and SS. Data analysis was performed by AL, SS and LSR. The first draft of the manuscript was written by AL, LSR, BU, and MK. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Monica L. Kasting.

Ethics declarations

Ethics approval and consent to participate

This study was reviewed and approved by the Institutional Review Board at Purdue University (IRB-2022-672) and all participants provided informed consent.

Consent for publication

Competing interests

GDZ has served as an external advisory board member for Pfizer and Moderna, and as a consultant to Merck. MLK has served as a consultant to Merck. GDZ, KJH, LMSR, and MLK have received investigator-initiated research funding from Merck administered through Indiana University and Purdue University, respectively. The other co-authors have no relevant financial or non-financial interests to disclose.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Supplementary Material 2

Supplementary Material 3

Supplementary Material 4

Supplementary Material 5

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Kasting, M.L., Laily, A., Smith, S.J. et al. Exploring the feasibility and acceptability of community paramedicine programs in achieving vaccination equity: a qualitative study. BMC Health Serv Res 24 , 1022 (2024). https://doi.org/10.1186/s12913-024-11422-0


Received : 19 April 2024

Accepted : 09 August 2024

Published : 04 September 2024

DOI : https://doi.org/10.1186/s12913-024-11422-0


Keywords

  • Health inequities
  • Vaccination coverage
  • Rural population
  • Paramedicine
  • Community paramedicine

BMC Health Services Research

ISSN: 1472-6963



Open Access

Peer-reviewed

Research Article

Barriers and facilitators for implementing the WHO Safe Childbirth Checklist (SCC) in Mozambique: A qualitative study using the Consolidated Framework for Implementation Research (CFIR)

Roles Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

* E-mail: [email protected]

Current address: Department of Social and Behavioral Sciences, Yale School of Public Health, New Haven, Connecticut, United States of America

Affiliation Department of Health Policy, Yale School of Public Health, New Haven, Connecticut, United States of America


Roles Data curation, Investigation, Methodology, Project administration, Resources, Validation

Affiliations Comité para Saúde de Moçambique, Maputo City, Mozambique, Mozambique Ministry of Health, Maputo City, Mozambique

Roles Validation, Writing – review & editing

Affiliation Department of Social and Behavioral Sciences, Yale School of Public Health, New Haven, Connecticut, United States of America

Roles Formal analysis

Roles Resources, Supervision

Affiliation Mozambique Ministry of Health, Maputo City, Mozambique

Roles Conceptualization, Supervision

Affiliation Department of Chronic Disease Epidemiology, Yale School of Public Health, New Haven, Connecticut, United States of America

Roles Data curation, Methodology

Affiliation Comité para Saúde de Moçambique, Maputo City, Mozambique

Affiliation Department of Epidemiology of Microbial Diseases, Yale School of Public Health, New Haven, Connecticut, United States of America

Roles Conceptualization, Methodology, Resources, Supervision, Validation, Writing – review & editing

Affiliation Department of Biostatistics, Yale School of Public Health, New Haven, Connecticut, United States of America

Roles Conceptualization, Data curation, Methodology, Resources, Validation, Writing – review & editing

Affiliation Department of Health Systems and Global Health, Southern Medical University, Guangzhou, Guangdong, China

  • Anqi He, 
  • Elsa Luís Kanduma, 
  • Rafael Pérez-Escamilla, 
  • Devina Buckshee, 
  • Eusébio Chaquisse, 
  • Rosa Marlene Cuco, 
  • Mayur Mahesh Desai, 
  • Danícia Munguambe, 
  • Sakina Erika Reames, 


  • Published: September 5, 2024
  • https://doi.org/10.1371/journal.pgph.0003174

Table 1

Abstract

High maternal and neonatal mortality rates persist in Mozambique, with stillbirths remaining understudied. Most maternal and neonatal deaths in the country are due to preventable and treatable childbirth-related complications that often occur in low-resource settings. The World Health Organization introduced the Safe Childbirth Checklist (SCC) in 2015 to reduce adverse birth outcomes. The SCC, a structured list of evidence-based practices, targets the main causes of maternal and neonatal deaths and stillbirths in healthcare facilities. The SCC has been tested in over 35 countries, demonstrating its ability to improve the quality of care. However, it has not been adopted in Mozambique. This study aimed to identify potential facilitators and barriers to SCC implementation from the perspective of birth attendants, clinical administrators, and decision-makers to inform future SCC implementation in Mozambique. We conducted a qualitative study involving focus group discussions with birth attendants (n = 24) and individual interviews with clinical administrators (n = 6) and decision-makers (n = 8). The Consolidated Framework for Implementation Research guided the questions used in the interviews and focus group discussions, as well as the subsequent data analysis. A deductive thematic analysis of Portuguese-to-English translated transcripts was performed. In Mozambique, most barriers to potential SCC implementation stem from the challenges within a weak health system, including underfunded maternal care, lack of infrastructure and human resources, and low provider motivation. The simplicity of the SCC and the commitment of healthcare providers to better childbirth practices, combined with their willingness to adopt the SCC, were identified as major facilitators. To improve the feasibility of SCC implementation and increase compatibility with current childbirth routines for birth attendants, the SCC should be tailored to context-specific needs. Future research should prioritize conducting pre-implementation assessments to align the SCC more effectively with local contexts and facilitate sustainable enhancements in childbirth practices.

Citation: He A, Kanduma EL, Pérez-Escamilla R, Buckshee D, Chaquisse E, Cuco RM, et al. (2024) Barriers and facilitators for implementing the WHO Safe Childbirth Checklist (SCC) in Mozambique: A qualitative study using the Consolidated Framework for Implementation Research (CFIR). PLOS Glob Public Health 4(9): e0003174. https://doi.org/10.1371/journal.pgph.0003174

Editor: Julia Robinson, PLOS: Public Library of Science, UNITED STATES OF AMERICA

Received: January 2, 2024; Accepted: August 8, 2024; Published: September 5, 2024

Copyright: © 2024 He et al. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: Our article includes selected excerpts from the qualitative data we collected and synthesized. When we sought ethical approval from the National Committee for Bioethics in Health (CNBS) in Mozambique and conducted the consent process with participants, we did not specify that the full transcripts would be made publicly available. Many of the topics discussed in interviews were of a sensitive nature, and participants may not have felt comfortable sharing their perspectives if we had asked to make the conversations public. Therefore, we feel that releasing full transcripts would not adhere to our ethics and consent practices, and would like to share further information only upon request. For those interested in accessing the interview transcripts, access requests can be directed to the National Committee for Bioethics in Health (CNBS) at [email protected] or to the study PI at [email protected] .

Funding: This work was supported by grants from the 2022 Wilbur G. Downs Fellowship at Yale University (AH, US$4,000), the 2022 Yale School of Medicine Fellowship for Medical Student Research (AH, US$2,000), and the 2022 Lindsay Fellowship for Research in Africa from the Yale MacMillan Center’s Council on African Studies (AH, US$1,000). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Wilbur G. Downs Fellowship: https://bit.ly/3aAvJCk Yale School of Medicine Fellowship: Internal application that doesn’t have a URL. Lindsay Fellowship for Research in Africa: https://bit.ly/3roHFh4 .

Competing interests: The authors have declared that no competing interests exist.

Introduction

The global efforts towards achieving the United Nations Sustainable Development Goal 3 (Ensure healthy lives and promote well-being for all at all ages) have significantly reduced pregnancy-related deaths, especially in sub-Saharan Africa (SSA) [ 1 ], one of the regions most affected by maternal and neonatal mortality in the world [ 2 ]. Guided by the Mozambican Strategic Plan for Health Sector 2014–19 and the Government’s Five-Year Plan 2020–24 [ 3 , 4 ], the Mozambican government has substantially improved maternal and child health (MCH) outcomes by expanding care services and enhancing their quality. Between 2015 and 2021, maternal mortality in Mozambique decreased by 75.8% [ 5 ], neonatal mortality by 8% [ 6 ], and stillbirth rates declined by 7.4% [ 7 ]. While Mozambique shares a similar neonatal mortality rate of 27 per 1,000 live births [ 8 ] and a stillbirth rate of 17 per 1,000 total births with overall SSA [ 9 ], it has a significantly lower maternal mortality ratio (MMR) of 127 deaths per 100,000 live births compared to the overall MMR of 536 deaths per 100,000 live births in SSA [ 10 ].

Despite these improvements, maternal and neonatal mortality ratios and stillbirth rates remain unacceptably high in Mozambique with pregnancy and childbirth complications as the leading causes: 86% of maternal deaths result from direct obstetric complications [ 11 ], and 75% of newborn deaths are caused by prematurity, childbirth-related complications, and neonatal infections [ 12 ]. Most of these deaths are preventable and treatable but continue to occur at high rates in low-resource settings [ 13 ].

To address maternal and perinatal morbidity and mortality, the World Health Organization (WHO) developed the Safe Childbirth Checklist (SCC) in 2015 (see S1 Text ) [ 13 ]. The SCC sets forth a structured list of evidence-based delivery practices which target the major causes of maternal deaths, neonatal deaths, and stillbirths in healthcare facilities, especially in lower- and middle-income countries (LMICs). The SCC streamlines the routine flow of childbirth delivery events into four pause points at which birth attendants ensure that they have completed essential birth practices: (a) on admission, (b) just before pushing (or just before a Caesarean section), (c) soon after birth, and (d) just before discharge. The SCC prompts birth attendants to implement essential practices which have been shown to improve the quality of care delivered to mothers. A birth attendant’s omission of even one of the SCC items can render the mother and her newborn vulnerable to serious and potentially lethal complications.

The SCC has been implemented and evaluated in over 35 countries, demonstrating varied levels of effectiveness in reducing childbirth complications and improving maternal and newborn health outcomes [ 14 ]. Previous studies conducted in India, Ethiopia, Tanzania, Sri Lanka, Bangladesh, Kenya, Uganda, and Namibia have demonstrated that the implementation of the SCC contributed to the overall improvement of the quality of care for mothers and newborns [ 15 – 21 ]. Key findings indicate that SCC adoption leads to increased birth attendant adherence to essential birth practices, improved inventory management for essential supplies, facilitated clinical decision-making, enhanced communication and teamwork among providers, and better management of complications. Moreover, research conducted across various settings has highlighted the significant impact of the SCC in reducing perinatal mortality and stillbirths. In Namibia, Kenya, Uganda, and Rajasthan, India, the implementation of the SCC was associated with decreased perinatal mortality, including facility-based stillbirths, very early neonatal deaths, and neonatal mortality among low-birthweight and preterm infants [ 18 , 21 , 22 ]. Furthermore, a post-hoc analysis from the BetterBirth trial in Uttar Pradesh, India, revealed significantly lower odds of perinatal and early neonatal mortality with each additional SCC practice performed [ 15 ].

At least 11 countries in Africa have adopted and adapted the SCC: Rwanda, Ethiopia, Burkina Faso, Guinea, Côte d’Ivoire, Mali, Nigeria, Tanzania, Uganda, Kenya, and Namibia [ 14 , 16 – 18 , 23 – 25 ]. The experiences in these countries have provided fresh and valuable insights into local adaptations, facilitators, and barriers to successful implementation of the SCC [ 26 , 27 ]. The primary facilitators of SCC implementation were characteristics inherent to the checklist itself, including its ease of completion and comprehension, and its effectiveness as a job aid for essential practices [ 14 ]. Additional enabling factors identified included leadership commitment, provider motivation, and comprehensive training and supervision regarding SCC usage [ 14 , 23 ]. Barriers to SCC implementation frequently related to a shortage of clinical staff and essential birth and checklist supplies, a lack of professional training on the SCC, perceptions of increased workload due to the SCC usage, and challenges that often coincided with delivering quality maternal care [ 14 ]. Therefore, as research across multiple regions has underscored, adapting the SCC to the local context is crucial to align it with local guidelines and for its adoption by healthcare professionals. For example, in Burkina Faso and Côte d’Ivoire, health providers suggested integrating the SCC with existing tools like the partograph and displaying it in maternity wards as a reminder of critical birth practices [ 23 ]. In Kenya and Uganda, local modifications aimed at enhancing preterm birth outcomes included integrating a triage pause for initial assessments, focusing on assessing gestational age and managing preterm labor, and adjusting the SCC to better align with national care standards [ 25 ].

Despite its strong potential to improve maternal and newborn health outcomes, the SCC has not been adopted in Mozambique, one of the poorest countries in the world, whose healthcare system faces major infrastructural constraints and could therefore benefit substantially from SCC implementation. Advancing improvements in lowering maternal and neonatal mortality, along with enhancing the overall health of the population, are key strategic aims outlined in the Mozambique Government’s 2020–2024 Five-Year Plan [ 4 ]. These objectives are also central to the UNICEF-Mozambique 2022–2026 Strategic Plan and key to the UNDP-Mozambique collaboration goals [ 28 , 29 ]. Although various national guidelines specific to certain procedures and complications during childbirth exist, they are not systematically integrated as in the SCC. Moreover, little is known about current childbirth practices in Mozambique and the feasibility and acceptability of adopting the SCC in local healthcare facilities. This formative study aims to identify facilitators and barriers to potential implementation of the SCC in Mozambique, provide insights into current childbirth practices and infrastructure in the country, and guide the Mozambique Ministry of Health (MoH)’s decisions on SCC adoption and adaptation to improve MCH outcomes nationwide.

Study setting

In Mozambique, the public health system is organized and administered at the national, provincial, and district levels. This structure includes four levels of health facilities, each with distinct roles and capacities. Maternity care is similarly organized within this structure [ 30 ].

Primary-level health facilities, designated as health centers, serve as the primary point of contact for the population. They provide primary health care and are classified as urban or rural based on their location; some have only the minimal capacity to perform vaginal deliveries, while others cannot perform deliveries at all [ 31 ]. Secondary-level hospitals, divided into district, rural, and general hospitals, provide referral care, emergency services, and surgeries. They provide more comprehensive maternity services such as assisted deliveries and basic obstetric surgeries, but their capacity to perform C-sections varies by hospital. Tertiary and quaternary-level hospitals, which include provincial, central, and referral hospitals, provide specialized care and serve as referral centers with the capacity to offer advanced and comprehensive obstetric and neonatal care, including emergency C-sections for complicated pregnancies and births.

The study was conducted in Maputo city and Manhiça district in Maputo province, Mozambique. Maputo city is the capital and the largest city of Mozambique with a population of 1.09 million in 2017 [ 32 ]. It is located at the southern end of the country, close to Mozambique’s border with Eswatini and South Africa. The city is divided into 7 administrative divisions, spanning a land area of 347.69 square kilometers. Compared to the rest of the country, Maputo City is notably better equipped with health personnel and facilities. It has 37 health facilities, including 1 quaternary central hospital, 3 secondary general hospitals, and 33 primary health centers—27 urban and 6 rural [ 33 ]. Manhiça District is a rural district in Maputo Province, covering 2,373 square kilometers and located 80 kilometers north of Maputo City, with a population of approximately 200,000 [ 34 , 35 ]. Manhiça district has 21 primary rural health centers and health posts and 2 secondary rural, district referral hospitals [ 33 ].

Our study sites, Chamanculo General Hospital in Maputo City and Xinavane Rural Hospital in Manhiça District, are both secondary hospitals offering comprehensive maternity care. While Chamanculo General Hospital does not offer C-section services, Xinavane Rural Hospital does. The maternity wards at both hospitals are divided into three areas: admission, delivery, and postpartum [ 31 ]. These areas correspond to the four pause points that the SCC uses to streamline the routine flow of childbirth delivery care: on admission, just before pushing (or C-section), soon after birth, and just before discharge. The birth attendants who participated in our study are essentially MCH nurses with midwifery skills, working 12-hour shifts [ 30 ]. They also rotate across various MCH departments within the hospitals, demonstrating proficiency in family planning, prenatal, intrapartum, and postnatal care, as well as gynecological services. Both hospitals employ a mix of MCH nurses at different levels, categorized by the extent of their education and training: elementary (equivalent to Grade 7), basic (Grade 10), mid-level (Grade 12), and high-level (college-educated) nurses. MCH nurses with higher levels of education are equipped to manage more complex obstetric and gynecological cases, with those at the highest level being qualified to perform C-sections.

The information system in maternity care primarily consists of patient registration forms [ 36 ]. MCH nurses in maternity wards complete comprehensive registration forms for each mother, documenting clinical conditions and information from admission to discharge. These forms capture basic patient information, such as name, age, and national ID number, and clinical information, including gestational age, childbirth procedure and outcome, direct and indirect obstetric morbidity, and newborn conditions. Maternity care also incorporates data collection systems from various specific programs, such as the HIV and malaria programs [ 30 ]. From admission to the postpartum period, MCH nurses log and monitor the progress of pregnancy, childbirth, and postpartum conditions for mothers and newborns, as well as their medications.

Different guidelines are employed in different parts of the maternity ward. In general, the admission room personnel have access to guidelines for managing hypertension in pregnancy and sexually transmitted infections in pregnant women such as HIV and syphilis. The delivery room is equipped with guidelines for neonatal resuscitation. The postpartum services have guidelines for managing postpartum hypertension, postpartum infection management, and neonatal sepsis. All rooms follow guidelines for managing maternal bleeding before, during, and after childbirth. The current guidelines are specific to certain procedures or complications but are not integrated in the way the SCC is. There is also no standardized monitoring or reporting checklist currently used in the maternity wards.

The hospitals were selected as study sites for focus group discussions (FGDs) and interviews with providers taking into account the distance to the researchers’ office located in Maputo City, their capabilities to perform comprehensive maternity care, and their distinct rural and urban contexts. The inclusion of a diversity of hospitals offered a broad perspective on the varying conditions within Mozambican health facilities.

Study design

To ensure a comprehensive perspective, this qualitative study consists of three types of participants: birth attendants, clinical administrators, and decision-makers. The study conducted four FGDs with twenty-four birth attendants and six individual interviews with clinical administrators from Xinavane Rural Hospital in Manhiça District, Maputo Province, and Chamanculo General Hospital in Maputo City, as well as eight individual interviews with decision-makers at the MoH, the Departments of Public Health for Maputo city and Maputo province, and the Association of Midwives in Mozambique. The interviews and FGDs were guided by the Consolidated Framework for Implementation Research (CFIR) and covered four of five CFIR domains: (a) individual characteristics, (b) intervention (SCC) characteristics, and the facility’s (c) outer settings and (d) inner settings [ 37 ].

Data collection

Participants for this study were recruited using purposive sampling, aiming to include individuals from diverse backgrounds who met the inclusion criteria, were highly knowledgeable and experienced in following and implementing policies and clinical guidelines related to childbirth practices, and could offer valuable insights relevant to our research questions. Recruitment and data collection took place from September 16, 2022, to February 10, 2023. The FGDs with birth attendants and the interviews with clinical administrators were conducted in secure private offices at the two hospitals. One interview with a decision-maker was conducted via Zoom, while the other decision-maker interviews took place either at the secure office of the Comité para a Saúde de Moçambique (Mozambique’s Health Committee) in Maputo or at the interviewees’ private offices. The clinical administrators interviewed at each clinical site included those managing MCH care; they also helped the study identify birth attendants for the FGDs. Each focus group comprised five to six birth attendants who met the inclusion criteria: being 18 years or older, having at least one year of experience in maternity care, availability and willingness to participate, fluency in Portuguese, and the capacity to give consent. Similarly, clinical administrators and decision-makers were eligible if they had at least one year of experience managing or monitoring maternity services or MCH programs, were 18 years or older, fluent in Portuguese, available and willing to participate, and capable of giving informed consent. Decision-makers were identified through the networks of our local collaborators with the Comité para a Saúde de Moçambique and the Mozambique MoH. All participants were approached by a female researcher (AH, DM, or EK), who obtained written consent for participation in the interviews or FGDs.

To assess the impact of various factors on SCC implementation, we designed the FGD and interview question guides based on the CFIR. The questions were designed to assess current childbirth practices and infrastructure as well as the feasibility of implementing the SCC to improve maternal and perinatal outcomes in Mozambique. The interview and FGD guides were tailored to the roles and responsibilities of the participants (see S2 Text ). We created a pilot FGD guide and tested it to ensure that study participants could adequately contribute to a rich discussion (see S1 Table ). The pilot FGD was conducted at Malhangalene Centro De Saúde (Health Center at Malhangalene) with seven birth attendants from five different health centers who did not work at the two selected clinical sites where formal data collection was to be conducted. Officers of the Association of Midwives designated the birth attendants who participated in the pilot FGD. Each of them had rich prenatal-to-postnatal-care work experience from their clinical rotations in the maternity wards. We adjusted the structure and wording of the questions as needed and enhanced the moderating skills of the researchers during the pilot [ 38 ].

Prior to data collection, all participants were given hard copies of the WHO SCC at least one day before the interview or FGD to familiarize themselves with its contents. After the interview or FGD, the SCC copies were collected by the researchers to avoid any unintended consequences resulting from the use of the SCC without proper instruction and support. The overall purpose of the SCC and each of its check items were explained to study participants before the FGD and interview. Participants were given the opportunity before and after the FGD and interview to ask questions about the SCC and the study, and those questions were addressed by the researchers. This was done to ensure all participants comprehended the content and intended use of the SCC. Participants received compensation for their participation.

Each interview and FGD lasted approximately 60 minutes, and each was scheduled at the convenience of participants, most often during their lunch breaks. All interviews and FGDs were conducted in Portuguese. A researcher (EK or DM) went through the SCC and the consent form verbatim in Portuguese before each interview or FGD and asked if there were any questions related to the study, the SCC, or the consent before the session started. Any questions raised by the participants were addressed accordingly. Participants signed written consent forms before interviews and FGDs. To assure their anonymity, participants were identified with a participant ID instead of their names during data collection and analysis. The interviews with clinical administrators and the FGDs with birth attendants were conducted by two qualitative researchers, one of whom (EK) has a Doctor of Medicine degree from the School of Medicine at Eduardo Mondlane University in Mozambique and a Master of Public Health degree from Southern Medical University in China. EK had been working as a physician, district health director, and researcher at the MoH since 2014, and she was also responsible for identifying and contacting the hospitals, clinical administrators, and decision-makers. The other researcher (DM) is a local research assistant with a bachelor’s degree in social science from Eduardo Mondlane University in Mozambique and a qualitative researcher by training. The decision-maker interviews were conducted by EK and AH. AH has a Master of Public Health in Health Policy with formal qualitative study training from Yale School of Public Health in the U.S. The researchers worked in pairs during the interviews and FGDs: one served as the moderator, while the other took comprehensive field notes and was responsible for timekeeping.
The field notes captured the behaviors and nonverbal cues of participants and, as complementary information to facilitate later data coding and analysis, described the physical spaces in which the interviews and FGDs were conducted [ 39 ].

All interviews and FGDs were recorded for later transcription, translation, and data analysis. Within 24 hours after each interview and FGD, the researchers also completed a summary report for each data collection session, including observations, personal reflections, memos, and key takeaways.

The hard copies of the research materials, such as field notes and consent forms, are stored in a locked cabinet in a locked office at Comité para a Saúde de Moçambique, and the electronic data, such as audio recordings and transcripts, were stored in Box, a secure password-protected database authorized by Yale University.

Data analysis

The audio recordings of the interviews and FGDs were uploaded to HappyScribe, a password-protected online software, and then transcribed and translated from Portuguese to English. To ensure their accuracy and integrity, the transcriptions and translations were then carefully reviewed by a bilingual researcher, EK.

The data analysis was performed by a team of three female researchers, AH, DB, and SR, from Yale University with formal qualitative study training. The data from the FGDs and the clinical administrator interviews were coded and analyzed by AH, DB, and SR, and the data from the decision-maker interviews were coded and analyzed by AH and SR. Information in the transcripts that might reveal participants’ identities was removed. The data analysis employed a rigorous deductive thematic method, enabling a thorough and nuanced analysis of the data [ 40 ]. The coding followed a deductive approach guided by the pre-established CFIR codebook [ 37 ]. During the development of the codebook, exemplar quotes, enriched code definitions and descriptions, and detailed inclusion and exclusion criteria were added to the initial CFIR codebook in Microsoft Excel to provide clear guidance for the coding process and contextualize the codebook for our study.

After developing the codebook, each member of the data analysis team independently coded each transcript using the comment feature in Microsoft Word. Throughout the coding process, the data analysis team met regularly to review and discuss the coded segments line by line and resolve any discrepancies through highly participatory group discussions to achieve consensus and ensure coding consistency. When the coding in Microsoft Word was completed, the transcripts were imported into NVivo 14, a qualitative analysis software, and recoded to match the coding in Word. NVivo was used to enable the retrieval of the coded segments and facilitate the systematic analysis of the codes. The data analysis team also incorporated feedback from the interview and FGD moderators (EK and DM) to ensure the interpretations were aligned with the data. Furthermore, the detailed narrative for each code and findings from the coding process were organized according to each of the CFIR domains. Finally, the data analysis team identified common themes across the findings categorized by the CFIR domains. These themes were then categorized into SCC implementation facilitators and barriers.

Ethical statement

This study was approved prior to the start of data collection by the Human Subjects IRB committee at Yale University in the United States in May 2022 (IRB protocol #2000032748) and the Comité Nacional de Bioética para a Saúde in Mozambique (National Committee for Bioethics in Health, CNBS) in September 2022 (IRB protocol #00002657). Prior to collecting data, participants were provided with a consent form. EK went through the consent form in a thorough and word-for-word manner, explaining all aspects of the study, including the participants’ right to choose whether to participate, their ability to withdraw from the study at any point, the procedures in place for safeguarding the confidentiality and anonymity of their information, and the general contents of the FGD and interview. Participants were required to sign the consent forms if they wanted to participate, with one copy provided to them for their own records and another kept as part of the study documentation at the Maputo office of Comité para a Saúde de Moçambique.

Inclusivity in global research

Additional information regarding the ethical, cultural, and scientific considerations specific to inclusivity in global research for this study is included in the Supporting Information ( S1 Checklist ).

Conceptual framework

The CFIR examines the implementation environment of an intervention, and how to facilitate its effective implementation, through the lens of five domains: (a) intervention characteristics, (b) outer setting, (c) inner setting, (d) individuals’ characteristics, and (e) implementation process [ 37 ]. As this is a formative study to assess the feasibility of SCC implementation, we excluded the implementation process domain because the SCC has not yet been implemented. Among the four remaining domains, we identified eleven constructs relevant to our study for analyzing the qualitative data: (a) intervention characteristics (complexity, adaptability, relative advantage, and innovation cost), (b) outer setting (policies and laws, partnerships and connections, and societal pressure), (c) inner setting (compatibility, available resources, and culture), and (d) characteristics of individuals (knowledge and beliefs about the intervention).

Results

Twenty-four birth attendants participated in the FGDs, and six clinical administrators and eight decision-makers took part in individual interviews. As no new information emerged after the four FGDs and fourteen individual interviews, we considered information saturation to have been reached. The duration of the FGDs ranged from 39 to 62 minutes, and the interviews ranged from 26 to 70 minutes. Detailed sociodemographic characteristics of the participants are presented in Table 1 .

Table 1. https://doi.org/10.1371/journal.pgph.0003174.t001

All codes identified from the transcripts were mapped to CFIR constructs. Of the 48 CFIR constructs assessed, eleven were determined to be relevant barriers and/or facilitators to implementing the SCC. Specifically, one CFIR construct addressed facilitators (complexity), and five CFIR constructs addressed barriers (adaptability, relative advantage, innovation cost, available resources, and societal pressure). Five other CFIR constructs addressed both facilitators and barriers (policies and laws, partnerships and connections, compatibility, culture, and knowledge and beliefs about the intervention). The study findings are organized by theme below, and Table 2 links the barriers and facilitators of SCC implementation to specific CFIR constructs.

Table 2. https://doi.org/10.1371/journal.pgph.0003174.t002

Facilitators

The SCC is simple and easy to understand.

When participants were asked about the complexity of the SCC, they agreed that the content and format of the checklist were easy to understand.

“I don’t think it is complicated at all. It just has basic aspects of everyday life in a maternity ward, or the day-to-day life of a midwife, or a nurse, so I don’t think it’s complicated. It’s direct, it has very concrete aspects.” (Decision-maker 5)

The SCC aligned with the national maternal and child health agenda.

The participants stated that the current MoH guidelines and efforts were consistent with and reflected in the SCC objectives, indicating that SCC implementation aligned with the national maternal and child health agenda.

“In general, one (SCC) is applying what are practices according to the MoH guideline, which is humanized childbirth or humanization of childbirth… All nurses have this orientation.” (Clinical Administrator 5)

Participants mentioned that strong support for and commitment to safe childbirth from both the local community and public health leaders were crucial, as these can significantly contribute to spreading information on MCH and motivating clinics and birth attendants to engage in the SCC implementation effort.

“We, in the community, have the community leaders, maternal health nursing component, and the traditional midwives. They help information dissemination of the maternal and child health package… We will be able to involve them, to know that there is a checklist… so that they can help the dissemination of information.” (Decision-maker 4)

Furthermore, decision-makers emphasized that the MoH undertook regular supervision visits, offered technical support, and conducted in-service training at clinics. These initiatives are designed to ensure guideline compliance and improve service quality in maternity wards. Such efforts aligned with the objectives of the SCC and may aid in its effective implementation.

“We do the monitoring of the activities, supervision… both scheduled supervisions and surprise visits. We make surprise visits to maternity hospitals, mainly to check if in fact they are doing their job well… We also reinforce it with some in-service training. When we get there, in these supervisions, we also explain: ‘Look, you are not doing it right here.’ We correct what is good practice and follow up on the needs.” (Decision-maker 1)

Participants had positive beliefs about the SCC.

During the interviews and FGDs, the participants displayed a strong understanding, wealth of knowledge, and a high level of professionalism and dedication to improving the quality of childbirth practices. They were open to updating their knowledge using the SCC and acknowledged the importance of continuously learning and keeping themselves informed.

“Science is dynamic. There are things that are being abolished and things that are being introduced. So, I try to say you should implement this study, while one thing or another could be abolished, so as we are here the council, we are here today to learn…. Let’s give progress to this study.” (Birth Attendant 8)

Moreover, participants expressed confidence that the implementation of SCC would lead to positive changes in current practices and result in improved quality of maternity services.

“I think that the list has a format that goes according to what we are talking about, because what we need is a standard procedure for the teams. Then, for the complications, we will have more trained people, but we also need team with a minimum standard procedure, and the list is simple. It is a list that reduces the time of work or procedure of the colleague… I find the list simple and sufficient.” (Decision-maker 7)

Barriers

The SCC was viewed as redundant.

Participants expressed that the SCC did not offer a significant advantage over their current work routine, viewing it as an additional form to fill out and adding to the workload of birth attendants.

“It would be one more instrument. It would be a repetition of what we already do… All these flowcharts that we have already exist. And that is exactly what we do. And it looks like we don’t read it, because this, because that, but no. We already do that … We end up having less time to do our activities, to exercise the technique. We stay longer, we have [to] read and write, which doesn’t help us much either. It is very tiring.” (Birth Attendant 3)

“I don’t think that the Checklist, by itself, will meet the needs… what will happen is this list will be one more paper in the maternity ward… The form alone is not going to change anything. It is one more piece of paper, it is going to be one more tool. As I said, the current guideline already recommends many of these questions, and they are in the form. What do we do? We fill it out, fill it out, fill it out.” (Decision-maker 6)

The SCC might be incompatible with the current workflows.

According to participants, the concepts and practices outlined in the SCC were mostly consistent with current practices in the maternity unit. Filling out the SCC itself, however, is likely to impede existing workflows due to human resource shortages and time constraints. Participants expressed concerns about how to allocate time between other clinical activities and filling out the SCC, as there may be competing priorities.

“Because of the overload of work, one or another thing ends up slipping away… We have gynecology, maternity, c-sections, pathological pregnancy, gynecology, admission, delivery room, it’s for one nurse… So, everything that happens there ends up exhausting your knowledge, and your strength, you don’t know what to do…. It’s not because she is unwelcome [the SCC], she is welcome, yes. But treating the person himself, the work, it becomes difficult to follow the form.” (Birth Attendant 9)

Moreover, participants expressed concern about the workload related to paperwork. They already had a significant amount of paperwork to complete, and the addition of the SCC might increase their workload further. Some participants suggested simplifying the current paperwork rather than introducing a new instrument.

“It is complicated because we already have many instruments. If the list doesn’t come to remove anything, it comes to add, it’s another job… Now, if the list comes and reduces the work for us, and summarizes a lot of things, it is welcome. If it is to add to it, it will not make us comfortable.” (Clinical Administrator 4)

The SCC needs to be better aligned with the context.

Although the SCC was viewed as simple and easy to understand, participants voiced the need to adapt it to the local context. Participants proposed multiple adaptations to integrate the SCC into their work routines and contexts, enhancing its implementation feasibility. These adaptations included transforming the SCC into a pocketbook rather than adding it to existing paperwork, displaying it as a wall poster, incorporating a section explaining incomplete practices, using it to evaluate supply availability, and merging it with existing tools like the patient clinical registration form, which includes medical history and diagnoses.

“My suggestion would be that it should be in a format like these HIV flowcharts, for example. You don’t make us waste time even opening a document and looking for how to do it. Then nail it to the wall…the person looks, sees the explanations and does it. It is easier to do than in the form of a list.” (Clinical Administrator 2)

“Or maybe one could think of a decentralized instrument, which could perhaps feed into another instrument already at the central level… If we had an instrument that helps us to check what is the quality of the work of our maternity ward… And maybe to send the information to the central level as well, to see what is happening, what is failing, which is to take the proper precautions.” (Clinical Administrator 5)

Inadequate external support may hinder SCC implementation.

Participants emphasized that external financial support from the MoH for maternal health care was inadequate and that assistance from funders and partners was distributed unevenly across the country, often focused on specific diseases in a vertical manner. This could impede the adoption and implementation of the SCC, given that maternal health care is currently underfunded and not prioritized.

“The financial allocation for the reduction of maternal and child mortality in a direct way is minimal, is reduced, and is ineffective. We have a maternal and child health plan in a year that cannot meet 50% of the needs… The use of external funds, which is far from the Paris Declaration, we don’t have much flexibility of funds to decide where they are allocated. The care area is underfunded, and it is the area that we should improve. We have pillars that are necessary [to be improved, including] educating how the delivery has to be, pregnancy care, the significance of various stages of pregnancy, labor expectations, pain management, and practices." (Decision-maker 7)

With the specific pillars the decision-maker highlighted also being key elements in the SCC, the current lack of financial support for maternal and child health care could signal potential challenges in implementing the SCC.

Resource shortfalls may impede SCC implementation.

A major barrier to the SCC implementation is the limited availability of resources, including human resources, materials, physical space, and professional training. Despite the perceived benefits of SCC, the severe shortage of resources makes it challenging to successfully implement the SCC in clinical settings.

“What we need in Mozambique, in fact, is more equipped rooms, more spacious rooms, because our infrastructure sometimes does not create these types of conditions for a well-designed guideline. The strategies are well designed, but our conditions don’t help us, they don’t favor us having this model birth (SCC) that we are talking about, which would be better.” (Decision-maker 3)

Notably, participants expressed concern that implementing the SCC would further increase their already overwhelming workload, as there is typically only one nurse per shift in the maternity unit, responsible for caring for both mothers and newborns. The serious staff shortage could not accommodate the addition of another instrument that might increase provider burnout. Allocating scarce time to complete the SCC would further increase staff workload.

“The implementation of the list is not bad. But as we were just saying… the lack of human resources, I think that this list will be more of an overload, an extra work, where the staff at that moment are few… But the list is not bad. It is very good, it helps. It is the moment when someone can forget something, looking here, sees that here is something that can be done or should be done. But looking at the work you already have in the maternity ward, it’s a lot. There are many documents to be filled out. One more document, it’s more overload.” (Birth Attendant 18)

“We would feel overwhelmed. [The nurse] couldn’t fill out… and she is going to be overload. How is it? She will even ask herself, ‘but can’t you see? Because I am all alone.’” (Birth Attendant 17)

The scarcity of essential birth supplies in the maternity ward posed another significant barrier to implementing the SCC and achieving its purpose of enhancing the quality of childbirth practices.

“For the maternity case, we are missing too many antihypertensives. Just talk about methyldopa, hydralazine, dihydralazine… and this has made our work very difficult.” (Clinical Administrator 5)

“There are no gloves. How will it go well? How will you take care of yourself? How will you comply with what the document [SCC] asks for?” (Clinical Administrator 1)

Meanwhile, the participants highlighted the importance of professional training for successful SCC implementation and requested refresher training to improve their knowledge and skills.

“I think that if the people who are [going] to use the checklist are not very well trained, they can have a complication because it [the SCC] can be filled out not in the same standard way. The training of the people who are going to use the form itself needs to be standardized.” (Clinical Administrator 4)

Furthermore, the cost of the SCC implementation poses another challenge. The health facilities in Mozambique have very limited resources, and the costs of reproducing, distributing, storing, and completing the SCC, including expenses such as printers, paper, and storage space, could add an additional financial burden on the clinics.

“The list is produced, and then it is the health unit’s responsibility to reproduce it. And that doesn’t go very far, because we will see that the health unit doesn’t have the capacity to reproduce the form itself…It is already difficult for the health unit to continue because they are not all able to multiply their own records.” (Clinical Administrator 4)

Low motivation and societal pressures deter providers from adopting SCC.

Participants indicated that their existing workload, particularly with paperwork and completing instruments, was already overwhelming. They expressed concerns about their ability to properly fill out additional forms, suggesting that introducing a new instrument could be daunting.

“We get blinded in front of a document. Many times, we get scared just by looking at the document. Do this, we have to fill it out like this. Sometimes we fill it out, but not properly as it should be.” (Birth Attendant 10)

“Whenever we get a new instrument, there is resistance in change, because at some point, the nurses have to give their reasons because they have too many instruments to be able to fill out, to be able to check. When more than one instrument arrives, they get a little tired, a little angry, because we have many books to fill in.” (Decision-maker 1)

Additionally, some participants expressed concerns that failure to fill out the SCC could result in penalties or other negative consequences.

“It would be possible [to implement the SCC]. It would help some, but it could also penalize us for things that are not our level of competence to resolve, such as the issue of lack of medicines, lack of running water, at some point in the anesthesia machine, a shortage of operating room staff.” (Clinical Administrator 5)

Moreover, many birth attendants reported experiencing stigma and pressure from mass media, local community, and patients, which further limited their motivation to adopt another instrument like SCC and improve the quality of the maternal and child health services.

The participants expressed that the social recognition of birth attendants was low, and this lack of recognition was a demoralizing factor in their work. Despite the birth attendants’ strong desire to improve their work and adopt SCC, they felt that their efforts were not valued or recognized by the community.

“Because if we look at the media, they are against us. Just for someone to be born outside, we are already on television. But if I attend childbirth outside without gloves to help, I won’t be on television. But if someone is born outside, even five meters from the hospital, we are going to be smeared with all of this. ‘Chamanculo is negligent, there was no emergency room.’ So, motivation factor.” (Birth Attendant 15)

There were instances in which companions or patients complained about the practices of the birth attendants, resulting in negative comments about the birth attendants spreading in the community and further diminishing their motivation to work.

“Even being a woman, a companion [of the delivery mother] doesn’t understand what happens inside the maternity ward. Even the techniques that the nurse will perform, she thinks you’re mistreating that person… She starts talking bad about us in the community.” (Birth Attendant 6)

Main findings and interpretation

This formative qualitative study sought to identify potential facilitators and barriers to implementing the SCC in the context of the childbirth practices and conditions in Mozambique at the time this study was conducted. The study explored the feasibility of SCC implementation by assessing the initial knowledge and attitudes of a diverse group of stakeholders from various professional backgrounds.

The barriers and facilitators identified in our study agree with most of the findings from the countries where the SCC had been tested before [14, 21, 26, 27, 41]. The common facilitators of SCC use were related to the checklist itself, as it is easy to complete and acts as a useful reminder of essential childbirth practices that align with the national and local guidelines [14]. The major barriers were linked to local challenges, including insufficient material and human resources, inadequate training, perceptions of increased workload associated with SCC use, lack of staff motivation to use the SCC, and underfunded MCH care [14, 21, 27, 41, 42].

In Mozambique, due primarily to the structural challenges of the overall health system, the implementation of the SCC faces multiple obstacles. Support for MCH care from the MoH and external funders was found to be inadequate and not given priority, with resource distribution often focused on specific diseases through a vertical approach. This lack of funding for maternal care might further limit the resources available for adopting the SCC and hinder the implementation of the quality, evidence-based delivery practices it requires. Clinics in our study commonly faced shortages of essential medicines, equipment, and materials needed for critical childbirth practices. Additionally, the costs associated with reproducing, distributing, storing, and completing the SCC imposed an extra financial burden on the already under-resourced maternity services in the clinics. Moreover, given that there was often only one birth attendant per shift in the maternity ward, implementing and completing the SCC may have competed with other clinical activities for the attendant’s limited time, resources, and attention. As a result, birth attendants viewed the SCC as redundant, feeling it added to their workload without offering significant advantages over their current practices. They also found the prospect of introducing another instrument daunting, given the already substantial paperwork in the clinics. Additionally, there was concern that failing to complete the SCC could lead to penalties.

Meanwhile, mothers’ mistrust and perceived poor quality of care led to blame directed at birth attendants, which may have contributed to their low motivation. Negative comments from the community further undermined the birth attendants’ social recognition and increased the societal pressure on them. Our study participants highlighted poor morale, weak motivation, and low recognition among the primary reasons for their reluctance to adopt another protocol like the SCC, in the context of their already overwhelming workload. These barriers need to be addressed to facilitate SCC implementation in Mozambique.

We recognized that implementing the SCC in our study context involves many interacting factors that potentially reinforce each other within a dynamic system. Therefore, we hypothesized that there were negative feedback loops hindering the health system’s ability to implement the SCC. Informed by our findings, we further hypothesize that these feedback loops were likely to involve (a) a weak MCH care system, (b) limited availability of resources, (c) a heavy birth attendant workload, and (d) low motivation among birth attendants (Fig 1). Our hypotheses are consistent with findings from a previous study conducted in Nampula, Mozambique, seeking to understand how to improve breastfeeding counseling through the health system [43], highlighting the fact that our findings have implications beyond just the SCC.

Fig 1. https://doi.org/10.1371/journal.pgph.0003174.g001

Despite these obstacles, many birth attendants remained committed to improving the quality of childbirth practices and adopting the SCC. They recognized that the SCC aligned with the national MCH goals and with their need to continue educating themselves. Birth attendants did not express a need for pay-for-performance incentives for filling out the SCC but suggested that allocating more financial resources towards creating better working conditions and strengthening the healthcare system would be helpful. Moreover, participants suggested modifying the format of the SCC, such as displaying it as a poster in the maternity ward or integrating it into existing tools like the patient clinical registration form. This would help contextualize the SCC’s use, better integrate it into health providers’ work routines, and facilitate its implementation. However, altering the use and format of the SCC might change its original purpose and affect its efficacy.

Limitations and strengths.

This study has several limitations. Firstly, since participants lacked real-life experience in SCC implementation, the barriers and facilitators identified were not directly informed by their experience of using the SCC. Moreover, without implementing the SCC, this formative research study was not able to assess the actual implementation process domain of the CFIR or identify potentially effective activities for SCC implementation [37]. However, we provided the SCC to participants at least one day before the interviews and FGDs. While we could not confirm whether participants had read the SCC beforehand, we took steps to ensure their understanding of its purpose and checklist items: before each FGD and interview, the purpose of the SCC and each of its checklist items were explained to participants, and researchers addressed participants’ questions before, during, and after the discussions to ensure that all participants understood the content and intended use of the SCC.

Moreover, it is important to highlight that participants in our study were deliberately chosen for their extensive knowledge and experience in adhering to and implementing various clinical guidelines related to childbirth practices and policies. As frontline health workers and policymakers, they were highly familiar with the objective and integrated content of the SCC. During the FGDs and interviews, they indeed indicated that although the SCC might present a new format as a clinical checklist, its content was familiar to them. Additionally, based on their experience, they were able to identify specific items in the SCC that they felt would be challenging and provided substantive feedback on these items during the discussions.

Secondly, we sampled one rural and one urban hospital in Maputo Province and Maputo City, aiming to represent varying conditions in health facilities. Nonetheless, our sample may not fully capture the reality across Mozambique; given the substantial differences in health care quality and access across the country, the external validity of our findings must be interpreted with caution. Moving forward, future SCC studies in Mozambique should include various levels and types of health facilities, including primary health centers, from different regions of the country.

Thirdly, while we hypothesized the presence of several negative feedback loops, involving barriers from the system level to the individual level, that may make SCC implementation challenging, we acknowledge that this hypothesis needs to be confirmed through further research, as causal relationships cannot be established through a qualitative study. We further recognize that the hypothesized feedback loops are an oversimplified representation of barriers to SCC implementation. Further research will also be needed to understand how to counteract negative feedback loops with positive ones to enable SCC implementation in the context of under-resourced maternity healthcare systems.

Lastly, we fully acknowledge that it will be crucial to include the views of women and the community in the co-design of the SCC implementation process in Mozambique. As an initial formative study, we chose to concentrate first on the perspectives of birth attendants, clinical administrators, and decision-makers in Mozambique, aligning with the clinical context where the SCC is intended to be applied. Future community-engaged co-design studies conducted by our team will incorporate the voices of local women and the community to ensure comprehensive and inclusive insights.

Despite these limitations, this study has several strengths. While our findings confirm those previously reported in other countries, this study stands out as the sole formative qualitative study conducted prior to actual SCC implementation and the first SCC study conducted in Mozambique. Our approach aligns closely with the WHO Safe Childbirth Checklist Implementation Guide [44], which emphasizes the necessity of assessing available resources and current practices prior to large-scale implementation to determine how the SCC can be optimally employed and what prerequisites must be met for its success.

Conducting this study before SCC implementation offers several benefits. This formative study reflects a commitment to ensuring that SCC implementation aligns with and addresses the country’s specific needs. As reported by a previous study, SCC implementation might increase the workload and frustration of birth attendants [21]. Ignoring this clear finding, confirmed in our study, could inadvertently generate unintended consequences within local communities and the MCH care system in Mozambique.

Moreover, our study was carried out in close collaboration with Mozambique’s MoH, based on the principles of mutual respect and benefit, equitable communication, and productive dialogue between the global health research team and the local partners, with a commitment to reporting our findings to local healthcare leadership [45]. The findings of this study have been presented to decision-makers and researchers in Mozambique and will be further disseminated in the country to assist the MoH in determining the next steps for SCC implementation. We expect our findings to support a co-design phase of an initiative to implement the SCC in Mozambique.

Implications.

Our study identified a severe shortage of health care system resources as a key barrier to SCC implementation in Mozambique, emphasizing the need to reconsider the focus of MCH studies and the research methods used. Unlike the typical practice of conducting pre-post implementation studies or randomized controlled trials (RCTs) to investigate facilitators of and barriers to SCC implementation, our study shows that a proactive pre-implementation assessment can provide equally important contextual insights. Furthermore, conducting pre-implementation assessments could inform resource allocation strategies to address critical gaps in human and material resources for SCC implementation, with the ultimate goal of strengthening the overall MCH care system.

Furthermore, given that numerous barriers to SCC implementation are fundamentally linked to the shortcomings of Mozambique’s healthcare system, we call for future funders and partners to shift their focus from vertical funding to initiatives that prioritize the provision of essential materials, human resources, and professional training in primary care. Moreover, recognizing that there is no one-size-fits-all model for SCC implementation across varying contexts, future implementation research should include different types of health facilities and various levels of the healthcare system across Mozambique. Future research should take into account what we have learned from our study in Maputo City and Maputo Province and determine the optimal complementary intervention packages to adapt SCC implementation strategies to the country’s unique settings [42], taking the voices of women and communities fully into account.

In conclusion, our innovative study has played a crucial role in empowering local providers by listening to their voices and engaging them in the decision-making process for the implementation of the SCC in Mozambique. Their contributions have highlighted the urgent need for improving the quality of MCH care and enhancing the capacity of the health system in the country. Moreover, our study has identified various key factors that are vital for the successful implementation of the SCC, which include ensuring the availability of adequate human and material resources, providing comprehensive professional training, adapting the SCC contextually, maintaining strong political commitment, and garnering support from equitable partnerships. Lastly, we call for future research to undertake a holistic evaluation of the local context prior to the implementation of the SCC, thereby promoting decolonized global health research and practice and ensuring that interventions are contextually relevant and culturally sensitive.

Supporting information

S1 Text. WHO Safe Childbirth Checklist.

https://doi.org/10.1371/journal.pgph.0003174.s001

S2 Text. Question guides for FGDs and interviews.

https://doi.org/10.1371/journal.pgph.0003174.s002

S1 Table. Codebook and question guide for pilot FGD.

https://doi.org/10.1371/journal.pgph.0003174.s003

S1 Checklist. PLOS questionnaire on inclusivity in global research.

https://doi.org/10.1371/journal.pgph.0003174.s004

Acknowledgments

The authors would like to thank Dr. Lucian J. Davis, Dr. Ashely K. Hagaman, and the staff at Comité para a Saúde de Moçambique for their generous guidance and support.

  • 1. SDG Target 3.1 [Internet]. World Health Organization; [cited 2023 Mar 5]. https://www.who.int/data/gho/data/themes/topics/sdg-target-3-1-maternal-mortality
  • 2. Recommendations for Clinical Practice of Emergency Obstetrical and Neonatal Care in Africa. Brazzaville: WHO Regional Office for Africa; 2022.
  • 3. Plano Estratégico do Sector da Saúde (PESS) 2014–2019. Direcção de Planificação e Cooperação. MINISTÉRIO DA SAÚDE; 2013.
  • 4. Programa Quinquenal do Governo: 2020–2024. REPÚBLICA DE MOÇAMBIQUE; 2020.
  • 5. Maternal mortality ratio (modeled estimate, per 100,000 live births)—Mozambique. World Development Indicators. [Internet]. The World Bank Group; 2019. https://data.worldbank.org/indicator/SH.STA.MMRT?locations=MZ
  • 6. Mortality rate, neonatal (per 1,000 live births)—Mozambique. World Development Indicators. [Internet]. The World Bank Group; 2020 [cited 2023 Mar 5]. https://data.worldbank.org/indicator/SH.DYN.NMRT?locations=MZ
  • 7. WHO Global Health Observatory data repository [Internet]. 2023. https://apps.who.int/gho/data/view.main.STILLBIRTHv?lang=en
  • 8. Mortality rate, neonatal (per 1,000 live births)—Sub-Saharan Africa. World Development Indicators. [Internet]. The World Bank Group; https://data.worldbank.org/indicator/SH.DYN.NMRT?locations=ZG
  • 9. UNICEF Data: Stillbirth [Internet]. https://data.unicef.org/topic/child-survival/stillbirths/
  • 10. UNICEF Data: Maternal mortality [Internet]. https://data.unicef.org/topic/maternal-health/maternal-mortality/
  • 11. Departamento de Saúde da Mulher e da Criança: Relatório Anual, 2020. MINISTÉRIO DA SAÚDE, DIRECÇÃO NACIONAL DE SAÚDE PÚBLICA; 2020.
  • 13. WHO safe childbirth checklist implementation guide: improving the quality of facility-based delivery for mothers and newborns. World Health Organization; 2015.
  • 26. World Health Organization. WHO safe childbirth checklist collaboration evaluation report [Internet]. Geneva: World Health Organization; 2017 [cited 2023 Sep 4]. 61 p. https://apps.who.int/iris/handle/10665/259953
  • 28. UNICEF MOZAMBIQUE 2022–2026: A strategic partnership for every child. UNICEF; 2023 Sep.
  • 29. Country programme document for Mozambique (2022–2026). Executive Board of the United Nations Development Programme, the United Nations Population Fund and the United Nations Office for Project Services; 2021 Dec 8.
  • 32. DIVULGAÇÃO OS DADOS DEFINITIVOS IV RGPH 2017. Instituto Nacional de Estatística; 2017 Dec.
  • 34. Perfil do Distrito da Manhiça. Instituto Nacional de Estatística; 2005.
  • 36. Livro de Registro de Maternidade (MOD-SIS-BO3). Ministério da Saúde. República de Moçambique.
  • 44. World Health Organization. WHO safe childbirth checklist implementation guide: improving the quality of facility-based delivery for mothers and newborns [Internet]. Geneva: World Health Organization; 2015 [cited 2023 Sep 10]. 61 p. https://apps.who.int/iris/handle/10665/199177

Promoting higher education students’ self-regulated learning through learning analytics: A qualitative study

  • Open access
  • Published: 07 September 2024


  • Riina Kleimola   ORCID: orcid.org/0000-0003-2091-2798 1 ,
  • Laura Hirsto   ORCID: orcid.org/0000-0002-8963-3036 2 &
  • Heli Ruokamo   ORCID: orcid.org/0000-0002-8679-781X 1  

Learning analytics provides a novel means to support the development and growth of students into self-regulated learners, but little is known about student perspectives on its utilization. To address this gap, the present study proposed the following research question: what are the perceptions of higher education students on the utilization of a learning analytics dashboard to promote self-regulated learning? More specifically, this can be expressed via the following threefold sub-question: how do higher education students perceive the use of a learning analytics dashboard and its development as promoting the (1) forethought, (2) performance, and (3) reflection phase processes of self-regulated learning? Data for the study were collected from students (N = 16) through semi-structured interviews and analyzed using a qualitative content analysis. Results indicated that the students perceived the use of the learning analytics dashboard as an opportunity for versatile learning support, providing them with a means to control and observe their studies and learning, while facilitating various performance phase processes. Insights from the analytics data could also be used in targeting the students’ development areas as well as in reflecting on their studies and learning, both individually and jointly with their educators, thus contributing to the activities of forethought and reflection phases. However, in order for the learning analytics dashboard to serve students more profoundly across myriad studies, its further development was deemed necessary. The findings of this investigation emphasize the need to integrate the use and development of learning analytics into versatile learning processes and mechanisms of comprehensive support and guidance.


1 Introduction

Promoting students to become autonomous, self-regulated learners is a fundamental goal of education (Lodge et al., 2019; Puustinen & Pulkkinen, 2001). The importance of doing so is particularly highlighted in higher education (HE) contexts that strive to prepare their students for highly demanding and autonomous expert tasks (Virtanen, 2019). In order to perform successfully in diverse educational and professional settings, students need to take an active, self-initiated role in managing their learning processes, thereby assuming primary responsibility for their educational pursuits. Self-regulated learning (SRL) invites students to actively monitor, control, and regulate their cognition, motivation, and behavior in relation to their learning goals and contextual conditions (Pintrich, 2000). In an effort to create a favorable foundation for the development of SRL, many HE institutions have begun to explore and exploit the potential of emerging educational technologies, such as learning analytics (LA).

Despite the growing interest in adopting LA for educational purposes (Van Leeuwen et al., 2022), little is known about students’ perspectives on its utilization (Jivet et al., 2020; Wise et al., 2016). Additionally, there is only limited evidence on using LA to support SRL (Heikkinen et al., 2022; Jivet et al., 2018; Matcha et al., 2020; Viberg et al., 2020). Thus, more research is undoubtedly needed to better understand how students themselves view the potential of analytics applications from the perspective of SRL. Involving students in the development of LA is particularly important, as they represent the primary stakeholders targeted to benefit from its utilization (Dollinger & Lodge, 2018; West et al., 2020). LA should not only be developed for users but also with them, in order to adapt its potential to their needs and expectations (Dollinger & Lodge, 2018; Klein et al., 2019).

LA is thought to provide a promising means to enhance student SRL by harnessing the massive amount of data stored in educational systems and facilitating appropriate means of support (Lodge et al., 2019). It is generally defined as “the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimising learning and the environments in which it occurs” (Conole et al., 2011, para. 4). The reporting of such data is typically conducted through learning analytics dashboards (LADs) that aggregate diverse types of indicators about learners and learning processes in a visualized form (Corrin & De Barba, 2014; Park & Jo, 2015; Schwendimann et al., 2017). Recently, there has been a rapid movement toward LADs that present analytics data directly to students themselves (Schwendimann et al., 2017; Teasley, 2017; Van Leeuwen et al., 2022). Such analytics applications generally aim to provide students with insights into their study progress as well as support for optimizing learning outcomes (Molenaar et al., 2019; Sclater et al., 2016; Susnjak et al., 2022).

The purpose of this qualitative study is to examine how HE students perceive the use and development of an LAD to promote the different phases and processes of SRL. Instead of taking a course-level approach, this study addresses a less-examined study path perspective that covers the entirety of studies, from the start of an HE degree to its completion. A specific emphasis is placed on an LAD that students could use both independently across studies and together with their tutor teachers as a component of educational guidance. As analytics applications are largely still under development (Sclater et al., 2016), and mainly in the exploratory phase (Schwendimann et al., 2017; Susnjak et al., 2022), it is essential to gain an understanding of how students perceive the use of these applications as a form of learning support. Preparing students to become efficient self-regulated learners is increasingly—and simultaneously—a matter of helping them develop into efficient users of analytics data.

2 Theoretical framework

2.1 Enhancing SRL in HE

SRL, which has been the subject of wide research interest over the last two decades (Panadero, 2017), is referred to as “self-generated thoughts, feelings, and actions that are planned and cyclically adapted to the attainment of personal goals” (Zimmerman, 1999, p. 14). Self-regulated students are proactive in their endeavors to learn, and they engage in diverse, personally initiated metacognitive, motivational, and behavioral processes to achieve their goals (Zimmerman, 1999). They master their learning through covert, cognitive means but also through behavioral, social, and environmental approaches that are reciprocally interdependent and interrelated (Zimmerman, 1999, 2015), thus emphasizing the sociocognitive views on SRL (Bandura, 1986).

When describing and modelling SRL, researchers have widely agreed on its cyclical nature and its organization into several distinct phases and processes (Panadero, 2017; Puustinen & Pulkkinen, 2001). In the well-established model by Zimmerman and Moylan (2009), SRL occurs in the cyclic phases of forethought, performance, and self-reflection that take place before, during, and after students’ efforts to learn. In the forethought phase, students prepare themselves for learning and approach the learning tasks through the processes of planning and goal setting, and the activation of self-motivation beliefs, such as self-efficacy perceptions, outcome expectations, and personal interests. Next, in the performance phase, they carry out the actual learning tasks and make use of self-control processes and strategies, such as self-instruction, time management, help-seeking, and interest enhancement. Moreover, they keep records of their performance and monitor their learning, while promoting the achievement of desired goals. In the final self-reflection phase, students participate in the processes of evaluating their learning and reflecting on the perceived causes of their successes and failures, which typically results in different types of cognitive and affective self-reactions as responses to such activity. This phase also forms the basis for the approaches to be adjusted for and applied in the subsequent forethought phase, thereby completing the SRL cycle. The model suggests that the processes in each phase influence the following ones in a cyclical and interactive manner and provide feedback for subsequent learning efforts (Zimmerman & Moylan, 2009; Zimmerman, 2011). Participation in these processes allows students to become self-aware, competent, and decisive in their learning approaches (Kramarski & Michalsky, 2009).

Although several other prevalent SRL models with specific emphases also exist (e.g., Pintrich, 2000; Winne & Hadwin, 1998; for a review, see Panadero, 2017), the one presented above provides a comprehensive yet straightforward framework for identifying and examining the key phases and processes related to SRL (Panadero & Alonso-Tapia, 2014). Developing thorough insights into student SRL is especially needed in an HE context, where the increase in digitized educational settings and tools requires students to manage their learning in a way that is autonomous and self-initiated. When pursuing an HE degree, students are expected to engage in the cyclical phases and processes of SRL as a continuous effort throughout their studies. Involvement in SRL is needed not only to successfully perform a single study module, course, or task but also to actively promote the entirety of studies throughout semesters and academic years. It therefore plays a central role in the successful completion of HE studies.

From the study path perspective, the forethought phase requires HE students to be active in directing and planning their studies and learning—that is, setting achievable goals, making detailed plans, finding personal interests, and trusting in their abilities to complete the degree. The performance phase, in turn, invites students to participate in the control and observation of their studies and learning. While completing their studies, they must regularly track study performance, visualize relevant study information, create functional study environments, maintain motivation and interest, and seek and receive productive guidance. The reflection phase, on the other hand, involves students in evaluating and reflecting on their studies and learning—that is, analyzing their learning achievements and processing the resulting responses. These activities typically occur as overlapping, cyclic, and connected processes and as a continuum across studies. Additionally, the phases may appear simultaneously, as students strive to learn and receive feedback from different processes (Pintrich, 2000). The processes may also emerge in more than one phase (Panadero & Alonso-Tapia, 2014), and the boundaries between the phases are not always precise.

SRL has been shown to benefit HE students in various ways. Research has evidenced, for instance, that online students who use their time efficiently, are aware of their learning behavior, think critically, and strive to learn despite challenges are likely to achieve academic success in online settings (Broadbent & Poon, 2015). SRL has also been shown to contribute to many non-academic outcomes in HE blended environments (for a review, see Anthonysamy et al., 2020). Despite this importance, research (e.g., Azevedo et al., 2004; Barnard-Brak et al., 2010) has indicated that students differ in how they self-regulate, and not all are competent self-regulated learners by default. As such, many students would require and benefit from support to develop their SRL (Moos, 2018; Wong et al., 2019).

Supporting student SRL is generally considered the responsibility of the teaching staff (Callan et al., 2022; Kramarski, 2018). It can also be a specific task given to tutor teachers assigned to each student or to a group of students for particular academic years. Sometimes referred to as advisors, they are often teachers of study programs who aim to help students in decision-making, study planning, and career reflection (De Laet et al., 2020), while offering them guidance and support for the better management of learning. In recent years, efforts have also been made to promote student SRL with educational technologies such as LA (e.g., Marzouk et al., 2016; Wise et al., 2016). LA is used to deliver insights to students themselves so that they can better self-regulate their learning (e.g., Jivet et al., 2021; Molenaar et al., 2019), and also to facilitate the interaction between students and guidance personnel (e.g., Charleer et al., 2018). It is generally thought to promote the development of the future competences students need in education and working life (Kleimola & Leppisaari, 2022), and to offer novel insights into their motivational drivers (Kleimola et al., 2023).

2.2 LA as a potential tool to promote SRL

Much of the recent development in the field of LA has focused on the design and implementation of LADs. In general, their purpose is to support sensemaking and encourage students and teachers to make informed decisions about learning and teaching processes (Jivet et al., 2020; Verbert et al., 2020). Schwendimann and colleagues (2017) refer to an LAD as a “display that aggregates different indicators about learner(s), learning process(es) and/or learning context(s) into one or multiple visualizations” (p. 37). Such indicators may provide information, for instance, about student actions and use of learning contents on a learning platform, or about the results of one’s learning performance, such as grades (Schwendimann et al., 2017). Data can also be extracted from educational institutions’ student information systems to provide students with snapshots of their study progress and access to learning support (Elouazizi, 2014). While visualizations enable intuitive and quick interpretations of educational data (Papamitsiou & Economides, 2015), they also require careful preparation, as not all users will necessarily interpret them uniformly (Aguilar, 2018).

LADs can target various stakeholders, and recently there has been growing interest in their development for students’ personal use (Van Leeuwen et al., 2022). Such displays, also known as student-facing dashboards, are thought to increase students’ knowledge of themselves and to assist them in achieving educational goals (Eickholt et al., 2022). They are also believed to promote student autonomy by encouraging students to take control of their learning and by supporting their intrinsic motivation to succeed (Bodily & Verbert, 2017). However, simply making analytics applications available to students does not guarantee that they will be used productively in terms of learning (Wise, 2014; Wise et al., 2016). Moreover, they may not necessarily cover or address the relevant aspects of learning (Clow, 2013). Thus, to promote the widespread acceptance and adoption of LADs, it is crucial to consider students’ perspectives on their use as a means of learning support (Divjak et al., 2023; Schumacher & Ifenthaler, 2018). If students’ needs are not adequately examined and met, such analytics applications may fail to encourage, or may even hinder, the process of SRL (Schumacher & Ifenthaler, 2018).

Although previous research on students’ perceptions of LA to enhance their SRL appears to be limited, some studies have addressed such perspectives. Schumacher and Ifenthaler ( 2018 ) found that HE students appreciated LADs that help them plan and initiate their learning activities with supporting elements such as reminders, to-do lists, motivational prompts, learning objectives, and adaptive recommendations, thus promoting the forethought phase of SRL. The students in their study also expected such analytics applications to support the performance phase by providing analyses of their current situation and progress towards goals, materials to meet their individual learning needs, and opportunities for learning exploration and social interaction. To promote the self-reflection phase, the students anticipated LADs to allow for self-assessment, real-time feedback, and future recommendations but were divided as to whether they should receive comparative information about their own or their peers’ performance (Schumacher & Ifenthaler, 2018 ). Additionally, the students desired analytics applications to be holistic and advanced, as well as adaptable to individual needs (Schumacher & Ifenthaler, 2018 ).

Somewhat similar observations were made by Divjak and colleagues ( 2023 ), who discovered that students welcomed LADs that promote short-term planning and organization of learning but were wary of making comparisons or competing with peers, as they might demotivate learners. Correspondingly, De Barba et al. ( 2022 ) noted that students perceived goal setting and monitoring of progress from a multiple-goals approach as key features in LADs, but they were hesitant to view peer comparisons, as they could promote unproductive competition between students and challenge data privacy. In a similar vein, Rets et al. ( 2021 ) reported that students favored LADs that provide them with study recommendations but did not favor peer comparison unless additional information was included. Roberts et al. ( 2017 ), in turn, stressed that LADs should be customizable by students and offer them some level of control to support their SRL. Silvola et al. ( 2023 ) found that students perceived LADs as supportive for their study planning and monitoring at a study path level but also associated some challenges with them in terms of SRL. Further, Bennett ( 2018 ) found that students’ responses to receiving analytics data varied and were highly individual. There were different views, for instance, on the potential of analytics to motivate students: although it seemed to inspire most students, not all students felt the same way (Bennett, 2018 ; see also Corrin & De Barba, 2014 ; Schumacher & Ifenthaler, 2018 ). Moreover, LADs were reported to evoke varying affective responses in students (Bennett, 2018 ; Lim et al., 2021 ).

To promote student SRL, it is imperative that LADs comprehensively address and support all phases of SRL (Schumacher & Ifenthaler, 2018 ). However, a systematic literature review conducted by Jivet et al. ( 2017 ) indicated that students were often offered only limited support for goal setting and planning, and comprehensive self-monitoring, as very few of the LADs included in their study enabled the management of self-set learning goals or the tracking of study progress over time. According to Jivet et al. ( 2017 ), this might indicate that most LADs were mainly harnessed to support the reflection and self-evaluation phase of SRL, as the other phases were mostly ignored. Somewhat contradictory results were obtained by Viberg et al. ( 2020 ), whose literature review revealed that most studies aiming to measure or support SRL with LA were primarily focused on the forethought and performance phases and less on the reflection phase. Heikkinen et al. ( 2022 ) discovered that not many of the studies combining analytics-based interventions and SRL processes covered all phases of SRL.

It appears that further development is still required for LADs to better promote student SRL as a whole. Similarly, there is a demand for their tight integration into pedagogical practices and learning processes to encourage their productive use (Wise, 2014 ; Wise et al., 2016 ). One such strategy is to use these analytics applications as part of guidance activities and as a joint tool for both students and guidance personnel. In the study by Charleer et al. ( 2018 ), the LAD was shown to trigger conversations and to facilitate dialogue between students and study advisors, improve the personalization of guidance, and provide insights into factual data for further interpretation and reflection. However, offering students access to an LAD only during the guidance meeting may not be sufficient to meet their requirements for the entire duration of their studies. For instance, Charleer and colleagues ( 2018 ) found that the students were also interested in using the LAD independently, outside of the guidance context. Also, it seems that encouraging students to actively advance their studies with such analytics applications necessitates a student-centered approach and holistic development through research. According to Rets et al. ( 2021 ), there is a particular call for qualitative insights, as many previous LAD studies that included students have primarily used quantitative approaches (e.g., Beheshitha et al., 2016 ; Divjak et al., 2023 ; Kim et al., 2016 ).

2.3 Research questions

The purpose of this qualitative study is to examine how HE students perceive the utilization of an LAD in SRL. A specific emphasis was placed on its utilization as part of the forethought, performance, and reflection phase processes, considered central to student SRL. The main research question (RQ) and the threefold sub-question are as follows:

RQ: What are the perceptions of HE students on the utilization of an LAD to promote SRL?

How do HE students perceive the use of an LAD and its development as promoting the (1) forethought, (2) performance, and (3) reflection phase processes of SRL?

3.1 Context

The study was conducted in a university of applied sciences (UAS) in Finland that had launched an initial version of an LAD to be piloted together with its students and tutor teachers as a part of the guidance process. The LAD was descriptive in nature and consisted of commonly available analytics data and simple analytics indicators showing an individual student’s study progress and success in a study path. As is typical for descriptive analytics, it offered insights to better understand the past and present (Costas-Jauregui et al., 2021 ) while informing future action (Van Leeuwen et al., 2022 ). The data were extracted from the UAS’ student information system and presented using Microsoft Power BI tools. No predictive or comparative information was included. The main display of the LAD consisted of three data visualizations and an information bar (see Fig.  1 , a–d), all presented originally in Finnish. Each visualization could also be expanded into a single display for more accurate viewing.

Fig. 1 An example of the main display of the piloted LAD with data visualizations (a–c) and an information bar (d)

First, the LAD included a data visualization that illustrated a student’s study progress and success per semester using a line chart (Fig.  1 , a). It displayed the scales for total number of credit points (left) and grade point averages (right) for courses completed on a semester timeline. Data points on the chart displayed an individual student’s study performance with respect to these indicators in each semester and were connected to each other with a line. Pointing to one of these data points also opened a data box that indicated the student name and information about courses (course name, scope, grade, assessment date) from which the credit points and grade point averages were obtained.

Second, the LAD contained another type of line chart that indicated a student’s individual study progress over time in more detail (Fig.  1 , b). The chart displayed a timeline with three-month periods and illustrated a scale for the accumulated credit points. Data points on the chart indicated the accumulated number of credit points obtained from the courses and appeared in blue if the student had passed the course(s) and in red if the student had failed the course(s) at that time. As with the line chart above it, the data points in this chart also provided more detailed information about the courses behind the credit points and were connected with a line.

Third, the LAD offered information related to a student’s study success through a radar chart (Fig.  1 , c). The chart represented the courses taken by the student and displayed a scale for the grades received from them. The lowest grade was placed in the center of the chart and the highest one on its outer circle. The grades in between were scaled on the chart accordingly, and courses completed with similar grades were displayed close to each other. Data points on the chart represented the grades obtained from numerically evaluated courses and were connected with a line. Each data point also had a data box with the course name and the grade obtained.

Fourth, the LAD included an information bar (Fig.  1 , d) that displayed the student number and the student name (removed from the figure), the total number of accumulated credit points, the grade point average for passed courses, and the amount of credit points obtained from practical training.
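
For concreteness, the aggregations behind visualizations (a) and (d) can be sketched in code. This is an illustrative reconstruction only, with hypothetical field names; the actual LAD was implemented with Microsoft Power BI on top of the student information system:

```python
# Illustrative sketch only: the piloted LAD was built with Microsoft Power BI;
# the field names (semester, credits, grade) are hypothetical.
from collections import defaultdict

def semester_summary(courses):
    """Aggregate course records into per-semester credit totals and grade
    point averages, as in the LAD's first line chart (Fig. 1, a)."""
    by_semester = defaultdict(list)
    for c in courses:
        by_semester[c["semester"]].append(c)
    summary = {}
    for sem, recs in sorted(by_semester.items()):
        # Only numerically graded courses count towards the GPA.
        graded = [r for r in recs if isinstance(r["grade"], (int, float))]
        summary[sem] = {
            "credits": sum(r["credits"] for r in recs),
            "gpa": round(sum(r["grade"] for r in graded) / len(graded), 2)
                   if graded else None,
        }
    return summary

courses = [
    {"semester": "2021-autumn", "credits": 5, "grade": 4},
    {"semester": "2021-autumn", "credits": 10, "grade": 3},
    {"semester": "2022-spring", "credits": 5, "grade": "pass"},  # pass/fail course
]
print(semester_summary(courses))
```

The same per-course records would also feed the cumulative credit chart (b) and the information bar (d), which only differ in how the totals are accumulated and displayed.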

The LAD was piloted in authentic guidance meetings in which a tutor teacher and a student discussed topical issues related to the completion of studies. Such meetings were a part of the UAS’ standard guidance discussions that were typically held 1–2 times during the academic year, or more often if needed. In the studied meetings, the students and tutor teachers collectively reviewed the LAD to support the discussion. Because the LAD was still under development and in the pilot phase, only the tutor teachers had access to it; however, the students could examine its use as presented by the tutor teacher. In addition to the LAD, the meeting focused on reviewing the student’s personal study plan, which contained information about their studies to be completed and could be viewed through the student information system. Most of the meetings were organized online, and their duration varied according to an individual student’s needs. A researcher (first author) attended the meetings as an observer.

3.2 Participants and procedures

Participants were HE students ( N  = 16) pursuing a bachelor’s degree at the Finnish UAS, ranging from 21 to 49 years of age (mean = 30.38, median = 29.5); 11 (68.75%) were female, and 5 (31.25%) were male. Their HE studies had commenced between 2016 and 2020 and represented different academic fields, including business administration, culture, engineering, humanities and education, and social services and health care. Depending on the degree, the study scope ranged from 210 to 240 ECTS credit points, which takes approximately three and a half to four years to complete. However, the students could also proceed at a faster or slower pace under certain conditions. The students were selected to represent different study fields and study stages, and to have studied for more than one academic year. Informed consent to participate in the study was obtained from all students, and their participation was voluntary. The research design was approved by the respective UAS.
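
The stated completion times follow from the nominal full-time pace of 60 ECTS credit points per academic year, which a line of arithmetic confirms:

```python
# Nominal full-time study pace in the European Credit Transfer System (ECTS).
FULL_TIME_ECTS_PER_YEAR = 60

def nominal_years(total_ects, pace=FULL_TIME_ECTS_PER_YEAR):
    """Nominal completion time in years at a given yearly credit pace."""
    return total_ects / pace

# The degrees in this study span 210-240 ECTS credit points.
print(nominal_years(210))  # 3.5 years
print(nominal_years(240))  # 4.0 years
```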

Data for this qualitative study were collected through semi-structured, individual student interviews conducted in April–September 2022. To address certain topics in each interview, an interview guide was used. The interview questions incorporated into the guide were tested in two student test interviews to simulate a real interview situation and to ensure intelligibility, as also suggested by Chenail ( 2011 ). The findings indicated that the questions were largely usable, functional, and understandable, but some had to be slightly refined to ensure conciseness and to improve the clarity and familiarity of expressions vis-à-vis the target group. Also, the order of the questions was partly reshaped to support the flow of discussion.

In the interviews, the students were asked to provide information about their demographic and educational backgrounds as well as their overall opinions of educational practices and the use of LA. In particular, they were invited to share their views on the use of the piloted LAD and its development as promoting different phases and processes of SRL. Students’ perceptions were generally based on the assumption that they could use the LAD both independently during their studies and collectively with their tutor teachers as a component of the guidance process.

Interviews were conducted immediately or shortly after the guidance meeting. Interview duration ranged from 42 to 70 min. The graphical presentation of the LAD was commonly shown to the students to provide stimuli and evoke discussion, as suggested by Kwasnicka et al. ( 2015 ). The interviews were conducted by the same researcher (first author) who observed the guidance meetings. They were primarily held online, and only one was organized face-to-face. All interviews were video recorded for subsequent analysis.

3.3 Data analysis

Interview recordings were transcribed verbatim, accumulating a total of 187 pages of textual material for analysis (Times New Roman, 12-point font, line spacing 1). A qualitative content analysis method was used to analyze the data (see Mayring, 2000 ; Schreier, 2014 ) to enhance in-depth understanding of the research phenomenon and to inform practical actions (Krippendorff, 2019 ). Also, the data were approached both deductively and inductively (see Elo & Kyngäs, 2008 ; Merriam & Tisdell, 2016 ), and the analysis was supported using the ATLAS.ti program.

Analysis began with a thorough familiarization with the data in order to develop a general understanding of the students’ perspectives. First, the data were deductively coded using Zimmerman and Moylan’s ( 2009 ) SRL model as a theoretical guide for analysis and as applied to the study path perspective. All relevant units of analysis—such as paragraphs, sentences, or phrases that addressed the use of the LAD or its development in relation to the processes of SRL presented in the model—were initially identified from the data, and then sorted into meaningful units with specific codes. The focus was placed on instances in the data that were applicable and similar to the processes represented in the model, but the analysis was not limited to those that fully corresponded to them. The preliminary analysis involved several rounds of coding that ultimately led to the formation of main categories, grouped into the phases of SRL. The forethought phase consisted of processes that emphasized the planning and directing of studies and learning with the LAD. The performance phase, in turn, involved processes that addressed the control and observation of studies and learning through the LAD. Finally, the reflection phase included processes that focused on evaluating and reflecting on studies and learning with the LAD.
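
Assuming the main categories reported in the Results, the deductive frame described above can be pictured as a simple phase-to-process mapping (an illustrative sketch, not the authors' actual coding scheme):

```python
# Illustrative reconstruction of the deductive coding frame, grouping the
# main categories (SRL processes) under Zimmerman and Moylan's three phases.
SRL_CODING_FRAME = {
    "forethought": ["goal setting", "study planning",
                    "personal interests and preferences", "self-efficacy"],
    "performance": ["metacognitive monitoring", "imaging and visualizing",
                    "environmental structuring",
                    "interest and motivation enhancement",
                    "seeking and accessing help"],
    "reflection": ["evaluation and reflection", "affective reactions"],
}

def phase_of(process):
    """Look up which SRL phase a coded process (main category) belongs to."""
    for phase, processes in SRL_CODING_FRAME.items():
        if process in processes:
            return phase
    raise KeyError(process)

print(phase_of("metacognitive monitoring"))  # performance
```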

Second, the data were approached inductively by examining the use of the LAD and its development as distinct aspects within each phase and process of SRL (i.e., the main categories). The aim was, on the one hand, to identify how the use of the LAD was considered to serve the students in the phases and processes of SRL in its current form, and on the other hand, how it should be improved to better support them. The analysis not only focused on the characteristics of the LAD but also on the practices that surrounded its use and development. The units of analysis were first condensed from the data and then organized into subcategories for similar units. As suggested by Schreier ( 2014 ), the process was continued until a saturation point was reached—that is, no additional categories could be found. As a result, subcategories for all of the main categories were identified.

Following Schreier’s ( 2014 ) recommendation, the categories were named and described with specific data examples. Additionally, some guidelines were added to highlight differences between categories and to avoid overlap. Using parts of this categorization framework as a coding scheme, a portion of the data (120 text segments) was independently coded into the main categories by the first and second authors. The results were then compared, and all disagreements were resolved through negotiation until a shared consensus was reached. After minor changes were made to the coding scheme, the first author recoded all data. The number of students who had provided responses to each subcategory was counted and reported to add detail to the findings. To support the integrity of the study, the results are illustrated with data examples accompanied by the students’ aliases and the study fields they represented. The quotations were translated from Finnish to English.
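
Double-coding of the kind described above is often complemented by an agreement statistic. The authors report negotiated consensus rather than a coefficient, but as a hedged sketch, percent agreement and Cohen's kappa over jointly coded segments could be computed as follows (the segment labels here are a toy example, not the study's data):

```python
from collections import Counter

def percent_agreement(coder_a, coder_b):
    """Share of segments assigned the same category by both coders."""
    assert len(coder_a) == len(coder_b)
    same = sum(a == b for a, b in zip(coder_a, coder_b))
    return same / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(coder_a)
    p_o = percent_agreement(coder_a, coder_b)
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    # Expected chance agreement from each coder's marginal category frequencies.
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Toy example: two coders assigning six segments to SRL phases.
a = ["forethought", "performance", "performance",
     "reflection", "forethought", "performance"]
b = ["forethought", "performance", "reflection",
     "reflection", "forethought", "performance"]
print(percent_agreement(a, b))  # ~0.83
```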

The results are reported by first answering the threefold sub-question, that is, how HE students perceive the use of an LAD and its development as promoting the (1) forethought, (2) performance, and (3) reflection phase processes of SRL. The subsequent results are then summarized to address the main RQ, that is, what the main findings are on HE students’ perceptions of the utilization of an LAD to promote SRL.

4.1 LAD as a part of the forethought phase processes

The students perceived the use of the LAD and its development as related to the forethought phase processes of SRL through the categorization presented in Table  1 below.

Regarding the process of goal setting, almost all students ( n  = 15) emphasized that the use of the LAD promoted the targeting of goal-oriented study completion and competence development. Analytics indicators—such as grades, grade point averages, and accumulated credit points—adequately informed the students of areas they should aim for, further improve, or put more effort into. Only one student ( n  = 1) considered the analytics data too general for establishing goals. However, some students ( n  = 7) specifically mentioned their desire to set and enter individual goals in the LAD. The students were considered to have individual intentions, which should also be made visible in the LAD:

For example, someone might complete [their studies] in four years, someone might do [them] even faster, so maybe in a way that there is the possibility…to set…that, well, I want or my goal is to graduate in this time, and then it would kind of show in it. (Sophia, Humanities and Education student)

Moreover, some students ( n  = 6) wanted to obtain information on the degree program’s overall target times, study requirements, or pace recommendations through the LAD.

In relation to the process of study planning, the use of the LAD provided many students ( n  = 8) with grounds to plan and structure the promotion and completion of their studies, such as which courses and types of studies to choose, and what kind of study pace and schedule to follow. However, an even larger group of students ( n  = 12) hoped that the LAD could provide them with more sophisticated tools for planning. For instance, it could inform them about studies to be completed, analyze their study performance in detail, or make predictions for the future. Moreover, it should offer them opportunities to choose courses, make enrollments, set schedules, get reminders, and take notes. One example of such an advanced analytics application was described as follows: ‟It would be a bit like a conversational tool with the student as well, that you would first put…your studies in the program, so it would [then] remind you regularly that hey, do this” (James, Humanities and Education student).

When discussing the use of the LAD, most students ( n  = 12) emphasized the critical role of personal interests and preferences, which were found not only to guide studying and learning in general but also to drive and shape the utilization of the LAD. According to the students, using such an analytics application could particularly benefit those students who, for instance, prefer monitoring their study performance, perceive information well in a visualized form, are interested in analytics or in themselves, or find it relevant for their studies. Prior familiarity with such tools was also considered useful: ‟Of course, there are those who use this kind of thing more and those who use this kind of thing in daily life, so they could especially benefit from this, probably more than I do” (Olivia, Social Services and Health Care student). Even though the LAD was considered to offer pertinent insights for many types of learners, it might not be suitable for all. For instance, it could be challenging for some students to comprehend analytics data or to make effective use of them in their studies. Such personal aspects should be noted in the development of the LAD. The students ( n  = 7) believed the LAD might better adapt to students’ individual needs if it allowed them to customize its features and displays or to use it voluntarily based on their personal interests and needs.

When describing the use of the LAD, half of the students ( n  = 8) discussed its connections with self-efficacy. Making use of analytics data appeared to strengthen the students’ beliefs in their abilities to study and learn in a targeted manner, even if their own feelings suggested otherwise. As one of the students stated:

It’s nice to see that progress, that it has happened although it feels that it hasn’t. So, you can probably set goals based on [an idea] that you’re likely to progress, you could set [them] that you could graduate sometime. (Emma, Engineering student)

On the other hand, the use of the LAD also seemed to require students to have sufficient self-efficacy. It was perceived as vital especially when the analytics data showed unfavorable study performance, such as failed or incomplete courses, or gaps in study performance with respect to peers. One student ( n  = 1) suggested that the LAD could include praise as evidence of and support for appropriate study performance. Such incentives may help improve students’ self-confidence as learners. Apart from this, however, the students had no other recommendations for developing the use of the LAD to support self-efficacy.

4.2 LAD as a part of the performance phase processes

The students discussed the use of the LAD and its development in relation to the performance phase processes of SRL according to the categories described in Table  2 below.

The students ( n  = 16) widely agreed that using the LAD benefited them in the process of metacognitive monitoring. By indicating the progress and success of study performance, the LAD was thought to be well suited for observing the course of studies and the development of competences. Moreover, it helped the students to gain awareness of their individual strengths and weaknesses, as well as successes and failures, in a study path. Tracking individual study performance was also found to contribute to purposeful study completion, as the following data example demonstrates:

It’s important especially when there is a target time to graduate, so of course you must follow and stay on track in many ways as there are many such pitfalls to easily fall into, [and] as I’ve fallen quite many times, it’s good [to monitor]. (Sarah, Culture student)

Additionally, the insights from monitoring could be used in future job searches to provide information about acquired competences to potential employers. The successful promotion of studies was generally perceived to require regular monitoring by both students and their educators. However, one of the students considered it a particular responsibility of the students themselves, as the studies were completed at an HE level and were thus voluntary for them. To provide more in-depth insights, many students ( n  = 12) recommended the incorporation of a course-level monitoring opportunity in the LAD. More detailed information was needed, for instance, about course descriptions, assignments completed, and grades received. The rest of the students ( n  = 4), however, wanted to keep the course-level monitoring within the learning management system. One of them stated that the learning management system could also serve as a place through which the students access the LAD. Some students ( n  = 6) emphasized the need to reconsider current assessment practices to enable better tracking of study performance. Specifically, assessments could be made in greater detail and grades given immediately after course completion. The variation in scales and time points of assessments between the courses and degree programs posed potential challenges for monitoring, thus prompting the need to unify educational practices at the organizational level.

As an activity closely related to metacognitive monitoring, the process of imaging and visualizing was emphasized by the students as helping them to advance in their educational pursuits. Most students ( n  = 15) mentioned that using the LAD allowed them to easily image their study path and clarify their study situation. As one of them stated, ‟This is quite clear, this like, that you can see the overall situation with a quick glance” (Anna, Business Administration student). The visualizations were perceived as informative, tangible, and understandable. However, they were also thought to carry the risk that, by attracting such focused attention, they could lead students to neglect other relevant aspects of studying and learning. Although the visualizations were generally considered clear, some students ( n  = 11) noted that they could be further improved to better organize the analytics data. For instance, the students suggested the attractive use of colors and the categorization of different types of courses. Visual symbols, in turn, may be particularly effective for course-level data. Technical aspects should also be carefully considered to avoid misleading visualizations.

Regarding the process of environmental structuring, the LAD appeared to be a welcome addition to the study toolkit and overall study environment. A few students ( n  = 4) considered it appropriate to utilize the LAD as a separate Power BI application alongside other (Microsoft O365) study tools, but they also felt that it could be utilized through other systems if necessary. However, one student ( n  = 1) raised the need for overall system integrations, and some students ( n  = 8) expressed a specific wish to use the LAD as an integrated part of the student information system, which was thought to improve its accessibility. Some students ( n  = 6) also wanted to receive additional analytics data related to the information stored in such a system. For instance, the students could be informed about their study progress or offered feedback on their overall performance in relation to the personal study plan. Other students ( n  = 10), in turn, did not see a need for this or did not mention it. It was generally emphasized that the LAD should remain sufficiently clear and simple, as too much information can make its use ineffective:

I think there is just enough information in this. Of course, if you would want to add something small, you could, but I don’t know how much, because I feel that when there is too much information, so it’s a bit like you can’t get as much out of it as you could get. (Olivia, Social Services and Health Care student)

Moreover, the analytics data must be kept private and protected. The students generally desired personal access to the LAD; if given such an opportunity, almost all ( n  = 15) believed they would utilize it in the future, and only one ( n  = 1) was unsure about this prospect. The analytics data were believed to be of particular use when studies were being actively advanced. Hence, they should be made available to the students from the start of their studies.

Regarding the process of interest and motivation enhancement, all students ( n  = 16) mentioned that using the LAD stimulated their interest or enhanced their motivation, although to varying degrees. For some students, a general tracking of studies was enough to encourage them to continue their pursuits, while others were particularly inspired by seeing either high or low study performance. The development of motivation and interest was generally thought to be hindered if the students perceived the analytics data as unfavorable or lacking essential information. As one of the students mentioned, ‟If your [chart] line was downward, and if there were only ones and zeros or something like that, it could in a way decrease the motivation” (Helen, Humanities and Education student). It appeared that enhancing interest and motivation depended mainly on the students’ own efforts to succeed in their course of study and thus to generate favorable analytics data. However, some students ( n  = 7) felt that it could be further enhanced by diversifying and improving the analytics tools in the LAD. For example, opportunities for more detailed analyses, future study planning, or comparisons of study performance with that of peers might further increase these students’ motivation and interest in their studies. Even so, it was also considered possible that comparisons between students in particular might have the opposite, demotivating and discouraging effect.

All students ( n  = 16) mentioned that using the LAD facilitated the process of seeking and accessing help. It enabled the identification of potential support needs—for instance, if several courses were failed or left unfinished. Such instances were perceived as warning signals for the students themselves to seek help and for the guidance personnel to provide targeted support. As one of the students emphasized, it was important that not only ‟a teacher [tutor] gets interested in looking at what the situation is but also that a student would understand to communicate regarding the promotion of studies and situations” (Emily, Social Services and Health Care student). Some students ( n  = 9) suggested that the students, tutor teachers, or both could receive automated alerts if concerns were to arise. On the other hand, the impact of such automated notifications on changing the course of study was considered somewhat questionable. Above all, the students ( n  = 16) preferred human contact and personal support from the guidance personnel, who would use a sensitive approach to address possibly delicate issues. It would be important to include such support in existing practices, as the tutor teachers should not be overburdened. One of the students also stated that the automated alerts could be sufficient if they just worked effectively.

4.3 LAD as a part of the reflection phase processes

The students addressed the use of the LAD and its development as a part of the reflection phase processes of SRL through categories outlined in Table  3 .

The students widely appreciated the support provided by the use of the LAD for the process of evaluation and reflection. The majority ( n  = 15) mentioned that it allowed them to individually reflect on the underlying aspects of their study performance, such as what kind of learners they are, what type of teaching or learning methods suit them, and what factors impact their learning. Similarly, the students ( n  = 16) valued the possibility of examining the analytics data together with the guidance personnel, such as tutor teachers, and commonly expressed a desire to revisit the LAD in future guidance meetings. It was thought to promote the interpretation of analytics data and to facilitate collective reflection on the reasons behind one’s study success or failure. However, this might require a certain orientation from the guidance personnel, as one student describes below:

I feel that it’s possible to address such themes that what may perhaps cause this. Of course, a lot depends on how amenable the teacher [tutor] is, like are we focusing on how the studies are going but in a way, not so much on what may cause it. (Sophia, Humanities and Education student)

Some students (n = 8) proposed incorporating familiarization with analytics insights into course implementations of the degree programs. Additionally, many students (n = 11) expressed a desire to examine the student group’s general progress in tutoring classes together with the tutor teacher and peers, particularly if the results were properly targeted, anonymized, and presented in a discreet manner. However, some students (n = 5) found this irrelevant. The students were generally wary of using the LAD to evaluate and compare an individual student’s study performance in relation to the peer average. While some students (n = 4) welcomed such an opportunity, others (n = 6) considered it unnecessary. A few students (n = 5) emphasized that such comparisons between students should be optional and visible only if desired, and one student did not have a definite view on the matter. Rather than competing with others, the students stressed the importance of challenging themselves and evaluating their study performance against their own goals or previous achievements.

According to the students (n = 16), the use of the LAD was associated with a wide range of affective reactions. Positive responses such as joy, relief, and satisfaction were considered to emerge if the analytics data displayed by the LAD was perceived as favorable, expected, and supportive of future learning. Similarly, negative responses such as anxiety, pressure, or stress were likely to occur if such data indicated poor performance, thus challenging the learning process. On the other hand, such self-reactions could also be neutral or indifferent, depending on the student and the situation. Individual responses were related not only to the current version of the LAD but also to its further development targets. Some students (n = 3) pointed out the importance of guidance and support, through which the affective reactions could be processed together with professionals. As one of the students underlined, it is important “that there is also that support for the studies, that it isn’t just like you have this chart, and it looks bad, that try to manage. Perhaps there is that support, support in a significant role as well” (Sophia, Humanities and Education student). It seemed critical that the students were not left alone with the LAD but rather were given assistance to deal with the various responses its use may elicit.

4.4 Summary of findings on LAD utilization to promote SRL among HE students

In summary, HE students’ perceptions on the utilization of an LAD to promote SRL phases and processes were largely congruent but nonetheless partly varied. In particular, the students agreed on the support provided by the LAD during the performance phase and for the purpose of metacognitive monitoring. Such activity was thought not only to enable the students to observe their studies and learning but also to create the basis for the emergence of all other processes, which were facilitated by the monitoring. That is, while the students familiarized themselves with the course of their studies via the analytics data, they could further apply these insights—for instance, to visualize study situations, enhance motivation, and identify possible support needs. Monitoring with the LAD was also perceived to partly support the students in the forethought and reflection phases and processes by giving them grounds to target their development areas as well as to reflect on their studies and learning, both individually and jointly with their tutor teachers. However, it was clear that less emphasis was placed on using the LAD for study planning, addressing individual interests, activating self-efficacy, and supporting environmental structuring, thus providing incentives for further investigation and future improvement.

Although the LAD used in this study seemed to serve many functions as such, its holistic development was deemed necessary for more thorough SRL support. In particular, the students agreed on the need to improve the analytics application to further strengthen the performance phase processes—particularly monitoring—by, for instance, developing it for the students’ independent use and integrating it with the instructional and guidance practices provided by their educators. Moreover, the students commonly wished for more advanced analytics tools that could contribute more directly to the planning of studies and to joint reflection on group-level analytics data. To better support the various processes of SRL, new features were generally welcomed into the LAD, although the students’ views and emphases on them varied. Mixed perspectives related, for instance, to the need to enrich data or compare students within the LAD. Thus, it seemed important to develop the LAD to conform to the preferences of its users. Along with improving the LAD, the students also paid attention to the development of pedagogical practices and guidance processes that together could create appropriate conditions for the emergence of SRL.

5 Discussion

The purpose of this study was to gain insights into HE students’ perceptions on the utilization of an LAD to promote their SRL. The investigation extended the previous research by offering in-depth descriptions of the specific phases and processes of SRL associated with the use of an LAD and its development targets. By applying a study path perspective, it also provided novel insights into how to promote students to become self-regulated learners and effective users of analytics data as an integral part of their studies in HE.

The students’ perspectives on the use of the LAD and its development were initially explored as a part of the forethought phase processes of SRL, with a particular focus on the planning and directing of studies and learning. In line with previous research (e.g., Divjak et al., 2023; Schumacher & Ifenthaler, 2018; Silvola et al., 2023), the students in this study appreciated an analytics application that helped them prepare for their future learning endeavors—that is, the initial phase of the SRL cycle (see Zimmerman & Moylan, 2009). Using the LAD specifically allowed the students to recognize their development areas and offered a basis for organizing their future coursework. However, improvements that would allow students to set individual goals and make plans directly within the LAD, as well as to increase awareness of general degree goals, were also desired. These seem to be pertinent avenues for development, as goals may inspire students not only to invest greater effort in learning but also to track their achievements against those goals (Wise, 2014; Wise et al., 2016). As students typically enter education with individual starting points, it is important to allow them to set personal targets and routes for their learning (Wise, 2014; Wise et al., 2016).

The results of this study indicate that the use of LADs is primarily driven and shaped by students’ personal interests and preferences, which commonly play a crucial role in the development of SRL (see Zimmerman & Moylan, 2009; Panadero & Alonso-Tapia, 2014). LAD use might particularly benefit those students for whom analytics-related activities are characteristic and of interest, and who consider them personally meaningful for their studies. It has been argued that if students consider that analytics applications serve their learning, they are also willing to use them (Schumacher & Ifenthaler, 2018; Wise et al., 2016). On the other hand, it has also been noted that not all students are necessarily able to maximize the possible benefits of such applications on their own and might need support in understanding their purpose (Wise, 2014) and in finding personal relevance in their use. The findings of this study suggest that a more individual fit of LADs could be promoted by allowing students to customize their functionalities and displays. Comparable results have been obtained in other studies (e.g., Bennett, 2018; Rets et al., 2021; Roberts et al., 2017; Schumacher & Ifenthaler, 2018), highlighting the need to develop customized LADs that better meet the needs of diverse students and empower them to control their analytics data. More attention may also be needed to promote the use and development of LADs to support self-efficacy, as this still appeared to be an unrecognized potential for many students in this study. According to Rets et al. (2021), using LADs for such a purpose might particularly benefit online learners and part-time students, who often face various demands and thus may overlook the efforts they put into learning and fail to give themselves enough credit. By facilitating students’ self-confidence, such use could also promote the necessary changes in study behavior, at least for students with low self-efficacy (Rets et al., 2021).

Second, the students’ views on the use of the LAD and its development were investigated in terms of the performance phase processes of SRL, with an emphasis on the control and observation of studies and learning. In line with the results of other studies (De Barba et al., 2022; Rets et al., 2021; Schumacher & Ifenthaler, 2018; Silvola et al., 2023), the students preferred using the LAD to monitor their study performance—they wanted to follow their progress and success over time and keep themselves and their educators up to date. According to Jivet et al. (2017), such functionality directly promotes the performance phase of SRL. Moreover, monitoring seemed to serve as a basis for the other SRL activities, all of which depended on and built upon it. The results of this study, however, imply that monitoring opportunities should be further expanded to provide even more detailed insights. They also indicate the need to develop and refine pedagogical practices at the organizational level in order to better serve student monitoring. As monitoring plays a crucial role in SRL (Zimmerman & Moylan, 2009), it is essential to examine how it relates to other SRL processes and how it can be effectively promoted with analytics applications (Viberg et al., 2020).

In this study, the students used the LAD not only to monitor but also to envision and visualize their learning. In accordance with the views of Papamitsiou and Economides (2015), the visualizations transformed the analytics data into an easily interpretable visual form. The visualizations were not considered to generate information overload, although such a concern has sometimes been associated with the use of LADs (e.g., Susnjak et al., 2022). However, the students widely preferred even more descriptive and nuanced illustrations to clarify and structure the analytics data. At the same time, care must be taken to ensure that the visualizations do not divert too much attention from other relevant aspects of learning, a point also found important in prior research (e.g., Charleer et al., 2018; Wise, 2014). It seems critical that an LAD inform but not overwhelm its users (Susnjak et al., 2022). As argued by Klein et al. (2019), confusing visualizations may not only generate mistrust but also lead to their complete nonuse.

Although the LAD piloted in the study was considered a relatively functional application, it could be even more accessible and usable if it were incorporated into the student information system and enriched with data from it. Even then, however, the LAD should remain simple to use, and its data privacy should be ensured. It has been argued that more information is not always better (Aguilar, 2018), and the analytics indicators must be carefully considered to truly optimize learning (Clow, 2013). While developing their SRL, students would particularly benefit from a well-structured environment with fewer distractions and more facilitators for learning (Panadero & Alonso-Tapia, 2014). The smooth promotion of studies also seems to require personal access to the analytics data. Similar to the learners in Charleer and colleagues’ (2018) study, the students in this study desired to take advantage of the LAD autonomously, beyond the guidance context. They believed they would use it especially when actively advancing their studies. This is a somewhat expected finding, given the significant role of study performance indicators in the LAD. However, it also raises the question of whether such an analytics application would be used mainly by students who progress diligently and ignored by those who advance only a little or not at all. Ideally, the LAD would serve students in different situations and at various stages of their studies.

Using the LAD offered the students a promising means to enhance motivation and interest in their studies through the monitoring of analytics data. However, not all students were inspired in the same manner or by the same analytics data displayed by the LAD. Although the LAD was seen as inspiring and interesting in many ways, it also had the potential to demotivate or even discourage. This finding corroborates other studies reporting mixed results on the power of LADs to motivate students (e.g., Bennett, 2018; Corrin & de Barba, 2014; Schumacher & Ifenthaler, 2018). As such, it would be essential for analytics applications to consider and address students with different performance levels and motivational factors (Jivet et al., 2017). Based on the results of this study, diversifying the tools included in the LAD might also be necessary. On the other hand, the enhancement of motivation was also found to be the responsibility of the students themselves—that is, if the students wish the analytics application to display favorable analytics data and thus motivate them, they must first invest the corresponding effort in their studies.

The use of the LAD provided a convenient way to intervene if the students’ study performance did not meet expectations. With the LAD, both the students and their tutor teachers could detect signs of possible support needs and address them with guidance. In the future, such needs could also be reported through automated alerts. Overall, however, the students in this study preferred human contact and personal support over automated interventions, contrary to the findings obtained by Roberts and colleagues (2017). Being identified to their educators did not seem to be a particular concern for them, although it has been found to worry students in other contexts (e.g., Roberts et al., 2017). Rather, the students felt they would benefit more from personal support that was specifically targeted to them and sensitive in its approach. The students generally demanded delicate, ethical consideration when acting upon analytics data and in the provision of support, which was also found to be important in prior research (e.g., Kleimola & Leppisaari, 2022). Additionally, Wise and colleagues (2016) underlined the need to foster student agency and to prevent students from becoming overly reliant on analytics-based interventions: if all of the students’ mistakes are pointed out to them, they may no longer learn to recognize mistakes on their own. Therefore, to support SRL, it is essential to know when to intervene and when to let students solve challenges independently (Kramarski & Michalsky, 2009).

Lastly, the students’ perceptions on the use and development of the LAD were examined from the perspective of the reflection phase processes of SRL, with particular attention given to evaluation and reflection on studies and learning. The use of the LAD provided the students with a basis to individually reflect on the potential causes behind their study performance, for better or worse. Moreover, they could address such issues together with guidance personnel and thus make better sense of the analytics data. Corresponding to the results of Charleer et al.’s (2018) study, collective reflection on analytics data provided the students with new insights and supported their understanding. Engaging in such reflective practices offered the students the opportunity to complete the SRL cycle and draw the necessary conclusions regarding their performance for subsequent actions (see Zimmerman & Moylan, 2009). In the future, analytics-based reflection could also be implemented in joint tutoring classes and in courses included in the degree programs. This would likely promote the integration of LADs into the activity flow of educational environments, as recommended by Wise and colleagues (2016). In sum, using LADs should be a regular part of pedagogical practices and learning processes (Wise et al., 2016).

When evaluating and reflecting on their studies and learning, the students preferred to focus on themselves and their own development as learners. Similar to earlier findings (e.g., Divjak et al., 2023; Rets et al., 2021; Roberts et al., 2017; Schumacher & Ifenthaler, 2018), the students felt differently about the need to develop LADs to compare their study performance with that of other students. Although this function could help some students position themselves in relation to their peers, others thought it should be optional or avoided completely. In agreement with the findings of Divjak et al. (2023), it seemed that the students wanted to avoid mutual competition and comparison; however, such comparison might not be harmful for everyone or in every case. Consequently, care is required when considering which features of the LAD offer real value to students in a particular context (Divjak et al., 2023). Rather than limiting the point of reference only to peers, it might be useful to also offer students other targets for comparison, such as an individual student’s previous progress or the goals set for the activity (Wise, 2014; Wise et al., 2016; see also Bandura, 1986). In addition, it is important that students not be left alone to face and cope with the various reactions that such evaluation and reflection with analytics data may elicit (Kleimola & Leppisaari, 2022). As the results of this study and those of others (e.g., Bennett, 2018; Lim et al., 2021) generally indicate, the affective responses evoked by LADs vary and are not always exclusively positive. Providing a safe environment for students to reflect on successes and failures and to process the resulting responses might not only encourage necessary changes in future studies but also promote the use of an LAD as a learning support.

In summary, the results of this study imply that making effective use of an analytics application—even with a limited amount of analytics data and functionality available—may facilitate students’ growth into self-regulated learners. That is, even if the LAD principally addresses one particular phase or process of SRL, it can act as a catalyst that encourages students to develop SRL on a wider scale. This finding also emphasizes the interdependent and interactive nature of SRL (see Zimmerman, 2011; Zimmerman & Moylan, 2009), which similarly seems to characterize the use of an LAD. However, the potential of LADs to promote SRL may be lost unless students themselves are (pro)active in initiating and engaging with such activity or receive appropriate pedagogical support for it. There appears to be a specific need for guidance that is sensitive to the students’ affective reactions and that helps students learn and develop with analytics data. Providing the students with adequate support is particularly critical if their studies have not progressed favorably or as planned. It seems important that the LAD not only target those students who are already self-regulated learners but, with appropriate support and guidance, also serve those who are gradually growing in that direction.

5.1 Limitations and further research

This study has some limitations. First, it involved a relatively small number of HE students who were examined in a pilot setting. Although the sample was sufficient to provide in-depth insights and the saturation point was reached, it might be useful in further research to use quantitative approaches and diverse groups of students to improve the generalizability of results to a larger student population. Also, addressing the perspectives of guidance personnel, specifically tutor teachers, could provide additional insights into the use and development of LADs to promote SRL.

Second, the LAD piloted and investigated in this study was not yet widely in use or accessible by the students. Moreover, it was examined for a relatively brief time, so the students’ perceptions were shaped not only by their experiences but also by their expectations of its potential. Future research on students and tutor teachers with more extensive user experience could build an even more profound picture of the possibilities and limitations of the LAD from a study path perspective. Such investigation might also benefit from trace data collected from the students’ and tutor teachers’ interactions with the LAD. It would be valuable to examine how the students and tutor teachers make use of the LAD in the long term and how it is integrated into learning activities and pedagogical practices.

Third, due to the emphasis on an HE institution and the analytics application used in this specific context, the transferability of results may be limited. However, the results of this study offer many important and applicable perspectives to consider in various educational environments where LADs are implemented and aimed at supporting students across their studies.

6 Conclusions

The results of this study offer useful insights for the creation of LADs that are closely related to the theoretical aspects of learning and that meet the particular needs of their users. In particular, the study increases the understanding of how such analytics applications should be connected to the entirety of studies—that is, what kind of learning processes and pedagogical support are needed alongside them to best serve students in their learning. Consequently, it encourages a comprehensive consideration and promotion of pedagogy, educational technology, and related practices in HE. The role of LA in supporting learning and guidance seems significant, so investments must be made in its appropriate use and development. In particular, the voice of the students must be listened to, as it promotes their commitment to the joint development process and fosters the productive use of analytics applications in learning. At its best, LA becomes an integral part of HE settings, one that helps students to complete their studies and contributes to their development into self-regulated learners.

Data availability

Not applicable.

Abbreviations

  • HE: Higher education
  • LA: Learning analytics
  • LAD: Learning analytics dashboard
  • RQ: Research question
  • SRL: Self-regulated learning

Aguilar, S. J. (2018). Examining the relationship between comparative and self-focused academic data visualizations in at-risk college students’ academic motivation. Journal of Research on Technology in Education , 50 (1), 84–103. https://doi.org/10.1080/15391523.2017.1401498


Anthonysamy, L., Koo, A-C., & Hew, S-H. (2020). Self-regulated learning strategies and non-academic outcomes in higher education blended learning environments: A one decade review. Education and Information Technologies , 25 (5), 3677–3704. https://doi.org/10.1007/s10639-020-10134-2

Azevedo, R., Guthrie, J. T., & Seibert, D. (2004). The role of self-regulated learning in fostering students’ conceptual understanding of complex systems with hypermedia. Journal of Educational Computing Research , 30 (1–2), 87–111. https://doi.org/10.2190/DVWX-GM1T-6THQ-5WC7

Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory . Prentice-Hall.

Barnard-Brak, L., Paton, V. O., & Lan, W. Y. (2010). Profiles in self-regulated learning in the online learning environment. The International Review of Research in Open and Distributed Learning , 11 (1), 61–80. https://doi.org/10.19173/irrodl.v11i1.769

Beheshitha, S. S., Hatala, M., Gašević, D., & Joksimović, S. (2016). The role of achievement goal orientations when studying effect of learning analytics visualizations. Proceedings of the 6th International Conference on Learning Analytics and Knowledge (pp. 54–63). Association for Computing Machinery. https://doi.org/10.1145/2883851.2883904

Bennett, E. (2018). Students’ learning responses to receiving dashboard data: Research report . Huddersfield Centre for Research in Education and Society, University of Huddersfield.

Bodily, R., & Verbert, K. (2017). Review of research on student-facing learning analytics dashboards and educational recommender systems. IEEE Transactions on Learning Technologies , 10 (4), 405–418. https://doi.org/10.1109/TLT.2017.2740172

Broadbent, J., & Poon, W. L. (2015). Self-regulated learning strategies & academic achievement in online higher education learning environments: A systematic review. The Internet and Higher Education , 27 , 1–13. https://doi.org/10.1016/j.iheduc.2015.04.007

Callan, G., Longhurst, D., Shim, S., & Ariotti, A. (2022). Identifying and predicting teachers’ use of practices that support SRL. Psychology in the Schools , 59 (11), 2327–2344. https://doi.org/10.1002/pits.22712

Charleer, S., Moere, A. V., Klerkx, J., Verbert, K., & De Laet, T. (2018). Learning analytics dashboards to support adviser-student dialogue. IEEE Transactions on Learning Technologies , 11 (3), 389–399. https://doi.org/10.1109/TLT.2017.2720670

Chenail, R. J. (2011). Interviewing the investigator: Strategies for addressing instrumentation and researcher bias concerns in qualitative research. The Qualitative Report , 16 (1), 255–262. https://doi.org/10.46743/2160-3715/2011.1051

Clow, D. (2013). An overview of learning analytics. Teaching in Higher Education , 18 (6), 683–695. https://doi.org/10.1080/13562517.2013.827653

Conole, G., Gašević, D., Long, P., & Siemens, G. (2011). Message from the LAK 2011 general & program chairs. Proceedings of the 1st International Conference on Learning Analytics and Knowledge . Association for Computing Machinery. https://doi.org/10.1145/2090116

Corrin, L., & De Barba, P. (2014). Exploring students’ interpretation of feedback delivered through learning analytics dashboards. In B. Hegarty, J. McDonald, & S-K. Loke (Eds.), ASCILITE 2014 conference proceedings—Rhetoric and reality: Critical perspectives on educational technology (pp. 629–633). Australasian Society for Computers in Learning in Tertiary Education (ASCILITE). https://www.ascilite.org/conferences/dunedin2014/files/concisepapers/223-Corrin.pdf

Costas-Jauregui, V., Oyelere, S. S., Caussin-Torrez, B., Barros-Gavilanes, G., Agbo, F. J., Toivonen, T., Motz, R., & Tenesaca, J. B. (2021). Descriptive analytics dashboard for an inclusive learning environment. 2021 IEEE Frontiers in Education Conference (FIE) (pp. 1–9). IEEE. https://doi.org/10.1109/FIE49875.2021.9637388

De Barba, P., Oliveira, E. A., & Hu, X. (2022). Same graph, different data: A usability study of a student-facing dashboard based on self-regulated learning theory. In S. Wilson, N. Arthars, D. Wardak, P. Yeoman, E. Kalman, & D. Y. T. Liu (Eds.), ASCILITE 2022 conference proceedings: Reconnecting relationships through technology (Article e22168). Australasian Society for Computers in Learning in Tertiary Education (ASCILITE). https://doi.org/10.14742/apubs.2022.168

De Laet, T., Millecamp, M., Ortiz-Rojas, M., Jimenez, A., Maya, R., & Verbert, K. (2020). Adoption and impact of a learning analytics dashboard supporting the advisor: Student dialogue in a higher education institute in Latin America. British Journal of Educational Technology , 51 (4), 1002–1018. https://doi.org/10.1111/bjet.12962

Divjak, B., Svetec, B., & Horvat, D. (2023). Learning analytics dashboards: What do students actually ask for? Proceedings of the 13th International Learning Analytics and Knowledge Conference (pp. 44–56). Association for Computing Machinery. https://doi.org/10.1145/3576050.3576141

Dollinger, M., & Lodge, J. M. (2018). Co-creation strategies for learning analytics. Proceedings of the 8th International Conference on Learning Analytics and Knowledge (pp. 97–101). Association for Computing Machinery. https://doi.org/10.1145/3170358.3170372

Eickholt, J., Weible, J. L., & Teasley, S. D. (2022). Student-facing learning analytics dashboard: Profiles of student use. IEEE Frontiers in Education Conference (FIE) (1–9). IEEE. https://doi.org/10.1109/FIE56618.2022.9962531

Elo, S., & Kyngäs, H. (2008). The qualitative content analysis process. Journal of Advanced Nursing , 62 (1), 107–115. https://doi.org/10.1111/j.1365-2648.2007.04569.x

Elouazizi, N. (2014). Critical factors in data governance for learning analytics. Journal of Learning Analytics , 1 (3), 211–222. https://doi.org/10.18608/jla.2014.13.25

Heikkinen, S., Saqr, M., Malmberg, J., & Tedre, M. (2022). Supporting self-regulated learning with learning analytics interventions: A systematic literature review. Education and Information Technologies , 28 (3), 3059–3088. https://doi.org/10.1007/s10639-022-11281-4

Jivet, I., Scheffel, M., Drachsler, H., & Specht, M. (2017). Awareness is not enough: Pitfalls of learning analytics dashboards in the educational practice. In É. Lavoué, H. Drachsler, K. Verbert, J. Broisin, & M. Pérez-Sanagustín (Eds.), Lecture notes in computer science: Vol. 10474. Data driven approaches in digital education (pp. 82–96). Springer. https://doi.org/10.1007/978-3-319-66610-5_7


Acknowledgements

Language editing: During the preparation of the manuscript, the Quillbot Paraphraser tool was used to improve language clarity in some parts of the text (e.g., word choice), and the manuscript was proofread by a professional. The authors reviewed and revised the text as necessary and take full responsibility for the content of this manuscript.

The authors also thank the communications and information technology specialists of the UAS under study for their support in editing Fig. 1 for publication.

This research was partly funded by Business Finland through the European Regional Development Fund (ERDF) project “Utilization of learning analytics in the various educational levels for supporting self-regulated learning (OAHOT)” (Grant no. 5145/31/2019). The article was completed with grants from the Finnish Cultural Foundation’s Central Ostrobothnia Regional Fund (Grant no. 25221232) and The Emil Aaltonen Foundation (Grant no. 230078), which were awarded to the first author.

Open Access funding provided by University of Lapland.

Author information

Authors and affiliations

Faculty of Education, University of Lapland, Rovaniemi, Finland

Riina Kleimola & Heli Ruokamo

School of Applied Educational Science and Teacher Education, University of Eastern Finland, Joensuu, Finland

Laura Hirsto

Contributions

Conceptualization: RK, LH. Data collection: RK. Formal analysis: RK, LH. Writing—original draft: RK. Writing—review and editing: RK, LH, HR. Supervision: LH, HR. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Riina Kleimola.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Kleimola, R., Hirsto, L. & Ruokamo, H. Promoting higher education students’ self-regulated learning through learning analytics: A qualitative study. Educ Inf Technol (2024). https://doi.org/10.1007/s10639-024-12978-4

Received: 14 February 2024

Accepted: 09 August 2024

Published: 07 September 2024

DOI: https://doi.org/10.1007/s10639-024-12978-4


Keywords

  • Higher education student
  • Qualitative study

