Speaker 1: Validity and reliability are probably among the most confusing and frustrating terms when it comes to qualitative research. There are so many definitions and so many discussions and so many alternative terms have been put forward, so it doesn't really help to understand what validity is and how we can ensure that our findings are valid or how we can increase these findings' validity. So in this video, I'll take you through six steps to increase the validity of your qualitative findings. In quantitative research, validity and reliability are quite straightforward terms. So reliability refers to replicability and consistency of certain measurements and validity to whether this measurement is measuring what it's supposed to measure. So it's quite straightforward. But think about qualitative research. Can we really talk about consistency of our instruments? Imagine that you're interviewing the same person twice and asking the same questions. Even though you're asking the same questions, this person is not likely to give you exactly the same answers. So for this reason, reliability doesn't really apply to qualitative research. It's not that relevant. And usually, people discuss validity rather than reliability of qualitative studies. And validity of qualitative research is usually discussed in terms of three common threats to validity, which are three different types of bias: respondent bias, researcher bias, and reactivity. So respondent bias refers to a situation where your participants are not giving you honest responses for any reason. They may feel that the topic is threatening to their self-esteem, for example, or they may simply try to please you and give you the answers they think you are looking for. Researcher bias refers to the influence of your previous knowledge and assumptions on your study, which may be a very dangerous and a very risky factor in your study. 
I've talked about the role of assumptions quite a lot in my other videos and in my blog. And finally, reactivity refers to the role of you as a researcher and your influence, your physical presence in the research situation, and its possible influence on the data, on what the participants say, and so on and so forth. And in order to minimize the potential influence of these three types of bias on your study, Robson suggests the following six strategies to deal with threats to validity. Prolonged involvement refers to you as a researcher being involved in the research situation in your participants' environment, which is likely to result in the increase in the level of trust between you and your participants. This in turn is likely to reduce the risk of respondent bias and reactivity as you generate this common trust. However, it is likely to increase the risk of researcher bias because you and your participants are likely to generate some set of common assumptions. And as I said, assumptions may be a very dangerous thing for your research. Triangulation is such a broad topic and I'm sure that you've at least heard about it before, if not read about it. Triangulation may refer to many things, including triangulation of data, so when you collect different kinds of data, triangulation of methodology, when you have, for example, mixed methods research, or triangulation of theory, where you're comparing what's emerging from your data to previous existing theories. In any case, triangulation is likely to reduce all kinds of threats to validity, so just remember that it's always good to consider triangulating these different aspects of your study. Peer debriefing refers to any input or feedback from other people. This may happen during internal events, such as seminars or workshops in your university, or external, such as conferences. 
In any case, the feedback and quite likely criticism that you'll receive from other people helps you become more objective and helps you see and become aware of certain limitations of your study. And this is likely to reduce researcher bias, so again, researcher bias, which was about your previous assumptions and your previous knowledge. So you're becoming more objective and more aware of how your study may be improved. Member checking may mean a couple of things, but in essence it refers to the practice of seeking clarification from your participants. So asking them to clarify certain things before you actually jump to conclusions and describe your interpretation of that data. So it may be simply keeping in touch with your participants, sending them a text message or an email, and asking them whether what you think they meant when they said something in the interview is actually what they meant. Another practice is to send them interview transcripts. So to send them the whole transcript and ask them to delete or change things or add things to that transcript. And finally, you have a method called the validation interview, which is all about member checking. So it's basically a whole interview which serves the purpose of this clarification that I discussed. So after you've conducted the first run of analysis after the interview, you conduct another interview and you just ask your participants about your interpretations and about anything that was not clear to you. Negative case analysis is one of my favorite things to do. And I talk extensively about it in my self-study course on how to analyze qualitative data. But basically what it involves is analyzing those cases or data sets that do not match the rest of the data, that do not match the trends or patterns that emerge in the rest of the data. 
And although you may feel tempted to ignore these cases, fearing that they will ruin your data or your findings, quite often they tell you more about the rest of the data than about those cases themselves. So negative cases highlight not just how this one case is different from the rest of the data, but they actually highlight the similarities within the rest of the data. So this is a very, very valuable and important thing to do. And finally, keeping an audit trail means that you keep a record of all the activities involved in your research. So all the audio recordings, your methodological decisions, your researcher diary, your coding book, just having all of this available so you can, for example, demonstrate it to somebody. So again, this way you become really transparent and the validity of your findings cannot really be disputed. Importantly, don't worry about having to apply all these strategies in your study. Firstly, some of them are almost natural, like peer debriefing. So as a student, it's very likely that you will receive feedback, you will talk to other people about your study, you will receive feedback and criticism. So you don't really have to worry about consciously applying it as a strategy. And secondly, you can choose some of these strategies, a combination of these strategies. You don't really have to apply every single one on the list. However, it is important to think about validity and it's very important to talk about it in your study. So if you demonstrate that you are thinking about validity and you demonstrate what exactly you did to increase this validity, it will be a major, major advantage to you and to your study.
- Open access
- Published: 28 August 2024
A qualitative study identifying implementation strategies using the i-PARIHS framework to increase access to pre-exposure prophylaxis at federally qualified health centers in Mississippi
- Trisha Arnold (ORCID: 0000-0003-3556-5717) 1,2,
- Laura Whiteley 2,
- Kayla K. Giorlando 1,
- Andrew P. Barnett 1,2,
- Ariana M. Albanese 2,
- Avery Leigland 1,
- Courtney Sims-Gomillia 3,
- A. Rani Elwy 2,5,
- Precious Patrick Edet 3,
- Demetra M. Lewis 4,
- James B. Brock 4 &
- Larry K. Brown 1,2
Implementation Science Communications, volume 5, Article number: 92 (2024)
Background
Mississippi (MS) experiences disproportionately high rates of new HIV infections and limited availability of pre-exposure prophylaxis (PrEP). Federally Qualified Health Centers (FQHCs) are poised to increase access to PrEP. However, little is known about the implementation strategies needed to successfully integrate PrEP services into FQHCs in MS.
The study had two objectives: to identify barriers and facilitators to PrEP use and to develop tailored implementation strategies for FQHCs.
Methods
Semi-structured interviews were conducted with 19 staff and 17 PrEP-eligible patients in MS FQHCs between April 2021 and March 2022. Interviews were guided by the integrated-Promoting Action on Research Implementation in Health Services (i-PARIHS) framework and covered PrEP facilitators and barriers. Interviews were coded according to the i-PARIHS domains of context, innovation, and recipients, followed by thematic analysis of these codes. Identified implementation strategies were presented to 9 FQHC staff for feedback.
Results
Data suggested that PrEP use at FQHCs is influenced by patient and clinic staff knowledge, with higher levels of knowledge reflecting more PrEP use. Perceived side effects are the most significant barrier to PrEP use for patients, but participants also identified several other barriers, including low HIV risk perception and untrained providers. Despite these barriers, patients also expressed a strong motivation to protect themselves, their partners, and their communities from HIV. Implementation strategies included education and provider training, which were perceived as acceptable and appropriate.
Conclusions
Though patients are motivated to increase protection against HIV, multiple barriers threaten uptake of PrEP within FQHCs in MS. Educating patients and providers, as well as training providers, are promising implementation strategies to overcome these barriers.
Contributions to the literature
We propose utilizing Federally Qualified Health Centers (FQHCs) to increase pre-exposure prophylaxis (PrEP) use among people living in Mississippi.
Little is currently known about how to distribute PrEP at FQHCs.
We comprehensively describe the barriers and facilitators to implementing PrEP at FQHCs.
Utilizing effective implementation strategies of PrEP, such as education and provider training at FQHCs, may increase PrEP use and decrease new HIV infections.
Introduction
The HIV outbreak in Mississippi (MS) is among the most critical in the United States (U.S.). It is distinguished by significant inequalities, a considerable prevalence of HIV in remote areas, and low levels of HIV medical care participation and virologic suppression [ 1 ]. MS has consistently ranked among the states with the highest HIV rates in the U.S. This includes being the 6th highest in new HIV diagnoses [ 2 ] and 2nd highest in HIV diagnoses among men who have sex with men (MSM) compared to other states [ 2 , 3 , 4 ]. Throughout MS, the HIV epidemic disproportionately affects racial and ethnic minority groups, particularly Black individuals. A spatial epidemiology and statistical modeling study completed in MS identified HIV hot spots in the MS Delta region, Southern MS, and in greater Jackson, including surrounding rural counties [ 5 ]. Black race and urban location were positively associated with HIV clusters. This disparity is often driven by the complex interplay of social, economic, and structural factors, including poverty, limited access to healthcare, and stigma [ 5 ].
Pre-exposure prophylaxis (PrEP) has gained significant recognition due to its safety and effectiveness in preventing HIV transmission when taken as prescribed [ 6 , 7 , 8 , 9 ]. However, despite the progression in PrEP and its accessibility, its uptake has been slow among individuals at high risk of contracting HIV, particularly in Southern states such as MS [ 10 , 11 , 12 , 13 , 14 ]. According to the CDC [ 5 ], “4,530 Mississippians at high risk for HIV could potentially benefit from PrEP, but only 927 were prescribed PrEP.” Several barriers hinder PrEP use in MS including limited access to healthcare, cost, stigma, and medical mistrust [ 15 , 16 , 17 ].
Federally qualified health centers (FQHCs) are primary healthcare organizations that are community-based and patient-directed, serve geographically and demographically diverse patients with limited access to medical care, and provide care regardless of a patient’s ability to pay [ 18 ]. FQHCs in these areas exhibit reluctance in prescribing or counseling patients regarding PrEP, primarily because they lack the required training and expertise [ 19 , 20 , 21 ]. Physicians in academic medical centers are more likely to prescribe PrEP compared to those in community settings [ 22 ]. Furthermore, providers at FQHCs may exhibit less familiarity with conducting HIV risk assessments, express concerns regarding potential side effects of PrEP, and have mixed feelings about prescribing it [ 23 , 24 ]. Task shifting might also be needed as some FQHCs may lack sufficient physician support to manage all aspects of PrEP care. Tailored strategies and approaches are necessary for FQHCs to effectively navigate the many challenges that threaten their patients’ access to and utilization of PrEP.
The main objectives of this study were to identify the barriers and facilitators to PrEP use and to develop tailored implementation strategies for FQHCs providing PrEP. To serve these objectives, this study had three specific aims. Aim 1 involved conducting a qualitative formative evaluation guided by the integrated-Promoting Action on Research Implementation in Health Services (i-PARIHS) framework with FQHC staff and PrEP-eligible patients across three FQHCs in MS [ 25 ]. Interviews covered each of the three i-PARIHS domains: context, innovation, and recipients. These interviews sought to identify barriers and facilitators to implementing PrEP. Aim 2 involved using interview data to select and tailor implementation strategies from the Expert Recommendations for Implementing Change (ERIC) project [ 26 ] (e.g., provider training) and methods (e.g., telemedicine, PrEP navigators) for the FQHCs. Aim 3 was to member-check the selected implementation strategies and further refine these if necessary. Data from all three aims are presented below. The Standards for Reporting Qualitative Research (SRQR) checklist was used to improve the transparency of reporting this qualitative study [ 27 ].
Formative evaluation interviews
Interviews were conducted with 19 staff and 17 PrEP-eligible patients from three FQHCs in Jackson, Canton, and Clarksdale, Mississippi. Staff were eligible to participate if they were English-speaking and employed by their organization for at least a year. Eligibility criteria for patients included: 1) English speaking, 2) aged 18 years or older, 3) a present or prior patient at the FQHC, 4) HIV negative, and 5) currently taking PrEP or reported any one of the following factors that may indicate an increased risk for HIV: in the past year, having unprotected sex with more than one person with unknown (or positive) HIV status, testing positive for a sexually transmitted infection (STI) (syphilis, gonorrhea, or chlamydia), or using injection drugs.
Data collection
The institutional review boards of the affiliated hospitals approved this study prior to data collection. An employee at each FQHC acted as a study contact and assisted with recruitment. The contacts advertised the study through word-of-mouth to coworkers and relayed the contact information of those interested to research staff. Patients were informed about the study from FQHC employees and flyers while visiting the FQHC for HIV testing. Those interested filled out consent-to-contact forms, which were securely and electronically sent to research staff. Potential participants were then contacted by a research assistant, screened for eligibility, electronically consented via DocuSign (a HIPAA-compliant signature capturing program), then scheduled for an interview. Interviews occurred remotely over Zoom, a HIPAA-compliant, video conferencing platform. Interviews were conducted until data saturation was reached. In addition to the interview, all participants were asked to complete a short demographics survey via REDCap, a HIPAA-compliant, online, data collection tool. Each participant received a $100 gift card for their time.
The i-PARIHS framework guided interview content and was used to create a semi-structured interview guide [ 28 ]. Within the i-PARIHS framework’s elements, the interview guide content included facilitators and barriers to PrEP use at the FQHC: 1) the innovation, (PrEP), such as its degree of fit with existing practices and values at FQHCs; 2) the recipients (individuals presenting to FQHCs), such as their PrEP awareness, barriers to receiving PrEP such as motivation, resources, support, and personal PrEP experiences; and 3) the context of the setting (FQHCs), such as clinic staff PrEP awareness, barriers providing PrEP services, and recommendations regarding PrEP care. Interviews specifically asked about the use of telemedicine, various methods for expanding PrEP knowledge for both patients and providers (e.g., social media, advertisements, community events/seminars), and location of services (e.g., mobile clinics, gyms, annual health checkups, health fairs). Staff and patients were asked the same interview questions. Data were reviewed and analyzed iteratively throughout data collection, and interview guides were adapted as needed.
Data analysis
Interviews were all audio-recorded, then transcribed by an outside, HIPAA-certified transcription company. Transcriptions were reviewed for accuracy by the research staff who conducted the interviews.
Seven members of the research team (TA, LW, KKG, AB, CSG, AL, LKB) independently coded the transcripts using an a priori coding scheme developed using the i-PARIHS framework and previous studies [ 15 , 16 , 17 ]. All research team members were trained in qualitative methods prior to beginning the coding process. The coding scheme covered: patient PrEP awareness, clinic staff PrEP awareness, barriers to receiving PrEP services, barriers to providing PrEP services, and motivation to take PrEP. Each coder read each line of text and identified whether any of the codes from the a priori coding scheme were potentially at play in each piece of text. Double coding was permitted when applicable. New codes were created and defined when a piece of text from transcripts represented a new important idea. Codes were categorized according to alignment with i-PARIHS constructs. To ensure intercoder reliability, the first 50% of the interviews were coded by two researchers. Team meetings were regularly held to discuss coding discrepancies (to reach a consensus). Coded data were organized using NVivo software (Version 12). Data were deductively analyzed using reflexive thematic analysis, a six-step process for analyzing and reporting qualitative data, to determine themes relevant to selecting appropriate implementation strategies to increase PrEP use at FQHCs in MS [ 29 ]. The resulting thematic categories were used to select ERIC implementation strategies [ 26 ]. Elements for each strategy were then operationalized and the mechanism of change for each strategy was hypothesized [ 30 , 31 ]. Mechanisms define how an implementation strategy will have an effect [ 30 , 31 ]. We used the identified determinants to hypothesize the mechanism of change for each strategy.
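The double-coding of the first 50% of interviews is the kind of step that is often summarized with a chance-corrected agreement statistic; the study itself reports consensus meetings rather than a statistic. As a hedged illustration only, a minimal sketch of Cohen's kappa for two coders is shown below; the code labels and segment data are hypothetical, not drawn from the study.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: chance-corrected agreement between two coders
    who each assigned one code per text segment."""
    assert len(coder_a) == len(coder_b) and len(coder_a) > 0
    n = len(coder_a)
    # Observed agreement: fraction of segments coded identically
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement from each coder's marginal label frequencies
    freq_a = Counter(coder_a)
    freq_b = Counter(coder_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codes applied by two coders to ten transcript segments
a = ["awareness", "barrier", "barrier", "motivation", "awareness",
     "barrier", "motivation", "awareness", "barrier", "motivation"]
b = ["awareness", "barrier", "motivation", "motivation", "awareness",
     "barrier", "motivation", "barrier", "barrier", "motivation"]
print(round(cohens_kappa(a, b), 2))
```

Values near 1 indicate near-perfect agreement; discrepant segments (here, segments 3 and 8) would be the ones flagged for the consensus meetings the study describes.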
Member checking focus groups
Member checking is when the data or results are presented back to the participants, who provide feedback [ 32 ] to check for accuracy [ 33 ] and improve the validity of the data [ 34 ]. This process helps reduce the possibility of misrepresentation of the data [ 35 ]. Member checking was completed with clinic staff rather than patients because the focus was on identifying strategies to implement PrEP in the FQHCs.
Two focus groups were conducted with nine staff from the three FQHCs in MS. Eligibility criteria were the same as above. A combination of previously interviewed staff and non-interviewed staff was recruited. Staff members were a mix of medical (e.g., nurses, patient navigators, social workers) and non-medical (e.g., administrative assistant, branding officer) personnel. Focus group one had six participants and focus group two had three participants. The goal was for half of the focus group participants to be staff who had previously been interviewed and half to be staff who had not.
Participants were recruited and compensated via the same methods as above. All participants electronically consented via DocuSign, and then were scheduled for a focus group. Focus groups occurred remotely over Zoom. Focus groups were conducted until data saturation was reached and no new information surfaced. The goal of the focus groups was to member-check results from the interviews and assess the feasibility and acceptability of selected implementation strategies. PowerPoint slides with the results and implementation strategies written in lay terms were shared with the participants, which is a suggested technique to use in member checking [ 33 ]. Participants were asked to provide feedback on each slide.
Focus groups were all audio-recorded, then transcribed. Transcriptions were reviewed for accuracy by the research staff who completed focus groups. Findings from the focus groups were synthesized using rapid qualitative analyses [ 36 , 37 ]. Facilitators (TA, PPE) both took notes during the focus groups of the primary findings. Notes were then compared during team meetings and results were finalized. Results obtained from previous findings of the interviews and i-PARIHS framework were presented. To ensure the reliability of results, an additional team member (KKG) read the transcripts to verify the primary findings and selected supportive quotes for each theme. Team meetings were regularly held to discuss the results.
Results
Thirty-six semi-structured interviews in HIV hot spots were completed between April 2021 and March 2022. Among the 19 FQHC staff, most had several years of experience working with those at risk for HIV. Staff members were a mix of medical (e.g., doctors, nurses, CNAs, social workers) and non-medical (e.g., receptionists, case managers) personnel. Table 1 provides the demographic characteristics for the 19 FQHC clinic staff and 17 FQHC patients.
Table 2 provides a detailed description of the findings within each category: PrEP knowledge, PrEP barriers, and PrEP motivation. Themes are described in detail, with representative quotes, below. Implementation determinants are specific factors that influence implementation outcomes and can be barriers or facilitators. Table 3 highlights which implementation determinants can increase ( +) or decrease (-) the implementation of PrEP at FQHCs in MS. Each determinant, mapped to its corresponding i-PARIHS construct, is discussed in more detail below. There were no significant differences in responses across the three FQHCs.
PrEP knowledge
Patient PrEP Awareness (i-PARIHS: Recipients)
Most patients had heard of PrEP and were somewhat familiar with the medication. One patient described her knowledge of PrEP as follows, “I know that PrEP is I guess a program that helps people who are high-risk with sexual behaviors and that doesn't have HIV, but they're at high-risk.”- Patient, Age 32, Female, Not on PrEP. However, many lacked knowledge of who may benefit from PrEP, where to receive a prescription, the different medications used for PrEP, and the efficacy of PrEP. Below is a comment made by a patient listing what she would need to know to consider taking PrEP. “I would need to know the price. I would need to know the side effects. I need to know the percentage, like, is it 100 or 90 percent effective.”— Patient, Age Unknown, Female, Not on PrEP. Patients reported learning about PrEP via television and social media commercials, medical providers, and their social networks. One patient reported learning about PrEP from her cousin. “The only person I heard it [PrEP] from was my cousin, and she talks about it all the time, givin’ us advice and lettin’ us know that it’s a good thing.”— Patient, Age Unknown, Female, Not on PrEP.
Clinic Staff PrEP Awareness (i-PARIHS: Context)
Training in who may benefit from PrEP and how to prescribe PrEP varied among clinic staff at different FQHCs. Not all clinics offered formal PrEP education for employees; however, most knew that PrEP is a tool used for HIV prevention. Staff reported learning about PrEP via different speakers and meetings. A clinic staff member reported learning about PrEP during quarterly meetings. “Well, sometimes when we have different staff meetings, we have them quarterly, and we discuss PrEP. Throughout those meetings, they tell us a little bit of information about it, so that's how I know about PrEP.” – Staff, Dental Assistant, Female. Some FQHC staff members reported having very little knowledge of PrEP. One staff member shared that she knew only the “bare minimum” about PrEP, stating,
“I probably know the bare minimum about PrEP. I know a little about it [PrEP] as far as if taken the correct way, it can prevent you from gettin’ HIV. I know it [PrEP] doesn’t prevent against STDs but I know it’s a prevention method for HIV and just a healthier lifestyle.” –Staff, Accountant, Female
A few of the organizations had PrEP navigators to whom providers refer patients. These navigators were well informed on whom to screen for PrEP eligibility and the process for helping the patient obtain a PrEP prescription. One clinic staff member highlighted how providers must be willing to be trained in the process of prescribing PrEP and make time for patients who may benefit. Specifically, she said,
“I have been trained [for PrEP/HIV care]. It just depends on if that’s something that you’re willing to do, they can train on what labs and stuff to order ’cause it’s a whole lot of labs. But usually, I try to do it. At least for everybody that’s high-risk.” – Staff, OB/GYN Nurse Practitioner, Female
Another clinic staff member reported learning about PrEP while observing another staff member being trained in PrEP procedures.
“Well, they kinda explained to me what it [PrEP] is, but I was in training with the actual PrEP person, so it was kinda more so for his training. I know what PrEP is. I know the medications and I know he does a patient assistance program. If my patients have partners who are not HIV positive and wanna continue to be HIV negative, I can refer 'em.” – Staff, Administrative Assistant, Female
PrEP barriers
Barriers Receiving PrEP Services (i-PARIHS: Recipients, Innovation)
Several barriers to receiving PrEP services were identified in both patient and clinic staff interviews. There was a strong concern for the side effects of PrEP. One patient heard that PrEP could cause weight gain and nightmares, “I’m afraid of gaining weight. I’ve heard that actual HIV medication, a lotta people have nightmares or bad dreams.” - Patient, Age 30, Female, Not on PrEP. Another patient was concerned about perceived general side effects that many medications have. “Probably just the [potential] side effects. You know, most of the pills have allergic reactions and side effects, dizziness, seizures, you know.” - Patient, Age 30, Female, Not on PrEP.
The burden of remembering to take a daily pill was also mentioned as a barrier to PrEP use. One female patient explained how PrEP is something she is interested in taking; however, she would be unable to take a daily medication.
“I’m in school now and not used to takin’ a medication every day. I was takin’ a birth control pill, but now take a shot. That was one of the main reasons that I didn’t start PrEP cause they did tell me I could get it that day. So like I wanna be in the mind state to where I’m able to mentally, in my head, take a pill every day. PrEP is somethin’ that I wanna do.” - Patient, Age Unknown, Female, Not on PrEP
Stigma and confidentiality were also barriers to PrEP use at FQHCs. One staff member highlighted how in small communities it is difficult to go to a clinic where employees know you personally. Saying,
“If somebody knows you’re going to talk to this specific person, they know what you’re goin’ back there for, and that could cause you to be a little hesitant in coming. So there’s always gonna be a little hesitancy or mistrust, especially in a small community. Everybody knows everybody. The people that you’re gonna see goes to church with you.” – Staff, Accountant, Female
Some patients had a low perceived risk of HIV and felt PrEP may be an unnecessary addition to their routine. One patient shared that if she perceived she was at risk for HIV, then she would be more interested in taking PrEP, “If it ever came up to the point where I would need it [PrEP], then yes, I would want to know more about it [PrEP].”— Patient, Age Unknown, Female, Not on PrEP.
Some participants expressed difficulty initiating or staying on PrEP because of associated costs, transportation and/or scheduling barriers. A staff member explained how transportation may be available in the city but not available in more rural areas,
“I guess it all depends on the person and where they are. In a city it might take a while, but at least they have the transportation compared to someone that lives in a rural area where transportation might be an issue.” - Staff, Director of Nurses, Female
Childcare during appointments was also mentioned as a barrier, “It looks like here a lot of people don't have transportation or reliable transportation and another thing I don't have anybody to watch my kids right now.” —Staff, Patient Navigator, Female.
Barriers Providing PrEP Services (i-PARIHS: Context)
Barriers to providing PrEP services were also identified. Many providers are still not trained in PrEP procedures and do not feel comfortable discussing or prescribing PrEP to their patients. One patient shared an experience of going to a provider who was PrEP-uninformed and assumed his medication was to treat HIV,
“Once I told her about it [PrEP], she [clinic provider] literally right in front of me, Googled it [PrEP], and then she was Googlin’ the medication, Descovy. I went to get a lab work, and she came back and was like, “Is this for treatment?” I was like, “Why would you automatically think it’s for treatment?” I literally told her and the nurse, “I would never come here if I lived here.” - Patient, Age 50, Male, Taking PrEP
Also, it was reported that there is not enough variety in the kinds of providers who offer PrEP (e.g., OB/GYN, primary care). Many providers, such as OB/GYNs, could be a great way to reach individuals who may benefit from PrEP; however, patients reported a lack of PrEP being discussed in annual visits. “My previous ones (OB/GYN), they’ve talked about birth control and every other method and they asked me if I wanted to get tested for HIV and any STIs, but the conversation never came up about PrEP.” -Patient, Age Unknown, Female, Not on PrEP.
PrEP motivation
Motivation to Take PrEP (i-PARIHS: Recipients)
Participants mentioned several motivators that enhanced patient willingness to use PrEP. Many patients reported being motivated to use PrEP to protect themselves and their partners from HIV. Additionally, participants reported wanting to take PrEP to help their community. One patient reported being motivated by both his sexuality and the rates of HIV in his area, saying, “I mean, I'm bisexual. So, you know, anyway I can protect myself. You know, it's just bein' that the HIV number has risen. You know, that's scary. So just being, in, an area with higher incidents of cases.” — Patient, Age Unknown, Male, Not on PrEP. Some participants reported that experiencing an HIV scare also motivated them to consider using PrEP. One patient acknowledged his behaviors that put him at risk and indicated that this increased his willingness to take PrEP, “I was havin' a problem with, you know, uh, bein' promiscuous. You know? So it [PrEP] was, uh, something that I would think, would help me, if I wasn't gonna change the way I was, uh, actin' sexually.” — Patient, Age Unknown, Male, Taking PrEP.
Table 3 outlines the implementation strategies identified from themes in the interview and focus group data. Below we summarize the barriers and determinants of PrEP uptake for patients attending FQHCs in MS by each i-PARIHS construct (innovation, recipient, context) [28]. Based on the data, we mapped the determinants to specific strategies from the ERIC project [26] and hypothesized the mechanism of change for each strategy [30, 31].
Two focus groups were conducted with nine staff from three FQHCs in MS. There were six participants in the first focus group and three in the second. Staff members were a mix of medical (e.g., nurses, patient navigators, social workers) and non-medical (e.g., administrative assistant, branding officer) personnel. Table 4 provides the demographic characteristics of the FQHC focus group participants.
Staff participating in the focus groups generally agreed that the strategies identified via the interviews were appropriate and acceptable. Focus group content helped to further clarify some of the selected strategies. Below we highlight findings by each strategy domain.
PrEP information dissemination
Participants specified that awareness of HIV is lower, and stigma related to PrEP is higher in rural areas. One participant specifically said,
“There is some awareness but needs to be more awareness, especially to rural areas here in Mississippi. If you live in the major metropolitan areas there is a lot of information but when we start looking at the rural communities, there is not a lot.” – Staff, Branding Officer, Male
Participants strongly agreed that many patients do not realize they may benefit from PrEP and that more inclusive advertisements are needed. A nurse specifically stated,
“When we have new clients that come in that we are trying to inform them about PrEP and I have asked them if they may have seen the commercial, especially the younger population. They will say exactly what you said, that “Oh, I thought that was for homosexuals or whatever,” and I am saying “No, it is for anyone that is at risk.” – Staff, Nurse, Female
Further, staff agreed that younger populations should be included in PrEP efforts to alleviate stigma. Participants added that including PrEP information with other prevention methods (i.e., birth control, vaccines) is a good place to include parents and adolescents:
“Just trying to educate them about Hepatitis and things of that nature, Herpes. I think we should also, as they are approaching 15, the same way we educate them about their cycle coming on and what to expect, it’s almost like we need to start incorporating this (PrEP education), even with different forms of birth control methods with our young ladies.” – Staff, Nurse, Female
Participants agreed that PrEP testimonials would be helpful, specifically from people who started PrEP, stopped, and then were diagnosed with HIV. Participants indicated that this may improve PrEP uptake and persistence. One nurse stated:
“I have seen where a patient has been on PrEP a time or two and at some point, early in the year or later part of the year, and we have seen where they’ve missed those appointments and were not consistent with their medication regimen. And we have seen those who’ve tested positive for HIV. So, if there is a way we could get one of those patients who will be willing to share their testimony, I think they can really be impactful because it’s showing that taking up preventive measures was good and then kind of being inconsistent, this is what the outcome is, unfortunately.” – Staff, Nurse, Female
Increase variety and number of PrEP providers
Participants agreed that a “PrEP champion” (someone to promote PrEP and answer PrEP related questions) would be helpful, especially for providers who need more education about PrEP to feel comfortable prescribing. A patient navigator said,
“I definitely think that a provider PrEP champion is needed in every clinic or organization that is offering PrEP. And it goes back to what we were saying about the providers not being knowledgeable on it [PrEP]. If you have a PrEP champion that already knows this information, it is gonna benefit everybody, patients, patient advocates, the provider, everyone all around. Everyone needs a champion." – Staff, Patient Navigator, Female
Staff noted that they have walk-in appointments for PrEP available; however, they often have too many walk-in appointments to see everyone. They noted that having more resources and providers may alleviate this barrier for some patients:
“We still have challenges with people walking in versus scheduling an appointment, but we do have same day appointments. It is just hard sometimes because the volume that we have at our clinic and the number of patients that we have that walk in on a daily basis.” – Staff, Social Worker, Female
Enhance PrEP provider alliance and trust
Participants agreed that educational meetings would be beneficial, should happen regularly, and should preferably be held in person. This is emphasized by the statement below,
“They should be in-person with handouts. You have to kind of meet people where they are as far as learning. Giving the knowledge, obtaining the knowledge, and using it, and so you have to find a place. I definitely think that yearly in-person training to update guidelines, medication doses, different things like that." – Staff, Patient Navigator, Female
Staff also suggested hosting one very large collaborative event to bring together all organizations that offer PrEP and HIV testing to meet and discuss additional efforts:
“What I would like to see happen here in the state of Mississippi, because we are so high on the list for new HIV infections, I would like to see a big collaborative event. As far as PrEP goes, those that are not on PrEP, one big collaborative event with different community health centers. You do testing, we do PrEP, and the referral get split. Everyone coming together for one main purpose.” – Staff, Patient Navigator, Female
Increase access to PrEP
Participants highlighted that most of the clinics they worked for already offer a variety of service sites (pharmacy, mobile clinic) but that more clinics should offer these alternative options for patients to receive PrEP. One patient navigator outlined the services they offer,
“We have a mobile unit. We do not have a home health travel nurse. We do telephone visits. We offer primary care, OB/GYN. We have our own pharmacy. We also have samples in our pharmacy available to patients that can’t get their medicine on the same day cos we like to implement same day PrEP. It has worked for us. More people should utilize those services.” – Staff, Patient Navigator, Female
Other staff suggested utilizing minute clinics and pharmacies at grocery stores, highlighting that offering PrEP at these locations may increase PrEP uptake.
There has been great scientific expansion of HIV prevention research, and priorities must now pivot to addressing how to best implement effective interventions like PrEP [38]. PrEP remains underutilized among individuals who may benefit, particularly in Southern states such as MS [10, 11, 12, 13, 14]. Implementation science could help ameliorate this by identifying barriers and facilitators to PrEP rollout and uptake. We selected and defined several strategies from the ERIC project [26] to increase PrEP use through FQHCs. Our results, as shown in Table 3, highlight the four domains of strategies selected: 1) PrEP Information Dissemination, 2) Increase Variety and Number of PrEP Providers, 3) Enhance PrEP Provider Alliance and Trust, and 4) Increase Access to PrEP.
Firstly, individuals cannot utilize PrEP if they are not aware of its presence and utility. In Mississippi, advertising PrEP services is integral to implementation efforts given the existing stigma and lack of health literacy in this region [39]. Potential avenues for expanding PrEP awareness are integrating it into educational curricula, adolescents’ routine preventative healthcare, and health fairs. This study complements prior research suggesting that people should be offered sexual health and PrEP education at a younger age to increase awareness of risk, foster change in social norms, and enhance willingness to seek out prevention services [40, 41]. To meet the resulting growing need for PrEP educators, healthcare professionals should receive up-to-date PrEP information and training, so that they can confidently relay information to their patients. Consistent with existing research, increasing provider education could accelerate PrEP expansion [42, 43, 44]. Training programs aimed at increasing provider PrEP knowledge may increase PrEP prescriptions [43] by addressing one of the most frequently listed barriers to PrEP prescription among providers [45, 46].
Many patients prefer to receive PrEP at the healthcare locations they already attend and report that the limited number of healthcare settings offering PrEP is a barrier [39, 47, 48, 49]. The aforementioned PrEP training could increase the number of healthcare workers willing to provide PrEP services. It is also imperative that providers in a diverse range of healthcare settings (e.g., primary care, OB/GYN, pediatricians and adolescent medicine providers) join the list of those offering PrEP to reduce stigma and enhance patient comfort.
These results mirrored other studies in the South that have shown that using relatable healthcare providers and trusted members of the community may serve to facilitate PrEP uptake [41, 50, 51]. If patients have a larger number of PrEP providers to choose from, they can select one that best fits their needs (e.g., location, in-network) and preferences (e.g., familiarity, cultural similarities). Enhanced comfort facilitates a strong patient-provider alliance and can lead to more open and honest communication regarding HIV risk behavior.
The lack of conveniently located PrEP providers is consistently reported as a structural barrier in the South [44, 52]. This creates an increase in the demand on patients to attend regular follow-up appointments. The three strategies above all play a vital role in increasing access to PrEP. If more individuals are trained to provide PrEP care, there will be more PrEP providers, and patients can choose the best option for them. A sizeable influx of new PrEP providers could help staff new care facilities and service options in the community (e.g., mobile health units, home care, community-based clinics, telemedicine). Offering PrEP via telemedicine and mobile clinics has been largely supported in the literature [44, 53, 54]. Intra- and inter-organizational collaborations could similarly increase PrEP access by sharing information and resources to ensure patients get timely, reliable care.
Our results largely supported previous findings from two systematic reviews on barriers to PrEP uptake and implementation strategies to overcome them [39, 47]. Sullivan et al.’s review focused on the Southern U.S. [38], while Bonacci et al. explored steps to improve PrEP equity for Black and Hispanic/Latino communities [47]. Both agreed that barriers to PrEP access are complex. Thus, cooperation from policymakers and the expansion of state Medicaid or targeted Medicaid waivers is vital to make PrEP attainable for those living in the coverage gap. Further, many FQHCs receive Ryan White funding for HIV care and treatment; flexibility in how these other sources of support can be used may help eliminate the cost of PrEP as a barrier. Both reviews also stressed the need for educating community members and healthcare personnel about PrEP, increasing and diversifying PrEP service sites, normalizing PrEP campaigns and screening to alleviate stigma, and streamlining clinical procedures to facilitate same-day PrEP. However, they also noted that these strategies are easier said than done. This further highlights the need to prioritize implementation studies that test the effectiveness and practicality of strategies for overcoming the complex, systemic needs around HIV prevention and treatment.
The present study was able to build on past findings by providing a more holistic view of the barriers to PrEP use and possible strategies to address them through querying PrEP-eligible patients, medical providers, and non-medical staff. By interviewing a diverse range of stakeholders, it was possible to identify unmet patient needs, current PrEP care procedures and infrastructure, and attitudes and needed resources among those who could potentially be trained to provide PrEP in the future.
Limitations
Our results are limited to participants and clinic staff who were willing to engage in a research interview to discuss PrEP and FQHCs. Results are only generalizable to Mississippi and may be less relevant to other geographic areas. However, this is also a strength, given that these strategies are meant to be tailored specifically to FQHCs in MS. Due to COVID-19 restrictions, interviews were conducted via Zoom. This allowed us to reach participants unable to attend an interview in person and may have increased their comfort responding to questions [55]. However, some participants may have been less comfortable discussing these topics via Zoom, which may have limited their willingness to respond.
This study highlighted the need for implementing PrEP strategies to combat HIV in Mississippi. PrEP knowledge, barriers, and motivation were identified as key factors influencing PrEP utilization, and four domains of strategies were identified for improving PrEP accessibility and uptake. Future research should further refine the selected and defined implementation strategies and test their feasibility and acceptability.
Availability of data and materials
De-identified data from this study are not available in a public archive due to the sensitive nature of the data. De-identified data from this study will be made available (as allowable according to institutional IRB standards) by emailing the corresponding author.
Abbreviations
MS: Mississippi
PrEP: Pre-Exposure Prophylaxis
FQHC: Federally Qualified Health Centers
i-PARIHS: Integrated-Promoting Action on Research Implementation in Health Services
ERIC: Expert Recommendations for Implementing Change
MSM: Men Who Have Sex With Men
Mississippi State Department of Health. Mississippi’s Ending the HIV Epidemic Plan: MSDH; 2021. Available from: http://healthyms.com/msdhsite/_static/resources/5116.pdf.
Digre P, Avoundjian T, Johnson K, Peyton D, Lewis C, Barnabas RV, et al. Barriers, facilitators, and cost of integrating HIV-related activities into sexually transmitted disease partner services in Jackson, Mississippi. Sex Transm Dis. 2021;48(3):145–51.
Rosenberg ES, Grey JA, Sanchez TH, Sullivan PS. Rates of prevalent HIV infection, prevalent diagnoses, and new diagnoses among men who have sex with men in US states, metropolitan statistical areas, and counties, 2012–2013. JMIR Public Health Surveill. 2016;2(1):e5684.
Khosropour CM, Backus KV, Means AR, Beauchamps L, Johnson K, Golden MR, et al. A pharmacist-led, same-day, HIV pre-exposure prophylaxis initiation program to increase PrEP uptake and decrease time to PrEP initiation. AIDS Patient Care STDS. 2020;34(1):1–6.
Stopka TJ, Brinkley-Rubinstein L, Johnson K, Chan PA, Hutcheson M, Crosby R, et al. HIV clustering in Mississippi: spatial epidemiological study to inform implementation science in the Deep South. JMIR Public Health Surveill. 2018;4(2):e35.
Choopanya K, Martin M, Suntharasamai P, Sangkum U, Mock PA, Leethochawalit M, et al. Antiretroviral prophylaxis for HIV infection in injecting drug users in Bangkok, Thailand (the Bangkok Tenofovir Study): a randomised, double-blind, placebo-controlled phase 3 trial. The Lancet. 2013;381(9883):2083–90.
Molina J-M, Capitant C, Spire B, Pialoux G, Cotte L, Charreau I, et al. On-demand preexposure prophylaxis in men at high risk for HIV-1 infection. N Engl J Med. 2015;373:2237–46.
Molina J, Charreau I, Spire B, Cotte L, Chas J, Capitant C, et al. ANRS IPERGAY Study Group Efficacy, safety, and effect on sexual behaviour of on-demand pre-exposure prophylaxis for HIV in men who have sex with men: an observational cohort study. Lancet HIV. 2017;4(9):e402–10.
Centers for Disease Control and Prevention. How effective is PrEP? CDC; 2022. Available from: https://www.cdc.gov/hiv/basics/prep/prep-effectiveness.html.
Kirby T, Thornber-Dunwell M. Uptake of PrEP for HIV slow among MSM. Lancet. 2014;383(9915):399–400.
Elopre L, Kudroff K, Westfall AO, Overton ET, Mugavero MJ. The right people, right places, and right practices: disparities in PrEP access among African American men, women and MSM in the Deep South. J Acquired Immune Deficiency Syndr (1999). 2017;74(1):56.
Brantley ML, Rebeiro PF, Pettit AC, Sanders A, Cooper L, McGoy S, et al. Temporal trends and sociodemographic correlates of PrEP uptake in Tennessee, 2017. AIDS Behav. 2019;23:304–12.
Hollcroft MR, Gipson J, Barnes A, Mena L, Dombrowski JC, Ward LM, et al. PrEP acceptance among eligible patients attending the largest PrEP Clinic in Jackson, Mississippi. J Int Assoc Providers AIDS Care (JIAPAC). 2023;22:23259582231167960.
Chase E, Mena L, Johnson KL, Prather M, Khosropour CM. Patterns of Pre-exposure Prophylaxis (PrEP) Use in a Population Accessing PrEP in Jackson, Mississippi. AIDS Behav. 2023;27(4):1082–90. https://doi.org/10.1007/s10461-022-03845-9 .
Arnold T, Brinkley-Rubinstein L, Chan PA, Perez-Brumer A, Bologna ES, Beauchamps L, et al. Social, structural, behavioral and clinical factors influencing retention in Pre-Exposure Prophylaxis (PrEP) care in Mississippi. PLoS ONE. 2017;12(2):e0172354.
Cahill S, Taylor SW, Elsesser SA, Mena L, Hickson D, Mayer KH. Stigma, medical mistrust, and perceived racism may affect PrEP awareness and uptake in black compared to white gay and bisexual men in Jackson, Mississippi and Boston, Massachusetts. AIDS care. 2017;29(11):1351–8.
Arnold T, Gaudiano BA, Barnett AP, Elwy AR, Whiteley L, Giorlando KK, et al. Development of an acceptance based PrEP intervention (ACTPrEP) to engage young black MSM in the South utilizing the Adaptome Model of intervention adaptation. J Contextual Behav Sci. 2023;28:60–70.
Health Resources and Services Administration. What is a Health Center? 2023. Available from: https://bphc.hrsa.gov/about-health-centers/what-health-center.
Oster AM, Dorell CG, Mena LA, Thomas PE, Toledo CA, Heffelfinger JD. HIV risk among young African American men who have sex with men: A case–control study in Mississippi. Am J Public Health. 2011;101(1):137–43.
Hall HI, Li J, McKenna MT. HIV in predominantly rural areas of the United States. J Rural Health. 2005;21(3):245–53.
Williams PB, Sallar AM. HIV/AIDS and African American men: Urban-rural differentials in sexual behavior, HIV knowledge, and attitude towards condoms use. J Natl Med Assoc. 2010;102(12):1139–49.
Krakower D, Mayer KH. Engaging healthcare providers to implement HIV pre-exposure prophylaxis. Curr Opin HIV AIDS. 2012;7(6):593.
Krakower D, Ware N, Mitty JA, Maloney K, Mayer KH. HIV providers’ perceived barriers and facilitators to implementing pre-exposure prophylaxis in care settings: a qualitative study. AIDS Behav. 2014;18:1712–21.
Hakre S, Blaylock JM, Dawson P, Beckett C, Garges EC, Michael NL, Danaher PJ, Scott PT, Okulicz JF. Knowledge, attitudes, and beliefs about HIV pre-exposure prophylaxis among US Air Force Health Care Providers. Medicine (Baltimore). 2016;95(32):e4511. https://doi.org/10.1097/MD.0000000000004511 .
Yakovchenko V, Bolton RE, Drainoni ML, Gifford AL. Primary care provider perceptions and experiences of implementing hepatitis C virus birth cohort testing: a qualitative formative evaluation. BMC Health Serv Res. 2019;19(1):236.
Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, et al. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10(1):21.
O'Brien BC, Harris IB, Beckman TJ, Reed DA, Cook DA. Standards for reporting qualitative research: a synthesis of recommendations. Acad Med. 2014;89(9):1245–51. https://doi.org/10.1097/ACM.0000000000000388 .
Harvey G, Kitson A. PARIHS revisited: from heuristic to integrated framework for the successful implementation of knowledge into practice. Implement Sci. 2016;11(1):33.
Braun V, Clarke V, Hayfield N, Terry G. Thematic Analysis. In: Liamputtong P, editor. Handbook of research methods in health social sciences. Singapore: Springer Singapore; 2019. p. 843–60.
Lewis CC, Klasnja P, Powell BJ, Lyon AR, Tuzzio L, Jones S, Walsh-Bailey C, Weiner B. From Classification to Causality: Advancing Understanding of Mechanisms of Change in Implementation Science. Front Public Health. 2018;6:136. https://doi.org/10.3389/fpubh.2018.00136 .
Lewis CC, Powell BJ, Brewer SK, Nguyen AM, Schriger SH, Vejnoska SF, et al. Advancing mechanisms of implementation to accelerate sustainable evidence-based practice integration: protocol for generating a research agenda. BMJ Open. 2021;11(10):e053474.
Varpio L, Ajjawi R, Monrouxe LV, O’Brien BC, Rees CE. Shedding the cobra effect: problematising thematic emergence, triangulation, saturation and member checking. Med Educ. 2017;51(1):40–50.
McKim C. Meaningful member-checking: a structured approach to member-checking. Am J Qual Res. 2023;7(2):41–52.
Elo S, Kääriäinen M, Kanste O, Pölkki T, Utriainen K, Kyngäs H. Qualitative content analysis: A focus on trustworthiness. SAGE Open. 2014;4(1):2158244014522633.
Candela AG. Exploring the function of member checking. The qualitative report. 2019;24(3):619–28.
Vindrola-Padros C, Johnson GA. Rapid techniques in qualitative research: a critical review of the literature. Qual Health Res. 2020;30(10):1596–604.
Neal JW, Neal ZP, VanDyke E, Kornbluh M. Expediting the analysis of qualitative data in evaluation: a procedure for the Rapid Identification of Themes from Audio recordings (RITA). Am J Eval. 2015;36(1):118–32.
Theobald S, Brandes N, Gyapong M, El-Saharty S, Proctor E, Diaz T, et al. Implementation research: new imperatives and opportunities in global health. The Lancet. 2018;392(10160):2214–28.
Sullivan PS, Mena L, Elopre L, Siegler AJ. Implementation strategies to increase PrEP Uptake in the South. Curr HIV/AIDS Rep. 2019;16(4):259–69.
Elopre L, Ott C, Lambert CC, Amico KR, Sullivan PS, Marrazzo J, et al. Missed prevention opportunities: why young, black MSM with recent HIV diagnosis did not access HIV pre-exposure prophylaxis services. AIDS Behav. 2021;25(5):1464–73.
Arnold T, Giorlando KK, Barnett AP, Gaudiano BA, Rogers BG, Whiteley L, et al. Social, structural, behavioral, and clinical barriers influencing Pre-exposure Prophylaxis (PrEP) use among young black men who have sex with men in the south: a qualitative update to a 2016 study. Arch Sex Behav. 2024;53(2):785–97.
Edelman EJ, Moore BA, Calabrese SK, Berkenblit G, Cunningham CO, Ogbuagu O, et al. Preferences for implementation of HIV pre-exposure prophylaxis (PrEP): Results from a survey of primary care providers. Prev Med Rep. 2020;17: 101012.
Petroll AE, Walsh JL, Owczarzak JL, McAuliffe TL, Bogart LM, Kelly JA. PrEP awareness, familiarity, comfort, and prescribing experience among US primary care providers and HIV specialists. AIDS Behav. 2017;21(5):1256–67.
Barnett AP, Arnold T, Elwy AR, Brock JB, Giorlando KK, Sims-Gomillia C, et al. Considerations for PrEP implementation at federally qualified health centers in Mississippi: perspectives from staff and patients. AIDS Educ Prev. 2023;35(4):309–19.
Seidman D, Carlson K, Weber S, Witt J, Kelly PJ. United States family planning providers’ knowledge of and attitudes towards preexposure prophylaxis for HIV prevention: a national survey. Contraception. 2016;93(5):463–9.
Clement ME, Seidelman J, Wu J, Alexis K, McGee K, Okeke NL, et al. An educational initiative in response to identified PrEP prescribing needs among PCPs in the Southern U.S. AIDS Care. 2018;30(5):650–5.
Bonacci RA, Smith DK, Ojikutu BO. Toward greater pre-exposure prophylaxis equity: increasing provision and uptake for black and Hispanic/Latino individuals in the US. Am J Prev Med. 2021;61(5 Suppl 1):S60-s72.
Arnold T, Whiteley L, Elwy RA, Ward LM, Konkle-Parker DJ, Brock JB, et al. Mapping Implementation Science with Expert Recommendations for Implementing Change (MIS-ERIC): strategies to improve PrEP use among black cisgender women living in Mississippi. J Racial Ethn Health Disparities. 2023;10(6):2744–61.
Hirschhorn LR, Brown RN, Friedman EE, Greene GJ, Bender A, Christeller C, Bouris A, Johnson AK, Pickett J, Modali L, Ridgway JP. Black Cisgender Women's PrEP Knowledge, Attitudes, Preferences, and Experience in Chicago. J Acquir Immune Defic Syndr. 2020;84(5):497–507. https://doi.org/10.1097/QAI.0000000000002377 .
Pichon LC, Teti M, McGoy S, Murry VM, Juarez PD. Engaging black men who have sex with men (MSM) in the South in identifying strategies to increase PrEP uptake. BMC Health Serv Res. 2022;22(1):1491.
Auerbach JD, Kinsky S, Brown G, Charles V. Knowledge, attitudes, and likelihood of pre-exposure prophylaxis (PrEP) use among US women at risk of acquiring HIV. AIDS Patient Care STDS. 2015;29(2):102–10.
Siegler AJ, Bratcher A, Weiss KM. Geographic access to preexposure prophylaxis clinics among men who have sex with men in the United States. Am J Public Health. 2019;109(9):1216–23.
Rousseau E, Julies RF, Madubela N, Kassim S. Novel platforms for biomedical HIV prevention delivery to key populations — community mobile clinics, peer-supported, pharmacy-Led PrEP delivery, and the use of telemedicine. Curr HIV/AIDS Rep. 2021;18(6):500–7.
Player MS, Cooper NA, Perkins S, Diaz VA. Evaluation of a telemedicine pilot program for the provision of HIV pre-exposure prophylaxis in the Southeastern United States. AIDS Care. 2022;34(12):1499–505.
Gray LM, Wong-Wylie G, Rempel GR, Cook K. Expanding qualitative research interviewing strategies: Zoom video communications. Qual Rep. 2020;25(5):1292–301.
Acknowledgements
The authors would like to acknowledge and thank Sarah Bailey for reviewing the manuscript and assisting with formatting.
This study was funded by the National Institutes of Health (R34MH115744) and was facilitated by the Providence/Boston Center for AIDS Research (P30AI042853). Additionally, work by Dr. Trisha Arnold was supported by the National Institute of Mental Health Grant (K23MH124539-01A1) and work by Dr. Andrew Barnett was supported by the National Institute of Mental Health Grant (T32MH078788). Dr. Elwy is supported by a Department of Veterans Affairs Research Career Scientist Award (RCS 23–018).
Author information
Authors and affiliations
Department of Psychiatry, Rhode Island Hospital, One Hoppin Street, Coro West, 204, Providence, RI, 02903, USA
Trisha Arnold, Kayla K. Giorlando, Andrew P. Barnett, Avery Leigland & Larry K. Brown
Department of Psychiatry and Human Behavior, Warren Alpert Medical School of Brown University, Providence, RI, USA
Trisha Arnold, Laura Whiteley, Andrew P. Barnett, Ariana M. Albanese, A. Rani Elwy & Larry K. Brown
Department of Population Health Science, University of Mississippi Medical Center, Jackson, MS, USA
Courtney Sims-Gomillia & Precious Patrick Edet
Department of Medicine, University of Mississippi Medical Center, Jackson, MS, USA
Demetra M. Lewis & James B. Brock
Center for Healthcare Organization and Implementation Research, VA Bedford Healthcare System, Bedford, MA, USA
A. Rani Elwy
Contributions
TA and ARE led the conceptualization of this paper. TA, LW, LKB, DML, and JBB completed the literature search and study design. TA, LW, LKB, KKG, PPE, AB, AL, and CSG assisted with analyzing and interpreting the data. TA, ARE, and AMA finalized the results and implementation concepts of the study. All authors read and approved the final manuscript.
Corresponding author
Correspondence to Trisha Arnold.
Ethics declarations
Ethics approval and consent to participate
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. The study was approved by both the Rhode Island Hospital Institutional Review Board and the University of Mississippi Medical Center Institutional Review Board. Informed consent was obtained from all individual participants included in the study.
Consent for publication
Not applicable.
Competing interests
All authors declare that they have no conflicts of interest or competing interests.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .
About this article
Cite this article.
Arnold, T., Whiteley, L., Giorlando, K.K. et al. A qualitative study identifying implementation strategies using the i-PARIHS framework to increase access to pre-exposure prophylaxis at federally qualified health centers in Mississippi. Implement Sci Commun 5 , 92 (2024). https://doi.org/10.1186/s43058-024-00632-6
Received : 22 May 2024
Accepted : 21 August 2024
Published : 28 August 2024
- HIV Prevention
- Implementation Science
- Pre-exposure prophylaxis (PrEP)
- Telemedicine
Implementation Science Communications
ISSN: 2662-2211
- Volume 18, Issue 2
- Issues of validity and reliability in qualitative research
- Helen Noble 1 ,
- Joanna Smith 2
- 1 School of Nursing and Midwifery, Queen's University Belfast, Belfast, UK
- 2 School of Human and Health Sciences, University of Huddersfield, Huddersfield, UK
- Correspondence to Dr Helen Noble, School of Nursing and Midwifery, Queen's University Belfast, Medical Biology Centre, 97 Lisburn Rd, Belfast BT9 7BL, UK; helen.noble{at}qub.ac.uk
https://doi.org/10.1136/eb-2015-102054
Evaluating the quality of research is essential if findings are to be utilised in practice and incorporated into care delivery. In a previous article we explored ‘bias’ across research designs and outlined strategies to minimise bias. 1 The aim of this article is to further outline rigour, or the integrity in which a study is conducted, and ensure the credibility of findings in relation to qualitative research. Concepts such as reliability, validity and generalisability typically associated with quantitative research and alternative terminology will be compared in relation to their application to qualitative research. In addition, some of the strategies adopted by qualitative researchers to enhance the credibility of their research are outlined.
Are the terms reliability and validity relevant to ensuring credibility in qualitative research?
Although the tests and measures used to establish the validity and reliability of quantitative research cannot be applied to qualitative research, there are ongoing debates about whether terms such as validity, reliability and generalisability are appropriate to evaluate qualitative research. 2–4 In the broadest context these terms are applicable, with validity referring to the integrity and application of the methods undertaken and the precision with which the findings accurately reflect the data, while reliability describes consistency within the employed analytical procedures. 4 However, if qualitative methods are inherently different from quantitative methods in terms of philosophical positions and purpose, then alternative frameworks for establishing rigour are appropriate. 3 Lincoln and Guba 5 offer alternative criteria for demonstrating rigour within qualitative research, namely truth value, consistency, neutrality and applicability. Table 1 outlines the differences in terminology and criteria used to evaluate qualitative research.
Terminology and criteria used to evaluate the credibility of research findings
What strategies can qualitative researchers adopt to ensure the credibility of the study findings?
Unlike quantitative researchers, who apply statistical methods for establishing validity and reliability of research findings, qualitative researchers aim to design and incorporate methodological strategies to ensure the ‘trustworthiness’ of the findings. Such strategies include:
Accounting for personal biases which may have influenced findings; 6
Acknowledging biases in sampling and ongoing critical reflection of methods to ensure sufficient depth and relevance of data collection and analysis; 3
Meticulous record keeping, demonstrating a clear decision trail and ensuring interpretations of data are consistent and transparent; 3 , 4
Establishing a comparison case/seeking out similarities and differences across accounts to ensure different perspectives are represented; 6 , 7
Including rich and thick verbatim descriptions of participants’ accounts to support findings; 7
Demonstrating clarity in terms of thought processes during data analysis and subsequent interpretations 3 ;
Engaging with other researchers to reduce research bias; 3
Respondent validation: includes inviting participants to comment on the interview transcript and whether the final themes and concepts created adequately reflect the phenomena being investigated; 4
Data triangulation, 3 , 4 whereby different methods and perspectives help produce a more comprehensive set of findings. 8 , 9
Table 2 provides some specific examples of how some of these strategies were utilised to ensure rigour in a study that explored the impact of being a family carer to patients with stage 5 chronic kidney disease managed without dialysis. 10
Strategies for enhancing the credibility of qualitative research
In summary, it is imperative that all qualitative researchers incorporate strategies to enhance the credibility of a study during research design and implementation. Although there is no universally accepted terminology and criteria used to evaluate qualitative research, we have briefly outlined some of the strategies that can enhance the credibility of study findings.
Competing interests None.
Validity in Qualitative Research
How do we assess and assure Validity in Qualitative Research ? This can be a bit of a tricky topic, as qualitative research involves humans understanding humans, a necessarily subjective practice from the get-go. Nevertheless, there are some questions the researcher can ask and some techniques he or she can employ to establish a reasonable level of validity.
Whether employed in business or the social sciences, qualitative research is often used to inform decisions with important implications, so assuring a high level of validity is essential. While the results should never be extrapolated to a larger population (the samples are never large enough to be statistically significant), validity can be established such that the findings can inform meaningful decisions.
One measure of validity in qualitative research is to ask questions such as: “Does it make sense?” and “Can I trust it?” This may seem like a fuzzy measure of validity to someone disciplined in quantitative research, for example, but in a science that deals in themes and context, these questions are important.
Steps in Ensuring Validity
The first step in ensuring validity is choosing a well-trained and skilled moderator (or facilitator). A good moderator will check personal bias and expectations at the door. He or she is interested in learning as much candid information from the research participants as possible, and respectful neutrality is a must if the goal is valid qualitative research. For this reason, organizations often employ moderators from outside the group or organization to help ensure that the responses are genuine and not influenced by “what we want to hear.” For some academic applications, the moderator will disclose his or her perspectives and biases in the reporting of the data as a matter of full disclosure.
While a good moderator is key, a good sample group is also essential. Are the participants truly members of the segment from which they are recruited? Ethical recruiting is an important issue in qualitative research, as data collected from individuals who are not truly representative of their segment will not lead to valid results.
Another way to promote validity is to employ a strategy known as triangulation. To accomplish this, the research is done from multiple perspectives. This could take the form of using several moderators, different locations, multiple individuals analyzing the same data . . . essentially any technique that would inform the results from different angles. For some applications, for example, an organization may choose to run focus groups in parallel through two entirely different researchers and then compare the results.
Validity in qualitative research can also be checked by a technique known as respondent validation. This technique involves testing initial results with participants to see if they still ring true. Although the research has been interpreted and condensed, participants should still recognize the results as authentic and, at this stage, may even be able to refine the researcher’s understanding.
When the study permits, deep saturation into the research will also promote validity. If responses become more consistent across larger numbers of samples, the data becomes more reliable.
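The saturation idea can be sketched with a toy example: track how many new codes each additional interview contributes, and treat a run of zeros as a sign that saturation has been reached. The interviews and code names below are invented purely for illustration.

```python
# Hypothetical illustration of code saturation across interviews.
# Each set holds the codes applied to one interview (invented data).
interviews = [
    {"cost", "access"},
    {"access", "stigma"},
    {"cost", "trust"},
    {"trust", "stigma"},   # no new codes from here on
    {"access", "cost"},
]

seen = set()
new_per_interview = []
for codes in interviews:
    new_codes = codes - seen          # codes not encountered before
    new_per_interview.append(len(new_codes))
    seen |= codes

# Trailing zeros suggest additional interviews yield no new themes.
print(new_per_interview)  # [2, 1, 1, 0, 0]
```

In a real study the judgment is of course qualitative, not purely numeric, but plotting a curve like this is a common way to document saturation.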
Another technique to establish validity is to actively seek alternative explanations for what appear to be research results. If the researcher is able to exclude other scenarios, he or she is able to strengthen the validity of the findings. Related to this technique is asking questions in an inverse format.
While the techniques to establish validity in qualitative research may seem less concrete and defined than in some of the other scientific disciplines, strong research techniques will, indeed, assure an appropriate level of validity in qualitative research.
Additional Webpages Related to Validity in Qualitative Research
- Intellectus Qualitative : The ultimate platform designed to redefine your qualitative research experience.
- Conducting Qualitative Research
- Content Analysis
- Ethnography
- The Focus Group
Reliability vs. Validity in Research | Difference, Types and Examples
Published on July 3, 2019 by Fiona Middleton . Revised on June 22, 2023.
Reliability and validity are concepts used to evaluate the quality of research. They indicate how well a method, technique, or test measures something. Reliability is about the consistency of a measure, and validity is about the accuracy of a measure.
It’s important to consider reliability and validity when you are creating your research design , planning your methods, and writing up your results, especially in quantitative research . Failing to do so can lead to several types of research bias and seriously affect your work.
| | Reliability | Validity |
|---|---|---|
| What does it tell you? | The extent to which the results can be reproduced when the research is repeated under the same conditions. | The extent to which the results really measure what they are supposed to measure. |
| How is it assessed? | By checking the consistency of results across time, across different observers, and across parts of the test itself. | By checking how well the results correspond to established theories and other measures of the same concept. |
| How do they relate? | A reliable measurement is not always valid: the results might be consistent, but they're not necessarily correct. | A valid measurement is generally reliable: if a test produces accurate results, they should be reproducible. |
Table of contents
- Understanding reliability vs validity
- How are reliability and validity assessed?
- How to ensure validity and reliability in your research
- Where to write about reliability and validity in a thesis
- Other interesting articles
Reliability and validity are closely related, but they mean different things. A measurement can be reliable without being valid. However, if a measurement is valid, it is usually also reliable.
What is reliability?
Reliability refers to how consistently a method measures something. If the same result can be consistently achieved by using the same methods under the same circumstances, the measurement is considered reliable.
What is validity?
Validity refers to how accurately a method measures what it is intended to measure. If research has high validity, that means it produces results that correspond to real properties, characteristics, and variations in the physical or social world.
High reliability is one indicator that a measurement is valid. If a method is not reliable, it probably isn’t valid.
For example, if a thermometer shows different temperatures each time, even though you have carefully controlled conditions to ensure the sample's temperature stays the same, the thermometer is probably malfunctioning, and therefore its measurements are not valid.
However, reliability on its own is not enough to ensure validity. Even if a test is reliable, it may not accurately reflect the real situation.
Validity is harder to assess than reliability, but it is even more important. To obtain useful results, the methods you use to collect data must be valid: the research must be measuring what it claims to measure. This ensures that your discussion of the data and the conclusions you draw are also valid.
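The thermometer example can be sketched numerically. In this toy illustration (all readings invented), a thermometer with a constant calibration offset is reliable (small spread across repeated readings) yet not valid (readings are systematically wrong):

```python
import statistics

# Invented readings from a thermometer with a roughly +2 °C offset.
true_temp = 37.0
readings = [39.1, 39.0, 39.1, 39.2, 39.0]

spread = statistics.stdev(readings)            # small spread  -> reliable
bias = statistics.mean(readings) - true_temp   # large bias    -> not valid

print(round(spread, 2), round(bias, 2))  # 0.08 2.08
```

The small standard deviation shows consistency (reliability), while the 2-degree bias shows the instrument is not measuring the true temperature (validity).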
Reliability can be estimated by comparing different versions of the same measurement. Validity is harder to assess, but it can be estimated by comparing the results to other relevant data or theory. Methods of estimating reliability and validity are usually split up into different types.
Types of reliability
Different types of reliability can be estimated through various statistical methods.
| Type of reliability | What does it assess? | Example |
|---|---|---|
| Test-retest | The consistency of a measure across time: do you get the same results when you repeat the measurement? | A group of participants complete a questionnaire designed to measure personality traits. If they repeat the questionnaire days, weeks or months apart and give the same answers, this indicates high test-retest reliability. |
| Inter-rater | The consistency of a measure across raters: do you get the same results when different people conduct the same measurement? | Based on an assessment criteria checklist, five examiners submit substantially different results for the same student project. This indicates that the assessment checklist has low inter-rater reliability (for example, because the criteria are too subjective). |
| Internal consistency | The consistency of the measurement itself: do you get the same results from different parts of a test that are designed to measure the same thing? | You design a questionnaire to measure self-esteem. If you randomly split the results into two halves, there should be a strong correlation between the two sets of results. If the two results are very different, this indicates low internal consistency. |
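Test-retest reliability is commonly quantified as the Pearson correlation between two administrations of the same instrument. The scores below are invented for illustration; a correlation near 1 would suggest high test-retest reliability.

```python
# Minimal sketch: test-retest reliability via Pearson correlation.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

time1 = [12, 18, 15, 20, 9, 14]   # invented scores, first administration
time2 = [13, 17, 16, 21, 10, 13]  # same participants, weeks later

r = pearson(time1, time2)
print(round(r, 2))  # 0.97 -> high test-retest reliability
```

The same correlation machinery underlies split-half estimates of internal consistency: split the items in two, score each half, and correlate the halves.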
Types of validity
The validity of a measurement can be estimated based on three main types of evidence. Each type can be evaluated through expert judgement or statistical methods.
| Type of validity | What does it assess? | Example |
|---|---|---|
| Construct | The adherence of a measure to existing theory and knowledge of the concept being measured. | A self-esteem questionnaire could be assessed by measuring other traits known or assumed to be related to the concept of self-esteem (such as social skills). Strong correlation between the scores for self-esteem and associated traits would indicate high construct validity. |
| Content | The extent to which the measurement covers all aspects of the concept being measured. | A test that aims to measure a class of students' level of Spanish contains reading, writing and speaking components, but no listening component. Experts agree that listening comprehension is an essential aspect of language ability, so the test lacks content validity for measuring the overall level of ability in Spanish. |
| Criterion | The extent to which the result of a measure corresponds to other valid measures of the same concept. | A survey is conducted to measure the political opinions of voters in a region. If the results accurately predict the later outcome of an election in that region, this indicates that the survey has high criterion validity. |
To assess the validity of a cause-and-effect relationship, you also need to consider internal validity (the design of the experiment ) and external validity (the generalizability of the results).
The reliability and validity of your results depends on creating a strong research design , choosing appropriate methods and samples, and conducting the research carefully and consistently.
Ensuring validity
If you use scores or ratings to measure variations in something (such as psychological traits, levels of ability or physical properties), it’s important that your results reflect the real variations as accurately as possible. Validity should be considered in the very earliest stages of your research, when you decide how you will collect your data.
- Choose appropriate methods of measurement
Ensure that your method and measurement technique are high quality and targeted to measure exactly what you want to know. They should be thoroughly researched and based on existing knowledge.
For example, to collect data on a personality trait, you could use a standardized questionnaire that is considered reliable and valid. If you develop your own questionnaire, it should be based on established theory or findings of previous studies, and the questions should be carefully and precisely worded.
- Use appropriate sampling methods to select your subjects
To produce valid and generalizable results, clearly define the population you are researching (e.g., people from a specific age range, geographical location, or profession). Ensure that you have enough participants and that they are representative of the population. Failing to do so can lead to sampling bias and selection bias .
Ensuring reliability
Reliability should be considered throughout the data collection process. When you use a tool or technique to collect data, it’s important that the results are precise, stable, and reproducible .
- Apply your methods consistently
Plan your method carefully to make sure you carry out the same steps in the same way for each measurement. This is especially important if multiple researchers are involved.
For example, if you are conducting interviews or observations , clearly define how specific behaviors or responses will be counted, and make sure questions are phrased the same way each time. Failing to do so can lead to errors such as omitted variable bias or information bias .
- Standardize the conditions of your research
When you collect your data, keep the circumstances as consistent as possible to reduce the influence of external factors that might create variation in the results.
For example, in an experimental setup, make sure all participants are given the same information and tested under the same conditions, preferably in a properly randomized setting. Failing to do so can lead to a placebo effect , Hawthorne effect , or other demand characteristics . If participants can guess the aims or objectives of a study, they may attempt to act in more socially desirable ways.
It’s appropriate to discuss reliability and validity in various sections of your thesis or dissertation or research paper . Showing that you have taken them into account in planning your research and interpreting the results makes your work more credible and trustworthy.
| Section | Discuss |
|---|---|
| Literature review | What have other researchers done to devise and improve methods that are reliable and valid? |
| Methodology | How did you plan your research to ensure reliability and validity of the measures used? This includes the chosen sample set and size, sample preparation, external conditions, and measuring techniques. |
| Results | If you calculate reliability and validity, state these values alongside your main results. |
| Discussion | This is the moment to talk about how reliable and valid your results actually were. Were they consistent, and did they reflect true values? If not, why not? |
| Conclusion | If reliability and validity were a big problem for your findings, it might be helpful to mention this here. |
If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.
- Normal distribution
- Degrees of freedom
- Null hypothesis
- Discourse analysis
- Control groups
- Mixed methods research
- Non-probability sampling
- Quantitative research
- Ecological validity
Research bias
- Rosenthal effect
- Implicit bias
- Cognitive bias
- Selection bias
- Negativity bias
- Status quo bias
Cite this Scribbr article
Middleton, F. (2023, June 22). Reliability vs. Validity in Research | Difference, Types and Examples. Scribbr. Retrieved August 29, 2024, from https://www.scribbr.com/methodology/reliability-vs-validity/
Validity and Reliability in Qualitative Research
Post prepared and written by Joe Tise, PhD, Senior Education Researcher
In this series we have explored the many ways in which evidence of validity can be produced and reliable data can be collected. To be sure, the bulk of this series focused on quantitative research, but any mixed-methods or qualitative researcher will tell you that quantitative research tells us only one piece of the puzzle.
Qualitative research is needed to answer questions not suited to quantitative research, and validity and reliability need to be considered in qualitative research too. Qualitative research includes numerous methodological approaches, such as individual and focus group interviews, naturalistic observations, artifact analysis, and even open-ended survey questions. Unlike quantitative research, which utilizes forms, surveys, tests, institutional data, and the like, in qualitative research the researcher often is both the data collection mechanism and the analysis mechanism.
Researchers usually don’t run a statistical analysis on qualitative data; instead, a researcher typically analyzes the qualitative data, extracts meaning from it, and answers a research question from that meaning. Though this is similar to quantitative research, some of the analysis methods can be viewed as more subjective.
So, how can we know that results obtained from a qualitative analysis reflect some truth, and not the researcher’s personal biases, experiences, or lenses?
Reliability and validity are equally important to consider in qualitative research. Ways to enhance validity in qualitative research include:
- Use multiple analysts
- Create/maintain audit trails
- Conduct member checks
- Include positionality statements
- Solicit peer review of analytical approach
- Triangulate findings via multiple data sources
- Search for and discuss negative cases (i.e., those which refute a theme)
Building reliability can include one or more of the following:
- Clearly define your codes and criteria for applying them
- Use detailed transcriptions which include things like pauses, crosstalk, and non-word verbal expressions
- Train coders on a common set of data
- Ensure coders are consistent with each other before coding the rest of the data
- Periodically reassess interrater agreement/reliability
- Use high-quality recording devices
The most well-known approaches to qualitative reliability in education research are inter-rater reliability and consensus coding. I want to make a distinction between two common measures of inter-rater reliability: percent agreement and Cohen's Kappa.
Percent agreement refers to the percentage of coding instances in which two raters assign the same code to a common "piece" of data. Because it is a simple percentage, it is intuitive to understand. But it does not account for chance: in any deductive coding framework (i.e., when all possible codes are defined in advance), there is some probability that two coders will apply the same code without actually "seeing" the same thing in the data.
By contrast, Cohen's Kappa is designed to parse out the influence of chance agreement; for this reason, Cohen's Kappa will be smaller than the percent agreement for any dataset with imperfect agreement. Many qualitative data analysis software packages (e.g., NVivo) will calculate both percent agreement and Cohen's Kappa.
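The relationship between the two measures can be shown with a minimal sketch for two hypothetical coders (the code labels are invented; dedicated software such as NVivo computes these metrics for you):

```python
from collections import Counter

# Invented codes assigned by two raters to the same eight data segments.
rater1 = ["A", "A", "B", "B", "A", "C", "B", "A"]
rater2 = ["A", "B", "B", "B", "A", "C", "A", "A"]

n = len(rater1)
# Percent agreement: fraction of segments coded identically.
observed = sum(a == b for a, b in zip(rater1, rater2)) / n

# Expected chance agreement from each rater's marginal code frequencies.
c1, c2 = Counter(rater1), Counter(rater2)
expected = sum(c1[k] * c2[k] for k in c1) / n ** 2

# Cohen's Kappa discounts the agreement expected by chance alone.
kappa = (observed - expected) / (1 - expected)

print(round(observed, 2), round(kappa, 2))  # 0.75 0.58
```

Note how Kappa (0.58) is well below the raw agreement (0.75): a sizable share of the raters' matches could have occurred by chance given how often each code is used.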
In consensus coding, multiple raters code the same data, discuss the codes that may apply, and decide together how to code the data. With consensus coding, the need for inter-rater agreement/reliability metrics is circumvented, because by definition you will always have 100% agreement. The major downside of consensus coding is, of course, the time and effort needed to engage in it. With large sets of qualitative data, consensus coding may not be feasible.
For a deeper dive into these topics, there are many excellent textbooks that explore the nuances of qualitative validity and reliability. Below, you’ll find a selection of recommended resources, as well as others that provide detailed insights into strengthening qualitative research methods.
Corbin, J., & Strauss, A. (2015). Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory (4th ed.). Sage Publications. Creswell, J. W., & Báez, J. C. (2021). 30 Essential Skills for the Qualitative Researcher (2nd ed.). Sage Publications. Creswell, J. W., & Poth, C. N. (2018). Qualitative inquiry and research design: Choosing among five approaches . Sage Publications. Saldaña, J. (2013). An introduction to codes and coding. In The coding manual for qualitative researchers (pp. 1–40). Sage Publications.
Rigor or Reliability and Validity in Qualitative Research: Perspectives, Strategies, Reconceptualization, and Recommendations
Cypress, Brigitte S. EdD, RN, CCRN
Brigitte S. Cypress, EdD, RN, CCRN , is an assistant professor of nursing, Lehman College and The Graduate Center, City University of New York.
The author has disclosed that she has no significant relationships with, or financial interest in, any commercial companies pertaining to this article.
Address correspondence and reprint requests to: Brigitte S. Cypress, EdD, RN, CCRN, Lehman College and The Graduate Center, City University of New York, PO Box 2205, Pocono Summit, PA 18346 ( [email protected] ).
Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal’s Web site ( www.dccnjournal.com ).
Issues are still raised even now in the 21st century by the persistent concern with achieving rigor in qualitative research. There is also a continuing debate about the analogous terms reliability and validity in naturalistic inquiries as opposed to quantitative investigations. This article presents the concept of rigor in qualitative research using a phenomenological study as an exemplar to further illustrate the process. Elaborating on epistemological and theoretical conceptualizations by Lincoln and Guba, strategies congruent with the qualitative perspective for ensuring validity to establish the credibility of the study are described. A synthesis of the historical development of validity criteria evident in the literature over the years is explored. Recommendations are made for use of the term rigor instead of trustworthiness and for the reconceptualization and renewed use of the concepts of reliability and validity in qualitative research; strategies for ensuring rigor must be built into the qualitative research process rather than evaluated only after the inquiry, and qualitative researchers and students alike must be proactive and take responsibility in ensuring the rigor of a research study. The insights garnered here will move novice researchers and doctoral students to a better conceptual grasp of the complexity of reliability and validity and its ramifications for qualitative inquiry.
Conducting a naturalistic inquiry in general is not an easy task. Qualitative studies are more complex in many ways than a traditional investigation. Quantitative research follows a structured, rigid, preset design with the methods all prescribed. In naturalistic inquiries, planning and implementation are simultaneous, and the research design can change or is emergent. Preliminary steps must be accomplished before the design is fully implemented, from making initial contact and gaining entry to the site, to negotiating consent, building and maintaining trust, and identifying participants. The steps of a qualitative inquiry are also repeated multiple times during the process. As the design unfolds, the elements of this design are put into place, and the inquirer has minimal control and should be flexible. There is continuous reassessment and reiteration. Data collection is carried out using multiple techniques, and whatever the source may be, it is the researcher who is the sole instrument of the study and the primary mode of collecting the information. All the while during these processes, the qualitative inquirer must be concerned with rigor. 1 Appropriate activities must be conducted to ensure that rigor has been attended to in the research process rather than only adhering to set criteria for rigor after the completion of the study. 1-4
Reliability and validity are 2 key aspects of all research. Researchers assert that rigor of qualitative research equates to the concepts reliability and validity and all are necessary components of quality. 5,6 However, the precise definition of quality has created debates among naturalistic inquirers. Other scholars consider different criteria to describe rigor in qualitative research process. 7 The 2 concepts of reliability and validity have been operationalized eloquently in quantitative texts but at the same time were deemed not pertinent to qualitative inquiries in the 1990s. Meticulous attention to the reliability and validity of research studies is particularly vital in qualitative work, where the researcher's subjectivity can so readily cloud the interpretation of the data and where research findings are often questioned or viewed with skepticism by the scientific community (Brink, 1993).
This article will discuss the issue of rigor in relation to qualitative research and further illustrate the process using a phenomenological study as an exemplar based on Lincoln and Guba's 1 (1985) techniques. This approach will clarify and define some of these complex concepts. There are numerous articles about trustworthiness in the literature that are too complex, confusing, and full of jargon. Some of these published articles also discuss rigor vis-à-vis reliability and validity in a very complicated way. Rigor will be first defined followed by how “reliability and validity” should be applied to qualitative research methods during the inquiry (constructive) rather than only post hoc evaluation. Strategies to attain reliability and validity will be described including the criteria and techniques for ensuring its attainment in a study. This discussion will critically focus on the misuse or nonuse of the concept of reliability and validity in qualitative inquiries, reestablish its importance, and relate both to the concept of rigor. Reflecting on my own research experience, recommendations for the renewed use of the concept of reliability and validity in qualitative research will be presented.
RIGOR VERSUS TRUSTWORTHINESS
The rigor of qualitative research continues to be challenged even now in the 21st century: the very idea of qualitative research is open to question, as are the terms rigor and trustworthiness. It is critical to understand rigor in research. Rigor is simply defined as the quality or state of being very exact, careful, or strictly precise 8 or the quality of being thorough and accurate. 9 The term qualitative rigor itself is an oxymoron, considering that qualitative research is a journey of explanation and discovery that does not lend itself to stiff boundaries. 10
Rigor and truth are always of concern for qualitative research. 11 Rigor has also been used to express attributes related to the qualitative research process. 12,13 Per Morse et al 4 (2002), without rigor, research is worthless, becomes fiction, and loses its use. The authors further defined rigor as the strength of the research design and the appropriateness of the method to answer the questions. Qualitative studies are expected to be conducted with extreme rigor because of the potential for subjectivity that is inherent in this type of research, a more difficult task when dealing with narratives and people than with numbers and statistics. 14 Davies and Dodd 13 (2002) relate rigor to the reliability and validity of research and note that inherent in this conception is a quantitative bias. Several researchers have argued that reliability and validity pertain to quantitative research and are unrelated or not pertinent to qualitative inquiry because they are aligned with the positivist view. 15 It has also been suggested that a new way of looking at reliability and validity will ensure rigor in qualitative inquiry. 1,16 With Lincoln and Guba's crucial work in the 1980s, reliability and validity were replaced with the concept of “trustworthiness.” Lincoln and Guba 1 (1985) were the first to address rigor in their model of trustworthiness of qualitative research. Trustworthiness is used as the central concept in their framework to appraise the rigor of a qualitative study.
Trustworthiness is described in different ways by researchers. It refers to the quality, authenticity, and truthfulness of the findings of qualitative research and relates to the degree of trust, or confidence, readers have in the results. 14 Yin 17 (1994) describes trustworthiness as a criterion for judging the quality of a research design. Trustworthiness addresses the methods that can ensure the research process has been carried out correctly. 18 Manning 19 (1997) considered trustworthiness parallel to the empiricist concepts of internal and external validity, reliability, and objectivity. Seale 20 (1999) asserted that the trustworthiness of a research study is based on the concepts of reliability and validity. Guba 2 (1981), Guba and Lincoln 3 (1982), and Lincoln and Guba 1 (1985) refer to trustworthiness as something that evolved from 4 major concerns on which their set of criteria was based. Trustworthiness is a goal of the study and, at the same time, something to be judged during and after the research is conducted. The 4 major traditional criteria are summarized into 4 questions about truth value, applicability, consistency, and neutrality. From these, they proposed 4 analogous terms within the naturalistic paradigm to replace the rationalistic terms: credibility, transferability, dependability, and confirmability. 1 For each of these 4 naturalistic terms, there are research activities or steps in which the inquirer should engage to safeguard or satisfy each of the previously mentioned criteria and thus attain trustworthiness (Supplemental Digital Content 1, https://links.lww.com/DCCN/A18 ). Guba and Lincoln 1 (1985) stated:
The criteria aid inquirers in monitoring themselves and in guiding activities in the field, as a way of determining whether or not various stages in the research are meeting standards for quality and rigor. Finally, the same criteria may be used to render ex post facto judgments on the products of research, including reports, case studies, or proposed publications.
Standards and checklists were developed in the 1990s based on Lincoln and Guba's 1 (1985) established criteria, which were then discarded in favor of principles. 21 These standards and checklists consisted of long lists of strategies used by qualitative researchers, which were thought to cause harm because of confusion about which strategies were appropriate for certain designs or for the type of naturalistic inquiry being evaluated. Thus, researchers interpreted missing data as faults and flaws. 21 Morse 21 (2012) further claimed that these standards became the qualitative researchers' “worst enemies” and that such an approach was not appropriate. Guba and Lincoln 18 (1989) later proposed a set of guidelines for post hoc evaluation of a naturalistic inquiry to ensure trustworthiness, based on the framework of naturalism and constructivism and beyond conventional methodological ideas. Aspects of their criteria have been fundamental to the development of standards used to evaluate the quality of qualitative inquiry. 4
THE RIGOR DEBATES: TRUSTWORTHINESS OR RELIABILITY AND VALIDITY?
A research endeavor, whether quantitative or qualitative, is always evaluated for its worth and merits by peers, experts, reviewers, and readers. Does this mean that a study is differentiated as “good” or “bad”? What distinguishes a “good” inquiry from a “bad” one? For a quantitative study, this means determining reliability and validity; for qualitative inquiries, it means determining rigor and trustworthiness. According to Golafshani 22 (2003), if the issues of reliability, validity, trustworthiness, and rigor are meant to differentiate “good” from “bad” research, then testing and increasing reliability, validity, trustworthiness, and rigor will be important to research in any paradigm. However, do reliability and validity in quantitative research equate totally to rigor and trustworthiness in qualitative research? There are many ways to assess the “goodness” of a naturalistic inquiry. Guba and Lincoln 18 (1989) asked, “‘What standards ought apply?’… goodness criteria like paradigms are rooted in certain assumptions. Thus, it is not appropriate to judge constructivist evaluations by positivistic criteria or standards or vice versa. To each its proper and appropriate set.”
Reliability and validity have analogues in qualitative inquiry and are determined differently than in quantitative inquiry. 21 The nature and purpose of the quantitative and qualitative traditions are so different that it is erroneous to apply the same criteria of worthiness or merit. 23,24 The qualitative researcher should not focus on quantitatively defined indicators of reliability and validity, but that does not mean rigorous standards are inappropriate for evaluating findings. 11 Evaluation, like democracy, is a process that, to be at its best, depends on the application of enlightened and informed self-interest. 18 Agar 24 (1986), on the other hand, suggested that terms such as reliability and validity are bound to the quantitative view and do not fit the details of qualitative research; a different language is needed to fit the qualitative view. Drawing on Leininger 25 (1985), Krefting 23 (1991) asserted that addressing reliability and validity in qualitative research is such a different process that quantitative labels should not be used. The incorrect application of qualitative criteria of rigor to studies is as problematic as the application of inappropriate quantitative criteria. 23 Smith 26 (1989) argued that, for qualitative research, the basis of truth or trustworthiness becomes a social agreement: what is judged true or trustworthy is what we can agree, conditioned by time and place, to be true or trustworthy. Validity standards in qualitative research are even more challenging because of the necessity to incorporate rigor and subjectivity, as well as creativity, into the scientific process. 27 Furthermore, Leininger 25 (1985) claimed that the question is not whether the data are reliable or valid but how the terms reliability and validity are defined.
Aside from the debate whether reliability and validity criteria should be used similarly in qualitative inquiries, there is also an issue of not using the concepts at all in naturalistic studies.
Designing a naturalistic inquiry is very different from the traditional quantitative notion of design, and defining a “good” qualitative inquiry is controversial and has gone through many changes. 21 First is the confusion over the terminologies “rigor” and “trustworthiness.” Morse 28 (2015) suggested that it is time to return to the terminology of mainstream social science and to use “rigor” rather than “trustworthiness.” Debates also continue about why some qualitative researchers do not use the concepts of reliability and validity in their studies, referring instead to Lincoln and Guba's 1 (1985) criteria for trustworthiness, namely, transferability, dependability, confirmability, and credibility. Morse 28 (2015) further suggested replacing these criteria with reliability, validity, and generalizability. The importance and centrality of reliability and validity to qualitative inquiries have in some ways been disregarded even in current times. Researchers from the United Kingdom and Europe continue to use these concepts, but this is much less common in North America. 4 According to Morse 21 (2012), this gives the impression that these concepts are of no concern to qualitative research. Morse 29 (1999) asked, “Is the terminology worth making a fuss about?”, when Lincoln and Guba 1 (1985) described trustworthiness and reliability and validity as analogs. Morse 29 (1999) further articulated that:
To state that reliability and validity are not pertinent to qualitative inquiry places qualitative research in the realm of being not reliable and not valid. Science is concerned with rigor, and by definition, good rigorous research must be reliable and valid. If qualitative research is unreliable and invalid, then it must not be science. If it is not science, then why should it be funded, published, implemented, or taken seriously?
RELIABILITY AND VALIDITY IN QUALITATIVE RESEARCH
Reliability and validity should be taken into consideration by qualitative inquirers while designing a study, analyzing results, and judging the quality of the study, 30 but for too long, the criteria used for evaluating rigor have been applied after a study is completed, a considerably flawed tactic. 4 Morse and colleagues 4 (2002) argued that, for reliability and validity to be actively attained, strategies for ensuring rigor must be built into the qualitative research process itself, not proclaimed only at the end of the inquiry. The authors suggest that focusing on strategies to establish rigor at the completion of the study (post hoc), rather than during the inquiry, exposes investigators to the risk of not addressing serious threats to reliability and validity until it is too late to correct them. They further asserted that the interface between reliability and validity is important, especially for the direction of the analysis process and the development of the study itself.
Reliability
In the social sciences, the whole notion of reliability in and of itself is problematic. 31 The scientific aspect of reliability assumes that repeated measures of a phenomenon (with the same results) using objective methods establish the truth of the findings. 32-35 Merriam 36 (1995) stated that, “The more times the findings of a study can be replicated, the more stable or reliable the phenomenon is thought to be.” In other words, it is the idea of replicability, 22,34,37 repeatability, 21,22,26,30,31,36,38-40 and stability of results or observation. 25,39,41 The issue is that human behaviors and interactions are never static or the same, and measurements and observations can be repeatedly wrong. Furthermore, researchers have argued that the concept of reliability is misleading and has no relevance in qualitative research because it is tied to the notion of a “measurement method,” as in quantitative studies. 40,42 Quantitative research is supported by the positivist or scientific paradigm, which regards the world as made up of observable, measurable facts. Qualitative research, on the other hand, produces findings not arrived at by means of statistical procedures or other means of quantification. Grounded in the constructivist paradigm, it is a naturalistic inquiry that seeks to understand phenomena in context-specific settings in which the researcher does not attempt to manipulate the phenomenon of interest. 23 If reliability is used as a criterion in qualitative research, the consequence is rather that the study is judged “not good.” What indicates good quality in qualitative methodology is a thorough description of the entire research process that allows for intersubjectivity. Reliability is based on consistency and care in the application of research practices, reflected in the visibility of research practices, analysis, and conclusions, and in an open account that remains mindful of the partiality and limits of the research findings. 13 Reliability and similar terms are presented in Supplemental Digital Content 2 (see Supplemental Digital Content 2, https://links.lww.com/DCCN/A19 ).
Validity
Validity is broadly defined as the state of being well grounded or justifiable, relevant, meaningful, logical, conforming to accepted principles, or the quality of being sound, just, and well founded. 8 The issues surrounding the use and nature of the term validity in qualitative research are many and controversial; it remains a highly debated topic in both social and educational research. 43 The traditional criteria for validity find their roots in a positivist tradition, and to an extent, positivism has been defined by a systematic theory of validity. 22 Validity is rooted in empirical conceptions such as universal laws, evidence, objectivity, truth, actuality, deduction, reason, fact, and mathematical data, to name only a few. Validity in research is concerned with the accuracy and truthfulness of scientific findings. 44 A valid study should demonstrate what actually exists and is accurate, and a valid instrument or measure should actually measure what it is supposed to measure. 5,22,29,31,42,45
Novice researchers can easily become perplexed in attempting to understand the notion of validity in qualitative inquiry. 44 There is an array of terms similar to validity in the literature, which authors equate with it, such as authenticity, goodness, adequacy, trustworthiness, verisimilitude, credibility, and plausibility. 1,45-51 Validity is not a single, fixed, or universal concept but rather a contingent construct, inescapably grounded in the processes and intentions of particular research methodologies. 39 Some qualitative researchers have argued that the term validity is not applicable to qualitative research and have related it to terms such as quality, rigor, and trustworthiness. 1,13,22,38,42,52-54 I argue that the concepts of reliability and validity are overarching constructs that can be appropriately used in both quantitative and qualitative methodologies. To validate means to investigate, to question, and to theorize, all of which are activities that ensure rigor in a qualitative inquiry. For Leininger 25 (1985), the term validity in a qualitative sense means gaining knowledge and understanding of the nature (ie, the meaning, attributes, and characteristics) of the phenomenon under study. A qualitative method seeks the certain quality that is typical of a phenomenon or that makes the phenomenon different from others.
Some naturalistic inquirers agree that assuring validity is a process whereby ideals are sought through attention to specified criteria, with appropriate techniques used to address any threats to the validity of a naturalistic inquiry. However, other researchers argue that procedures and techniques are no assurance of validity and will not necessarily produce sound data or credible conclusions, 38,48,55 and some have therefore argued for abandoning the concept of validity and seeking alternative criteria with which to judge their work. Criteria are the standards or rules to be upheld as ideals in qualitative research, on which a judgment or decision may be based, 4,56 whereas techniques are the methods used to diminish identified validity threats. 56 For some researchers, criteria are used to test the quality of the research design, whereas for others, they are the goal of the study. There is also a trend to treat standards, goals, and criteria synonymously. I concur with Morse 29 (1999) that introducing parallel terminology and criteria distances qualitative inquiry from mainstream science and scientific legitimacy, and that the development of alternative criteria compromises the issue of rigor. We must work toward consensus on the criteria and terminology used in mainstream science and on how rigor is attained within qualitative inquiry during the research process rather than at the end of the study. Despite all this, researchers have developed validity criteria and techniques over the years. A synthesis of validity criteria development is summarized in Supplemental Digital Content 3 (see Supplemental Digital Content 3, https://links.lww.com/DCCN/A20 ). The techniques for demonstrating validity are presented in Supplemental Digital Content 4 (see Supplemental Digital Content 4, https://links.lww.com/DCCN/A21 ).
Reliability and Validity as Means in Ensuring the Quality of Findings of a Phenomenological Study in Intensive Care Unit
Reliability and validity are 2 factors that any qualitative researcher should be concerned about while designing a study, analyzing results, and judging its quality. Just as the quantitative investigator must attend to the question of how external and internal validity, reliability, and objectivity will be provided for in the design, so must the naturalistic inquirer arrange for credibility, transferability, dependability, and confirmability. 1 Lincoln and Guba 1 (1985) clearly established these 4 criteria as benchmarks for quality based on the identification of 4 aspects of trustworthiness that are relevant to both quantitative and qualitative studies, which are truth value, applicability, consistency, and neutrality. Guba 2 (1981) stated, “It is to these concerns that the criteria must speak.”
The rigor of a naturalistic inquiry such as phenomenology may be operationalized using the criteria of credibility, transferability, dependability, and confirmability. This phenomenological study aimed to understand and illuminate the meaning of the lived experiences of patients, their family members, and the nurses during critical illness in the intensive care unit (ICU). Following Lincoln and Guba 1 (1985), I first asked, “How can I persuade my audience that the research findings of my inquiry are worth paying attention to, and worth taking account of?” My answer to this question was based on the 4 criteria set forth by Lincoln and Guba 1 (1985).
Credibility, the accurate and truthful depiction of a participant's lived experience, was achieved in this study through prolonged engagement and persistent observation to learn the context in which the phenomenon is embedded and to minimize distortions that might creep into the data. To achieve this, I spent 6 months with nurses, patients, and their families in the ICU to become oriented to the situation and to build trust and rapport with the participants. Peer debriefing was conducted through meetings and discussions with an expert qualitative researcher to allow for questions and critique of field journals and research activities. Triangulation was achieved by cross-checking the data and interpretations within and across each category of participants by 2 qualitative researchers. Member checks were accomplished by constantly checking data and interpretations with the participants from whom data were solicited.
Transferability was enhanced by using a purposive sampling method and providing thick description and robust data with the widest possible range of information through detailed and accurate descriptions of the patients', their family members', and the nurses' lived ICU experiences and by continuously returning to the texts. In this study, recruitment of participants and data collection continued until the data were saturated, complete, and replicated. According to Morse et al 4 (2002), interviewing additional participants serves to increase the scope, adequacy, and appropriateness of the data. I immersed myself in the phenomenon to know, describe, and understand it fully, comprehensively, and thoroughly. Special care was given to the collection, identification, and analysis of all data pertinent to the study. The audiotaped data were meticulously transcribed by a professional transcriber for future scrutiny. During the analysis phase, every attempt was made to document all aspects of the analysis. Analysis in qualitative research refers to the categorization and ordering of information in such a way as to make sense of the data and to write a final report that is true and accurate. 36 Every effort was made to coordinate methodological and analytical materials. After I categorized and was able to make sense of the transcribed data, all efforts were exhausted to illuminate themes and descriptors as they emerged.
Lincoln and Guba 1 (1985) use “dependability” in qualitative research, which closely corresponds to the notion of “reliability” in quantitative research. Dependability was achieved by having 2 expert qualitative nursing researchers review the transcribed material to validate the themes and descriptors identified. To validate my findings related to the themes, a doctorally prepared nursing colleague was asked to review some of the transcribed materials. Any new themes and descriptors illuminated by my colleague were acknowledged and considered and then compared with my own thematic analysis of the entire set of participants' transcribed data. If a theme identified by the colleague did not appear in my own thematic analysis, both analysts agreed not to use that theme. My goal was that both analysts agree on the findings related to themes and meanings within the transcribed material.
Confirmability was met by maintaining a reflexive journal during the research process to keep notes and document daily introspections that would be beneficial and pertinent during the study. An audit trail also took place to examine the processes whereby data were collected and analyzed and interpretations were made. The audit trail took the form of documentation (the actual interview notes taken) and a running account of the process (my daily field journal). I maintained self-awareness of my role as the sole instrument of this study. After each interview, I retired to a private room to document additional perceptions and recollections from the interviews (Supplemental Digital Content 5, https://links.lww.com/DCCN/A22 ).
Through reflexivity and bracketing, I was always on guard against my own biases, assumptions, beliefs, and presuppositions that I might bring to the study, while remaining aware that complete reduction is not possible. Van Manen 44 (1990) stated that “if we simply try to forget or ignore what we already know, we may find that the presuppositions persistently creep back into our reflections.” During data collection and analysis, I made my orientation and preunderstanding of critical illness and critical care explicit but deliberately held them at bay and bracketed them. Aside from Lincoln and Guba's 1 (1985) 4 criteria for trustworthiness, a question arises as to the reliability of the researcher as the sole instrument of the study.
Reliability related to the researcher as the sole instrument who conducts the data collection and analysis is a limitation of any phenomenological study. The use of humans as instruments is not a new concept. Lincoln and Guba 1 (1985) articulated that humans uniquely qualify as the instrument of choice for naturalistic inquiry. Some of the giants of conventional inquiry have recognized that humans can provide data very nearly as reliable as those produced by “more” objective means. These are formidable characteristics, but they are meaningless if the human instrument is not also trustworthy. However, no human instrument is expected to be perfect; humans have flaws, and errors can be committed. When Lincoln and Guba 1 (1985) asserted that qualitative methods come more easily to hand when the instrument is a human being, they meant that the human as instrument is inclined toward methods that are extensions of normal activities. They believe that the human will therefore tend toward interviewing, observing, mining available documents and records, taking account of nonverbal cues, and interpreting inadvertent unobtrusive measures, all of which are complex tasks. In addition, one would not expect an individual to function adequately as a human instrument without an extensive background, training, and experience. This study has reliability in that I have acquired the knowledge and training required for research at a doctoral level under the professional and expert guidance of a mentor. As Lincoln and Guba 1 (1985) said, “Performance can be improved…when that learning is guided by an experienced mentor, remarkable improvements in human instrumental performance can be achieved.” Whereas reliability in quantitative research depends on instrument construction, in qualitative research, the researcher is the instrument of the study. 31 Reliable research is credible research; the credibility of a qualitative study depends on the ability and effort of the researcher. 22 We have established that a study can be reliable without being valid, but a study cannot be valid without being reliable.
Establishing validity is a major challenge when a qualitative research project is based on a single, cross-sectional, unstructured interview as the basis for data collection. How do I make judgments about the validity of the data? In qualitative research, the validity of the findings is related to the careful recording and continual verification of the data that the researcher undertakes during the investigative process. If validity or trustworthiness can be maximized or tested, then a more credible and defensible result may lead to generalizability as the structure for both doing and documenting high-quality qualitative research. Therefore, the quality of a study is related to the generalizability of its results and thereby to the testing and increasing of the validity or trustworthiness of the research.
One potential threat to validity that researchers need to consider is researcher bias. Researcher bias is frequently an issue because qualitative research, being exploratory, is more open and less structured than quantitative research. Researcher bias tends to result from selective observation and selective recording of information and from allowing one's personal views and perspectives to affect how data are interpreted and how the research is conducted. It is therefore very important that researchers be aware of their own perceptions and opinions, because these may taint their findings and conclusions. I brought all my past experiences and knowledge into the study but learned to set aside my own strongly held perceptions, preconceptions, and opinions. I truly listened to the participants to learn their stories, experiences, and meanings.
The key strategy used to understand researcher bias is reflexivity, which means that researchers actively engage in critical self-reflection about the potential biases and predispositions they bring to the qualitative study. Through reflexivity, researchers become more self-aware and can monitor and attempt to control their biases. Phenomenological researchers can recognize that their interpretation is correct because the reflective process awakens an inner moral impulse. 4,59 I did my best to be always on guard against my own biases, preconceptions, and assumptions that I might bring to this study. Bracketing was also applied.
Husserl 60 (1931) made key conceptual elaborations that led him to assert that phenomenological research requires an attempt to hold previous beliefs about the phenomena under study in suspension so as to perceive them more clearly. This technique, another strategy used to control bias, is called bracketing. Husserl 60 (1931) explained further that phenomenological reduction is the process of defining the pure essence of a psychological phenomenon: a process whereby empirical subjectivity is suspended so that pure consciousness may be defined in its essential and absolute “being.” This is accomplished by bracketing empirical data away from consideration. Bracketing empirical data away from further investigation leaves pure consciousness, pure phenomena, and pure ego as the residue of phenomenological reduction. Husserl 60 (1931) uses the term epoche (Greek for “a cessation”) to refer to this suspension of judgment regarding the true nature of reality. Bracketed judgment is an epoche, or suspension of inquiry, which places in brackets whatever facts belong to essential “being.”
Bracketing was conducted to separate assumptions and biases from the essences and thereby achieve an understanding of the phenomenon as experienced by the participants of the study. The collected and analyzed data were presented to the participants, who were asked whether the narrative was accurate and a true reflection of their experience. My interpretations and descriptions of the narratives were presented to the participants to achieve credibility, and they were given the opportunity to review the transcripts and modify them if they wished. As I served as the sole instrument in obtaining data for this phenomenological study, my goal was that my perceptions would reflect the participants' ICU experiences and that the participants would be able to see their lived experience through the researcher's eyes. Because qualitative research designs are flexible and emergent in nature, there will always be study limitations.
Awareness of the limitations of a research study is crucial for researchers. The purpose of this study was to understand the ICU experiences of patients, their family members, and the nurses during critical illness. One limitation of this phenomenological study as a naturalistic inquiry was the researcher's inability to fully design the study and specify its details in advance. According to Lincoln and Guba 1 (1985), naturalistic studies are virtually impossible to design in any definitive way before the study is actually undertaken. The authors stated:
Designing a naturalistic study means something very different from the traditional notion of “design”—which as often as not meant the specification of a statistical design with its attendant field conditions and controls. Most of the requirements normally laid down for a design statement cannot be met by naturalists because the naturalistic inquiry is largely emergent.
Within the naturalistic paradigm, designs must be emergent rather than preordinate because (1) meaning is determined by context to a great extent (for this particular study, the phenomenon and context were the experience of critical illness in the ICU); (2) the existence of multiple realities constrains the development of a design based on only 1 construction (the investigator's); (3) what will be learned at a site always depends on the interaction between the investigator and the context, and that interaction is not fully predictable; and (4) the nature of mutual shapings cannot be known until they are witnessed. These factors underscore the indeterminacy under which the naturalistic inquirer functions. The design must therefore be “played by ear”; it must unfold, cascade, and emerge. It does not follow, however, that because not all of the elements of the design can be prespecified in a naturalistic inquiry, none of them can. Design in the naturalistic sense means planning for certain broad contingencies without indicating exactly what will be done in relation to each. 1
Reliability and validity are fundamental concepts that should be continually operationalized to meet the conditions of a qualitative inquiry. Morse et al 4,29 (2002) articulated that “by refusing to acknowledge the centrality of reliability and validity in qualitative methods, qualitative methodologists have inadvertently fostered the default notion that qualitative research must therefore be unreliable and invalid, lacking in rigor, and unscientific.” Sparkes 59 (2001) asserted that Morse et al 4,26 (2002) are right in warning us that turning our backs on such fundamental concepts as validity could cost us dearly. This in turn affects how we mentor novices, early career researchers, and doctoral students in their qualitative research work.
Reliability is inherently integrated with, and internally needed to attain, validity. 1,26 I concur with the use of the term rigor rather than trustworthiness in naturalistic studies. I also accede that strategies for ensuring rigor must be built into the qualitative research process per se rather than evaluated only after the inquiry is conducted. Threats to reliability and validity cannot be actively addressed by standards and criteria applied at the end of the study. Rigor must be upheld by the researcher during the investigation rather than by external judges of the completed study. Whether a study is quantitative or qualitative, rigor is a desired goal that is met through the inclusion of the different philosophical perspectives inherent in a qualitative inquiry and the strategies specific to each methodological approach, including the verification techniques to be observed during the research process. It also involves the researcher's creativity, sensitivity, flexibility, and skill in using the verification strategies that determine the reliability and validity of the evolving study.
Some naturalistic inquirers agree that ensuring validity is a process whereby ideals are sought through attention to specified criteria, with appropriate techniques used to address any threats to the validity of a naturalistic inquiry. However, other researchers argue that procedures and techniques are no assurance of validity and will not necessarily produce sound data or credible conclusions. 38,48,55 Thus, some have argued that researchers should abandon the concept of validity and seek alternative criteria with which to judge their work.
Lincoln and Guba's 1 (1985) standards of validity demonstrate the necessity and convenience of principles that overarch all qualitative research, yet there is a need for a reconceptualization of the criteria of validity in qualitative research. The development of validity criteria in qualitative research poses theoretical issues, not simply technical problems. 60 Whittemore et al 58 (2001) explored the historical development of validity criteria in qualitative research and synthesized findings that reflect a contemporary reconceptualization of the debate and dialogue that have ensued in the literature over the years. The authors further presented primary (credibility, authenticity, criticality, and integrity) and secondary (explicitness, vividness, creativity, thoroughness, congruence, and sensitivity) validity criteria to be used in the evaluative process. 56 Before the work of Whittemore and colleagues, 58 Creswell and Miller 48 (2000) asserted that the constructivist lens and paradigm choice should guide validity evaluation and procedures from the perspective of the researcher (disconfirming evidence), the study participants (prolonged engagement in the field), and external reviewers/readers (thick, rich description). Morse et al 4 in 2002 presented 6 major evaluation criteria for validity and asserted that they are congruent with and appropriate to the philosophy of the qualitative tradition. These 6 criteria are credibility, confirmability, meaning in context, recurrent patterning, saturation, and transferability. A synthesis of validity criteria is presented in Supplemental Digital Content 3 (see Supplemental Digital Content 3, https://links.lww.com/DCCN/A20 ).
Common validity techniques in qualitative research relate to design considerations, data generation, analytic procedures, and presentation. 56 First are design considerations. Developing a self-conscious design, making the paradigm assumption explicit, purposefully choosing a small sample of informants relevant to the study, and using an inductive approach are some techniques to be considered. Purposive sampling enhances the transferability of the results. Interpretivist and constructivist inquiry follows an inductive approach that is flexible and emergent in design, with some uncertainty and fluidity within the context of the phenomenon of interest 56,58 and not based on a set of determinate rules. 61 The researcher does not work from a priori theory; rather, theory is expected to emerge from the inquiry. Data are analyzed inductively, from specific, raw units of information to subsuming categories, to define questions that can be followed up. 1 Qualitative studies also follow a naturalistic and constructivist paradigm. Creswell and Miller 48 (2000) suggest that validity is affected by the researchers' perception of validity in the study and their choice of paradigm assumption. Determining the fit of paradigm to focus is an essential aspect of a naturalistic inquiry. 1 Paradigms rest on sets of beliefs called axioms. 1 On the basis of the naturalistic axioms, the researcher should ask questions related to the multiple or complex constructions of the phenomenon, the degree of investigator-phenomenon interaction and the indeterminacy it will introduce into the study, the degree of context dependence, whether values are likely to be crucial to the outcome, and the constraints that may be placed on the researcher by a variety of significant others. 1
Validity during data generation is evaluated through the researcher's ability to articulate data collection decisions, demonstrate prolonged engagement and persistent observation, provide verbatim transcription, and achieve data saturation. 56 Methods are means to collect evidence in support of validity, and this refers to data obtained by considering a context for a purpose. The human instrument operating in an indeterminate situation falls back on techniques such as interview, observation, unobtrusive measures, document and record analysis, and nonverbal cues. 1 Others, rejecting methods or technical procedures as an assurance of truth, remarked that the validity of a qualitative study lies in the skills and sensitivities of the researchers and in how they use themselves as knowers and inquirers. 57,62 The understanding of the phenomenon is valid if the participants are given the opportunity to speak freely according to their own knowledge structures and perceptions. Validity is therefore achieved when using the method of open-ended, unstructured interviews with strategically chosen participants. 42 We also know that a thorough description of the entire research process, enabling unconditional intersubjectivity, is what indicates good quality when using a qualitative method. This enables a clearer and better analysis of the data.
Analytic procedures are vital in qualitative research. 56 Not very much can be said about data analysis in advance of a qualitative study. 1 Data analysis is not a discrete phase that can be marked out as occurring at some single time during the inquiry. 1 It begins with the very first data collection to facilitate the emergent design and grounding of theory. Validity in a study is thus represented by the truthfulness of findings after careful analysis. 56 Consequently, qualitative researchers seek to illuminate and extrapolate findings to similar situations. 22,63 The interpretations of any given social phenomenon may reflect, in part, the biases and prejudices the interpreters bring to the task and the criteria and logic they follow in completing it. 64 In any case, individuals will draw different conclusions in the debate surrounding validity and will make different judgments as a result. 50 There is a wide array of analytic techniques from which the qualitative researcher can choose, based on the contextual factors that help determine which technique will optimally reflect specific criteria of validity. 65 Presentation of findings is accomplished by providing an audit trail and evidence that support interpretations, acknowledging the researcher's perspective, and providing thick descriptions. Morse et al 4 in 2002 set forth strategies for ensuring validity that include investigator responsiveness and verification through methodological coherence, theoretical sampling and sampling adequacy, an active analytic stance, and saturation. The authors further stated that “these strategies, when used appropriately, force the researcher to correct both the direction of the analysis and the development of the study as necessary, thus ensuring reliability and validity of the completed project” (p17).
Recently, in 2015, Morse 28 presented strategies for ensuring validity in a qualitative study: prolonged engagement, persistent observation, thick and rich description, negative case analysis, peer review or debriefing, clarifying researcher bias, member checking, external audits, and triangulation. These strategies can be upheld with the help of an expert mentor who can in turn guide and strengthen the reliability and validity of the qualitative work of early career researchers and doctoral students. Techniques for demonstrating validity are summarized in Supplemental Digital Content 4 (see Supplemental Digital Content 4, https://links.lww.com/DCCN/A21 ).
Qualitative researchers and students alike must be proactive and take responsibility for ensuring the rigor of a research study. Too often, rigor takes a backseat in the work of some researchers and doctoral students because of novice abilities, lack of proper mentorship, and issues with time and funding. Students should conduct projects that are smaller in scope, guided by an expert naturalistic inquirer, so that they produce work with depth and, at the same time, gain the grounding experience necessary to become excellent researchers. Attending to rigor throughout the research process will have important ramifications for qualitative inquiry. 4,26
Qualitative research is not intended to be scary or beyond the grasp of novices and doctoral students. Conducting a naturalistic inquiry is an experience of exploration, discovery, description, and understanding of a phenomenon that transcends one's own research journey. Attending to the rigor of qualitative research is a vital part of the investigative process that offers critique and thus further development of the science.
Phenomenology; Qualitative research; Reliability; Rigor; Validity
Supplemental Digital Content
- DCCN_2017_04_11_CYPRESS_DCCN-D-16-00060_SDC1.pdf; [PDF] (3 KB)
- DCCN_2017_04_11_CYPRESS_DCCN-D-16-00060_SDC2.pdf; [PDF] (4 KB)
- DCCN_2017_04_11_CYPRESS_DCCN-D-16-00060_SDC3.pdf; [PDF] (78 KB)
- DCCN_2017_04_11_CYPRESS_DCCN-D-16-00060_SDC4.pdf; [PDF] (70 KB)
- DCCN_2017_04_11_CYPRESS_DCCN-D-16-00060_SDC5.pdf; [PDF] (4 KB)
Am J Pharm Educ. 2020 Jan;84(1).
A Review of the Quality Indicators of Rigor in Qualitative Research
Jessica L. Johnson
a William Carey University School of Pharmacy, Biloxi, Mississippi
Donna Adkins
Sheila Chauvin
b Louisiana State University, School of Medicine, New Orleans, Louisiana
Attributes of rigor and quality and suggested best practices for qualitative research design as they relate to the steps of designing, conducting, and reporting qualitative research in health professions educational scholarship are presented. A research question must be clear and focused and supported by a strong conceptual framework, both of which contribute to the selection of appropriate research methods that enhance trustworthiness and minimize researcher bias inherent in qualitative methodologies. Qualitative data collection and analyses are often modified through an iterative approach to answering the research question. Researcher reflexivity, essentially a researcher’s insight into their own biases and rationale for decision-making as the study progresses, is critical to rigor. This article reviews common standards of rigor, quality scholarship criteria, and best practices for qualitative research from design through dissemination.
INTRODUCTION
Within the past 20 years, qualitative research in health professions education has increased significantly, both in practice and publication. Today, one can pick up most any issue of a wide variety of health professions education journals and find at least one article that includes some type of qualitative research, whether a full study or the inclusion of a qualitative component within a quantitative or mixed methods study. Simultaneously, there have been recurrent calls for enhancing rigor and quality in qualitative research.
As members of the academic community, we share responsibility for ensuring rigor in qualitative research, whether as researchers who design and implement, manuscript reviewers who critique, colleagues who discuss and learn from each other, or scholarly teachers who draw upon results to enhance and innovate education. Therefore, the purpose of this article is to summarize standards of rigor and suggested best practices for designing, conducting, and reporting high-quality qualitative research. To begin, Denzin and Lincoln’s definition of qualitative research, a long-standing cornerstone in the field, provides a useful foundation for summarizing quality standards and best practices:
Qualitative research involves the studied use and collection of a variety of empirical materials – case study; personal experience; introspection; life story; interview; artifacts; cultural texts and productions; observational, historical, interactional, and visual texts – that describe the routine and problematic moments and meanings in individual lives. Accordingly, qualitative researchers deploy a wide range of interconnected interpretative practices, hoping always to get a better understanding of the subject matter at hand. It is understood, however, that each practice makes the world visible in a different way. Hence there is frequently a commitment to using more than one interpretative practice in any study. 1
In recent years, multiple publications have synthesized quality criteria and recommendations for use by researchers and peer reviewers alike, often in the form of checklists. 2-6 Some authors have raised concerns about the use of such checklists and adherence to strict, universal criteria because they do not afford sufficient flexibility to accommodate the diverse approaches and multiple interpretive practices often represented in qualitative studies. 7-11 They argue that a strict focus on using checklists of specific technical criteria may stifle the diversity and multiplicity of practices that are so much a part of achieving quality and rigor within the qualitative paradigm. As an alternative, some of these authors have published best practice guidelines for use by researchers and peer reviewers to achieve and assess methodological rigor and research quality. 12,13
Some journals within the field of health professions education have also established best practice guidance, as opposed to strict criteria or a checklist, for qualitative research. These have been disseminated as guiding questions or evaluation categories. In 2015, Academic Medicine produced an expanded second edition of a researcher/author manual that includes specific criteria with extensive explanations and examples. 14 Still others have disseminated best practice guidelines through a series of methodological articles within journal publications. 2
In this article, attributes of rigor and quality and suggested best practices are presented as they relate to the steps of designing, conducting, and reporting qualitative research in a step-wise approach.
BEST PRACTICES: STEP-WISE APPROACH
Step 1: Identifying a Research Topic
Identifying and developing a research topic comprises two major tasks: formulating a research question, and developing a conceptual framework to support the study. Formulating a research question is often stimulated by real-life observations, experiences, or events in the researcher’s local setting that reflect a perplexing problem begging for systematic inquiry. The research question begins as a problem statement or set of propositions that describe the relationship among certain concepts, behaviors, or experiences. Agee 15 and others 16,17 note that initial questions are usually too broad in focus and too vague regarding the specific context of the study to be answerable and researchable. Creswell reminds us that initial qualitative research questions guide inquiry, but they often change as the author’s understanding of the issue develops throughout the study. 16 Developing and refining a primary research question focused on both the phenomena of interest and the context in which it is situated is essential to research rigor and quality.
Glassick, Huber, and Maeroff identified six criteria applicable to assessing the quality of scholarship. 18,19 Now commonly referred to as the Glassick Criteria ( Table 1 ), these critical attributes outline the essential elements of any scholarly approach and serve as a general research framework for developing research questions and designing studies. The first two criteria, clear purpose and adequate preparation, are directly related to formulating effective research questions and a strong conceptual framework.
Glassick’s Criteria for Assessing the Quality of Scholarship of a Research Study 18
Generating and refining a qualitative research question requires a thorough, systematic, and iterative review of the literature, and the use of those results to establish a clear context and foundation for the question and study design. Using an iterative approach, relevant concepts, principles, theories or models, and prior evidence are identified to establish what is known and, more importantly, what is not known. The iterative process contributes to forming a better research question, one that meets the criteria abbreviated by the acronym FINER (feasible, interesting, novel, ethical, and relevant) and that is answerable and researchable in terms of research focus, context specificity, and the availability of time, logistics, and resources to carry out the study. Developing a FINER research question is critical to study rigor and quality and should not be rushed, as all other aspects of research design depend on the focus and clarity of the research question(s) guiding the study. 15 Agee provides clear and worthwhile additional guidance for developing qualitative research questions. 15
Reflexivity, the idea that a researcher’s preconceptions and biases can influence decisions and actions throughout qualitative research activities, is a critical aspect of rigor even at the earliest stages of the study. A researcher’s background, beliefs, and experiences may affect any aspect of the research from choosing which specific question to investigate through determining how to present the results. Therefore, even at this early stage, the potential effect of researcher bias and any ethical considerations should be acknowledged and addressed. That is, how will the question’s influence on study design affect participants’ lives, position the researcher in relationship with others, or require specific methods for addressing potential areas of research bias and ethical considerations?
A conceptual framework is then actively constructed to provide a logical and convincing argument for the research. The framework defines and justifies the research question, the methodology selected to answer that question, and the perspectives from which interpretation of results and conclusions will be made. 5,6,20 Developing a well-integrated conceptual framework is essential to establishing a research topic based upon a thorough and integrated review of relevant literature (addressing Glassick criteria #1 and #2: clear purpose and adequate preparation). Key concepts, principles, assumptions, best practices, and theories are identified, defined, and integrated in ways that clearly demonstrate the problem statement and corresponding research question are answerable, researchable, and important to advancing thinking and practice.
Ringsted, Hodges, and Sherpbier describe three essential parts to an effective conceptual framework: theories and/or concepts and principles relevant to the phenomenon of interest; what is known and unknown from prior work, observations, and examples; and the researcher’s observations, ideas, and suppositions regarding the research problem statement and question. 21 Lingard describes four types of unknowns to pursue during literature review: what no one knows; what is not yet well understood; what controversy or conflicting results, understandings, or perspectives exist; and what are unproven assumptions. 22 In qualitative research, these unknowns are critical to achieving a well-developed conceptual framework and a corresponding rigorous study design.
Recent contributions from Ravitch and colleagues present best practices in developing frameworks for conceptual and methodological coherence within a study design, regardless of the research approach. 23,24 Their recommendations and arguments are highly relevant to qualitative research. Figure 1 reflects the primary components of a conceptual framework adapted from Ravitch and Carl 23 and how all components contribute to decisions regarding research design, implementation, and applications of results to future thinking, study, and practice. Notice that each element of the framework interacts with and influences other elements in a dynamic and interactive process from the beginning to the end of a research project. The intersecting bidirectional arrows represent direct relationships between elements as they relate to specific aspects of a qualitative research study.
Adaptation of Ravitch and Carl’s Components of a Conceptual Framework 23
Maxwell also provides useful guidance for developing an effective conceptual framework specific to the qualitative research paradigm. 17 The 2015 second edition of the Review Criteria for Research Manuscripts 14 and work by Ravitch and colleagues 23,24 provide specific guidance for applying the conceptual framework to each stage of the research process to enhance rigor and quality. Quality criteria for assessing a study’s problem statement, conceptual framework, and research question include the following: introduction builds a logical case and provides context for the problem statement; problem statement is clear and well-articulated; conceptual framework is explicit and justified; research purpose and/or question is clearly stated; and constructs being investigated are clearly identified and presented. 14,24,25 As best practice guidelines, these criteria facilitate quality and rigor while providing sufficient flexibility in how each is achieved and demonstrated.
While a conceptual framework is important to rigor in qualitative research, Huberman and Miles caution qualitative researchers about developing and using a framework to the extent that it influences qualitative design deductively because this would violate the very principles of induction that define the qualitative research paradigm. 25 Our profession’s recent emphasis on a holistic admissions process for pharmacy students provides a reasonable example of inductive and deductive reasoning and their respective applications in qualitative and quantitative research studies. Principles of inductive reasoning are applied when a qualitative research study examines a representative group of competent pharmacy professionals to generate a theory about essential cognitive and affective skills for patient-centered care. Deductive reasoning could then be applied to design a hypothesis-driven prospective study that compares the outcomes of two cohorts of students, one group admitted using traditional criteria and one admitted based on a holistic admissions process revised to value the affective skills of applicants. Essentially, the qualitative researcher must carefully generate a conceptual framework that guides the research question and study design without allowing the conceptual framework to become so rigid as to dictate a testable hypothesis, which is the founding principle of deductive reasoning. 26
Step 2: Qualitative Study Design
The development of a strong conceptual framework facilitates selection of appropriate study methods to minimize the bias inherent in qualitative studies and help readers to trust the research and the researcher (see Glassick criteria #3 in Table 1 ). Although researchers can employ great flexibility in the selection of study methods, inclusion of best practice methods for assuring the rigor and trustworthiness of results is critical to study design. Lincoln and Guba outline four criteria for establishing the overall trustworthiness of qualitative research results: credibility, the researcher ensures and imparts to the reader supporting evidence that the results accurately represent what was studied; transferability, the researcher provides detailed contextual information such that readers can determine whether the results are applicable to their or other situations; dependability, the researcher describes the study process in sufficient detail that the work could be repeated; confirmability, the researcher ensures and communicates to the reader that the results are based on and reflective of the information gathered from the participants and not the interpretations or bias of the researcher. 27
Specific best practice methods used in the sampling and data collection processes to increase the rigor and trustworthiness of qualitative research include: clear rationale for sampling design decisions, determination of data saturation, ethics in research design, member checking, prolonged engagement with and persistent observation of study participants, and triangulation of data sources. 28
Qualitative research is focused on making sense of lived, observed phenomenon in a specific context with specifically selected individuals, rather than attempting to generalize from sample to population. Therefore, sampling design in qualitative research is not random but defined purposively to include the most appropriate participants in the most appropriate context for answering the research question. Qualitative researchers recognize that certain participants are more likely to be “rich” with data or insight than others, and therefore, more relevant and useful in achieving the research purpose and answering the question at hand. The conceptual framework contributes directly to determining sample definitions, size, and recruitment of participants. A typical best practice is purposive sampling methods, and when appropriate, convenience sampling may be justified. 29
Purposive sampling reflects intentional selection of research participants to optimize data sources for answering the research question. For example, the research question may be best answered by persons who have particular experience (critical case sampling) or certain expertise (key informant sampling). Similarly, additional participants may be referred for participation by active participants (snowball sampling) or may be selected to represent either similar or opposing viewpoints (confirming or disconfirming samples). Again, the process of developing and using a strong conceptual framework to guide and justify methodological decisions, in this case defining and establishing the study sample, is critical to rigor and quality. 30 Convenience sampling, using the most accessible research participants, is the least rigorous approach to defining a study sample and may result in low accuracy, poor representativeness, low credibility, and lack of transferability of study results.
Qualitative studies typically reflect designs in which data collection and analysis are done concurrently, with results of ongoing analysis informing continuing data collection. Determination of a final sample size is largely based on having sufficient opportunity to collect relevant data until new information is no longer emerging from data collection, new coding is not feasible, and/or no new themes are emerging; that is, reaching data saturation , a common standard of rigor for data collection in qualitative studies . Thus, accurately predicting a sample size during the planning phases of qualitative research can be challenging. 30 Care should be taken that sufficient quantity (think thick description) and quality (think rich description) of data have been collected prior to concluding that data saturation has been achieved. A poor decision regarding sample size is a direct consequence of sampling strategy and quality of data generated, which leaves the researcher unable to fully answer the research question in sufficient depth. 30
Though data saturation is probably the most common terminology used to describe the achievement of sufficient sample size, it does not apply to all study designs. For example, one could argue that in some approaches to qualitative research, data collection could continue infinitely if the event continues infinitely. In education, we often anecdotally observe variations in the personality and structure of a class of students, and as generations of students continue to evolve with time, so too would the data generated from observing each successive class. In such situations, data saturation might never be achieved. Conversely, the number of participants available for inclusion in a sample may be small and some risk of not reaching data saturation may be unavoidable. Thus, the idea of fully achieving data saturation may be unrealistic when applied to some populations or research questions. In other instances, attrition and factors related to time and resources may contribute to not reaching data saturation within the limits of the study. By being transparent in the process and reporting of results when saturation may not have been possible, the resulting data may still contribute to the field and to further inquiry. Replication of the study using other samples and conducting additional types of follow-up studies are other options for better understanding the research phenomenon at hand. 31
In addition to defining the sample and selecting participants, other considerations related to sampling bias may impact the quantity and quality of data generated and therefore the quality of the study result. These include: methods of recruiting, procedures for informed consent, timing of the interviews in relation to experience or emotion, procedures for ensuring participant anonymity/confidentiality, interview setting, and methods of recording/transcribing the data. Any of these factors could potentially change the nature of the relationship between the researcher and the study participants and influence the trustworthiness of data collected or the study result. Thus, ongoing application of previously mentioned researcher reflexivity is critical to the rigor of the study and quality of sampling. 29,30
Common qualitative data collection methods used in health professions education include interview, direct observation methods, and textual/document analysis. Given the unique and often highly sensitive nature of data being collected by the researcher, trustworthiness is an essential component of the researcher-participant relationship. Ethical conduct refers to how moral principles and values are part of the research process. Participants’ perceptions of ethical conduct are fundamental to a relationship likely to generate high quality data. During each step of the research process, care must be taken to protect the confidentiality of participants and shield them from harm relating to issues of respect and dignity. Researchers must be respectful of the participants’ contributions and quotes, and results must be reported truthfully and honestly. 8
Interview methods range from highly structured, which increases dependability, to completely open-ended, which allows interviewers to clarify a participant’s responses for increased credibility and confirmability. Regardless of structure, interview protocols are often modified or refined based on concurrent data collection and analysis, to support or refute preliminary interpretations and to sharpen the focus of continuing inquiry. Researcher reflexivity, or acknowledgement of researcher bias, is absolutely critical to the credibility and trustworthiness of data collection and analysis in such study designs. 32
Interviews should be recorded and transcribed verbatim prior to coding and analysis. 28 Member checking, a common standard of rigor, is a practice to increase study credibility and confirmability that involves asking a research subject to verify the transcription of an interview. 1,16,28 The research subject is asked to verify the completeness and accuracy of an interview transcript to ensure the transcript truthfully reflects the meaning and intent of the subject’s contribution.
Prolonged engagement involves the researcher gaining familiarity and understanding of the culture and context surrounding the persons or situations being studied. This strategy supports reflexivity, allowing the researcher to determine how they themselves may be a source of bias during the data collection process by altering the nature of how individuals behave or interact with others in the presence of the researcher. Facial expressions, spoken language, body language, style of dress, age, race, gender, social status, culture, and the researcher’s relationship with the participants may potentially influence either participants’ responses or how the researcher interprets those responses. 33 “Fitting in” by demonstrating an appreciation and understanding of the cultural norms of the population being studied potentially allows the researcher to obtain more open and honest responses from participants. However, if the research participants or topic are too familiar or personal, this may also influence data collection or analysis and interpretation of the results. 33 The possible applications of this section to faculty research with student participants in the context of pharmacy education are obvious, and researcher reflexivity is critical to rigor.
Some researchers using observational methods adopt a strategy of direct field observation, while others play partial or full participant roles in the activity being observed. In both observation scenarios, it is impossible to separate the researcher from the environment, and researcher reflexivity is essential. The pros and cons of observation approach, relative to the research question and study purpose, should be evaluated by the researcher, and the justification for the observational strategy selected should be made clear. 34 Regardless of the researcher’s degree of visibility to the study participants, persistent observation of the targeted sample is critical to the confirmability standard and to achieving data saturation. That is, study conclusions must be clearly grounded in persistent phenomena witnessed during the study, rather than on a fluke event. 28
Researchers acknowledge that observational methodologies are limited by the reality that the researcher carries a bias in determining what is observed, what is recorded, how it is recorded, and how it is transcribed for analysis. A study’s conceptual framework is critical to achieving rigor and quality and provides guidance in developing predetermined notions or plans for what to observe, how to record, and how to minimize the influence of potential bias. 34 Researcher notes should be recorded as soon as possible after the observation event to optimize accuracy. The more detailed and complete the notes, the more accurate and useful they can be in data analysis or in auditing processes for enhancing rigor in the interpretation phase of the study. 34
Triangulation is among the common standards of rigor applied within the qualitative research paradigm. Data triangulation is used to identify convergence of data obtained through multiple data sources and methods (eg, observation field notes and interview transcripts) to avoid or minimize error or bias and optimize accuracy in data collection and analysis processes. 33,35,36
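At its simplest, data triangulation across two sources can be pictured as a comparison of the theme sets each source yields: themes present in both are corroborated, while themes present in only one warrant further scrutiny. A toy sketch with entirely hypothetical themes:

```python
# Hypothetical themes coded independently from two data sources
field_notes = {"time pressure", "peer support", "role ambiguity"}
transcripts = {"time pressure", "role ambiguity", "assessment anxiety"}

convergent = field_notes & transcripts  # corroborated by both sources
divergent = field_notes ^ transcripts   # found in only one source

print(sorted(convergent))  # -> ['role ambiguity', 'time pressure']
print(sorted(divergent))   # -> ['assessment anxiety', 'peer support']
```

Divergent themes are not discarded; as the negative case analysis discussion below in this article notes generally, data that do not converge often drive further inquiry.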
Again, researcher practice in reflexivity throughout research processes is integral to rigor in study design and implementation. Researchers must demonstrate attention to appropriate methods and reflective critique, which are represented in both core elements of the conceptual framework (Figure 1) and the Glassick criteria (Table 1). In so doing, the researcher will be well-prepared to justify sampling design and data collection decisions to manuscript reviewers and, ultimately, readers.
Step 3: Data Analysis
In many qualitative studies, data collection runs concurrently with data analysis. Specific standards of rigor are commonly used to ensure trustworthiness and integrity within the data analysis process, including use of computer software, peer review, audit trail, triangulation, and negative case analysis.
Management and analysis of qualitative data from written text, observational field notes, and interview transcriptions may be accomplished manually or with computer software applications for coding and analysis. When managing very large data sets or complex study designs, software can help researchers code, sort, organize, and weight data elements. Software applications also ease the calculation of semi-quantitative descriptive statistics, such as counts of specific events, that can serve as evidence that the researcher’s analysis is based on a representative majority of the data collected (inclusivism) rather than on selected rarities (anecdotalism). Using software to code data can also make it easier to identify deviant cases, detect coding errors, and estimate interrater reliability among multiple coders. 37 While such software helps to manage data, the actual analysis and interpretation still reside with the researcher.
Peer review, another common standard of rigor, is a process by which researchers invite an independent third-party researcher to analyze a detailed audit trail maintained by the study author. The audit trail methodically describes the step-by-step processes and decision-making throughout the study. Review of this audit trail occurs prior to manuscript development and enhances study confirmability. 1,16 The peer reviewer offers a critique of the study methods and validation of the conclusions drawn by the author as a thorough check on researcher bias.
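An audit trail need not be elaborate: even a simple append-only log of timestamped decisions gives the third-party reviewer the step-by-step record described above. A minimal, hypothetical sketch (the class name, fields, and example entry are all assumptions, not a prescribed format):

```python
import datetime
import json

class AuditTrail:
    """Minimal append-only audit trail: each methodological decision is
    timestamped so a peer reviewer can reconstruct the process later."""
    def __init__(self):
        self.entries = []

    def record(self, step, decision, rationale):
        self.entries.append({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "step": step,
            "decision": decision,
            "rationale": rationale,
        })

    def export(self):
        # JSON export for sharing with the independent peer reviewer
        return json.dumps(self.entries, indent=2)

trail = AuditTrail()
trail.record("sampling", "added 2 participants from site B",
             "site A responses homogeneous; seeking disconfirming cases")
print(trail.export())
```

The point is the discipline, not the tooling: a dated notebook serves the same confirmability purpose.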
Triangulation also plays a role in data analysis, as the term can also be used to describe how multiple sources of data can be used to confirm or refute interpretation, assertions, themes, and study conclusions. If a theme or theory can be arrived at and validated using multiple sources of data, the result of the study has greater credibility and confirmability. 16,33,36 Should any competing or controversial theories emerge during data collection or analysis, it is vital to the credibility and trustworthiness of the study that the author disclose and explore those negative cases. Negative case analysis refers to actively seeking out and scrutinizing data that do not fit or support the researcher’s interpretation of the data. 16
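Negative case analysis can likewise be pictured concretely: given coded excerpts and a working assertion, the analyst deliberately pulls out the excerpts whose codes contradict it, rather than only those that support it. A toy sketch in which the assertion, codes, and excerpt IDs are all hypothetical:

```python
# Hypothetical working assertion: "peer support reduces reported stress".
# Negative case analysis actively seeks the excerpts that contradict it.
excerpts = [
    {"id": 1, "codes": {"peer support", "low stress"}},
    {"id": 2, "codes": {"peer support", "high stress"}},  # negative case
    {"id": 3, "codes": {"isolation", "high stress"}},
    {"id": 4, "codes": {"peer support", "low stress"}},
]

negative_cases = [e for e in excerpts
                  if "peer support" in e["codes"] and "high stress" in e["codes"]]
print([e["id"] for e in negative_cases])  # -> [2]
```

Disclosing and exploring excerpt 2, rather than quietly omitting it, is what the credibility standard requires.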
The use of best practices applying to data collection and data analysis facilitates the full examination of data relative to the study purpose and research question and helps to prevent premature closure of the study. Rather than stopping at the initial identification of literal, first-level assertion statements and themes, authors must progress to interpreting how results relate to, revise, or expand the conceptual framework, or offer an improved theory or model for explaining the study phenomenon of interest. Closing the loop on data collection is critical and is achieved when thorough and valid analysis can be linked back to the conceptual framework, as addressed in the next section.
Step 4: Drawing Valid Conclusions
Lingard and Kennedy 38 succinctly state that the purpose of qualitative research is to deepen one’s understanding of specific perspectives, observations, experiences, or events evidenced through the behaviors or products of individuals and groups as they are situated in specific contexts or circumstances. Conclusions generated from study results should enhance the conceptual framework, or contribute to a new theory or model development, and are most often situated within the discussion and conclusion sections of a manuscript.
The discussion section should include interpretation of the results and recommendations for practice. Interpretations should go beyond first-level results or literal description of observed behaviors, patterns, and themes from analysis. The author’s challenge is to provide a complete and thorough examination and explanation of how specific results relate to each other, contribute to answering the research question, and achieve the primary purpose of the research endeavor. The discussion should “close the loop” by integrating study results and analysis with the original conceptual framework. The discussion section should also provide a parsimonious narrative or graphical explanation and interpretation of study results that enhances understanding of the targeted phenomena.
The conclusion section should provide an overall picture or synopsis of the study, including its important and unique contributions to the field from the perspective of both conceptual and practical significance. The conclusion should also include personal and theoretical perspectives and future directions for research. Together, the discussion and conclusion should include responses to the larger questions of the study’s contributions, such as: So what? Why do these results matter? What next?
The strength of conclusions is dependent upon the extent to which standards of rigor and best practices were demonstrated in design, data collection, data analysis, and interpretation, as described in previous sections of this article. 4,12,17,23,24 Quality and rigor expectations for drawing valid conclusions and generating new theories are reflected in the following essential features of rigor and quality:
- “Close the loop” to clearly link research questions, study design, data collection and analysis, and interpretation of results.
- Reflect effective integration of the study results with the conceptual framework and explain results in ways that relate, support, elaborate, and/or challenge conclusions of prior scholarship.
- Descriptions of new or enhanced frameworks or models are clear and effectively grounded in the study results and conclusions.
- Practical or theoretical implications are effectively discussed, including guidance for future studies.
- Limitations and issues of reflexivity and ethics are clearly and explicitly described, including references to actions taken to address these areas. 3,4,12,14
Step 5: Reporting Research Results
Key to quality reporting of qualitative research results are clarity, organization, completeness, accuracy, and conciseness in communicating the results to the reader of the research manuscript. O’Brien and others 4 proposed a standardized framework specifically for reporting qualitative studies known as the Standards for Reporting Qualitative Research (SRQR, Table 2). This framework provides detailed explanations of what should be reported in each of 21 sections of a qualitative research manuscript. While the SRQR does not explicitly mention a conceptual framework, the descriptions and table footnote clarification for the introduction and problem statement reflect the essential elements and focus of a conceptual framework. Ultimately, readers of published work determine levels of credibility, trustworthiness, and the like. A manuscript reviewer, the first reader of a study report, has the responsibility and privilege of providing critique and guidance to authors regarding achievement of quality criteria, execution and reporting of standards of rigor, and the extent to which meaningful contributions to thinking and practice in the field are presented. 13,39
An Adaptation of the 21 Elements of O’Brien and Colleagues’ Standards for Reporting Qualitative Research (SRQR) 4
Authors must avoid language heavy with connotations or adjectives that insert the researcher’s opinion into the database or manuscript. 14,40 The researcher should be as neutral and objective as possible in interpreting data and in presenting results. Thick and rich descriptions, where robust descriptive language is used to provide sufficient contextual information, enable the reader to determine credibility, transferability, dependability, and confirmability.
The process of demonstrating the credibility of research is rooted in honest and transparent reporting of how biases and other possible confounders were identified and addressed throughout study processes. Such reporting, first described within the study’s conceptual framework, should be revisited in reporting the work. Confounders may include the researcher’s training and previous experiences, personal connections to the background theory, access to the study population, and funding sources. These elements and processes are best represented in Glassick’s criteria for effective presentation and reflective critique (Table 1, criteria 5 and 6). Transferability is communicated, in part, through description of sampling factors such as the geographical location of the study, the number and characteristics of participants, and the timeframe of data collection and analysis. 40 Such descriptions also contribute to the credibility of the results and to readers’ determination of transfer to their own and other contexts. To ensure dependability, the research method must be reported in detail such that the reader can determine that proper research practices have been followed and that future researchers can repeat the study. 40 The confirmability of the results is influenced by reducing, or at a minimum explaining, any researcher influence on the result by applying and meeting standards of rigor such as member checking, triangulation, and peer review. 29,33
In qualitative studies, the researcher is often the primary instrument for data collection. Any researcher biases not adequately addressed, or errors in judgement, can affect the quality of data and subsequent research results. 33 Thus, given the creative, interpretive, and contextually bound nature of qualitative studies, the application of standards of rigor and adherence to systematic processes well-documented in an audit trail are essential. The application of rigor and quality criteria extends beyond the researcher; these criteria are also important to effective peer review processes within a study and for scholarly dissemination. The goal of rigor in qualitative research can be described as ensuring that the research design, method, and conclusions are explicit, public, replicable, open to critique, and free of bias. 41 Rigor in the research process and results is achieved when each element of study methodology is systematic and transparent through complete, methodical, and accurate reporting. 33 Beginning the study with a well-developed conceptual framework and actively using both researcher reflexivity and rigorous peer review during study implementation can drive both study rigor and quality.
As the number of published qualitative studies in health professions educational research increases, it is important for our community of health care educators to keep in mind the unique aspects of rigor in qualitative studies presented here. Qualitative researchers should select and apply any of the above referenced study methods and research practices, as appropriate to the research question, to achieve rigor and quality. As in any research paradigm, the goal of quality and rigor in qualitative research is to minimize the risk of bias and maximize the accuracy and credibility of research results. Rigor is best achieved through thoughtful and deliberate planning, diligent and ongoing application of researcher reflexivity, and honest communication between the researcher and the audience regarding the study and its results.
Qualitative research and content validity: developing best practices based on science and experience
- Published: 27 September 2009
- Volume 18, pages 1263–1278 (2009)
- Meryl Brod, Laura E. Tesler & Torsten L. Christensen
Establishing content validity for both new and existing patient-reported outcome (PRO) measures is central to a scientifically sound instrument development process. Methodological and logistical issues present a challenge in regard to determining the best practices for establishing content validity.
This paper provides an overview of the current state of knowledge regarding qualitative research to establish content validity based on the scientific methodological literature and authors’ experience.
Conceptual issues and frameworks for qualitative interview research, developing the interview discussion guide, reaching saturation, analysis of data, developing a theoretical model, item generation and cognitive debriefing are presented. Suggestions are offered for dealing with logistical issues regarding facilitator qualifications, ethics approval, sample recruitment, group logistics, taping and transcribing interviews, honoraria and documenting content validity.
Conclusions
It is hoped this paper will stimulate further discussion regarding best practices for establishing content validity so that, as the PRO field moves forward, qualitative research can be evaluated for quality and acceptability according to scientifically established principles.
Brod, M., Tesler, L.E. & Christensen, T.L. Qualitative research and content validity: developing best practices based on science and experience. Qual Life Res 18, 1263–1278 (2009). https://doi.org/10.1007/s11136-009-9540-9
Speaker 1: Validity and reliability are probably among the most confusing and frustrating terms when it comes to qualitative research. There are so many definitions and so many discussions and so many alternative terms have been put forward, so it doesn't really help to understand what validity is and how we can ensure that our findings are valid or how we can increase these findings' validity.
Background Mississippi (MS) experiences disproportionally high rates of new HIV infections and limited availability of pre-exposure prophylaxis (PrEP). Federally Qualified Health Centers (FQHCs) are poised to increase access to PrEP. However, little is known about the implementation strategies needed to successfully integrate PrEP services into FQHCs in MS. Purpose The study had two objectives ...
In this manner, both the research process and results can be assured of high rigor and robustness. ... From a realism standpoint, Porter then proposes multiple and open approaches for validity in qualitative research that incorporate parallel perspectives[43,44] and diversification of meanings. Any work of qualitative research, when read by the ...
However, the increased importance given to qualitative information in the evidence-based paradigm in health care and social policy requires a more precise conceptualization of validity criteria that goes beyond just academic reflection. After all, one can argue that policy verdicts that are based on qualitative information must be legitimized by valid research, just as quantitative effect ...
Although the tests and measures used to establish the validity and reliability of quantitative research cannot be applied to qualitative research, there are ongoing debates about whether terms such as validity, reliability and generalisability are appropriate to evaluate qualitative research.[2-4] In the broadest context these terms are applicable, with validity referring to the integrity and ...
Abstract. Much contemporary dialogue has centered on the difficulty of establishing validity criteria in qualitative research. Developing validity standards in qualitative research is challenging because of the necessity to incorporate rigor and subjectivity as well as creativity into the scientific process.
Validity in qualitative research can also be checked by a technique known as respondent validation. This technique involves testing initial results with participants to see if they still ring true. Although the research has been interpreted and condensed, participants should still recognize the results as authentic and, at this stage, may even ...
Fundamental Criteria: General Research Quality. Various researchers have put forward criteria for evaluating qualitative research, which have been summarized in Table 3. Also, the criteria outlined in Table 4 effectively deliver the various approaches to evaluate and assess the quality of qualitative work. The entries in Table 4 are based on Tracy's "Eight big-tent criteria for excellent ...
The rejection of reliability and validity in qualitative inquiry in the 1980s has resulted in an interesting shift for "ensuring rigor" from the investigator's actions during the course of the research, to the reader or consumer of qualitative inquiry.
Validity refers to how accurately a method measures what it is intended to measure. If research has high validity, that means it produces results that correspond to real properties, characteristics, and variations in the physical or social world. High reliability is one indicator that a measurement is valid.
Reliability and validity are equally important to consider in qualitative research. Ways to enhance validity in qualitative research include: ... Building reliability can include one or more of the following: ... The most well-known measure of qualitative reliability in education research is inter-rater reliability and consensus coding.
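The passage above names inter-rater reliability as the most common measure of qualitative reliability. A minimal sketch of one standard way to quantify it, Cohen's kappa, which corrects two coders' raw agreement for the agreement expected by chance (the theme labels below are invented for illustration, not taken from any of the cited studies):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders over the same items."""
    assert len(coder_a) == len(coder_b), "both coders must rate every item"
    n = len(coder_a)
    # Observed agreement: fraction of items both coders labeled identically.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement: chance overlap given each coder's label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codings of six interview excerpts by two independent coders.
a = ["theme1", "theme2", "theme1", "theme3", "theme2", "theme1"]
b = ["theme1", "theme2", "theme2", "theme3", "theme2", "theme1"]
print(round(cohens_kappa(a, b), 3))  # → 0.739
```

Values near 1 indicate strong agreement; in a consensus-coding workflow, disagreements flagged by a low kappa are typically discussed until the codebook is refined and the items are recoded.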
The development of validity criteria in qualitative research poses theoretical issues, not simply technical problems.[60] Whittemore et al[58] (2001) explored the historical development of validity criteria in qualitative research and synthesized the findings that reflect a contemporary reconceptualization of the debate and dialogue that have ...
Issues of trustworthiness in qualitative leisure research, often demonstrated through particular techniques of reliability and/or validity, are often either nonexistent, unsubstantial, or unexplained.
Therefore, to help scholars conduct high-quality and rigorous qualitative research, for each approach, we describe basic tenets, when to use such an approach, and what makes it distinctive from the others. ... Debates to assure quality in qualitative research have been the point of conversations among scholars for decades (e.g. Denzin & Lincoln ...
Processual Validity. In qualitative research, validity does not present a unitary concept.[12,28] Therefore, a processual approach may offer qualitative research more flexibility to adapt projects to different situations, contexts, epistemological paradigms, and personal styles in conducting research. In addition, qualitative methods should not use very strict or "one best way" strategies ...
Rather than prescribing what reliability and/or validity should look like, researchers should attend to the overall trustworthiness of qualitative research by more directly addressing issues associated with reliability and/or validity, as aligned with larger issues of ontological, epistemological, and paradigmatic affiliation.
Reliability in qualitative research refers to the stability of responses to multiple coders of data sets. It can be enhanced by detailed field notes, by using recording devices, and by transcribing the digital files. However, validity in qualitative research might have different terms than in quantitative research. Lincoln and Guba (1985) used "trustworthiness" of ...
In qualitative research, validity does not present a unitary concept.[12,28] Therefore, ... Hence, every step matters for continuous co-construction.[23,37] High-quality qualitative research can yield insightful contributions to providing secure evidence for health care decisions.[39] Thus, in a modified quote, ...
Abstract. Attributes of rigor and quality and suggested best practices for qualitative research design as they relate to the steps of designing, conducting, and reporting qualitative research in health professions educational scholarship are presented. A research question must be clear and focused and supported by a strong conceptual framework ...
His account of validity in qualitative research is, at least in part, an attempt to uncover 'theory-in-use'. He distinguishes five types of validity: descriptive validity, interpretive validity, theoretical validity, generalisability and evaluative validity.[1] Maxwell notes that in experimental research threats to validity are "addressed ...
Feedback: "Soliciting feedback from others is an extremely useful strategy for identifying validity threats, your own biases and assumptions, and flaws in your logic or methods" (Maxwell, 1996, p. 94). Member Checks: "systematically soliciting feedback about one's data and conclusions from the people you are ...
Typically, the authors then move into presenting a comprehensive literature review, which should map the existing and up-to-date research knowledge and explicate the research problem that the study at hand will address, as well as its significance to the audience.[1] A high-quality qualitative study, just like any other study, should have a ...
To support the assertion that items have high content validity, items generated should use the language of the subjects interviewed and directly reflect the content of qualitative statements made by subjects. ... T.L. Qualitative research and content validity: developing best practices based on science and experience. Qual Life Res 18, 1263 ...
Validity and reliability, or trustworthiness, are fundamental issues in scientific research, whether qualitative, quantitative, or mixed. It is a necessity for researchers to describe which ...