It compares the means of participant groups across categorised and independent variables, using tables that summarise the data for the groups and sub-groups of survey respondents.
It is also known as paired testing, where two people are assigned specific identities and qualifications in order to compare and study types of discrimination.
In historical research, an investigator collects and analyses information to understand, describe, and explain events that occurred in the past. Researchers try to find out, as accurately and as closely as possible, what happened during a certain period of time. This type of research does not allow any manipulation or control of variables.
Methods of Analysing Data

Researchers use multiple theories to explain specific phenomena, situations, and types of behaviour. Working through textual data takes a long time, so coding is used as a way of tagging the data and organising it into a sequence of symbols, numbers, and letters to highlight the relevant points. Quantitative data can be used to validate interpretations of historical events or incidents.
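To make the idea of coding concrete, here is a minimal Python sketch: free-text survey responses are tagged with short code labels via keyword matching, so that recurring themes can be counted. The codebook, keywords, and responses are all hypothetical examples, not a standard coding scheme; real qualitative coding is usually done iteratively, often with dedicated software.

```python
from collections import Counter

# Hypothetical codebook: each code is a theme label with illustrative keywords.
codebook = {
    "COST": ["price", "expensive", "afford"],
    "TIME": ["slow", "wait", "delay"],
    "STAFF": ["rude", "helpful", "friendly"],
}

# Invented free-text responses, for illustration only.
responses = [
    "The staff were friendly but the wait was far too long.",
    "Too expensive for what you get.",
    "Helpful team, fair price.",
]

def code_response(text):
    """Return the list of codes whose keywords appear in the response."""
    text = text.lower()
    return [code for code, words in codebook.items()
            if any(word in text for word in words)]

# Tag each response, then count how often each theme recurs.
tagged = {response: code_response(response) for response in responses}
theme_counts = Counter(code for codes in tagged.values() for code in codes)
print(theme_counts)
```

Once responses are tagged this way, the theme counts can feed directly into the frequency-style analyses described elsewhere in this guide.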
Quantitative research is associated with numerical data, or data that can be measured, and is used to study large groups of people. The information is gathered by performing statistical, mathematical, or computational techniques.
Quantitative research isn’t simply based on statistical analysis or quantitative techniques; rather, it uses a certain approach to theory to address research hypotheses or research questions, establish an appropriate research methodology, and draw findings and conclusions.
Some of the most commonly employed quantitative research strategies include data-driven dissertations, theory-driven studies, and reflection-driven research. Regardless of the chosen approach, quantitative research has some common features, as listed below.
Methods of Analysing Data

- Statistical analysis: collecting, analysing, and interpreting ample data to discover underlying patterns and details; statistics are used in every field to support better decisions.
- Correlational analysis: discovering the interrelationship between two or more aspects of a situation.
- Measures of dispersion: describing how values are distributed around some central value, such as an average (for example, the range: the distance separating the highest value from the lowest).
- Frequency counts: counting the maximum and minimum number of responses to a question, or the occurrences of a specific phenomenon.
- Analysis that determines the nature of social problems, such as ethnic or gender discrimination.
- Logistic regression: explaining the relationship between one dependent binary variable and one or more independent variables.
- Parametric tests (such as the t-test): comparing two populations or samples.
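To illustrate a few of these analysis methods, the short Python sketch below computes a range (a measure of dispersion), a frequency count, a Pearson correlation coefficient, and a pooled-variance two-sample t statistic. All the survey figures are invented for illustration only.

```python
import statistics
from collections import Counter

# Hypothetical scores from two respondent groups.
group_a = [72, 85, 78, 90, 66, 81, 77]
group_b = [68, 74, 70, 79, 65, 72, 69]

# Dispersion: the range separates the highest value from the lowest.
range_a = max(group_a) - min(group_a)    # 90 - 66 = 24

# Frequency: count the occurrences of each response to a question.
responses = ["agree", "agree", "neutral", "disagree", "agree"]
freq = Counter(responses)

# Correlation: the interrelationship between two aspects of a situation.
hours_studied = [2, 4, 6, 8, 10]
test_scores = [55, 60, 70, 85, 95]
mx, my = statistics.mean(hours_studied), statistics.mean(test_scores)
num = sum((x - mx) * (y - my) for x, y in zip(hours_studied, test_scores))
den = (sum((x - mx) ** 2 for x in hours_studied)
       * sum((y - my) ** 2 for y in test_scores)) ** 0.5
r = num / den                            # Pearson's r, close to +1 here

# Two-sample t statistic (pooled-variance form) comparing the two groups.
mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
n_a, n_b = len(group_a), len(group_b)
pooled = ((n_a - 1) * statistics.variance(group_a)
          + (n_b - 1) * statistics.variance(group_b)) / (n_a + n_b - 2)
t = (mean_a - mean_b) / (pooled * (1 / n_a + 1 / n_b)) ** 0.5

print(range_a, freq["agree"], round(r, 3), round(t, 2))
```

In practice a dedicated statistics package would also report p-values and check test assumptions; this sketch only shows the arithmetic behind each method.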
We hear you.
At ResearchProspect, our expert writers can help you with your quantitative dissertation, whether you are studying sports science, medical or biological science, education, business, psychology, social sciences, engineering, project management, or any other science-based degree. We guarantee 100% commitment, 100% plagiarism-free work, 100% confidentiality, and 100% satisfaction.
It is a type of scientific research where a researcher collects evidence to seek answers to a question. It is associated with studying human behaviour from an informative perspective. It aims at obtaining in-depth details of the problem.
As the term suggests, qualitative research is based on qualitative research methods, including participants’ observations, focus groups, and unstructured interviews.
Qualitative research is very different in nature from quantitative research. It takes an established path through the research process: how research questions are set up, how existing theories are built upon, what research methods are employed, and how the findings are unveiled to the readers.
You may adopt conventional methods, including phenomenological research, narrative-based research, grounded theory research, ethnographies, case studies, and auto-ethnographies.
Again, regardless of the chosen approach to qualitative research, your dissertation will have unique key features as listed below.
Now that you know the unique differences between quantitative and qualitative research methods, you may want to learn a bit about primary and secondary research methods.
Here is an article that will help you distinguish between primary and secondary research and decide whether you need to use quantitative and/or qualitative primary research methods in your dissertation.
Alternatively, you can base your dissertation on secondary research, which is descriptive and explanatory in essence.
Action research
Action research aims at finding an immediate solution to a problem, and the researchers can themselves act as participants in the research. It is commonly used in the educational field.
Case study
A case study includes data collection from multiple sources over time. It is widely used in the social sciences to study the underlying information, organisation, community, or event. It does not provide a solution to the problem, and researchers cannot act as participants in the research.
Ethnography
In this type of research, the researcher examines the people in their natural environment. Ethnographers spend time with people to study people and their culture closely. They can consult the literature before conducting the study.
Methods of Analysing Data

Data are collected through open-ended questions, field observations, and interviews, and can be analysed with methods such as:

- Content analysis: a method of studying and retrieving meaningful information from documents.
- Thematic analysis: aims at identifying patterns of themes in the collected information, such as face-to-face interviews, texts, and transcripts.
- Discourse analysis: a study of how language is used in texts and contexts.
When you combine quantitative and qualitative methods of research, the resulting approach becomes mixed methods of research.
Over the last few decades, much of the research in academia has been conducted using mixed methods, because this technique has gained greater legitimacy for several reasons, including the view that combining the two types of research can provide holistic and more dependable results.
Here is what mixed methods of research involve:
Note: This method has one prominent limitation. As previously mentioned, combining qualitative and quantitative research can be difficult because the two differ in design and approach. In many ways they are contrasting styles of research, so care must be exercised when basing your dissertation on mixed methods of research.
When choosing a research method for your own dissertation, it would make sense to carefully think about your research topic, research questions, and research objectives to make an intelligent decision in terms of the philosophy of research design.
Dissertations based on mixed methods of research can be the hardest to tackle even for PhD students.
Our writers have years of experience in writing flawless, to-the-point mixed methods-based dissertations, so you can be confident that the dissertation they write for you will meet the technical requirements and the formatting guidelines.
Read our guarantees to learn more about how you can improve your grades with our dissertation services.
Please find below an example of a research methods section in a dissertation or thesis.
Diversity management became prominent in the late twentieth century, with its foundations in America. Historically homogeneous or nondiverse nations, such as Finland, have not yet experienced the issues associated with rising cultural and ethnic diversity in the workforce. Regardless of the environment, workforce diversity is garnering greater attention, and its relevance keeps expanding owing to globalised and international companies, global and national worker mobility, demographic shifts, and the drive to enhance productivity.
As a result, challenges of diversity management have been handled through legal, financial, and moral pressures (Hayes et al., 2020). The evolving structure of the working population in terms of language, ethnic background, maturity level, faith, or ethnocultural history is said to pose a challenge to human resource management (HRM) in utilising diversity: the understanding, abilities, and expertise prospects of the entire workforce to deal with possible developments.
The European approach to diversity management is regarded as still developing; however, it is found to emphasise the relationship to business and to lack competence in dealing with diversity management problems. Mass immigration concentrates diversity, which is sometimes treated as a cultural-minority issue, implying that diversity management is normalised into anti-discrimination actions (Yadav and Lenka, 2020).
These causes, in turn, have provided the basis for comprehensive diversity research, which has generated different theories, frameworks, concepts, and guidelines from interdisciplinary viewpoints, such as industrial and organisational psychology and behaviour (OB), cultural studies, anthropology, migration, economics, postcolonialism, and so on, and has produced diversity analyses at the international, social and cultural, organisational, group, and individual scales. This dissertation focuses on diversity concerns from the perspective of impression management, specifically treating HRM as an executive-level phenomenon (Seliverstova, 2021).
As conceptual frameworks, organisational structures concentrating on the production of diversity, and social psychology, notably social identity theory with its diverse ‘identities’ of persons and intergroup connections, are primarily employed. The primary goal of such studies is to discover workplace inequities or to examine the effects of diversity on workplace outcomes.
Individual-level research interests include behaviours, emotions, intelligence, and intercultural skills or competencies, while group-level interests include group dynamics, intergroup interactions, effectiveness, and cooperation or collaboration. Organisational studies address themes such as workforce composition, workplace equality, and diversity challenges and how they may be managed. Domestic diversity, which omits national distinctions, or global diversity, which concerns diverse country cultures, might be studied further (Aydın and Özeren, 2018).
Diversity is a context-dependent, particular, comparative, complicated, plural term with varying interpretations across organisations and cultures, and no unified definition. As a result, in addition to many internal and external elements, diversity may be managed, individuals trained, and organisations developed in various ways. This dissertation considers diversity in an organisational environment as a construct of ‘differences’ to be handled (Cummings, 2018).
Various management systems have evolved in stages, bringing diverse diversity management concepts. Equality/equal opportunities (EO) legislation and diversity management (DM) are the two conventional approaches and primary streams, with differing theoretical foundations, for managing and dealing with workforce diversity challenges.
These approaches differ in whether diversity is handled by increasing sameness through legal pressures or by voluntarily respecting people’s differences, which shows an organisation’s responsiveness and proactivity toward managing diversity. However, most of the literature in this area has overlooked impression management theories (Coad and Guenther, 2014). Therefore, this study will add a new dimension to this area by introducing impression-management analysis.
This research aims to analyse the impact of organisational structure on human resources diversification from the viewpoint of impression managerial theory. It has the following objectives:
This research will answer the following questions:
The organisational structure significantly impacts the recruitment of diverse human resources.
According to Staniec and Zakrzewska-Bielawska (2010), strategy-oriented activities and organisational components are the critical foundations of an organisational structure aligned with strategy. Each company’s internal organisation is somewhat distinctive, resulting from various corporate initiatives and historical conditions.
Furthermore, each design is based on the essential success elements and vital tasks inherent in the firm’s plan. Their article offers empirical research on distinctive organisational-structure elements in Polish firms in the context of concentration and diversification strategies, finding that companies that adopted concentration strategies mainly used functional organisational structures.
Tasks were primarily classified and categorised based on functions and phases of the technical process, with coordination based on hierarchy. Jobs were also highly centralised and formalised. Organisational structures of an active type were also prevalent in many firms. Only a handful of the evaluated organisations possessed the flexible, contemporary divisional or matrix structures appropriate to differentiation. However, it appears that even such organisations should adjust their organisational solutions to perform successfully in an immensely complex and chaotic environment.
Similarly, according to Yang and Konrad (2011), diversity management techniques are the institutionalised methods created and applied by organisations to manage diversity among all organisational stakeholders. They examined the existing research on the causes and significance of diversity management approaches.
They construct a research model indicating many potential routes for future study using institutional and resource-based theories, and offer prospective avenues for research on diversity management techniques to further the two theoretical viewpoints. The findings indicate that research on diversity management practices can provide insights into both theories: for institutional theory, diversity management provides a method for reconciling the agency-versus-structure issue.
Furthermore, diversity management is a suitable framework for studying how institutional pressures are translated into organisational action and the relationship between complying with institutional mandates and attaining high performance. Research on diversity management raises the importance of environmental normative elements in resource-based reasoning.
It allows for exploring essential resource sources and the co-evolution of diversity resources and management capacities, potentially developing dynamic resource-based theory. Furthermore, a review of the existing research on diversity management practices reveals that research in this field has nearly entirely concentrated on employee-related activities.
However, in establishing the idea of diversity management practices, they deliberately included the practices that companies put in place to manage diversity across all stakeholder groups. Management techniques for engaging with consumers, dealers, supervisors, board directors, and community members are critical for meeting institutional theory’s social and normative commitments.
Moreover, Sippola (2014) looks at diversity management from the standpoint of HRM. The study aims to discover the effects of expanding workforce diversity on HRM inside firms. This goal is accomplished through four papers examining diversity management’s impacts on HRM from various viewpoints, mostly in longitudinal contexts.
The purpose of the first article, as a pilot survey, is to determine the reasons, advantages, and problems of rising cultural diversity and the consequences for HRM to get a preliminary grasp of the issue in the specific setting. According to the report, diversity is vital for productivity but is not often emphasised in HRM strategy.
The key areas that were changed were acquisition, development, and growth. The second article examines how different diversity management paradigms recognised in businesses affect HRM. It offers an experimentally verified typology that explains reactive or proactive strategic and operational level HRM activities in light of four alternative diversity management perspectives.
The third article examines how a ‘working culture bridge group’ strategy fosters and enhances workplace diversity. The research looks into how development goals are defined, what training and development techniques are used, and the consequences and causal factors when the training and development approach is evaluated.
The primary goal of article four is to establish which components of diversity management design are globally integrated into multinational corporations (MNCs) and which integrating (delivery) methods are employed to facilitate it. Another goal is to identify the institutional problems faced by the Finnish national diversity setting during the integration process.
The findings show that the example organisation achieved greater global uniformity at the level of the diversification concept through effective use of multiple frameworks, but was forced to rely on a more multinational approach to implementing diversification policies and procedures. The difficulties faced emphasised the distinctiveness of Finland’s cognitive and normative institutional setting for diversity.
Furthermore, according to Guillaume et al. (2017), to account for the double-edged character of demographic workplace diversity’s impacts on social inclusion, competence, and well-being-related factors, research has shifted away from straightforward main-effect methods and begun to investigate factors that moderate these effects.
While there is no shortage of primary research on the circumstances that lead to favourable or poor results, it is unknown which contextual elements make it work. Using the Classification framework as a theoretical lens, they examine variables that moderate the impacts of workplace diversity on social integration, performance, and well-being outcomes, emphasising characteristics that organisations and managers can influence.
They suggest future study directions and end with practical applications. They concluded that faultlines, cross-categorisation, and status variations across demographic groupings are salient aspects of diversity. Cross-categorisation has been proven to reduce intergroup prejudice while promoting social inclusion, competence, and well-being. Whether faultlines and subgroup status inequalities promote negative or positive intergroup interactions, and whether they hinder social integration, performance, and well-being, depends on whether situational factors encourage negative or positive intergroup connections. The impacts were not mitigated by team size or diversity type.
Furthermore, their data demonstrate that task characteristics are essential for workgroup diversity. Any demographic diversity in workgroups can promote creativity, but only when combined with task-relevant expertise does it improve the performance of teams undertaking complicated tasks. The type of team and the industrial context do not appear to play a role. It is unclear whether these findings apply to relational demography and organisational diversity impacts. There is some evidence that, under some settings, relational demography may increase creativity and, as previously said, demographic variety may help firms function in growth-oriented strategy contexts.
Likewise, according to Ali, Tawfeq, and Dler (2020), diversity management refers to organisational strategies that strive to increase the integration of people from diverse backgrounds into the framework of corporate goals. Organisations should develop productive ways to implement diversity management (DM) policies to establish a creative enterprise that can enhance their operations, goods, and services.
Furthermore, human resource management (HRM) is a clever tool for any firm to manage resources within the company. As a result, this article explores the link between DM, HR policies, and workers’ creative work-related behaviours in firms in Kurdistan’s Fayoum city. Using a questionnaire, hypotheses were tested on the influence of HRM on diversity management, the influence of HRM on innovation, and the impact of diversity management on innovation.
The first premise is that workplace diversity changes the nature of working relationships, how supervisors and managers connect, and how workers respond to one another. It also addresses human resource functions such as record-keeping, training, recruiting, and employee competence needs. The last premise on the influence of diversity management on innovation is that workplace diversity assists a business in hiring a diverse range of personnel.
In other words, a vibrant workforce needs individuals of varied personalities. Workplace diversity refers to a company’s workforce consisting of employees of various genders, ages, faiths, races, ethnicities, cultural backgrounds, religions, dialects, training, capabilities, etc. According to the study’s findings, human resource management strategies have a substantial influence on diversity management.
Second, diversity management was found to have a considerable impact on creativity. Finally, human resource management techniques significantly influenced innovation. Based on the findings, it was discovered that diversity management had a more significant influence on innovation than human resource management.
Lastly, according to Li et al. (2021), the universal trend of rising workplace age diversity has increased the study focus on the organisational effects of age-diverse workforces. Prior research has mainly concentrated on the statistical association between age diversity and organisational success rather than experimentally examining the probable processes behind this relationship.
They argue that age diversity influences organisational performance through human and social capital, using an intellectual capital paradigm. Moreover, they investigate workplace functional diversity and age-inclusive management as two moderating factors affecting the benefits of age diversity for human and social capital.
Their hypotheses were evaluated using data from the Association for Human Resource Management’s major manager-reported workplace survey. Age diversity was favourably linked with organisational performance via the mediation of higher human and social capital. Furthermore, functional diversity and age-inclusive management amplified the favourable effects of age diversity on human and social capital. Their study gives insight into how age-diverse workforces might generate value by nurturing knowledge-based organisational resources.
Although there is a vast body of research on diversity in the human resource management area, and many researchers have explored its various dimensions, no study explicitly examines the impact of organisational structure on human resource diversification. Moreover, no researchers have examined the scope of impression management in this context.
Therefore, this research will fill this considerable literature gap, first by finding the direct impact of organisational structure on human resource diversification, and second by introducing the new dimension of impression management theory. It will open new avenues for research in this area and help HR managers formulate better policies for a more inclusive organisational structure.
This will be mixed quantitative and qualitative research based on secondary data collected through different research journals and case studies of various companies. First, the quantitative analysis will be conducted through a regression analysis to show the organisational structure’s impact on human resource diversification.
A dummy variable will be used to represent organisational structure, and diversification will be captured through the ethnic backgrounds of the employees. Moreover, different variables will be added to the model, such as competency, social inclusion, etc. This will fulfil the objective of identifying the various factors that affect the management decision to recruit diverse human resources. Second, a systematic review of the literature will be conducted for the qualitative analysis, to add the impression-management dimension to the research. Google Scholar, JSTOR, Scopus, etc. will be searched using keywords such as human resource diversity, impression management, and organisational structure.
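As a hedged sketch of the dummy-variable regression just described, the Python snippet below regresses an invented diversity measure on a 0/1 organisational-structure indicator. With a single binary regressor, the ordinary least squares slope reduces to the difference between the two group means. All variable names and figures are hypothetical, for illustration only.

```python
# Hypothetical dummy variable: 1 = flexible/matrix structure, 0 = hierarchical.
structure = [0, 0, 0, 0, 1, 1, 1, 1]
# Hypothetical diversity measure, e.g. the share of employees
# from minority ethnic backgrounds in each sampled firm.
diversity = [0.22, 0.30, 0.25, 0.27,
             0.41, 0.38, 0.44, 0.37]

n = len(structure)
mean_x = sum(structure) / n
mean_y = sum(diversity) / n

# Ordinary least squares for a single regressor:
# slope = cov(x, y) / var(x); intercept = mean(y) - slope * mean(x).
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(structure, diversity))
         / sum((x - mean_x) ** 2 for x in structure))
intercept = mean_y - slope * mean_x

print(f"diversity = {intercept:.2f} + {slope:.2f} * structure")
```

In the actual study, a statistics package would also report standard errors and significance, and additional regressors (competency, social inclusion, etc.) would extend this single-regressor arithmetic to a multiple regression.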
Although the research offers a comprehensive empirical analysis of the relationship under consideration, the study is limited to secondary data owing to a lack of resources. It would have been better to conduct the research on primary data collected through the organisations; that would have captured the actual views of working professionals and increased the validity of the research.
Ali, M., Tawfeq, A., & Dler, S. (2020). Relationship between Diversity Management and Human Resource Management: Their Effects on Employee Innovation in the Organizations. Black Sea Journal of Management and Marketing, 1 (2), 36-44.
Aydın, E., & Özeren, E. (2018). Rethinking workforce diversity research through critical perspectives: emerging patterns and research agenda. Business & Management Studies: An International Journal, 6 (3), 650-670.
Coad, A., & Guenther, C. (2014). Processes of firm growth and diversification: theory and evidence. Small Business Economics, 43 (4), 857-871.
Cummings, V. (2018). Economic Diversification and Empowerment of Local Human Resources: Could Singapore Be a Model for the GCC Countries? In Economic Diversification in the Gulf Region, II, 241-260.
Guillaume, Y., Dawson, J., Otaye‐Ebede, L., Woods, S., & West, M. (2017). Harnessing demographic differences in organizations: What moderates the effects of workplace diversity? Journal of Organizational Behavior, 38 (2), 276-303.
Hayes, T., Oltman, K., Kaylor, L., & Belgudri, A. (2020). How leaders can become more committed to diversity management. Consulting Psychology Journal: Practice and Research, 72 (4), 247.
Li, Y., Gong, Y., Burmeister, A., Wang, M., Alterman, V., Alonso, A., & Robinson, S. (2021). Leveraging age diversity for organizational performance: An intellectual capital perspective. Journal of Applied Psychology, 106 (1), 71.
Seliverstova, Y. (2021). Workforce diversity management: a systematic literature review. Strategic Management, 26 (2), 3-11.
Sippola, A. (2014). Essays on human resource management perspectives on diversity management. Vaasan yliopisto.
Staniec, I., & Zakrzewska-Bielawska, A. (2010). Organizational structure in the view of single business concentration and diversification strategies—empirical study results. Recent Advances in Management, Marketing, Finances. WSEAS Press, Penang, Malaysia.
Yadav, S., & Lenka, U. (2020). Diversity management: a systematic review. Equality, Diversity and Inclusion: An International Journal.
Yang, Y., & Konrad, A. (2011). Understanding diversity management practices: Implications of institutional theory and resource-based theory. Group & Organization Management, 36 (1), 6-38.
What is the difference between research methodology and research methods?
Research methodology helps you conduct your research in the right direction, validates the results of your research and makes sure that the study you are conducting answers the set research questions.
Research methods are the techniques and procedures used for conducting research. Choosing the right research method for your writing is an important aspect of the research process.
The types of research methods include:

- Quantitative research: associated with numerical data, or data that can be measured. It is used to study large groups of people, and the information is gathered by performing statistical, mathematical, or computational techniques.
- Qualitative research: a type of scientific research where a researcher collects evidence to seek answers to a question. It is associated with studying human behaviour from an informative perspective and aims at obtaining in-depth details of the problem.
- Mixed methods: a combination of quantitative and qualitative research.
Action research for my dissertation? A brief overview: action research is a responsive, action-oriented, participative, and reflective research technique.
In historical research, a researcher collects and analyses data and explains events that occurred in the past to test the truthfulness of observations.
A variable is a characteristic that can change and have more than one value, such as age, height, and weight. But what are the different types of variables?
Multimodal intelligence, where AI systems can process and integrate information from multiple modalities, such as text, visual, audio, etc., has emerged as a key concept in today’s data-driven era. This cross-modal approach finds diverse applications and transformative potential across industries. By fusing heterogeneous data streams, multimodal AI generates representations more akin to human-like intelligence than traditional unimodal techniques.
In this thesis, we aim to advance the field of multimodal intelligence by focusing on three crucial dimensions: multimodal alignment, robustness, and generalizability. By introducing new approaches and methods, we aim to improve the performance, robustness, and interpretability of multimodal models in practical applications. In this thesis, we address these critical questions: (1) How do we explore the inner semantic alignment between different types of data? How can the learned alignment help advance multimodal applications? (2) How robust are the multimodal models? How can we improve the models’ robustness in real-world applications? (3) How do we generalize the knowledge of one learned domain to another unlearned domain?
This thesis makes contributions to all three technical challenges. We start with a contribution of learning cross-modal semantic alignment, where we explore establishing rich connections between language and image/video data, with a focus on the multimodal summarization task. By aligning the semantic content of language with visual elements, the resulting models can possess a more nuanced understanding of the underlying concepts. We delve into the application of Optimal Transport-based approaches to learn cross-domain alignment, enabling models to provide interpretable explanations of their multimodal reasoning process.
For the next contribution, we develop comprehensive evaluation metrics and methodologies to assess the robustness of multimodal models. By simulating distribution shifts and measuring the model’s performance under different scenarios, we can gain a deeper understanding of the model’s adaptability and identify potential vulnerabilities. We also adopt Optimal Transport to improve the model’s robustness performance through data augmentation via Wasserstein Geodesic perturbation.
The third contribution revolves around the generalizability of multimodal systems, with an emphasis on the interactive domain and the healthcare domain. In the interactive domain, we develop new learning paradigms for learning executable robotic policy plans from visual observations by incorporating latent language encoding. We also use retrieval augmentation to make the vision-language models capable of recognizing and providing knowledgeable answers in real-world entity-centric VQA. In the healthcare domain, we bridge the gap by transferring the knowledge of LLMs to clinical ECG and EEG. In addition, we design retrieval systems that can automatically match the clinical healthcare signal to the most similar records in the database. This functionality can significantly aid in diagnosing diseases and reduce physicians’ workload.
In essence, this thesis seeks to propel the field of multimodal AI forward by enhancing alignment, robustness, and generalizability, thus paving the way for more sophisticated and efficient multimodal AI systems.
This page contains reference examples for published dissertations or theses.
Kabir, J. M. (2016). Factors influencing customer satisfaction at a fast food hamburger chain: The relationship between customer satisfaction and customer loyalty (Publication No. 10169573) [Doctoral dissertation, Wilmington University]. ProQuest Dissertations & Theses Global.
Miranda, C. (2019). Exploring the lived experiences of foster youth who obtained graduate level degrees: Self-efficacy, resilience, and the impact on identity development (Publication No. 27542827) [Doctoral dissertation, Pepperdine University]. PQDT Open. https://pqdtopen.proquest.com/doc/2309521814.html?FMT=AI
Zambrano-Vazquez, L. (2016). The interaction of state and trait worry on response monitoring in those with worry and obsessive-compulsive symptoms [Doctoral dissertation, University of Arizona]. UA Campus Repository. https://repository.arizona.edu/handle/10150/620615
Published dissertation or thesis references are covered in the seventh edition APA Style manuals: the Publication Manual Section 10.6 and the Concise Guide Section 10.5.
What’s the difference between a method and a methodology?
Methodology refers to the overarching strategy and rationale of your research project . It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.
Methods are the specific tools and procedures you use to collect and analyze data (for example, experiments, surveys , and statistical tests ).
In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section .
In a longer or more complex research project, such as a thesis or dissertation , you will probably include a methodology section , where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.
Attrition refers to participants leaving a study. It always happens to some extent—for example, in randomized controlled trials for medical research.
Differential attrition occurs when attrition or dropout rates differ systematically between the intervention and the control group . As a result, the characteristics of the participants who drop out differ from the characteristics of those who stay in the study. Because of this, study results may be biased .
Action research is conducted in order to solve a particular issue immediately, while case studies are often conducted over a longer period of time and focus more on observing and analyzing a particular ongoing phenomenon.
Action research is focused on solving a problem or informing individual and community-based knowledge in a way that impacts teaching, learning, and other related processes. It is less focused on contributing theoretical input, instead producing actionable input.
Action research is particularly popular with educators as a form of systematic inquiry because it prioritizes reflection and bridges the gap between theory and practice. Educators are able to simultaneously investigate an issue as they solve it, and the method is very iterative and flexible.
A cycle of inquiry is another name for action research . It is usually visualized in a spiral shape following a series of steps, such as “planning → acting → observing → reflecting.”
To make quantitative observations , you need to use instruments that are capable of measuring the quantity you want to observe. For example, you might use a ruler to measure the length of an object or a thermometer to measure its temperature.
Criterion validity and construct validity are both types of measurement validity . In other words, they both show you how accurately a method measures something.
While construct validity is the degree to which a test or other measurement method measures what it claims to measure, criterion validity is the degree to which a test can predictively (in the future) or concurrently (in the present) measure something.
Construct validity is often considered the overarching type of measurement validity . You need to have face validity , content validity , and criterion validity in order to achieve construct validity.
Convergent validity and discriminant validity are both subtypes of construct validity . Together, they help you evaluate whether a test measures the concept it was designed to measure.
You need to assess both in order to demonstrate construct validity. Neither one alone is sufficient for establishing construct validity.
Content validity shows you how accurately a test or other measurement method taps into the various aspects of the specific construct you are researching.
In other words, it helps you answer the question: “does the test measure all aspects of the construct I want to measure?” If it does, then the test has high content validity.
The higher the content validity, the more accurate the measurement of the construct.
If the test fails to include parts of the construct, or irrelevant parts are included, the validity of the instrument is threatened, which brings your results into question.
Face validity and content validity are similar in that they both evaluate how suitable the content of a test is. The difference is that face validity is subjective, and assesses content at surface level.
When a test has strong face validity, anyone would agree that the test’s questions appear to measure what they are intended to measure.
For example, looking at a 4th grade math test consisting of problems in which students have to add and multiply, most people would agree that it has strong face validity (i.e., it looks like a math test).
On the other hand, content validity evaluates how well a test represents all the aspects of a topic. Assessing content validity is more systematic and relies on expert evaluation of each question, analyzing whether each one covers the aspects that the test was designed to cover.
A 4th grade math test would have high content validity if it covered all the skills taught in that grade. Experts (in this case, math teachers) would have to evaluate the content validity by comparing the test to the learning objectives.
Snowball sampling is a non-probability sampling method . Unlike probability sampling (which involves some form of random selection ), the initial individuals selected to be studied are the ones who recruit new participants.
Because not every member of the target population has an equal chance of being recruited into the sample, selection in snowball sampling is non-random.
Snowball sampling is a non-probability sampling method , where there is not an equal chance for every member of the population to be included in the sample .
This means that you cannot use inferential statistics and make generalizations —often the goal of quantitative research . As such, a snowball sample is not representative of the target population and is usually a better fit for qualitative research .
Snowball sampling relies on the use of referrals. Here, the researcher recruits one or more initial participants, who then recruit the next ones.
Participants share similar characteristics and/or know each other. Because of this, not every member of the population has an equal chance of being included in the sample, giving rise to sampling bias .
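The referral mechanism described above can be simulated in a few lines of Python. This is a minimal sketch with an invented contact network; the names and the `snowball_sample` helper are hypothetical, not part of any survey toolkit. It illustrates why the method is non-random: anyone outside the seed participants’ social circles can never enter the sample.

```python
import random

def snowball_sample(contacts, seeds, waves, k=2, seed=0):
    """Simulate snowball sampling: each recruited participant refers up
    to k of their contacts, for a fixed number of waves.
    `contacts` maps each person to the people they know."""
    rng = random.Random(seed)
    sampled = list(seeds)
    frontier = list(seeds)
    for _ in range(waves):
        next_frontier = []
        for person in frontier:
            referrals = [c for c in contacts.get(person, []) if c not in sampled]
            for referred in rng.sample(referrals, min(k, len(referrals))):
                sampled.append(referred)
                next_frontier.append(referred)
        frontier = next_frontier
    return sampled

# Hypothetical contact network: "eli" and "fay" only know each other,
# so starting from "ana" they can never be recruited.
contacts = {
    "ana": ["ben", "cam"], "ben": ["ana", "cam", "dia"],
    "cam": ["ana", "ben"], "dia": ["ben"],
    "eli": ["fay"], "fay": ["eli"],  # disconnected circle
}
sample = snowball_sample(contacts, seeds=["ana"], waves=3)
print(sample)  # "eli" and "fay" never appear
```

Because inclusion depends entirely on who knows whom, the resulting sample systematically excludes isolated subgroups — the sampling bias described above.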
Snowball sampling is best used in the following cases:
The reproducibility and replicability of a study can be ensured by writing a transparent, detailed method section and using clear, unambiguous language.
Reproducibility and replicability are related but distinct terms: a study is reproducible when independent researchers can obtain the same results using the same data and methods, and replicable when they reach consistent findings with newly collected data.
Stratified sampling and quota sampling both involve dividing the population into subgroups and selecting units from each subgroup. The purpose in both cases is to select a representative sample and/or to allow comparisons between subgroups.
The main difference is that in stratified sampling, you draw a random sample from each subgroup ( probability sampling ). In quota sampling you select a predetermined number or proportion of units, in a non-random manner ( non-probability sampling ).
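The stratified half of that contrast — a random draw from every subgroup — can be sketched in Python. This is an illustrative example, not a statistics-library API; the population and the `stratified_sample` helper are invented.

```python
import random

def stratified_sample(population, strata_key, n_per_stratum, seed=0):
    """Probability sampling: draw a *random* sample from each subgroup."""
    rng = random.Random(seed)
    strata = {}
    for unit in population:
        strata.setdefault(strata_key(unit), []).append(unit)
    sample = []
    for group, units in strata.items():
        # random selection within each stratum is what makes this probability sampling
        sample.extend(rng.sample(units, min(n_per_stratum, len(units))))
    return sample

# Hypothetical population of (person, department) records
population = [(f"p{i}", "sales" if i % 2 else "support") for i in range(20)]
sample = stratified_sample(population, strata_key=lambda u: u[1], n_per_stratum=3)
print(sample)  # 3 random units from "sales" and 3 from "support"
```

In quota sampling, by contrast, the selection within each subgroup would be non-random — for example, taking the first willing respondents until each quota is filled.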
Purposive and convenience sampling are both sampling methods that are typically used in qualitative data collection.
A convenience sample is drawn from a source that is conveniently accessible to the researcher. Convenience sampling does not distinguish characteristics among the participants. On the other hand, purposive sampling focuses on selecting participants possessing characteristics associated with the research study.
The findings of studies based on either convenience or purposive sampling can only be generalized to the (sub)population from which the sample is drawn, and not to the entire population.
Random sampling or probability sampling is based on random selection. This means that each unit has an equal chance (i.e., equal probability) of being included in the sample.
On the other hand, convenience sampling involves selecting whoever happens to be available — for example, stopping passers-by. Not everyone has an equal chance of being selected, since inclusion depends on the place, time, or day you are collecting your data.
Convenience sampling and quota sampling are both non-probability sampling methods. They both use non-random criteria like availability, geographical proximity, or expert knowledge to recruit study participants.
However, in convenience sampling, you continue to sample units or cases until you reach the required sample size.
In quota sampling, you first need to divide your population of interest into subgroups (strata) and estimate their proportions (quota) in the population. Then you can start your data collection, using convenience sampling to recruit participants, until the proportions in each subgroup coincide with the estimated proportions in the population.
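The quota procedure just described — accept respondents as they arrive until each subgroup’s quota is filled — can be sketched as follows. The respondent stream and the `quota_sample` helper are hypothetical, chosen only to make the loop concrete.

```python
def quota_sample(stream, quota_key, quotas):
    """Non-probability sampling: take respondents in arrival order
    (convenience) until each subgroup's quota is filled."""
    filled = {group: [] for group in quotas}
    for respondent in stream:            # e.g. people passing a mall entrance
        group = quota_key(respondent)
        if group in filled and len(filled[group]) < quotas[group]:
            filled[group].append(respondent)
        if all(len(v) == quotas[g] for g, v in filled.items()):
            break                        # every quota met; stop recruiting
    return filled

# Hypothetical target: 3 respondents under 30 and 2 aged 30+
arrivals = [("r1", "under30"), ("r2", "30plus"), ("r3", "under30"),
            ("r4", "under30"), ("r5", "under30"), ("r6", "30plus")]
sample = quota_sample(arrivals, quota_key=lambda r: r[1],
                      quotas={"under30": 3, "30plus": 2})
```

Note that once the "under30" quota is full, later under-30 arrivals (like "r5") are skipped — selection within each subgroup is first-come, not random.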
A sampling frame is a list of every member in the entire population . It is important that the sampling frame is as complete as possible, so that your sample accurately reflects your population.
Stratified and cluster sampling may look similar, but bear in mind that groups created in cluster sampling are heterogeneous , so the individual characteristics in the cluster vary. In contrast, groups created in stratified sampling are homogeneous , as units share characteristics.
Relatedly, in cluster sampling you randomly select entire groups and include all units of each group in your sample. However, in stratified sampling, you select some units of all groups and include them in your sample. In this way, both methods can ensure that your sample is representative of the target population .
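That contrast — whole groups versus some units from every group — is easy to see side by side in code. The schools and students below are invented, and both helper functions are illustrative sketches.

```python
import random

def cluster_sample(groups, n_clusters, seed=0):
    """Cluster sampling: randomly pick whole groups, keep *all* their units."""
    rng = random.Random(seed)
    chosen = rng.sample(list(groups), n_clusters)
    return [unit for g in chosen for unit in groups[g]]

def stratified_sample(groups, n_per_group, seed=0):
    """Stratified sampling: keep some randomly chosen units from *every* group."""
    rng = random.Random(seed)
    return [unit for units in groups.values()
            for unit in rng.sample(units, min(n_per_group, len(units)))]

# Hypothetical schools (groups) with their students (units)
schools = {"north": ["n1", "n2", "n3"], "south": ["s1", "s2", "s3"],
           "east": ["e1", "e2", "e3"]}
picked = cluster_sample(schools, n_clusters=1)    # all students of one school
spread = stratified_sample(schools, n_per_group=1)  # one student per school
print(picked)
print(spread)
```

Cluster sampling yields heterogeneous groups sampled whole, while stratified sampling guarantees every (homogeneous) subgroup is represented.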
A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.
The key difference between observational studies and experimental designs is that a well-done observational study does not influence the responses of participants, while experiments do have some sort of treatment condition applied to at least some participants by random assignment .
An observational study is a great choice for you if your research question is based purely on observations. If there are ethical, logistical, or practical concerns that prevent you from conducting a traditional experiment , an observational study may be a good choice. In an observational study, there is no interference or manipulation of the research subjects, as well as no control or treatment groups .
It’s often best to ask a variety of people to review your measurements. You can ask experts, such as other researchers, or laypeople, such as potential participants, to judge the face validity of tests.
While experts have a deep understanding of research methods , the people you’re studying can provide you with valuable insights you may have missed otherwise.
Face validity is important because it’s a simple first step to measuring the overall validity of a test or technique. It’s a relatively intuitive, quick, and easy way to start checking whether a new measure seems useful at first glance.
Good face validity means that anyone who reviews your measure says that it seems to be measuring what it’s supposed to. With poor face validity, someone reviewing your measure may be left confused about what you’re measuring and why you’re using this method.
Face validity is about whether a test appears to measure what it’s supposed to measure. This type of validity is concerned with whether a measure seems relevant and appropriate for what it’s assessing only on the surface.
Statistical analyses are often applied to test validity with data from your measures. You test convergent validity and discriminant validity with correlations to see if results from your test are positively or negatively related to those of other established tests.
You can also use regression analyses to assess whether your measure is actually predictive of outcomes that you expect it to predict theoretically. A regression analysis that supports your expectations strengthens your claim of construct validity .
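As a concrete illustration of testing convergent and discriminant validity with correlations, here is a minimal Python sketch. The scales and scores are entirely hypothetical, and `pearson_r` is a hand-rolled helper rather than a library call.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores: a new anxiety scale vs. an established anxiety scale
# (convergent validity: should correlate strongly) and vs. an unrelated
# extraversion scale (discriminant validity: should correlate weakly).
new_scale    = [10, 12, 15, 18, 20, 23]
established  = [11, 13, 14, 19, 21, 22]
extraversion = [14, 9, 17, 11, 16, 12]
print(round(pearson_r(new_scale, established), 2))   # close to +1
print(round(pearson_r(new_scale, extraversion), 2))  # near 0
```

A strong positive correlation with the established measure and a weak correlation with the unrelated one is the pattern that supports construct validity.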
When designing or evaluating a measure, construct validity helps you ensure you’re actually measuring the construct you’re interested in. If you don’t have construct validity, you may inadvertently measure unrelated or distinct constructs and lose precision in your research.
Construct validity is often considered the overarching type of measurement validity , because it covers all of the other types. You need to have face validity , content validity , and criterion validity to achieve construct validity.
Construct validity is about how well a test measures the concept it was designed to evaluate. It’s one of four types of measurement validity , alongside face validity , content validity , and criterion validity.
There are two subtypes of construct validity.
Naturalistic observation is a valuable tool because of its flexibility, external validity , and suitability for topics that can’t be studied in a lab setting.
The downsides of naturalistic observation include its lack of scientific control , ethical considerations , and potential for bias from observers and subjects.
Naturalistic observation is a qualitative research method where you record the behaviors of your research subjects in real world settings. You avoid interfering or influencing anything in a naturalistic observation.
You can think of naturalistic observation as “people watching” with a purpose.
A dependent variable is what changes as a result of the independent variable manipulation in experiments . It’s what you’re interested in measuring, and it “depends” on your independent variable.
In statistics, dependent variables are also called:
An independent variable is the variable you manipulate, control, or vary in an experimental study to explore its effects. It’s called “independent” because it’s not influenced by any other variables in the study.
Independent variables are also called:
As a rule of thumb, questions related to thoughts, beliefs, and feelings work well in focus groups. Take your time formulating strong questions, paying special attention to phrasing. Be careful to avoid leading questions , which can bias your responses.
Overall, your focus group questions should be:
A structured interview is a data collection method that relies on asking questions in a set order to collect data on a topic. It is often quantitative in nature. Structured interviews are best used when:
More flexible interview options include semi-structured interviews , unstructured interviews , and focus groups .
Social desirability bias is the tendency for interview participants to give responses that will be viewed favorably by the interviewer or other participants. It occurs in all types of interviews and surveys , but is most common in semi-structured interviews , unstructured interviews , and focus groups .
Social desirability bias can be mitigated by ensuring participants feel at ease and comfortable sharing their views. Make sure to pay attention to your own body language and any physical or verbal cues, such as nodding or widening your eyes.
This type of bias can also occur in observations if the participants know they’re being observed. They might alter their behavior accordingly.
The interviewer effect is a type of bias that emerges when a characteristic of an interviewer (race, age, gender identity, etc.) influences the responses given by the interviewee.
There is a risk of an interviewer effect in all types of interviews , but it can be mitigated by carefully writing high-quality interview questions.
A semi-structured interview is a blend of structured and unstructured types of interviews. Semi-structured interviews are best used when:
An unstructured interview is the most flexible type of interview, but it is not always the best fit for your research topic.
Unstructured interviews are best used when:
The four most common types of interviews are:
Deductive reasoning is commonly used in scientific research, and it’s especially associated with quantitative research .
In research, you might have come across something called the hypothetico-deductive method . It’s the scientific method of testing hypotheses to check whether your predictions are substantiated by real-world data.
Deductive reasoning is a logical approach where you progress from general ideas to specific conclusions. It’s often contrasted with inductive reasoning , where you start with specific observations and form general conclusions.
Deductive reasoning is also called deductive logic.
There are many different types of inductive reasoning that people use formally or informally.
Here are a few common types:
Inductive reasoning is a bottom-up approach, while deductive reasoning is top-down.
Inductive reasoning takes you from the specific to the general, while in deductive reasoning, you make inferences by going from general premises to specific conclusions.
In inductive research , you start by making observations or gathering data. Then, you take a broad scan of your data and search for patterns. Finally, you make general conclusions that you might incorporate into theories.
Inductive reasoning is a method of drawing conclusions by going from the specific to the general. It’s usually contrasted with deductive reasoning, where you proceed from general information to specific conclusions.
Inductive reasoning is also called inductive logic or bottom-up reasoning.
A hypothesis states your predictions about what your research will find. It is a tentative answer to your research question that has not yet been tested. For some research projects, you might have to write several hypotheses that address different aspects of your research question.
A hypothesis is not just a guess — it should be based on existing theories and knowledge. It also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations and statistical analysis of data).
Triangulation can help:
But triangulation can also pose problems:
There are four main types of triangulation :
Many academic fields use peer review , largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the published manuscript.
However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure.
Peer assessment is often used in the classroom as a pedagogical tool. Both receiving feedback and providing it are thought to enhance the learning process, helping students think critically and collaboratively.
Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. It also represents an excellent opportunity to get feedback from renowned experts in your field. It acts as a first defense, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process.
Peer-reviewed articles are considered a highly credible source due to the stringent review process they go through before publication.
In general, the peer review process follows the following steps:
Exploratory research is often used when the issue you’re studying is new or when the data collection process is challenging for some reason.
You can use exploratory research if you have a general idea or a specific question that you want to study but there is no preexisting knowledge or paradigm with which to study it.
Exploratory research is a methodology approach that explores research questions that have not previously been studied in depth. It is often used when the issue you’re studying is new, or the data collection process is challenging in some way.
Exploratory research is often one of the first stages in the research process, serving as a jumping-off point for future research into how or why a phenomenon occurs.
Exploratory research aims to explore the main aspects of an under-researched problem, while explanatory research aims to explain the causes and consequences of a well-defined problem.
Explanatory research is a research method used to investigate how or why something occurs when only a small amount of information is available pertaining to that topic. It can help you increase your understanding of a given topic.
Clean data are valid, accurate, complete, consistent, unique, and uniform. Dirty data include inconsistencies and errors.
Dirty data can come from any part of the research process, including poor research design , inappropriate measurement materials, or flawed data entry.
Data cleaning takes place between data collection and data analyses. But you can use some methods even before collecting data.
For clean data, you should start by designing measures that collect valid data. Data validation at the time of data entry or collection helps you minimize the amount of data cleaning you’ll need to do.
After data collection, you can use data standardization and data transformation to clean your data. You’ll also deal with any missing values, outliers, and duplicate values.
Every dataset requires different techniques to clean dirty data , but you need to address these issues in a systematic way. You focus on finding and resolving data points that don’t agree or fit with the rest of your dataset.
These data might be missing values, outliers, duplicate values, incorrectly formatted, or irrelevant. You’ll start with screening and diagnosing your data. Then, you’ll often standardize and accept or remove data to make your dataset consistent and valid.
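A minimal data-cleaning pass along those lines might look like the Python sketch below. The record shape, field names, and issue labels are all invented for illustration; real pipelines typically use a library such as pandas.

```python
def clean(records):
    """Screen, standardize, and flag survey rows shaped like
    {"id": ..., "age": ..., "country": ...}."""
    seen, cleaned, issues = set(), [], []
    for row in records:
        if row["id"] in seen:                  # duplicate value
            issues.append(("duplicate", row["id"]))
            continue
        seen.add(row["id"])
        row = dict(row)
        row["country"] = (row.get("country") or "").strip().upper()  # standardize
        age = row.get("age")
        if age is None:                        # missing value
            issues.append(("missing_age", row["id"]))
        elif not 0 <= age <= 120:              # implausible outlier
            issues.append(("age_out_of_range", row["id"]))
            row["age"] = None                  # blank it for later review
        cleaned.append(row)
    return cleaned, issues

raw = [{"id": 1, "age": 34, "country": " us "},
       {"id": 1, "age": 34, "country": "US"},   # duplicate row
       {"id": 2, "age": 340, "country": "de"},  # data-entry typo
       {"id": 3, "age": None, "country": "FR"}]
cleaned, issues = clean(raw)
```

Each flagged issue is recorded rather than silently discarded, so you can decide systematically whether to correct, impute, or remove the affected data points.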
Data cleaning is necessary for valid and appropriate analyses. Dirty data contain inconsistencies or errors , but cleaning your data helps you minimize or resolve these.
Without data cleaning, you could end up with a Type I or II error in your conclusion. These erroneous conclusions can have serious practical consequences, leading to misplaced investments or missed opportunities.
Data cleaning involves spotting and resolving potential data inconsistencies or errors to improve your data quality. An error is any value (e.g., recorded weight) that doesn’t reflect the true value (e.g., actual weight) of something that’s being measured.
In this process, you review, analyze, detect, modify, or remove “dirty” data to make your dataset “clean.” Data cleaning is also called data cleansing or data scrubbing.
Research misconduct means making up or falsifying data, manipulating data analyses, or misrepresenting results in research reports. It’s a form of academic fraud.
These actions are committed intentionally and can have serious consequences; research misconduct is not a simple mistake or a point of disagreement but a serious ethical failure.
Anonymity means you don’t know who the participants are, while confidentiality means you know who they are but remove identifying information from your research report. Both are important ethical considerations .
You can only guarantee anonymity by not collecting any personally identifying information—for example, names, phone numbers, email addresses, IP addresses, physical characteristics, photos, or videos.
You can keep data confidential by using aggregate information in your research report, so that you only refer to groups of participants rather than individuals.
Research ethics matter for scientific integrity, human rights and dignity, and collaboration between science and society. These principles make sure that participation in studies is voluntary, informed, and safe.
Ethical considerations in research are a set of principles that guide your research designs and practices. These principles include voluntary participation, informed consent, anonymity, confidentiality, potential for harm, and results communication.
Scientists and researchers must always adhere to a certain code of conduct when collecting data from others .
These considerations protect the rights of research participants, enhance research validity , and maintain scientific integrity.
In multistage sampling , you can use probability or non-probability sampling methods .
For a probability sample, you have to conduct probability sampling at every stage.
You can mix it up by using simple random sampling , systematic sampling , or stratified sampling to select units at different stages, depending on what is applicable and relevant to your study.
Multistage sampling can simplify data collection when you have large, geographically spread samples, and you can obtain a probability sample without a complete sampling frame.
But multistage sampling may not lead to a representative sample, and larger samples are needed for multistage samples to achieve the statistical properties of simple random samples .
These are four of the most common mixed methods designs :
Triangulation in research means using multiple datasets, methods, theories and/or investigators to address a research question. It’s a research strategy that can help you enhance the validity and credibility of your findings.
Triangulation is mainly used in qualitative research , but it’s also commonly applied in quantitative research . Mixed methods research always uses triangulation.
In multistage sampling , or multistage cluster sampling, you draw a sample from a population using smaller and smaller groups at each stage.
This method is often used to collect data from a large, geographically spread group of people in national surveys, for example. You take advantage of hierarchical groupings (e.g., from state to city to neighborhood) to create a sample that’s less expensive and time-consuming to collect data from.
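The state → city → resident drilling-down can be sketched directly in Python. The hierarchy below is invented and the helper is illustrative only; it draws randomly at every stage, which is what makes the result a probability sample.

```python
import random

def multistage_sample(states, n_states, n_cities, n_people, seed=0):
    """Multistage cluster sampling: randomly pick states, then cities
    within those states, then people within those cities."""
    rng = random.Random(seed)
    sample = []
    for state in rng.sample(list(states), n_states):
        cities = states[state]
        for city in rng.sample(list(cities), min(n_cities, len(cities))):
            people = cities[city]
            sample.extend(rng.sample(people, min(n_people, len(people))))
    return sample

# Hypothetical hierarchy: state -> city -> residents
states = {
    "A": {"a1": ["p1", "p2", "p3"], "a2": ["p4", "p5"]},
    "B": {"b1": ["p6", "p7"], "b2": ["p8", "p9", "p10"]},
}
sample = multistage_sample(states, n_states=1, n_cities=1, n_people=2)
print(sample)
```

Note that only the residents of the selected cities ever need to be listed, which is why multistage sampling works without a complete sampling frame for the whole population.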
No, the steepness or slope of the line isn’t related to the correlation coefficient value. The correlation coefficient only tells you how closely your data fit on a line, so two datasets with the same correlation coefficient can have very different slopes.
To find the slope of the line, you’ll need to perform a regression analysis .
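The independence of r from slope is easy to demonstrate numerically. The sketch below uses a hand-rolled helper (not a library API) on two invented datasets that fit their lines equally perfectly but with very different slopes.

```python
from math import sqrt

def r_and_slope(xs, ys):
    """Pearson's r and the least-squares slope for paired data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y), cov / var_x

xs = [1, 2, 3, 4, 5]
shallow = [x * 0.5 for x in xs]   # gentle slope
steep   = [x * 10 for x in xs]    # steep slope
print(r_and_slope(xs, shallow))   # r = 1.0, slope = 0.5
print(r_and_slope(xs, steep))     # r = 1.0, slope = 10.0
```

Both datasets lie exactly on a line, so both have r = 1.0, even though their regression slopes differ by a factor of twenty.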
Correlation coefficients always range between -1 and 1.
The sign of the coefficient tells you the direction of the relationship: a positive value means the variables change together in the same direction, while a negative value means they change together in opposite directions.
The absolute value of a number is equal to the number without its sign. The absolute value of a correlation coefficient tells you the magnitude of the correlation: the greater the absolute value, the stronger the correlation.
These are the assumptions your data must meet if you want to use Pearson’s r :
Quantitative research designs can be divided into two main categories:
Qualitative research designs tend to be more flexible. Common types of qualitative design include case study , ethnography , and grounded theory designs.
A well-planned research design helps ensure that your methods match your research aims, that you collect high-quality data, and that you use the right kind of analysis to answer your questions, utilizing credible sources . This allows you to draw valid , trustworthy conclusions.
The priorities of a research design can vary depending on the field, but you usually have to specify:
A research design is a strategy for answering your research question . It defines your overall approach and determines how you will collect and analyze data.
Questionnaires can be self-administered or researcher-administered.
Self-administered questionnaires can be delivered online or in paper-and-pen formats, in person or through mail. All questions are standardized so that all respondents receive the same questions with identical wording.
Researcher-administered questionnaires are interviews that take place by phone, in-person, or online between researchers and respondents. You can gain deeper insights by clarifying questions for respondents or asking follow-up questions.
You can organize the questions logically, with a clear progression from simple to complex, or randomly between respondents. A logical flow helps respondents process the questionnaire more easily and quickly, but it may introduce bias. Randomization can minimize the bias from order effects.
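Per-respondent randomization of question order is simple to implement. Here is a minimal Python sketch; the question texts and the `questionnaire_for` helper are invented for illustration.

```python
import random

questions = ["Q1: age group", "Q2: satisfaction", "Q3: would recommend"]

def questionnaire_for(respondent_id, randomize=True):
    """Return the question order shown to one respondent.
    Seeding with the respondent ID keeps each person's order
    reproducible while spreading order effects across the sample."""
    order = list(questions)
    if randomize:
        random.Random(respondent_id).shuffle(order)
    return order

print(questionnaire_for("r1"))
print(questionnaire_for("r2"))  # likely a different order
```

Every respondent still sees the same standardized questions; only the order varies, which averages out any bias a fixed sequence would introduce.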
Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. These questions are easier to answer quickly.
Open-ended or long-form questions allow respondents to answer in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered.
A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analyzing data from people using questionnaires.
The third variable and directionality problems are two main reasons why correlation isn’t causation .
The third variable problem means that a confounding variable affects both variables to make them seem causally related when they are not.
The directionality problem is when two variables correlate and might actually have a causal relationship, but it’s impossible to conclude which variable causes changes in the other.
Correlation describes an association between variables : when one variable changes, so does the other. A correlation is a statistical indicator of the relationship between variables.
Causation means that changes in one variable bring about changes in the other (i.e., there is a cause-and-effect relationship between variables). The two variables are correlated with each other, and there’s also a causal link between them.
While causation and correlation can exist simultaneously, correlation does not imply causation. In other words, correlation is simply a relationship where A relates to B—but A doesn’t necessarily cause B to happen (or vice versa). Mistaking correlation for causation is a common error and can lead to false cause fallacy .
Controlled experiments establish causality, whereas correlational studies only show associations between variables.
In general, correlational research is high in external validity while experimental research is high in internal validity .
A correlation is usually tested for two variables at a time, but you can test correlations between three or more variables.
A correlation coefficient is a single number that describes the strength and direction of the relationship between your variables.
Different types of correlation coefficients might be appropriate for your data based on their levels of measurement and distributions . The Pearson product-moment correlation coefficient (Pearson’s r ) is commonly used to assess a linear relationship between two quantitative variables.
A correlational research design investigates relationships between two variables (or more) without the researcher controlling or manipulating any of them. It’s a non-experimental type of quantitative research .
A correlation reflects the strength and/or direction of the association between two or more variables.
Random error is almost always present in scientific studies, even in highly controlled settings. While you can’t eradicate it completely, you can reduce random error by taking repeated measurements, using a large sample, and controlling extraneous variables .
You can avoid systematic error through careful design of your sampling , data collection , and analysis procedures. For example, use triangulation to measure your variables using multiple methods; regularly calibrate instruments or procedures; use random sampling and random assignment ; and apply masking (blinding) where possible.
Systematic error is generally a bigger problem in research.
With random error, multiple measurements will tend to cluster around the true value. When you’re collecting data from a large sample , the errors in different directions will cancel each other out.
Systematic errors are much more problematic because they can skew your data away from the true value. This can lead you to false conclusions ( Type I and II errors ) about the relationship between the variables you’re studying.
Random and systematic error are two types of measurement error.
Random error is a chance difference between the observed and true values of something (e.g., a researcher misreading a weighing scale records an incorrect measurement).
Systematic error is a consistent or proportional difference between the observed and true values of something (e.g., a miscalibrated scale consistently records weights as higher than they actually are).
On graphs, the explanatory variable is conventionally placed on the x-axis, while the response variable is placed on the y-axis.
The term “ explanatory variable ” is sometimes preferred over “ independent variable ” because, in real world contexts, independent variables are often influenced by other variables. This means they aren’t totally independent.
Multiple independent variables may also be correlated with each other, so “explanatory variables” is a more appropriate term.
The difference between explanatory and response variables is simple: an explanatory variable is the expected cause (it explains the results), while a response variable is the expected effect (it responds to changes in the explanatory variable).
In a controlled experiment , all extraneous variables are held constant so that they can’t influence the results. Controlled experiments require:
Depending on your study topic, there are various other methods of controlling variables .
There are 4 main types of extraneous variables :
An extraneous variable is any variable that you’re not investigating that can potentially affect the dependent variable of your research study.
A confounding variable is a type of extraneous variable that not only affects the dependent variable, but is also related to the independent variable.
In a factorial design, multiple independent variables are tested.
If you test two variables, each level of one independent variable is combined with each level of the other independent variable to create different conditions.
Within-subjects designs have many potential threats to internal validity , but they are also very statistically powerful .
While a between-subjects design has fewer threats to internal validity , it also requires more participants for high statistical power than a within-subjects design .
Yes. Between-subjects and within-subjects designs can be combined in a single study when you have two or more independent variables (a factorial design). In a mixed factorial design, one variable is altered between subjects and another is altered within subjects.
In a between-subjects design , every participant experiences only one condition, and researchers assess group differences between participants in various conditions.
In a within-subjects design , each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.
The word “between” means that you’re comparing different conditions between groups, while the word “within” means you’re comparing different conditions within the same group.
Random assignment is used in experiments with a between-groups or independent measures design. In this research design, there’s usually a control group and one or more experimental groups. Random assignment helps ensure that the groups are comparable.
In general, you should always use random assignment in this type of experimental design when it is ethically possible and makes sense for your study topic.
To implement random assignment , assign a unique number to every member of your study’s sample .
Then, you can use a random number generator or a lottery method to randomly assign each number to a control or experimental group. You can also do so manually, by flipping a coin or rolling a die to randomly assign participants to groups.
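The lottery method above can be sketched in a few lines; the participant numbers and group sizes here are hypothetical:

```python
import random

def randomly_assign(participant_ids, seed=None):
    """Randomly split participant IDs into a control and an experimental group."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)  # the shuffle is the "lottery"
    half = len(ids) // 2
    return {"control": ids[:half], "experimental": ids[half:]}

groups = randomly_assign(range(1, 21), seed=42)
print(groups["control"])       # 10 randomly chosen participant numbers
print(groups["experimental"])  # the remaining 10
```

Fixing the seed makes the assignment reproducible for auditing, while still being random with respect to participant characteristics.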
Random selection, or random sampling , is a way of selecting members of a population for your study’s sample.
In contrast, random assignment is a way of sorting the sample into control and experimental groups.
Random sampling enhances the external validity or generalizability of your results, while random assignment improves the internal validity of your study.
In experimental research, random assignment is a way of placing participants from your sample into different groups using randomization. With this method, every member of the sample has a known or equal chance of being placed in a control group or an experimental group.
“Controlling for a variable” means measuring extraneous variables and accounting for them statistically to remove their effects on other variables.
Researchers often model control variable data along with independent and dependent variable data in regression analyses and ANCOVAs . That way, you can isolate the control variable’s effects from the relationship between the variables of interest.
Control variables help you establish a correlational or causal relationship between variables by enhancing internal validity .
If you don’t control relevant extraneous variables , they may influence the outcomes of your study, and you may not be able to demonstrate that your results are really an effect of your independent variable .
A control variable is any variable that’s held constant in a research study. It’s not a variable of interest in the study, but it’s controlled because it could influence the outcomes.
Including mediators and moderators in your research helps you go beyond studying a simple relationship between two variables for a fuller picture of the real world. They are important to consider when studying complex correlational or causal relationships.
Mediators are part of the causal pathway of an effect, and they tell you how or why an effect takes place. Moderators usually help you judge the external validity of your study by identifying the limitations of when the relationship between variables holds.
If something is a mediating variable :
A confounder is a third variable that affects variables of interest and makes them seem related when they are not. In contrast, a mediator is the mechanism of a relationship between two variables: it explains the process by which they are related.
A mediator variable explains the process through which two variables are related, while a moderator variable affects the strength and direction of that relationship.
There are three key steps in systematic sampling : define and order your population (your sampling frame), decide on your sample size and calculate the sampling interval, and choose a random starting point before selecting every member at that interval.
Systematic sampling is a probability sampling method where researchers select members of the population at a regular interval – for example, by selecting every 15th person on a list of the population. If the population is in a random order, this can imitate the benefits of simple random sampling .
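As an illustrative sketch (the list of 90 people is hypothetical), selecting every 15th person reduces to a list slice with a random starting offset:

```python
import random

def systematic_sample(population, k, random_start=True, seed=None):
    """Select every k-th member of the population, starting at a random offset."""
    start = random.Random(seed).randrange(k) if random_start else 0
    return population[start::k]

people = [f"person_{i}" for i in range(1, 91)]  # hypothetical list of 90 people
sample = systematic_sample(people, k=15, seed=1)
print(len(sample))  # 90 / 15 = 6 people, whatever the random offset
```

The random starting point matters: always starting at the first person would make the first k − 1 members impossible to select.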
Yes, you can create a stratified sample using multiple characteristics, but you must ensure that every participant in your study belongs to one and only one subgroup. In this case, you multiply the numbers of subgroups for each characteristic to get the total number of groups.
For example, if you were stratifying by location with three subgroups (urban, rural, or suburban) and marital status with five subgroups (single, divorced, widowed, married, or partnered), you would have 3 x 5 = 15 subgroups.
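The subgroup arithmetic can be checked by enumerating the combinations; the category labels mirror the example above:

```python
from itertools import product

locations = ["urban", "rural", "suburban"]
marital_statuses = ["single", "divorced", "widowed", "married", "partnered"]

# Each (location, marital status) pair is one mutually exclusive subgroup
strata = list(product(locations, marital_statuses))
print(len(strata))  # 3 x 5 = 15 subgroups
print(strata[0])    # ('urban', 'single')
```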
You should use stratified sampling when your sample can be divided into mutually exclusive and exhaustive subgroups that you believe will take on different mean values for the variable that you’re studying.
Using stratified sampling will allow you to obtain more precise (with lower variance ) statistical estimates of whatever you are trying to measure.
For example, say you want to investigate how income differs based on educational attainment, but you know that this relationship can vary based on race. Using stratified sampling, you can ensure you obtain a large enough sample from each racial group, allowing you to draw more precise conclusions.
In stratified sampling , researchers divide subjects into subgroups called strata based on characteristics that they share (e.g., race, gender, educational attainment).
Once divided, each subgroup is randomly sampled using another probability sampling method.
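A minimal sketch of this two-step procedure, assuming a hypothetical population tagged with educational attainment and a fixed sample size per stratum:

```python
import random
from collections import defaultdict

def stratified_sample(population, stratum_of, n_per_stratum, seed=None):
    """Divide the population into strata, then draw a simple random sample from each."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for member in population:
        strata[stratum_of(member)].append(member)
    return {s: rng.sample(members, min(n_per_stratum, len(members)))
            for s, members in strata.items()}

# Hypothetical population: (name, educational attainment) pairs
people = [(f"p{i}", level) for i, level in enumerate(["HS", "BA", "MA"] * 20)]
sample = stratified_sample(people, stratum_of=lambda p: p[1], n_per_stratum=5, seed=7)
print({s: len(members) for s, members in sample.items()})  # 5 from each stratum
```

In practice the per-stratum sample sizes are often proportional to stratum sizes rather than fixed, but the divide-then-sample structure is the same.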
Cluster sampling is more time- and cost-efficient than other probability sampling methods , particularly when it comes to large samples spread across a wide geographical area.
However, it provides less statistical certainty than other methods, such as simple random sampling , because it is difficult to ensure that your clusters properly represent the population as a whole.
There are three types of cluster sampling : single-stage, double-stage and multi-stage clustering. In all three types, you first divide the population into clusters, then randomly select clusters for use in your sample.
Cluster sampling is a probability sampling method in which you divide a population into clusters, such as districts or schools, and then randomly select some of these clusters as your sample.
The clusters should ideally each be mini-representations of the population as a whole.
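A single-stage version can be sketched as follows; the school and student names are hypothetical:

```python
import random

def single_stage_cluster_sample(clusters, n_clusters, seed=None):
    """Randomly select whole clusters and take every member of each chosen cluster."""
    rng = random.Random(seed)
    chosen = rng.sample(list(clusters), n_clusters)  # sample cluster names
    return [member for name in chosen for member in clusters[name]]

# Hypothetical clusters: schools and their students
schools = {f"school_{i}": [f"s{i}_{j}" for j in range(30)] for i in range(10)}
sample = single_stage_cluster_sample(schools, n_clusters=3, seed=3)
print(len(sample))  # 3 clusters x 30 students = 90
```

In double- or multi-stage clustering you would additionally subsample within each chosen cluster instead of taking every member.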
If properly implemented, simple random sampling is usually the best sampling method for ensuring both internal and external validity . However, it can sometimes be impractical and expensive to implement, depending on the size of the population to be studied.
If you have a list of every member of the population and the ability to reach whichever members are selected, you can use simple random sampling.
The American Community Survey is an example of simple random sampling . In order to collect detailed data on the population of the US, the Census Bureau officials randomly select 3.5 million households per year and use a variety of methods to convince them to fill out the survey.
Simple random sampling is a type of probability sampling in which the researcher randomly selects a subset of participants from a population . Each member of the population has an equal chance of being selected. Data is then collected from as large a percentage as possible of this random subset.
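In code, simple random sampling without replacement is essentially a one-liner; the sampling frame below is hypothetical:

```python
import random

# Hypothetical sampling frame: a complete list of the population
population = [f"household_{i}" for i in range(1, 10001)]

rng = random.Random(2024)
sample = rng.sample(population, k=350)  # each household has an equal chance

print(len(sample))       # 350
print(len(set(sample)))  # 350 — sampling is without replacement
```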
Quasi-experimental design is most useful in situations where it would be unethical or impractical to run a true experiment .
Quasi-experiments have lower internal validity than true experiments, but they often have higher external validity as they can use real-world interventions instead of artificial laboratory settings.
A quasi-experiment is a type of research design that attempts to establish a cause-and-effect relationship. The main difference with a true experiment is that the groups are not randomly assigned.
Blinding is important to reduce research bias (e.g., observer bias , demand characteristics ) and ensure a study’s internal validity .
If participants know whether they are in a control or treatment group , they may adjust their behavior in ways that affect the outcome that researchers are trying to measure. If the people administering the treatment are aware of group assignment, they may treat participants differently and thus directly or indirectly influence the final results.
Blinding means hiding who is assigned to the treatment group and who is assigned to the control group in an experiment .
A true experiment (a.k.a. a controlled experiment) always includes at least one control group that doesn’t receive the experimental treatment.
However, some experiments use a within-subjects design to test treatments without a control group. In these designs, you usually compare one group’s outcomes before and after a treatment (instead of comparing outcomes between different groups).
For strong internal validity , it’s usually best to include a control group if possible. Without a control group, it’s harder to be certain that the outcome was caused by the experimental treatment and not by other variables.
An experimental group, also known as a treatment group, receives the treatment whose effect researchers wish to study, whereas a control group does not. They should be identical in all other ways.
Individual Likert-type questions are generally considered ordinal data , because the items have clear rank order, but don’t have an even distribution.
Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.
The type of data determines what statistical tests you should use to analyze your data.
A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviors. It is made up of 4 or more questions that measure a single attitude or trait when response scores are combined.
To use a Likert scale in a survey , you present participants with Likert-type questions or statements, and a continuum of items, usually with 5 or 7 possible responses, to capture their degree of agreement.
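Combining item scores into an overall scale score can be sketched as follows; the 4-item, 5-point scale and the reverse-coded item are hypothetical, and reverse-coding negatively worded items is a common (but study-specific) choice:

```python
def likert_scale_score(responses, reverse_items=(), points=5):
    """Combine Likert item responses (1..points) into one overall scale score.

    Items whose indices appear in reverse_items are reverse-coded before summing.
    """
    total = 0
    for i, r in enumerate(responses):
        total += (points + 1 - r) if i in reverse_items else r
    return total

# Hypothetical 4-item, 5-point scale; item at index 2 is negatively worded
answers = [4, 5, 2, 4]
print(likert_scale_score(answers, reverse_items={2}))  # 4 + 5 + (6 - 2) + 4 = 17
```

It is this summed score, not the individual ordinal items, that is sometimes treated as interval data.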
In scientific research, concepts are the abstract ideas or phenomena that are being studied (e.g., educational achievement). Variables are properties or characteristics of the concept (e.g., performance at school), while indicators are ways of measuring or quantifying variables (e.g., yearly grade reports).
The process of turning abstract concepts into measurable variables and indicators is called operationalization .
There are various approaches to qualitative data analysis , but they all share five steps in common:
The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis , thematic analysis , and discourse analysis .
There are five common approaches to qualitative research :
Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses , by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.
Operationalization means turning abstract conceptual ideas into measurable observations.
For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioral avoidance of crowded places, or physical anxiety symptoms in social situations.
Before collecting data , it’s important to consider how you will operationalize the variables that you want to measure.
When conducting research, collecting original data has significant advantages:
However, there are also some drawbacks: data collection can be time-consuming, labor-intensive and expensive. In some cases, it’s more efficient to use secondary data that has already been collected by someone else, but the data might be less reliable.
Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.
There are several methods you can use to decrease the impact of confounding variables on your research: restriction, matching, statistical control and randomization.
In restriction , you restrict your sample by only including certain subjects that have the same values of potential confounding variables.
In matching , you match each of the subjects in your treatment group with a counterpart in the comparison group. The matched subjects have the same values on any potential confounding variables, and only differ in the independent variable .
In statistical control , you include potential confounders as variables in your regression .
In randomization , you randomly assign the treatment (or independent variable) in your study to a sufficiently large number of subjects, which allows you to control for all potential confounding variables.
A confounding variable is closely related to both the independent and dependent variables in a study. An independent variable represents the supposed cause , while the dependent variable is the supposed effect . A confounding variable is a third variable that influences both the independent and dependent variables.
Failing to account for confounding variables can cause you to wrongly estimate the relationship between your independent and dependent variables.
To ensure the internal validity of your research, you must consider the impact of confounding variables. If you fail to account for them, you might over- or underestimate the causal relationship between your independent and dependent variables , or even find a causal relationship where none exists.
Yes, but including more than one of either type requires multiple research questions .
For example, if you are interested in the effect of a diet on health, you can use multiple measures of health: blood sugar, blood pressure, weight, pulse, and many more. Each of these is its own dependent variable with its own research question.
You could also choose to look at the effect of exercise levels as well as diet, or even the additional effect of the two combined. Each of these is a separate independent variable .
To ensure the internal validity of an experiment , you should only change one independent variable at a time.
No. The value of a dependent variable depends on an independent variable, so a variable cannot be both independent and dependent at the same time. It must be either the cause or the effect, not both!
You want to find out how blood sugar levels are affected by drinking diet soda and regular soda, so you conduct an experiment .
Determining cause and effect is one of the most important parts of scientific research. It’s essential to know which is the cause – the independent variable – and which is the effect – the dependent variable.
In non-probability sampling , the sample is selected based on non-random criteria, and not every member of the population has a chance of being included.
Common non-probability sampling methods include convenience sampling , voluntary response sampling, purposive sampling , snowball sampling, and quota sampling .
Probability sampling means that every member of the target population has a known chance of being included in the sample.
Probability sampling methods include simple random sampling , systematic sampling , stratified sampling , and cluster sampling .
Using careful research design and sampling procedures can help you avoid sampling bias . Oversampling can be used to correct undercoverage bias .
Some common types of sampling bias include self-selection bias , nonresponse bias , undercoverage bias , survivorship bias , pre-screening or advertising bias, and healthy user bias.
Sampling bias is a threat to external validity – it limits the generalizability of your findings to a broader group of people.
A sampling error is the difference between a population parameter and a sample statistic .
A statistic refers to measures about the sample , while a parameter refers to measures about the population .
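The distinction can be illustrated with a simulated population; the incomes below are randomly generated, not real data:

```python
import random
from statistics import mean

# Hypothetical population of 10,000 incomes
rng = random.Random(0)
population = [rng.gauss(50_000, 8_000) for _ in range(10_000)]

parameter = mean(population)                    # population mean (a parameter)
statistic = mean(rng.sample(population, 500))   # sample mean (a statistic)

sampling_error = statistic - parameter
print(round(parameter), round(statistic))  # close, but not identical
```

The gap between the two numbers is the sampling error; it shrinks, on average, as the sample size grows.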
Populations are used when a research question requires data from every member of the population. This is usually only feasible when the population is small and easily accessible.
Samples are used to make inferences about populations . Samples are easier to collect data from because they are practical, cost-effective, convenient, and manageable.
There are seven threats to external validity : selection bias , history, experimenter effect, Hawthorne effect , testing effect, aptitude-treatment and situation effect.
The two types of external validity are population validity (whether you can generalize to other groups of people) and ecological validity (whether you can generalize to other situations and settings).
The external validity of a study is the extent to which you can generalize your findings to different groups of people, situations, and measures.
Cross-sectional studies cannot establish a cause-and-effect relationship or analyze behavior over a period of time. To investigate cause and effect, you need to do a longitudinal study or an experimental study .
Cross-sectional studies are less expensive and time-consuming than many other types of study. They can provide useful insights into a population’s characteristics and identify correlations for further research.
Sometimes only cross-sectional data is available for analysis; other times your research question may only require a cross-sectional study to answer it.
Longitudinal studies can last anywhere from weeks to decades, although they tend to be at least a year long.
The 1970 British Cohort Study , which has collected data on the lives of 17,000 Brits since their births in 1970, is one well-known example of a longitudinal study .
Longitudinal studies are better to establish the correct sequence of events, identify changes over time, and provide insight into cause-and-effect relationships, but they also tend to be more expensive and time-consuming than other types of studies.
Longitudinal studies and cross-sectional studies are two different types of research design . In a cross-sectional study you collect data from a population at a specific point in time; in a longitudinal study you repeatedly collect data from the same sample over an extended period of time.
Longitudinal study | Cross-sectional study |
---|---|
Repeated observations over time | Observations at a single point in time |
Observes the same group multiple times | Observes different groups (a “cross-section”) in the population |
Follows changes in participants over time | Provides a snapshot of society at a given point |
There are eight threats to internal validity : history, maturation, instrumentation, testing, selection bias , regression to the mean, social interaction and attrition .
Internal validity is the extent to which you can be confident that a cause-and-effect relationship established in a study cannot be explained by other factors.
In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .
The research methods you use depend on the type of data you need to answer your research question .
A confounding variable , also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.
A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.
In your research design , it’s important to identify potential confounding variables and plan how you will reduce their impact.
Discrete and continuous variables are two types of quantitative variables : discrete variables represent counts (e.g. the number of objects in a collection), while continuous variables represent measurable amounts (e.g. water volume or weight).
Quantitative variables are any variables where the data represent amounts (e.g. height, weight, or age).
Categorical variables are any variables where the data represent groups. This includes rankings (e.g. finishing places in a race), classifications (e.g. brands of cereal), and binary outcomes (e.g. coin flips).
You need to know what type of variables you are working with to choose the right statistical test for your data and interpret your results .
You can think of independent and dependent variables in terms of cause and effect: an independent variable is the variable you think is the cause , while a dependent variable is the effect .
In an experiment, you manipulate the independent variable and measure the outcome in the dependent variable. For example, in an experiment about the effect of nutrients on crop growth:
Defining your variables, and deciding how you will manipulate and measure them, is an important part of experimental design .
Experimental design means planning a set of procedures to investigate a relationship between variables . To design a controlled experiment, you need:
When designing the experiment, you decide:
Experimental design is essential to the internal and external validity of your experiment.
Internal validity is the degree of confidence that the causal relationship you are testing is not influenced by other factors or variables .
External validity is the extent to which your results can be generalized to other contexts.
The validity of your experiment depends on your experimental design .
Reliability and validity are both about how well a method measures something:
If you are doing experimental research, you also have to consider the internal and external validity of your experiment.
A sample is a subset of individuals from a larger population . Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.
In statistics, sampling allows you to test a hypothesis about the characteristics of a population.
Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.
Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.
Management Solutions and Stabilization of a Pre-Existing Concealed Goaf Underneath an Open-Pit Slope

2. Engineering Background
2.1. Mine Geological Condition
2.2. Mine Production Status
2.3. Goaf Distribution
3. Goaf Group Risk Classification
3.1. Methodology
3.2. Evaluation of Quantitative Grading Standards
3.3. Computation for Target Approaching
4. Key Parameters of the PCO-Goaf Treatment Scheme
4.1. Safe Thickness of the Overlying Rock in PCO-Goafs
4.2. Backfill Strength
5. Analysis of Slope Stability
5.1. Basic Parameters of the Model
5.2. Setting of the Blasting Vibration Conditions
5.3. Calculation of the Safety Factor for the Slope
6. Conclusions
Code | Evaluation Index | 1# | 2# | 3# | 4# | 5# |
---|---|---|---|---|---|---|
Q1 | Height/m | 3.8 | 5.5 | 4.1 | 6.3 | 5.5 |
Q2 | Ratio of pillar width to height | 6.5 | 205.6 | 16.0 | 158.4 | 4.5 |
Q3 | Pillar area/m² | 25 | 1131 | 66 | 998 | 25 |
Q4 | Exposed area of goaf/m² | 2063 | 3895.6 | 1740 | 2787.8 | 724 |
Q5 | Volume of goaf/m³ | 8294.3 | 21,901.2 | 6496 | 18,908.9 | 3982 |
Q6 | Span of goaf/m | 335.5 | 408.5 | 355 | 184 | 54 |
Q7 | Depth/m | 108.2 | 99.9 | 141.9 | 74.8 | 76.5 |
Q8 | Distance from slope/m | 295.9 | 181 | 132.6 | 144.7 | 120 |
Q9 | Shape factor of goaf | 8 | 8 | 4 | 6 | 4 |
Q10 | Status of adjacent goaf | 8 | 6 | 4 | 4 | 8 |
Evaluation Index | Grade I | Grade II | Grade III | Grade IV |
---|---|---|---|---|
Q1 | 6~8 | 4~6 | 3~4 | 0~3 |
Q2 | 0.06~0.11 | 0.03~0.06 | 0.015~0.03 | 0~0.015 |
Q3 | 0~200 | 200~500 | 500~1000 | 1000~1500 |
Q4 | 3000~4500 | 1500~3000 | 500~1500 | 0~500 |
Q5 | 15,000~25,000 | 10,000~15,000 | 5000~10,000 | 0~5000 |
Q6 | 250~450 | 100~250 | 50~100 | 0~50 |
Q7 | 0~40 | 40~80 | 80~120 | 120~150 |
Q8 | 0~70 | 70~140 | 140~210 | 210~300 |
Q9 | 7~9 | 5~7 | 3~5 | 0~3 |
Q10 | 7~9 | 5~7 | 3~5 | 0~3 |
PCO-Goaf Group | Bullseye Proximity Degree | The Grade of PCO-Goaf Group | |||
---|---|---|---|---|---|
Grade I | Grade II | Grade III | Grade IV | ||
1# | 0.9852 | 0.9530 | 0.9134 | 0.9003 | I |
2# | 0.9576 | 0.9609 | 0.8701 | 0.8544 | II |
3# | 0.8719 | 0.9400 | 0.9969 | 0.9454 | III |
4# | 0.8915 | 0.9621 | 0.9366 | 0.8995 | II |
5# | 0.9448 | 0.9602 | 0.9609 | 0.9399 | III |
Rock | Density (kN/m³) | Cohesive Force c (kPa) | Internal Friction Angle φ (°) |
---|---|---|---|
Felsic Hornstone | 26.6 | 362 | 38.41 |
Chalcopyrite-veinlet-bearing Biotitic Felsic Hornfels | 26.6 | 338 | 38.17 |
Biotite diopside plagioclase | 28.1 | 389 | 38.93 |
Skarn | 32.9 | 362 | 38.97 |
Biotite diopside plagioclase | 28.1 | 389 | 38.93 |
Granite | 25.5 | 380 | 39.4 |
Na, Q.; Chen, Q.; Tao, Y.; Zhang, X.; Tan, Y. Management Solutions and Stabilization of a Pre-Existing Concealed Goaf Underneath an Open-Pit Slope. Appl. Sci. 2024 , 14 , 6849. https://doi.org/10.3390/app14156849
Research Methods – Types, Examples and Guide
Definition:
Research Methods refer to the techniques, procedures, and processes researchers use to collect, analyze, and interpret data in order to answer research questions or test hypotheses. The methods used can vary depending on the research questions, the type of data being collected, and the research design.
Types of Research Methods are as follows:
Qualitative research methods are used to collect and analyze non-numerical data. This type of research is useful when the objective is to explore the meaning of phenomena, understand the experiences of individuals, or gain insights into complex social processes. Qualitative research methods include interviews, focus groups, ethnography, and content analysis.
Quantitative research methods are used to collect and analyze numerical data. This type of research is useful when the objective is to test a hypothesis, determine cause-and-effect relationships, and measure the prevalence of certain phenomena. Quantitative research methods include surveys, experiments, and secondary data analysis.
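As an illustration of quantitative hypothesis testing, a two-group experiment can be analyzed with Welch's t-test; the sketch below uses a pure-Python implementation and hypothetical scores (all names and data are illustrative):

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t-statistic and degrees of freedom for two independent samples."""
    na, nb = len(sample_a), len(sample_b)
    se2_a = variance(sample_a) / na  # squared standard error of each mean
    se2_b = variance(sample_b) / nb
    t = (mean(sample_a) - mean(sample_b)) / math.sqrt(se2_a + se2_b)
    # Welch-Satterthwaite approximation of the degrees of freedom
    df = (se2_a + se2_b) ** 2 / (se2_a**2 / (na - 1) + se2_b**2 / (nb - 1))
    return t, df

# Hypothetical experiment: treatment vs. control scores
treatment = [78, 85, 90, 82, 88, 84]
control = [72, 75, 80, 70, 78, 74]
t, df = welch_t(treatment, control)
print(f"t = {t:.2f}, df = {df:.1f}")  # compare |t| against the t-distribution
```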
Mixed-methods research refers to the combination of both qualitative and quantitative research methods in a single study. This approach aims to overcome the limitations of each individual method and provide a more comprehensive understanding of the research topic. It allows researchers to gather both quantitative data, which is often used to test hypotheses and make generalizations about a population, and qualitative data, which provides a more in-depth understanding of the experiences and perspectives of individuals.
The following Table shows the key differences between Quantitative, Qualitative and Mixed Research Methods
Research Method | Quantitative | Qualitative | Mixed Methods |
---|---|---|---|
Purpose | To measure and quantify variables | To understand the meaning and complexity of phenomena | To integrate both quantitative and qualitative approaches |
Focus | Typically focused on testing hypotheses and determining cause-and-effect relationships | Typically exploratory and focused on understanding the subjective experiences and perspectives of participants | Can be either, depending on the research design |
Data collection | Usually involves standardized measures or surveys administered to large samples | Often involves in-depth interviews, observations, or analysis of texts or other forms of data | Usually involves a combination of quantitative and qualitative methods |
Data analysis | Typically involves statistical analysis to identify patterns and relationships in the data | Typically involves thematic analysis or other qualitative methods to identify themes and patterns in the data | Usually involves both quantitative and qualitative analysis |
Strengths | Can provide precise, objective data that can be generalized to a larger population | Can provide rich, detailed data that can help understand complex phenomena in depth | Can combine the strengths of both quantitative and qualitative approaches |
Limitations | May not capture the full complexity of phenomena, and may be limited by the quality of the measures used | May be subjective and may not be generalizable to larger populations | Can be time-consuming and resource-intensive, and may require specialized skills |
Examples | Surveys, experiments, correlational studies | Interviews, focus groups, ethnography | Sequential explanatory design, convergent parallel design, explanatory sequential design |
Examples of Research Methods are as follows:
Qualitative Research Example:
A researcher wants to study the experience of cancer patients during their treatment. They conduct in-depth interviews with patients to gather data on their emotional state, coping mechanisms, and support systems.
Quantitative Research Example:
A company wants to determine the effectiveness of a new advertisement campaign. They survey a large group of people, asking them to rate their awareness of the product and their likelihood of purchasing it.
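A sketch of how such survey ratings might be analyzed quantitatively, computing the Pearson correlation between product awareness and purchase intent (all data and names are hypothetical):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical survey responses on 1-5 Likert scales
awareness = [1, 2, 2, 3, 4, 4, 5, 5]
purchase_intent = [1, 1, 2, 3, 3, 4, 4, 5]
r = pearson_r(awareness, purchase_intent)
print(f"r = {r:.2f}")  # values near +1 indicate a strong positive association
```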
Mixed Research Example:
A university wants to evaluate the effectiveness of a new teaching method in improving student performance. They collect both quantitative data (such as test scores) and qualitative data (such as feedback from students and teachers) to get a complete picture of the impact of the new method.
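A minimal sketch of how the two strands of such a mixed-methods study might be summarized: descriptive statistics for the quantitative strand, and code frequencies for the qualitative strand (all data are hypothetical):

```python
from statistics import mean
from collections import Counter

# Hypothetical quantitative strand: test scores before and after the new method
before = [62, 70, 65, 68, 71, 66]
after = [74, 78, 72, 80, 77, 75]
print(f"mean before = {mean(before):.1f}, mean after = {mean(after):.1f}")

# Hypothetical qualitative strand: codes assigned to open-ended feedback
codes = ["engagement", "clarity", "engagement", "workload",
         "clarity", "engagement", "pacing"]
print(Counter(codes).most_common(2))  # the most frequently mentioned themes
```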
Research methods are used in various fields to investigate, analyze, and answer research questions. Here are some examples of how research methods are applied in different fields:
Research methods serve several purposes, including:
Research methods are used when you need to gather information or data to answer a question or to gain insights into a particular phenomenon.
Here are some situations when research methods may be appropriate:
Research methods provide several advantages, including:
Christen Johnson, a 2023 graduate of the EdD in Educational and Professional Practice , published her dissertation titled, The MONROE Method: A Methodology on Navigating Race, Oppression, and Equity in Medical Education through Physician Cultural Responsibility .
Drawing on the transformative research paradigm and transformative learning theory, Johnson conducted a mixed-methods study of deidentified qualitative and quantitative data, using analytical software to research physician burnout and how the Physician Cultural Responsibility practice can be integrated into medical education.
The practice of Physician Cultural Responsibility provides a means to overcome health disparities and support physicians while embracing the intersectionality of the populations they serve. As there is no standardized curriculum to address teaching the practice of Physician Cultural Responsibility, Johnson’s study aims to evaluate a proposed curriculum for the adoption of Physician Cultural Responsibility into students’ physician professional identity, student experience, and knowledge transfer. This includes inclusive and culturally responsive pedagogy aimed at supporting the students’ development of skills that improve the patient-physician connection with all patients, hoping to limit the impact of personal biases on medical practice and dismantle the social categorization of medicine.
Results suggest that adopting Physician Cultural Responsibility in physician identity development among first-year medical students, together with successful knowledge transfer and improvements in collaboration, belonging, and support in student experiences, is essential for the practice to become lifelong. Johnson hopes the practice of Physician Cultural Responsibility and its adoption in physician professional identity yields an opportunity to create the culture change necessary within medicine to improve equitable patient-centered care for all patients, overcome health disparities, and support physicians through the challenges of medical practice.
A champion for health equity, scholar-practitioner, and board-certified family physician, Johnson’s clinical practice in family medicine is rooted in equitable practice, lifestyle medicine, and seeking system interventions that bring healing to patients, their families, and communities. She lives the phrase “To whom much is given, much is required” through her career in leadership and dedication to serving the most vulnerable communities and educating others to do the same.
Learn more about Johnson and read her dissertation, The MONROE Method: A Methodology on Navigating Race, Oppression, and Equity in Medical Education through Physician Cultural Responsibility, here.
Your research methodology discusses and explains the data collection and analysis methods you used in your research. A key part of your thesis, dissertation, or research paper, the methodology chapter explains what you did and how you did it, allowing readers to evaluate the reliability and validity of your research and your dissertation topic.
The methods section of an APA style paper is where you report in detail how you performed your study. Research papers in the social and natural sciences
Writing Your Thesis Methods and Results: Christy Ley, Senior Thesis Tutorial, November 15, 2013
Learn how to write up a high-quality research methodology chapter for your dissertation or thesis. Step by step instructions + examples.
Research Methods | Definitions, Types, Examples Research methods are specific procedures for collecting and analyzing data. Developing your research methods is an integral part of your research design. When planning your methods, there are two key decisions you will make.
Your Methods Section contextualizes the results of your study, giving editors, reviewers and readers alike the information they need to understand and interpret your work. Your methods are key to establishing the credibility of your study, along with your data and the results themselves. A complete methods section should provide enough detail for a skilled researcher to replicate your process ...
Dissertation Methodology In any research, the methodology chapter is one of the key components of your dissertation. It provides a detailed description of the methods you used to conduct your research and helps readers understand how you obtained your data and how you plan to analyze it. This section is crucial for replicating the study and validating its results.
Research methodology refers to how your project will be designed, what you will observe or measure, and how you will collect and analyze data. The methods you choose must be appropriate for your field and for the specific research questions you are setting out to answer.
Mixed-method approaches combine both qualitative and quantitative methods, and therefore combine the strengths of both types of research. Mixed methods have gained popularity in recent years. When undertaking mixed-methods research you can collect the qualitative and quantitative data either concurrently or sequentially.
4 Writing the Materials and Methods (Methodology) Section The Materials and Methods section briefly describes how you did your research. In other words, what did you do to answer your research question? If there were materials used for the research or materials experimented on you list them in this section.
Learn how to write a thesis, a long essay or dissertation on a specific subject, with this comprehensive guide and examples.
What is a thesis research methodology? A thesis research methodology explains the type of research performed, justifies the methods that you chose by linking back to the literature review, and describes the data collection and analysis procedures. It is included in your thesis after the Introduction section.
A methods chapter written for a thesis is written in the past tense to indicate what you have done. There is no single correct way to structure the methodology section. The structure of your work will depend on the discipline you are working within, as well as the structure of your overall research project. If your work is built around a single ...
The research methodology is an important section of any research paper or thesis, as it describes the methods and procedures that will be used to conduct the research. It should include details about the research design, data collection methods, data analysis techniques, and any ethical considerations.
What is a Methodology? The methodology is perhaps the most challenging and laborious part of the dissertation. Essentially, the methodology helps in understanding the broad, philosophical approach behind the methods of research you chose to employ in your study. The research methodology elaborates on the 'how' part of your research.
Materials and Methods sections (full) Materials and Methods or "Experimental Procedures" should be the easiest part to write of any scientific report, thesis, or dissertation - you know what you did! That said, in my experience of marking MRes reports and even doctoral theses, there are a lot of common mistakes that can be fixed and ...
Writing a thesis can be a daunting experience. Other than a dissertation, it is one of the longest pieces of writing students typically complete. It relies on your ability to conduct research from start to finish: choosing a relevant topic, crafting a proposal, designing your research, collecting data, developing a robust analysis, drawing strong conclusions, and writing concisely.
What are the different research methods for the dissertation, and which one should I use? Choosing the right research method for a dissertation is a grinding and perplexing aspect of the dissertation research process.
It is very important to choose the right research methodology and methods for your thesis, as your research is the base that your entire thesis will rest on. It will be difficult for me to choose a research method for you. You will be the best judge of the kind of methods that work for your research. However, I can guide you on how you can choose an appropriate study design and research ...
In this thesis, we aim to advance the field of multimodal intelligence by focusing on three crucial dimensions: multimodal alignment, robustness, and generalizability. By introducing new approaches and methods, we aim to improve the performance, robustness, and interpretability of multimodal models in practical applications.
A LOW-FREQUENCY INVESTIGATION OF ACOUSTICALLY COUPLED SPACES USING THE FINITE-DIFFERENCE TIME-DOMAIN METHOD By Jonathan Botts A Thesis Submitted to the Graduate
This page contains reference examples for published dissertations or theses, which are considered published when they are available from a database such as ProQuest Dissertations and Theses Global or PDQT Open, an institutional repository, or an archive.
A mixed-methods dissertation combines both quantitative and qualitative research approaches to gather and analyze data. It typically uses methods such as surveys, interviews, and focus groups, as well as statistical analysis.
A THESIS SUBMITTED TO THE FACULTY OF GRADUATE STUDIES IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF NURSING ... study, the actual research itself was carried out using certain methods. This chapter will also detail the sampling, data collection and analysis methods that were utilized in this study. ...
The medical domain also contends with acute data challenges involving the prohibitive costs of acquiring, labeling, and publicizing data. To address these challenges, we introduce six new deep learning methods that enhance data efficiency and improve task performance by harnessing prior knowledge inherent in medical images.
What's the difference between method and methodology? Methodology refers to the overarching strategy and rationale of your research project. It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives. Methods are the specific tools and procedures ...
Pre-existing concealed goafs underneath open-pit slopes (PCO-goafs) pose a serious threat to the stability of open-pit slopes (OP-slopes), which is a common problem worldwide. In this paper, the variable weight-target approaching method, equilibrium beam theory, Pratt's arch theory, and numerical simulation are used to analyze the management solutions and stability of five PCO-goaf groups in ...
Research methods are used in various fields to investigate, analyze, and answer research questions. Here are some examples of how research methods are applied in different fields: Psychology: Research methods are widely used in psychology to study human behavior, emotions, and mental processes. For example, researchers may use experiments ...