Units of meaning.
In general, qualitative analysis begins with organizing data. Large amounts of data need to be stored in smaller, manageable units that can be retrieved and reviewed easily. To obtain a sense of the whole, analysis starts with reading and rereading the data, looking for themes, emotions and the unexpected, taking into account the overall picture. You immerse yourself in the data. The most widely used procedure is to develop an inductive coding scheme based on the actual data [11]. This is a process of open coding, creating categories and abstraction. In most cases, you do not start with a predefined coding scheme; you describe what is going on in the data. You ask yourself: what is this? What does it stand for? What else is like this? What is this distinct from? Based on this close examination of what emerges from the data, you make as many labels as needed. Then you make a coding sheet, in which you collect the labels and, based on your interpretation, cluster them into preliminary categories. The next step is to order similar or dissimilar categories into broader, higher-order categories. Each category is named using content-characteristic words. Then you use abstraction by formulating a general description of the phenomenon under study: subcategories with similar events and information are grouped together as categories, and categories are grouped as main categories. During the analysis process, you identify 'missing analytical information' and continue data collection. You reread, recode, re-analyse and re-collect data until your findings provide breadth and depth.
Throughout the qualitative study, you reflect on what you see or do not see in the data. It is common to write 'analytic memos' [3], write-ups or mini-analyses about what you think you are learning during the course of your study, from designing to publishing. They can be a few sentences or several pages, whatever is needed to reflect upon the open codes, categories, concepts, and patterns that might be emerging in the data. Memos can contain summaries of major findings, as well as comments and reflections on particular aspects.
In ethnography, analysis begins from the moment that the researcher sets foot in the field. The analysis involves continually looking for patterns in the behaviours and thoughts of the participants in everyday life, in order to obtain an understanding of the culture under study. When comparing one pattern with another and analysing many patterns simultaneously, you may use maps, flow charts, organizational charts and matrices to illustrate the comparisons graphically. The outcome of an ethnographic study is a narrative description of a culture.
In phenomenology, analysis aims to describe and interpret the meaning of an experience, often by identifying essential subordinate and major themes. You search for common themes featuring within an interview and across interviews, sometimes involving the study participants or other experts in the analysis process. The outcome of a phenomenological study is a detailed description of themes that capture the essential meaning of a ‘lived’ experience.
Grounded theory generates a theory that explains how a basic social problem that emerged from the data is processed in a social setting. Grounded theory uses the ‘constant comparison’ method, which involves comparing elements that are present in one data source (e.g., an interview) with elements in another source, to identify commonalities. The steps in the analysis are known as open, axial and selective coding. Throughout the analysis, you document your ideas about the data in methodological and theoretical memos. The outcome of a grounded theory study is a theory.
Descriptive generic qualitative research is defined as research designed to produce a low-inference description of a phenomenon [12]. Although Sandelowski maintains that all research involves interpretation, she has also suggested that qualitative description attempts to minimize the inferences made in order to remain 'closer' to the original data [12]. Descriptive generic qualitative research often applies content analysis. Descriptive content analysis studies are not based on a specific qualitative tradition and vary in their methods of analysis. The analysis of the content aims to identify themes, and patterns within and among these themes. An inductive content analysis [11] involves breaking down the data into smaller units, coding and naming the units according to the content they present, and grouping the coded material based on shared concepts; the groupings can be represented by clustering in tree-like diagrams. A deductive content analysis [11] uses a theory, theoretical framework or conceptual model to analyse the data by operationalizing it in a coding matrix. An inductive content analysis might use several techniques from grounded theory, such as open and axial coding and constant comparison. However, note that your findings are then merely a summary of categories, not a grounded theory.
Analysis software can support you in managing your data, for example by helping to store, annotate and retrieve texts, to locate words, phrases and segments of data, to name and label, to sort and organize, to identify data units, to prepare diagrams and to extract quotes. Still, as a researcher you do the analytical work yourself: looking at what is in the data and making decisions about assigning codes and identifying categories, concepts and patterns. The computer assisted qualitative data analysis (CAQDAS) website provides support for making informed choices between analytical software packages and courses: http://www.surrey.ac.uk/sociology/research/researchcentres/caqdas/support/choosing. See Box 5 for further reading on qualitative analysis.
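The code-and-retrieve support described above can be illustrated with a short sketch. This is a hypothetical, minimal example (the code labels and quotes are invented, not from any real study or tool) of storing labelled data segments so they can be retrieved and reviewed later:

```python
from collections import defaultdict

# A minimal "coding sheet": map each open code (label) to the data
# segments (quotes) it has been assigned to, so coded material can be
# retrieved and reviewed together.
class CodingSheet:
    def __init__(self):
        self._segments = defaultdict(list)

    def assign(self, code, segment):
        """Label a segment of data with an open code."""
        self._segments[code].append(segment)

    def retrieve(self, code):
        """Retrieve every segment coded with the given label."""
        return list(self._segments[code])

sheet = CodingSheet()
sheet.assign("barriers to care", "I had to wait three months for an appointment.")
sheet.assign("barriers to care", "The clinic is two bus rides away.")
sheet.assign("coping", "My sister drives me when she can.")

print(sheet.retrieve("barriers to care"))
```

The analytical decisions, which label to create and which segment it fits, remain the researcher's; the software only keeps the bookkeeping consistent.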
Ethnography | • Atkinson P, Coffey A, Delamont S, Lofland J, Lofland L. Handbook of ethnography. Thousand Oaks (CA): Sage; 2001. • Spradley J. The ethnographic interview. New York (NY): Holt, Rinehart & Winston; 1979. • Spradley J. Participant observation. New York (NY): Holt, Rinehart & Winston; 1980. |
Phenomenology | • Colaizzi PF. Psychological research as the phenomenologist views it. In: Valle R, King M, editors. Existential-phenomenological alternatives for psychology. New York (NY): Oxford University Press; 1978. p. 41-78. • Smith JA, Flowers P, Larkin M. Interpretative phenomenological analysis: theory, method and research. London: Sage; 2010. |
Grounded theory | • Charmaz K. Constructing grounded theory. 2nd ed. Thousand Oaks (CA): Sage; 2014. • Corbin J, Strauss A. Basics of qualitative research: techniques and procedures for developing grounded theory. Los Angeles (CA): Sage; 2008. |
Content analysis | • Elo S, Kääriäinen M, Kanste O, Pölkki T, Utriainen K, Kyngäs H. Qualitative content analysis: a focus on trustworthiness. SAGE Open. 2014:1–10. DOI: 10.1177/2158244014522633. • Elo S, Kyngäs H. The qualitative content analysis process. J Adv Nurs. 2008;62:107–115. • Hsieh HF, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15:1277–1288. |
The next and final article in this series, Part 4, will focus on trustworthiness and publishing qualitative research [13].
The authors thank the following junior researchers who have been participating for the last few years in the so-called ‘Think tank on qualitative research’ project, a collaborative project between Zuyd University of Applied Sciences and Maastricht University, for their pertinent questions: Erica Baarends, Jerome van Dongen, Jolanda Friesen-Storms, Steffy Lenzen, Ankie Hoefnagels, Barbara Piskur, Claudia van Putten-Gamel, Wilma Savelberg, Steffy Stans, and Anita Stevens. The authors are grateful to Isabel van Helmond, Joyce Molenaar and Darcy Ummels for proofreading our manuscripts and providing valuable feedback from the ‘novice perspective’.
The authors report no conflicts of interest. The authors alone are responsible for the content and writing of the paper.
Sampling methods, types & techniques.
Your comprehensive guide to the different sampling methods available to researchers – and how to know which is right for your research.
In survey research, sampling is the process of using a subset of a population to represent the whole population. To help illustrate this further, let’s look at data sampling methods with examples below.
Let’s say you wanted to do some research on everyone in North America. To ask every person would be almost impossible. Even if everyone said “yes”, carrying out a survey across different states, in different languages and timezones, and then collecting and processing all the results, would take a long time and be very costly.
Sampling allows large-scale research to be carried out with a more realistic cost and time-frame because it uses a smaller number of individuals in the population with representative characteristics to stand in for the whole.
However, when you decide to sample, you take on a new task. You have to decide who is part of your sample list and how to choose the people who will best represent the whole population. How you go about that is what the practice of sampling is all about.
Although the idea of sampling is easiest to understand when you think about a very large population, it makes sense to use sampling methods in research studies of all types and sizes. After all, if you can reduce the effort and cost of doing a study, why wouldn’t you? And because sampling allows you to research larger target populations using the same resources as you would smaller ones, it dramatically opens up the possibilities for research.
Sampling is a little like having gears on a car or bicycle. Instead of always turning a set of wheels of a specific size and being constrained by their physical properties, it allows you to translate your effort to the wheels via the different gears, so you’re effectively choosing bigger or smaller wheels depending on the terrain you’re on and how much work you’re able to do.
Sampling allows you to “gear” your research so you’re less limited by the constraints of cost, time, and complexity that come with different population sizes.
It allows us to do things like carrying out exit polls during elections, mapping the spread and effects of epidemics across geographical areas, and carrying out nationwide census research that provides a snapshot of society and culture.
Sampling strategies in research vary widely across different disciplines and research areas, and from study to study.
There are two major types of sampling methods: probability and non-probability sampling.
As we delve into these categories, it’s essential to understand the nuances and applications of each method to ensure that the chosen sampling strategy aligns with the research goals.
There’s a wide range of probability sampling methods to explore and consider. Here are some of the best-known options.
With simple random sampling , every element in the population has an equal chance of being selected as part of the sample. It’s something like picking a name out of a hat. Simple random sampling can be done by anonymising the population – e.g. by assigning each item or person in the population a number and then picking numbers at random.
Pros: Simple random sampling is easy to do and cheap. Designed to ensure that every member of the population has an equal chance of being selected, it reduces the risk of bias compared to non-random sampling.
Cons: It offers no control for the researcher and may lead to unrepresentative groupings being picked by chance.
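As a sketch of the “name out of a hat” idea above, here is simple random sampling over a hypothetical numbered population of 1,000 respondents (the seed is fixed only to make the draw reproducible):

```python
import random

# Hypothetical population: 1,000 numbered respondents, each with an
# equal chance of selection.
population = list(range(1, 1001))

random.seed(42)  # fixed seed so the draw is reproducible
sample = random.sample(population, k=100)  # a 10% sample, drawn without repeats

print(len(sample), len(set(sample)))  # 100 unique selections
```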
With systematic sampling the random selection only applies to the first item chosen. A rule then applies so that every nth item or person after that is picked.
Best practice is to sort your list in a random way to ensure that selections won’t be accidentally clustered together. This is commonly achieved using a random number generator. If that’s not available you might order your list alphabetically by first name and then pick every fifth name to eliminate bias, for example.
Next, you need to decide your sampling interval – for example, if your sample will be 10% of your full list, your sampling interval is one in 10 – and pick a random start between one and 10 – for example three. This means you would start with person number three on your list and pick every tenth person.
Pros: Systematic sampling is efficient and straightforward, especially when dealing with populations that have a clear order. It ensures a uniform selection across the population.
Cons: There’s a potential risk of introducing bias if there’s an unrecognised pattern in the population that aligns with the sampling interval.
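The 1-in-10 example above can be sketched as follows, using a hypothetical ordered list of names:

```python
import random

# Hypothetical ordered list of 200 names; the sampling interval is 1 in 10.
population = [f"person_{i}" for i in range(1, 201)]
interval = 10

random.seed(3)
start = random.randrange(interval)   # random start within the first interval
sample = population[start::interval]  # then every 10th person after the start

print(len(sample))  # 20 selections: a uniform 10% spread across the list
```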
Stratified sampling involves random selection within predefined groups. It’s a useful method for researchers wanting to determine what aspects of a sample are highly correlated with what’s being measured. They can then decide how to subdivide (stratify) it in a way that makes sense for the research.
For example, you want to measure the height of students at a college where 80% of students are female and 20% are male. We know that gender is highly correlated with height, and if we took a simple random sample of 200 students (out of the 2,000 who attend the college), we could by chance get 200 females and not one male. This would bias our results and we would underestimate the height of students overall. Instead, we could stratify by gender and make sure that 20% of our sample (40 students) are male and 80% (160 students) are female.
Pros: Stratified sampling enhances the representation of all identified subgroups within a population, leading to more accurate results in heterogeneous populations.
Cons: This method requires accurate knowledge about the population’s stratification, and its design and execution can be more intricate than other methods.
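The college example above can be sketched in code (the student records are hypothetical; the stratum sizes follow the 80/20 split from the text):

```python
import random

# 2,000 students: 80% female (1,600) and 20% male (400). We want a
# 200-student sample that preserves those proportions.
students = [("F", i) for i in range(1600)] + [("M", i) for i in range(400)]

random.seed(7)
strata = {"F": [s for s in students if s[0] == "F"],
          "M": [s for s in students if s[0] == "M"]}

# Random selection *within* each predefined group.
sample = random.sample(strata["F"], 160) + random.sample(strata["M"], 40)

counts = {g: sum(1 for s in sample if s[0] == g) for g in ("F", "M")}
print(counts)  # {'F': 160, 'M': 40} — the 80/20 split is guaranteed
```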
With cluster sampling, groups rather than individual units of the target population are selected at random for the sample. These might be pre-existing groups, such as people in certain zip codes or students belonging to an academic year.
Cluster sampling can be done by selecting the entire cluster, or in the case of two-stage cluster sampling, by randomly selecting the cluster itself, then selecting at random again within the cluster.
Pros: Cluster sampling is economically beneficial and logistically easier when dealing with vast and geographically dispersed populations.
Cons: Due to potential similarities within clusters, this method can introduce a greater sampling error compared to other methods.
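Two-stage cluster sampling, as described above, can be sketched with hypothetical zip-code clusters:

```python
import random

# Hypothetical population organised into 20 pre-existing clusters
# (e.g. zip codes) of 50 people each.
clusters = {z: [f"{z}-{i}" for i in range(50)] for z in range(20)}

random.seed(11)
# Stage 1: randomly select 4 of the 20 clusters.
chosen = random.sample(sorted(clusters), 4)
# Stage 2: randomly select 10 people within each chosen cluster.
sample = [p for z in chosen for p in random.sample(clusters[z], 10)]

print(len(sample))  # 40 participants, drawn from only 4 of the 20 clusters
```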
The non-probability sampling methodology doesn’t offer the same bias-removal benefits as probability sampling, but there are times when these types of sampling are chosen for expediency or simplicity. Here are some forms of non-probability sampling and how they work.
People or elements in a sample are selected on the basis of their accessibility and availability. If you are doing a research survey and you work at a university, for example, a convenience sample might consist of students or co-workers who happen to be on campus with open schedules who are willing to take your questionnaire.
This kind of sample can have value, especially if it’s done as an early or preliminary step, but significant bias will be introduced.
Pros: Convenience sampling is the most straightforward method, requiring minimal planning, making it quick to implement.
Cons: Due to its non-random nature, the method is highly susceptible to biases, and the results are often lacking in their application to the real world.
Like the probability-based stratified sampling method, this approach aims to achieve a spread across the target population by specifying who should be recruited for a survey according to certain groups or criteria.
For example, your quota might include a certain number of males and a certain number of females. Alternatively, you might want your samples to be at a specific income level or in certain age brackets or ethnic groups.
Pros: Quota sampling ensures certain subgroups are adequately represented, making it great for when random sampling isn’t feasible but representation is necessary.
Cons: The selection within each quota is non-random, and researchers’ discretion can influence the representation, both of which increase the risk of bias.
Participants for the sample are chosen consciously by researchers based on their knowledge and understanding of the research question at hand or their goals.
Also known as judgment sampling, this technique is unlikely to result in a representative sample, but it is a quick and fairly easy way to get a range of results or responses.
Pros: Purposive sampling targets specific criteria or characteristics, making it ideal for studies that require specialised participants or specific conditions.
Cons: It’s highly subjective and based on researchers’ judgment, which can introduce biases and limit the study’s real-world application.
With this approach, people recruited to be part of a sample are asked to invite those they know to take part, who are then asked to invite their friends and family and so on. The participation radiates through a community of connected individuals like a snowball rolling downhill.
Pros: Especially useful for hard-to-reach or secretive populations, snowball sampling is effective for certain niche studies.
Cons: The method can introduce bias due to the reliance on participant referrals, and the choice of initial seeds can significantly influence the final sample.
Choosing the right sampling method is a pivotal aspect of any research process, but it can be a stumbling block for many.
Here’s a structured approach to guide your decision.
If you aim to get a general sense of a larger group, simple random or stratified sampling could be your best bet. For focused insights or studying unique communities, snowball or purposive sampling might be more suitable.
The nature of the group you’re studying can guide your method. For a diverse group with different categories, stratified sampling can ensure all segments are covered. If they’re widely spread geographically, cluster sampling becomes useful. If they’re arranged in a certain sequence or order, systematic sampling might be effective.
Your available time, budget and ease of accessing participants matter. Convenience or quota sampling can be practical for quicker studies, but they come with some trade-offs. If reaching everyone in your desired group is challenging, snowball or purposive sampling can be more feasible.
Decide if you want your findings to represent a much broader group. For a wider representation, methods that include everyone fairly (like probability sampling) are a good option. For specialised insights into specific groups, non-probability sampling methods can be more suitable.
Before fully committing, discuss your chosen method with others in your field and consider a test run.
Using a sample is a kind of short-cut. If you could ask every single person in a population to take part in your study and have each of them reply, you’d have a highly accurate (and very labor-intensive) project on your hands.
But since that’s not realistic, sampling offers a “good-enough” solution that sacrifices some accuracy for the sake of practicality and ease. How much accuracy you lose out on depends on how well you control for sampling error, non-sampling error, and bias in your survey design. Our blog post helps you to steer clear of some of these issues.
Finding the best sample size for your target population is something you’ll need to do again and again, as it’s different for every study.
To make life easier, we’ve provided a sample size calculator. To use it, you need to know your:
If any of those terms are unfamiliar, have a look at our blog post on determining sample size for details of what they mean and how to find them.
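Since the calculator itself is not reproduced here, one common formula behind such calculators can be sketched instead: Cochran’s sample-size formula for a proportion, with an optional finite-population correction. The function and parameter names below are illustrative, not the calculator’s actual interface:

```python
import math

def sample_size(z, margin_of_error, population=None, p=0.5):
    """Cochran's formula for estimating a proportion.

    z is the z-score for the desired confidence level (1.96 for 95%);
    p = 0.5 is the most conservative assumed proportion; if a finite
    population size is given, the finite-population correction is applied.
    """
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    if population is not None:
        n0 = n0 / (1 + (n0 - 1) / population)  # finite-population correction
    return math.ceil(n0)

# 95% confidence, ±5% margin of error, effectively infinite population:
print(sample_size(z=1.96, margin_of_error=0.05))  # 385
```

Note how tightening the margin of error or raising the confidence level pushes the required sample size up quickly, which is exactly the trade-off the calculator exposes.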
Recent studies in active learning, particularly in uncertainty sampling, have focused on the decomposition of model uncertainty into reducible and irreducible uncertainties. In this paper, the aim is to simplify the computational process while eliminating the dependence on observations. Crucially, the inherent uncertainty in the labels is considered, i.e. the uncertainty of the oracles. Two strategies are proposed: sampling by Klir uncertainty, which tackles the exploration–exploitation dilemma, and sampling by evidential epistemic uncertainty, which extends the concept of reducible uncertainty within the evidential framework; both use the theory of belief functions. Experimental results in active learning demonstrate that our proposed method can outperform uncertainty sampling.
Data availability.
All data used in this study are publicly available online. The datasets were extracted directly from the repositories linked in the following section.
The code for the theoretical experiments is available at this link: https://anonymous.4open.science/r/evidential-uncertainty-sampling-D453 . The code for the experimental part in active learning is available at: https://anonymous.4open.science/r/evidential-active-learning-B266 .
For details on experiments conducted in theoretical sections, visit: https://anonymous.4open.science/r/evidential-uncertainty-sampling-D453 .
From now on, the model used is K-NN (K-Nearest Neighbors) with probabilistic output, in the distance-weighted version available in scikit-learn (Pedregosa et al., 2011); all other parameters are scikit-learn defaults. The uncertainty used is the least confidence measure given in Eq. ( 5 ).
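As an illustration of the least-confidence measure on a distance-weighted K-NN, here is a pure-Python sketch that mirrors (but does not reproduce) the scikit-learn setup described above; the 1-D data points are hypothetical and the least-confidence definition u(x) = 1 − max_y P(y|x) is the standard one assumed to correspond to the paper’s Eq. (5):

```python
from collections import defaultdict

def knn_proba(x, data, k=3):
    """Class probabilities for x from its k nearest neighbours,
    weighted by inverse distance (the weights='distance' idea)."""
    neighbours = sorted(data, key=lambda d: abs(d[0] - x))[:k]
    weights = defaultdict(float)
    for xi, yi in neighbours:
        weights[yi] += 1.0 / max(abs(xi - x), 1e-12)  # inverse-distance weight
    total = sum(weights.values())
    return {y: w / total for y, w in weights.items()}

def least_confidence(proba):
    """u(x) = 1 - max_y P(y|x): high when the model is least sure."""
    return 1.0 - max(proba.values())

# Hypothetical labelled points: class "a" near 0, class "b" near 1.
data = [(0.0, "a"), (0.1, "a"), (1.0, "b"), (1.1, "b")]
print(least_confidence(knn_proba(0.05, data)))  # low: deep inside class "a"
print(least_confidence(knn_proba(0.55, data)))  # higher: between the classes
```

In uncertainty sampling, the unlabeled point with the highest such score is the one queried next.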
In the example, the coin faces are written in Finnish; the word for “heads” is Kruuna.
The notion of plausibility within the theory of belief functions used in the proposed methods differs from the one presented here and will be discussed in greater detail in Sect. 4 .
The uncertainty no longer depends on observations, but the model does.
From now on, the Evidential K-Nearest Neighbors model of Denœux (1995) is considered.
This representation also applies to labeling performed by a machine.
Experiments were conducted according to the following code: https://anonymous.4open.science/r/evidential-active-learning-B266 .
An entropy of 1 means that the classes are perfectly equidistributed and an entropy of 0 would indicate the total over-representation of one of the classes.
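The normalised entropy described in this footnote can be sketched as follows (a small helper assumed for illustration, not taken from the paper’s code):

```python
import math

def entropy(counts):
    """Normalised Shannon entropy of class counts: 1.0 when the classes
    are perfectly equidistributed, 0.0 when one class totally dominates."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    h = sum(-p * math.log2(p) for p in probs)
    return h / math.log2(len(counts))  # normalise by log of the class count

print(entropy([50, 50]))   # 1.0 — perfectly balanced
print(entropy([100, 0]))   # 0.0 — total over-representation of one class
```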
Although it can also be to maximize performance given a cost.
Abdar, M., Pourpanah, F., Hussain, S., et al. (2021). A review of uncertainty quantification in deep learning: Techniques, applications and challenges. Information Fusion, 76 , 243–297.
Abe, N., Zadrozny, B., & Langford, J. (2006). Outlier detection by active learning. Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2006 , 504–509.
Aggarwal, C., Kong, X., Gu, Q., et al. (2014). Active Learning: A Survey, Data Classification: Algorithms and Applications . CRC Press.
Bondu, A., Lemaire, V. & Boullé, M. (2010). Exploration vs. exploitation in active learning: A Bayesian approach. In The 2010 international joint conference on neural networks (IJCNN) (pp. 1–7).
Charpentier, B., Zügner, D., & Günnemann, S., et al. (2020). Posterior network: Uncertainty estimation without OOD samples via density-based pseudo-counts. In H. Larochelle, M. Ranzato, & R. Hadsell (Eds.), Advances in Neural Information Processing Systems (Vol. 33, pp. 1356–1367). Curran Associates Inc.
Dempster, A. P. (1967). Upper and lower probabilities induced by a multivalued mapping. The Annals of Mathematical Statistics, 38 (2), 325–339.
Demšar, J. (2006). Statistical comparisons of classifiers over multiple data sets. The Journal of Machine Learning Research, 7 , 1–30.
Deng, Y. (2020). Uncertainty measure in evidence theory. Science China Information Sciences, 63 , 210201.
Denœux, T. (1995). A k-nearest neighbor classification rule based on Dempster–Shafer theory. IEEE Transactions on Systems, Man and Cybernetics, 219.
Denoeux, T., & Bjanger, M. (2000). Induction of decision trees from partially classified data using belief functions. Systems, Man, and Cybernetics, 4 , 2923–2928.
Dua, D. & Graff, C. (2017). UCI ML Repository. https://archive.ics.uci.edu/
Dubois, D., & Prade, H. (1987). Properties of measures of information in evidence and possibility theories. Fuzzy Sets and Systems, 24 (2), 161–182.
Elouedi, Z., Mellouli, K., & Smets, P. (2001). Belief decision trees: Theoretical foundations. International Journal of Approximate Reasoning, 28 (2), 91–124.
Hacohen, G., Dekel, A. & Weinshall, D. (2022). Active learning on a budget: Opposite strategies suit high and low budgets. In Chaudhuri, K., Jegelka, S., Song, L., et al. (Eds.), International conference on machine learning, 2022, Baltimore, Maryland, USA, proceedings of machine learning research (vol. 162, pp. 8175–8195). PMLR.
Hoarau, A., Martin, A., Dubois, J. C., et al. (2022). Imperfect labels with belief functions for active learning. In Belief functions: Theory and applications . Springer.
Hoarau, A., Martin, A., Dubois, J. C., et al. (2023a). Evidential random forests. Expert Systems with Applications, 230.
Hoarau, A., Thierry, C., Martin, A., et al. (2023b). Datasets with rich labels for machine learning. In 2023 IEEE international conference on fuzzy systems (FUZZ-IEEE) (pp. 1–6).
Hora, S. C. (1996). Aleatory and epistemic uncertainty in probability elicitation with an example from hazardous waste management. Reliability Engineering & System Safety, 54 (2), 217–223. Treatment of Aleatory and Epistemic Uncertainty.
Huang, L., Ruan, S. & Xing, Y., et al. (2023). A review of uncertainty quantification in medical image analysis: Probabilistic and non-probabilistic methods.
Hüllermeier, E., & Waegeman, W. (2021). Aleatoric and epistemic uncertainty in machine learning: An introduction to concepts and methods. Machine Learning, 110 , 457–506.
Hüllermeier, E., Destercke, S. & Shaker, M.H. (2022). Quantification of credal uncertainty in machine learning: A critical analysis and empirical comparison. In Cussens, J., & Zhang, K. (Eds.), Proceedings of the thirty-eighth conference on uncertainty in artificial intelligence, proceedings of machine learning research (vol. 180, pp. 548–557). PMLR.
Kendall, A. & Gal, Y. (2017). What uncertainties do we need in Bayesian deep learning for computer vision? In NIPS .
Klir, G. J., & Wierman, M. J. (1998). Uncertainty-based information: Elements of generalized information theory . Springer.
Kottke, D., Calma, A., Huseljic, D., et al. (2017). Challenges of reliable, realistic and comparable active learning evaluation. In Proceedings of the workshop and tutorial on interactive adaptive learning (pp. 2–14).
Lewis, D. D., & Gale, W.. A. (1994). A sequential algorithm for training text classifiers. In SIGIR .
Martens, T., Perini, L., & Davis, J. (2023). Semi-supervised learning from active noisy soft labels for anomaly detection. Machine learning and knowledge discovery in databases: Research track: European conference, ECML PKDD 2023, Turin (pp. 219–236). Springer-Verlag.
Martin, A. (2019). Conflict management in information fusion with belief functions. In E. Bossé & G. L. Rogova (Eds.), Information quality in information fusion and decision making. Information fusion and data science (pp. 79–97). Springer.
Nguyen, V. L., Shaker, M. H., & Hüllermeier, E. (2022). How to measure uncertainty in uncertainty sampling for active learning. Machine Learning, 111 , 89–122.
Pedregosa, F., Varoquaux, G., Gramfort, A., et al. (2011). Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12 , 2825–2830.
Senge, R., Bösner, S., Dembczynski, K., et al. (2014). Reliable classification: Learning classifiers that distinguish aleatoric and epistemic uncertainty. Information Science, 255 , 16–29.
Sensoy, M., Kaplan, L., & Kandemir, M., et al. (2018). Evidential deep learning to quantify classification uncertainty. In S. Bengio, H. Wallach, & H. Larochelle (Eds.), Advances in Neural Information Processing Systems. (Vol. 31). Curran Associates Inc.
Settles, B. (2009). Active learning literature survey. Computer Sciences Technical Report 1648, University of Wisconsin–Madison.
Shafer, G. (1976). A mathematical theory of evidence . Princeton University Press.
Book Google Scholar
Smets, P. & Kennes, R. (1994). The transferable belief model. Artificial Intelligence .
Thierry, C., Hoarau, A., Martin, A,. et al. (2022). Real bird dataset with imprecise and uncertain values. In 7th International conference on belief functions .
Yuan, B., Yue, X., Lv, Y., et al. (2020). Evidential deep neural networks for uncertain data classification. In: Knowledge science, engineering and management (proceedings of KSEM 2020). Lecture notes in computer science . Springer Verlag.
Download references
This work is funded by the Brittany region and the Côtes-d’Armor department. The authors also received funding from IRISA, the University of Rennes, DRUID and Orange SA.
Authors and Affiliations
CNRS, IRISA, DRUID, University of Rennes, 35000, Rennes, France
Arthur Hoarau, Yolande Le Gall, Jean-Christophe Dubois & Arnaud Martin
Orange Innovation, 22300, Lannion, France
Vincent Lemaire
Arthur Hoarau, Vincent Lemaire, Jean-Christophe Dubois, Yolande Le Gall and Arnaud Martin contributed to the manuscript equally.
Correspondence to Arthur Hoarau.
Conflict of Interest
Arthur Hoarau, Jean-Christophe Dubois, Yolande Le Gall and Arnaud Martin received research support from the University of Rennes, the IRISA laboratory and the DRUID team. Vincent Lemaire received research support from Orange SA.
All authors have read and approved the final manuscript.
All authors approved the publication.
Editor: Myra Spiliopoulou.
Reprints and permissions
Hoarau, A., Lemaire, V., Le Gall, Y. et al. Evidential uncertainty sampling strategies for active learning. Mach Learn (2024). https://doi.org/10.1007/s10994-024-06567-2
Received : 22 November 2023
Revised : 09 April 2024
Accepted : 04 May 2024
Published : 27 June 2024
MINNEAPOLIS / ST. PAUL (07/01/2024)—University of Minnesota College of Science and Engineering Dean Andrew Alleyne has named four new department heads in the college. All bring a wealth of academic, research, and leadership abilities to their departments.
Professor Kevin Dorfman has been appointed as the new department head for the Department of Chemical Engineering and Materials Science (CEMS). Dorfman started his five-year term on July 1, 2024.
Dorfman joined the University of Minnesota faculty in January of 2006 and was quickly promoted up the ranks, receiving tenure in 2011, promotion to professor in 2015, and named a Distinguished McKnight Professor in 2020. He previously served as the director of undergraduate studies in chemical engineering from 2018-2022, where he headed a large-scale revision of the chemical engineering curriculum and saw the department through its most recent ABET accreditation.
His research focuses on polymer physics and microfluidics, with applications in self-assembly and biotechnology. He is particularly well known for his integrated experimental and computational work on DNA confinement in nanochannels and its application towards genome mapping. Dorfman’s research has been recognized by numerous national awards including the AIChE Colburn Award, Packard Fellowship in Science and Engineering, NSF CAREER Award, and DARPA Young Faculty Award.
Dorfman received a bachelor’s degree in chemical engineering from Penn State and a master’s and Ph.D. in chemical engineering from MIT.
Professor Archis Ghate has been appointed as the new department head for the Department of Industrial and Systems Engineering after a national search. Ghate will begin his five-year term on July 8, 2024.
Ghate is an expert in operations research and most recently served as the Fluor Endowed Chair in the Department of Industrial Engineering at Clemson University. Previously, he was a professor of industrial and systems engineering at the University of Washington. He has won several research and teaching awards, including an NSF CAREER Award.
Ghate’s research in optimization spans areas as varied as health care, transportation and logistics, manufacturing, economics, and business analytics. He also served as a principal research scientist at Amazon working on supply chain optimization technologies.
Ghate received bachelor’s and master’s degrees, both in chemical engineering, from the Indian Institute of Technology. He also received a master’s degree in management science and engineering from Stanford University and a Ph.D. in industrial and operations engineering from the University of Michigan.
Professor Chris Hogan has been appointed as the new department head for the Department of Mechanical Engineering. Hogan started his five-year term on July 1, 2024.
Hogan, who currently holds the Carl and Janet Kuhrmeyer Chair, joined the University of Minnesota in 2009, and since then has taught fluid mechanics and heat transfer to nearly 1,000 undergraduates, advised 25+ Ph.D. students and postdoctoral associates, and served as the department’s director of graduate studies from 2015-2020. He most recently served as associate department head.
He is a leading expert in particle science with applications including supersonic-to-hypersonic particle impacts with surfaces, condensation and coagulation, agricultural sprays, and virus aerosol sampling and control technologies. He has authored and co-authored more than 160 papers on these topics. He currently serves as the editor-in-chief of the Journal of Aerosol Science . Hogan received the University of Minnesota College of Science and Engineering’s George W. Taylor Award for Distinguished Research in 2023.
Hogan holds a bachelor's degree from Cornell University and a Ph.D. from Washington University in Saint Louis.
Professor James Kakalios has been appointed as the new department head for the School of Physics and Astronomy. Kakalios started his five-year term on July 1, 2024.
Since joining the School of Physics and Astronomy in 1988, Kakalios has built a research program in experimental condensed matter physics, with particular emphasis on complex and disordered systems. His research ranges from the nano to the neuro with experimental investigations of the electronic and optical properties of nanostructured semiconductors and fluctuation phenomena in neurological systems.
During his time at the University of Minnesota, Kakalios has served as both director of undergraduate studies and director of graduate studies. He has received numerous awards and professorships including the University’s Taylor Distinguished Professorship, Andrew Gemant Award from the American Institute of Physics, and the Award for Public Engagement with Science from the American Association for the Advancement of Science (AAAS). He is a fellow of both the American Physical Society and AAAS.
In addition to numerous research publications, Kakalios is the author of three popular science books— The Physics of Superheroes , The Amazing Story of Quantum Mechanics , and The Physics of Everyday Things .
Kakalios received a bachelor's degree from City College of New York and master's and Ph.D. degrees from the University of Chicago.
Rhonda Zurn, College of Science and Engineering, [email protected]
University Public Relations, [email protected]
Constrained Spectral–Spatial Attention Residual Network and New Cross-Scene Dataset for Hyperspectral Classification
2.1. Hyperspectral Image Classification
2.2. Cross-Scene Hyperspectral Image Classification
3. Proposed Method
3.1. Overview
3.2. Spectral Feature Learning Module
3.3. Spatial Feature Learning Module
3.4. Feature Fusion and Loss Function
4.1. Data Description and Evaluation Metrics
4.1.1. Data Description
4.2.1. Experimental Setup
4.2.2. Compared Methods
4.3. Parameter Analysis and Ablation Experiments
4.3.1. Parameter Analysis
4.3.2. Ablation Experiments
5. Discussion
6. Conclusions
Author Contributions
Data Availability Statement
Conflicts of Interest
Class No. | Class Name | Training | Testing |
---|---|---|---|
1 | Alfalfa | 4 | 46 |
2 | Corn-Notill | 142 | 1428 |
3 | Corn-Mintill | 83 | 830 |
4 | Corn | 23 | 237 |
5 | Grass-Pasture | 48 | 483 |
6 | Grass-Trees | 73 | 730 |
7 | Grass-Pasture-Mowed | 2 | 28 |
8 | Hay-Windrowed | 47 | 478 |
9 | Oats | 2 | 20 |
10 | Soybean-Notill | 97 | 972 |
11 | Soybean-Mintill | 245 | 2455 |
12 | Soybean-Clean | 59 | 593 |
13 | Wheat | 20 | 205 |
14 | Woods | 126 | 1265 |
15 | Buildings-Grass-Trees-Drives | 38 | 386 |
16 | Stone-Steel-Towers | 9 | 93 |
Total | - | 1018 | 10,249 |
Class No. | Class Name | Training | Testing |
---|---|---|---|
1 | Brocoli-Green-Weeds-1 | 20 | 2009 |
2 | Brocoli-Green-Weeds-2 | 37 | 3726 |
3 | Fallow | 19 | 1976 |
4 | Fallow-Rough-Plow | 19 | 1394 |
5 | Fallow-Smooth | 26 | 2678 |
6 | Stubble | 39 | 3959 |
7 | Celery | 35 | 3579 |
8 | Grapes-Untrained | 112 | 11,271 |
9 | Soil-Vinyar-Develop | 62 | 6203 |
10 | Corn-Senesced-Green-Weeds | 32 | 3278 |
11 | Lettuce-Romaine-4wk | 10 | 1068 |
12 | Lettuce-Romaine-5wk | 19 | 1927 |
13 | Lettuce-Romaine-6wk | 9 | 916 |
14 | Lettuce-Romaine-7wk | 10 | 1070 |
15 | Vinyard-Untrained | 72 | 7268 |
16 | Vinyard-Vertical-Trellis | 18 | 1807 |
Total | - | 539 | 54,129 |
No. | Name | Training (C17) | Testing (C17) | Testing (C16) |
---|---|---|---|---|
1 | Cabbage | 16 | 821 | 1806 |
2 | Potato | 100 | 5014 | 1281 |
3 | Scallion | 98 | 4901 | 1850 |
4 | Wheat | 506 | 25,341 | 29,176 |
5 | Cole Flower | 35 | 1753 | 502 |
6 | Corn | 256 | 12,830 | 1513 |
7 | Chinese Cabbage | 1 | 79 | 342 |
8 | Peanut | 251 | 12,583 | 9842 |
9 | Broad Bean | 1 | 1066 | 1801 |
10 | Onion | 71 | 3563 | 1399 |
11 | Pit-Pond | 49 | 2487 | 469 |
12 | Greenhouse | 51 | 2567 | 82 |
13 | Poplar Tree | 76 | 3835 | 4942 |
14 | Peach Tree | 3 | 150 | 937 |
15 | Privet Tree | 40 | 2040 | 1954 |
16 | Pear Tree | 6 | 305 | – |
17 | Purple Leaf Plum | 26 | 1318 | 654 |
Total | - | 1606 | 80,653 | 58,550 |
Dataset | SAM | ||||||
---|---|---|---|---|---|---|---|
IP | 96.95 | 97.37 | 97.84 | 97.99 | 97.78 | 97.46 | |
SA | 97.42 | 97.64 | 98.13 | 98.35 | 98.57 | 98.12 | |
C17 | 94.37 | 94.82 | 95.07 | 94.96 | 94.97 | 94.63 | |
C16 | 64.15 | 67.04 | 70.02 | 68.35 | 66.37 | 63.78 |
Dataset | Patch Size | ||||||
---|---|---|---|---|---|---|---|
IP | 93.61 | 96.67 | 97.45 | 98.36 | 98.12 | 97.76 | |
SA | 92.31 | 95.84 | 96.99 | 98.27 | 98.35 | 98.29 | |
C17 | 74.46 | 82.68 | 87.44 | 92.52 | 93.60 | 91.23 | |
C16 | 48.04 | 55.65 | 64.37 | 69.73 | 68.39 | 64.81 |
Dataset | Base | SpaANet | SpecANet | CSSARN |
---|---|---|---|---|
IP | 96.95 | 98.13 | 98.38 | |
SA | 97.26 | 98.29 | 98.03 | |
C17 | 93.72 | 94.98 | 94.91 | |
C16 | 66.09 | 68.13 | 68.59 |
Class No. | 1DCNN | 3DCNN | DFFN | RSSAN | Bi-LSTM | RIAN | SF | GAHT | CSSARN |
---|---|---|---|---|---|---|---|---|---|
1 | 53.48 | 50.87 | 90.87 | 40.87 | 43.04 | 90.87 | 92.17 | 89.13 | |
2 | 83.53 | 86.88 | 98.59 | 92.32 | 88.85 | 91.75 | 94.96 | 96.62 | |
3 | 77.01 | 84.89 | 89.98 | 87.01 | 92.14 | 97.76 | 96.80 | 97.83 | |
4 | 60.59 | 66.92 | 96.12 | 63.80 | 58.23 | 91.22 | 97.55 | 97.05 | |
5 | 89.69 | 92.88 | 61.35 | 86.63 | 92.63 | 94.99 | 96.89 | 96.89 | |
6 | 96.88 | 97.78 | 99.32 | 97.64 | 95.18 | 97.89 | 99.18 | 97.40 | |
7 | 72.14 | 66.43 | 91.43 | 44.29 | 42.86 | 87.14 | 74.29 | 84.29 | |
8 | 98.45 | 98.20 | 99.75 | 98.95 | 96.28 | 97.36 | 99.79 | 98.24 | |
9 | 67.00 | 63.00 | 48.00 | 28.00 | 50.00 | 71.00 | 88.00 | 95.00 | |
10 | 84.98 | 87.90 | 97.55 | 90.37 | 89.86 | 94.07 | 93.19 | 96.71 | |
11 | 85.48 | 90.14 | 98.57 | 92.46 | 93.60 | 96.50 | 98.97 | 99.04 | |
12 | 82.97 | 77.64 | 95.38 | 77.81 | 77.44 | 91.80 | 93.19 | 96.05 | |
13 | 98.83 | 97.27 | 98.63 | 96.78 | 91.02 | 99.61 | 98.73 | 99.12 | |
14 | 96.06 | 96.76 | 97.57 | 98.01 | 99.73 | 99.73 | 99.35 | 99.37 | |
15 | 68.55 | 83.47 | 98.55 | 83.73 | 83.68 | 95.54 | 97.51 | 98.71 | |
16 | 87.10 | 87.53 | 95.48 | 89.89 | 82.80 | 89.68 | 83.44 | 86.88 | |
OA | 86.10 ± 0.79 | 89.23 ± 1.52 | 98.36 ± 0.14 | 90.90 ± 1.68 | 89.73 ± 1.19 | 95.06 ± 0.45 | 97.03 ± 0.36 | 97.66 ± 0.25 | ± |
AA | 81.42 ± 0.82 | 83.03 ± 4.16 | 97.25 ± 0.59 | 80.86 ± 2.67 | 77.65 ± 2.91 | 91.34 ± 2.36 | 92.91 ± 1.78 | 95.24 ± 1.68 | ± |
Kappa | 84.14 ± 0.90 | 87.69 ± 1.75 | 98.14 ± 0.16 | 89.61 ± 1.92 | 88.29 ± 1.35 | 94.36 ± 0.52 | 96.61 ± 0.41 | 97.34 ± 0.29 | ± |
Class No. | 1DCNN | 3DCNN | DFFN | RSSAN | Bi-LSTM | RIAN | SF | GAHT | CSSARN |
---|---|---|---|---|---|---|---|---|---|
1 | 53.48 | 9.57 | 24.78 | 4.35 | 13.91 | 94.35 | 17.83 | 31.74 | |
2 | 83.53 | 77.21 | 71.37 | 77.68 | 64.08 | 91.75 | 60.08 | 58.00 | |
3 | 77.01 | 63.83 | 53.23 | 61.25 | 44.63 | 92.14 | 50.72 | 50.07 | |
4 | 60.59 | 52.07 | 50.54 | 45.23 | 33.25 | 91.22 | 37.97 | 51.90 | |
5 | 89.69 | 79.34 | 91.39 | 74.41 | 70.89 | 92.63 | 32.80 | 64.64 | |
6 | 96.88 | 97.97 | 94.30 | 95.86 | 92.77 | 97.89 | 93.92 | 82.79 | |
7 | 72.14 | 41.43 | 64.29 | 17.86 | 25.71 | 87.14 | 1.43 | 25.00 | |
8 | 98.45 | 95.02 | 96.19 | 93.05 | 96.95 | 97.36 | 99.04 | 96.53 | |
9 | 67.00 | 0.00 | 75.00 | 0.00 | 14.00 | 50.00 | 6.00 | 7.00 | |
10 | 84.98 | 75.95 | 70.35 | 74.32 | 71.71 | 94.07 | 56.77 | 58.07 | |
11 | 85.48 | 84.64 | 82.23 | 84.41 | 77.45 | 96.50 | 97.50 | 80.51 | |
12 | 82.97 | 63.84 | 39.19 | 57.30 | 33.09 | 91.80 | 25.13 | 57.91 | |
13 | 98.83 | 92.59 | 82.93 | 81.27 | 85.85 | 99.61 | 94.44 | 71.32 | |
14 | 96.06 | 96.09 | 98.23 | 95.05 | 96.08 | 99.73 | 89.39 | 89.52 | |
15 | 68.55 | 74.15 | 74.09 | 69.53 | 42.28 | 95.54 | 59.38 | 70.78 | |
16 | 87.10 | 89.03 | 93.76 | 82.37 | 80.86 | 89.68 | 66.24 | 39.14 | |
OA | 86.10 ± 0.79 | 80.92 ± 1.93 | 77.43 ± 1.57 | 78.88 ± 1.42 | 71.11 ± 1.94 | 95.06 ± 0.45 | 67.95 ± 1.87 | 70.64 ± 1.67 | ± |
AA | 81.42 ± 0.82 | 68.30 ± 1.93 | 72.62 ± 2.57 | 63.37 ± 2.15 | 58.97 ± 3.04 | 91.34 ± 2.36 | 54.41 ± 4.51 | 58.43 ± 3.47 | ± |
Kappa | 84.14 ± 0.90 | 78.13 ± 2.23 | 74.19 ± 1.78 | 75.81 ± 1.63 | 66.91 ± 2.17 | 94.36 ± 0.52 | 63.10 ± 2.16 | 66.33 ± 1.83 | ± |
Class No. | 1DCNN | 3DCNN | DFFN | RSSAN | Bi-LSTM | RIAN | SF | GAHT | CSSARN |
---|---|---|---|---|---|---|---|---|---|
1 | 95.80 | 99.68 | 99.97 | 96.58 | 96.51 | 98.38 | 95.25 | 99.95 | |
2 | 96.62 | 99.82 | 96.87 | 99.01 | 99.94 | 99.65 | 99.94 | 99.17 | |
3 | 94.16 | 99.31 | 99.36 | 91.73 | 93.24 | 92.29 | 97.00 | 98.53 | |
4 | 99.18 | 98.62 | 99.14 | 95.88 | 97.29 | 98.90 | 98.68 | 98.08 | |
5 | 97.04 | 95.62 | 96.89 | 98.54 | 95.48 | 96.39 | 95.67 | 96.00 | |
6 | 99.75 | 99.76 | 99.12 | 99.75 | 99.46 | 99.79 | 99.91 | 99.98 | |
7 | 99.54 | 99.56 | 99.87 | 96.44 | 99.11 | 98.26 | 99.19 | 99.19 | |
8 | 83.92 | 86.41 | 92.55 | 82.65 | 85.30 | 95.96 | 87.37 | 85.22 | |
9 | 99.21 | 99.43 | 99.89 | 99.38 | 99.50 | 99.56 | 99.72 | 99.81 | |
10 | 87.41 | 91.85 | 96.29 | 94.04 | 90.05 | 92.45 | 94.97 | 94.91 | |
11 | 88.09 | 95.34 | 98.75 | 87.32 | 87.83 | 98.20 | 92.90 | 97.66 | |
12 | 99.18 | 99.36 | 97.60 | 99.25 | 99.83 | 98.85 | 99.00 | 99.95 | |
13 | 97.79 | 99.63 | 97.12 | 96.64 | 95.52 | 98.87 | 99.08 | 98.36 | |
14 | 90.56 | 96.24 | 97.93 | 89.27 | 91.68 | 93.31 | 98.54 | 98.13 | |
15 | 56.57 | 65.41 | 91.85 | 68.36 | 71.09 | 94.01 | 74.47 | 78.00 | |
16 | 91.31 | 94.73 | 90.65 | 97.47 | 96.96 | 95.06 | 99.51 | 97.01 | |
OA | 88.37 ± 0.77 | 91.27 ± 1.43 | 96.78 ± 0.55 | 89.68 ± 2.55 | 90.88 ± 0.84 | 97.04 ± 0.32 | 92.43 ± 1.12 | 93.27 ± 0.27 | ± |
AA | 92.25 ± 0.61 | 95.09 ± 0.83 | 98.17 ± 0.13 | 92.60 ± 1.83 | 93.44 ± 0.57 | 97.18 ± 0.48 | 95.22 ± 0.74 | 96.60 ± 0.19 | ± |
Kappa | 87.03 ± 0.84 | 90.26 ± 1.61 | 96.42 ± 0.61 | 88.50 ± 2.84 | 89.83 ± 0.95 | 96.70 ± 0.36 | 91.57 ± 1.26 | 92.50 ± 0.30 | ± |
Class No. | 1DCNN | 3DCNN | DFFN | RSSAN | Bi-LSTM | RIAN | SF | GAHT | CSSARN |
---|---|---|---|---|---|---|---|---|---|
1 | 95.80 | 98.80 | 99.95 | 96.19 | 95.58 | 98.38 | 95.17 | 99.95 | |
2 | 96.62 | 99.83 | 96.22 | 98.87 | 99.94 | 99.60 | 99.89 | 99.30 | |
3 | 94.16 | 98.57 | 98.34 | 88.37 | 93.19 | 92.29 | 97.19 | 97.23 | |
4 | 99.18 | 95.09 | 98.06 | 89.86 | 94.40 | 98.90 | 96.87 | 84.91 | |
5 | 97.04 | 92.05 | 91.40 | 84.56 | 94.00 | 93.49 | 88.75 | 95.03 | |
6 | 99.75 | 99.36 | 99.75 | 98.01 | 99.42 | 99.46 | 99.77 | 99.22 | |
7 | 99.54 | 99.24 | 99.56 | 95.35 | 96.02 | 98.26 | 99.14 | 99.11 | |
8 | 83.92 | 84.95 | 90.01 | 81.28 | 82.53 | 95.96 | 85.47 | 81.93 | |
9 | 99.21 | 99.64 | 99.58 | 98.10 | 99.43 | 99.56 | 99.68 | 99.35 | |
10 | 87.41 | 87.23 | 81.80 | 81.98 | 79.51 | 84.65 | 90.55 | 95.00 | |
11 | 88.09 | 79.03 | 97.68 | 78.90 | 84.55 | 98.20 | 77.42 | 81.72 | |
12 | 99.18 | 99.52 | 98.14 | 95.76 | 99.43 | 99.83 | 99.00 | 96.45 | |
13 | 97.79 | 89.67 | 94.69 | 85.31 | 95.50 | 95.52 | 95.50 | 90.83 | |
14 | 90.56 | 95.96 | 96.77 | 86.65 | 90.60 | 93.31 | 98.88 | 96.02 | |
15 | 56.57 | 62.80 | 81.37 | 65.62 | 66.35 | 94.01 | 67.72 | 76.25 | |
16 | 91.31 | 94.70 | 97.97 | 83.74 | 91.58 | 96.96 | 94.96 | 97.01 | |
OA | 88.37 ± 0.77 | 89.47 ± 1.56 | 93.35 ± 1.13 | 86.25 ± 3.79 | 88.43 ± 1.18 | 97.04 ± 0.32 | 90.14 ± 1.19 | 90.59 ± 0.31 | ± |
AA | 92.25 ± 0.61 | 92.28 ± 1.32 | 95.32 ± 1.11 | 87.87 ± 4.00 | 91.31 ± 1.03 | 97.18 ± 0.48 | 92.78 ± 1.15 | 92.69 ± 0.72 | ± |
Kappa | 87.03 ± 0.84 | 88.25 ± 1.78 | 92.59 ± 1.27 | 84.67 ± 4.23 | 87.10 ± 1.32 | 96.70 ± 0.36 | 89.00 ± 1.35 | 89.52 ± 0.36 | ± |
Method | IP | SA | ||||||
---|---|---|---|---|---|---|---|---|
1DCNN | 86.10 | 86.10 | 89.23 | 89.23 | 88.37 | 88.37 | 92.39 | 92.39 |
3DCNN | 89.23 | 80.92 | 92.49 | 91.22 | 91.27 | 89.47 | 93.08 | 92.24 |
DFFN | 98.36 | 77.43 | 98.83 | 93.57 | 96.78 | 93.35 | 98.02 | 97.46 |
RSSAN | 90.90 | 78.88 | 96.19 | 92.99 | 89.68 | 86.25 | 95.79 | 95.30 |
Bi-LSTM | 89.73 | 71.11 | 94.73 | 92.58 | 90.88 | 88.43 | 92.41 | 92.29 |
RIAN | 95.06 | 95.06 | 96.98 | 96.98 | 97.04 | 97.04 | 98.11 | 98.11 |
SF | 97.03 | 67.95 | 98.39 | 89.63 | 92.43 | 90.14 | 95.37 | 93.87 |
GAHT | 97.66 | 70.64 | 98.71 | 90.16 | 93.27 | 90.59 | 97.95 | 96.18 |
Class No. | 1DCNN | 3DCNN | DFFN | RSSAN | Bi-LSTM | RIAN | SF | GAHT | CSSARN |
---|---|---|---|---|---|---|---|---|---|
1 | 22.66 | 51.16 | 35.20 | 39.34 | 60.29 | 70.16 | 80.15 | 87.70 | |
2 | 70.72 | 84.82 | 94.00 | 84.66 | 79.18 | 90.41 | 83.99 | 98.92 | |
3 | 51.81 | 63.40 | 69.76 | 58.76 | 87.11 | 85.27 | 93.39 | 96.49 | |
4 | 94.79 | 96.77 | 98.54 | 97.38 | 95.60 | 97.30 | 97.75 | 98.02 | |
5 | 7.07 | 26.36 | 16.14 | 28.07 | 65.77 | 55.79 | 81.97 | 81.92 | |
6 | 72.10 | 74.05 | 81.53 | 76.84 | 79.80 | 85.43 | 93.79 | 93.76 | |
7 | 2.53 | 7.60 | 51.90 | 1.27 | 1.27 | 22.79 | 10.13 | 32.91 | |
8 | 64.88 | 79.79 | 91.85 | 72.77 | 72.44 | 87.01 | 86.36 | 91.94 | |
9 | 9.48 | 27.86 | 73.83 | 24.02 | 24.77 | 48.97 | 50.47 | 74.77 | |
10 | 32.56 | 47.60 | 80.61 | 52.65 | 56.13 | 79.93 | 66.91 | 84.17 | |
11 | 88.50 | 87.29 | 83.23 | 84.40 | 98.31 | 93.49 | 96.18 | 95.70 | |
12 | 48.03 | 69.81 | 96.18 | 66.54 | 76.32 | 88.66 | 90.88 | 94.00 | |
13 | 61.96 | 63.34 | 92.96 | 64.07 | 63.49 | 83.89 | 75.33 | 90.30 | |
14 | 2.67 | 38.67 | 74.00 | 24.67 | 10.00 | 30.00 | 36.67 | 91.33 | |
15 | 21.18 | 56.13 | 57.30 | 38.14 | 83.19 | 74.31 | 83.14 | 84.02 | |
16 | 6.89 | 25.25 | 19.34 | 14.43 | 55.08 | 45.25 | 32.46 | 63.61 | |
17 | 12.37 | 51.90 | 86.65 | 33.01 | 46.28 | 83.16 | 74.05 | 84.14 | |
OA | 68.84 ± 0.19 | 77.69 ± 0.95 | 94.22 ± 0.37 | 77.61 ± 0.94 | 75.72 ± 0.66 | 87.57 ± 0.52 | 86.43 ± 0.19 | 93.51 ± 0.85 | ± |
AA | 39.42 ± 0.43 | 55.99 ± 1.53 | 86.89 ± 2.32 | 51.97 ± 0.40 | 50.91 ± 0.94 | 73.04 ± 0.47 | 69.54 ± 0.98 | 83.10 ± 2.00 | ± |
Kappa | 62.58 ± 0.27 | 73.22 ± 0.90 | 93.08 ± 0.54 | 73.15 ± 0.68 | 70.94 ± 0.79 | 85.14 ± 0.32 | 83.71 ± 0.47 | 92.22 ± 1.04 | ± |
Class No. | 1DCNN | 3DCNN | DFFN | RSSAN | Bi-LSTM | RIAN | SF | GAHT | CSSARN |
---|---|---|---|---|---|---|---|---|---|
1 | 1.66 | 0.61 | 0.78 | 5.65 | 3.77 | 6.20 | 3.10 | 7.42 | |
2 | 39.66 | 70.80 | 52.38 | 78.22 | 61.75 | 36.07 | 82.75 | 48.40 | |
3 | 0.27 | 0.54 | 2.43 | 1.95 | 0.54 | 0.05 | 0.54 | 2.16 | |
4 | 92.95 | 95.99 | 94.40 | 95.11 | 92.85 | 91.48 | 95.44 | 91.74 | |
5 | 6.77 | 35.86 | 28.88 | 7.59 | 23.11 | 17.13 | 35.06 | 58.77 | |
6 | 6.87 | 12.64 | 8.20 | 10.64 | 7.80 | 9.52 | 12.56 | 10.91 | |
7 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
8 | 35.74 | 52.51 | 46.50 | 50.07 | 44.87 | 60.93 | 45.06 | 57.99 | |
9 | 4.39 | 9.22 | 6.89 | 7.61 | 16.21 | 15.71 | 2.67 | 0.28 | |
10 | 15.58 | 22.09 | 13.87 | 21.23 | 20.23 | 29.52 | 15.51 | 15.37 | |
11 | 77.40 | 47.97 | 79.96 | 70.58 | 62.26 | 61.19 | 40.51 | 72.28 | |
12 | 2.44 | 75.61 | 2.44 | 2.44 | 37.81 | 23.17 | 15.85 | 30.49 | |
13 | 36.87 | 40.41 | 60.38 | 38.02 | 40.37 | 52.65 | 36.81 | 47.55 | |
14 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | |
15 | 27.33 | 13.51 | 64.43 | 41.35 | 53.89 | 54.86 | 54.50 | 39.87 | |
16 | —— | —— | —— | —— | —— | —— | —— | —— | —— |
17 | 0.00 | 0.00 | 0.46 | 0.15 | 0.00 | 0.00 | 0.00 | 0.00 | |
OA | 58.65 ± 0.61 | 64.04 ± 2.14 | 65.28 ± 1.85 | 63.85 ± 1.01 | 62.81 ± 0.42 | 68.02 ± 1.71 | 66.53 ± 0.57 | 61.32 ± 0.87 | ± |
AA | 21.78 ± 2.53 | 29.86 ± 2.61 | 29.43 ± 0.93 | 26.63 ± 1.14 | 31.78 ± 0.36 | 35.11 ± 2.67 | 30.21 ± 2.91 | 25.13 ± 3.04 | ± |
Kappa | 43.69 ± 1.24 | 50.43 ± 1.86 | 52.37 ± 0.45 | 50.30 ± 0.82 | 49.43 ± 0.07 | 56.31 ± 0.48 | 53.60 ± 0.28 | 47.34 ± 1.22 | ± |
Algorithm | Dataset | Average | ||
---|---|---|---|---|
1D CNN | 72,196 | 74,196 | 64,300 | 70,230 |
3D CNN | 313,964 | 1,549,164 | 892,688 | 918,605 |
DFFN | 422,784 | 423,360 | 501,764 | 449,302 |
RSSAN | 117,373 | 110,736 | 60,297 | 96,135 |
Bi-LSTM | 1,577,508 | 1,635,720 | 313,254 | 1,175,494 |
RIAN | 87,384 | 72,620 | 27,454 | 62,486 |
SF | 402,553 | 398,485 | 130,359 | 310,465 |
GAHT | 972,624 | 972,624 | 954,452 | 966,566 |
CSSARN | 21,066 | 21,098 | 20,018 | 20,728 |
Algorithm | Dataset | |||||
---|---|---|---|---|---|---|
1D CNN | 15.92 | 3.93 | 8.64 | 9.42 | 26.47 | 8.30 |
3D CNN | 15.29 | 3.49 | 8.80 | 4.00 | 29.83 | 10.04 |
DFFN | 154.43 | 7.00 | 95.63 | 16.59 | 878.31 | 52.43 |
RSSAN | 75.95 | 4.95 | 49.04 | 6.68 | 137.79 | 28.48 |
Bi-LSTM | 71.99 | 5.35 | 39.88 | 11.83 | 115.38 | 22.65 |
RIAN | 71.27 | 5.18 | 34.07 | 12.28 | 85.05 | 10.79 |
SF | 314.50 | 13.28 | 151.45 | 41.93 | 443.15 | 65.71 |
GAHT | 241.13 | 6.01 | 106.83 | 18.02 | 336.99 | 58.22 |
CSSARN | 165.18 | 7.54 | 137.19 | 12.47 | 209.56 | 25.71 |
Li, S.; Chen, B.; Wang, N.; Shi, Y.; Zhang, G.; Liu, J. Constrained Spectral–Spatial Attention Residual Network and New Cross-Scene Dataset for Hyperspectral Classification. Electronics 2024, 13, 2540. https://doi.org/10.3390/electronics13132540
Sampling methods are crucial for conducting reliable research. In this article, you will learn about the types, techniques and examples of sampling methods, and how to choose the best one for your study. Scribbr also offers free tools and guides for other aspects of academic writing, such as citation, bibliography, and fallacy.
Understand sampling methods in research, from simple random sampling to stratified, systematic, and cluster sampling. Learn how these sampling techniques boost data accuracy and representation, ensuring robust, reliable results. Check this article to learn about the different sampling method techniques, types and examples.
The main methodological issue that influences the generalizability of clinical research findings is the sampling method. In this educational article, we are explaining the different sampling ...
1. Simple random sampling. In a simple random sample, every member of the population has an equal chance of being selected. Your sampling frame should include the whole population. To conduct this type of sampling, you can use tools like random number generators or other techniques that are based entirely on chance.
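The snippet above translates directly into a few lines of Python. The population of 500 people and the sample of 50 come from the example; the numeric ID range and the fixed seed are assumptions made only so the sketch is self-contained and reproducible:

```python
import random

# Sampling frame: a hypothetical list of 500 participant IDs (an assumption
# for illustration; any list of population members works the same way).
population = list(range(1, 501))

random.seed(42)  # fixed seed only so this sketch is reproducible
sample = random.sample(population, k=50)  # every member is equally likely

assert len(sample) == 50
assert len(set(sample)) == 50  # drawn without replacement: no repeats
```

Because `random.sample` draws without replacement, no participant can be selected twice, which matches the usual definition of a simple random sample from a frame.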
Sampling methods refer to the techniques used to select a subset of individuals or units from a larger population for the purpose of conducting statistical analysis or research. Sampling is an essential part of research because it allows researchers to draw conclusions about a population without having to collect data from every member of ...
We could choose a sampling method based on whether we want to account for sampling bias; a random sampling method is often preferred over a non-random method for this reason. Random sampling examples include: simple, systematic, stratified, and cluster sampling. Non-random sampling methods are liable to bias, and common examples include ...
A purposive sampling method was used to select participants who could provide in-depth information about their experiences (Setia, 2017). It is described as the careful selection of a participant ...
Simple random sampling. Simple random sampling involves selecting participants in a completely random fashion, where each participant has an equal chance of being selected. Basically, this sampling method is the equivalent of pulling names out of a hat, except that you can do it digitally. For example, if you had a list of 500 people, you could use a random number generator to draw a list of 50 ...
Step 1: Define your population. Like other methods of sampling, you must decide upon the population that you are studying. In systematic sampling, you have two choices for data collection: You can select your sample ahead of time from a list and then approach the selected subjects to collect data, or.
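For the select-ahead-of-time variant, the fixed-interval idea can be sketched in a few lines. This is a minimal, simplified sketch: the frame of 100 units, the sample size of 10, and the convention interval = population size // sample size are all illustrative assumptions, not a prescription from the text:

```python
import random

def systematic_sample(frame, n):
    """Simplified systematic draw: choose a random start, then take every
    k-th unit, with interval k = len(frame) // n (a common convention)."""
    k = len(frame) // n
    start = random.randrange(k)   # random starting point in [0, k)
    return frame[start::k][:n]

random.seed(7)
frame = list(range(1, 101))       # hypothetical sampling frame of 100 units
sample = systematic_sample(frame, n=10)

assert len(sample) == 10
# successive selections are a fixed interval k = 10 apart
assert all(b - a == 10 for a, b in zip(sample, sample[1:]))
```

The fixed interval is what distinguishes systematic from simple random sampling: only the starting point is random, and every later selection is determined by it.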
Evaluate your goals against time and budget. List the two or three most obvious sampling methods that will work for you. Confirm the availability of your resources (researchers, computer time, etc.) Compare each of the possible methods with your goals, accuracy, precision, resource, time, and cost constraints.
Abstract. Knowledge of sampling methods is essential to design quality research. Critical questions are provided to help researchers choose a sampling method. This article reviews probability and non-probability sampling methods, lists and defines specific sampling techniques, and provides pros and cons for consideration.
Sampling is a critical element of research design. Different methods can be used for sample selection to ensure that members of the study population reflect both the source and target populations, including probability and non-probability sampling. Power and sample size are used to determine the number of subjects needed to answer the research ...
The method by which the researcher selects the sample is the 'sampling method'. There are essentially two types of sampling methods: 1) probability sampling, based on chance events (such as random numbers, flipping a coin, etc.); and 2) non-probability sampling, based on the researcher's choice and on the population that is accessible and available.
This paper presents the steps to go through to conduct sampling. Furthermore, as there are different types of sampling techniques/methods, the researcher needs to understand the differences in order to select the proper sampling method for the research. In this regard, this paper also presents the different types of sampling techniques and methods.
To do stratified sampling, you would: a. Divide the toys into three strata (subgroups) based on their type: cars, dolls, and puzzles. b. Calculate the proportion of each stratum in the sample. Since you want a sample of 20 toys, and the box has 100 toys, you'll select 20% of each stratum: Cars: 50 × 20% = 10 cars.
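The proportional allocation above can be sketched in Python. The snippet only gives the cars count (50 of 100 toys), so the 30/20 split between dolls and puzzles below is an assumption for illustration:

```python
import random

# Hypothetical box of 100 toys: 50 cars, plus an assumed 30 dolls and 20 puzzles.
strata = {
    "car":    [f"car_{i}" for i in range(50)],
    "doll":   [f"doll_{i}" for i in range(30)],
    "puzzle": [f"puzzle_{i}" for i in range(20)],
}

sampling_fraction = 0.20  # we want 20 of the 100 toys
random.seed(1)

sample = []
for name, members in strata.items():
    n_stratum = round(len(members) * sampling_fraction)  # 10 cars, 6 dolls, 4 puzzles
    sample.extend(random.sample(members, n_stratum))

print(len(sample))  # 20
```

Because each stratum is sampled in proportion to its share of the population, the sample mirrors the population's composition by toy type, which is the point of stratification.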
Convenience sampling. If you're using this method, you're selecting participants based on their easy accessibility or proximity to you (e.g., your students, or the patients at the hospital where you work). This method is convenient and budget-friendly but could introduce bias and compromise sample representativeness.
Sampling methods in psychology refer to strategies used to select a subset of individuals (a sample) from a larger population, to study and draw inferences about the entire population. Common methods include random sampling, stratified sampling, cluster sampling, and convenience sampling. Proper sampling ensures representative, generalizable, and valid research results.
Sampling types. There are two major categories of sampling methods (Figure 1): 1) probability sampling methods, where all subjects in the target population have equal chances to be selected in the sample [1, 2]; and 2) non-probability sampling methods, where the sample population is selected in a non-systematic process that does not guarantee equal selection chances for every subject.
This study employed a non-probability sampling approach, as is common in business research, because its research objectives and inquiries are best addressed through qualitative research.
Part 2 of the series focused on context, research questions and design of qualitative research. In this paper, Part 3, we address frequently asked questions (FAQs) about sampling, data collection and analysis. A sampling plan is a formal plan specifying a sampling method, a sample size, and a procedure for recruiting participants (Box 1).
In order to answer the research questions, it is unlikely that the researcher will be able to collect data from all cases; thus, there is a need to select a sample.
A Manual for Selecting Sampling Techniques in Research, by Mohsin Hassan Alvi, describes the various types of sampling methodologies used in social science research in an easy and understandable way, covering their characteristics, benefits, and crucial issues/drawbacks.
Sampling strategies vary widely across different disciplines and research areas, and from study to study. There are two major types of sampling - probability and non-probability sampling. Probability sampling, also known as random sampling, is a kind of sample selection where randomisation is used instead of deliberate choice.