Neutrino detector
A neutrino detector is a physics apparatus designed to study neutrinos. Because neutrinos interact only weakly with other particles of matter, neutrino detectors must be very large to detect a significant number of neutrinos. Neutrino detectors are often built underground to isolate the detector from cosmic rays and other background radiation. The field of neutrino astronomy is still very much in its infancy; the only confirmed extraterrestrial sources are the Sun and the supernova SN 1987A in the nearby Large Magellanic Cloud.
Neutrino astronomy
Neutrino astronomy is the branch of astronomy that observes astronomical objects with neutrino detectors in special observatories. Neutrinos are created in certain types of radioactive decay, in nuclear reactions such as those that take place in the Sun or in high-energy astrophysical phenomena, in nuclear reactors, or when cosmic rays hit atoms in the atmosphere. Neutrinos rarely interact with matter, meaning that, unlike photons, they are unlikely to scatter along their trajectory.
Neutrino
A neutrino (/njuːˈtriːnoʊ/; denoted by the Greek letter ν) is a fermion (an elementary particle with spin 1/2) that interacts only via the weak interaction and gravity. The neutrino is so named because it is electrically neutral and because its rest mass is so small (-ino) that it was long thought to be zero. The rest mass of the neutrino is much smaller than that of any other known elementary particle, excluding massless particles.
Solar neutrino problem
The solar neutrino problem concerned a large discrepancy between the flux of solar neutrinos as predicted from the Sun's luminosity and as measured directly. The discrepancy was first observed in the mid-1960s and was resolved around 2002. The flux of neutrinos at Earth is several tens of billions per square centimetre per second, mostly from the Sun's core. They are nevertheless hard to detect, because they interact very weakly with matter, most of them traversing the whole Earth unimpeded.
Neutrino oscillation
Neutrino oscillation is a quantum mechanical phenomenon in which a neutrino created with a specific lepton family number ("lepton flavor": electron, muon, or tau) can later be measured to have a different lepton family number. The probability of measuring a particular flavor for a neutrino varies among the three known states as it propagates through space. First predicted by Bruno Pontecorvo in 1957, neutrino oscillation has since been observed by a multitude of experiments in several different contexts.
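As a rough illustration, in the commonly used two-flavor vacuum approximation (a simplification of the full three-flavor treatment) the survival probability is P = 1 − sin²(2θ)·sin²(1.27·Δm²[eV²]·L[km]/E[GeV]). A minimal sketch in Python, with illustrative mixing parameters roughly in the range of atmospheric-oscillation fits:

    import numpy as np

    def survival_probability(L_km, E_GeV, sin2_2theta=0.95, dm2_eV2=2.5e-3):
        # Two-flavor vacuum survival probability:
        # P = 1 - sin^2(2*theta) * sin^2(1.27 * dm^2 * L / E)
        # (dm^2 in eV^2, L in km, E in GeV; parameter values are illustrative).
        return 1.0 - sin2_2theta * np.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

    # A 1 GeV muon neutrino after 500 km of travel:
    print(survival_probability(500.0, 1.0))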
Sampling (statistics)
In statistics, quality assurance, and survey methodology, sampling is the selection of a subset or a statistical sample (termed sample for short) of individuals from within a statistical population to estimate characteristics of the whole population. Statisticians attempt to collect samples that are representative of the population. Sampling has lower costs and faster data collection compared to recording data from the entire population, and thus, it can provide insights in cases where it is infeasible to measure an entire population.
Simple random sample
In statistics, a simple random sample (or SRS) is a subset of individuals (a sample) chosen from a larger set (a population), with each individual chosen randomly and with the same probability. It is a process of selecting a sample in a random way. In SRS, each subset of k individuals has the same probability of being chosen for the sample as any other subset of k individuals. A simple random sample is an unbiased sampling technique. Simple random sampling is a basic type of sampling and can be a component of other more complex sampling methods.
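A minimal sketch in Python, assuming the population can be enumerated as a list (the population here is made up): the standard library's random.sample draws without replacement, so every size-k subset is equally likely, which is exactly the SRS property.

    import random

    population = list(range(1, 101))          # a population of 100 labelled units
    sample = random.sample(population, k=10)  # every 10-element subset equally likely
    print(sample)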
Supernova neutrinos
Supernova neutrinos are weakly interacting elementary particles produced during a core-collapse supernova explosion. A massive star collapses at the end of its life, emitting on the order of 10^58 neutrinos and antineutrinos in all lepton flavors. The luminosities of the different neutrino and antineutrino species are roughly the same. They carry away about 99% of the gravitational binding energy of the dying star as a burst lasting tens of seconds. Typical supernova neutrino energies are on the order of 10 MeV.
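A back-of-the-envelope consistency check in Python, assuming a canonical released binding energy of about 3×10^46 J (a figure not stated above) together with the counts quoted in the text:

    E_total_J = 3e46       # assumed gravitational binding energy released
    f_neutrino = 0.99      # fraction carried away by neutrinos (from the text)
    N_neutrinos = 1e58     # number of neutrinos emitted (from the text)
    J_PER_MEV = 1.602e-13

    mean_E_MeV = E_total_J * f_neutrino / N_neutrinos / J_PER_MEV
    print(f"mean energy ~ {mean_E_MeV:.0f} MeV")  # ~19 MeV, i.e. order 10 MeV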
Stratified sampling
In statistics, stratified sampling is a method of sampling from a population which can be partitioned into subpopulations. In statistical surveys, when subpopulations within an overall population vary, it could be advantageous to sample each subpopulation (stratum) independently. Stratification is the process of dividing members of the population into homogeneous subgroups before sampling. The strata should define a partition of the population.
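A minimal sketch of proportionate stratified sampling with pandas (the DataFrame and column names are hypothetical): each stratum is sampled independently at the same rate, so every subgroup is represented in proportion to its size.

    import pandas as pd

    df = pd.DataFrame({
        "region": ["north"] * 60 + ["south"] * 40,  # two strata of known sizes
        "income": range(100),                       # the variable of interest
    })

    # Draw 10% independently from each stratum.
    sample = df.groupby("region").sample(frac=0.10, random_state=0)
    print(sample["region"].value_counts())  # 6 north, 4 south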
Cosmic neutrino background
The cosmic neutrino background (CNB or CνB) is the universe's background particle radiation composed of neutrinos. They are sometimes known as relic neutrinos. The CνB is a relic of the Big Bang; while the cosmic microwave background radiation (CMB) dates from when the universe was 379,000 years old, the CνB decoupled (separated) from matter when the universe was just one second old. It is estimated that today the CνB has a temperature of roughly 1.95 K. As neutrinos rarely interact with matter, these neutrinos still exist today.
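The quoted temperature follows from the present CMB temperature via the standard relation T_ν = (4/11)^(1/3)·T_γ, which accounts for electron-positron annihilation heating the photons after the neutrinos had already decoupled. A one-line check in Python:

    T_cmb = 2.725                        # present CMB temperature in kelvin
    T_nu = (4 / 11) ** (1 / 3) * T_cmb   # photons were heated by e+e- annihilation,
                                         # neutrinos were not
    print(f"{T_nu:.2f} K")               # ~1.95 K, matching the figure above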
Decision tree
A decision tree is a decision support hierarchical model that uses a tree-like model of decisions and their possible consequences, including chance event outcomes, resource costs, and utility. It is one way to display an algorithm that only contains conditional control statements. Decision trees are commonly used in operations research, specifically in decision analysis, to help identify a strategy most likely to reach a goal, but are also a popular tool in machine learning.
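Because a decision tree is equivalent to nested conditional statements, a toy tree can be written directly as code; this sketch is purely illustrative, with invented attributes and thresholds:

    def decide(outlook: str, humidity: float) -> str:
        # A hand-written two-level decision tree for a toy
        # "play outside?" decision; attributes and thresholds are made up.
        if outlook == "sunny":
            if humidity > 0.7:   # attribute test at an internal node
                return "stay in"
            return "play"
        if outlook == "rain":
            return "stay in"
        return "play"            # e.g. overcast

    print(decide("sunny", 0.8))  # stay in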
Convenience sampling
Convenience sampling (also known as grab sampling, accidental sampling, or opportunity sampling) is a type of non-probability sampling that involves the sample being drawn from the part of the population that is close to hand. This type of sampling is most useful for pilot testing. Convenience sampling is not often recommended for research due to the possibility of sampling error and lack of representation of the population. It can nevertheless be useful depending on the situation, and in some situations it is the only possible option.
Decision tree learning
Decision tree learning is a supervised learning approach used in statistics, data mining and machine learning. In this formalism, a classification or regression decision tree is used as a predictive model to draw conclusions about a set of observations. Tree models where the target variable can take a discrete set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels.
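A minimal sketch of fitting a classification tree with scikit-learn (assuming scikit-learn is available; the depth limit is an arbitrary choice for readability):

    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_iris()
    # Leaves hold class labels; each root-to-leaf path is a
    # conjunction of feature tests, as described above.
    clf = DecisionTreeClassifier(max_depth=2, random_state=0)
    clf.fit(data.data, data.target)
    print(export_text(clf, feature_names=list(data.feature_names)))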
Gradient boosting
Gradient boosting is a machine learning technique used in regression and classification tasks, among others. It gives a prediction model in the form of an ensemble of weak prediction models, i.e., models that make very few assumptions about the data, which are typically simple decision trees. When a decision tree is the weak learner, the resulting algorithm is called gradient-boosted trees; it usually outperforms random forest.
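A minimal regression sketch with scikit-learn (synthetic data, arbitrary hyperparameters): each boosting stage fits a small tree to the residual errors of the ensemble built so far, which for squared loss is exactly the negative gradient.

    from sklearn.datasets import make_regression
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # 200 shallow trees, each correcting the previous ensemble's residuals.
    model = GradientBoostingRegressor(n_estimators=200, max_depth=3,
                                      learning_rate=0.1, random_state=0)
    model.fit(X_tr, y_tr)
    print(f"R^2 on held-out data: {model.score(X_te, y_te):.3f}")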
Sample mean and covariance
The sample mean (sample average) or empirical mean (empirical average), and the sample covariance or empirical covariance are statistics computed from a sample of data on one or more random variables. The sample mean is the average value (or mean value) of a sample of numbers taken from a larger population of numbers, where "population" indicates not number of people but the entirety of relevant data, whether collected or not. A sample of 40 companies' sales from the Fortune 500 might be used for convenience instead of looking at the population, all 500 companies' sales.
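A minimal NumPy sketch computing both statistics for a small made-up data set (rows are observations, columns are variables):

    import numpy as np

    X = np.array([[2.0, 10.0],
                  [4.0, 12.0],
                  [6.0, 11.0],
                  [8.0, 15.0],
                  [10.0, 14.0]])

    x_bar = X.mean(axis=0)       # sample mean of each variable
    Q = np.cov(X, rowvar=False)  # sample covariance, 1/(n-1) normalisation
    print(x_bar)
    print(Q)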
Decision stump
A decision stump is a machine learning model consisting of a one-level decision tree. That is, it is a decision tree with one internal node (the root) which is immediately connected to the terminal nodes (its leaves). A decision stump makes a prediction based on the value of just a single input feature. Sometimes they are also called 1-rules. Depending on the type of the input feature, several variations are possible.
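In scikit-learn a stump is simply a depth-1 tree; a minimal sketch (the dataset choice is arbitrary):

    from sklearn.datasets import load_breast_cancer
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)

    # One internal node (the root) and two leaves.
    stump = DecisionTreeClassifier(max_depth=1).fit(X, y)
    print("feature index used:", stump.tree_.feature[0])
    print("threshold:", stump.tree_.threshold[0])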
Sample size determination
Sample size determination is the act of choosing the number of observations or replicates to include in a statistical sample. The sample size is an important feature of any empirical study in which the goal is to make inferences about a population from a sample. In practice, the sample size used in a study is usually determined based on the cost, time, or convenience of collecting the data, and the need for it to offer sufficient statistical power.
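One common textbook calculation estimates a population proportion to within a margin of error e at 95% confidence; using p = 0.5 is the conservative worst case when the true proportion is unknown (these defaults are standard conventions, not anything stated above):

    import math

    def sample_size_proportion(margin_of_error, z=1.96, p=0.5):
        # n = z^2 * p * (1 - p) / e^2, rounded up to a whole observation.
        return math.ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

    print(sample_size_proportion(0.03))  # 1068 respondents for +/-3% at 95%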
Sampling bias
In statistics, sampling bias is a bias in which a sample is collected in such a way that some members of the intended population have a lower or higher sampling probability than others. It results in a biased sample of a population (or of non-human factors), one in which not all individuals or instances were equally likely to have been selected. If this is not accounted for, results can be erroneously attributed to the phenomenon under study rather than to the method of sampling.
Selection bias
Selection bias is the bias introduced by the selection of individuals, groups, or data for analysis in such a way that proper randomization is not achieved, thereby failing to ensure that the sample obtained is representative of the population intended to be analyzed. It is sometimes referred to as the selection effect. The phrase "selection bias" most often refers to the distortion of a statistical analysis, resulting from the method of collecting samples. If the selection bias is not taken into account, then some conclusions of the study may be false.
Boosting (machine learning)
In machine learning, boosting is an ensemble meta-algorithm used primarily to reduce bias (and also variance) in supervised learning, and a family of machine learning algorithms that convert weak learners to strong ones. Boosting is based on the question posed by Kearns and Valiant (1988, 1989): "Can a set of weak learners create a single strong learner?" A weak learner is defined to be a classifier that is only slightly correlated with the true classification (it can label examples better than random guessing).
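A minimal sketch with scikit-learn's AdaBoost, one well-known boosting algorithm (the data here are synthetic): its default weak learner is a decision stump, and each round re-weights the examples the current ensemble misclassifies.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier

    X, y = make_classification(n_samples=1000, random_state=0)

    # 100 boosting rounds; the default base learner is a depth-1 tree (a stump).
    boosted = AdaBoostClassifier(n_estimators=100, random_state=0)
    boosted.fit(X, y)
    print(f"training accuracy: {boosted.score(X, y):.3f}")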