Alcohol dependence is a former (DSM-IV and ICD-10) psychiatric diagnosis in which an individual is physically or psychologically dependent upon alcohol (chemically known as ethanol). In 2013, it was reclassified as alcohol use disorder in DSM-5, which combined alcohol dependence and alcohol abuse into this diagnosis.
Substance dependence, also known as drug dependence, is a biopsychological state in which an individual's functioning depends on continued consumption of a psychoactive substance: repeated use produces an adaptive state within the individual, cessation then triggers withdrawal, and withdrawal in turn drives re-consumption of the drug. Drug addiction, a distinct concept from substance dependence, is defined as compulsive, out-of-control drug use despite negative consequences.
Long-range dependence (LRD), also called long memory or long-range persistence, is a phenomenon that may arise in the analysis of spatial or time series data. It relates to the rate of decay of statistical dependence of two points with increasing time interval or spatial distance between the points. A phenomenon is usually considered to have long-range dependence if the dependence decays more slowly than an exponential decay, typically a power-like decay. LRD is often related to self-similar processes or fields.
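The distinction between the two decay rates can be made concrete numerically. Below is a minimal Python sketch (standard library only) using two hypothetical autocorrelation functions, not tied to any particular process: an exponentially decaying one, whose sum over lags converges, and a power-law one, whose partial sums keep growing. Non-summable autocorrelations are a common formal criterion for long-range dependence.

```python
# Hypothetical autocorrelation functions (illustrative only):
def short_range(k):
    return 0.5 ** k     # exponential decay: summable

def long_range(k):
    return k ** -0.5    # power-law decay: non-summable -> LRD

def partial_sum(rho, n):
    """Sum of autocorrelations rho(1), ..., rho(n)."""
    return sum(rho(k) for k in range(1, n + 1))

# The exponential sum converges (to 1 here), while the power-law
# partial sums grow without bound as the number of lags increases.
exp_sum = partial_sum(short_range, 10_000)
pow_sum_small = partial_sum(long_range, 10_000)
pow_sum_large = partial_sum(long_range, 1_000_000)
```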
Cocaine dependence is a neurological disorder characterized by withdrawal symptoms upon cessation of cocaine use. It often coincides with cocaine addiction, a biopsychosocial disorder characterized by persistent use of cocaine and/or crack despite substantial harm and adverse consequences. The Diagnostic and Statistical Manual of Mental Disorders (5th ed., abbreviated DSM-5) classifies problematic cocaine use as a "stimulant use disorder"; the International Classification of Diseases (11th rev., ICD-11) classifies it under disorders due to use of cocaine.
In probability theory and statistics, a Gaussian process is a stochastic process (a collection of random variables indexed by time or space) such that every finite collection of those random variables has a multivariate normal distribution, i.e. every finite linear combination of them is normally distributed. The distribution of a Gaussian process is the joint distribution of all those (infinitely many) random variables, and as such it is a distribution over functions with a continuous domain, e.g. time or space.
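The finite-dimensional definition suggests a direct way to draw a sample path: pick input points, build a covariance matrix from a kernel, and transform i.i.d. standard normals through its Cholesky factor. A standard-library-only Python sketch, assuming a zero mean and a squared-exponential (RBF) covariance, both hypothetical modeling choices:

```python
import math
import random

def rbf_kernel(xs, length=1.0):
    """Covariance matrix K[i][j] = exp(-(x_i - x_j)^2 / (2 * length^2))."""
    return [[math.exp(-(a - b) ** 2 / (2 * length ** 2)) for b in xs] for a in xs]

def cholesky(K, jitter=1e-6):
    """Lower-triangular L with L L^T ~= K; jitter stabilizes a near-singular K."""
    n = len(K)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(K[i][i] + jitter - s)
            else:
                L[i][j] = (K[i][j] - s) / L[j][j]
    return L

def sample_gp(xs, seed=0):
    """One draw from a zero-mean GP with RBF covariance, evaluated at xs."""
    rng = random.Random(seed)
    L = cholesky(rbf_kernel(xs))
    z = [rng.gauss(0, 1) for _ in xs]
    return [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(len(xs))]

xs = [0.5 * i for i in range(10)]
f = sample_gp(xs)   # nearby inputs get highly correlated, hence smooth, values
```

The jitter term is a standard numerical safeguard: the RBF matrix is positive definite in theory but can be nearly singular in floating point.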
Physical dependence is a physical condition caused by chronic use of a tolerance-forming drug, in which abrupt or gradual drug withdrawal causes unpleasant physical symptoms. Physical dependence can develop from low-dose therapeutic use of certain medications such as benzodiazepines, opioids, antiepileptics and antidepressants, as well as from recreational misuse of drugs such as alcohol, opioids and benzodiazepines. Higher doses, longer durations of use, and an earlier age of first use predict worse physical dependence and thus more severe withdrawal syndromes.
In mathematics and statistics, a stationary process (or a strict/strictly stationary process or strong/strongly stationary process) is a stochastic process whose unconditional joint probability distribution does not change when shifted in time. Consequently, parameters such as mean and variance also do not change over time. If you draw a line through the middle of a stationary process it should be flat; the series may have 'seasonal' cycles around that line, but overall it does not trend up or down.
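One way to see the definition at work is to compare summary statistics across time windows for a series that is stationary by construction (i.i.d. noise) and one that is not (its running sum, a random walk). A standard-library-only Python sketch with hypothetical simulated data:

```python
import random
import statistics

rng = random.Random(42)
noise = [rng.gauss(0, 1) for _ in range(20_000)]   # i.i.d. noise: stationary

walk, total = [], 0.0
for e in noise:
    total += e
    walk.append(total)                             # running sum: not stationary

half = len(noise) // 2
# For the stationary series, mean and variance barely change across halves.
m1, m2 = statistics.mean(noise[:half]), statistics.mean(noise[half:])
v1, v2 = statistics.variance(noise[:half]), statistics.variance(noise[half:])
# For the random walk, the within-window variance is large and window-dependent.
w1, w2 = statistics.variance(walk[:half]), statistics.variance(walk[half:])
```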
In probability theory, independent increments are a property of stochastic processes and random measures. Most of the time, a process or random measure has independent increments by definition, which underlines their importance. Some of the stochastic processes that by definition possess independent increments are the Wiener process, all Lévy processes, all additive processes and the Poisson point process. Let (X_t)_{t ∈ T} be a stochastic process; in most cases the index set T is the natural numbers or the non-negative reals.
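The property can be checked empirically on the simplest example: a Gaussian random walk, the discrete-time analog of the Wiener process. Increments taken over disjoint blocks are independent by construction, so their sample correlation should be near zero. A standard-library-only Python sketch with hypothetical parameters:

```python
import random

rng = random.Random(7)
steps = [rng.gauss(0, 1) for _ in range(40_000)]
path = [0.0]
for s in steps:
    path.append(path[-1] + s)

m = 20  # block length; increments over disjoint blocks are independent
inc = [path[i + m] - path[i] for i in range(0, len(path) - m, m)]

def corr(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

c = corr(inc[:-1], inc[1:])  # consecutive disjoint increments: near zero
```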
In probability theory, a Lévy process, named after the French mathematician Paul Lévy, is a stochastic process with independent, stationary increments: it represents the motion of a point whose successive displacements are random, in which displacements in pairwise disjoint time intervals are independent, and displacements in different time intervals of the same length have identical probability distributions. A Lévy process may thus be viewed as the continuous-time analog of a random walk.
In mathematics, a time series is a series of data points indexed (or listed or graphed) in time order. Most commonly, a time series is a sequence taken at successive equally spaced points in time. Thus it is a sequence of discrete-time data. Examples of time series are heights of ocean tides, counts of sunspots, and the daily closing value of the Dow Jones Industrial Average. A time series is very frequently plotted via a run chart (which is a temporal line chart).
Pareto efficiency or Pareto optimality is a situation where no action or allocation is available that makes one individual better off without making another worse off. The concept is named after Vilfredo Pareto (1848–1923), an Italian civil engineer and economist who used it in his studies of economic efficiency and income distribution. A closely related concept is the Pareto improvement: given an initial situation, a Pareto improvement is a new situation in which some agents gain and no agents lose.
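Pareto dominance and the Pareto-efficient set are easy to state in code. A minimal Python sketch over hypothetical two-agent utility profiles (tuples of utilities, one entry per agent):

```python
def pareto_dominates(a, b):
    """True if profile a makes no one worse off than b and someone strictly better off."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(profiles):
    """Profiles not Pareto-dominated by any other: the Pareto-efficient set."""
    return [p for p in profiles if not any(pareto_dominates(q, p) for q in profiles)]

# Hypothetical utility profiles for two agents:
profiles = [(3, 1), (2, 2), (1, 3), (1, 1)]
# Moving from (1, 1) to (2, 2) is a Pareto improvement: both agents gain.
```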
In the statistical analysis of time series, a trend-stationary process is a stochastic process from which an underlying trend (function solely of time) can be removed, leaving a stationary process. The trend does not have to be linear. Conversely, if the process requires differencing to be made stationary, then it is called difference stationary and possesses one or more unit roots. Those two concepts may sometimes be confused, but while they share many properties, they are different in many aspects.
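A minimal illustration of trend-stationarity: fit the underlying linear trend by ordinary least squares and subtract it, leaving (in this constructed example) a stationary residual series. A standard-library-only Python sketch with hypothetical data; a real series would call for model checks this omits:

```python
import random

def detrend(ys):
    """Fit y = a + b*t by least squares and return the residuals."""
    n = len(ys)
    tbar = (n - 1) / 2
    ybar = sum(ys) / n
    sxx = sum((t - tbar) ** 2 for t in range(n))
    sxy = sum((t - tbar) * (y - ybar) for t, y in enumerate(ys))
    b = sxy / sxx
    a = ybar - b * tbar
    return [y - (a + b * t) for t, y in enumerate(ys)]

rng = random.Random(3)
series = [0.5 * t + rng.gauss(0, 1) for t in range(500)]  # trend + stationary noise
resid = detrend(series)  # residuals: the underlying stationary part
```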
In probability, statistics and related fields, a Poisson point process is a type of random mathematical object that consists of points randomly located on a mathematical space with the essential feature that the points occur independently of one another. The Poisson point process is often called simply the Poisson process, but it is also called a Poisson random measure, Poisson random point field or Poisson point field.
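On an interval, a homogeneous Poisson point process can be simulated in two steps: draw a Poisson-distributed point count, then place that many points uniformly and independently. A standard-library-only Python sketch with a hypothetical rate and window; the Poisson draw uses Knuth's multiplication method, which is fine for small means:

```python
import math
import random

def sample_poisson(lam, rng):
    """Knuth's multiplication method for a Poisson(lam) draw (small lam only)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    return k - 1

def poisson_process(rate, t_max, rng):
    """Homogeneous Poisson point process on [0, t_max]."""
    n = sample_poisson(rate * t_max, rng)
    return sorted(rng.uniform(0, t_max) for _ in range(n))

rng = random.Random(11)
counts = [len(poisson_process(1.5, 2.0, rng)) for _ in range(4000)]
# the average count should be close to rate * t_max = 3.0
```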
In metrology, measurement uncertainty is the expression of the statistical dispersion of the values attributed to a measured quantity. All measurements are subject to uncertainty, and a measurement result is complete only when it is accompanied by a statement of the associated uncertainty, such as the standard deviation. By international agreement, this uncertainty has a probabilistic basis and reflects incomplete knowledge of the quantity value. It is a non-negative parameter.
In probability theory and related fields, a stochastic (/stəˈkæstɪk/) or random process is a mathematical object usually defined as a sequence of random variables, where the index of the sequence has the interpretation of time. Stochastic processes are widely used as mathematical models of systems and phenomena that appear to vary in a random manner. Examples include the growth of a bacterial population, an electrical current fluctuating due to thermal noise, or the movement of a gas molecule.
Uncertainty quantification (UQ) is the science of quantitative characterization and estimation of uncertainties in both computational and real-world applications. It tries to determine how likely certain outcomes are if some aspects of the system are not exactly known. An example would be predicting the acceleration of a human body in a head-on crash with another car: even if the speed were exactly known, small differences in the manufacturing of individual cars, in how tightly every bolt has been tightened, and so on would lead to different outcomes that can only be predicted in a statistical sense.
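The crash example generalizes: propagate input uncertainty through a model by Monte Carlo sampling and summarize the spread of the outputs. A standard-library-only Python sketch with a hypothetical model (idealized braking distance) and a hypothetical input distribution:

```python
import random
import statistics

def braking_distance(speed, friction, g=9.81):
    """Idealized braking distance (m) for initial speed (m/s) and friction coefficient."""
    return speed ** 2 / (2 * friction * g)

rng = random.Random(1)
# The friction coefficient is not exactly known: model it as N(0.7, 0.05).
samples = [braking_distance(30.0, rng.gauss(0.7, 0.05)) for _ in range(10_000)]
mean_d = statistics.mean(samples)
sd_d = statistics.stdev(samples)
# The output spread quantifies how the input uncertainty propagates to the prediction.
```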
In probability theory and statistics, a copula is a multivariate cumulative distribution function for which the marginal probability distribution of each variable is uniform on the interval [0, 1]. Copulas are used to describe/model the dependence (inter-correlation) between random variables. Their name, introduced by applied mathematician Abe Sklar in 1959, comes from the Latin for "link" or "tie", similar but unrelated to grammatical copulas in linguistics.
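A Gaussian copula makes the idea concrete: generate correlated standard normals, then push each through the standard normal CDF so that the margins become uniform on [0, 1] while the dependence survives. A standard-library-only Python sketch with a hypothetical correlation of 0.8:

```python
import math
import random

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gaussian_copula_pair(rho, rng):
    """One draw (u, v) from a bivariate Gaussian copula with correlation rho."""
    z1 = rng.gauss(0, 1)
    z2 = rho * z1 + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1)
    return norm_cdf(z1), norm_cdf(z2)

rng = random.Random(5)
pairs = [gaussian_copula_pair(0.8, rng) for _ in range(10_000)]
us = [u for u, _ in pairs]
vs = [v for _, v in pairs]

# Each margin is uniform on [0, 1], yet the pair remains strongly dependent:
n = len(pairs)
mu_u, mu_v = sum(us) / n, sum(vs) / n
cov = sum((u - mu_u) * (v - mu_v) for u, v in pairs) / n
var_u = sum((u - mu_u) ** 2 for u in us) / n
var_v = sum((v - mu_v) ** 2 for v in vs) / n
corr_uv = cov / math.sqrt(var_u * var_v)
```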
Uncertainty refers to epistemic situations involving imperfect or unknown information. It applies to predictions of future events, to physical measurements that are already made, or to the unknown. Uncertainty arises in partially observable or stochastic environments, as well as due to ignorance, indolence, or both. It arises in any number of fields, including insurance, philosophy, physics, statistics, economics, finance, medicine, psychology, sociology, engineering, metrology, meteorology, ecology and information science.
Alcoholism is, broadly, any drinking of alcohol that results in significant mental or physical health problems. Because there is disagreement on the definition of the word alcoholism, it is not a recognized diagnostic entity, and the use of alcoholism terminology is discouraged due to its heavily stigmatized connotations. Predominant diagnostic classifications are alcohol use disorder (DSM-5) or alcohol dependence (ICD-11); these are defined in their respective sources.
In economics, utility refers to the satisfaction or benefit that consumers derive from consuming a product or service. Marginal utility describes the change in satisfaction resulting from an increase or decrease in consumption of one unit of a good or service. Marginal utility can be positive, negative, or zero. For example, when eating pizza, a second piece still adds satisfaction (positive marginal utility), but it typically adds less than the first piece did, illustrating diminishing marginal utility.
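The pizza example can be made concrete with a hypothetical utility function (logarithmic, chosen purely for illustration): each successive slice adds utility, but less than the one before.

```python
import math

def utility(slices):
    """Hypothetical total utility from eating `slices` pieces of pizza (illustrative)."""
    return 10.0 * math.log(1 + slices)

# Marginal utility of the (q+1)-th slice: utility(q + 1) - utility(q).
marginal = [utility(q + 1) - utility(q) for q in range(5)]
# Each extra slice still adds utility (positive marginal utility),
# but each addition is smaller than the last (diminishing marginal utility).
```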