Stochastic gradient descent
Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. differentiable or subdifferentiable). It can be regarded as a stochastic approximation of gradient descent optimization, since it replaces the actual gradient (calculated from the entire data set) by an estimate thereof (calculated from a randomly selected subset of the data).
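A minimal sketch of the idea, assuming NumPy and a linear least-squares objective (the learning rate lr and epoch count are illustrative, not canonical):

```python
import numpy as np

def sgd_linear_regression(X, y, lr=0.01, n_epochs=100, seed=0):
    """Minimize mean squared error with one-sample SGD updates."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(n_epochs):
        for i in rng.permutation(len(y)):       # visit samples in random order
            grad = (X[i] @ w - y[i]) * X[i]     # gradient estimate from one sample
            w -= lr * grad                      # step against the estimated gradient
    return w
```

Each update uses the gradient of a single sample's loss, so the per-step cost is independent of the size of the data set, at the price of noisier steps.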
Stochastic optimization
Stochastic optimization (SO) methods are optimization methods that generate and use random variables. For stochastic problems, the random variables appear in the formulation of the optimization problem itself, which involves random objective functions or random constraints. Stochastic optimization methods also include methods with random iterates. Some stochastic optimization methods use random iterates to solve stochastic problems, combining both meanings of stochastic optimization.
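As a minimal illustration of the second sense, a method with random iterates, here is a pure random-search sketch, assuming NumPy; the quadratic objective and the budget n_iters are illustrative:

```python
import numpy as np

def random_search(f, bounds, n_iters=1000, seed=0):
    """Keep the best of n_iters uniformly random candidate points."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T   # bounds: one (lo, hi) pair per dimension
    best_x, best_val = None, np.inf
    for _ in range(n_iters):
        x = rng.uniform(lo, hi)                  # random iterate
        val = f(x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Example: minimize a shifted quadratic over the box [-5, 5]^2.
x, v = random_search(lambda x: np.sum((x - 1.0) ** 2), [(-5, 5), (-5, 5)])
```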
Gradient descent
In mathematics, gradient descent (also often called steepest descent) is an iterative optimization algorithm for finding a local minimum of a differentiable function. The idea is to take repeated steps in the opposite direction of the gradient (or approximate gradient) of the function at the current point, because this is the direction of steepest descent. Conversely, stepping in the direction of the gradient will lead to a local maximum of that function; the procedure is then known as gradient ascent.
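A fixed-step sketch, assuming NumPy; the step size lr and stopping tolerance are illustrative:

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, tol=1e-8, max_iters=10_000):
    """Repeatedly step against the gradient until it (nearly) vanishes."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iters):
        g = grad(x)
        if np.linalg.norm(g) < tol:   # stationary point reached (to tolerance)
            break
        x = x - lr * g                # step in the direction of steepest descent
    return x

# Example: f(x, y) = (x - 3)^2 + 2 y^2 has gradient (2 (x - 3), 4 y) and minimum (3, 0).
x_min = gradient_descent(lambda x: np.array([2 * (x[0] - 3), 4 * x[1]]), [0.0, 0.0])
```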
Cholesky decomposition
In linear algebra, the Cholesky decomposition or Cholesky factorization (pronounced /ʃəˈlɛski/) is a decomposition of a Hermitian, positive-definite matrix into the product of a lower triangular matrix and its conjugate transpose, which is useful for efficient numerical solutions, e.g., Monte Carlo simulations. It was discovered by André-Louis Cholesky for real matrices, and posthumously published in 1924. When it is applicable, the Cholesky decomposition is roughly twice as efficient as the LU decomposition for solving systems of linear equations.
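For a real symmetric positive-definite matrix, the factor can be built row by row from the defining recurrence. A sketch assuming NumPy (in practice one would call np.linalg.cholesky):

```python
import numpy as np

def cholesky(A):
    """Return lower triangular L with A = L @ L.T for a real SPD matrix A."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.zeros_like(A)
    for i in range(n):
        for j in range(i + 1):
            s = A[i, j] - L[i, :j] @ L[j, :j]           # subtract already-computed terms
            L[i, j] = np.sqrt(s) if i == j else s / L[j, j]
    return L

A = np.array([[4.0, 2.0], [2.0, 3.0]])
L = cholesky(A)
assert np.allclose(L @ L.T, A)
```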
Online machine learning
In computer science, online machine learning is a method of machine learning in which data becomes available in a sequential order and is used to update the best predictor for future data at each step, as opposed to batch learning techniques which generate the best predictor by learning on the entire training data set at once. Online learning is a common technique used in areas of machine learning where it is computationally infeasible to train over the entire dataset, making out-of-core algorithms necessary.
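For instance, a least-mean-squares style update (a sketch, assuming NumPy; the learning rate is illustrative) refines the predictor from each example exactly once as it arrives, so the training set never has to be stored:

```python
import numpy as np

def online_update(w, x, y, lr=0.01):
    """One least-mean-squares step: adjust w using a single new example (x, y)."""
    return w - lr * (x @ w - y) * x

# Consume a stream of examples one at a time.
rng = np.random.default_rng(0)
w = np.zeros(3)
for _ in range(5000):
    x = rng.normal(size=3)
    y = x @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal()   # noisy hidden target
    w = online_update(w, x, y)                                 # w approaches the target
```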
Spectral sequence
In homological algebra and algebraic topology, a spectral sequence is a means of computing homology groups by taking successive approximations. Spectral sequences are a generalization of exact sequences, and since their introduction by Jean Leray in 1946, they have become important computational tools, particularly in algebraic topology, algebraic geometry and homological algebra. Motivated by problems in algebraic topology, Jean Leray introduced the notion of a sheaf and found himself faced with the problem of computing sheaf cohomology.
Spectral clustering
In multivariate statistics, spectral clustering techniques make use of the spectrum (eigenvalues) of the similarity matrix of the data to perform dimensionality reduction before clustering in fewer dimensions. The similarity matrix is provided as an input and consists of a quantitative assessment of the relative similarity of each pair of points in the dataset. In application to image segmentation, spectral clustering is known as segmentation-based object categorization.
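A minimal sketch of the embedding step, using the unnormalised Laplacian L = D − S (one of several common variants) and assuming NumPy; any standard method such as k-means can then cluster the embedded rows:

```python
import numpy as np

def spectral_embedding(S, k):
    """Embed points into k dimensions via eigenvectors of the unnormalised Laplacian."""
    D = np.diag(S.sum(axis=1))       # degree matrix from the similarity matrix S
    L = D - S                        # unnormalised graph Laplacian
    vals, vecs = np.linalg.eigh(L)   # eigenvalues (and eigenvectors) in ascending order
    return vecs[:, :k]               # the k smallest eigenvectors give the new coordinates

# Two obvious blocks in the similarity matrix: points {0, 1} and {2, 3}.
S = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Y = spectral_embedding(S, 2)         # cluster the rows of Y, e.g. with k-means
```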
Conjugate gradient method
In mathematics, the conjugate gradient method is an algorithm for the numerical solution of particular systems of linear equations, namely those whose matrix is symmetric and positive-definite. The conjugate gradient method is often implemented as an iterative algorithm, applicable to sparse systems that are too large to be handled by a direct implementation or other direct methods such as the Cholesky decomposition. Large sparse systems often arise when numerically solving partial differential equations or optimization problems.
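A textbook implementation for dense NumPy arrays, as a sketch (production codes add preconditioning and need only matrix-vector products, never the matrix itself):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    """Solve A x = b for symmetric positive-definite A, never factorizing A."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x                    # residual
    p = r.copy()                     # current search direction
    for _ in range(len(b)):          # at most n steps in exact arithmetic
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)   # exact minimizer along p
        x = x + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p   # next A-conjugate direction
        r = r_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
x = conjugate_gradient(A, np.array([1.0, 2.0]))
assert np.allclose(A @ x, [1.0, 2.0])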
Serre spectral sequence
In mathematics, the Serre spectral sequence (sometimes Leray–Serre spectral sequence to acknowledge earlier work of Jean Leray in the Leray spectral sequence) is an important tool in algebraic topology. It expresses, in the language of homological algebra, the singular (co)homology of the total space X of a (Serre) fibration in terms of the (co)homology of the base space B and the fiber F. The result is due to Jean-Pierre Serre in his doctoral dissertation. Let f : X → B be a Serre fibration of topological spaces, and let F be the (path-connected) fiber.
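For cohomology, and under the usual hypothesis that the coefficient system on B is simple (local coefficients are needed in general), the E_2 page takes the form:

```latex
% E_2 page of the cohomological Serre spectral sequence for F -> X -> B
E_2^{p,q} = H^p\bigl(B;\, H^q(F)\bigr) \;\Longrightarrow\; H^{p+q}(X)
```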
LU decomposition
In numerical analysis and linear algebra, lower–upper (LU) decomposition or factorization factors a matrix as the product of a lower triangular matrix and an upper triangular matrix (see matrix decomposition). The product sometimes includes a permutation matrix as well. LU decomposition can be viewed as the matrix form of Gaussian elimination. Computers usually solve square systems of linear equations using LU decomposition, and it is also a key step when inverting a matrix or computing the determinant of a matrix.
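A Doolittle-style sketch without pivoting, assuming NumPy (practical routines pivot, computing P A = L U, for numerical stability):

```python
import numpy as np

def lu_decompose(A):
    """Doolittle LU without pivoting: A = L @ U with unit diagonal on L."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L, U = np.eye(n), np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):        # fill row i of U
            U[i, j] = A[i, j] - L[i, :i] @ U[:i, j]
        for j in range(i + 1, n):    # fill column i of L
            L[j, i] = (A[j, i] - L[j, :i] @ U[:i, i]) / U[i, i]
    return L, U

A = np.array([[2.0, 3.0], [4.0, 7.0]])
L, U = lu_decompose(A)
assert np.allclose(L @ U, A)
```

Solving A x = b then reduces to one forward substitution with L and one back substitution with U.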
Matrix decomposition
In the mathematical discipline of linear algebra, a matrix decomposition or matrix factorization is a factorization of a matrix into a product of matrices. There are many different matrix decompositions; each finds use among a particular class of problems. In numerical analysis, different decompositions are used to implement efficient matrix algorithms. For instance, when solving a system of linear equations Ax = b, the matrix A can be decomposed via the LU decomposition.
Laplacian matrix
In the mathematical field of graph theory, the Laplacian matrix, also called the graph Laplacian, admittance matrix, Kirchhoff matrix or discrete Laplacian, is a matrix representation of a graph. Named after Pierre-Simon Laplace, the graph Laplacian matrix can be viewed as a matrix form of the negative discrete Laplace operator on a graph approximating the negative continuous Laplacian obtained by the finite difference method. The Laplacian matrix relates to many useful properties of a graph.
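A sketch of the definition L = D − A for an undirected graph, assuming NumPy:

```python
import numpy as np

def graph_laplacian(A):
    """L = D - A, where D is the diagonal matrix of vertex degrees."""
    D = np.diag(A.sum(axis=1))       # degree matrix
    return D - A

# Path graph 0 - 1 - 2.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
L = graph_laplacian(A)
# Every row sums to zero, L is positive semidefinite, and the multiplicity of
# the eigenvalue 0 equals the number of connected components of the graph.
assert np.allclose(L.sum(axis=1), 0)
```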
Leray spectral sequence
In mathematics, the Leray spectral sequence was a pioneering example in homological algebra, introduced in 1946 by Jean Leray. It is usually seen nowadays as a special case of the Grothendieck spectral sequence. Let f : X → Y be a continuous map of topological spaces, which in particular gives a functor f∗ from sheaves of abelian groups on X to sheaves of abelian groups on Y.
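For a sheaf 𝓕 of abelian groups on X, its E_2 page reads:

```latex
% E_2 page of the Leray spectral sequence of f : X -> Y
E_2^{p,q} = H^p\bigl(Y,\, R^q f_* \mathcal{F}\bigr) \;\Longrightarrow\; H^{p+q}(X, \mathcal{F})
```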
Power tool
A power tool is a tool that is actuated by an additional power source and mechanism other than the solely manual labor used with hand tools. The most common types of power tools use electric motors. Internal combustion engines and compressed air are also commonly used. Tools directly driven by animal power are not generally considered power tools. Power tools are used in industry, in construction, in the garden, for housework tasks such as cooking and cleaning, and around the house for purposes of driving fasteners, drilling, cutting, shaping, sanding, grinding, routing, polishing, painting, heating and more.
Derived category
In mathematics, the derived category D(A) of an abelian category A is a construction of homological algebra introduced to refine and in a certain sense to simplify the theory of derived functors defined on A. The construction proceeds on the basis that the objects of D(A) should be chain complexes in A, with two such chain complexes considered isomorphic when there is a chain map that induces an isomorphism on the level of homology of the chain complexes. Derived functors can then be defined for chain complexes, refining the concept of hypercohomology.
Mathematical optimization
Mathematical optimization (alternatively spelled optimisation) or mathematical programming is the selection of a best element, with regard to some criterion, from some set of available alternatives. It is generally divided into two subfields: discrete optimization and continuous optimization. Optimization problems arise in all quantitative disciplines from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries.
Hyperparameter (machine learning)
In machine learning, a hyperparameter is a parameter whose value is used to control the learning process. By contrast, the values of other parameters (typically node weights) are derived via training. Hyperparameters can be classified as model hyperparameters, which cannot be inferred while fitting the machine to the training set because they refer to the model selection task, or algorithm hyperparameters, which in principle have no influence on the performance of the model but affect the speed and quality of the learning process.
Correlation clustering
Clustering is the problem of partitioning data points into groups based on their similarity. Correlation clustering provides a method for clustering a set of objects into the optimum number of clusters without specifying that number in advance. In machine learning, correlation clustering or cluster editing operates in a scenario where the relationships between the objects are known instead of the actual representations of the objects.
Scale-free network
A scale-free network is a network whose degree distribution follows a power law, at least asymptotically. That is, the fraction P(k) of nodes in the network having k connections to other nodes goes for large values of k as P(k) ~ k^(−γ), where γ is a parameter whose value is typically in the range 2 < γ < 3 (wherein the second moment (scale parameter) of k^(−γ) is infinite but the first moment is finite), although occasionally it may lie outside these bounds. The name "scale-free" means that some moments of the degree distribution are not defined, so that the network does not have a characteristic scale or "size".
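As an illustration, the preferential-attachment model of Barabási and Albert produces scale-free networks with γ = 3; a sketch assuming the networkx library is available:

```python
import networkx as nx

# Each new node attaches to m existing nodes with probability proportional
# to their current degree, which yields a power-law degree distribution.
G = nx.barabasi_albert_graph(n=10_000, m=3, seed=0)
degrees = [d for _, d in G.degree()]
# The empirical tail of `degrees` decays roughly like k**(-3) for this model.
```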
QR decomposition
In linear algebra, a QR decomposition, also known as a QR factorization or QU factorization, is a decomposition of a matrix A into a product A = QR of an orthogonal matrix Q and an upper triangular matrix R. QR decomposition is often used to solve the linear least squares problem and is the basis for a particular eigenvalue algorithm, the QR algorithm. Any real square matrix A may be decomposed as A = QR, where Q is an orthogonal matrix (its columns are orthogonal unit vectors, meaning Q^T Q = Q Q^T = I) and R is an upper triangular matrix (also called right triangular matrix).
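A classical Gram-Schmidt sketch for a matrix with linearly independent columns, assuming NumPy (library routines such as np.linalg.qr use Householder reflections instead, for numerical stability):

```python
import numpy as np

def qr_gram_schmidt(A):
    """Classical Gram-Schmidt QR: A = Q @ R with orthonormal columns in Q."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    Q, R = np.zeros((m, n)), np.zeros((n, n))
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]   # component along an earlier direction
            v -= R[i, j] * Q[:, i]        # remove it from the current column
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]             # normalize what remains
    return Q, R

A = np.array([[1.0, 1.0], [0.0, 1.0], [1.0, 0.0]])
Q, R = qr_gram_schmidt(A)
assert np.allclose(Q @ R, A)
```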