Distributed algorithm
A distributed algorithm is an algorithm designed to run on computer hardware constructed from interconnected processors. Distributed algorithms are used in different application areas of distributed computing, such as telecommunications, scientific computing, distributed information processing, and real-time process control. Standard problems solved by distributed algorithms include leader election, consensus, distributed search, spanning tree generation, mutual exclusion, and resource allocation.
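As a concrete illustration, the sketch below simulates a ring-based leader election in the style of the Chang–Roberts algorithm: each node forwards the largest identifier it has seen to its clockwise neighbour, and the node whose own identifier completes the loop becomes leader. The synchronous, single-process simulation is an assumption for readability; a real deployment would exchange messages over network links.

    # Sketch of ring-based leader election (Chang-Roberts style).
    # Assumptions: distinct ids, a unidirectional ring with in-order
    # delivery, simulated round by round in one process.
    def elect_leader(ids):
        n = len(ids)
        pending = list(ids)          # message in flight from each node
        leader = None
        while leader is None:
            nxt = [None] * n
            for i, msg in enumerate(pending):
                if msg is None:
                    continue
                j = (i + 1) % n      # clockwise neighbour
                if msg == ids[j]:
                    leader = msg     # node j sees its own id: elected
                elif msg > ids[j]:
                    nxt[j] = msg     # forward larger ids, drop smaller ones
            pending = nxt
        return leader

    print(elect_leader([3, 7, 2, 9, 4]))  # -> 9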
Distributed computing
A distributed system is a system whose components are located on different networked computers, which communicate and coordinate their actions by passing messages to one another. Distributed computing is a field of computer science that studies distributed systems. The components of a distributed system interact with one another in order to achieve a common goal. Three significant challenges of distributed systems are: maintaining concurrency of components, overcoming the lack of a global clock, and managing the independent failure of components.
Computational complexity
In computer science, the computational complexity or simply complexity of an algorithm is the amount of resources required to run it. Particular focus is given to computation time (generally measured by the number of needed elementary operations) and memory storage requirements. The complexity of a problem is the complexity of the best algorithms that allow solving the problem. The study of the complexity of explicitly given algorithms is called analysis of algorithms, while the study of the complexity of problems is called computational complexity theory.
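Counting elementary operations can be done directly. The illustrative counters below compare worst-case comparisons in linear versus binary search; the exact counts depend on the input, but the O(n) versus O(log n) growth is the point.

    # Measure "time" as a count of elementary comparisons, not seconds.
    def linear_search(xs, target):
        comparisons = 0
        for x in xs:
            comparisons += 1
            if x == target:
                break
        return comparisons           # grows linearly: O(n)

    def binary_search(xs, target):   # xs must be sorted
        comparisons, lo, hi = 0, 0, len(xs) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            comparisons += 1
            if xs[mid] == target:
                break
            if xs[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return comparisons           # grows logarithmically: O(log n)

    xs = list(range(1_000_000))
    print(linear_search(xs, 999_999))   # 1000000 comparisons
    print(binary_search(xs, 999_999))   # about 20 comparisons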
Church–Turing thesis
In computability theory, the Church–Turing thesis (also known as computability thesis, the Turing–Church thesis, the Church–Turing conjecture, Church's thesis, Church's conjecture, and Turing's thesis) is a thesis about the nature of computable functions. It states that a function on the natural numbers can be calculated by an effective method if and only if it is computable by a Turing machine. The thesis is named after American mathematician Alonzo Church and the British mathematician Alan Turing.
Complexity class
In computational complexity theory, a complexity class is a set of computational problems "of related resource-based complexity". The two most commonly analyzed resources are time and memory. In general, a complexity class is defined in terms of a type of computational problem, a model of computation, and a bounded resource like time or memory. In particular, most complexity classes consist of decision problems that are solvable with a Turing machine, and are differentiated by their time or space (memory) requirements.
Computational complexity theory
In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and relating these classes to each other. A computational problem is a task solved by a computer, solvable by the mechanical application of mathematical steps such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used.
Distributed cognition
Distributed cognition is an approach to cognitive science research that was developed by cognitive anthropologist Edwin Hutchins during the 1990s. Drawing on cognitive ethnography, Hutchins argues that mental representations, which classical cognitive science held to be located within the individual brain, are actually distributed across sociocultural systems that constitute the tools used to think and perceive the world.
Connection form
In mathematics, and specifically differential geometry, a connection form is a manner of organizing the data of a connection using the language of moving frames and differential forms. Historically, connection forms were introduced by Élie Cartan in the first half of the 20th century as part of, and one of the principal motivations for, his method of moving frames. The connection form generally depends on a choice of a coordinate frame, and so is not a tensorial object.
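This frame dependence has a standard local expression. In the usual textbook notation (assumed here, not fixed by the passage), a change of frame by a gauge transformation g transforms the connection form as

    \omega' = g^{-1}\,\omega\,g + g^{-1}\,dg ,

and it is the inhomogeneous term g^{-1}\,dg that prevents \omega from transforming as a tensor.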
Kolmogorov complexity
In algorithmic information theory (a subfield of computer science and mathematics), the Kolmogorov complexity of an object, such as a piece of text, is the length of a shortest computer program (in a predetermined programming language) that produces the object as output. It is a measure of the computational resources needed to specify the object, and is also known as algorithmic complexity, Solomonoff–Kolmogorov–Chaitin complexity, program-size complexity, descriptive complexity, or algorithmic entropy.
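In standard notation (assumed here), fixing a universal machine U, the definition reads

    K_U(x) = \min \{\, |p| \;:\; U(p) = x \,\} ,

where |p| is the length of program p. The invariance theorem guarantees that switching to another universal machine V changes the value by at most an additive constant, K_U(x) \le K_V(x) + c_{U,V}, so the measure is well defined up to O(1).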
Consensus (computer science)
A fundamental problem in distributed computing and multi-agent systems is to achieve overall system reliability in the presence of a number of faulty processes. This often requires coordinating processes to reach consensus, or agree on some data value that is needed during computation. Example applications of consensus include agreeing on which transactions to commit to a database and in what order, state machine replication, and atomic broadcasts.
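The sketch below shows the fault-free core of the problem: every process broadcasts its proposal, gathers all of them, and applies the same deterministic decision rule. Agreement, validity, and termination hold trivially here only because crashes and message loss are assumed away; tolerating faults is what real protocols such as Paxos and Raft exist to do.

    # Fault-free consensus sketch: trivial by design, since the hard
    # part of consensus is tolerating crashes and lost messages.
    def consensus(proposals):
        # Every process "receives" the full multiset of proposals...
        views = [sorted(proposals) for _ in proposals]
        # ...and decides using the same deterministic rule (minimum).
        decisions = [view[0] for view in views]
        assert len(set(decisions)) == 1      # agreement
        return decisions[0]                  # validity: a proposed value

    print(consensus([5, 3, 8, 3]))  # -> 3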
Model of computation
In computer science, and more specifically in computability theory and computational complexity theory, a model of computation is a model which describes how an output of a mathematical function is computed given an input. A model describes how units of computation, memory, and communication are organized. The computational complexity of an algorithm can be measured given a model of computation. Using a model allows studying the performance of algorithms independently of the variations that are specific to particular implementations and specific technology.
Circuit complexity
In theoretical computer science, circuit complexity is a branch of computational complexity theory in which Boolean functions are classified according to the size or depth of the Boolean circuits that compute them. A related notion is the circuit complexity of a recursive language that is decided by a uniform family of circuits. Proving lower bounds on the size of Boolean circuits computing explicit Boolean functions is a popular approach to separating complexity classes.
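A toy example makes the two resources concrete. The sketch below encodes XOR as a five-gate circuit over AND/OR/NOT and evaluates it; the representation (a flat gate list referencing inputs by name and earlier gates by index) is an illustrative choice, not any standard library.

    # A Boolean circuit as a gate list; size = number of gates.
    GATES = {"AND": lambda a, b: a & b,
             "OR":  lambda a, b: a | b,
             "NOT": lambda a: 1 - a}

    # XOR(x, y) = (x AND NOT y) OR (NOT x AND y): size 5, depth 3.
    xor_circuit = [
        ("NOT", "y"), ("NOT", "x"),        # gates 0 and 1
        ("AND", "x", 0), ("AND", 1, "y"),  # gates 2 and 3
        ("OR", 2, 3),                      # gate 4 (the output)
    ]

    def evaluate(circuit, **inputs):
        values = []
        for gate, *args in circuit:
            vals = [inputs[a] if isinstance(a, str) else values[a]
                    for a in args]
            values.append(GATES[gate](*vals))
        return values[-1]

    print([evaluate(xor_circuit, x=x, y=y)
           for x in (0, 1) for y in (0, 1)])  # -> [0, 1, 1, 0]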
Connection (mathematics)
In geometry, the notion of a connection makes precise the idea of transporting local geometric objects, such as tangent vectors or tensors in the tangent space, along a curve or family of curves in a parallel and consistent manner. There are various kinds of connections in modern geometry, depending on what sort of data one wants to transport. For instance, an affine connection, the most elementary type of connection, gives a means for parallel transport of tangent vectors on a manifold from one point to another along a curve.
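In local coordinates, writing the affine connection through its Christoffel symbols \Gamma^k_{ij} (standard notation, assumed here), a vector field V is parallel along a curve x(t) exactly when

    \frac{dV^k}{dt} + \Gamma^k_{ij}\,\frac{dx^i}{dt}\,V^j = 0 ,

a linear ODE whose solution transports V(0) along the curve; the consistency of this transport is what the connection axioms encode.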
Distributed ledger
A distributed ledger (also called a shared ledger or distributed ledger technology or DLT) is the consensus of replicated, shared, and synchronized digital data that is geographically spread (distributed) across many sites, countries, or institutions. In contrast to a centralized database, a distributed ledger does not require a central administrator, and consequently does not have a single (central) point-of-failure.
Upper and lower bounds
In mathematics, particularly in order theory, an upper bound or majorant of a subset S of some preordered set (K, ≤) is an element of K that is greater than or equal to every element of S. Dually, a lower bound or minorant of S is defined to be an element of K that is less than or equal to every element of S. A set with an upper (respectively, lower) bound is said to be bounded from above or majorized (respectively bounded from below or minorized) by that bound.
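Symbolically, for a subset S of a preordered set (K, ≤):

    u \in K \text{ is an upper bound of } S \iff \forall s \in S,\; s \le u
    \ell \in K \text{ is a lower bound of } S \iff \forall s \in S,\; \ell \le s

For example, with S = (0, 1) \subset \mathbb{R}, every real u \ge 1 is an upper bound of S and every \ell \le 0 is a lower bound; note that a bound need not belong to S itself.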
Connection (principal bundle)
In mathematics, and especially differential geometry and gauge theory, a connection is a device that defines a notion of parallel transport on the bundle; that is, a way to "connect" or identify fibers over nearby points. A principal G-connection on a principal G-bundle P over a smooth manifold M is a particular type of connection which is compatible with the action of the group G. A principal connection can be viewed as a special case of the notion of an Ehresmann connection, and is sometimes called a principal Ehresmann connection.
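Concretely, in the usual formulation (notation assumed, not taken from the passage), a principal G-connection is a \mathfrak{g}-valued one-form \omega on P satisfying two conditions:

    \omega(X_\xi) = \xi \quad \text{for all } \xi \in \mathfrak{g}, \qquad
    R_g^{*}\,\omega = \mathrm{Ad}_{g^{-1}} \circ \omega \quad \text{for all } g \in G ,

where X_\xi is the fundamental vector field generated by \xi and R_g is right translation by g; the kernel of \omega then defines the horizontal subspaces used for parallel transport.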
Distributed operating system
A distributed operating system is system software over a collection of independent, networked, communicating, and physically separate computational nodes. It handles jobs that are serviced by multiple CPUs. Each individual node holds a specific software subset of the global aggregate operating system. Each subset is a composite of two distinct service provisioners. The first is a ubiquitous minimal kernel, or microkernel, that directly controls that node's hardware.
Connection (vector bundle)
In mathematics, and especially differential geometry and gauge theory, a connection on a fiber bundle is a device that defines a notion of parallel transport on the bundle; that is, a way to "connect" or identify fibers over nearby points. The most common case is that of a linear connection on a vector bundle, for which the notion of parallel transport must be linear. A linear connection is equivalently specified by a covariant derivative, an operator that differentiates sections of the bundle along tangent directions in the base manifold, in such a way that parallel sections have derivative zero.
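In the usual notation (assumed here), the covariant derivative \nabla is characterized by tensoriality in the direction and a Leibniz rule in the section:

    \nabla_{fX}\, s = f\, \nabla_X s, \qquad
    \nabla_X (f s) = (Xf)\, s + f\, \nabla_X s ,

for smooth functions f, vector fields X, and sections s; a section s is parallel along a curve \gamma precisely when \nabla_{\dot\gamma}\, s = 0.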
Levi-Civita connection
In Riemannian or pseudo-Riemannian geometry (in particular the Lorentzian geometry of general relativity), the Levi-Civita connection is the unique affine connection on the tangent bundle of a manifold that preserves the (pseudo-)Riemannian metric and is torsion-free. The fundamental theorem of Riemannian geometry states that there is a unique connection which satisfies these properties. In the theory of Riemannian and pseudo-Riemannian manifolds the term covariant derivative is often used for the Levi-Civita connection.
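Uniqueness can be made explicit: metric compatibility and torsion-freeness together force the Koszul formula (standard notation assumed),

    2\, g(\nabla_X Y, Z) = X\, g(Y, Z) + Y\, g(Z, X) - Z\, g(X, Y)
        + g([X, Y], Z) - g([Y, Z], X) + g([Z, X], Y) ,

which determines \nabla_X Y entirely in terms of the metric g and Lie brackets, establishing both existence and uniqueness.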
Parameterized complexity
In computer science, parameterized complexity is a branch of computational complexity theory that focuses on classifying computational problems according to their inherent difficulty with respect to multiple parameters of the input or output. The complexity of a problem is then measured as a function of those parameters. This allows the classification of NP-hard problems on a finer scale than in the classical setting, where the complexity of a problem is only measured as a function of the number of bits in the input.
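The central definition is fixed-parameter tractability: a parameterized problem with input x and parameter k is in FPT if it can be decided in time

    f(k) \cdot |x|^{O(1)}

for some computable function f depending only on k. For instance, Vertex Cover parameterized by solution size k is solvable in roughly O(2^k \cdot n) time by simple branching, which is practical for small k even though the problem is NP-hard in general.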