Loop quantum gravity: Loop quantum gravity (LQG) is a theory of quantum gravity that aims to reconcile quantum mechanics and general relativity, incorporating matter of the Standard Model into the framework established for the intrinsic quantum gravity case. It is an attempt to develop a quantum theory of gravity based directly on Einstein's geometric formulation rather than on the treatment of gravity as a mysterious mechanism (force). As a theory, LQG postulates that the structure of space and time is composed of finite loops woven into an extremely fine fabric or network.
Quantifier elimination: Quantifier elimination is a concept of simplification used in mathematical logic, model theory, and theoretical computer science. Informally, a quantified statement "∃x such that φ(x)" can be viewed as a question "When is there an x such that φ(x)?", and the statement without quantifiers can be viewed as the answer to that question. One way of classifying formulas is by the amount of quantification. Formulas with less depth of quantifier alternation are thought of as being simpler, with the quantifier-free formulas as the simplest.
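For concreteness (an added example, not from the source entry): over the real numbers, the theory of real closed fields admits quantifier elimination, and the standard textbook instance replaces a quantified solvability statement about a monic quadratic with a quantifier-free condition on its coefficients.

```latex
% Quantifier elimination over the reals (standard example): the
% existential statement on the left is equivalent to the
% quantifier-free discriminant condition on the right.
\[
  \exists x \,\bigl(x^{2} + bx + c = 0\bigr)
  \;\Longleftrightarrow\;
  b^{2} - 4c \ge 0
\]
```

The right-hand side answers the informal question "When is there an x such that x² + bx + c = 0?" without mentioning x at all.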
Quantifier (logic): In logic, a quantifier is an operator that specifies how many individuals in the domain of discourse satisfy an open formula. For instance, the universal quantifier ∀ in the first-order formula ∀x P(x) expresses that everything in the domain satisfies the property denoted by P. On the other hand, the existential quantifier ∃ in the formula ∃x P(x) expresses that there exists something in the domain which satisfies that property. A formula where a quantifier takes widest scope is called a quantified formula.
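To make the scope remark concrete (an added illustration, not from the source entry):

```latex
% Quantifier scope: in the first formula the universal quantifier
% takes widest scope, so it is a quantified formula; in the second
% the conditional, not the quantifier, is the main operator.
\[
  \forall x\,\bigl(P(x) \to Q(x)\bigr)
  \qquad\text{vs.}\qquad
  \bigl(\forall x\,P(x)\bigr) \to Q(a)
\]
```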
Automated theorem proving: Automated theorem proving (also known as ATP or automated deduction) is a subfield of automated reasoning and mathematical logic dealing with proving mathematical theorems by computer programs. Automated reasoning over mathematical proof was a major impetus for the development of computer science. While the roots of formalised logic go back to Aristotle, the end of the 19th and early 20th centuries saw the development of modern logic and formalised mathematics.
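As a small, hedged illustration of automated deduction in practice (an added sketch, assuming the z3-solver Python package; Z3 is an SMT solver, one family of automated reasoning tools): the snippet verifies that modus ponens is valid by showing its negation has no model.

```python
# Minimal automated-deduction sketch using the Z3 SMT solver's Python API
# (pip install z3-solver). We prove modus ponens: from p and p -> q, infer q,
# by showing that the negation of the claim is unsatisfiable.
from z3 import Bools, Implies, And, Not, Solver, unsat

p, q = Bools("p q")
claim = Implies(And(p, Implies(p, q)), q)

s = Solver()
s.add(Not(claim))          # assert the negation of the claim
assert s.check() == unsat  # no countermodel exists, so the claim is valid
print("modus ponens is valid")
```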
Loop optimization: In compiler theory, loop optimization is the process of increasing execution speed and reducing the overheads associated with loops. It plays an important role in improving cache performance and making effective use of parallel processing capabilities. Most execution time of a scientific program is spent on loops; as such, many compiler optimization techniques have been developed to make them faster. Since instructions inside loops can be executed repeatedly, it is frequently not possible to give a bound on the number of instruction executions that will be impacted by a loop optimization.
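As a hedged illustration of one classic loop optimization, loop-invariant code motion, here is the transformation applied by hand in Python (the function and variable names are hypothetical; a real compiler performs this automatically on its intermediate representation):

```python
# Loop-invariant code motion: hoist a computation whose value does not
# change across iterations out of the loop body.

def scale_naive(values, factor):
    out = []
    for v in values:
        k = factor * factor + 1   # invariant: recomputed every iteration
        out.append(v * k)
    return out

def scale_hoisted(values, factor):
    k = factor * factor + 1       # computed once, before the loop
    return [v * k for v in values]
```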
Existential quantification: In predicate logic, an existential quantification is a type of quantifier, a logical constant which is interpreted as "there exists", "there is at least one", or "for some". It is usually denoted by the logical operator symbol ∃, which, when used together with a predicate variable, is called an existential quantifier ("∃x" or "∃(x)" or "(∃x)"). Existential quantification is distinct from universal quantification ("for all"), which asserts that the property or relation holds for all members of the domain.
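As an informal computational analogy (an added sketch, not from the source entry): over a finite domain, existential quantification behaves like Python's built-in any() and universal quantification like all().

```python
# Existential vs. universal quantification over a finite domain,
# using any()/all() as an informal analogy.
domain = range(10)

exists_even = any(x % 2 == 0 for x in domain)  # like ∃x. even(x)  -> True
all_even = all(x % 2 == 0 for x in domain)     # like ∀x. even(x)  -> False
print(exists_even, all_even)
```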
Vampire (theorem prover): Vampire is an automatic theorem prover for first-order classical logic developed in the Department of Computer Science at the University of Manchester. Up to Version 3, it was developed by Andrei Voronkov together with Kryštof Hoder and previously with Alexandre Riazanov. Since Version 4, the development has involved a wider international team including Laura Kovacs, Giles Reger, and Martin Suda. Since 1999 it has won at least 53 trophies in the CADE ATP System Competition, the "world cup for theorem provers", including the most prestigious FOF division and the theory-reasoning TFA division.
Negation: In logic, negation, also called the logical not or logical complement, is an operation that takes a proposition P to another proposition "not P", standing for "P is not true", written ¬P, ∼P, or P̄. It is interpreted intuitively as being true when P is false, and false when P is true. Negation is thus a unary logical connective. It may be applied as an operation on notions, propositions, truth values, or semantic values more generally. In classical logic, negation is normally identified with the truth function that takes truth to falsity (and vice versa).
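This truth-functional reading is summarized by the truth table (an added illustration):

```latex
% Truth table for classical negation.
\[
\begin{array}{c|c}
P & \neg P \\ \hline
\text{T} & \text{F} \\
\text{F} & \text{T}
\end{array}
\]
```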
For loop: In computer science, a for-loop or for loop is a control flow statement for specifying iteration. Specifically, a for loop functions by running a section of code repeatedly until a certain condition has been satisfied. For-loops have two parts: a header and a body. The header defines the iteration and the body is the code that is executed once per iteration. The header often declares an explicit loop counter or loop variable. This allows the body to know which iteration is being executed.
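A minimal sketch in Python showing the two parts:

```python
# Header: defines the iteration and declares the loop variable i.
# Body: executed once per iteration; i tells it which iteration it is.
for i in range(5):
    print("iteration", i)
```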
Universal instantiation: In predicate logic, universal instantiation (UI; also called universal specification or universal elimination, and sometimes confused with dictum de omni) is a valid rule of inference from a truth about each member of a class of individuals to the truth about a particular individual of that class. It is generally given as a quantification rule for the universal quantifier, but it can also be encoded in an axiom schema. It is one of the basic principles used in quantification theory. Example: from "All dogs are mammals" one may conclude "Fido, a particular dog, is a mammal."
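Schematically (an added illustration), the rule passes from a universally quantified formula to an instance at a particular term t:

```latex
% Universal instantiation: from a universal truth, infer the
% statement about any particular term t.
\[
  \frac{\forall x\, A(x)}{A(t)}
\]
% e.g. from \forall x\,(Dog(x) \to Mammal(x)),
% infer Dog(Fido) \to Mammal(Fido).
```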
Isabelle (proof assistant): The Isabelle automated theorem prover is a higher-order logic (HOL) theorem prover, written in Standard ML and Scala. As an LCF-style theorem prover, it is based on a small logical core (kernel) to increase the trustworthiness of proofs; it does not require explicit proof objects, but supports them. Isabelle is available inside a flexible system framework allowing for logically safe extensions, which comprise both theories and implementations for code generation, documentation, and specific support for a variety of formal methods.
Natural deduction: In logic and proof theory, natural deduction is a kind of proof calculus in which logical reasoning is expressed by inference rules closely related to the "natural" way of reasoning. This contrasts with Hilbert-style systems, which instead use axioms as much as possible to express the logical laws of deductive reasoning. Natural deduction grew out of a context of dissatisfaction with the axiomatizations of deductive reasoning common to the systems of Hilbert, Frege, and Russell (see, e.g., Hilbert system).
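As an added illustration, two natural-deduction rules for implication: the introduction rule discharges an assumption [A], and the elimination rule is modus ponens.

```latex
% Implication introduction and elimination in natural deduction.
\[
  \frac{\begin{array}{c}[A] \\ \vdots \\ B\end{array}}{A \to B}\;({\to}\mathrm{I})
  \qquad
  \frac{A \to B \qquad A}{B}\;({\to}\mathrm{E})
\]
```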
Real closed field: In mathematics, a real closed field is a field F that has the same first-order properties as the field of real numbers. Some examples are the field of real numbers, the field of real algebraic numbers, and the field of hyperreal numbers. There are several equivalent characterizations; one is that F is elementarily equivalent to the real numbers. In other words, it has the same first-order properties as the reals: any sentence in the first-order language of fields is true in F if and only if it is true in the reals.
Superposition calculus: The superposition calculus is a calculus for reasoning in equational logic. It was developed in the early 1990s and combines concepts from first-order resolution with ordering-based equality handling as developed in the context of (unfailing) Knuth–Bendix completion. It can be seen as a generalization of either resolution (to equational logic) or unfailing completion (to full clausal logic). Like most first-order calculi, superposition tries to show the unsatisfiability of a set of first-order clauses, i.e., it performs proofs by refutation.
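A simplified sketch of the shape of one superposition inference (an added illustration that omits the ordering and literal-selection side conditions essential to the real calculus): given a clause with a positive equation l ≈ r and a clause containing a subterm u unifiable with l, the rule rewrites under the most general unifier σ.

```latex
% Shape of the (positive) superposition rule, side conditions omitted;
% s[u] is a term with subterm u, and sigma = mgu(l, u).
\[
  \frac{C \lor l \approx r \qquad D \lor s[u] \approx t}
       {(C \lor D \lor s[r] \approx t)\,\sigma}
\]
```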
Resolution (logic): In mathematical logic and automated theorem proving, resolution is a rule of inference leading to a refutation-complete theorem-proving technique for sentences in propositional logic and first-order logic. For propositional logic, systematically applying the resolution rule acts as a decision procedure for formula unsatisfiability, solving the (complement of the) Boolean satisfiability problem. For first-order logic, resolution can be used as the basis for a semi-algorithm for the unsatisfiability problem of first-order logic, providing a more practical method than one following from Gödel's completeness theorem.
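The core inference, shown here in its propositional form (an added illustration): two clauses sharing a complementary pair of literals yield their resolvent.

```latex
% Propositional resolution: resolve on the complementary pair A, \neg A.
\[
  \frac{A \lor C \qquad \neg A \lor D}{C \lor D}
\]
```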
Automatic parallelization: Automatic parallelization, also auto parallelization or autoparallelization, refers to converting sequential code into multi-threaded and/or vectorized code in order to use multiple processors simultaneously in a shared-memory multiprocessor (SMP) machine. Fully automatic parallelization of sequential programs is a challenge because it requires complex program analysis, and the best approach may depend upon parameter values that are not known at compilation time.
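Here is the transformation applied by hand in Python (an added sketch using the standard multiprocessing module; an auto-parallelizing compiler would have to prove the iterations independent before doing this safely):

```python
# A sequential loop and its hand-parallelized equivalent. Each iteration
# is independent, which is exactly the property a parallelizing compiler
# must establish by program analysis.
from multiprocessing import Pool

def work(x):
    return x * x  # independent per-element work: safe to parallelize

if __name__ == "__main__":
    data = list(range(1000))
    sequential = [work(x) for x in data]  # original sequential loop
    with Pool() as pool:                  # parallel equivalent
        parallel = pool.map(work, data)
    assert sequential == parallel
```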
Ashtekar variables: In the ADM formulation of general relativity, spacetime is split into spatial slices and a time axis. The basic variables are taken to be the induced metric q_ab on the spatial slice and the metric's conjugate momentum π^ab, which is related to the extrinsic curvature K_ab and is a measure of how the induced metric evolves in time. These are the metric canonical coordinates. In 1986 Abhay Ashtekar introduced a new set of canonical variables, the Ashtekar (new) variables: an unusual way of rewriting the metric canonical variables on the three-dimensional spatial slices in terms of an SU(2) gauge field and its complementary variable.
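In the generalized (Ashtekar-Barbero) form, the gauge field combines the spin connection with the extrinsic curvature (an added sketch; β is the Barbero-Immirzi parameter, with β = i in Ashtekar's original complex variables, and the densitized triad E^a_i is the canonically conjugate momentum up to a constant factor):

```latex
% The Ashtekar-Barbero connection on the spatial slice.
\[
  A_a^i = \Gamma_a^i + \beta\, K_a^i
\]
```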
Mathematical analysis: Analysis is the branch of mathematics dealing with continuous functions, limits, and related theories, such as differentiation, integration, measure, infinite sequences, series, and analytic functions. These theories are usually studied in the context of real and complex numbers and functions. Analysis evolved from calculus, which involves the elementary concepts and techniques of analysis. Analysis may be distinguished from geometry; however, it can be applied to any space of mathematical objects that has a definition of nearness (a topological space) or specific distances between objects (a metric space).
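To make "specific distances between objects" precise (an added illustration): a metric space is a set X with a distance function d satisfying, for all x, y, z in X:

```latex
% The metric-space axioms: non-negativity, identity of indiscernibles,
% symmetry, and the triangle inequality.
\[
  d(x,y) \ge 0, \quad
  d(x,y) = 0 \iff x = y, \quad
  d(x,y) = d(y,x), \quad
  d(x,z) \le d(x,y) + d(y,z)
\]
```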