Alternating series test
In mathematical analysis, the alternating series test is the method used to show that an alternating series is convergent when its terms (1) decrease in absolute value, and (2) approach zero in the limit. The test was used by Gottfried Leibniz and is sometimes known as Leibniz's test, Leibniz's rule, or the Leibniz criterion. The test is only sufficient, not necessary, so some convergent alternating series may fail the first part of the test. A series of the form $\sum_{n=0}^{\infty} (-1)^n a_n$ or $\sum_{n=0}^{\infty} (-1)^{n+1} a_n$, where either all $a_n$ are positive or all $a_n$ are negative, is called an alternating series.
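As a quick illustration (a minimal Python sketch, not part of the original article; the helper name partial_sum is ours), the alternating harmonic series satisfies both conditions of the test, and its partial sums visibly settle toward ln 2:

```python
import math

# Partial sums of the alternating harmonic series 1 - 1/2 + 1/3 - ...
# Its terms decrease in absolute value and tend to zero, so the
# alternating series test guarantees convergence (here, to ln 2).
def partial_sum(n):
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

for n in (10, 100, 1000, 10000):
    print(n, partial_sum(n), "target:", math.log(2))
```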
Direct comparison test
In mathematics, the comparison test, sometimes called the direct comparison test to distinguish it from similar related tests (especially the limit comparison test), provides a way of deducing the convergence or divergence of an infinite series or an improper integral. In both cases, the test works by comparing the given series or integral to one whose convergence properties are known.
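A small numerical sketch of the idea (illustrative only): since 0 ≤ 1/(n² + 1) ≤ 1/n² for every n ≥ 1, and the larger series converges, the comparison test says the smaller one converges too.

```python
# Direct comparison, sketched numerically: each term of the first series
# is bounded by the corresponding term of the convergent series 1/n**2,
# so its partial sums stay bounded as well.
s_small = sum(1 / (n * n + 1) for n in range(1, 100001))
s_big = sum(1 / (n * n) for n in range(1, 100001))
print(s_small, "<=", s_big)  # the termwise bound carries over to partial sums
```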
Summation by parts
In mathematics, summation by parts transforms the summation of products of sequences into other summations, often simplifying the computation or (especially) estimation of certain types of sums. It is also called Abel's lemma or Abel transformation, named after Niels Henrik Abel who introduced it in 1826. Suppose $\{f_k\}$ and $\{g_k\}$ are two sequences. Then
$$\sum_{k=m}^{n} f_k (g_{k+1} - g_k) = \left( f_{n+1} g_{n+1} - f_m g_m \right) - \sum_{k=m}^{n} g_{k+1} (f_{k+1} - f_k).$$
Using the forward difference operator $\Delta$, it can be stated more succinctly as
$$\sum_{k=m}^{n} f_k \Delta g_k = \left( f_{n+1} g_{n+1} - f_m g_m \right) - \sum_{k=m}^{n} g_{k+1} \Delta f_k.$$
Summation by parts is an analogue to integration by parts:
$$\int f \, dg = fg - \int g \, df,$$
or to Abel's summation formula:
$$\sum_{k=m+1}^{n} f_k (g_k - g_{k-1}) = f_n g_n - f_m g_m - \sum_{k=m}^{n-1} g_k (f_{k+1} - f_k).$$
An alternative statement is
$$f_n g_n - f_m g_m = \sum_{k=m}^{n-1} f_k \Delta g_k + \sum_{k=m}^{n-1} g_k \Delta f_k + \sum_{k=m}^{n-1} \Delta f_k \Delta g_k,$$
which is analogous to the integration by parts formula for semimartingales.
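The first identity above can be checked mechanically on arbitrary data; here is a short Python sanity check (a sketch we added, not from the source):

```python
import random

# Verify the summation-by-parts identity on random sequences:
# sum_{k=m}^{n} f[k]*(g[k+1]-g[k])
#   == f[n+1]*g[n+1] - f[m]*g[m] - sum_{k=m}^{n} g[k+1]*(f[k+1]-f[k])
random.seed(0)
m, n = 0, 9
f = [random.random() for _ in range(n + 2)]
g = [random.random() for _ in range(n + 2)]
lhs = sum(f[k] * (g[k + 1] - g[k]) for k in range(m, n + 1))
rhs = f[n + 1] * g[n + 1] - f[m] * g[m] - sum(
    g[k + 1] * (f[k + 1] - f[k]) for k in range(m, n + 1))
print(abs(lhs - rhs) < 1e-12)  # True, up to floating-point rounding
```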
Limit (mathematics)
In mathematics, a limit is the value that a function (or sequence) approaches as the input (or index) approaches some value. Limits are essential to calculus and mathematical analysis, and are used to define continuity, derivatives, and integrals. The concept of a limit of a sequence is further generalized to the concept of a limit of a topological net, and is closely related to limit and direct limit in category theory. In formulas, a limit of a function is usually written as $\lim_{x \to c} f(x) = L$ (although a few authors use "Lt" instead of "lim") and is read as "the limit of f of x as x approaches c equals L".
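A classic numerical illustration (our sketch, not from the source): f(x) = sin(x)/x is undefined at x = 0, yet its values approach 1 as x approaches 0.

```python
import math

# Numeric illustration of a limit: sin(x)/x -> 1 as x -> 0,
# even though the function is undefined at x = 0 itself.
for x in (0.1, 0.01, 0.001):
    print(x, math.sin(x) / x)
```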
Ratio test
In mathematics, the ratio test is a test (or "criterion") for the convergence of a series $\sum_{n=1}^{\infty} a_n$, where each term is a real or complex number and $a_n$ is nonzero when $n$ is large. The test was first published by Jean le Rond d'Alembert and is sometimes known as d'Alembert's ratio test or as the Cauchy ratio test. The usual form of the test makes use of the limit $L = \lim_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right|.$ The ratio test states that: if $L < 1$ then the series converges absolutely; if $L > 1$ then the series diverges; if $L = 1$ or the limit fails to exist, then the test is inconclusive, because there exist both convergent and divergent series that satisfy this case.
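For example (a minimal Python sketch we added), with $a_n = 2^n/n!$ the ratio is $2/(n+1)$, which tends to 0 < 1, so the series converges absolutely:

```python
from math import factorial

# Ratio test sketch: for a_n = 2**n / n!, the successive ratio
# a_{n+1}/a_n equals 2/(n+1), which tends to 0 (below 1).
for n in (1, 5, 10, 50):
    a_n = 2 ** n / factorial(n)
    a_next = 2 ** (n + 1) / factorial(n + 1)
    print(n, a_next / a_n)  # 2/(n+1), shrinking toward 0
```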
Integral test for convergence
In mathematics, the integral test for convergence is a method used to test infinite series of monotonic terms for convergence. It was developed by Colin Maclaurin and Augustin-Louis Cauchy and is sometimes known as the Maclaurin–Cauchy test. Consider an integer $N$ and a function $f$ defined on the unbounded interval $[N, \infty)$, on which it is monotone decreasing. Then the infinite series $\sum_{n=N}^{\infty} f(n)$ converges to a real number if and only if the improper integral $\int_N^{\infty} f(x)\,dx$ is finite. In particular, if the integral diverges, then the series diverges as well.
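A concrete sketch in Python (illustrative only): for the decreasing function f(x) = 1/x² on [1, ∞), the improper integral equals 1, so the series ∑ 1/n² converges, with partial sums bounded by f(1) plus the integral.

```python
# Integral test sketch for f(x) = 1/x**2, monotone decreasing on [1, inf):
# the improper integral of 1/x**2 from 1 to infinity equals 1 (finite),
# so sum_{n>=1} 1/n**2 converges, and its partial sums stay below f(1) + 1.
N = 10 ** 6
partial = sum(1 / (n * n) for n in range(1, N + 1))
integral = 1.0  # exact value of the improper integral from 1 to infinity
print(partial, "<=", 1 + integral)
```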
Basel problem
The Basel problem is a problem in mathematical analysis with relevance to number theory, concerning an infinite sum of inverse squares. It was first posed by Pietro Mengoli in 1650 and solved by Leonhard Euler in 1734, and read on 5 December 1735 in The Saint Petersburg Academy of Sciences. Since the problem had withstood the attacks of the leading mathematicians of the day, Euler's solution brought him immediate fame when he was twenty-eight.
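Euler's answer to the problem is that the sum of inverse squares equals π²/6; a short Python check (our sketch) shows the slow but steady convergence of the partial sums:

```python
import math

# Basel problem: sum_{n>=1} 1/n**2 = pi**2 / 6 (Euler, 1734).
# Direct partial sums converge slowly (error ~ 1/N), but the trend is clear.
target = math.pi ** 2 / 6
for N in (10, 1000, 100000):
    s = sum(1 / (n * n) for n in range(1, N + 1))
    print(N, s, "error:", target - s)
```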
Alternating series
In mathematics, an alternating series is an infinite series of the form $\sum_{n=0}^{\infty} (-1)^n a_n$ or $\sum_{n=0}^{\infty} (-1)^{n+1} a_n$ with $a_n > 0$ for all $n$. The signs of the general terms alternate between positive and negative. Like any series, an alternating series converges if and only if the associated sequence of partial sums converges. The geometric series 1/2 − 1/4 + 1/8 − 1/16 + ⋯ sums to 1/3. The alternating harmonic series has a finite sum but the harmonic series does not.
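The claim about the geometric series is easy to verify: its ratio is r = −1/2, so the sum is (1/2)/(1 − (−1/2)) = 1/3. A quick Python sketch (ours, for illustration):

```python
# The alternating geometric series 1/2 - 1/4 + 1/8 - ... has ratio
# r = -1/2, so its sum is (1/2) / (1 - (-1/2)) = 1/3.
s, term = 0.0, 0.5
for _ in range(50):
    s += term
    term *= -0.5
print(s)  # ~0.3333333333333333
```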
Cauchy product
In mathematics, more specifically in mathematical analysis, the Cauchy product is the discrete convolution of two infinite series. It is named after the French mathematician Augustin-Louis Cauchy. The Cauchy product may apply to infinite series or power series. When people apply it to finite sequences or finite series, that can be seen merely as a particular case of a product of series with a finite number of non-zero coefficients (see discrete convolution).
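The convolution formula is $c_n = \sum_{k=0}^{n} a_k b_{n-k}$. A minimal Python sketch (the function name cauchy_product is ours): multiplying the geometric series by itself, the product coefficients come out as $c_n = n + 1$, matching $(1-x)^{-2} = \sum (n+1)x^n$.

```python
# Cauchy product as discrete convolution of coefficient sequences:
# c_n = sum_{k=0}^{n} a_k * b_{n-k}. With a = b = (1, 1, 1, ...)
# (the geometric series), the product coefficients are c_n = n + 1.
def cauchy_product(a, b):
    n = min(len(a), len(b))
    return [sum(a[k] * b[i - k] for k in range(i + 1)) for i in range(n)]

ones = [1] * 6
print(cauchy_product(ones, ones))  # [1, 2, 3, 4, 5, 6]
```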
Series (mathematics)
In mathematics, a series is, roughly speaking, the operation of adding infinitely many quantities, one after the other, to a given starting quantity. The study of series is a major part of calculus and its generalization, mathematical analysis. Series are used in most areas of mathematics, even for studying finite structures (such as in combinatorics) through generating functions. In addition to their ubiquity in mathematics, infinite series are also widely used in other quantitative disciplines such as physics, computer science, statistics and finance.
Absolute convergence
In mathematics, an infinite series of numbers is said to converge absolutely (or to be absolutely convergent) if the sum of the absolute values of the summands is finite. More precisely, a real or complex series $\sum_{n=0}^{\infty} a_n$ is said to converge absolutely if $\sum_{n=0}^{\infty} |a_n| = L$ for some real number $L$. Similarly, an improper integral of a function, $\int_0^{\infty} f(x)\,dx$, is said to converge absolutely if the integral of the absolute value of the integrand is finite, that is, if $\int_0^{\infty} |f(x)|\,dx = L$. Absolute convergence is important for the study of infinite series because its definition is strong enough to guarantee properties of finite sums that not all convergent series possess: a convergent series that is not absolutely convergent is called conditionally convergent, while absolutely convergent series behave "nicely".
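A numerical contrast (our sketch): ∑ (−1)ⁿ/n² converges absolutely, since its absolute-value series is the bounded Basel sum, while ∑ (−1)ⁿ/n converges only conditionally, since its absolute-value series is the divergent harmonic series.

```python
# Absolute vs. conditional convergence, numerically: the absolute-value
# series of sum (-1)**n / n**2 stays bounded, while that of
# sum (-1)**n / n (the harmonic series) grows like ln N without bound.
N = 100000
abs_squares = sum(1 / (n * n) for n in range(1, N + 1))  # bounded (~1.645)
abs_harmonic = sum(1 / n for n in range(1, N + 1))       # grows like ln N
print(abs_squares, abs_harmonic)
```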
Harmonic series (mathematics)
In mathematics, the harmonic series is the infinite series formed by summing all positive unit fractions: $\sum_{n=1}^{\infty} \frac{1}{n} = 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \cdots.$ The first $n$ terms of the series sum to approximately $\ln n + \gamma$, where $\ln$ is the natural logarithm and $\gamma \approx 0.577$ is the Euler–Mascheroni constant. Because the logarithm has arbitrarily large values, the harmonic series does not have a finite limit: it is a divergent series. Its divergence was proven in the 14th century by Nicole Oresme using a precursor to the Cauchy condensation test for the convergence of infinite series.
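The logarithmic growth is easy to see numerically; a short Python sketch (ours) compares the partial sums $H_n$ with $\ln n + \gamma$:

```python
import math

# Harmonic partial sums H_n versus the approximation ln(n) + gamma
# quoted above (gamma is the Euler-Mascheroni constant, hardcoded here).
gamma = 0.5772156649015329
for n in (10, 1000, 100000):
    H = sum(1 / k for k in range(1, n + 1))
    print(n, H, math.log(n) + gamma)
```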
Monotone convergence theorem
In the mathematical field of real analysis, the monotone convergence theorem is any of a number of related theorems proving the convergence of monotonic sequences (sequences that are non-decreasing or non-increasing) that are also bounded. Informally, the theorems state that if a sequence is increasing and bounded above, then it converges to its supremum; in the same way, if a sequence is decreasing and bounded below, it converges to its infimum.
Root test
In mathematics, the root test is a criterion for the convergence (a convergence test) of an infinite series. It depends on the quantity $\limsup_{n \to \infty} \sqrt[n]{|a_n|},$ where $a_n$ are the terms of the series, and states that the series converges absolutely if this quantity is less than one, but diverges if it is greater than one. It is particularly useful in connection with power series. The root test was developed first by Augustin-Louis Cauchy who published it in his textbook Cours d'analyse (1821). Thus, it is sometimes known as the Cauchy root test or Cauchy's radical test.
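For instance (a minimal Python sketch we added): with $a_n = n/2^n$, the $n$-th root is $n^{1/n}/2 \to 1/2 < 1$, so the series converges absolutely.

```python
# Root test sketch: for a_n = n / 2**n, the n-th root of |a_n|
# is n**(1/n) / 2, which tends to 1/2 < 1, so the series converges.
for n in (10, 100, 1000):
    a_n = n / 2 ** n
    print(n, a_n ** (1 / n))  # approaches 0.5
```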
Divergent series
In mathematics, a divergent series is an infinite series that is not convergent, meaning that the infinite sequence of the partial sums of the series does not have a finite limit. If a series converges, the individual terms of the series must approach zero. Thus any series in which the individual terms do not approach zero diverges. However, convergence is a stronger condition: not all series whose terms approach zero converge. A counterexample is the harmonic series $1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \cdots = \sum_{n=1}^{\infty} \frac{1}{n}.$ The divergence of the harmonic series was proven by the medieval mathematician Nicole Oresme.
Binary logarithm
In mathematics, the binary logarithm ($\log_2 n$) is the power to which the number 2 must be raised to obtain the value $n$. That is, for any real number $x$, $x = \log_2 n \iff 2^x = n.$ For example, the binary logarithm of 1 is 0, the binary logarithm of 2 is 1, the binary logarithm of 4 is 2, and the binary logarithm of 32 is 5. The binary logarithm is the logarithm to the base 2 and is the inverse function of the power of two function. As well as $\log_2$, an alternative notation for the binary logarithm is lb (the notation preferred by ISO 31-11 and ISO 80000-2).
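In Python the binary logarithm is available directly as math.log2; a quick check of the examples above (our sketch):

```python
import math

# Binary logarithm: log2(n) is the exponent x with 2**x == n.
for n in (1, 2, 4, 32):
    print(n, math.log2(n))   # 0, 1, 2, 5 -- matching the examples above
print(math.log2(10))         # ~3.3219, not an integer
```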
Conditional convergence
In mathematics, a series or integral is said to be conditionally convergent if it converges, but it does not converge absolutely. More precisely, a series of real numbers $\sum_{n=0}^{\infty} a_n$ is said to converge conditionally if $\lim_{m \to \infty} \sum_{n=0}^{m} a_n$ exists (as a finite real number, i.e. not $\infty$ or $-\infty$), but $\sum_{n=0}^{\infty} |a_n| = \infty.$ A classic example is the alternating harmonic series given by $1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots = \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n},$ which converges to $\ln 2$, but is not absolutely convergent (see Harmonic series). Bernhard Riemann proved that a conditionally convergent series may be rearranged to converge to any value at all, including ∞ or −∞; see Riemann series theorem.
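Riemann's rearrangement can be demonstrated with a greedy scheme (a Python sketch we added; the target value 1.5 is arbitrary): take positive terms of the alternating harmonic series until the running sum exceeds the target, then negative terms until it drops below, and repeat.

```python
import math

# Riemann rearrangement, sketched: greedily reorder the terms of the
# conditionally convergent alternating harmonic series so the partial
# sums approach an arbitrary target (here 1.5 instead of ln 2).
target = 1.5
pos, neg = 1, 2          # next odd (positive) and even (negative) denominators
s = 0.0
for _ in range(100000):
    if s <= target:
        s += 1 / pos     # add +1/odd terms while at or below the target
        pos += 2
    else:
        s -= 1 / neg     # add -1/even terms while above it
        neg += 2
print(s, "vs ln 2 =", math.log(2))
```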
Hyperbolic functions
In mathematics, hyperbolic functions are analogues of the ordinary trigonometric functions, but defined using the hyperbola rather than the circle. Just as the points (cos t, sin t) form a circle with a unit radius, the points (cosh t, sinh t) form the right half of the unit hyperbola. Also, similarly to how the derivatives of sin(t) and cos(t) are cos(t) and −sin(t) respectively, the derivatives of sinh(t) and cosh(t) are cosh(t) and +sinh(t) respectively. Hyperbolic functions occur in the calculations of angles and distances in hyperbolic geometry.
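The hyperbola claim corresponds to the identity cosh²t − sinh²t = 1, the hyperbolic analogue of cos²t + sin²t = 1; a quick numerical check (our sketch):

```python
import math

# The points (cosh t, sinh t) lie on the unit hyperbola:
# cosh(t)**2 - sinh(t)**2 == 1 for every real t.
for t in (-1.0, 0.0, 2.5):
    print(t, math.cosh(t) ** 2 - math.sinh(t) ** 2)  # all ~1.0
```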
Power of two
A power of two is a number of the form $2^n$ where $n$ is an integer, that is, the result of exponentiation with number two as the base and integer $n$ as the exponent. In a context where only integers are considered, $n$ is restricted to non-negative values, so the powers of two are 1, 2, and 2 multiplied by itself a certain number of times. The first ten powers of 2 for non-negative values of $n$ are: 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, ... Because two is the base of the binary numeral system, powers of two are common in computer science.
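Because of the binary connection, non-negative integer powers of two can be computed with a bit shift; a one-line Python check of the list above (our sketch):

```python
# Powers of two via bit shifts: 1 << n computes 2**n for integer n >= 0.
print([1 << n for n in range(10)])  # [1, 2, 4, 8, 16, 32, 64, 128, 256, 512]
```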
Power series
In mathematics, a power series (in one variable) is an infinite series of the form $\sum_{n=0}^{\infty} a_n (x - c)^n,$ where $a_n$ represents the coefficient of the $n$th term and $c$ is a constant. Power series are useful in mathematical analysis, where they arise as Taylor series of infinitely differentiable functions. In fact, Borel's theorem implies that every power series is the Taylor series of some smooth function. In many situations, $c$ (the center of the series) is equal to zero, for instance when considering a Maclaurin series.
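As a concrete example (a Python sketch we added; the function name exp_series is ours), truncating the Maclaurin series $\sum_{n \ge 0} x^n/n!$, a power series centered at $c = 0$ with $a_n = 1/n!$, gives a good approximation of $e^x$ near the center:

```python
from math import factorial, exp

# Truncated Maclaurin series for exp: sum_{n=0}^{terms-1} x**n / n!.
def exp_series(x, terms=20):
    return sum(x ** n / factorial(n) for n in range(terms))

print(exp_series(1.0), exp(1.0))  # both ~2.718281828...
```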