Method of moments (electromagnetics): The method of moments (MoM), also known as the moment method and the method of weighted residuals, is a numerical method in computational electromagnetics. It is used in computer programs that simulate the interaction of electromagnetic fields, such as radio waves, with matter, for example antenna simulation programs like NEC that calculate the radiation pattern of an antenna. Generally a frequency-domain method, it works by applying appropriate boundary conditions to project an integral-equation formulation of the problem onto a system of linear equations.
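As a toy illustration of the procedure (a hypothetical electrostatic example with made-up parameters, not NEC's electrodynamic formulation), the sketch below finds the charge distribution on a thin wire held at a fixed potential: pulse basis functions and point matching turn the integral equation into a dense linear system $Zq = V$, which is solved directly.

```python
import numpy as np

# Minimal MoM sketch (hypothetical electrostatic problem, not NEC's
# formulation): find the line charge density on a thin straight wire
# held at 1 V, using pulse basis functions and point matching.
eps0 = 8.854e-12                 # vacuum permittivity (F/m)
L, a, N = 1.0, 1e-3, 101         # wire length (m), radius (m), segments
dx = L / N
x = (np.arange(N) + 0.5) * dx    # segment midpoints = match points

# Z[i, j]: potential at match point i due to a unit pulse of charge
# density on segment j; the wire radius regularizes the self term.
xi, xj = np.meshgrid(x, x, indexing="ij")
Z = dx / (4 * np.pi * eps0 * np.sqrt((xi - xj) ** 2 + a ** 2))

V = np.ones(N)                   # boundary condition: wire at 1 volt
q = np.linalg.solve(Z, V)        # charge density per segment (C/m)
print("capacitance (F):", q.sum() * dx)   # total charge / 1 V
```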
Integral equation: In mathematics, integral equations are equations in which an unknown function appears under an integral sign. In mathematical notation, an integral equation may thus be expressed in the form $f\big(x, u(x), I(u)(x)\big) = 0$, where $I$ is an integral operator acting on $u$. Hence, integral equations may be viewed as the analog of differential equations: where a differential equation involves derivatives of the unknown function, an integral equation contains integrals of it.
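As a hedged sketch of how such an equation is handled numerically, the following solves a toy Fredholm equation of the second kind (kernel, forcing term, and grid size all chosen here for illustration) by replacing the integral with a quadrature rule, which reduces it to a linear system:

```python
import numpy as np

# Sketch: solve the Fredholm equation of the second kind
#   u(x) = f(x) + integral_0^1 x*t*u(t) dt,   f(x) = (2/3)x,
# whose exact solution is u(x) = x, by quadrature (Nystrom method).
n = 201
x, dx = np.linspace(0.0, 1.0, n, retstep=True)
w = np.full(n, dx); w[0] = w[-1] = dx / 2    # trapezoid weights

K = np.outer(x, x)                            # kernel K(x, t) = x*t
f = (2.0 / 3.0) * x
u = np.linalg.solve(np.eye(n) - K * w, f)     # (I - K W) u = f
print("max error:", np.abs(u - x).max())      # small (quadrature error)
```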
Matrix (mathematics): In mathematics, a matrix (plural: matrices) is a rectangular array or table of numbers, symbols, or expressions, arranged in rows and columns, which is used to represent a mathematical object or a property of such an object. For example, $\begin{bmatrix} 1 & 9 & -13 \\ 20 & 5 & -6 \end{bmatrix}$ is a matrix with two rows and three columns, often referred to as a "two-by-three matrix", a "$2 \times 3$ matrix", or a matrix of dimension $2 \times 3$. Without further specification, matrices represent linear maps and allow explicit computations in linear algebra.
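A minimal numerical illustration of that last point (values chosen arbitrarily): the same $2 \times 3$ array acts as a linear map from $\mathbb{R}^3$ to $\mathbb{R}^2$ when multiplied against a 3-vector.

```python
import numpy as np

# A 2x3 matrix represents a linear map from R^3 to R^2:
# multiplying by a 3-vector yields a 2-vector.
A = np.array([[1, 9, -13],
              [20, 5, -6]])
v = np.array([1.0, 0.0, 2.0])
print(A @ v)                     # [-25.   8.]
```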
Invertible matrix: In linear algebra, an $n \times n$ square matrix $A$ is called invertible (also nonsingular, nondegenerate, or, rarely, regular) if there exists an $n \times n$ square matrix $B$ such that $AB = BA = I_n$, where $I_n$ denotes the $n \times n$ identity matrix and the multiplication used is ordinary matrix multiplication. If this is the case, then the matrix $B$ is uniquely determined by $A$ and is called the (multiplicative) inverse of $A$, denoted $A^{-1}$. Matrix inversion is the process of finding the matrix $B$ that satisfies this equation for a given invertible matrix $A$.
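A quick numerical check, with an arbitrarily chosen $2 \times 2$ example, that the computed inverse satisfies $AB = BA = I_n$:

```python
import numpy as np

# Compute B = A^{-1} and verify AB = BA = I numerically.
A = np.array([[2.0, 1.0],
              [5.0, 3.0]])       # det = 1, so A is invertible
B = np.linalg.inv(A)
I2 = np.eye(2)
print(np.allclose(A @ B, I2), np.allclose(B @ A, I2))   # True True
```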
Matrix decomposition: In the mathematical discipline of linear algebra, a matrix decomposition or matrix factorization is a factorization of a matrix into a product of matrices. There are many different matrix decompositions; each finds use among a particular class of problems. In numerical analysis, different decompositions are used to implement efficient matrix algorithms. For instance, when solving a system of linear equations $A\mathbf{x} = \mathbf{b}$, the matrix $A$ can be decomposed via the LU decomposition.
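A short sketch of that use case, with an arbitrary small system: factor $A$ once, then reuse the factors cheaply for each right-hand side.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Solve Ax = b by first factoring A = PLU; the factorization can then
# be reused for many right-hand sides.
A = np.array([[4.0, 3.0],
              [6.0, 3.0]])
b = np.array([10.0, 12.0])
lu, piv = lu_factor(A)           # one O(n^3) factorization
x = lu_solve((lu, piv), b)       # O(n^2) solve per right-hand side
print(x, np.allclose(A @ x, b))  # [1. 2.] True
```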
Matrix multiplication: In mathematics, particularly in linear algebra, matrix multiplication is a binary operation that produces a matrix from two matrices. For matrix multiplication, the number of columns in the first matrix must be equal to the number of rows in the second matrix. The resulting matrix, known as the matrix product, has the number of rows of the first and the number of columns of the second matrix. The product of matrices $A$ and $B$ is denoted $AB$.
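A small shape check of the dimension rule (arrays chosen arbitrarily): a $2 \times 3$ matrix times a $3 \times 2$ matrix yields a $2 \times 2$ product.

```python
import numpy as np

# The inner dimensions must agree: (2x3) @ (3x2) -> (2x2).
A = np.arange(6).reshape(2, 3)   # 2 rows, 3 columns
B = np.arange(6).reshape(3, 2)   # 3 rows, 2 columns
print((A @ B).shape)             # (2, 2)
# B @ A is also defined here but gives a (3, 3) matrix, illustrating
# that matrix multiplication is not commutative.
print((B @ A).shape)             # (3, 3)
```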
Singularity (mathematics): In mathematics, a singularity is a point at which a given mathematical object is not defined, or a point where the mathematical object ceases to be well-behaved in some particular way, such as by lacking differentiability or analyticity. For example, the function $f(x) = \frac{1}{x}$ has a singularity at $x = 0$, where the value of the function is not defined, as it involves a division by zero. The absolute value function $g(x) = |x|$ also has a singularity at $x = 0$, since it is not differentiable there. The algebraic curve defined by $\{(x, y) : y^3 - x^2 = 0\}$ in the $(x, y)$ coordinate system has a singularity (called a cusp) at $(0, 0)$.
Magnetic field: A magnetic field is a vector field that describes the magnetic influence on moving electric charges, electric currents, and magnetic materials. A moving charge in a magnetic field experiences a force perpendicular to its own velocity and to the magnetic field. A permanent magnet's magnetic field pulls on ferromagnetic materials such as iron, and attracts or repels other magnets.
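That perpendicular force is the magnetic part of the Lorentz force, $\mathbf{F} = q\,\mathbf{v} \times \mathbf{B}$; a quick numerical check, with values chosen purely for illustration:

```python
import numpy as np

# The magnetic force on a moving charge is F = q (v x B), which is
# perpendicular to both the velocity and the field.
q = 1.602e-19                    # charge of a proton (C)
v = np.array([1e5, 0.0, 0.0])    # velocity (m/s)
B = np.array([0.0, 0.0, 1.5])    # magnetic flux density (T)
F = q * np.cross(v, B)
print(F)                             # force along -y
print(np.dot(F, v), np.dot(F, B))    # both 0.0: F is perpendicular
```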
Toeplitz matrix: In linear algebra, a Toeplitz matrix or diagonal-constant matrix, named after Otto Toeplitz, is a matrix in which each descending diagonal from left to right is constant. For instance, the following matrix is a Toeplitz matrix: $\begin{bmatrix} 1 & 5 & 8 & 9 \\ 2 & 1 & 5 & 8 \\ 3 & 2 & 1 & 5 \\ 4 & 3 & 2 & 1 \end{bmatrix}$. Any $n \times n$ matrix $A$ of the form $A = \begin{bmatrix} a_0 & a_{-1} & a_{-2} & \cdots & a_{-(n-1)} \\ a_1 & a_0 & a_{-1} & \ddots & \vdots \\ a_2 & a_1 & \ddots & \ddots & a_{-2} \\ \vdots & \ddots & \ddots & \ddots & a_{-1} \\ a_{n-1} & \cdots & a_2 & a_1 & a_0 \end{bmatrix}$ is a Toeplitz matrix. If the $i,j$ element of $A$ is denoted $A_{i,j}$, then we have $A_{i,j} = A_{i+1,j+1} = a_{i-j}$. A Toeplitz matrix is not necessarily square. A matrix equation of the form $Ax = b$ is called a Toeplitz system if $A$ is a Toeplitz matrix. If $A$ is an $n \times n$ Toeplitz matrix, then the system involves at most $2n - 1$ unique values, rather than $n^2$.
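SciPy builds Toeplitz matrices from exactly those $2n - 1$ values and solves Toeplitz systems with the Levinson recursion; a short sketch with arbitrarily chosen entries:

```python
import numpy as np
from scipy.linalg import toeplitz, solve_toeplitz

# An n x n Toeplitz matrix is fully described by its first column and
# first row (2n - 1 values); Levinson recursion exploits this to solve
# Toeplitz systems in O(n^2) instead of O(n^3).
c = np.array([1.0, 2.0, 3.0, 4.0])   # first column
r = np.array([1.0, 5.0, 8.0, 9.0])   # first row (r[0] must equal c[0])
T = toeplitz(c, r)
b = np.ones(4)
x = solve_toeplitz((c, r), b)
print(np.allclose(T @ x, b))         # True
```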
Matrix exponential: In mathematics, the matrix exponential is a matrix function on square matrices analogous to the ordinary exponential function. It is used to solve systems of linear differential equations. In the theory of Lie groups, the matrix exponential gives the exponential map between a matrix Lie algebra and the corresponding Lie group. Let $X$ be an $n \times n$ real or complex matrix. The exponential of $X$, denoted by $e^X$ or $\exp(X)$, is the $n \times n$ matrix given by the power series $e^X = \sum_{k=0}^{\infty} \frac{1}{k!} X^k$, where $X^0$ is defined to be the identity matrix with the same dimensions as $X$.
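A hedged sketch comparing the truncated power series against SciPy's expm, using the $2 \times 2$ rotation generator as the example matrix:

```python
import numpy as np
from scipy.linalg import expm

# The truncated power series sum_{k<K} X^k / k! approaches scipy's
# expm; for this X, e^X is a rotation by 1 radian.
X = np.array([[0.0, 1.0],
              [-1.0, 0.0]])      # generator of 2D rotations
series = np.zeros_like(X)
term = np.eye(2)                 # X^0 = I
for k in range(1, 21):
    series += term               # accumulate X^{k-1} / (k-1)!
    term = term @ X / k
print(np.allclose(series, expm(X)))   # True
print(expm(X))                        # [[cos 1, sin 1], [-sin 1, cos 1]]
```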
Integral: In mathematics, an integral is the continuous analog of a sum, which is used to calculate areas, volumes, and their generalizations. Integration, the process of computing an integral, is one of the two fundamental operations of calculus, the other being differentiation. Integration started as a method to solve problems in mathematics and physics, such as finding the area under a curve, or determining displacement from velocity. Today integration is used in a wide variety of scientific fields.
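A minimal numerical illustration of both classic uses, with integrands chosen purely for illustration: area under a curve and displacement from a velocity profile.

```python
from scipy.integrate import quad

# Area under a curve: the integral of x^2 on [0, 1] is 1/3.
area, err = quad(lambda x: x**2, 0.0, 1.0)
print(area)                      # 0.3333...

# Displacement from velocity: v(t) = 3t^2 over t in [0, 2].
displacement, _ = quad(lambda t: 3.0 * t**2, 0.0, 2.0)
print(displacement)              # 8.0 (metres, if v is in m/s)
```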
Sparse matrix: In numerical analysis and scientific computing, a sparse matrix or sparse array is a matrix in which most of the elements are zero. There is no strict definition regarding the proportion of zero-value elements for a matrix to qualify as sparse, but a common criterion is that the number of non-zero elements is roughly equal to the number of rows or columns. By contrast, if most of the elements are non-zero, the matrix is considered dense. The number of zero-valued elements divided by the total number of elements (e.g., $mn$ for an $m \times n$ matrix) is sometimes referred to as the sparsity of the matrix.
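A short SciPy sketch, with size and density chosen arbitrarily, that stores a large, mostly-zero matrix in compressed sparse row (CSR) form and measures its sparsity:

```python
import numpy as np
from scipy.sparse import random as sparse_random

# A 1000x1000 matrix with ~0.1% non-zeros stored in CSR form; only
# the non-zero entries (plus index arrays) are kept in memory.
A = sparse_random(1000, 1000, density=0.001, format="csr", random_state=0)
sparsity = 1.0 - A.nnz / (A.shape[0] * A.shape[1])
print(A.nnz, f"sparsity = {sparsity:.4f}")   # ~1000 non-zeros

x = np.ones(1000)
y = A @ x      # matrix-vector product touches only stored entries
```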
Removable singularity: In complex analysis, a removable singularity of a holomorphic function is a point at which the function is undefined, but it is possible to redefine the function at that point in such a way that the resulting function is regular in a neighbourhood of that point. For instance, the (unnormalized) sinc function, as defined by $\operatorname{sinc}(z) = \frac{\sin z}{z}$, has a singularity at $z = 0$. This singularity can be removed by defining $\operatorname{sinc}(0) := 1$, which is the limit of $\operatorname{sinc}$ as $z$ tends to 0. The resulting function is holomorphic.
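A small numerical version of the same repair. Note that NumPy's built-in np.sinc is the normalized variant $\sin(\pi x)/(\pi x)$, so the unnormalized function is defined by hand here:

```python
import numpy as np

# Unnormalized sinc with the removable singularity at z = 0 filled in
# by its limit value 1; the inner np.where avoids dividing by zero.
def sinc_unnormalized(z):
    z = np.asarray(z, dtype=float)
    safe = np.where(z == 0.0, 1.0, z)
    return np.where(z == 0.0, 1.0, np.sin(z) / safe)

print(sinc_unnormalized([0.0, 1e-8, np.pi]))   # [1., ~1., ~0.]
```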
Incomplete gamma function: In mathematics, the upper and lower incomplete gamma functions are types of special functions which arise as solutions to various mathematical problems such as certain integrals. Their respective names stem from their integral definitions, which are defined similarly to the gamma function but with different or "incomplete" integral limits. The gamma function is defined as an integral from zero to infinity. This contrasts with the lower incomplete gamma function, which is defined as an integral from zero to a variable upper limit; similarly, the upper incomplete gamma function is defined as an integral from a variable lower limit to infinity.
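SciPy exposes the regularized forms of both functions; a short sketch, with parameter values chosen arbitrarily, showing that the lower and upper pieces recombine into the complete gamma function:

```python
from scipy.special import gamma, gammainc, gammaincc

# scipy exposes the *regularized* incomplete gamma functions:
#   gammainc(a, x)  = lower incomplete gamma / Gamma(a)
#   gammaincc(a, x) = upper incomplete gamma / Gamma(a)
a, x = 2.5, 1.7
lower = gammainc(a, x) * gamma(a)    # unregularized lower gamma(a, x)
upper = gammaincc(a, x) * gamma(a)   # unregularized upper Gamma(a, x)
print(lower + upper, gamma(a))       # the two pieces sum to Gamma(a)
```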
Maxwell's equations: Maxwell's equations, or Maxwell–Heaviside equations, are a set of coupled partial differential equations that, together with the Lorentz force law, form the foundation of classical electromagnetism, classical optics, and electric circuits. The equations provide a mathematical model for electric, optical, and radio technologies, such as power generation, electric motors, wireless communication, lenses, radar, etc. They describe how electric and magnetic fields are generated by charges, currents, and changes of the fields.
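For reference, the four equations in differential (SI) form:

$$
\begin{aligned}
\nabla \cdot \mathbf{E} &= \frac{\rho}{\varepsilon_0} &&\text{(Gauss's law)}\\
\nabla \cdot \mathbf{B} &= 0 &&\text{(Gauss's law for magnetism)}\\
\nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t} &&\text{(Faraday's law of induction)}\\
\nabla \times \mathbf{B} &= \mu_0\left(\mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}\right) &&\text{(Ampère's law with Maxwell's correction)}
\end{aligned}
$$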
Holomorphic function: In mathematics, a holomorphic function is a complex-valued function of one or more complex variables that is complex differentiable in a neighbourhood of each point in a domain in complex coordinate space $\mathbb{C}^n$. The existence of a complex derivative in a neighbourhood is a very strong condition: it implies that a holomorphic function is infinitely differentiable and locally equal to its own Taylor series (analytic). Holomorphic functions are the central objects of study in complex analysis.
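A small symbolic check, with the example function chosen for illustration: complex differentiability of $f(z) = z^2$ is reflected in the Cauchy-Riemann equations for its real and imaginary parts.

```python
import sympy as sp

# Verify the Cauchy-Riemann equations for f(z) = z^2, whose real and
# imaginary parts are u = x^2 - y^2 and v = 2xy.
x, y = sp.symbols("x y", real=True)
f = (x + sp.I * y) ** 2
u, v = sp.re(f), sp.im(f)
print(sp.simplify(sp.diff(u, x) - sp.diff(v, y)))   # 0  (u_x = v_y)
print(sp.simplify(sp.diff(u, y) + sp.diff(v, x)))   # 0  (u_y = -v_x)
```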
Elliptic integral: In integral calculus, an elliptic integral is one of a number of related functions defined as the value of certain integrals, which were first studied by Giulio Fagnano and Leonhard Euler (1750). Their name originates from their originally arising in connection with the problem of finding the arc length of an ellipse. Modern mathematics defines an "elliptic integral" as any function $f$ which can be expressed in the form $f(x) = \int_{c}^{x} R\left(t, \sqrt{P(t)}\right) \, dt$, where $R$ is a rational function of its two arguments, $P$ is a polynomial of degree 3 or 4 with no repeated roots, and $c$ is a constant.
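A hedged sketch of the namesake computation, the perimeter of an ellipse, cross-checked against direct quadrature. Axis lengths are chosen arbitrarily; note that SciPy's ellipe takes the parameter $m = e^2$, not the modulus $e$:

```python
import numpy as np
from scipy.special import ellipe
from scipy.integrate import quad

# Full perimeter of an ellipse: 4*a*E(m) with m = 1 - b^2/a^2.
a_ax, b_ax = 2.0, 1.0
m = 1.0 - (b_ax / a_ax) ** 2
perimeter = 4.0 * a_ax * ellipe(m)

# Cross-check against direct quadrature of the arc-length integrand.
arc = lambda t: np.hypot(a_ax * np.sin(t), b_ax * np.cos(t))
check, _ = quad(arc, 0.0, 2.0 * np.pi)
print(perimeter, check)          # both ~9.688
```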
Magnetic vector potential: In classical electromagnetism, magnetic vector potential (often called A) is the vector quantity defined so that its curl is equal to the magnetic field: $\nabla \times \mathbf{A} = \mathbf{B}$. Together with the electric potential φ, the magnetic vector potential can be used to specify the electric field E as well. Therefore, many equations of electromagnetism can be written either in terms of the fields E and B, or equivalently in terms of the potentials φ and A. In more advanced theories such as quantum mechanics, most equations use potentials rather than fields.
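A small symbolic check, with a uniform field chosen for illustration: for $\mathbf{B} = B_0 \hat{z}$, the standard choice $\mathbf{A} = \tfrac{1}{2}\mathbf{B} \times \mathbf{r}$ indeed has curl equal to $\mathbf{B}$.

```python
import sympy as sp
from sympy.vector import CoordSys3D, curl

# For a uniform field B = B0 z-hat, one valid vector potential is
# A = (1/2) B x r = (B0/2) * (-y, x, 0); its curl recovers B.
N = CoordSys3D("N")
B0 = sp.symbols("B0")
A = sp.Rational(1, 2) * B0 * (-N.y * N.i + N.x * N.j)
print(curl(A))                   # B0*N.k
```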
Closed-form expression: In mathematics, an expression is in closed form if it is formed with constants, variables, and a finite set of basic functions connected by arithmetic operations (+, −, ×, ÷, and integer powers) and function composition. Commonly, the allowed functions are nth root, exponential function, logarithm, and trigonometric functions. However, the set of basic functions depends on the context.
Cauchy's integral formula: In mathematics, Cauchy's integral formula, named after Augustin-Louis Cauchy, is a central statement in complex analysis. It expresses the fact that a holomorphic function defined on a disk is completely determined by its values on the boundary of the disk, and it provides integral formulas for all derivatives of a holomorphic function. Cauchy's formula shows that, in complex analysis, "differentiation is equivalent to integration": complex differentiation, like integration, behaves well under uniform limits, a result that does not hold in real analysis.
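A hedged numerical check of the formula, with test function and evaluation point chosen arbitrarily: sampling $f(z)/(z - a)$ around the unit circle and applying the trapezoid rule (spectrally accurate for periodic integrands) recovers $f(a)$.

```python
import numpy as np

# Verify f(a) = (1/2*pi*i) * contour integral of f(z)/(z - a) dz for
# f(z) = e^z, integrating over the unit circle.
f = np.exp
a = 0.3 + 0.2j                   # any point inside the contour
n = 256
t = 2 * np.pi * np.arange(n) / n
z = np.exp(1j * t)               # points on the unit circle
dz = 1j * z * (2 * np.pi / n)    # dz = i e^{it} dt
integral = np.sum(f(z) / (z - a) * dz) / (2j * np.pi)
print(integral, f(a))            # both ~ e^{0.3 + 0.2i}
```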