Modern portfolio theory: Modern portfolio theory (MPT), or mean-variance analysis, is a mathematical framework for assembling a portfolio of assets such that the expected return is maximized for a given level of risk. It is a formalization and extension of diversification in investing, the idea that owning different kinds of financial assets is less risky than owning only one type. Its key insight is that an asset's risk and return should not be assessed by itself, but by how it contributes to a portfolio's overall risk and return.
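As a minimal illustration of the mean-variance quantities MPT works with, the sketch below (with hypothetical returns, covariances, and weights) computes a portfolio's expected return w'mu and variance w'Sigma w:

```python
import numpy as np

# Hypothetical inputs: expected returns and covariance matrix for three assets.
mu = np.array([0.08, 0.12, 0.05])           # expected annual returns
cov = np.array([[0.040, 0.006, 0.002],
                [0.006, 0.090, 0.004],
                [0.002, 0.004, 0.010]])     # covariance of returns

w = np.array([0.5, 0.3, 0.2])               # portfolio weights, summing to 1

port_return = w @ mu                        # E[r_p] = w' mu
port_variance = w @ cov @ w                 # Var(r_p) = w' Sigma w
print(port_return, np.sqrt(port_variance))  # expected return and volatility
```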
Nonlinear dimensionality reduction: Nonlinear dimensionality reduction, also known as manifold learning, refers to various related techniques that aim to project high-dimensional data onto lower-dimensional latent manifolds, with the goal of either visualizing the data in the low-dimensional space, or learning the mapping (either from the high-dimensional space to the low-dimensional embedding or vice versa) itself. The techniques described below can be understood as generalizations of linear decomposition methods used for dimensionality reduction, such as singular value decomposition and principal component analysis.
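As a point of reference for those linear baselines, here is a minimal numpy sketch of PCA computed via the SVD, applied to synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))             # 200 samples in 10 dimensions

Xc = X - X.mean(axis=0)                    # center the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 2
Z = Xc @ Vt[:k].T                          # project onto top-k principal axes
print(Z.shape)                             # (200, 2) low-dimensional embedding
```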
Portfolio optimization: Portfolio optimization is the process of selecting the best portfolio (asset distribution), out of the set of all portfolios being considered, according to some objective. The objective typically maximizes factors such as expected return, and minimizes costs like financial risk. Factors being considered may range from tangible (such as assets, liabilities, earnings or other fundamentals) to intangible (such as selective divestment). Modern portfolio theory was introduced in a 1952 doctoral thesis by Harry Markowitz; see Markowitz model.
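One special case of the Markowitz objective has a closed form: with no constraint other than the weights summing to one, the minimum-variance portfolio is w = Σ⁻¹1 / (1ᵀΣ⁻¹1). A sketch with hypothetical numbers:

```python
import numpy as np

# Hypothetical covariance matrix for three assets.
cov = np.array([[0.040, 0.006, 0.002],
                [0.006, 0.090, 0.004],
                [0.002, 0.004, 0.010]])

ones = np.ones(len(cov))
inv = np.linalg.inv(cov)
w_min = inv @ ones / (ones @ inv @ ones)   # closed-form minimum-variance weights
print(w_min, w_min.sum())                  # weights sum to 1
```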
Curse of dimensionality: The curse of dimensionality refers to various phenomena that arise when analyzing and organizing data in high-dimensional spaces that do not occur in low-dimensional settings such as the three-dimensional physical space of everyday experience. The expression was coined by Richard E. Bellman when considering problems in dynamic programming. Dimensionally cursed phenomena occur in domains such as numerical analysis, sampling, combinatorics, machine learning, data mining and databases.
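One concrete symptom is distance concentration: as dimension grows, distances between random points become nearly equal relative to their mean, which undermines nearest-neighbor reasoning. A quick synthetic experiment:

```python
import numpy as np

rng = np.random.default_rng(0)
for d in (2, 10, 100, 1000):
    X = rng.random((500, d))               # 500 points in the unit d-cube
    p = rng.random(d)                      # a reference point
    dist = np.linalg.norm(X - p, axis=1)
    # As d grows, the spread of distances shrinks relative to their mean.
    print(d, dist.std() / dist.mean())
```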
Dimensionality reduction: Dimensionality reduction, or dimension reduction, is the transformation of data from a high-dimensional space into a low-dimensional space so that the low-dimensional representation retains some meaningful properties of the original data, ideally close to its intrinsic dimension. Working in high-dimensional spaces can be undesirable for many reasons; raw data are often sparse as a consequence of the curse of dimensionality, and analyzing the data is usually computationally intractable (hard to control or deal with).
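As one illustration of such a transformation, a random Gaussian projection approximately preserves pairwise distances (the Johnson-Lindenstrauss lemma); a sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10_000))             # high-dimensional data

k = 50
R = rng.normal(size=(10_000, k)) / np.sqrt(k)  # random Gaussian projection
Z = X @ R                                      # 100 x 50 representation

# Pairwise distances are approximately preserved in the projection.
i, j = 0, 1
print(np.linalg.norm(X[i] - X[j]), np.linalg.norm(Z[i] - Z[j]))
```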
Portfolio (finance): In finance, a portfolio is a collection of investments. The term “portfolio” refers to any combination of financial assets such as stocks, bonds and cash. Portfolios may be held by individual investors or managed by financial professionals, hedge funds, banks and other financial institutions. It is a generally accepted principle that a portfolio is designed according to the investor's risk tolerance, time frame and investment objectives. The monetary value of each asset may influence the risk/reward ratio of the portfolio.
Asset pricing: In financial economics, asset pricing refers to a formal treatment and development of two main pricing principles, outlined below, together with the resultant models. There have been many models developed for different situations, but correspondingly, these stem from either general equilibrium asset pricing or rational asset pricing, the latter corresponding to risk neutral pricing.
Arbitrage pricing theory: In finance, arbitrage pricing theory (APT) is a multi-factor model for asset pricing which relates various macro-economic (systematic) risk variables to the pricing of financial assets. Proposed by economist Stephen Ross in 1976, it is widely believed to be an improved alternative to its predecessor, the Capital Asset Pricing Model (CAPM). APT is founded upon the law of one price, which suggests that within an equilibrium market, rational investors will implement arbitrage such that the equilibrium price is eventually realised.
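In standard notation (not part of the excerpt above), the APT pricing relation is linear in the factor sensitivities:

```latex
% APT pricing relation: expected return as a linear function of factor loadings.
% r_f is the risk-free rate, b_{ij} is asset i's sensitivity to factor j,
% and RP_j is the risk premium on factor j.
\mathbb{E}[r_i] = r_f + \sum_{j=1}^{k} b_{ij}\, RP_j
```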
Capital asset pricing model: In finance, the capital asset pricing model (CAPM) is a model used to determine a theoretically appropriate required rate of return of an asset, to make decisions about adding assets to a well-diversified portfolio. The model takes into account the asset's sensitivity to non-diversifiable risk (also known as systematic risk or market risk), often represented by the quantity beta (β) in the financial industry, as well as the expected return of the market and the expected return of a theoretical risk-free asset.
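In its usual form (standard notation, not quoted from the excerpt above):

```latex
% CAPM: expected return of asset i given its beta to the market portfolio m.
\mathbb{E}[r_i] = r_f + \beta_i \left( \mathbb{E}[r_m] - r_f \right),
\qquad
\beta_i = \frac{\operatorname{Cov}(r_i, r_m)}{\operatorname{Var}(r_m)}
```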
Fundamental theorem of asset pricing: The fundamental theorems of asset pricing (also: of arbitrage, of finance), in both financial economics and mathematical finance, provide necessary and sufficient conditions for a market to be arbitrage-free, and for a market to be complete. An arbitrage opportunity is a way of making money with no initial investment without any possibility of loss. Though arbitrage opportunities do exist briefly in real life, it has been said that any sensible market model must avoid this type of profit.
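Informally, and restricted to the discrete-time setting, the two theorems can be stated as:

```latex
% First fundamental theorem (informal, discrete time):
\text{no arbitrage}
  \;\Longleftrightarrow\;
  \exists\, \mathbb{Q} \sim \mathbb{P} \ \text{under which discounted prices are martingales}
% Second fundamental theorem:
\text{an arbitrage-free market is complete}
  \;\Longleftrightarrow\;
  \text{that measure } \mathbb{Q} \text{ is unique}
```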
Risk-neutral measure: In mathematical finance, a risk-neutral measure (also called an equilibrium measure, or equivalent martingale measure) is a probability measure such that each share price is exactly equal to the discounted expectation of the share price under this measure. This is heavily used in the pricing of financial derivatives due to the fundamental theorem of asset pricing, which implies that in a complete market, a derivative's price is the discounted expected value of the future payoff under the unique risk-neutral measure.
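A minimal worked example, using a hypothetical one-period binomial market: the risk-neutral probability is pinned down by requiring the stock itself to price back to its current value, and a derivative is then the discounted expected payoff under that probability:

```python
import numpy as np

# Hypothetical one-period binomial market: stock at S0 moves to u*S0 or d*S0.
S0, u, d, r, T = 100.0, 1.2, 0.8, 0.05, 1.0
K = 100.0                                  # strike of a European call

disc = np.exp(-r * T)
q = (np.exp(r * T) - d) / (u - d)          # risk-neutral up-probability

payoff_up = max(u * S0 - K, 0.0)
payoff_down = max(d * S0 - K, 0.0)

# Price = discounted expected payoff under the risk-neutral measure.
call_price = disc * (q * payoff_up + (1 - q) * payoff_down)

# Consistency check: the stock itself prices back to S0 under Q.
assert abs(disc * (q * u * S0 + (1 - q) * d * S0) - S0) < 1e-9
print(q, call_price)
```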
Financial risk management: Financial risk management is the practice of protecting economic value in a firm by managing exposure to financial risk, principally operational risk, credit risk and market risk. As for risk management more generally, financial risk management requires identifying the sources of risk, measuring them, and crafting plans to address them. Financial risk management as a "science" can be said to have been born with modern portfolio theory, particularly as initiated by Professor Harry Markowitz in 1952 with his article, "Portfolio Selection".
Rational pricing: Rational pricing is the assumption in financial economics that asset prices – and hence asset pricing models – will reflect the arbitrage-free price of the asset, as any deviation from this price will be "arbitraged away". This assumption is useful in pricing fixed income securities, particularly bonds, and is fundamental to the pricing of derivative instruments. Arbitrage is the practice of taking advantage of a state of imbalance between two (or possibly more) markets. Where this mismatch can be exploited (i.e. after transaction costs, storage costs, transport costs, dividends and the like), the arbitrageur can "lock in" a risk-free profit by purchasing and selling simultaneously in both markets.
Dimensional analysis: In engineering and science, dimensional analysis is the analysis of the relationships between different physical quantities by identifying their base quantities (such as length, mass, time, and electric current) and units of measurement (such as metres and grams) and tracking these dimensions as calculations or comparisons are performed. The term dimensional analysis is also used to refer to conversion of units from one dimensional unit to another, which can be used to evaluate scientific formulae.
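A standard worked example of such a check, using M, L, T for the base dimensions of mass, length, and time:

```latex
% Dimensional check of kinetic energy, E = \tfrac{1}{2} m v^2:
[E] = [m]\,[v]^2 = M \left(\frac{L}{T}\right)^{2} = M\,L^{2}\,T^{-2}
% This equals the dimension of work, [F][d] = (M L T^{-2})\,L = M L^{2} T^{-2},
% so the formula is dimensionally consistent.
```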
Dimension: In physics and mathematics, the dimension of a mathematical space (or object) is informally defined as the minimum number of coordinates needed to specify any point within it. Thus, a line has a dimension of one (1D) because only one coordinate is needed to specify a point on it; for example, the point at 5 on a number line. A surface, such as the boundary of a cylinder or sphere, has a dimension of two (2D) because two coordinates are needed to specify a point on it; for example, both a latitude and longitude are required to locate a point on the surface of a sphere.
Mixture model: In statistics, a mixture model is a probabilistic model for representing the presence of subpopulations within an overall population, without requiring that an observed data set should identify the sub-population to which an individual observation belongs. Formally a mixture model corresponds to the mixture distribution that represents the probability distribution of observations in the overall population.
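Written out, the mixture distribution referred to above has the standard form:

```latex
% A K-component mixture density: component weights pi_k sum to one,
% and each f_k is a component density with its own parameters theta_k.
p(x) = \sum_{k=1}^{K} \pi_k \, f_k(x \mid \theta_k),
\qquad \pi_k \ge 0, \quad \sum_{k=1}^{K} \pi_k = 1
```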
Efficient-market hypothesis: The efficient-market hypothesis (EMH) is a hypothesis in financial economics that states that asset prices reflect all available information. A direct implication is that it is impossible to "beat the market" consistently on a risk-adjusted basis since market prices should only react to new information. Because the EMH is formulated in terms of risk adjustment, it only makes testable predictions when coupled with a particular model of risk.
Low-dimensional topology: In mathematics, low-dimensional topology is the branch of topology that studies manifolds, or more generally topological spaces, of four or fewer dimensions. Representative topics are the structure theory of 3-manifolds and 4-manifolds, knot theory, and braid groups. This can be regarded as a part of geometric topology. It may also be used to refer to the study of topological spaces of dimension 1, though this is more typically considered part of continuum theory.
Clustering high-dimensional data: Clustering high-dimensional data is the cluster analysis of data with anywhere from a few dozen to many thousands of dimensions. Such high-dimensional spaces of data are often encountered in areas such as medicine, where DNA microarray technology can produce many measurements at once, and the clustering of text documents, where, if a word-frequency vector is used, the number of dimensions equals the size of the vocabulary.
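A minimal sketch of the text-clustering setup described above; scikit-learn and the toy corpus are assumptions for illustration, not part of the source:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical toy corpus; real document collections yield vectors with
# thousands of dimensions (one per vocabulary word).
docs = [
    "stocks bonds portfolio risk",
    "portfolio diversification risk return",
    "gene expression microarray sample",
    "dna microarray measurement expression",
]

X = TfidfVectorizer().fit_transform(docs)   # sparse document-term matrix
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)                               # e.g. finance docs vs. biology docs
```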
Machine learning: Machine learning (ML) is an umbrella term for solving problems for which developing algorithms by hand would be cost-prohibitive; instead, machines 'discover' their 'own' algorithms from data, without being explicitly told what to do by any human-developed algorithm. Recently, generative artificial neural networks have surpassed the results of many previous approaches.