Thermodynamic equilibrium: Thermodynamic equilibrium is an axiomatic concept of thermodynamics. It is an internal state of a single thermodynamic system, or a relation between several thermodynamic systems connected by more or less permeable or impermeable walls. In thermodynamic equilibrium there are no net macroscopic flows of matter or energy within a system or between systems. In a system that is in its own state of internal thermodynamic equilibrium, no macroscopic change occurs.
Thermodynamic equations: Thermodynamics is expressed by a mathematical framework of thermodynamic equations which relate various thermodynamic quantities and physical properties measured in a laboratory or production process. Thermodynamics is based on a fundamental set of postulates that became the laws of thermodynamics. One of the fundamental thermodynamic equations is the description of thermodynamic work in analogy to mechanical work, or weight lifted through an elevation against gravity, as defined in 1824 by French physicist Sadi Carnot.
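As a concrete illustration of thermodynamic work (a minimal sketch, not drawn from the text above; it assumes an ideal gas undergoing a quasi-static, isothermal expansion, for which W = ∫p dV = nRT ln(V₂/V₁), and the numeric values are illustrative):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def isothermal_work(n_mol: float, T_kelvin: float, v1: float, v2: float) -> float:
    """Work done BY an ideal gas expanding quasi-statically from v1 to v2 at fixed T."""
    return n_mol * R * T_kelvin * math.log(v2 / v1)

# One mole doubling its volume at room temperature:
print(f"W = {isothermal_work(1.0, 298.15, 0.010, 0.020):.0f} J")  # ~1718 J
```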
Thermodynamic system: A thermodynamic system is a body of matter and/or radiation, considered as separate from its surroundings, and studied using the laws of thermodynamics. Thermodynamic systems may be isolated, closed, or open. An isolated system exchanges no matter or energy with its surroundings, whereas a closed system does not exchange matter but may exchange heat and experience and exert forces. An open system can interact with its surroundings by exchanging both matter and energy.
Thermodynamic process: Classical thermodynamics considers three main kinds of thermodynamic process: (1) changes in a system, (2) cycles in a system, and (3) flow processes. A thermodynamic process is a process in which the thermodynamic state of a system is changed. A change in a system is defined by a passage from an initial to a final state of thermodynamic equilibrium. In classical thermodynamics, the actual course of the process is not the primary concern and is often ignored.
Thermodynamic state: In thermodynamics, a thermodynamic state of a system is its condition at a specific time; that is, it is fully identified by values of a suitable set of parameters known as state variables, state parameters or thermodynamic variables. Once such a set of values of thermodynamic variables has been specified for a system, the values of all thermodynamic properties of the system are uniquely determined. Usually, by default, a thermodynamic state is taken to be one of thermodynamic equilibrium.
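To make the idea of state variables concrete, here is a minimal sketch (an illustrative assumption using the ideal-gas equation of state pV = nRT, not something stated in the entry above): fixing a suitable set of variables such as (n, V, T) determines the remaining equilibrium properties.

```python
R = 8.314  # gas constant, J/(mol K)

# For an ideal gas, the state variables (n, V, T) fix every other
# equilibrium property; e.g. pressure follows from p = n*R*T / V.
def pressure(n_mol: float, volume_m3: float, T_kelvin: float) -> float:
    return n_mol * R * T_kelvin / volume_m3

print(f"p = {pressure(1.0, 0.0224, 273.15):.0f} Pa")  # ~101 kPa, i.e. ~1 atm
```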
Nucleic acid double helix: In molecular biology, the term double helix refers to the structure formed by double-stranded molecules of nucleic acids such as DNA. The double helical structure of a nucleic acid complex arises as a consequence of its secondary structure, and is a fundamental component in determining its tertiary structure. The term entered popular culture with the publication in 1968 of The Double Helix: A Personal Account of the Discovery of the Structure of DNA by James Watson.
Thermodynamic cycle: A thermodynamic cycle consists of a linked sequence of thermodynamic processes that involve transfer of heat and work into and out of the system, while varying pressure, temperature, and other state variables within the system, and that eventually returns the system to its initial state. In the process of passing through a cycle, the working fluid (system) may convert heat from a warm source into useful work and dispose of the remaining heat to a cold sink, thereby acting as a heat engine.
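A hedged sketch of the heat-engine bookkeeping (standard first-law and Carnot relations, supplied here for illustration; the heat and temperature values are made-up assumptions): over one full cycle the system returns to its initial state, so the net work equals heat in minus heat out, and the Carnot bound caps the achievable efficiency.

```python
# Over one full cycle the internal energy returns to its initial value,
# so net work W = Q_in - Q_out, and efficiency is bounded by the
# Carnot limit 1 - T_cold/T_hot.  All numbers are illustrative.

def efficiency(q_in: float, q_out: float) -> float:
    return (q_in - q_out) / q_in

def carnot_limit(t_cold_k: float, t_hot_k: float) -> float:
    return 1.0 - t_cold_k / t_hot_k

print(f"actual efficiency: {efficiency(1000.0, 600.0):.2f}")   # 0.40
print(f"Carnot bound:      {carnot_limit(300.0, 600.0):.2f}")  # 0.50
```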
Thermodynamic potential: A thermodynamic potential (or, more accurately, a thermodynamic potential energy) is a scalar quantity used to represent the thermodynamic state of a system. Just as potential energy in mechanics is defined as the capacity to do work, different thermodynamic potentials have different meanings. The concept of thermodynamic potentials was introduced by Pierre Duhem in 1886. Josiah Willard Gibbs in his papers used the term fundamental functions. One main thermodynamic potential that has a physical interpretation is the internal energy U.
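For context, the four commonly used potentials can be summarized in one place (these are the standard textbook definitions, supplied here rather than taken from the entry above, with p pressure, V volume, T temperature, and S entropy):

```latex
\begin{aligned}
U &            &&\text{(internal energy)}\\
H &= U + pV    &&\text{(enthalpy)}\\
F &= U - TS    &&\text{(Helmholtz free energy)}\\
G &= U + pV - TS &&\text{(Gibbs free energy)}
\end{aligned}
```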
Intensive and extensive properties: Physical properties of materials and systems can often be categorized as either intensive or extensive, according to how the property changes when the size (or extent) of the system changes. According to IUPAC, an intensive quantity is one whose magnitude is independent of the size of the system, whereas an extensive quantity is one whose magnitude is additive for subsystems. The terms "intensive and extensive quantities" were introduced into physics by German writer Georg Helm in 1898, and by American physicist and chemist Richard C. Tolman in 1917.
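A hedged sketch of the distinction (the subsystem values are made-up numbers for illustration): when two subsystems of the same substance at the same temperature are combined, extensive quantities such as mass and volume add, while intensive quantities such as temperature and density do not.

```python
# Combine two subsystems of the same substance at the same temperature.
# Extensive quantities (mass, volume) add; intensive ones (temperature,
# density) stay the same.  Values are illustrative assumptions.

a = {"mass_kg": 1.0, "volume_m3": 0.001, "T_kelvin": 300.0}
b = {"mass_kg": 2.0, "volume_m3": 0.002, "T_kelvin": 300.0}

combined = {
    "mass_kg": a["mass_kg"] + b["mass_kg"],        # extensive: additive
    "volume_m3": a["volume_m3"] + b["volume_m3"],  # extensive: additive
    "T_kelvin": a["T_kelvin"],                     # intensive: unchanged
}
# Density, a ratio of two extensive quantities, is itself intensive:
density = combined["mass_kg"] / combined["volume_m3"]
print(combined, f"density = {density:.0f} kg/m^3")  # 1000 kg/m^3, as before
```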
Table of thermodynamic equations: Common thermodynamic equations and quantities, written in standard mathematical notation, are collected under headings such as thermodynamic properties, thermodynamic potentials, free entropy, defining equations of physical chemistry, heat capacity and thermal expansion, and thermal conductivity; the equations are classified by subject, and many of the definitions are also used in the thermodynamics of chemical reactions. A representative example is the Boltzmann entropy formula S = k_B ln Ω, where k_B is the Boltzmann constant and Ω denotes the volume of the macrostate in phase space, otherwise called the thermodynamic probability.
Probability distribution: In probability theory and statistics, a probability distribution is the mathematical function that gives the probabilities of occurrence of different possible outcomes for an experiment. It is a mathematical description of a random phenomenon in terms of its sample space and the probabilities of events (subsets of the sample space). For instance, if X is used to denote the outcome of a fair coin toss ("the experiment"), then the probability distribution of X would take the value 0.5 (1 in 2, or 1/2) for X = heads, and 0.5 for X = tails.
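A minimal sketch of the coin-toss example in code (the sampling helper is an illustrative assumption, not part of the entry above): the distribution is just a map from each outcome in the sample space to its probability.

```python
import random

# The coin-toss distribution from the example above.
coin = {"heads": 0.5, "tails": 0.5}
assert abs(sum(coin.values()) - 1.0) < 1e-12  # probabilities sum to 1

def sample(dist: dict) -> str:
    """Draw one outcome according to the given distribution."""
    return random.choices(list(dist), weights=list(dist.values()))[0]

print(sample(coin))  # "heads" or "tails", each with probability 0.5
```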
Circular chromosome: A circular chromosome is a chromosome in bacteria, archaea, mitochondria, and chloroplasts, in the form of a molecule of circular DNA, unlike the linear chromosome of most eukaryotes. Most prokaryote chromosomes contain a circular DNA molecule – there are no free ends to the DNA. Free ends would otherwise create significant challenges to cells with respect to DNA replication and stability. Cells that do contain chromosomes with DNA ends, or telomeres (most eukaryotes), have acquired elaborate mechanisms to overcome these challenges.
DNA: Deoxyribonucleic acid (DNA) is a polymer composed of two polynucleotide chains that coil around each other to form a double helix. The polymer carries genetic instructions for the development, functioning, growth and reproduction of all known organisms and many viruses. DNA and ribonucleic acid (RNA) are nucleic acids. Alongside proteins, lipids and complex carbohydrates (polysaccharides), nucleic acids are one of the four major types of macromolecules that are essential for all known forms of life.
Joint probability distribution: Given two random variables that are defined on the same probability space, the joint probability distribution is the corresponding probability distribution on all possible pairs of outputs. The joint distribution can just as well be considered for any given number of random variables. The joint distribution encodes the marginal distributions, i.e. the distributions of each of the individual random variables. It also encodes the conditional probability distributions, which deal with how the outputs of one random variable are distributed when given information on the outputs of the other random variable(s).
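To make the encoding of marginals concrete, here is a minimal sketch (the two binary variables and their probabilities are made-up numbers for illustration):

```python
# A joint distribution over two binary random variables (X, Y), stored as
# a table of probabilities for all possible pairs of outputs.
joint = {
    (0, 0): 0.30, (0, 1): 0.20,
    (1, 0): 0.10, (1, 1): 0.40,
}

# Marginal of X: sum the joint probabilities over all values of Y.
marginal_x = {}
for (x, y), p in joint.items():
    marginal_x[x] = marginal_x.get(x, 0.0) + p

print(marginal_x)  # {0: 0.5, 1: 0.5}
```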
Nucleoid: The nucleoid (meaning nucleus-like) is an irregularly shaped region within the prokaryotic cell that contains all or most of the genetic material. The chromosome of a typical prokaryote is circular, and its length is very large compared to the cell dimensions, so it needs to be compacted in order to fit. In contrast to the nucleus of a eukaryotic cell, it is not surrounded by a nuclear membrane. Instead, the nucleoid forms by condensation and functional arrangement with the help of chromosomal architectural proteins and RNA molecules as well as DNA supercoiling.
Maximum entropy probability distribution: In statistics and information theory, a maximum entropy probability distribution has entropy that is at least as great as that of all other members of a specified class of probability distributions. According to the principle of maximum entropy, if nothing is known about a distribution except that it belongs to a certain class (usually defined in terms of specified properties or measures), then the distribution with the largest entropy should be chosen as the least-informative default.
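A hedged numeric illustration (the two example distributions are assumptions): among distributions over a fixed finite set with no further constraints, the uniform distribution has the largest Shannon entropy, so any skewed alternative scores lower.

```python
import math

def shannon_entropy(probs: list) -> float:
    """Shannon entropy in bits, H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Over four outcomes with no further constraints, the uniform
# distribution maximizes entropy.
uniform = [0.25, 0.25, 0.25, 0.25]
skewed = [0.70, 0.10, 0.10, 0.10]

print(f"uniform: {shannon_entropy(uniform):.3f} bits")  # 2.000
print(f"skewed:  {shannon_entropy(skewed):.3f} bits")   # ~1.357
```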
Conditional probability distribution: In probability theory and statistics, given two jointly distributed random variables X and Y, the conditional probability distribution of Y given X is the probability distribution of Y when X is known to be a particular value; in some cases the conditional probabilities may be expressed as functions containing the unspecified value x of X as a parameter. When both X and Y are categorical variables, a conditional probability table is typically used to represent the conditional probability.
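Continuing in the style of the joint-distribution sketch above (same made-up numbers), the conditional probability table P(Y | X) is obtained by renormalizing the joint table within each value of X:

```python
# Derive the conditional probability table P(Y | X) from a joint table
# by renormalizing within each value of X.  Numbers are illustrative.
joint = {
    (0, 0): 0.30, (0, 1): 0.20,
    (1, 0): 0.10, (1, 1): 0.40,
}

marginal_x = {}
for (x, y), p in joint.items():
    marginal_x[x] = marginal_x.get(x, 0.0) + p

# P(Y=y | X=x) = P(X=x, Y=y) / P(X=x)
conditional = {(x, y): p / marginal_x[x] for (x, y), p in joint.items()}
print(conditional)  # e.g. P(Y=1 | X=1) = 0.40 / 0.50 = 0.8
```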
Indecomposable distribution: In probability theory, an indecomposable distribution is a probability distribution that cannot be represented as the distribution of the sum of two or more non-constant independent random variables: Z ≠ X + Y. If it can be so expressed, it is decomposable: Z = X + Y. If, further, it can be expressed as the distribution of the sum of two or more independent identically distributed random variables, then it is divisible: Z = X1 + X2. The simplest examples are Bernoulli distributions: if X takes the value 1 with probability p and 0 with probability 1 − p, then the probability distribution of X is indecomposable.
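A one-line contrast may help (a standard fact, stated here for illustration rather than taken from the entry above): a binomial variable with n ≥ 2 is decomposable, and indeed divisible, because it is a sum of i.i.d. Bernoulli summands, whereas a single Bernoulli variable is not.

```latex
% Binomial(n, p) with n >= 2 decomposes into i.i.d. Bernoulli(p) summands:
Z = X_1 + X_2 + \cdots + X_n, \qquad X_i \sim \mathrm{Bernoulli}(p)
```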
Binomial distribution: In probability theory and statistics, the binomial distribution with parameters n and p is the discrete probability distribution of the number of successes in a sequence of n independent experiments, each asking a yes–no question, and each with its own Boolean-valued outcome: success (with probability p) or failure (with probability q = 1 − p). A single success/failure experiment is also called a Bernoulli trial or Bernoulli experiment, and a sequence of outcomes is called a Bernoulli process; for a single trial, i.e. n = 1, the binomial distribution is a Bernoulli distribution.
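A minimal sketch of the binomial probability mass function (the standard formula; the example values are illustrative):

```python
import math

def binomial_pmf(k: int, n: int, p: float) -> float:
    """P(exactly k successes in n trials) = C(n, k) * p^k * (1-p)^(n-k)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

# Probability of exactly 3 heads in 10 fair coin tosses:
print(f"{binomial_pmf(3, 10, 0.5):.4f}")  # 0.1172
```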
Poisson distribution: In probability theory and statistics, the Poisson distribution is a discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time or space if these events occur with a known constant mean rate and independently of the time since the last event. It is named after French mathematician Siméon Denis Poisson. The Poisson distribution can also be used for the number of events in other specified interval types such as distance, area, or volume.
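And a matching sketch for the Poisson mass function (the standard formula; the rate is an illustrative assumption):

```python
import math

def poisson_pmf(k: int, lam: float) -> float:
    """P(k events in an interval) = lam^k * exp(-lam) / k!."""
    return lam**k * math.exp(-lam) / math.factorial(k)

# With an average rate of 4 events per interval, probability of seeing exactly 2:
print(f"{poisson_pmf(2, 4.0):.4f}")  # ~0.1465
```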