
Uncertainty principle

In quantum mechanics, the uncertainty principle (also known as Heisenberg's uncertainty principle) is any of a variety of mathematical inequalities asserting a fundamental limit to the precision with which the values for certain pairs of physical quantities of a particle, such as position, x, and momentum, p, can be predicted from initial conditions. Such variable pairs are known as complementary variables or canonically conjugate variables, and, depending on interpretation, the uncertainty principle limits to what extent such conjugate properties maintain their approximate meaning, as the mathematical framework of quantum physics does not support the notion of simultaneously well-defined conjugate properties expressed by a single value. The uncertainty principle implies that it is in general not possible to predict the value of a quantity with arbitrary certainty, even if all initial conditions are specified.

Introduced first in 1927 by the German physicist Werner Heisenberg, the uncertainty principle states that the more precisely the position of some particle is determined, the less precisely its momentum can be predicted from initial conditions, and vice versa. The formal inequality relating the standard deviation of position σx and the standard deviation of momentum σp was derived by Earle Hesse Kennard later that year and by Hermann Weyl in 1928:

::: \sigma_{x}\sigma_{p} \geq \frac{\hbar}{2}

where \hbar is the reduced Planck constant, h/(2\pi).

Historically, the uncertainty principle has been confused with a related effect in physics, called the observer effect, which notes that measurements of certain systems cannot be made without affecting the system, that is, without changing something in a system. Heisenberg utilized such an observer effect at the quantum level (see below) as a physical "explanation" of quantum uncertainty. It has since become clearer, however, that the uncertainty principle is inherent in the properties of all wave-like systems, and that it arises in quantum mechanics simply due to the matter wave nature of all quantum objects. Thus, the uncertainty principle actually states a fundamental property of quantum systems and is not a statement about the observational success of current technology. It must be emphasized that measurement does not mean only a process in which a physicist-observer takes part, but rather any interaction between classical and quantum objects regardless of any observer. (Note on precision: if \delta x and \delta p are the precisions of position and momentum obtained in an individual measurement and \sigma_{x}, \sigma_{p} their standard deviations in an ensemble of individual measurements on similarly prepared systems, then "There are, in principle, no restrictions on the precisions of individual measurements \delta x and \delta p, but the standard deviations will always satisfy \sigma_{x}\sigma_{p} \ge \hbar/2".)

Since the uncertainty principle is such a basic result in quantum mechanics, typical experiments in quantum mechanics routinely observe aspects of it. Certain experiments, however, may deliberately test a particular form of the uncertainty principle as part of their main research program. These include, for example, tests of number–phase uncertainty relations in superconducting or quantum optics systems. Applications dependent on the uncertainty principle for their operation include extremely low-noise technology such as that required in gravitational wave interferometers.

Introduction

The uncertainty principle is not readily apparent on the macroscopic scales of everyday experience. So it is helpful to demonstrate how it applies to more easily understood physical situations. Two alternative frameworks for quantum physics offer different explanations for the uncertainty principle. The wave mechanics picture of the uncertainty principle is more visually intuitive, but the more abstract matrix mechanics picture formulates it in a way that generalizes more easily.

Mathematically, in wave mechanics, the uncertainty relation between position and momentum arises because the expressions of the wavefunction in the two corresponding orthonormal bases in Hilbert space are Fourier transforms of one another (i.e., position and momentum are conjugate variables). A nonzero function and its Fourier transform cannot both be sharply localized. A similar tradeoff between the variances of Fourier conjugates arises in all systems underlain by Fourier analysis, for example in sound waves: A pure tone is a sharp spike at a single frequency, while its Fourier transform gives the shape of the sound wave in the time domain, which is a completely delocalized sine wave. In quantum mechanics, the two key points are that the position of the particle takes the form of a matter wave, and momentum is its Fourier conjugate, assured by the de Broglie relation p = \hbar k, where k is the wavenumber.

In matrix mechanics, the mathematical formulation of quantum mechanics, any pair of non-commuting self-adjoint operators representing observables are subject to similar uncertainty limits. An eigenstate of an observable represents the state of the wavefunction for a certain measurement value (the eigenvalue). For example, if a measurement of an observable A is performed, then the system is in a particular eigenstate of that observable. However, the particular eigenstate of the observable A need not be an eigenstate of another observable B: in that case, it does not have a unique associated measurement value, as the system is not in an eigenstate of that observable.

Wave mechanics interpretation

According to the de Broglie hypothesis, every object in the universe is associated with a wave, a situation which gives rise to this phenomenon. The position of the particle is described by a wave function \Psi(x,t). The time-independent wave function of a single-mode plane wave of wavenumber k_0 or momentum p_0 is

:\psi(x) \propto e^{ik_0 x} = e^{ip_0 x/\hbar} ~.

The Born rule states that this should be interpreted as a probability density amplitude function in the sense that the probability of finding the particle between a and b is

: \operatorname{P}[a \leq X \leq b] = \int_a^b |\psi(x)|^2 \, \mathrm{d}x ~.

In the case of the single-mode plane wave, |\psi(x)|^2 is a uniform distribution. In other words, the particle position is extremely uncertain in the sense that it could be essentially anywhere along the wave packet.

On the other hand, consider a wave function that is a sum of many waves, which we may write as

:\psi(x) \propto \sum_n A_n e^{i p_n x/\hbar}~, where A_n represents the relative contribution of the mode p_n to the overall total. With the addition of many plane waves, the wave packet can become more localized. We may take this a step further to the continuum limit, where the wave function is an integral over all possible modes

:\psi(x) = \frac{1}{\sqrt{2 \pi \hbar}} \int_{-\infty}^\infty \varphi(p) \cdot e^{i p x/\hbar} \, dp ~,

where \varphi(p) represents the amplitude of these modes and is called the wave function in momentum space. In mathematical terms, we say that \varphi(p) is the Fourier transform of \psi(x) and that x and p are conjugate variables. Adding together all of these plane waves comes at a cost, namely the momentum has become less precise, having become a mixture of waves of many different momenta.

One way to quantify the precision of the position and momentum is the standard deviation σ. Since |\psi(x)|^2 is a probability density function for position, we calculate its standard deviation.

The precision of the position is improved, i.e. σx is reduced, by using many plane waves, thereby weakening the precision of the momentum, i.e. σp is increased. Another way of stating this is that σx and σp have an inverse relationship or are at least bounded from below. This is the uncertainty principle, the exact limit of which is the Kennard bound.
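As a concrete illustration (a numerical sketch, not part of the article), one can discretize a Gaussian wave packet, obtain the momentum-space amplitudes with a fast Fourier transform, and compute both standard deviations; in units with ħ = 1 the product comes out at the Kennard bound ħ/2:

```python
# Numerical sketch: check the Kennard bound for a Gaussian wave packet (hbar = 1).
import numpy as np

hbar = 1.0
x = np.linspace(-50, 50, 2**14)
dx = x[1] - x[0]
width = 2.0                                   # arbitrary width parameter

psi = np.exp(-x**2 / (4 * width**2))          # Gaussian wave packet
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize

# Momentum-space wave function via FFT (p = hbar * k)
phi = np.fft.fftshift(np.fft.fft(psi)) * dx / np.sqrt(2 * np.pi * hbar)
p = np.fft.fftshift(np.fft.fftfreq(x.size, d=dx)) * 2 * np.pi * hbar
prob_x = np.abs(psi)**2
prob_p = np.abs(phi)**2
prob_p /= np.sum(prob_p) * (p[1] - p[0])      # renormalize against FFT conventions

def std(q, prob, dq):
    mean = np.sum(q * prob) * dq
    return np.sqrt(np.sum((q - mean)**2 * prob) * dq)

sigma_x = std(x, prob_x, dx)
sigma_p = std(p, prob_p, p[1] - p[0])
print(sigma_x * sigma_p, ">=", hbar / 2)      # ~0.5: a Gaussian saturates the bound
```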

Matrix mechanics interpretation

In matrix mechanics, observables such as position and momentum are represented by self-adjoint operators. When considering pairs of observables, an important quantity is the commutator. For a pair of operators \hat{A} and \hat{B}, one defines their commutator as :[\hat{A},\hat{B}]=\hat{A}\hat{B}-\hat{B}\hat{A}. In the case of position and momentum, the commutator is the canonical commutation relation :[\hat{x},\hat{p}]=i \hbar.

The physical meaning of the non-commutativity can be understood by considering the effect of the commutator on position and momentum eigenstates. Let |\psi\rangle be a right eigenstate of position with a constant eigenvalue x_0. By definition, this means that \hat{x}|\psi\rangle = x_0 |\psi\rangle. Applying the commutator to |\psi\rangle yields :[\hat{x},\hat{p}] | \psi \rangle = (\hat{x}\hat{p}-\hat{p}\hat{x}) | \psi \rangle = (\hat{x} - x_0 \hat{I}) \hat{p} \, | \psi \rangle = i \hbar | \psi \rangle, where \hat{I} is the identity operator.

Suppose, for the sake of proof by contradiction, that |\psi\rangle is also a right eigenstate of momentum, with constant eigenvalue p_0. If this were true, then one could write :(\hat{x} - x_0 \hat{I}) \hat{p} \, | \psi \rangle = (\hat{x} - x_0 \hat{I}) p_0 \, | \psi \rangle = (x_0 \hat{I} - x_0 \hat{I}) p_0 \, | \psi \rangle=0. On the other hand, the above canonical commutation relation requires that :[\hat{x},\hat{p}] | \psi \rangle=i \hbar | \psi \rangle \neq 0. This implies that no quantum state can simultaneously be both a position and a momentum eigenstate.

When a state is measured, it is projected onto an eigenstate in the basis of the relevant observable. For example, if a particle's position is measured, then the state amounts to a position eigenstate. This means that the state is not a momentum eigenstate, however, but rather it can be represented as a sum of multiple momentum basis eigenstates. In other words, the momentum must be less precise. This precision may be quantified by the standard deviations, :\sigma_x=\sqrt{\langle \hat{x}^2 \rangle-\langle \hat{x} \rangle^2} :\sigma_p=\sqrt{\langle \hat{p}^2 \rangle-\langle \hat{p} \rangle^2}.

As in the wave mechanics interpretation above, one sees a tradeoff between the respective precisions of the two, quantified by the uncertainty principle.
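A minimal finite-dimensional illustration of this tradeoff (an assumption-laden sketch, not from the article) uses the discrete Fourier transform as the change of basis between a "position" basis and a "momentum" basis: a state perfectly localized in one basis is completely delocalized in the other.

```python
# Illustrative sketch: a state localized in the "position" basis is
# completely spread out over the discrete Fourier ("momentum") basis.
import numpy as np

N = 64
position_state = np.zeros(N, dtype=complex)
position_state[17] = 1.0                          # finite-dimensional analogue of a position eigenstate

momentum_amplitudes = np.fft.fft(position_state) / np.sqrt(N)
probabilities = np.abs(momentum_amplitudes)**2

print(probabilities.max(), probabilities.min())   # both equal 1/N: a flat distribution
```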

Robertson–Schrödinger uncertainty relations

The most common general form of the uncertainty principle is the Robertson uncertainty relation.

For an arbitrary Hermitian operator \hat{\mathcal{O}} we can associate a standard deviation

:::\sigma_{\mathcal{O}} = \sqrt{\langle \hat{\mathcal{O}}^2 \rangle-\langle \hat{\mathcal{O}} \rangle^2},

where the brackets \langle\mathcal{O}\rangle indicate an expectation value. For a pair of operators \hat{A} and \hat{B}, we may define their commutator as

:::[\hat{A},\hat{B}]=\hat{A}\hat{B}-\hat{B}\hat{A}.

In this notation, the Robertson uncertainty relation is given by

::: \sigma_A \sigma_B \geq \left| \frac{1}{2i}\langle[\hat{A},\hat{B}]\rangle \right| = \frac{1}{2}\left|\langle[\hat{A},\hat{B}]\rangle \right|.

The Robertson uncertainty relation immediately follows from a slightly stronger inequality, the Schrödinger uncertainty relation,

::: \sigma_A^2\sigma_B^2 \geq \left| \frac{1}{2}\langle\{\hat{A}, \hat{B}\}\rangle - \langle \hat{A} \rangle\langle \hat{B} \rangle \right|^2 + \left|\frac{1}{2i} \langle[ \hat{A}, \hat{B}] \rangle \right|^2,

where we have introduced the anticommutator,

:::\{\hat{A},\hat{B}\}=\hat{A}\hat{B}+\hat{B}\hat{A}.

The Schrödinger uncertainty relation can be derived as follows. For a pair of Hermitian operators \hat{A} and \hat{B} and a state \Psi, define the vectors

:::|f\rangle = (\hat{A}-\langle \hat{A} \rangle)|\Psi\rangle \quad \text{and} \quad |g\rangle = (\hat{B}-\langle \hat{B} \rangle)|\Psi\rangle,

so that \sigma_A^2 = \langle f\mid f \rangle and \sigma_B^2 = \langle g\mid g \rangle. The Cauchy–Schwarz inequality then gives

:::\sigma_A^2 \sigma_B^2 = \langle f\mid f \rangle \langle g\mid g \rangle \geq |\langle f\mid g \rangle|^2,

and, writing any complex number z in terms of its real and imaginary parts,

:::|z|^2 = \big(\operatorname{Re}(z)\big)^2 + \big(\operatorname{Im}(z)\big)^2 = \Big(\frac{z+z^{*}}{2}\Big)^2 + \Big(\frac{z-z^{*}}{2i}\Big)^2.

We let z=\langle f\mid g \rangle and z^{*}=\langle g \mid f \rangle and substitute these into the equation above to get

::::|\langle f\mid g \rangle|^2 = \Big(\frac{\langle f\mid g \rangle+\langle g\mid f \rangle}{2}\Big)^2 + \Big(\frac{\langle f\mid g \rangle-\langle g\mid f \rangle}{2i}\Big)^2

The inner product \langle f\mid g \rangle is written out explicitly as

:::\langle f\mid g \rangle = \langle(\hat{A}-\langle \hat{A} \rangle)\Psi|(\hat{B}-\langle \hat{B} \rangle)\Psi\rangle,

and using the fact that \hat{A} and \hat{B} are Hermitian operators, we find

: \begin{align} \langle f\mid g \rangle & = \langle\Psi|(\hat{A}-\langle \hat{A} \rangle)(\hat{B}-\langle \hat{B} \rangle)\Psi\rangle \\[4pt] & = \langle\Psi\mid(\hat{A}\hat{B}-\hat{A}\langle \hat{B} \rangle - \hat{B}\langle \hat{A} \rangle + \langle \hat{A} \rangle\langle \hat{B} \rangle)\Psi\rangle \\[4pt] & = \langle\Psi\mid\hat{A}\hat{B}\Psi\rangle-\langle\Psi\mid\hat{A}\langle \hat{B} \rangle\Psi\rangle -\langle\Psi\mid\hat{B}\langle \hat{A} \rangle\Psi\rangle+\langle\Psi\mid\langle \hat{A} \rangle\langle \hat{B} \rangle\Psi\rangle \\[4pt] & =\langle \hat{A}\hat{B} \rangle-\langle \hat{A} \rangle\langle \hat{B} \rangle-\langle \hat{A} \rangle\langle \hat{B} \rangle+\langle \hat{A} \rangle\langle \hat{B} \rangle \\[4pt] & =\langle \hat{A}\hat{B} \rangle-\langle \hat{A} \rangle\langle \hat{B} \rangle. \end{align}

Similarly, it can be shown that \langle g\mid f \rangle = \langle \hat{B}\hat{A} \rangle-\langle \hat{A} \rangle\langle \hat{B} \rangle.

Thus we have

::: \langle f\mid g \rangle-\langle g\mid f \rangle = \langle \hat{A}\hat{B} \rangle-\langle \hat{A} \rangle\langle \hat{B} \rangle-\langle \hat{B}\hat{A} \rangle+\langle \hat{A} \rangle\langle \hat{B} \rangle = \langle [\hat{A},\hat{B}] \rangle

and

:::\langle f\mid g \rangle+\langle g\mid f \rangle = \langle \hat{A}\hat{B} \rangle-\langle \hat{A} \rangle\langle \hat{B} \rangle+\langle \hat{B}\hat{A} \rangle-\langle \hat{A} \rangle\langle \hat{B} \rangle = \langle \{\hat{A},\hat{B}\} \rangle -2\langle \hat{A} \rangle\langle \hat{B} \rangle.

We now substitute these two expressions back into the expression for |\langle f\mid g \rangle|^2 above and get ::: |\langle f\mid g \rangle|^2=\Big(\frac{1}{2}\langle\{\hat{A},\hat{B}\}\rangle - \langle \hat{A} \rangle\langle \hat{B} \rangle\Big)^2 + \Big(\frac{1}{2i} \langle[\hat{A},\hat{B}]\rangle\Big)^{2}.

Substituting the above into the Cauchy–Schwarz inequality, we get the Schrödinger uncertainty relation

::: \sigma_A\sigma_B \geq \sqrt{\Big(\frac{1}{2}\langle\{\hat{A},\hat{B}\}\rangle - \langle \hat{A} \rangle\langle \hat{B} \rangle\Big)^2 + \Big(\frac{1}{2i} \langle[\hat{A},\hat{B}]\rangle\Big)^2}.

This proof has an issue related to the domains of the operators involved. For the proof to make sense, the vector \hat{B}|\Psi\rangle has to be in the domain of the unbounded operator \hat{A}, which is not always the case. In fact, the Robertson uncertainty relation is false if \hat{A} is an angle variable and \hat{B} is the derivative with respect to this variable. In this example, the commutator is a nonzero constant, just as in the Heisenberg uncertainty relation, and yet there are states where the product of the uncertainties is zero. (See the counterexample section below.) This issue can be overcome by using a variational method for the proof, or by working with an exponentiated version of the canonical commutation relations.

Note that in the general form of the Robertson–Schrödinger uncertainty relation, there is no need to assume that the operators \hat{A} and \hat{B} are self-adjoint operators. It suffices to assume that they are merely symmetric operators. (The distinction between these two notions is generally glossed over in the physics literature, where the term Hermitian is used for either or both classes of operators. See Chapter 9 of Hall's book for a detailed discussion of this important but technical distinction.)
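The Robertson and Schrödinger relations can be spot-checked numerically. The sketch below (not part of the article) draws random Hermitian matrices and a random pure state with NumPy and verifies both inequalities:

```python
# Numerical sanity check of the Robertson and Schrodinger relations
# for random Hermitian operators and a random pure state.
import numpy as np

rng = np.random.default_rng(0)
n = 5

def random_hermitian(n):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

A = random_hermitian(n)
B = random_hermitian(n)
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)

def expect(op):
    return (psi.conj() @ op @ psi).real

var_A = expect(A @ A) - expect(A)**2
var_B = expect(B @ B) - expect(B)**2

comm = A @ B - B @ A                             # [A, B]
anti = A @ B + B @ A                             # {A, B}
comm_term = abs((psi.conj() @ comm @ psi) / (2j))**2
anti_term = ((psi.conj() @ anti @ psi).real / 2 - expect(A) * expect(B))**2

print(var_A * var_B >= comm_term)                # Robertson
print(var_A * var_B >= anti_term + comm_term)    # Schrodinger (stronger)
```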

Mixed states

The Robertson–Schrödinger uncertainty relation may be generalized in a straightforward way to describe mixed states.

:::\sigma_A^2 \sigma_B^2 \geq \left(\frac{1}{2}\operatorname{tr}(\rho\{A,B\}) - \operatorname{tr}(\rho A)\operatorname{tr}(\rho B)\right)^2 +\left(\frac{1}{2i} \operatorname{tr}(\rho[A,B])\right)^2
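A quick numerical check of the mixed-state form (a sketch under the assumption of a random full-rank density matrix) reads:

```python
# Sketch: check the mixed-state Robertson-Schrodinger relation written
# in terms of traces against a random density matrix rho.
import numpy as np

rng = np.random.default_rng(1)
n = 4
G = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
rho = G @ G.conj().T
rho /= np.trace(rho).real                  # random full-rank density matrix

def random_hermitian(n):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

A, B = random_hermitian(n), random_hermitian(n)
tr = lambda M: np.trace(rho @ M)

var_A = (tr(A @ A) - tr(A)**2).real
var_B = (tr(B @ B) - tr(B)**2).real
rhs = (0.5 * tr(A @ B + B @ A) - tr(A) * tr(B)).real**2 \
    + abs(tr(A @ B - B @ A) / (2j))**2
print(var_A * var_B >= rhs)                # True for any state, pure or mixed
```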

The Maccone–Pati uncertainty relations

The Robertson–Schrödinger uncertainty relation can be trivial if the state of the system is chosen to be an eigenstate of one of the observables. The stronger uncertainty relations proved by Maccone and Pati give non-trivial bounds on the sum of the variances for two incompatible observables. (Earlier works on uncertainty relations formulated as the sum of variances include, e.g., Ref. due to Huang.) For two non-commuting observables A and B the first stronger uncertainty relation is given by : \sigma_{A}^2 + \sigma_{B}^2 \ge \pm i \langle \Psi\mid [A, B]\mid\Psi \rangle + \mid \langle \Psi\mid(A \pm i B)\mid{\bar \Psi} \rangle\mid^2, where \sigma_{A}^2 = \langle \Psi \mid A^2 \mid\Psi \rangle - \langle \Psi \mid A \mid \Psi \rangle^2, \sigma_{B}^2 = \langle \Psi \mid B^2 \mid\Psi \rangle - \langle \Psi \mid B \mid\Psi \rangle^2, |{\bar \Psi} \rangle is a normalized vector that is orthogonal to the state of the system |\Psi \rangle, and one should choose the sign of \pm i \langle \Psi\mid[A, B]\mid\Psi \rangle to make this real quantity a positive number.

The second stronger uncertainty relation is given by : \sigma_A^2 + \sigma_B^2 \ge \frac{1}{2}| \langle {\bar \Psi}_{A+B} \mid(A + B)\mid \Psi \rangle|^2 where | {\bar \Psi}_{A+B} \rangle is a state orthogonal to |\Psi \rangle. The form of | {\bar \Psi}_{A+B} \rangle implies that the right-hand side of the new uncertainty relation is nonzero unless | \Psi \rangle is an eigenstate of (A + B). One may note that |\Psi \rangle can be an eigenstate of (A + B) without being an eigenstate of either A or B. However, when |\Psi \rangle is an eigenstate of one of the two observables the Heisenberg–Schrödinger uncertainty relation becomes trivial, whereas the lower bound in the new relation is nonzero unless |\Psi \rangle is an eigenstate of both.
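The first Maccone–Pati relation can likewise be tested numerically; the sketch below (not from the original paper) picks random observables, a random state |Ψ⟩, a random orthogonal |Ψ̄⟩, and the sign that makes ±i⟨[A,B]⟩ non-negative:

```python
# Sketch: test the first Maccone-Pati relation for random observables.
import numpy as np

rng = np.random.default_rng(2)
n = 4

def random_hermitian(n):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

A, B = random_hermitian(n), random_hermitian(n)
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)
bar = rng.normal(size=n) + 1j * rng.normal(size=n)
bar -= (psi.conj() @ bar) * psi            # project out |Psi>: <Psi|Psi_bar> = 0
bar /= np.linalg.norm(bar)

expect = lambda op: (psi.conj() @ op @ psi).real
var = lambda op: expect(op @ op) - expect(op)**2

comm_expect = psi.conj() @ (A @ B - B @ A) @ psi       # purely imaginary
sign = 1.0 if (1j * comm_expect).real >= 0 else -1.0   # make +-i<[A,B]> non-negative
rhs = (sign * 1j * comm_expect).real \
    + abs(psi.conj() @ (A + sign * 1j * B) @ bar)**2
print(var(A) + var(B) >= rhs)
```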

Phase space

In the phase space formulation of quantum mechanics, the Robertson–Schrödinger relation follows from a positivity condition on a real star-square function. Given a Wigner function W(x,p) with star product ★ and a function f, the following is generally true:

:\langle f^* \star f \rangle =\int (f^* \star f) \, W(x,p) \, dx \, dp \ge 0.

Choosing f=a+bx+cp, we arrive at

:\langle f^* \star f \rangle =\begin{bmatrix}a^* & b^* & c^* \end{bmatrix}\begin{bmatrix}1 & \langle x \rangle & \langle p \rangle \\ \langle x \rangle & \langle x \star x \rangle & \langle x \star p \rangle \\ \langle p \rangle & \langle p \star x \rangle & \langle p \star p \rangle \end{bmatrix}\begin{bmatrix}a \\ b \\ c\end{bmatrix} \ge 0.

Since this positivity condition is true for all a, b, and c, it follows that all the eigenvalues of the matrix are non-negative. The non-negative eigenvalues then imply a corresponding non-negativity condition on the determinant:

:\det\begin{bmatrix}1 & \langle x \rangle & \langle p \rangle \\ \langle x \rangle & \langle x \star x \rangle & \langle x \star p \rangle \\ \langle p \rangle & \langle p \star x \rangle & \langle p \star p \rangle \end{bmatrix} = \det\begin{bmatrix}1 & \langle x \rangle & \langle p \rangle \\ \langle x \rangle & \langle x^2 \rangle & \left\langle xp + \frac{i\hbar}{2} \right\rangle \\ \langle p \rangle & \left\langle xp - \frac{i\hbar}{2} \right\rangle & \langle p^2 \rangle \end{bmatrix} \ge 0,

or, explicitly, after algebraic manipulation,

:\sigma_x^2 \sigma_p^2 = \left( \langle x^2 \rangle - \langle x \rangle^2 \right)\left( \langle p^2 \rangle - \langle p \rangle^2 \right)\ge \left( \langle xp \rangle - \langle x \rangle \langle p \rangle \right)^2 + \frac{\hbar^2}{4} ~.
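The final algebraic step can be verified symbolically; the following sketch (not part of the article) uses SymPy, with X, P, X2, P2, and XP standing for ⟨x⟩, ⟨p⟩, ⟨x²⟩, ⟨p²⟩, and the Weyl-ordered ⟨xp⟩:

```python
# Symbolic sketch: expand the determinant condition above and confirm it
# reduces to the stated inequality on the variances and covariance.
import sympy as sp

# X = <x>, P = <p>, X2 = <x^2>, P2 = <p^2>, XP = <xp> (Weyl-ordered)
X, P, X2, P2, XP, hbar = sp.symbols('X P X2 P2 XP hbar', real=True)
M = sp.Matrix([[1,  X,                 P],
               [X,  X2,                XP + sp.I*hbar/2],
               [P,  XP - sp.I*hbar/2,  P2]])

det = sp.expand(M.det())
target = sp.expand((X2 - X**2)*(P2 - P**2) - (XP - X*P)**2 - hbar**2/4)
print(sp.simplify(det - target))   # 0: det >= 0 is exactly the phase-space inequality
```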

Examples



Since the Robertson and Schrödinger relations are for general operators, the relations can be applied to any two observables to obtain specific uncertainty relations. A few of the most common relations found in the literature are given below.
  • For position and linear momentum, the canonical commutation relation [\hat{x}, \hat{p}] = i\hbar implies the Kennard inequality from above: :: \sigma_x \sigma_p \geq \frac{\hbar}{2}.
  • For two orthogonal components of the total angular momentum operator of an object: :: \sigma_{J_i} \sigma_{J_j} \geq \frac{\hbar}{2} \big|\langle J_k \rangle\big|, : where i, j, k are distinct, and J_i denotes angular momentum along the x_i axis. This relation implies that unless all three components vanish together, only a single component of a system's angular momentum can be defined with arbitrary precision, normally the component parallel to an external (magnetic or electric) field. Moreover, for [J_x, J_y] = i \hbar \varepsilon_{xyz} J_z, a choice \hat{A} = J_x, \hat{B} = J_y, in angular momentum multiplets, ψ = |j, m⟩, bounds the Casimir invariant (angular momentum squared, \langle J_x^2+ J_y^2 + J_z^2 \rangle) from below and thus yields useful constraints such as j(j + 1) \geq m(m + 1), and hence j \geq m, among others. (A numerical check of this relation appears after this list.)
  • In non-relativistic mechanics, time is privileged as an independent variable. Nevertheless, in 1945, L. I. Mandelshtam and I. E. Tamm derived a non-relativistic time–energy uncertainty relation, as follows. For a quantum system in a non-stationary state and an observable B represented by a self-adjoint operator \hat B, the following formula holds: :: \sigma_E \frac{\sigma_B}{\left| \frac{\mathrm{d}\langle \hat B \rangle}{\mathrm{d}t} \right|} \ge \frac{\hbar}{2}, : where σE is the standard deviation of the energy operator (Hamiltonian) in the given state, and σB stands for the standard deviation of B. Although the second factor on the left-hand side has the dimension of time, it is different from the time parameter that enters the Schrödinger equation. It is a lifetime of the state with respect to the observable B: in other words, this is the time interval (Δt) after which the expectation value \langle\hat B\rangle changes appreciably. : An informal, heuristic meaning of the principle is the following: A state that only exists for a short time cannot have a definite energy. To have a definite energy, the frequency of the state must be defined accurately, and this requires the state to hang around for many cycles, the reciprocal of the required accuracy. For example, in spectroscopy, excited states have a finite lifetime. By the time–energy uncertainty principle, they do not have a definite energy, and, each time they decay, the energy they release is slightly different. The average energy of the outgoing photon has a peak at the theoretical energy of the state, but the distribution has a finite width called the natural linewidth. Fast-decaying states have a broad linewidth, while slow-decaying states have a narrow linewidth. : The same linewidth effect also makes it difficult to specify the rest mass of unstable, fast-decaying particles in particle physics. The faster the particle decays (the shorter its lifetime), the less certain is its mass (the larger the particle's width).
  • For the number of electrons in a superconductor and the phase of its Ginzburg–Landau order parameter :: \Delta N \, \Delta \varphi \geq 1.
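As announced in the angular-momentum item above, the relation σ_{J_x} σ_{J_y} ≥ (ħ/2)|⟨J_z⟩| can be checked numerically; the following sketch (not part of the article, with ħ = 1) uses the standard spin-1 matrices and a random state:

```python
# Sketch (hbar = 1): check sigma_Jx * sigma_Jy >= (hbar/2)|<Jz>| for a
# random spin-1 state, using the standard spin-1 matrices.
import numpy as np

hbar = 1.0
s = 1 / np.sqrt(2)
Jx = hbar * np.array([[0, s, 0], [s, 0, s], [0, s, 0]])
Jy = hbar * np.array([[0, -1j*s, 0], [1j*s, 0, -1j*s], [0, 1j*s, 0]])
Jz = hbar * np.diag([1.0, 0.0, -1.0])

rng = np.random.default_rng(3)
psi = rng.normal(size=3) + 1j * rng.normal(size=3)
psi /= np.linalg.norm(psi)

expect = lambda op: (psi.conj() @ op @ psi).real
sigma = lambda op: np.sqrt(expect(op @ op) - expect(op)**2)

print(sigma(Jx) * sigma(Jy) >= (hbar / 2) * abs(expect(Jz)))
```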

    A counterexample

    Suppose we consider a quantum particle on a ring, where the wave function depends on an angular variable \theta, which we may take to lie in the interval [0,2\pi]. Define "position" and "momentum" operators \hat{A} and \hat{B} by : \hat{A}\psi(\theta)= \theta\psi(\theta),\quad \theta\in [0,2\pi], and :\hat{B}\psi=-i\hbar\frac{d\psi}{d\theta}, where we impose periodic boundary conditions on \hat{B}. The definition of \hat{A} depends on our choice to have \theta range from 0 to 2\pi. These operators satisfy the usual commutation relations for position and momentum operators, [\hat{A},\hat{B}]=i\hbar.

    Now let \psi be any of the eigenstates of \hat{B}, which are given by \psi(\theta)=e^{in\theta} for an integer n. These states are normalizable, unlike the eigenstates of the momentum operator on the line. Also the operator \hat{A} is bounded, since \theta ranges over a bounded interval. Thus, in the state \psi, the uncertainty of B is zero and the uncertainty of A is finite, so that :\sigma_A\sigma_B=0. Although this result appears to violate the Robertson uncertainty principle, the paradox is resolved when we note that \psi is not in the domain of the operator \hat{B}\hat{A}, since multiplication by \theta disrupts the periodic boundary conditions imposed on \hat{B}. Thus, the derivation of the Robertson relation, which requires \hat{A}\hat{B}\psi and \hat{B}\hat{A}\psi to be defined, does not apply. (These operators also furnish an example of operators satisfying the canonical commutation relations but not the Weyl relations.)

    For the usual position and momentum operators \hat{X} and \hat{P} on the real line, no such counterexamples can occur. As long as \sigma_x and \sigma_p are defined in the state \psi, the Heisenberg uncertainty principle holds, even if \psi fails to be in the domain of \hat{X}\hat{P} or of \hat{P}\hat{X}.

    Examples

    Quantum harmonic oscillator stationary states



    Consider a one-dimensional quantum harmonic oscillator (QHO). It is possible to express the position and momentum operators in terms of the creation and annihilation operators: :\hat x = \sqrt{\frac{\hbar}{2m\omega}}(a+a^\dagger) :\hat p = i\sqrt{\frac{m \omega\hbar}{2}}(a^\dagger-a).

    Using the standard rules for creation and annihilation operators on the eigenstates of the QHO, :a^{\dagger}|n\rangle=\sqrt{n+1}|n+1\rangle :a|n\rangle=\sqrt{n}|n-1\rangle, the variances may be computed directly, :\sigma_x^2 = \frac{\hbar}{m\omega} \left( n+\frac{1}{2} \right) :\sigma_p^2 = \hbar m\omega \left( n+\frac{1}{2} \right). The product of these standard deviations is then :\sigma_x \sigma_p = \hbar \left(n+\frac{1}{2}\right) \ge \frac{\hbar}{2}.~

    In particular, the above Kennard bound is saturated for the ground state n = 0, for which the probability density is just the normal distribution.
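This result is easy to reproduce numerically; the sketch below (not from the article) builds truncated ladder operators in a finite Fock basis, assuming units with ħ = m = ω = 1:

```python
# Sketch (hbar = m = omega = 1): build truncated creation/annihilation
# matrices and confirm sigma_x * sigma_p = hbar*(n + 1/2) for the number state |n>.
import numpy as np

hbar = m = omega = 1.0
dim = 30                                           # truncation of the Fock space
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)       # annihilation: a|n> = sqrt(n)|n-1>
adag = a.conj().T

x = np.sqrt(hbar / (2 * m * omega)) * (a + adag)
p = 1j * np.sqrt(m * omega * hbar / 2) * (adag - a)

n = 3                                              # a number state well below the truncation
ket = np.zeros(dim); ket[n] = 1.0

expect = lambda op: (ket @ op @ ket).real
sigma = lambda op: np.sqrt(expect(op @ op) - expect(op)**2)

print(sigma(x) * sigma(p), hbar * (n + 0.5))       # both equal 3.5
```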

    Quantum harmonic oscillators with Gaussian initial condition

    In a quantum harmonic oscillator of characteristic angular frequency ω, place a state that is offset from the bottom of the potential by some displacement x0 as :\psi(x)=\left(\frac{m \Omega}{\pi \hbar}\right)^{1/4} \exp{\left( -\frac{m \Omega (x-x_0)^2}{2\hbar}\right)}, where Ω describes the width of the initial state but need not be the same as ω. Through integration over the propagator, we can solve for the time-dependent solution. After many cancelations, the probability densities reduce to :|\Psi(x,t)|^2 \sim \mathcal{N}\left( x_0 \cos{(\omega t)} , \frac{\hbar}{2 m \Omega} \left( \cos^2(\omega t) + \frac{\Omega^2}{\omega^2} \sin^2{(\omega t)} \right)\right) :|\Phi(p,t)|^2 \sim \mathcal{N}\left( -m x_0 \omega \sin(\omega t), \frac{\hbar m \Omega}{2} \left( \cos^2{(\omega t)} + \frac{\omega^2}{\Omega^2} \sin^2{(\omega t)} \right)\right), where we have used the notation \mathcal{N}(\mu, \sigma^2) to denote a normal distribution of mean μ and variance σ^2. Copying the variances above and applying trigonometric identities, we can write the product of the standard deviations as

    : \begin{align} \sigma_x \sigma_p&=\frac{\hbar}{2}\sqrt{\left( \cos^2{(\omega t)} + \frac{\Omega^2}{\omega^2} \sin^2{(\omega t)}\right)\left( \cos^2{(\omega t)} + \frac{\omega^2}{\Omega^2} \sin^2{(\omega t)}\right)} \\ &= \frac{\hbar}{4}\sqrt{3+\frac{1}{2}\left(\frac{\Omega^2}{\omega^2}+\frac{\omega^2}{\Omega^2}\right)-\left(\frac{1}{2}\left(\frac{\Omega^2}{\omega^2}+\frac{\omega^2}{\Omega^2}\right)-1\right) \cos{(4 \omega t)}} \end{align}

    From the relations

    :\frac{\Omega^2}{\omega^2}+\frac{\omega^2}{\Omega^2} \ge 2, \quad |\cos(4 \omega t)| \le 1,

    we can conclude the following (the rightmost equality holds only when Ω = ω): :\sigma_x \sigma_p \ge \frac{\hbar}{4}\sqrt{3+\frac{1}{2} \left(\frac{\Omega^2}{\omega^2}+\frac{\omega^2}{\Omega^2}\right)-\left(\frac{1}{2} \left(\frac{\Omega^2}{\omega^2}+\frac{\omega^2}{\Omega^2}\right)-1\right)} = \frac{\hbar}{2}.
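The closed-form product above can also be evaluated on a time grid to confirm the bound numerically (a sketch with ħ = 1 and an arbitrary choice Ω = 2.5ω):

```python
# Sketch: evaluate the closed-form product sigma_x(t)*sigma_p(t) and confirm
# it never drops below hbar/2 (hbar = 1 here).
import numpy as np

hbar, omega, Omega = 1.0, 1.0, 2.5          # Omega != omega gives a "breathing" packet
t = np.linspace(0, 4 * np.pi / omega, 2001)

c2, s2 = np.cos(omega * t)**2, np.sin(omega * t)**2
product = (hbar / 2) * np.sqrt((c2 + (Omega/omega)**2 * s2) * (c2 + (omega/Omega)**2 * s2))

print(product.min() >= hbar / 2 - 1e-12)    # True; the bound is attained at certain times
print(product.max())                        # exceeds hbar/2 at intermediate times when Omega != omega
```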

    Coherent states



    A coherent state is a right eigenstate of the annihilation operator, :\hat{a}|\alpha\rangle=\alpha|\alpha\rangle, which may be represented in terms of Fock states as :|\alpha\rangle =e^{-{|\alpha|^2 \over 2}}\sum_{n=0}^\infty {\alpha^{n} \over \sqrt{n!}}|n\rangle. In a coherent state, the position and momentum uncertainties are independent of \alpha and their product saturates the Kennard bound: \sigma_x \sigma_p = \frac{\hbar}{2}.

    Entropic uncertainty principle

    A measurement apparatus will have a finite resolution set by the discretization of its possible outputs into bins, with the probability of lying within one of the bins given by the Born rule. We will consider the most common experimental situation, in which the bins are of uniform size. Let δx be a measure of the spatial resolution. We take the zeroth bin to be centered near the origin, with possibly some small constant offset c. The probability of lying within the jth interval of width δx is

    :\operatorname P[x_j]= \int_{(j-1/2)\delta x-c}^{(j+1/2)\delta x-c}| \psi(x)|^2 \, dx

    To account for this discretization, we can define the Shannon entropy of the wave function for a given measurement apparatus as

    : H_x=-\sum_{j=-\infty}^\infty \operatorname P[x_j] \ln \operatorname P[x_j].

    Under the above definition, the entropic uncertainty relation is

    :H_x+H_p>\ln\left(\frac{e}{2}\right)-\ln\left(\frac{\delta x \,\delta p}{h}\right).

    Here we note that \delta x \,\delta p/h is a typical infinitesimal phase-space volume used in the calculation of a partition function. The inequality is also strict and not saturated. Efforts to improve this bound are an active area of research.
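For a concrete check (a sketch, not from the article), take a chirped Gaussian wave packet whose position and momentum densities are Gaussians with σx = σp = 1 in units with ħ = 1 (so σxσp = 1 > ħ/2), bin both densities, and compare the summed Shannon entropies with the stated bound:

```python
# Sketch (hbar = 1, h = 2*pi): compute the binned Shannon entropies H_x and H_p
# for Gaussian position and momentum densities and compare with the bound.
import numpy as np
from math import erf, log, pi, e

hbar = 1.0
h = 2 * pi * hbar
sx, sp_ = 1.0, 1.0          # widths of the position and momentum densities (chirped Gaussian)
dx, dp = 0.1, 0.05          # bin widths of the hypothetical measurement apparatus

def binned_entropy(sigma, delta, nbins=4000):
    # probability of each bin [(j-1/2)*delta, (j+1/2)*delta) for a zero-mean Gaussian
    edges = (np.arange(-nbins, nbins + 1) - 0.5) * delta
    cdf = np.array([0.5 * (1 + erf(z / (sigma * np.sqrt(2)))) for z in edges])
    P = np.diff(cdf)
    P = P[P > 0]
    return -np.sum(P * np.log(P))

H_x = binned_entropy(sx, dx)
H_p = binned_entropy(sp_, dp)
bound = log(e / 2) - log(dx * dp / h)
print(H_x + H_p, ">", bound)     # ~8.14 > ~7.44
```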

    The Efimov inequality by Pauli matrices



    In 1976, Sergei P. Efimov deduced an inequality that refines the Robertson relation by applying high-order commutators. His approach is based on the Pauli matrices. Later, V. V. Dodonov used the method to derive relations for several observables by using Clifford algebra.

    According to Jackiw, the Robertson uncertainty is valid only when the commutator is a C-number. The Efimov method is effective for variables that have commutators of high order, for example for the kinetic energy operator and the coordinate operator. Consider two operators \hat A and \hat B that have commutator \hat C:

    ::: \left[ \hat A,\hat B \right]=\hat C .

    To shorten formulas we use the operator deviations:

    ::: \delta \hat A = \hat A -\left\langle \hat A \right\rangle , so that the new operators have zero mean deviation. To use the Pauli matrices we can consider the operator: ::: \hat F=\gamma_1 \, \delta \hat A\, \sigma_1 + \gamma_2 \,\delta \hat B \, \sigma_2 + \gamma_3 \, \delta \hat C \, \sigma_3 ,

    where the 2×2 spin matrices \sigma_i have the commutators:

    ::: \left[ \sigma_i ,\sigma_k \right]= i\, e_{ikl} \sigma_l , where e_{ikl} is the antisymmetric symbol. They act in the spin space independently from \delta \hat A\,, \delta \hat B\,, \delta \hat C. The Pauli matrices define a Clifford algebra. We take the arbitrary numbers \gamma_i in the operator \hat F to be real. The physical square of the operator is equal to:

    ::: \hat F \hat F^+ = \gamma_1^2 \,( \delta \hat A\, \delta \hat A^+)+ \gamma_2^2 \,( \delta \hat B\, \delta \hat B^+) + \gamma_3^2 \,( \delta \hat C\, \delta \hat C^+)+ \gamma_1 \gamma_2 \, \hat C\, \sigma_3 + \gamma_2 \gamma_3\, \hat C_2 \, \sigma_1 - \gamma_1 \gamma_3 \, \hat C_3 \, \sigma_2,

    where \hat F^+ is the adjoint operator, and the commutators \hat C_2 and \hat C_3 are the following:

    :::\hat C_2 = i\left[ \delta\hat B,\hat C \right],\qquad \hat C_3 = i\left[ \delta\hat A,\hat C \right] .

    The operator \hat F \hat F^+ is positive-definite, which is essential for obtaining the inequality below. Taking the average value of it over the state \left| \psi \right\rangle, we get the positive-definite 2×2 matrix:

    :\begin{align} \left\langle\psi \right| \hat F \hat F^+\left|\psi \right\rangle &= \gamma_1^2 \, \left\langle (\delta \hat A )^2 \right\rangle + \gamma_2^2 \,\left\langle ( \delta \hat B )^2 \right\rangle + \gamma_3^2 \, \left\langle ( \delta \hat C )^2 \right\rangle \\ &\quad + \gamma_1 \gamma_2 \, \left\langle \hat C \right\rangle \, \sigma_3 + \gamma_2 \gamma_3\, \left\langle\hat C_2 \right\rangle \, \sigma_1 - \gamma_1 \gamma_3 \, \left\langle \hat C_3 \right\rangle \, \sigma_2 , \end{align} where we have used the notation:

    : \left\langle (\delta \hat A )^2 \right\rangle = \left\langle \delta \hat A\, \delta \hat A^+ \right\rangle and analogously for the operators \hat B, \hat C. Since the coefficients \gamma_i are arbitrary in the equation, we get a positive-definite 6×6 matrix. Sylvester's criterion says that its leading principal minors are non-negative. The Robertson uncertainty follows from the minor of fourth degree. To strengthen the result, we calculate the determinant of sixth order:

    :::\left\langle (\delta \hat A)^2 \right\rangle \left\langle (\delta \hat B)^2 \right\rangle \left\langle (\delta \hat C)^2 \right\rangle \ge \frac{1}{4}\left\langle \hat C \right\rangle^2 \left\langle (\delta \hat C)^2 \right\rangle +\frac{1}{4}\left\langle (\delta \hat A)^2 \right\rangle \left\langle \hat C_2 \right\rangle^2 + \frac{1}{4}\left\langle (\delta \hat B)^2 \right\rangle \left\langle \hat C_3 \right\rangle^2

    The equality is observed only when the state is an eigenstate for the operator \hat F and likewise for the spin variables:

    :::\hat F\, \left| \psi \right\rangle \left| \hat s \right\rangle=0 . The relation found may be applied to the kinetic energy operator \hat E_{kin} =\frac{\mathbf{\hat p}^2}{2} and to the coordinate operator \mathbf{\hat x}:

    :::\left\langle (\delta \hat E)^2 \right\rangle \left\langle (\delta \mathbf{\hat x})^2 \right\rangle \ge \frac{\hbar^2}{4} \left\langle \mathbf{\hat p} \right\rangle^2 + \frac{\hbar^2}{2} \left\langle (\delta \hat E)^2 \right\rangle \left\langle (\delta \mathbf{\hat p})^2 \right\rangle^{-1}

    In particular, equality in the formula is observed for the ground state of the oscillator, whereas the right-hand side of the Robertson uncertainty vanishes:

    ::: \left\langle \mathbf{\hat p} \right\rangle = 0 .

    The physical meaning of the relation is clearer if one divides it by the squared nonzero average momentum, which yields:

    :::\left\langle (\delta \hat E)^2 \right\rangle (\delta t)^2 \ge \frac{\hbar^2}{4} + \frac{\hbar^2}{2} \left\langle (\delta \hat E)^2 \right\rangle \left\langle (\delta \mathbf{\hat p})^2 \right\rangle^{-1} \left\langle \mathbf{\hat p} \right\rangle^{-2} ,

    where (\delta t)^2=\left\langle (\delta \mathbf{\hat x})^2 \right\rangle \left\langle \mathbf{\hat p} \right\rangle^{-2} is the squared effective time within which the particle moves near the mean trajectory (the mass of the particle is set equal to 1).

    The method can be applied to the three noncommuting operators of angular momentum \mathbf{\hat L}. We compile the operator:

    ::: \hat F=\gamma_1 \, \delta \hat L_x\, \sigma_1 + \gamma_2 \,\delta \hat L_y \, \sigma_2 + \gamma_3 \, \delta \hat L_z \, \sigma_3 . We recall that the operators \sigma_i are auxiliary and there is no relation between the spin variables of the particle; only their commutation properties matter. The squared and averaged operator \hat F gives a positive-definite matrix, from which we get the following inequality:

    :::\left\langle (\delta \hat L_x)^2 \right\rangle \left\langle (\delta \hat L_y)^2 \right\rangle \left\langle (\delta \hat L_z)^2 \right\rangle \ge \frac{\hbar^2}{4} \sum_{i=1}^3 \left\langle (\delta \hat L_i)^2 \right\rangle \left\langle \hat L_i \right\rangle^2

    To develop the method for a group of operators, one may use the Clifford algebra instead of the Pauli matrices.

    Harmonic analysis



    In the context of harmonic analysis, a branch of mathematics, the uncertainty principle implies that one cannot at the same time localize the value of a function and its Fourier transform. To wit, the following inequality holds: :\left(\int_{-\infty}^\infty x^2 |f(x)|^2\,dx\right)\left(\int_{-\infty}^\infty \xi^2 |\hat{f}(\xi)|^2\,d\xi\right)\ge \frac{\|f\|_2^4}{16\pi^2}.

    Further mathematical uncertainty inequalities, including the above entropic uncertainty, hold between a function f and its Fourier transform \hat{f}:

    :H_x+H_\xi \ge \log(e/2)

    Signal processing

    In the context of signal processing, and in particular time–frequency analysis, uncertainty principles are referred to as the Gabor limit, after Dennis Gabor, or sometimes the Heisenberg–Gabor limit. The basic result, which follows from "Benedicks's theorem", below, is that a function cannot be both time limited and band limited (a function and its Fourier transform cannot both have bounded domain)—see bandlimited versus timelimited. Thus

    : \sigma_t \cdot \sigma_f \ge \frac{1}{4\pi} \approx 0.08 \text{ cycles}

    where \sigma_t and \sigma_f are the standard deviations of the time and frequency estimates respectively.
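For a Gaussian pulse the product σt·σf equals the Gabor limit exactly; the following sketch (not part of the article) reproduces this numerically with an FFT:

```python
# Sketch: for a Gaussian pulse the time-frequency spread product equals 1/(4*pi).
import numpy as np

t = np.linspace(-50, 50, 2**14)
dt = t[1] - t[0]
signal = np.exp(-t**2 / (2 * 1.5**2))              # Gaussian envelope, width 1.5

spectrum = np.abs(np.fft.fftshift(np.fft.fft(signal)))**2
f = np.fft.fftshift(np.fft.fftfreq(t.size, d=dt))  # ordinary frequency, in cycles

def spread(u, w):
    w = w / np.sum(w)
    mean = np.sum(u * w)
    return np.sqrt(np.sum((u - mean)**2 * w))

sigma_t = spread(t, signal**2)                     # energy density |x(t)|^2 as the weight
sigma_f = spread(f, spectrum)
print(sigma_t * sigma_f, 1 / (4 * np.pi))          # both ~0.0796
```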

    Stated alternatively, "One cannot simultaneously sharply localize a signal (function f) in both the time domain and frequency domain (\hat{f}, its Fourier transform)".

    When applied to filters, the result implies that one cannot achieve high temporal resolution and frequency resolution at the same time; a concrete example are the resolution issues of the short-time Fourier transform—if one uses a wide window, one achieves good frequency resolution at the cost of temporal resolution, while a narrow window has the opposite trade-off.

    Alternate theorems give more precise quantitative results, and, in time–frequency analysis, rather than interpreting the (1-dimensional) time and frequency domains separately, one instead interprets the limit as a lower limit on the support of a function in the (2-dimensional) time–frequency plane. In practice, the Gabor limit limits the simultaneous time–frequency resolution one can achieve without interference; it is possible to achieve higher resolution, but at the cost of different components of the signal interfering with each other.

    As a result, in order to analyze signals where the transients are important, the wavelet transform is often used instead of the Fourier.

    DFT-Uncertainty principle

    There is an uncertainty principle that uses signal sparsity (or the number of non-zero coefficients).

    Let \left\{ \mathbf{x_n} \right\} := x_0, x_1, \ldots, x_{N-1} be a sequence of N complex numbers and \left\{ \mathbf{X_k} \right\} := X_0, X_1, \ldots, X_{N-1} its discrete Fourier transform.

    Denote by \|x\|_0 the number of non-zero elements in the time sequence x_0,x_1,\ldots,x_{N-1} and by \|X\|_0 the number of non-zero elements in the frequency sequence X_0,X_1,\ldots,X_{N-1}. Then :N\leq \|x\|_0 \cdot \|X\|_0.
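This bound is easy to verify numerically; the sketch below (an illustration, not from the article) uses a Dirac comb, which saturates it, and a generic sparse signal, which does not:

```python
# Sketch: check N <= ||x||_0 * ||X||_0 for a spike train and a sparse signal.
import numpy as np

N = 64
comb = np.zeros(N); comb[::8] = 1.0            # 8 evenly spaced spikes
X = np.fft.fft(comb)
nnz = lambda v: np.count_nonzero(np.abs(v) > 1e-9)
print(nnz(comb) * nnz(X), ">=", N)             # 8 * 8 = 64: the comb saturates the bound

sparse = np.zeros(N); sparse[[3, 17, 29]] = 1.0
print(nnz(sparse) * nnz(np.fft.fft(sparse)), ">=", N)   # 3 * 64 >= 64
```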

    Benedicks's theorem

    The Amrein–Berthier and Benedicks theorem intuitively says that the set of points where f is non-zero and the set of points where \hat{f} is non-zero cannot both be small.

    Specifically, it is impossible for a function f in L^2(\mathbf{R}^d) and its Fourier transform \hat{f} to both be supported on sets of finite Lebesgue measure. A more quantitative version is : \|f\|_{L^2(\mathbf{R}^d)}\leq Ce^{C|S||\Sigma|} \bigl(\|f\|_{L^2(S^c)} + \| \hat{f} \|_{L^2(\Sigma^c)} \bigr) ~.

    One expects that the factor Ce^{C|S||\Sigma|} may be replaced by Ce^{C(|S||\Sigma|)^{1/d}}, which is only known if either S or \Sigma is convex.

    Hardy's uncertainty principle

    The mathematician G. H. Hardy formulated the following uncertainty principle: it is not possible for f and \hat{f} to both be "very rapidly decreasing". Specifically, if f in L^2(\mathbb{R}) is such that :|f(x)|\leq C(1+|x|)^N e^{-a\pi x^2} and :|\hat{f}(\xi)|\leq C(1+|\xi|)^N e^{-b\pi \xi^2} (C>0, N an integer),

    then, if ab>1, f=0, while if ab=1, then there is a polynomial P of degree \leq N such that

    ::f(x)=P(x)e^{-a\pi x^2}.

    This was later improved as follows: if f in L^2(mathbb{R}^d) is such that

    :\int_{\mathbb{R}^d}\int_{\mathbb{R}^d}|f(x)||\hat{f}(\xi)|\frac{e^{\pi|\langle x,\xi \rangle|}}{(1+|x|+|\xi|)^N} \, dx \, d\xi < +\infty ~, then ::f(x)=P(x)e^{-\pi\langle Ax,x \rangle} ~, where P is a polynomial of degree at most (N-d)/2 and A is a real positive definite matrix.

    This result was stated in Beurling's complete works without proof and proved in Hörmander (the case d=1, N=0) and Bonami, Demange, and Jaming for the general case. Note that Hörmander–Beurling's version implies the case ab>1 in Hardy's Theorem, while the version by Bonami–Demange–Jaming covers the full strength of Hardy's Theorem. A different proof of Beurling's theorem based on Liouville's theorem appeared in ref.

    A full description of the case ab<1, as well as the following extension to Schwartz class distributions, appears in ref.

    Theorem. If a tempered distribution f\in\mathcal{S}'(\mathbb{R}^d) is such that

    :e^{\pi|x|^2}f\in\mathcal{S}'(\mathbb{R}^d) and :e^{\pi|\xi|^2}\hat f\in\mathcal{S}'(\mathbb{R}^d) ~, then ::f(x)=P(x)e^{-\pi\langle Ax,x \rangle} ~, for some convenient polynomial P and real positive definite d\times d matrix A.

    History

    Werner Heisenberg formulated the uncertainty principle at Niels Bohr's institute in Copenhagen, while working on the mathematical foundations of quantum mechanics.

    In 1925, following pioneering work with Hendrik Kramers, Heisenberg developed matrix mechanics, which replaced the ad hoc old quantum theory with modern quantum mechanics. The central premise was that the classical concept of motion does not fit at the quantum level, as electrons in an atom do not travel on sharply defined orbits. Rather, their motion is smeared out in a strange way: the Fourier transform of its time dependence only involves those frequencies that could be observed in the quantum jumps of their radiation.

    Heisenberg's paper did not admit any unobservable quantities like the exact position of the electron in an orbit at any time; he only allowed the theorist to talk about the Fourier components of the motion. Since the Fourier components were not defined at the classical frequencies, they could not be used to construct an exact trajectory, so that the formalism could not answer certain overly precise questions about where the electron was or how fast it was going.

    In March 1926, working in Bohr's institute, Heisenberg realized that the non-commutativity implies the uncertainty principle. This implication provided a clear physical interpretation for the non-commutativity, and it laid the foundation for what became known as the Copenhagen interpretation of quantum mechanics. Heisenberg showed that the commutation relation implies an uncertainty, or in Bohr's language a complementarity. Any two variables that do not commute cannot be measured simultaneously—the more precisely one is known, the less precisely the other can be known. Heisenberg wrote:
    It can be expressed in its simplest form as follows: One can never know with perfect accuracy both of those two important factors which determine the movement of one of the smallest particles—its position and its velocity. It is impossible to determine accurately both the position and the direction and speed of a particle at the same instant.


    In his celebrated 1927 paper, "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik" ("On the Perceptual Content of Quantum Theoretical Kinematics and Mechanics"), Heisenberg established this expression as the minimum amount of unavoidable momentum disturbance caused by any position measurement, but he did not give a precise definition for the uncertainties Δx and Δp. Instead, he gave some plausible estimates in each case separately. In his Chicago lecture he further refined his principle. Kennard in 1927 first proved the modern inequality:

    ::\sigma_x\sigma_p\ge\frac{\hbar}{2} \qquad (2)

    where \sigma_x and \sigma_p are the standard deviations of position and momentum. Heisenberg only proved relation (2) for the special case of Gaussian states.

    Terminology and translation

    Throughout the main body of his original 1927 paper, written in German, Heisenberg used the word, "Ungenauigkeit" ("indeterminacy"), to describe the basic theoretical principle. Only in the endnote did he switch to the word, "Unsicherheit" ("uncertainty"). When the English-language version of Heisenberg's textbook, The Physical Principles of the Quantum Theory, was published in 1930, however, the translation "uncertainty" was used, and it became the more commonly used term in the English language thereafter.

    Heisenberg's microscope

    The principle is quite counter-intuitive, so the early students of quantum theory had to be reassured that naive measurements to violate it were bound always to be unworkable. One way in which Heisenberg originally illustrated the intrinsic impossibility of violating the uncertainty principle is by utilizing the observer effect of an imaginary microscope as a measuring device.

    He imagines an experimenter trying to measure the position and momentum of an electron by shooting a photon at it. :Problem 1 – If the photon has a short wavelength, and therefore, a large momentum, the position can be measured accurately. But the photon scatters in a random direction, transferring a large and uncertain amount of momentum to the electron. If the photon has a long wavelength and low momentum, the collision does not disturb the electron's momentum very much, but the scattering will reveal its position only vaguely.

    :Problem 2 – If a large aperture is used for the microscope, the electron's location can be well resolved (see Rayleigh criterion); but by the principle of conservation of momentum, the transverse momentum of the incoming photon affects the electron's beamline momentum and hence, the new momentum of the electron resolves poorly. If a small aperture is used, the accuracy of both resolutions is the other way around.

    The combination of these trade-offs implies that no matter what photon wavelength and aperture size are used, the product of the uncertainty in measured position and measured momentum is greater than or equal to a lower limit, which is (up to a small numerical factor) equal to Planck's constant. Heisenberg did not care to formulate the uncertainty principle as an exact limit, and preferred to use it instead, as a heuristic quantitative statement, correct up to small numerical factors, which makes the radically new noncommutativity of quantum mechanics inevitable.

    Critical reactions

    The Copenhagen interpretation of quantum mechanics and Heisenberg's Uncertainty Principle were, in fact, seen as twin targets by detractors who believed in an underlying determinism and realism. According to the Copenhagen interpretation of quantum mechanics, there is no fundamental reality that the quantum state describes, just a prescription for calculating experimental results. There is no way to say what the state of a system fundamentally is, only what the result of observations might be.

    Albert Einstein believed that randomness is a reflection of our ignorance of some fundamental property of reality, while Niels Bohr believed that the probability distributions are fundamental and irreducible, and depend on which measurements we choose to perform. Einstein and Bohr debated the uncertainty principle for many years.

    The ideal of the detached observer

    Wolfgang Pauli called Einstein's fundamental objection to the uncertainty principle "the ideal of the detached observer" (phrase translated from the German).

    Einstein's slit

    The first of Einstein's thought experiments challenging the uncertainty principle went as follows:

    :Consider a particle passing through a slit of width d. The slit introduces an uncertainty in momentum of approximately h/d because the particle passes through the wall. But let us determine the momentum of the particle by measuring the recoil of the wall. In doing so, we find the momentum of the particle to arbitrary accuracy by conservation of momentum.

    Bohr's response was that the wall is quantum mechanical as well, and that to measure the recoil to accuracy \Delta p, the momentum of the wall must be known to this accuracy before the particle passes through. This introduces an uncertainty in the position of the wall and therefore the position of the slit equal to h/\Delta p, and if the wall's momentum is known precisely enough to measure the recoil, the slit's position is uncertain enough to disallow a position measurement.

    A similar analysis with particles diffracting through multiple slits is given by Richard Feynman.

    Einstein's box

    Bohr was present when Einstein proposed the thought experiment which has become known as Einstein's box. Einstein argued that "Heisenberg's uncertainty equation implied that the uncertainty in time was related to the uncertainty in energy, the product of the two being related to Planck's constant." Consider, he said, an ideal box, lined with mirrors so that it can contain light indefinitely. The box could be weighed before a clockwork mechanism opened an ideal shutter at a chosen instant to allow one single photon to escape. "We now know, explained Einstein, precisely the time at which the photon left the box." "Now, weigh the box again. The change of mass tells the energy of the emitted light. In this manner, said Einstein, one could measure the energy emitted and the time it was released with any desired precision, in contradiction to the uncertainty principle."

    Bohr spent a sleepless night considering this argument, and eventually realized that it was flawed. He pointed out that if the box were to be weighed, say by a spring and a pointer on a scale, "since the box must move vertically with a change in its weight, there will be uncertainty in its vertical velocity and therefore an uncertainty in its height above the table. ... Furthermore, the uncertainty about the elevation above the earth's surface will result in an uncertainty in the rate of the clock," because of Einstein's own theory of gravity's effect on time. "Through this chain of uncertainties, Bohr showed that Einstein's light box experiment could not simultaneously measure exactly both the energy of the photon and the time of its escape."

    EPR paradox for entangled particles

    Bohr was compelled to modify his understanding of the uncertainty principle after another thought experiment by Einstein. In 1935, Einstein, Podolsky and Rosen (see EPR paradox) published an analysis of widely separated entangled particles. Measuring one particle, Einstein realized, would alter the probability distribution of the other, yet here the other particle could not possibly be disturbed. This example led Bohr to revise his understanding of the principle, concluding that the uncertainty was not caused by a direct interaction.

    But Einstein came to much more far-reaching conclusions from the same thought experiment. He believed the "natural basic assumption" that a complete description of reality would have to predict the results of experiments from "locally changing deterministic quantities" and therefore would have to include more information than the maximum possible allowed by the uncertainty principle.

    In 1964, John Bell showed that this assumption can be falsified, since it would imply a certain inequality between the probabilities of different experiments. Experimental results confirm the predictions of quantum mechanics, ruling out Einstein's basic assumption that led him to the suggestion of his hidden variables. These hidden variables may be "hidden" because of an illusion that occurs during observations of objects that are too large or too small. This illusion can be likened to rotating fan blades that seem to pop in and out of existence at different locations and sometimes seem to be in the same place at the same time when observed. This same illusion manifests itself in the observation of subatomic particles. Both the fan blades and the subatomic particles are moving so fast that the illusion is seen by the observer. Therefore, it is possible that there would be predictability of the subatomic particles' behavior and characteristics to a recording device capable of very high speed tracking. Ironically, this fact is one of the best pieces of evidence supporting Karl Popper's philosophy of invalidation of a theory by falsification-experiments. That is to say, here Einstein's "basic assumption" became falsified by experiments based on Bell's inequalities. For the objections of Karl Popper to the Heisenberg inequality itself, see below.

    While it is possible to assume that quantum mechanical predictions are due to nonlocal, hidden variables, and in fact David Bohm invented such a formulation, this resolution is not satisfactory to the vast majority of physicists. The question of whether a random outcome is predetermined by a nonlocal theory can be philosophical, and it can be potentially intractable. If the hidden variables were not constrained, they could just be a list of random digits that are used to produce the measurement outcomes. To make it sensible, the assumption of nonlocal hidden variables is sometimes augmented by a second assumption—that the size of the observable universe puts a limit on the computations that these variables can do. A nonlocal theory of this sort predicts that a quantum computer would encounter fundamental obstacles when attempting to factor numbers of approximately 10,000 digits or more; a potentially achievable task in quantum mechanics.

    Popper's criticism



    Karl Popper approached the problem of indeterminacy as a logician and metaphysical realist. He disagreed with the application of the uncertainty relations to individual particles rather than to ensembles of identically prepared particles, referring to them as "statistical scatter relations". In this statistical interpretation, a particular measurement may be made to arbitrary precision without invalidating the quantum theory. This directly contrasts with the Copenhagen interpretation of quantum mechanics, which is non-deterministic but lacks local hidden variables.

    In 1934, Popper published Zur Kritik der Ungenauigkeitsrelationen (Critique of the Uncertainty Relations) in Naturwissenschaften, and in the same year Logik der Forschung (translated and updated by the author as The Logic of Scientific Discovery in 1959), outlining his arguments for the statistical interpretation. In 1982, he further developed his theory in Quantum theory and the schism in Physics, writing:
    [Heisenberg's] formulae are, beyond all doubt, derivable statistical formulae of the quantum theory. But they have been habitually misinterpreted by those quantum theorists who said that these formulae can be interpreted as determining some upper limit to the precision of our measurements. [original emphasis]


    Popper proposed an experiment to falsify the uncertainty relations, although he later withdrew his initial version after discussions with Weizsäcker, Heisenberg, and Einstein; this experiment may have influenced the formulation of the EPR experiment.

    Many-worlds uncertainty

    The many-worlds interpretation originally outlined by Hugh Everett III in 1957 is partly meant to reconcile the differences between Einstein's and Bohr's views by replacing Bohr's wave function collapse with an ensemble of deterministic and independent universes whose distribution is governed by wave functions and the Schrödinger equation. Thus, uncertainty in the many-worlds interpretation follows from each observer within any universe having no knowledge of what goes on in the other universes.

    Free will



    Some scientists including Arthur Compton and Martin Heisenberg have suggested that the uncertainty principle, or at least the general probabilistic nature of quantum mechanics, could be evidence for the two-stage model of free will. One critique, however, is that apart from the basic role of quantum mechanics as a foundation for chemistry, nontrivial biological mechanisms requiring quantum mechanics are unlikely, due to the rapid decoherence time of quantum systems at room temperature. Proponents of this theory commonly say that this decoherence is overcome by both screening and decoherence-free subspaces found in biological cells.

    Thermodynamics



    There is reason to believe that violating the uncertainty principle also strongly implies the violation of the second law of thermodynamics.