Comment 29 by Jos Gibbons

They seem enough like subsets of it that I'd hope people with questions about them will raise those questions here, but maybe they'd rather wait for a thread like that later. You've brought up an example or two below:

The tricky thing here is that "variance" (the square of the "standard deviation") is an ambiguous term. I'll need to put all this into a disambiguating context (or at least a context which elucidates how much ambiguity is in need of disambiguation) by using the central limit theorem.

Given n IIDs (independent identically distributed random variables), e.g. measurements of a real-world quantity that has a probability distribution, if each of those IIDs x_1, …, x_n has common mean μ and common variance σ², their mean x̄ has mean μ and variance σ²/n. What is more, when n is large, the distribution of x̄ is almost exactly Gaussian or "Normal". That lets you approximately calculate the probability distribution of the "z-value" of x̄, i.e. how many of its standard deviations it is away from its mean (z < 0 when x̄ < μ). When statisticians talk about a 95% confidence interval, they have verified that μ lies in the interval with probability 0.95 on this Gaussian approximation. Note that the standard deviation is much smaller for the mean than for the individual x_i.

Now consider a more general case, where the x_i still have a common mean but different variances, which means they're no longer IIDs. An unbiased weighted mean w of the x_i is equal to the sum over all i of (a_i)(x_i), where the a_i are non-negative reals summing to 1. The usual mean takes each a_i to be the same, i.e. each = 1/n. For any choice of the a_i, you can calculate the variance of w from the variances of the x_i, provided the x_i are still independent. There's a standard answer to the question, "which choice of the a_i, in terms of the σ_i, minimises the variance of w?" Needless to say, statisticians have gotten pretty adept at making standard deviations as small as possible so that gaps between means can be large by comparison.

So now let's say you want to know whether the xs have a greater mean than the ys, and so you're interested in the mean and standard deviation of the variable x−y. To be sure, a standard deviation mustn't be too large a percentage of that mean's modulus, or the result will have little statistical significance. But which standard deviation needs to be small? That of the sample mean rather than of individual values. I remember explaining this once to a poster who didn't see how satellite readings could confirm temperature changes smaller than their temperature errors. As for the experiment to which you refer, the fact is I can't say how significant the final results were without knowing a lot more about the numbers quoted, including which type of "standard deviation" that term referred to. (Indeed, it's more complicated than I'm implying here; for example, there are some contexts in which a sum of squared deviations should be divided not by n, but instead by n−1.)

On with complex numbers. (Did you know the guy in charge of Conservapedia refuses to embrace complex numbers? He comes from a field that needs them!)
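The standard answer to "which a_i minimise the variance of w?" is inverse-variance weighting, a_i ∝ 1/σ_i². A minimal simulation sketch (the values μ = 5, σ = 1 and 3, and the trial count are arbitrary illustrative choices, not from the discussion above):

```python
import random

random.seed(0)

# Two independent estimators of the same quantity mu = 5.0, with
# different standard deviations sigma_1 = 1.0 and sigma_2 = 3.0.
mu, sigmas = 5.0, [1.0, 3.0]

# Inverse-variance weights a_i proportional to 1/sigma_i^2, normalised
# to sum to 1 -- the variance-minimising choice for a weighted mean.
inv_var = [1.0 / s**2 for s in sigmas]
weights = [w / sum(inv_var) for w in inv_var]

def trial():
    xs = [random.gauss(mu, s) for s in sigmas]
    plain = sum(xs) / len(xs)                              # equal weights 1/n
    weighted = sum(a * x for a, x in zip(weights, xs))     # inverse-variance
    return plain, weighted

n_trials = 20000
plain_vals, weighted_vals = zip(*(trial() for _ in range(n_trials)))

def variance(vals):
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / (len(vals) - 1)  # n-1 divisor

print(variance(plain_vals))     # theory: (1 + 9)/4  = 2.5
print(variance(weighted_vals))  # theory: 1/(1 + 1/9) = 0.9
```

The weighted mean's variance comes out well below the plain mean's, matching the theoretical values 1/(Σ 1/σ_i²) versus (Σ σ_i²)/n².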

Not entirely. For starters, the complex numbers are the unique algebraically closed extension of the reals. If you want the set S of permitted coefficients in polynomials to include all roots of such polynomials, complex numbers are the way to go. It gives a sense of completeness, rather than claiming sometimes there just aren’t all that many roots, which smacks of denial when compared with a system that finds them. To be sure, you can say everything again with real numbers only; in solving quadratic equations with real coefficients, for example, you could seek those real pairs (x,y) with a(x²-y²)+bx+c = (2ax+b)y = 0. You can even restate the complex-coefficients generalisation if you really want to. In the end, however, that’s more a case of deliberately avoiding admitting you’re talking about complex numbers. And if z²+1=0 is rewritten into a statement with a root in your theory, that’s as good as admitting i is there. I will say more on that below.
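That real-pair restatement of a quadratic is easy to check numerically: for z = x + iy, the real and imaginary parts of az² + bz + c are exactly a(x²−y²)+bx+c and (2ax+b)y. A quick sketch (the coefficients are an arbitrary example with negative discriminant):

```python
import cmath

# A quadratic with real coefficients and negative discriminant,
# so its roots are genuinely complex: z^2 + 2z + 5 = 0.
a, b, c = 1.0, 2.0, 5.0

disc = cmath.sqrt(b**2 - 4 * a * c)          # complex square root of -16
roots = [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]

for z in roots:
    x, y = z.real, z.imag
    # The single complex statement a*z^2 + b*z + c = 0 is exactly the
    # pair of real statements from the text:
    real_part = a * (x**2 - y**2) + b * x + c
    imag_part = (2 * a * x + b) * y
    assert abs(real_part) < 1e-9 and abs(imag_part) < 1e-9
    assert abs(a * z**2 + b * z + c) < 1e-9

print(roots)  # the two roots -1 + 2i and -1 - 2i
```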

Getting back to the casus irreducibilis, the only way to work out along the way what the roots are is to make use of existence theorems on the complex numbers. For example, one key part of solving x³ + px + q = 0 relies on noting there must be complex (possibly real) u, v satisfying u+v = S, uv = P, regardless of your choice of S, P. The proof takes S = x, P = −p/3. As for getting three different roots, you later obtain formulae for u³, v³ from which you can only recover u, v to within factors which are complex cube roots of 1. Once you have the final result, you could say, "Theorem: these are the roots" and then verify they are through substitution, and in the case where all roots are real your proof of their validity wouldn't need to use complex numbers. But why bother with all of that hassle?
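That route through complex arithmetic can be run numerically; a sketch of Cardano's method in the casus irreducibilis (the cubic x³ − 3x + 1 = 0 is an illustrative choice with three real roots but a negative resolvent discriminant):

```python
import cmath

# Cardano's method for x^3 + p*x + q = 0, run through complex
# arithmetic even though all three roots turn out real.
p, q = -3.0, 1.0  # x^3 - 3x + 1 = 0

# u^3 solves the resolvent quadratic t^2 + q*t - (p/3)^3 = 0;
# its discriminant (q/2)^2 + (p/3)^3 is negative here, hence complex d.
d = cmath.sqrt((q / 2) ** 2 + (p / 3) ** 3)
u3 = -q / 2 + d

omega = cmath.exp(2j * cmath.pi / 3)   # primitive complex cube root of 1
roots = []
for k in range(3):
    u = u3 ** (1 / 3) * omega ** k     # the three cube roots of u^3
    v = -p / (3 * u)                   # enforces the constraint uv = -p/3
    roots.append(u + v)

for z in roots:
    assert abs(z.imag) < 1e-9          # imaginary parts cancel: roots are real
    x = z.real
    assert abs(x**3 + p * x + q) < 1e-9  # and each satisfies the cubic
print(sorted(z.real for z in roots))
```

The three real roots (2cos 40°, 2cos 160°, 2cos 280°, roughly 1.532, −1.879, 0.347) emerge only after the intermediate complex quantities cancel, which is the whole point of the casus irreducibilis.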

Another famous use of complex numbers is to calculate a cosine-based formula C or a sine-based formula S by first making sure you have one of each, very similar to one another, and then using results like de Moivre's theorem and the properties of geometric series to obtain C+iS and hence C and S, so you get two results for the price of one. But the "even if you don't need complex numbers for this, they make things a lot easier" examples keep coming. Of the many examples from Euclidean geometry alone, my personal favourite is Napoleon's theorem, so named because legend has it Napoleon Bonaparte found a proof of this theorem (although I don't think the legend says he used complex numbers): given a triangle ABC, erect equilateral triangles in the plane on its sides (i) outwardly and (ii) inwardly. In case (i), the centres of these equilateral triangles are themselves the vertices of an equilateral triangle. In case (ii), this is true too. These two bonus equilateral triangles' areas differ by the area of triangle ABC. (You can also express the bonus triangles' areas as linear combinations of the original triangle's area and the sum of the squares of its side lengths.)

You probably know complex numbers appear in the axioms of quantum mechanics, because the states are vectors in a complex-valued Hilbert space. You can rewrite the entire thing using only real numbers, but it's inadvisable. You end up making the "inner product" more like a Minkowski product in special relativity, which means things like the non-negativity of norms and the Cauchy-Schwarz inequality need a fundamental overhaul, and you're better off just proving the underlying theorems about complex numbers. (In fact, even special relativity is sometimes written using complex numbers instead, to make its equivalents of those results easier to prove.) Because that's all the jump from the quantum axioms to the quantum theorems is: a proof of a theorem about complex numbers which states that that jump is logically valid.

What real-world applications do imaginary numbers have, indeed? Assuming you mean ordinary 2-part complex numbers, they unavoidably come up in quantum mechanics and make many calculations easier in such diverse fields as optics, electrical engineering and fluid physics (including its applications in climatology, hydraulics and aerodynamics, and possibly also wherever you need to understand vibrations, e.g. designing bridges that will handle the wind properly). "Hypercomplex" numbers such as quaternions have even more esoteric links to reality, but it's all useful nonetheless.

Permalink Thu, 10 May 2012 16:45:46 UTC | #940889
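Napoleon's theorem in particular takes only a few lines of complex arithmetic to verify: represent the vertices as complex numbers and a 60° rotation as multiplication by e^{−iπ/3}. A numerical sketch (the triangle vertices are arbitrary, listed counterclockwise so this rotation erects the triangles outwardly, case (i)):

```python
import cmath

# Vertices of an arbitrary triangle as complex numbers, listed
# counterclockwise (orientation decides "outward" vs "inward").
A, B, C = 0 + 0j, 4 + 1j, 1 + 3j

rot = cmath.exp(-1j * cmath.pi / 3)   # rotation by -60 degrees

def napoleon_centre(P, Q):
    """Centroid of the equilateral triangle erected outwardly on PQ."""
    apex = P + (Q - P) * rot          # third vertex of the erected triangle
    return (P + Q + apex) / 3

N1 = napoleon_centre(A, B)
N2 = napoleon_centre(B, C)
N3 = napoleon_centre(C, A)

# Napoleon's theorem: the three centres form an equilateral triangle,
# so all three pairwise distances agree.
sides = [abs(N1 - N2), abs(N2 - N3), abs(N3 - N1)]
assert max(sides) - min(sides) < 1e-9
print(sides)
```

Reversing the orientation (or the sign in the exponent) gives case (ii), the inward triangles, and the same equality of side lengths holds there too.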