Chapter 4
Complex Numbers
4.8 De Moivre’s Theorem
De Moivre's formula, named after Abraham de Moivre, states that for any complex number (and, in particular, for any real number) x and any integer n it holds that
(cos x + i sin x)^n = cos(nx) + i sin(nx).
The formula is important because it connects complex numbers (i stands for the imaginary unit) and trigonometry. The expression "cos x + i sin x" is sometimes abbreviated to "cis x".
By expanding the left-hand side and then comparing the real and imaginary parts, it is possible to derive useful expressions for cos(nx) and sin(nx) in terms of cos(x) and sin(x). Furthermore, one can use this formula to find explicit expressions for the n-th roots of unity, that is, complex numbers z such that z^n = 1.
Multiplying two complex numbers in polar form is straightforward using the rules set out in the last section.
This leads to a very simple formula for calculating powers of complex numbers, known as De Moivre's theorem.
Consider the product of z with itself (z^2), where z = r(cos θ + i sin θ).
The rule of multiplying the moduli and adding the arguments gives
z^2 = [r(cos θ + i sin θ)][r(cos θ + i sin θ)] = r^2(cos 2θ + i sin 2θ).
Now consider z^3.
Take z^2 = r^2(cos 2θ + i sin 2θ) and multiply this by z = r(cos θ + i sin θ).
This gives z^3 = [r^2(cos 2θ + i sin 2θ)][r(cos θ + i sin θ)] = r^3(cos 3θ + i sin 3θ).
A pattern has emerged.
This result can be extended to the nth power and is known as De Moivre's Theorem.
De Moivre's theorem
If z = r(cos θ + i sin θ), then z^n = r^n(cos nθ + i sin nθ) for any integer n.
Now look at the proof of this theorem, which is included for interest at proof (11).
This is a very useful result as it makes it simple to find z^n once z is expressed in polar form.
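For readers who like to sanity-check results numerically, here is a minimal Python sketch; the modulus r, argument theta, and power n below are arbitrary illustrative values, not taken from the text.

import cmath

r, theta, n = 2.0, cmath.pi / 5, 7          # illustrative modulus, argument, and integer power

z = r * (cmath.cos(theta) + 1j * cmath.sin(theta))                       # z in polar form
direct = z ** n                                                          # power computed directly
de_moivre = r ** n * (cmath.cos(n * theta) + 1j * cmath.sin(n * theta))  # r^n (cos n*theta + i sin n*theta)

print(direct, de_moivre)                    # the two values agree to rounding error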
Example 1
Calculate
Answer:
Using De Moivre's theorem
[To reach this result, multiply the moduli to give 25 and add the arguments.]
Example 2
Calculate (1 − i√3)^6.
Answer:
First express z = 1 − i√3 in polar form.
|z| = √(1² + (√3)²) = √4 = 2.
As z is in the fourth quadrant, arg z = −θ where tan θ = √3, i.e. θ = π/3.
So arg z = −π/3 and z = 2(cos(−π/3) + i sin(−π/3)).
Using De Moivre's theorem,
(1 − i√3)^6 = 2^6(cos(−2π) + i sin(−2π)) = 64(1 + 0i) = 64.
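A quick numerical check of this example in Python (a sketch only, using cmath.polar and the ** operator to confirm the hand calculation):

import cmath, math

z = complex(1, -math.sqrt(3))             # z = 1 - i*sqrt(3)
r, phi = cmath.polar(z)                   # modulus and argument
print(r, phi)                             # approximately 2.0 and -pi/3 (about -1.0472)
print(z ** 6)                             # approximately (64+0j), as found above
print(r ** 6 * cmath.exp(1j * 6 * phi))   # De Moivre's theorem gives the same value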
De Moivre's theorem and the rule for dividing complex numbers in polar form can be used to simplify fractions involving powers.
Example 3
Simplify
Answer:
First express 1 − i√3 and 1 + i in polar form.
From the previous example, 1 − i√3 = 2(cos(−π/3) + i sin(−π/3)).
From an Argand diagram, 1 + i = √2(cos(π/4) + i sin(π/4)).
By De Moivre's theorem
and
Hence
[To divide complex numbers in polar form, divide the moduli and subtract the arguments.]
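Because the powers in this example are not reproduced above, the short Python sketch below only checks the underlying division rule itself, using the two base numbers 1 − i√3 and 1 + i from the example:

import cmath, math

w = complex(1, -math.sqrt(3))       # 1 - i*sqrt(3): modulus 2, argument -pi/3
v = complex(1, 1)                   # 1 + i: modulus sqrt(2), argument pi/4

rw, aw = cmath.polar(w)
rv, av = cmath.polar(v)

quotient_polar = cmath.rect(rw / rv, aw - av)   # divide the moduli, subtract the arguments
print(quotient_polar, w / v)                    # both give the same complex number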
Question
Simplify
Answer
Question
Simplify
Answer
Chapter 7
Calculus
7.3 The Derivative
[Figure: the graph of a function, drawn in black, and a tangent line to that function, drawn in red. The slope of the tangent line is equal to the derivative of the function at the marked point.]
In calculus, a branch of mathematics, the derivative is a measurement of how a function changes when the values of its inputs change. Loosely speaking, a derivative can be thought of as how much a quantity is changing at some given point. For example, the derivative of the position of a car at some point in time is the velocity, or speed, at which that car is traveling (conversely the integral of the velocity is the car's position or distance traveled).
The derivative of a function at a chosen input value describes the best linear approximation of the function near that input value. For a real-valued function of a single real variable, the derivative at a point equals the slope of the tangent line to the graph of the function at that point. In higher dimensions, the derivative of a function at a point is a linear transformation called the linearization.[1]
The process of finding a derivative is called differentiation. The fundamental theorem of calculus states that differentiation is the reverse process to integration.
The derivative of a function represents an infinitesimal change in the function with respect to one of its variables.
The "simple" derivative of a function with respect to a variable is denoted either or
often written in-line as . When derivatives are taken with respect to time, they are often denoted using Newton's overdot notation for fluxions,
The "d-ism" of Leibnitz's eventually won the notation battle against the "dotage" of Newton's fluxion notation (P. Ion, pers. comm., Aug. 18, 2006).
When a derivative is taken times, the notation or
is used, with
etc., the corresponding fluxion notation.
When a function f(x, y, ...) depends on more than one variable, a partial derivative such as ∂f/∂x or ∂²f/(∂x ∂y) can be used to specify the derivative with respect to one or more variables.
The derivative of a function f(x) with respect to the variable x is defined as
f'(x) = lim_{h→0} [f(x + h) − f(x)] / h,
but may also be calculated more symmetrically as
f'(x) = lim_{h→0} [f(x + h) − f(x − h)] / (2h),
provided the derivative is known to exist.
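These two limit definitions translate directly into finite-difference approximations. The following Python sketch (the test function sin and the step size are arbitrary illustrative choices) compares the one-sided and symmetric quotients against the exact derivative:

import math

def forward_diff(f, x, h=1e-6):
    # one-sided quotient [f(x+h) - f(x)] / h
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h=1e-6):
    # symmetric quotient [f(x+h) - f(x-h)] / (2h)
    return (f(x + h) - f(x - h)) / (2 * h)

x0 = 0.7                            # arbitrary evaluation point
print(forward_diff(math.sin, x0))   # approximates cos(0.7)
print(central_diff(math.sin, x0))   # usually closer to cos(0.7)
print(math.cos(x0))                 # exact value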
It should be noted that the above definitions refer to "real" derivatives, i.e., derivatives which are restricted to directions along the real axis. However, this restriction is artificial, and derivatives are most naturally defined in the complex plane, where they are sometimes explicitly referred to as complex derivatives. In order for complex derivatives to exist, the same result must be obtained for derivatives taken in any direction in the complex plane. Somewhat surprisingly, almost all of the important functions in mathematics satisfy this property, which is equivalent to saying that they satisfy the Cauchy-Riemann equations.
These considerations can lead to confusion for students because elementary calculus texts commonly consider only "real" derivatives, never alluding to the existence of complex derivatives, variables, or functions. For example, textbook examples to the contrary, the "derivative" (read: complex derivative) of the absolute value function |z| does not exist because at every point in the complex plane, the value of the derivative depends on the direction in which the derivative is taken (so the Cauchy-Riemann equations cannot and do not hold). However, the real derivative (i.e., restricting the derivative to directions along the real axis) can be defined for points other than x = 0 as
d|x|/dx = x/|x|.
As a result of the fact that computer algebra programs such as Mathematica generically deal with complex variables (i.e., the definition of derivative always means complex derivative), d|x|/dx is correctly returned unevaluated by such software.
If the first derivative exists, the second derivative may be defined as
f''(x) = lim_{h→0} [f'(x + h) − f'(x)] / h
and calculated more symmetrically as
f''(x) = lim_{h→0} [f(x + h) − 2f(x) + f(x − h)] / h²,
again provided the second derivative is known to exist.
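In the same finite-difference spirit, the symmetric second difference approximates the second derivative; the sketch below (arbitrary test function and step size) checks it against the exact value:

import math

def second_diff(f, x, h=1e-4):
    # symmetric second difference [f(x+h) - 2 f(x) + f(x-h)] / h^2
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

x0 = 0.7
print(second_diff(math.sin, x0))    # approximates -sin(0.7)
print(-math.sin(x0))                # exact second derivative of sin at 0.7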
Note that in order for the limit to exist, both the one-sided limits as h → 0⁺ and as h → 0⁻ must exist and be equal, so the function must be continuous. However, continuity is a necessary but not sufficient condition for differentiability. Since some discontinuous functions can be integrated, in a sense there are "more" functions which can be integrated than differentiated. In a letter to Stieltjes, Hermite wrote, "I recoil with dismay and horror at this lamentable plague of functions which do not have derivatives."
A three-dimensional generalization of the derivative to an arbitrary direction is known as the directional derivative. In general, derivatives are mathematical objects which exist between smooth functions on manifolds. In this formalism, derivatives are usually assembled into "tangent maps."
Performing numerical differentiation is in many ways more difficult than numerical integration. This is because while numerical integration requires only good continuity properties of the function being integrated, numerical differentiation requires more complicated properties such as Lipschitz classes.
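The difficulty is easy to see in floating-point arithmetic: shrinking the step size reduces truncation error but amplifies rounding error. A small illustrative sweep in Python (forward difference of exp at x = 1, an assumed example):

import math

exact = math.exp(1.0)                       # d/dx e^x at x = 1 is e
for h in (1e-2, 1e-4, 1e-6, 1e-8, 1e-10, 1e-12):
    approx = (math.exp(1.0 + h) - math.exp(1.0)) / h
    print(h, abs(approx - exact))           # the error first falls, then rises again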
Simple derivatives of some simple functions follow; for example,
d/dx x^n = n x^(n−1),  d/dx e^x = e^x,  d/dx ln x = 1/x,  d/dx sin x = cos x,  d/dx sn(x) = cn(x) dn(x),
where sn(x), cn(x), and dn(x) are Jacobi elliptic functions, and the product rule and quotient rule have been used extensively to expand the derivatives.
There are a number of important rules for computing derivatives of certain combinations of functions. Derivatives of sums are equal to the sum of derivatives, so that
d/dx [f(x) + ... + h(x)] = f'(x) + ... + h'(x).
In addition, if c is a constant,
d/dx [c f(x)] = c f'(x).
The product rule for differentiation states
d/dx [f(x) g(x)] = f(x) g'(x) + f'(x) g(x),
where f' denotes the derivative of f with respect to x. This derivative rule can be applied iteratively to yield derivative rules for products of three or more functions, for example,
d/dx [f g h] = f' g h + f g' h + f g h'.
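The iterated product rule can be confirmed symbolically; a minimal sketch with SymPy, using generic undefined functions f, g, h:

from sympy import Function, diff, simplify, symbols

x = symbols('x')
f, g, h = (Function(name)(x) for name in ('f', 'g', 'h'))

lhs = diff(f * g * h, x)                                           # derivative of the triple product
rhs = diff(f, x) * g * h + f * diff(g, x) * h + f * g * diff(h, x)
print(simplify(lhs - rhs))                                         # prints 0, confirming the rule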
The quotient rule for derivatives states that
d/dx [f(x)/g(x)] = [g(x) f'(x) − f(x) g'(x)] / [g(x)]²,
while the power rule gives
d/dx x^n = n x^(n−1).
Another very important rule for computing derivatives is the chain rule, which states that for y = y(u(x)),
dy/dx = (dy/du)(du/dx),
or more generally, for z = z(x(t), y(t)),
dz/dt = (∂z/∂x)(dx/dt) + (∂z/∂y)(dy/dt),
where ∂z/∂x denotes a partial derivative.
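A concrete single-variable instance of the chain rule, sketched in SymPy with an assumed example y = sin(x^2):

from sympy import symbols, sin, cos, diff, simplify

x = symbols('x')
u = x ** 2                 # inner function u(x)
y = sin(u)                 # outer function y(u)

dy_dx = diff(y, x)
print(dy_dx)                               # 2*x*cos(x**2)
print(simplify(dy_dx - cos(u) * 2 * x))    # 0: matches (dy/du)*(du/dx)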
Miscellaneous other derivative identities include
dy/dx = (dy/dt) / (dx/dt)
and
dy/dx = 1 / (dx/dy).
If , where is a constant, then
so
Derivative identities of inverse functions include
dx/dy = 1 / (dy/dx)
and
d²x/dy² = −(d²y/dx²) (dy/dx)^(−3).
A vector derivative of a vector function
X(t) = [x₁(t), x₂(t), ..., x_k(t)]
can be defined by
dX/dt = [dx₁/dt, dx₂/dt, ..., dx_k/dt].
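Componentwise differentiation of a vector function can be sketched in SymPy; the helix-like X(t) below is just an illustrative choice:

from sympy import Matrix, symbols, sin, cos

t = symbols('t')
X = Matrix([cos(t), sin(t), t])    # illustrative vector function X(t)
print(X.diff(t))                   # Matrix([-sin(t), cos(t), 1]), the componentwise derivative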
The nth derivatives of e^(1/x)/x for n = 1, 2, ... are
d/dx [e^(1/x)/x] = −(1/x² + 1/x³) e^(1/x),
d²/dx² [e^(1/x)/x] = (2/x³ + 4/x⁴ + 1/x⁵) e^(1/x),
d³/dx³ [e^(1/x)/x] = −(6/x⁴ + 18/x⁵ + 9/x⁶ + 1/x⁷) e^(1/x), ....
The nth row of the triangle of coefficients 1; 1, 1; 2, 4, 1; 6, 18, 9, 1; ... (Sloane's A021009) is given by the absolute values of the coefficients of n! L_n(x), where L_n(x) is the nth Laguerre polynomial.
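The coefficient pattern can be reproduced symbolically; a short SymPy sketch that strips the common exp(1/x) factor from the first few derivatives:

from sympy import symbols, exp, diff, expand

x = symbols('x')
f = exp(1 / x) / x
for n in (1, 2, 3):
    # expanding after removing exp(1/x) exposes the coefficients 1,1; 2,4,1; 6,18,9,1
    print(expand(diff(f, x, n) * exp(-1 / x)))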
Faà di Bruno's formula gives an explicit formula for the nth derivative of the composition f(g(t)).
The June 2, 1996 comic strip FoxTrot by Bill Amend (Amend 1998, p. 19; Mitchell 2006/2007) featured a derivative as a "hard" exam problem intended for a remedial math class but accidentally handed out to the normal class.
Chapter 6
Probability
6.4 Probability
Probability is the branch of mathematics that studies the possible outcomes of given events together with the outcomes' relative likelihoods and distributions. In common usage, the word "probability" is used to mean the chance that a particular event (or set of events) will occur expressed on a linear scale from 0 (impossibility) to 1 (certainty), also expressed as a percentage between 0 and 100%. The analysis of events governed by probability is called statistics.
There are several competing interpretations of the actual "meaning" of probabilities. Frequentists view probability simply as a measure of the frequency of outcomes (the more conventional interpretation), while Bayesians treat probability more subjectively as a statistical procedure that endeavors to estimate parameters of an underlying distribution based on the observed distribution.
A properly normalized function that assigns a probability "density" to each possible outcome within some interval is called a probability function (or probability distribution function), and its cumulative value (integral for a continuous distribution or sum for a discrete distribution) is called a distribution function (or cumulative distribution function).
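As a minimal sketch (the standard normal distribution and a crude midpoint Riemann sum are assumptions chosen only for illustration), a probability density can be integrated numerically to recover the cumulative distribution function:

import math

def pdf(x):
    # standard normal probability density
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def cdf(x, lo=-8.0, steps=200000):
    # crude midpoint Riemann sum of the density from lo up to x
    dx = (x - lo) / steps
    return sum(pdf(lo + (i + 0.5) * dx) for i in range(steps)) * dx

print(cdf(0.0))     # about 0.5
print(cdf(1.96))    # about 0.975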
A variate is defined as the set of all random variables that obey a given probabilistic law. It is common practice to denote a variate with a capital letter (most commonly X). The set of all values that X can take is then called the range, denoted R_X (Evans et al. 2000, p. 5). Specific elements in the range of X are called quantiles and denoted x, and the probability that a variate X assumes the element x is denoted P(X = x).
Probabilities are defined to obey certain assumptions, called the probability axioms. Let a sample space S contain the union (∪) of all possible events E_i, so
S = ∪_i E_i,
and let E and F denote subsets of S. Further, let F' = not-F be the complement of F, so that
F ∪ F' = S.
Then the set E can be written as
E = E ∩ S = E ∩ (F ∪ F') = (E ∩ F) ∪ (E ∩ F'),
where ∩ denotes the intersection. Then
P(E) = P(E ∩ F) + P(E ∩ F'),
since (E ∩ F) ∩ (E ∩ F') = ∅, where ∅ is the empty set.
Let P(E|F) denote the conditional probability of E given that F has already occurred; then
P(E|F) = P(E ∩ F) / P(F).
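The defining relation can be checked with a small simulation; the events E = "the roll is even" and F = "the roll is at least 4" for a fair die are assumed purely for illustration:

import random

random.seed(0)
rolls = [random.randint(1, 6) for _ in range(100000)]

p_f = sum(r >= 4 for r in rolls) / len(rolls)
p_e_and_f = sum(r >= 4 and r % 2 == 0 for r in rolls) / len(rolls)
p_e_given_f = sum(r % 2 == 0 for r in rolls if r >= 4) / sum(r >= 4 for r in rolls)

print(p_e_given_f)          # estimated P(E|F), about 2/3
print(p_e_and_f / p_f)      # P(E and F)/P(F), the same value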
The relationship
P(E ∩ F) = P(E) P(F)
holds if E and F are independent events. A very important result states that
P(E ∪ F) = P(E) + P(F) − P(E ∩ F),
which can be generalized to
P(∪_i E_i) = Σ_i P(E_i) − Σ_{i<j} P(E_i ∩ E_j) + Σ_{i<j<k} P(E_i ∩ E_j ∩ E_k) − ... .