Algebra
Now you will start to appreciate Mathematica's ability to perform symbolic manipulations. Here are some basic
manipulations we use very often in algebra:
Expand[]        multiply out products and powers (get rid of parentheses)
Factor[]        factor into a product (put back the parentheses)
Together[]      put over a common denominator
Numerator[]     extract the numerator
Denominator[]   extract the denominator
Simplify[]      simplify an expression
FullSimplify[]  simplify with more tricks (takes longer than Simplify)
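As a quick sketch of the first two (a minimal example, not from the original notebook):

```mathematica
Expand[(x + 1)^2]
1 + 2 x + x^2

Factor[x^2 - 1]
(-1 + x) (1 + x)
```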
The function required for your needs may not be obvious. Sometimes you may have to rewrite a complex
algebraic expression in different ways, and try Simplify or other functions several times, in order to get to the
desired form.
Replacements can be a precious trick. They are performed by typing /. after an expression:
y = x ^ 2 + 1;
y /. x -> 2
We have simply replaced the specific value 2 for x in the expression y (note that a replacement does not assign the value 2 to the variable x itself).
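Replacements also accept a list of rules, so several symbols can be substituted at once; a small sketch (not from the original notebook):

```mathematica
x^2 + y /. {x -> 2, y -> 3}
7
```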
Functions
We have already seen many Mathematica intrinsic functions, such as Log[x], Sin[θ] and so on. Now we will learn
how to define our own functions. This is a great tool for programming, so make sure you understand the basic ideas
about defining functions. Let's start with an example:
f[x_] = x + 3
IMPORTANT:
1. Our function name does NOT start with a capital letter.
2. The argument of the function is a variable followed by the underscore symbol.
The underscore symbol in the argument is the most important concept here. While x is a variable, and its name is
definitely x, x_ is a pattern, not a variable. x_ may stand for anything you like, for example it could be a generic
expression:
2 Lecture2.nb
f[a + b^2 + c/2]
3 + a + b^2 + c/2
As you can see, the function f did its job: it took the argument provided and added 3. In this case the pattern x_
took the value a + b^2 + c/2. If you had defined f[x] instead of f[x_], this would not have worked:
g[z] = z + 3;
g[a + b^2 + c/2]
g[a + b^2 + c/2]
As you can see, g[z] is not as smart as f[x_]!! You should have done:
g[z] /. z -> a + b^2 + c/2
3 + a + b^2 + c/2
to make it work, because g takes only the variable z as its argument; but generally this is not a convenient way to
define a function.
IMPORTANT: Remember that the argument of a function is a pattern (x_), not a variable (x).
As with the Mathematica intrinsic functions and with all variables, you can check the definition of the functions
you define yourself with the question marks, ?f or ??f.
The generalization to functions of several variables is trivial:
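For example, a function of two variables might look like this (the name f2 is hypothetical, chosen to avoid clashing with the f defined above):

```mathematica
f2[x_, y_] = x^2 + y^2

f2[1, 2]
5
```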
By the way, you can apply f[x_] to an expression, say y, in three different ways:
f[y]     like we did above
y // f   postfix form, like we have already done with the function N, e.g. expression // N
f @ y    prefix form
f[x_] := x^2/10.
You can type SHIFT+ENTER (like I have just done), but you won't see any output. If you read the Mathematica
Book, you will discover that this so-called "delayed equal" is presented as the standard way to define a function,
because it is generally true that we introduce functions long before we use them. However, when possible, it is
legal to define functions with the normal = symbol. That is possible more often than you may think. You may think
you need the delayed equal for f[x_] := x^2 + a, if x or a are not defined yet. Wrong! Remember that Mathematica's
strength is symbolic manipulation, and Mathematica is perfectly happy to receive the expression x^2 + a and work
with it, until you decide, perhaps, to give x some value. The delayed evaluation becomes truly necessary in situations
where the function is defined, for example, using the solution of an equation (maybe a differential equation)
that has not been solved yet.
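A minimal sketch of the difference (the symbols p, q and a are hypothetical): with the normal =, the right hand side is evaluated once, at definition time; with :=, it is re-evaluated at every call.

```mathematica
a = 1;
p[x_] = a x;    (* right hand side evaluated now: p[x] is just x *)
q[x_] := a x;   (* right hand side stored, evaluated at call time *)
a = 2;
p[3]   (* 3: still uses the old value of a *)
q[3]   (* 6: picks up the new value of a *)
```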
You can also define functions with conditions. This is necessary when functions have discontinuities, so a simple
analytical expression without conditions cannot cover the full range of values of the argument of the function. The
general syntax makes use of /; like this:
f[x_] := something /; condition
Here is the Heaviside function:
h[x_] := 1 /; x > 0;
h[x_] := 0 /; x < 0
The condition after /; is just a logical expression, which evaluates to True or False:
4 < 3 && 2 == 1
False
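We can check that our Heaviside function behaves as expected; note that h[0] is returned unevaluated, since neither condition covers it:

```mathematica
h[2]
1

h[-1]
0

h[0]
h[0]
```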
Calculus
You can ask Mathematica to compute derivatives of a function in two different ways:
D[f[x],x]
f ' [x]
The first syntax is more general, because it is trivially generalized to partial derivatives as well:
D[f[x,y,z],y]
For example:
f[x_, y_] := x^2 + y;
D[f[x, y], x]
2 x
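The D syntax generalizes naturally; as a sketch (examples of my own, not from the original notebook), mixed and higher derivatives are written like this:

```mathematica
D[x^2 y, x, y]      (* mixed derivative d^2/(dx dy) *)
2 x

D[Sin[x], {x, 2}]   (* second derivative *)
-Sin[x]
```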
You can easily see how the other syntax is insufficient for a partial derivative, unless you are interested only in the
derivative with respect to the first variable in the argument of the function:
f'[y]
2 y
I was trying to get the partial derivative with respect to y, but I still got the derivative with respect to the first
argument, x_, which is now called y (which is logical, because x_ is a pattern, not the variable x).
If you want to compute a Taylor series to define an approximation of a function of x around a point x = a, you may
sum terms of the form (1/n!) (d^n f/dx^n)|_(x=a) (x - a)^n.
Actually, Mathematica is better than that: it can compute the power series directly with the function
Series[f[x],{x,a,n}]
where n stands for the order of the expansion. For example:
s = Series[Sqrt[x^2] + Cos[x]^2, {x, a, 4}]
Sqrt[a^2] + Cos[a]^2 + (a/Sqrt[a^2] - 2 Cos[a] Sin[a]) (x - a) + (-Cos[a]^2 + Sin[a]^2) (x - a)^2 +
  (4/3) Cos[a] Sin[a] (x - a)^3 + (Cos[a]^2/3 - Sin[a]^2/3) (x - a)^4 + O[x - a]^5
which contains the terms of the expansion up to fourth order, as requested. The last term tells me that the error in this
approximation converges like (x-a)^5: if I reduce (x-a) by a factor of 2, the error decreases by a factor of 2^5.
IMPORTANT: The result of the function Series[] is a very special animal! You can add another series to it, or a function.
If you add a function, the function is first expanded to the same order as the series, and then it is summed with
the series. The nature of the animal is such that you cannot evaluate the series directly:
s /. x -> 2.0
SeriesData::ssdn : Attempt to evaluate a series at the number 2.; returning Indeterminate.
Indeterminate
To evaluate the series you first need to turn it into an ordinary expression, equal to the series truncated at the last
term. This is done with the function Normal:
Normal[s] /. x -> 2.0
Sqrt[a^2] + Cos[a]^2 + (2. - a) (a/Sqrt[a^2] - 2 Cos[a] Sin[a]) + (2. - a)^2 (-Cos[a]^2 + Sin[a]^2) +
  (4/3) (2. - a)^3 Cos[a] Sin[a] + (2. - a)^4 (Cos[a]^2/3 - Sin[a]^2/3)
Finally, let's take a look at integrals. The function Integrate[] is good for both indefinite and definite integrals:
Integrate[(2 + x^2 + x^4)/(a + x^2), x]
x^3/3 + (1 - a) x + ((2 - a + a^2) ArcTan[x/Sqrt[a]])/Sqrt[a]

Integrate[(2 + x^2 + x^4)/(a + x^2), {x, -60, 60}]
[The closed-form result is omitted; it is wrapped in an If giving the condition on a for validity, with the alternative branch the unevaluated
  Integrate[(2 + x^2 + x^4)/(a + x^2), {x, -60, 60},
    Assumptions -> Re[Sqrt[a]] == 0 && -60 < Im[Sqrt[a]] < 60]]
Not only was Mathematica able to solve the definite integral, the program is also giving us a conditional statement
that the parameter a needs to satisfy for the solution to be valid. Under the assumptions given at the end, instead,
no solution can be found.
By the way, for limits simply use:
Limit[f[x], x -> a]
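For instance (a standard example, not from the original notebook):

```mathematica
Limit[Sin[x]/x, x -> 0]
1
```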
Solve[3 x^3 + x^2 - 4 x - 2 == 0, x]
{{x -> -1}, {x -> (1/3) (1 - Sqrt[7])}, {x -> (1/3) (1 + Sqrt[7])}}
The most important thing to remember is that when we are looking for solutions of an equation, we are really asking
to evaluate the condition that the left hand side of the equation is equal to the right hand side (0 in this case), so
we need to use the double equal, ==, since the simple equal, =, is used to assign a value to a variable or a function, not
to test an expression.
For higher order equations, most likely Mathematica will not find analytical solutions:
Solve[x^6 + 3 x^4 - x + 34 == 0, x]
{{x -> Root[34 - #1 + 3 #1^4 + #1^6 &, 1]}, {x -> Root[34 - #1 + 3 #1^4 + #1^6 &, 2]},
 {x -> Root[34 - #1 + 3 #1^4 + #1^6 &, 3]}, {x -> Root[34 - #1 + 3 #1^4 + #1^6 &, 4]},
 {x -> Root[34 - #1 + 3 #1^4 + #1^6 &, 5]}, {x -> Root[34 - #1 + 3 #1^4 + #1^6 &, 6]}}
But don't give up! Mathematica will probably at least find the 6 numerical solutions:
% // N
{{x -> -1.33357 - 0.992949 I}, {x -> -1.33357 + 0.992949 I}, {x -> 0.0138974 - 2.1459 I},
 {x -> 0.0138974 + 2.1459 I}, {x -> 1.31967 - 0.964004 I}, {x -> 1.31967 + 0.964004 I}}
Yes! Mathematica found 6 complex solutions. Are you skeptical? You can easily check that the left hand side of the
equation is pretty close to zero, if evaluated at one of the solutions:
x^6 + 3 x^4 - x + 34 /. x -> -1.3335699070590556 + 0.9929492986722481 I
Don't forget about Mathematica's split personality. Here again you have the numerical equivalent of the analytical
solution function, NSolve:
NSolve[x^6 + 3 x^4 - x + 34 == 0, x]
{{x -> -1.33357 - 0.992949 I}, {x -> -1.33357 + 0.992949 I}, {x -> 0.0138974 - 2.1459 I},
 {x -> 0.0138974 + 2.1459 I}, {x -> 1.31967 - 0.964004 I}, {x -> 1.31967 + 0.964004 I}}
The functions Solve and NSolve can also solve systems of coupled equations, and the syntax is really intuitive:
Solve[{equation1,equation2,.....},{variable1,variable2,.....}]
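A minimal sketch with a linear system of two equations (not from the original notebook):

```mathematica
Solve[{x + y == 3, x - y == 1}, {x, y}]
{{x -> 2, y -> 1}}
```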
For example, let's solve analytically this innocent looking system of 3 equations of 3 unknowns:
[The analytical solution runs for pages of nested radicals such as (3 - a + Sqrt[5 - 6 a + a^2])^(1/3) and complex factors (1 ± I Sqrt[3]); the full output is omitted here.]
How many boring hours did we save? Solve is so nice that it can even be applied to systems of equations with no
solutions, or to underdetermined systems, that is, systems with non-unique solutions. If you put the system of
equations into matrix form, such a system corresponds to a matrix with zero determinant: it may have no solutions,
or the system is underdetermined (you may be able to solve for one variable as a function of some others, so the
system has infinite solutions). Mathematica's messages will be pretty explicit when that happens.
IMPORTANT: When you are trying to solve equations or systems of equations you must be very careful to
make sure that your unknowns have not been assigned any values or definitions earlier on, or for example
right here due to a typo (e.g. = instead of ==). To be safe, you may want to Clear all unknowns prior to using
them in a system of equations.
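A sketch of the problem and the cure (the assignment to x is hypothetical):

```mathematica
x = 5;
Solve[x^2 == 4, x]   (* fails: x already has the value 5, so it is not a valid variable *)

Clear[x]
Solve[x^2 == 4, x]
{{x -> -2}, {x -> 2}}
```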
And now the limitations of Solve and NSolve: pretty strong limitations, actually. Try to solve anything that is not a
polynomial expression, that is, something requiring non-algebraic manipulations, and most likely Mathematica
will complain that you are asking too much, even with NSolve. If that happens, then you'd better be CREATIVE.
You will need to guess the solutions, at least the ones you care about, and you will have to invent techniques to educate
your guess... but that's the next class...
Numerical Analysis
Finding solutions to complex nonlinear equations is far from an automatic process. You will find it requires some
creativity, as numerical methods for finding solutions require initial guesses. It may also be hard to know whether we
have found all possible solutions. For single equations, or for systems of two coupled equations, you can rely on
plotting the single function of one variable, or contour plotting the two functions of two variables. The function to
use is FindRoot, with the following syntax:
FindRoot[equation,{variable,guess}]
IMPORTANT: The function FindRoot is for numerical computations, hence the equation must be fully numerical
(no undefined parameters or variables apart from the unknowns).
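A classic sketch (not from the original notebook): the root of cos(x) = x near the guess x = 1:

```mathematica
FindRoot[Cos[x] == x, {x, 1}]
{x -> 0.739085}
```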
Example:
{x -> -1.14367}
Are there other solutions apart from this one? Well, we need to take a look at this equation by plotting the left
hand side, at least within a range of interest:
[Plot of the left hand side for -2 < x < 2]
{x -> 0.74473}
-7.771561172376096*^-16
That's a good accuracy of course! But let's imagine my initial guess had been less educated, for example:
{x -> 0.744732}
The solution appears to be the same as before, but only because both solutions are displayed with no more than 6
significant figures, Mathematica's default display. Let's check the accuracy now:
-2.8312293117727094*^-6
You can control the accuracy goal and the number of iterations like this:
{x -> 0.751772}
-0.011458638767890661
Clearly with only 2 iterations we could not reach an accuracy goal of 6 significant figures.
With a system of two coupled equations the trick to visualize the solutions is to use a contour plot:
{x -> 2.22393, y -> 0.654078}
{-0.334902, 0.000751537}
Let's visualize the first equation with a contour plot of its left hand side, showing only the locations in (x, y) space
where the left hand side is zero:
[Contour plot of the zero level of the first equation, for 0 < x < 10 and -4 < y < 2]
[Contour plot of the zero level of the second equation, over the same range]
Show[plot1, plot2]
[Combined contour plot showing the intersections of the two curves, for 0 < x < 10 and -4 < y < 2]
There is clearly an infinite number of solutions (keep moving to the right...). The first one we found was a tough one
(almost tangent, perhaps not even touching: click on the plot and make it larger by dragging one corner with the
cursor). Let's search for an obvious intersection:
{x -> 6.01478, y -> -1.8392}
{0., 0.}
Pretty good accuracy! Clearly intersections are easier to find than ambiguous tangent points...
You can also evaluate integrals numerically. Try to use the function Integrate with a fancy integral:
Integrate[Exp[Sin[x]], x]
∫ E^Sin[x] dx
Mathematica took some time, evidently the program tried a bunch of transformations, and came up with nothing.
Most likely you would have come up with nothing consulting a thick book of integrals as well. Now we are only
left with the option of evaluating this numerically. Of course, in the numerical integration, we need to decide what the
limits of the integration interval are:
12.8109
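The function used for this is NIntegrate, the numerical counterpart of Integrate; as a sketch (the integration limits here are my own choice, not the ones used in the lecture):

```mathematica
NIntegrate[Exp[Sin[x]], {x, 0, 2 Pi}]
7.95493
```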
Mathematica was very fast to compute this. This is what the function looks like:
[Plot of E^Sin[x] for -4 < x < 4]
Interpolation
If we are dealing with numerical data, for example results of an experiment, we may have reasons to think that our
discrete data really represent a continuous function that governs the physics of our experiment. In that case it makes
sense to guess what the value of the experiment would be at intermediate points, where we don't have data, by carrying
out an interpolation. Let's generate some data, plot it, compute an interpolation, and overplot the interpolated
function:
data = Table[{x, x + 10 Sin[x]}, {x, 0, 20}];
[ListPlot of the data points, for 0 < x < 20 and values between about -5 and 25]
f = Interpolation[data]
The result of Interpolation is what Mathematica calls an InterpolatingFunction object: a function with no explicit
analytical expression. We can still evaluate it in the usual way:
f[10]
10 + 10 Sin[10]
Well.....
16 Lecture2.nb
f[10] // N
4.55979
We can plot it as well. Notice that our discrete dataset was plotted with ListPlot (we could not use Plot for that).
Now f is a function, so we can use Plot:
[Plot of the interpolating function f, for 0 < x < 20]
And now we will learn that Show has no problem combining plots made with ListPlot and Plot:
Show[plot1, plot2]
[Combined plot: the data points overlaid with the interpolating function]
This is how a spline interpolation (what the function Interpolation does by default) looks. It is the smoothest way to
connect the given points, with continuous first (slope) and second (curvature) derivatives. It is achieved by putting
together pieces of cubic polynomials between each pair of points. With lower interpolation order (lower polynomial
degree), fewer derivatives of the interpolation function are continuous:
g = Interpolation[data, InterpolationOrder -> 1]
[Plot of the first-order (piecewise linear) interpolation, for 0 < x < 20]
With first order interpolation (linear interpolation), straight lines are simply drawn between the points; clearly the
first derivative is not even continuous:
[Plot of the first derivative of the linear interpolation, for 0 < x < 20]
While even the second derivative was continuous for the 3rd order interpolation (cubic spline):
[Plot of the second derivative of the cubic-spline interpolation, for 0 < x < 20]
But it is clear that the third derivative would not be continuous. If we want a continuous 6th derivative, we need to
interpolate to the 7th order.
Let's say your data looks good and you want to publish the results of your experiments. You also want to become
famous: you want people to use the results of your work. If so, it may help to include in your paper an analytical fit
to the data. That way, the result of all your hard work can be summarized into a formula that people can take
home and use. As you understand, this is a tricky art to master: the formula needs to fit the data as well as possible,
but cannot be so complicated as to be unreadable. If your physical process is scale free and generates power laws, then
the task is trivial. But in general your experiments will include parameter studies in multidimensional parameter
spaces, so your final formula may have to be a function of many variables... Good luck. Here we fit as a function of
one independent variable:
[Plot of a low-order polynomial fit to the data, for 0 < x < 20]
Pretty bad!! We need to ask Mathematica to also try higher order polynomials:
[Plot of a higher-order polynomial fit, for 0 < x < 20]
We can't deny we are having trouble here! No need to overplot the original data to notice that. But that's obvious:
the data have an oscillatory nature, so let's try with a sine function:
[Plot of the sine-function fit, which follows the data closely]
Now the fit is good (well, the data was generated with a sine function...). If you try with a cosine it does not work,
but if you try with Cos[x - 2] it is not so bad. Clearly, if the data is not fake, you have to do some educated guessing,
as usual in numerical analysis.
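The function used for this is Fit[data, basis, x], which returns the least-squares combination of the given basis functions; a sketch (the data here are made up for illustration):

```mathematica
Fit[{{1, 2}, {2, 4}, {3, 6}}, {1, x}, x]   (* linear basis: a + b x *)

Fit[data, {1, Sin[x]}, x]                  (* a basis including Sin[x], as in the text *)
```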
If you need to read data from a file, you first go to the directory with one of the following methods:
SetDirectory["/directory1/directory2/......"]
/directory1/directory2/......
Directory@D