
3 Frequency Modulation

The other common form of analog data transmission that is familiar to most people is called frequency modulation (FM). In the previous section, the message signal was multiplied with the carrier, modulating its amplitude in order to be transmitted (this is amplitude modulation). With FM, the frequency of the carrier is modulated (varied) as the message changes. FM is what is used to transmit the majority of radio broadcasts. It is generally preferred over AM because it is less sensitive to noise, and it is in fact possible to trade bandwidth for noise performance. The advantages of FM come at the expense of increased complexity in the transmitter and in the receiver. This having been said, we will see that a simple FM receiver is actually no more complicated (to build) than its AM counterpart.
Conceptually, FM is pretty simple. If we are sending a message m(t), we send a higher frequency wave when the amplitude of m(t) is high, and a lower frequency wave when the amplitude is low. The example below shows a square wave message, and the corresponding FM signal. The message here is called m_b(t). The b stands for binary - the example signal happens to only have two values - and differentiates this example from the case of a general message, which is discussed shortly.
Figure 70: A square wave message m_b(t) and the corresponding FM signal s_FM(t): where the message amplitude is high (+1) the modulated wave has a high frequency, and where it is low (-1) the frequency is low.

We can see that when the message m_b(t) has a high (+1) amplitude, the frequency of the modulated signal s_FM(t) is high. When m_b(t) goes low (-1), the modulated signal frequency is reduced. Mathematically, we could describe this as

s_{FM}(t) = A\cos(2\pi [f_c + k_f m_b(t)]\, t).   (38)

The A in front of this equation is just a constant which tells us the signal amplitude. The constant k_f is called the frequency sensitivity. It tells us how much the signal frequency changes as the message changes. From both the equation and Fig. 70, we can see that the effective frequency of the cosine wave s_FM(t) is f_w(t) = f_c + k_f m_b(t). This translates into

f_w = f_c + k_f,   m_b = +1   (higher frequency)
f_w = f_c - k_f,   m_b = -1   (lower frequency).
What about an arbitrary message m(t)? We want the frequency of our cosine wave to be

f_w(t) = f_c + k_f m(t),   (39)

where f_c is the carrier frequency and k_f m(t) is the change in frequency due to the message.

The next page or so of math culminates in Eq. 43 which tells us how to get the FM signal.
Intuitively, we might think that the FM signal would be determined using Eq. 38. This is
in fact incorrect. To get this right, we need to step back and think about the meaning of
frequency and phase. Let's start with an analogy from linear motion. If we define position x(t) and velocity v(t), we know the relationship between them is

v(t) = \frac{d}{dt} x(t).   (40)

That is to say that velocity is the time rate of change of position. To obtain position from velocity, we integrate:

x(t) = \int v(t)\, dt.   (41)

The angular counterparts of position and velocity are phase \theta and angular frequency \omega = 2\pi f. These are related in the same way. Using f, which is relevant here, the equation corresponding to Eq. 41 is

\theta(t) = 2\pi \int f(t)\, dt.   (42)

Now a cosine wave can be defined generally as c(t) = \cos(\theta(t)). If we want a wave with a fixed frequency f_c, we can calculate

\theta(t) = 2\pi \int_0^t f_c\, dt = 2\pi f_c t,
which gives the usual result c(t) = \cos(2\pi f_c t). If we want a frequency that varies with time, as in Eq. 39, we integrate

\theta_{FM}(t) = 2\pi \int_0^t f_w(t)\, dt
             = 2\pi \int_0^t f_c\, dt + 2\pi \int_0^t k_f m(t)\, dt
             = 2\pi f_c t + 2\pi k_f \int_0^t m(t)\, dt.


The FM signal is s_{FM}(t) = A\cos(\theta_{FM}(t)), and substituting the equation above gives

s_{FM}(t) = A\cos\left(2\pi f_c t + 2\pi k_f \int_0^t m(t)\, dt\right).   (43)

Equation 43 correctly indicates that the frequency (derivative of phase) of the signal is
proportional to the message m(t).
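To make Eq. 43 concrete, here is a minimal numerical sketch (not from the text) of an FM modulator: it approximates the running integral of the message with a cumulative sum and feeds the result into a cosine, exactly as Eq. 43 prescribes. The sample rate, carrier frequency, frequency sensitivity, and test message below are illustrative assumptions.

```python
import numpy as np

def fm_modulate(m, fs, fc, kf, A=1.0):
    """FM per Eq. 43: s(t) = A*cos(2*pi*fc*t + 2*pi*kf * integral of m(t))."""
    t = np.arange(len(m)) / fs
    integral_m = np.cumsum(m) / fs          # running integral of the message
    return A * np.cos(2 * np.pi * fc * t + 2 * np.pi * kf * integral_m)

# Example usage with made-up parameters: a 100 Hz tone on a 7 kHz carrier.
fs = 100_000                                # sample rate (Hz)
t = np.arange(0, 0.05, 1 / fs)
m = np.cos(2 * np.pi * 100 * t)             # message
s = fm_modulate(m, fs, fc=7_000, kf=500)    # FM signal
```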

3.1 Generating an FM signal

With a mathematical description of the FM signal now in hand, we must determine how to build a circuit that will take m(t) as an input and give us s_FM(t). This is of course a modulator. The way these modulators work is interesting enough that it is worth discussing a couple of them.
One option is to bypass altogether the previous discussion about integration and directly control the resonant frequency of a circuit using m(t). This can be accomplished using an LC resonant circuit incorporating a voltage-variable capacitance diode, also called a varicap. Cosine waves are typically generated by an oscillator with a circuit such as that in Fig. 71 governing the oscillation frequency. Look up Hartley or Armstrong oscillators to get a sense of how these work. The capacitor, which would be more familiar in such a circuit, is replaced with a variable capacitance diode. The diode capacitance changes depending on the voltage across it. The message signal is presented across the capacitor, and the circuit is set up (biased) so that the total capacitance C_T is equal to
C_T = C_0 + k_c m(t).

The resonance frequency of the LC circuit is f_0 = 1/(2\pi\sqrt{L C_T}), which will be the output frequency of the oscillator.
In this example, the output frequency changes in a nonlinear fashion with the message; however, for small changes in C_T, we could show (using a series expansion) that the change in output frequency is roughly proportional to m(t), that is

f_0 \approx \frac{1}{2\pi\sqrt{L C_0}} + k_f m(t).

This is of course Eq. 39, with the carrier frequency equal to 1/(2\pi\sqrt{L C_0}) and the frequency sensitivity k_f determined by the factor k_c and the capacitances involved.
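As an aside, the series expansion mentioned above is short enough to sketch here (these intermediate steps are my own, not from the text), assuming that k_c m(t) is much smaller than C_0:

f_0 = \frac{1}{2\pi\sqrt{L(C_0 + k_c m(t))}}
    = \frac{1}{2\pi\sqrt{L C_0}}\left(1 + \frac{k_c m(t)}{C_0}\right)^{-1/2}
    \approx \frac{1}{2\pi\sqrt{L C_0}} - \frac{k_c}{4\pi C_0 \sqrt{L C_0}}\, m(t),

so the frequency shift is indeed roughly proportional to m(t), with the magnitude of k_f set by k_c, L, and C_0 (the sign can be absorbed into the definition of k_c or k_f, since an increase in capacitance lowers the resonant frequency).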
The second, less direct method of modulation uses an integrator followed by a phase
modulator. In the previous example, the frequency of oscillation was determined by m(t)
which implicitly took care of the integration in the signal equation 43. The integrated
Figure 71: Tank circuit of a voltage-controlled oscillator. The capacitance C_T depends on the message signal m(t). This capacitance in turn governs the oscillation frequency of the circuit. This LC circuit has a parallel impedance Z_in and would be connected to an appropriate oscillator circuit.

message represents the phase of s_FM, so we must come up with a circuit that can modify the phase of a wave based on \int m(t)\, dt. As you may recall, integration can be accomplished
with an operational amplifier (OP-AMP) as shown in Fig. 72. Briefly, we know that the

Figure 72: Op-amp integrator circuit, with input m(t), input resistor R, feedback capacitor C, current i_x(t), and output v_0(t).

input terminals of the OP-AMP are at ground potential, and we know the currents flowing through the resistor and capacitor must be one and the same (equal). This gives us

i_x(t) = \frac{m(t)}{R} = -C \frac{d}{dt} v_0(t).

Solving for the output voltage gives the expected integral

v_0(t) = -\frac{1}{RC} \int_0^t m(t)\, dt   (44)

and puts us one step closer to frequency modulation. The next step is to use this integrated
signal to control the phase of a cosine wave.
A phase modulator circuit appears in Fig. 73. Similar to modulators we have seen previously, this circuit has two inputs: the carrier, c(t) = A\cos(2\pi f_c t), and the integrated message signal, provided by the integrator discussed above. Phase shifting a cosine wave means the addition of a phase angle:

Shifted Cosine = A\cos(2\pi f_c t + \theta_s(t)).

Our goal is to make \theta_s(t) proportional to \int_0^t m(t)\, dt so that we wind up with Eq. 39.


Figure 73: Phase modulator. The carrier A\cos(2\pi f_c t) is passed through a tuned circuit whose capacitance C_T is controlled by the integrated message \int_0^t m(t)\, dt; the output is s_FM(t).

The circuit in Fig. 73 is a bandpass filter with a center frequency of f_0 = 1/(2\pi\sqrt{L C_T}). The transfer function of this filter is shown in Fig. 74. Assume that the filter is designed so
Figure 74: Magnitude |H(f)| and phase \angle H(f) of the bandpass filter. The magnitude is 1 at the center frequency f_0, and the phase passes through zero at f_0, approaching +90 and -90 degrees away from it; dashed lines indicate the linear approximation near f_0.
that when no voltage is applied to the varicap (when \int m(t)\, dt = 0), the resonant frequency is equal to the carrier frequency, f_0 = f_c. When the integrated message signal is applied, the capacitance C_T will increase or decrease, causing a shift in f_0. We can express this as

\Delta f = f_0 - f_c = k_1 \int_0^t m(t)\, dt.

For small deviations from f_0, the filter has a response that can be approximated by the dashed lines in Fig. 74. The gain is close to 1, and the phase varies linearly. When \Delta f = 0, the carrier passes through the filter without any change in amplitude or phase since the phase

shift is zero at the center of the filter. A non-zero frequency difference, though, means that the carrier is no longer passing through the center of the filter. For small \Delta f, this has little effect on the carrier amplitude since we approximate the filter gain as 1. However, the phase of the carrier will be shifted by k_2 \Delta f, with k_2 being the slope of the linear approximation of the phase response.
The output signal will therefore be

s_FM(t) = A\cos(2\pi f_c t + k_2 \Delta f)
        = A\cos\left(2\pi f_c t + k_1 k_2 \int_0^t m(t)\, dt\right).

Again, we have arrived at the correct frequency modulated signal in Eq. 43.
This second method of modulation is often preferred. A precision oscillator, such as a crystal, can be employed to generate the carrier. This contrasts with the direct modulation
method wherein a tuned circuit is used to control the oscillator frequency. Such a tuned
circuit leaves the modulator more susceptible to frequency drift, due to temperature changes
in the circuit.

3.2 Demodulating FM

The goal of course is to extract m(t) back from s(t), an FM signal picked up, for example, by an antenna. Assuming the received signal has been filtered, it will contain only s(t). The FM wave was generated by integrating the message signal, so it is natural that differentiation is used in demodulation. The way that this works can be interpreted in both the time and frequency domains, and we will discuss both.
Start with the formula for an FM signal, a repeat of Eq. 43:

s(t) = A\cos\left(2\pi f_c t + 2\pi k_f \int_0^t m(t)\, dt\right).

The time derivative of this waveform is

\frac{d}{dt} s(t) = -2\pi A\,(f_c + k_f m(t))\,\sin\!\left(2\pi f_c t + 2\pi k_f \int_0^t m(t)\, dt\right)
                 = -2\pi A f_c \left(1 + \frac{k_f}{f_c} m(t)\right) \sin\!\left(2\pi f_c t + 2\pi k_f \int_0^t m(t)\, dt\right).   (45)

In taking the derivative, the chain rule and the fact that \frac{d}{dt}\int m(t)\, dt = m(t) have been
employed. Look at Eq. 36, which defines an amplitude modulated signal. You should be
able to see that Eq. 45 above looks very similar: the message signal, plus a constant, is
multiplied by a sine wave. The only differences are some scaling constants, the choice of a

sine instead of a cosine carrier, and the frequency variation of the sine wave. Like with AM, the envelope of this derivative signal now contains the original message and can be recovered with an envelope detector, leaving the recovered signal

\tilde{m}(t) = 2\pi A f_c \left(1 + \frac{k_f}{f_c} m(t)\right).   (46)

Recovery only depends on the envelope term \frac{k_f}{f_c} m(t) never falling below -1, as in the AM case.
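As a rough numerical illustration of this differentiate-then-envelope-detect idea (an illustrative sketch, not a circuit from the text), the code below approximates the derivative with a finite difference and the envelope detector with rectification followed by a simple first-order lowpass filter. All parameter values are assumptions chosen for demonstration.

```python
import numpy as np

fs = 200_000                       # sample rate (Hz), illustrative
t = np.arange(0, 0.02, 1 / fs)
fc, kf = 10_000, 2_000             # carrier and frequency sensitivity, illustrative
m = np.sin(2 * np.pi * 300 * t)    # example message tone

# FM modulation (Eq. 43), integrating the message numerically
s = np.cos(2 * np.pi * fc * t + 2 * np.pi * kf * np.cumsum(m) / fs)

# Differentiate (finite difference), then envelope-detect
ds = np.gradient(s, 1 / fs)        # d/dt s(t); amplitude tracks (fc + kf*m) per Eq. 45
rectified = np.abs(ds)             # rectifier (full-wave here for simplicity)

# First-order RC lowpass as the envelope filter (cutoff ~1 kHz, illustrative)
alpha = 1 / (1 + fs / (2 * np.pi * 1_000))
env = np.zeros_like(rectified)
for n in range(1, len(rectified)):
    env[n] = env[n - 1] + alpha * (rectified[n] - env[n - 1])

# env follows 2*pi*A*fc*(1 + (kf/fc)*m(t)) up to a scale factor;
# removing its mean leaves a scaled copy of m(t).
m_rec = env - env.mean()
```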
Figure 75 shows a block diagram representation of the demodulation procedure. The envelope detector consists of a diode rectifier followed by an RC lowpass filter, as in Fig. 50. With this block taken care of, it is the differentiator that presents a difficulty.
Figure 75: Block diagram of FM demodulation: the received signal s(t) is differentiated (d/dt) to give \tilde{s}(t), which is then passed through an envelope detector to recover \tilde{m}(t).

A differentiator can be made using an inverting op-amp as in Fig. 72 with the resistor and
capacitor interchanged. This is not usually done in practice, as the high frequency operation
of such a circuit is not reliable. To implement a differentiator, we switch to analyzing the
demodulation process in the frequency domain.
The idea behind FM is that a larger m(t) results in a higher frequency, and a smaller
message a lower frequency. This was first indicated in Fig. 70. What we are effectively
doing is converting voltage to frequency. Consequently, we will recover the voltage (m(t)) by finding a way to convert a higher frequency back to a higher voltage and a lower frequency back to a lower voltage.
Think about the filter whose transfer function is shown in Fig. 76. The magnitude of the transfer function varies linearly with frequency over the range f_c - W to f_c + W and has a slope of p. Imagine now putting a wave with a frequency f_l < f_c into the filter, as in the upper part of Fig. 77. The input to the filter has a lower frequency, so it will be scaled by a value somewhere towards the left hand side of the transfer function. The result is a comparatively lower output amplitude. On the other hand, if we put a higher frequency wave, of frequency f_h, into the filter, the transfer function shows that the output will be comparatively higher. This filter does exactly what we have set out to do: the amplitude of the output is proportional to the frequency of the input. This may be easier to see with the third example in Fig. 78. The wave going into the filter has a frequency that varies with time. When the frequency is higher, the filter lets more of the wave through, and when it is lower, less gets through. The output of the filter now has an amplitude which follows the frequency variations in the

Figure 76: The desired transfer function |H(f)|: it rises linearly with slope p between f_c - W and f_c + W, with a height of pW marked, and with a lower input frequency f_l and a higher input frequency f_h indicated within this range.

Figure 77: Inputs and outputs of the filter H(f): a low frequency cosine \cos(2\pi f_l t) produces a smaller output amplitude, while a higher frequency cosine \cos(2\pi f_h t) produces a larger one.

input wave. We have just created a filter which acts as a frequency to voltage converter
which will be able to undo frequency modulation and recover m(t). Of course, the output is
still a sine wave. Only its amplitude is proportional to the input frequency. We recover the
message signal now by running the filter output through an amplitude detector. Looking
Figure 78: A wave whose frequency varies with time passed through the filter H(f); the output amplitude follows the input frequency variations.

back at the block diagram for FM demodulation (Fig. 75), you can safely conclude that the
filter we have just described acts as the differentiator.
Now that we know the transfer function that will differentiate our signal, we need to look
at how to implement such a filter. The answer is actually in the bandpass filter response in
Fig. 73. We will reproduce it here. Instead of focusing on the center of the filter response, we shift our attention to the transition band. As highlighted in Fig. 79, you can see that there is a region where the rolloff of the bandpass filter varies almost linearly with frequency. In this region at least, the response looks like the ideal in Fig. 76. We know that our FM signal will

Figure 79: Bandpass filter magnitude response |H(f)|, with the nearly linear portion of the transition band centered at f_c highlighted.

have a frequency that varies around the carrier fc . If we design a bandpass filter, with the
linear region of the transition band centered at fc , we have a good chance of differentiating
the FM wave. Again, the operation of differentiation is realized by creating a filter which
converts frequency variations back to amplitude variations. The only condition is that the
frequency sensitivity kf is chosen so that the frequency of the FM signal does not go outside
the linear region of the filter. As you may recall, kf tells us how the frequency of the FM
signal changes in proportion to the message - look back at Eq. 43.

3.3 FM Example

Before embarking on a more mathematical description of how the differentiator works, let's pause and use what we already know to do a complete example of the FM modulation / demodulation process. The message m(t) will be the same as in Fig. 51: a sound recording of a male speaker saying "hello".
Fig. 80 (a) shows a short section of the speech waveform. This is the same section that
was examined in the AM case of Fig. 51. The difference here is that the signal has not been
scaled and shifted. Although the scale is different, you can look back at Fig. 51 (c) and see
that the envelope there is the same shape as the signal here.
To transmit the signal with FM, we begin by calculating k_f \int_0^t m(t)\, dt. This is plotted, over the same small section, in plot (b) of Fig. 80. The amplitude here is just over 200 Volts. This has occurred for two reasons: k_f was made large (300000/(2\pi)) to accentuate the modulation and make the FM waveform that we will see shortly easier to visualize, and the integrated signal has a large offset, the cumulative effect of integrating m(t) for the first (not
Figure 80: (a) A short section of the speech waveform m(t), in mV, over roughly 310 to 320 ms. (b) The scaled integral k_f \int_0^t m(t)\, dt over the same interval, in Volts (roughly 190 to 230 V).

shown) 310 ms of the waveform. You can look at plot (b) and see that it decreases when
m(t) is below zero and increases in the opposite case, hopefully convincing yourself that the
second plot does represent the integral of the first.
Having obtained the integral of the signal, we employ Eq. 43 to generate an appropriate
FM wave s(t). This is displayed in Fig. 81 (a). For clarity, the plot has been zoomed in
slightly, to span a total of 6 ms. We have used a carrier frequency fc =7000 Hz, and an
amplitude of 1. The points labelled (1,2,3) in the figure correspond to the signal minima

Figure 81: (a) The FM signal s(t) over roughly 310 to 316 ms (amplitude between -1 and 1 V), with points 1, 2, and 3 marked. (b) The magnitude spectrum of the FM signal, in mV, over 0 to 15 kHz.

(1,2,3) in m(t) identified in Fig. 80 (a). For the lower amplitude regions of m(t), the FM
wave has a lower frequency while it has a higher frequency for the higher amplitude points
in between. The amplitude of s(t) is fixed at 1, with all the message information contained
in the frequency variation.
Plot (b) in Fig. 81 shows the frequency domain representation S(f ) of the FM signal.
Only the upper half of the frequency spectrum is shown. We see a peak at 7 KHz which can
be expected. The carrier frequency is 7 KHz and the frequency of the transmitted signal
varies around this value according to the message. There are a number of differences between

this frequency spectrum and that of the AM signal seen earlier in Fig. 51. In that figure, M(f) and S(f) are both shown for the AM example, with S(f) essentially a shifted copy of M(f). The spectrum of the FM signal is not composed of shifted copies of the message spectrum. As we will see in section 3.5, the relationship between M(f) and S(f) for FM is actually much more complicated. Looking at S(f), there is no easy way of telling what the message spectrum might look like. Another important difference is that S(f) spans a greater frequency range than it did for AM. The way to think about this is that AM simply shifts the message over in the frequency domain, so its bandwidth must remain the same. In FM, the amount that the frequency varies really depends on the magnitude of k_f. The choice of this parameter is therefore what determines the width of the signal spectrum, rather than the message itself.
Figure 82: (a) Gain response of the differentiating (slope-detector) bandpass filter versus frequency (0 to 15 kHz). (b) Magnitude spectrum of the differentiated signal \tilde{S}(f) over the same range. (c) The differentiated signal \tilde{s}(t) in the time domain, roughly 310 to 320 ms (amplitude between about -1.5 and 1.5 V).

At the receiver, s(t) is demodulated with a differentiator and envelope detector. The
differentiator here has been realized as a series RLC bandpass filter. The filter is designed
to have a resonant frequency of 13 KHz and a bandwidth of 10 KHz, giving a response as
indicated in Fig. 82 (a). The resonant frequency and bandwidth were chosen so that the
filter response would increase almost linearly with frequency in the range around the carrier
at 7 KHz. You can see that the response shown in the plot is not entirely linear, but it will
in fact be good enough to recover our signal. The differentiated signal \tilde{s}(t) is obtained by passing s(t) through this filter. The corresponding spectrum \tilde{S}(f) is shown in Fig. 82 (b). The spectrum has now been emphasized at higher frequencies compared to S(f), but the resulting spectrum still does not look like M(f).

The effect of what we have done becomes clear in the time domain, with \tilde{s}(t) shown in plot (c). As in the previous discussion, the amplitude of the higher frequency sections of \tilde{s}(t) becomes higher, and that of the lower frequency sections lower. Looking at the envelope of \tilde{s}(t), you can now see that it follows the shape of m(t) from Fig. 80 (a).

Figure 83: (a) The rectified differentiator output in the time domain (roughly 310 to 320 ms). (b) Its frequency spectrum. (c) The recovered message spectrum after lowpass filtering. (d) The demodulated message in the time domain over the same interval.

We can now rectify and filter \tilde{s}(t) to get back m(t), just as was done for the AM signal. The steps are summarized in Fig. 83. In plot (a), the differentiator output has been rectified by a diode. The effect in the frequency domain is shown in plot (b). The spectrum is a mess. We can see that there is something that looks like S(f) at around 7000 Hz, and some high frequency interference has been added as well. The difference now, though, is at the center of the spectrum. Looking back to Fig. 52, this central portion may appear familiar: it is the spectrum of the original "hello" waveform we have been studying. The only major difference is the spike in the center due to the DC offset. We knew that this had to be the case - we could see m(t) in the envelope of \tilde{s}(t) - but it should be gratifying nonetheless. After filtering, we get the recovered M(f) by itself (plot (c)), and can see that it looks nearly identical to that in Fig. 51.
Returning to the time domain, plot (d) shows the demodulated signal over the short
time interval. Compared with the original in Fig. 80, we can see that the demodulated
signal is close but not quite identical. There are a number of reasons that this is the case: FM is nonlinear and by nature may allow corrupting influences to slightly modify
the signal being transmitted. The differentiator that we used to recover the message was
not perfectly linear; this may explain why the recovered signal is most corrupted at low and
high amplitudes (corresponding to frequencies in s(t) furthest from the central linear region
of the differentiator). The larger bandwidth of s(t) may have encroached into that of m(t)
during demodulation (see Fig. 83 (b)). Regardless, the signal has been recovered, and when
played over speakers in fact sounds clearer than the original, due to the filtering away of
high frequency noise.
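A compact numerical sketch of this example's receiver chain is given below. The carrier frequency (7 kHz), frequency sensitivity (300000/(2\pi)), and slope-detector design (series RLC bandpass, 13 kHz resonance, 10 kHz bandwidth) are taken from the text; the stand-in message, sample rate, filter implementation, and envelope-detector cutoff are my own illustrative assumptions.

```python
import numpy as np

fs = 200_000                              # sample rate (Hz), an assumed value
t = np.arange(0, 0.05, 1 / fs)
m = 0.05 * np.cos(2 * np.pi * 440 * t)    # stand-in for the speech message

# Modulate with the parameters quoted in the example text
fc = 7_000
kf = 300_000 / (2 * np.pi)
s = np.cos(2 * np.pi * fc * t + 2 * np.pi * kf * np.cumsum(m) / fs)

# Slope detector: series RLC bandpass (resonance 13 kHz, bandwidth 10 kHz),
# applied here in the frequency domain via its transfer function
f0, B = 13_000, 10_000
Q = f0 / B
S = np.fft.rfft(s)
f = np.fft.rfftfreq(len(s), 1 / fs)
fsafe = np.where(f == 0, 1.0, f)          # avoid divide-by-zero at DC
H = 1 / (1 + 1j * Q * (fsafe / f0 - f0 / fsafe))
H[0] = 0.0                                # a bandpass filter blocks DC
s_tilde = np.fft.irfft(H * S, n=len(s))

# Envelope detector: half-wave rectify, then first-order lowpass (~3 kHz cutoff)
rect = np.maximum(s_tilde, 0.0)
alpha = 1 / (1 + fs / (2 * np.pi * 3_000))
env = np.zeros_like(rect)
for n in range(1, len(rect)):
    env[n] = env[n - 1] + alpha * (rect[n] - env[n - 1])
m_rec = env - env.mean()                  # scaled, slightly distorted copy of m(t)
```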

3.4 The Differentiator

We have seen that a differentiator can be implemented over a narrow frequency range using
the transition band of a bandpass filter. This is sufficient to understand how an FM demodulator works; however, a more mathematical discussion of how the differentiator works gives
some additional insight into the process. Specifically, we will show why the filter works, and
develop an equation which tells us the amplitude of \tilde{s}(t).
We will start by determining the frequency response of an ideal differentiator. Suppose
we have a signal g(t) with a frequency spectrum G(f ). Since the former is the inverse Fourier
transform of the latter, we can write
g(t) = \int G(f)\, e^{j 2\pi f t}\, df   (47)

which is Eq. 21. We take the time derivative of both sides of the equation by putting a d/dt in front of each:

\frac{d}{dt} g(t) = \frac{d}{dt} \int G(f)\, e^{j 2\pi f t}\, df.   (48)

The integral is with respect to frequency, not time, so the derivative can be moved inside without changing the meaning. The term G(f) appears as a constant to the differentiation, so we get

\frac{dg(t)}{dt} = \int G(f)\, \frac{d}{dt}\left[e^{j 2\pi f t}\right] df
                = \int \left[j 2\pi f\, G(f)\right] e^{j 2\pi f t}\, df.

By moving the differentiation inside of the equation, we ended up just taking the derivative of e^{j 2\pi f t}. Grouping terms as in the second line of the equation, it is apparent that it is now the expression in the square brackets which is being inverse Fourier transformed. In words, this says the inverse Fourier transform of j 2\pi f\, G(f) is \frac{d}{dt} g(t). This can be expressed as the time differentiation property of Fourier transforms:

\frac{dg(t)}{dt} \;\rightleftharpoons\; j 2\pi f\, G(f);   (49)

taking the derivative in the time domain is the same as multiplying by j 2\pi f in the frequency domain. Thus, a filter with a transfer function of H(f) = j 2\pi f will be a differentiator. The transfer function of such a filter is plotted in Fig. 84. The j 2\pi is just a scaling constant, so
Figure 84: Transfer function H(f) = j 2\pi f of an ideal differentiator: a straight line through the origin with slope j 2\pi.

that any linear function of f , like the filter we implemented in FM demodulation, will serve
the same purpose.
A simple way to see why this works is to look at the case of a sine wave with a frequency of f_0. The derivative of this wave is

\frac{d}{dt} \sin(2\pi f_0 t) = 2\pi f_0 \cos(2\pi f_0 t),

where the factor 2\pi f_0 makes the derivative proportional to frequency, and the change from a sine to a cosine is a 90 degree phase shift. The j that shows up in Eq. 49 is reflected in that change from a sine to a cosine. Since any signal can be split into sines and cosines (which is what a FT does), we can see how the time differentiation property comes about.
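The time differentiation property in Eq. 49 is easy to check numerically; the sketch below (illustrative, not from the text) compares the FFT of a numerically differentiated signal with j 2\pi f times the FFT of the original.

```python
import numpy as np

fs, N = 10_000, 4096
t = np.arange(N) / fs
g = np.exp(-((t - 0.2) ** 2) / 1e-4) * np.cos(2 * np.pi * 500 * t)  # smooth test signal

G = np.fft.rfft(g)
f = np.fft.rfftfreq(N, 1 / fs)

dg_numeric = np.gradient(g, 1 / fs)        # time-domain derivative (finite difference)
DG_numeric = np.fft.rfft(dg_numeric)
DG_property = 1j * 2 * np.pi * f * G       # Eq. 49: multiply spectrum by j*2*pi*f

band = slice(0, N // 8)                    # frequencies where the signal has energy
rel_err = np.max(np.abs(DG_numeric - DG_property)[band]) / np.max(np.abs(DG_property))
print(rel_err)   # small; limited only by the finite-difference approximation
```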
We can now use the time differentiation property to examine how the bandpass filter demodulates the FM signal. Assume that the signal bandwidth is 2W, and that the filter response looks like the plot on the left in Fig. 85. It is linear from f_c - W to f_c + W, and has a slope of j 2\pi. This filter differs from the ideal differentiator as it is shifted from f = 0: the ideal differentiator response crosses zero at f = 0 but this has been shifted over by f_c - W. It will actually be easier if we assume the filter has been shifted over by f_c and shifted up as well, so we break the filter response down into a flat part H_1(f) and a sloped part H_2(f) as in the figure.
We assume that the FM signal going into the filter is defined as usual by

s(t) = A\cos\left(2\pi f_c t + 2\pi k_f \int_0^t m(t)\, dt\right).   (50)

Figure 85: The demodulating bandpass filter response H(f) (left), linear with slope j 2\pi between f_c - W and f_c + W and equal to j 2\pi W at f_c, decomposed (right) into a flat part H_1(f) = j 2\pi W and a sloped part H_2(f) with slope j 2\pi that crosses zero at f_c.

When s(t) goes through H_1(f), it is simply scaled by a factor of j 2\pi W. The j effectively turns the cosine into a negative sine, so we can write the output of this filter as

s_1(t) = -2\pi A W \sin\left(2\pi f_c t + 2\pi k_f \int_0^t m(t)\, dt\right).

The situation is trickier for H_2(f). We know that if we shift H_2(f) down by f_c, we would have an ideal differentiator. We can use this to determine the output of this filter by shifting s(t) down in frequency by f_c, differentiating it, and shifting it back up. Shifting the signal down is essentially accomplished by dropping the 2\pi f_c t term inside of the cosine function: we are effectively only differentiating the baseband part of the signal. Think of this as applying the chain rule to differentiate but not including the 2\pi f_c t term when taking the derivative of the inside part. The output of the filter is

s_2(t) = -2\pi A k_f m(t) \sin\left(2\pi f_c t + 2\pi k_f \int_0^t m(t)\, dt\right).

Putting this together, we determine the combined differentiator output to be

\tilde{s}(t) = -2\pi A W \left[1 + \frac{k_f}{W} m(t)\right] \sin\left(2\pi f_c t + 2\pi k_f \int_0^t m(t)\, dt\right).   (51)

This is important because the term in the square brackets must always be larger than zero
for diode detection to work, just like in AM. As you can see, this actually depends on the
width of the modulated signal (or the slope of the differentiator response), giving us another
thing to think about when designing an FM system.


3.5 FM Mathematics

Although it required more thought in the derivation, we can see that in their simplest form,
FM modulation and demodulation are barely more complex than for AM. We can modulate
with the addition of a filter network, and demodulate with a similar addition. Despite
this simple description, the nonlinearity of the modulation Equation 43 makes FM more
challenging to analyze mathematically.
The goal of our mathematical analysis of FM is to determine the bandwidth of the
modulated signal. As with AM, it will be necessary to understand this bandwidth and the
factors that affect it if we wish to share a channel amongst messages modulated at different
carrier frequencies. With AM, the signal bandwidth was simply that of the message (or half
that for SSB). We have already observed that this is not the case for FM. The bandwidth
will depend in some way on k_f, with the modulated signal spectrum bearing no resemblance
to that of the message.
We know that the general form of the FM signal is

s(t) = A\cos\left(2\pi f_c t + 2\pi k_f \int_0^t m(t)\, dt\right).   (52)

To keep things compact, we define the excess phase

\theta(t) = 2\pi k_f \int_0^t m(t)\, dt   (53)

so that

s(t) = A\cos(2\pi f_c t + \theta(t)).   (54)

Now, we use the trigonometric identity

\cos(a + b) = \cos a \cos b - \sin a \sin b   (55)

to rewrite Eq. 52 as

s(t) = A\left[\cos(\theta(t)) \cos(2\pi f_c t) - \sin(\theta(t)) \sin(2\pi f_c t)\right].   (56)

We first assume that the modulation is narrow band, meaning that at all times, |\theta(t)| \ll 1. Remember, this angle is proportional to the frequency sensitivity k_f, which tells us how much the signal frequency changes as the message changes. The narrow band condition essentially means that the frequency of s(t) does not vary much from f_c. In practice, this approximation is valid for about \max |\theta(t)| < 0.3.


Under this narrow band condition, the small value of \theta(t) lets us approximate \cos(\theta(t)) \approx 1 and \sin(\theta(t)) \approx \theta(t) to give

s(t) \approx A\cos(2\pi f_c t) - A\,\theta(t) \sin(2\pi f_c t).   (57)

Looking at this equation, we can see that what has happened is actually a kind of amplitude modulation. Compare Eq. 57 to Eq. 36. The first term in Eq. 57 is the carrier, a cosine at frequency f_c. The second term consists of \theta(t), modulated with a sine wave at the carrier frequency. Remember that \theta(t) = 2\pi k_f \int m(t)\, dt. We can think of \theta(t) as the new message which is just being amplitude modulated. The situation in the frequency domain
Figure 86: Narrow band FM in the frequency domain. Left: the spectrum \Theta(f) of the excess phase, with value \Theta(0) at f = 0. Right: the spectrum S(f) of the narrow band FM signal, with carrier impulses of weight 1/2 at \pm f_c, a copy \frac{1}{2j}\Theta(f + f_c) at -f_c, and a copy -\frac{1}{2j}\Theta(f - f_c) at +f_c.

is explained in Fig. 86. The angle \theta(t) has a frequency representation \Theta(f), given by its Fourier transform. This is related to the message by

\Theta(f) = \mathcal{F}\left\{2\pi k_f \int_0^t m(t)\, dt\right\}

so it can be calculated given a known message. Upon modulation, the sine term in Eq. 57 splits the spectrum and shifts it left and right. The cosine term, the carrier, results in spikes at \pm f_c. Notice that the copies of the spectrum \Theta(f) have opposite signs and are multiplied by a factor of 1/(2j) because of the decomposition of a sine function into complex exponentials (responsible for the frequency shifting):
\sin(2\pi f_c t) = \frac{e^{j 2\pi f_c t} - e^{-j 2\pi f_c t}}{2j}.

Comparing what has happened to our previous studies of AM, we conclude that in the
narrow band case, FM is really akin to AM, with the integrated message modulated with a
sine instead of a cosine.
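A quick numerical check of the narrow band approximation (an illustrative sketch with assumed parameters, not an example from the text): for a small modulation index, Eq. 57 reproduces the exact FM signal of Eq. 54 to within an error of order \theta^2/2.

```python
import numpy as np

fs = 100_000
t = np.arange(0, 0.01, 1 / fs)
fc = 10_000
m = np.cos(2 * np.pi * 500 * t)              # single tone message (illustrative)
kf = 50                                      # small kf so that max|theta| << 1

theta = 2 * np.pi * kf * np.cumsum(m) / fs   # excess phase, Eq. 53
print(np.max(np.abs(theta)))                 # about 0.1 here, within the narrow band regime

s_exact = np.cos(2 * np.pi * fc * t + theta)                              # Eq. 54
s_nbfm = np.cos(2 * np.pi * fc * t) - theta * np.sin(2 * np.pi * fc * t)  # Eq. 57

print(np.max(np.abs(s_exact - s_nbfm)))      # small, of order theta^2 / 2
```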

The above analysis was possible because we linearized the equation for s(t). That is, we used the assumption that \theta(t) was small to represent the nonlinear functions \cos(\theta(t)) and \sin(\theta(t)) with the linear functions 1 and \theta(t), respectively. If \theta(t) is large, this linearization is not possible.
What we want to do is come up with a mathematical way of relating m(t) to the final FM bandwidth. Let's start again by looking at the FM signal represented by Eq. 56. What we really have is two functions, \cos(\theta(t)) and \sin(\theta(t)), amplitude modulated with a cosine and a sine carrier respectively. Each of these functions can be interpreted as a message being amplitude modulated. We can make this clear by rewriting

s(t) = A\left[\cos(\theta(t)) \cos(2\pi f_c t) - \sin(\theta(t)) \sin(2\pi f_c t)\right]
     = A\, m_1(t) \cos(2\pi f_c t) - A\, m_2(t) \sin(2\pi f_c t).   (58)

We have now defined two new message functions

m_1(t) = \cos(\theta(t)) = \cos\!\left(2\pi k_f \int_0^t m(t)\, dt\right)   (59)
m_2(t) = \sin(\theta(t)) = \sin\!\left(2\pi k_f \int_0^t m(t)\, dt\right)   (60)

and we can see from Eq. 58 that these new messages are being amplitude modulated, the first with a cosine carrier and the second with a sine carrier. We already know how to
analyze the bandwidth of this amplitude modulation and so have all the steps necessary to
go from the original message m(t) to the FM waveform. Specifically, we start with m(t)
to determine the new functions m1 (t) and m2 (t). These will have frequency spectra M1 (f )
and M2 (f ) with a defined bandwidth. We then analyze these as if they are being amplitude
modulated - each is split and shifted up and down by fc to arrive at the frequency spectrum
of our FM signal.
Let's look at a specific example of this procedure, and use it to develop a general way of estimating the bandwidth of an FM signal. Consider a message m(t) that consists of a lone cosine wave. This is called a single tone signal since a cosine wave represents the waveform of a pure sound tone (like the whistle in Fig. 5). This message is

m(t) = a \cos(2\pi f_0 t).   (61)

We want the instantaneous frequency to be f_i = f_c + k_f m(t), and so the corresponding excess phase is

\theta(t) = 2\pi k_f \int m(t)\, dt
         = 2\pi k_f \int a \cos(2\pi f_0 t)\, dt
         = 2\pi k_f \frac{a}{2\pi f_0} \sin(2\pi f_0 t)
         = \frac{k_f a}{f_0} \sin(2\pi f_0 t).   (62)

For single tone FM, we know that the largest value of m(t) will be the amplitude a, so the largest instantaneous frequency change will be \Delta f = k_f a, which is called the frequency deviation. We will see later that the impact of this frequency deviation on the signal bandwidth depends on the tone frequency f_0. For this reason, we define the modulation index

\beta = \frac{\Delta f}{f_0} = \frac{k_f a}{f_0} = \frac{\text{frequency deviation of tone}}{\text{frequency of tone}}   (63)

so the FM signal is

s(t) = A\cos(2\pi f_c t + \beta \sin(2\pi f_0 t)).   (64)

Figure 87: One period (of length 1/f_0) of the single tone message m(t), the resulting FM signal s(t), and the excess phase \theta(t), a sine wave of amplitude \beta.

One cycle of m(t) is shown in Fig. 87. The length of one period is T = 1/f0 as indicated.
Also shown is the final FM signal s(t). You can see that it is a wave whose frequency is lower
in the middle of the plot (where m(t) has a lower amplitude) and higher at the sides. In this
case, the carrier frequency is f_c = 20 f_0 and \beta = 5. The excess phase \theta(t) is also shown. As we calculated, it is a sine wave with an amplitude equal to the modulation index.


Figure 88: The derived messages m_1(t) and m_2(t), each plotted over one period 1/f_0 of the original message.

To calculate the frequency spectrum of s(t), we first use Eqs. 59 and 60 to determine m_1(t) = \cos(\beta \sin(2\pi f_0 t)) and m_2(t) = \sin(\beta \sin(2\pi f_0 t)). Each of these message functions is plotted in Fig. 88, over one period of m(t). Remember that m(t) is assumed to be a cosine wave going on forever, so m_1(t) and m_2(t) are also infinite periodic functions.
The new messages m1 (t) and m2 (t) are now amplitude modulated according to Eq. 58 to
get s(t). This is best analyzed in the frequency domain. We start by determining the spectra
M1 (f ) and M2 (f ) which will be split and shifted up and down during the modulation. The
time domain functions are nonlinear - calculating their Fourier transforms is possible but
mathematically complex. Instead, we will evaluate these spectra using a computer (FFT of
a ten period long signal) which gives the approximate answer. We will use this result for our
ultimate goal of deriving an expression for S(f ). The spectra M1 (f ) and M2 (f ) which are
the Fourier transforms of m1 (t) and m2 (t) are shown in Fig. 89. Only the magnitudes are
shown. Each spectrum is composed of a series of peaks. This makes sense because we know
that m1 (t) and m2 (t) are periodic waveforms. The peak separation in each case is 2f0 , and
we can see that M1 (f ) has peaks at frequencies which are even multiples of f0 while M2 (f )
only has peaks at the odd multiples.
To get the spectrum of s(t), we must take each of the spectra in Fig. 89 and split it and shift it up and down by f_c. Again, this comes from the interpretation of FM as amplitude
modulation of the messages m1 (t) and m2 (t) as in Eq. 58. To verify this, we use a computer
to determine the FT of s(t) which was plotted in Fig. 87. The resulting spectrum S(f ) is
plotted in Fig. 90. As we expect, the spectrum is centered around fc . If we look closely

at one side of the spectrum, as in the plot on the right hand side, we can see that it consists
of a combination of the peaks present in M1 (f ) and M2 (f ). Since these peaks occurred at
alternating frequencies in the baseband signals, they do not interfere with each other after
modulation. The peaks in S(f ) are now each separated by f0 Hz. From this plot, we can
determine the bandwidth of the modulated spectrum. If we consider the few smaller peaks

Figure 89: Magnitude spectra |M_1(f)| and |M_2(f)|. The peaks of M_1(f) lie at even multiples of f_0 (labeled out to \pm 8 f_0), while those of M_2(f) lie at odd multiples (labeled out to \pm 5 f_0).

Figure 90: The FM signal spectrum S(f), centered at \pm f_c (left), and a zoomed view of the positive-frequency side (right), spanning roughly f_c - 6 f_0 to f_c + 6 f_0 with peaks spaced f_0 apart.

at the sides to be insignificant, this bandwidth is about 12f0 .


We now know the FM spectrum and approximate bandwidth for a single tone message.
However, we needed to use a computer to calculate the spacing and magnitude of the peaks
in the spectrum. Understanding how we got this spectrum allows us to proceed to the usual
mathematical analysis of single tone FM. Rather than calculate and analyze m1 (t) and m2 (t),
we can actually start with

s(t) = A\cos(2\pi f_c t + \beta \sin(2\pi f_0 t))

and use a variation of a mathematical technique called the Jacobi-Anger expansion to rewrite it as

s(t) = A \sum_{n=-\infty}^{\infty} J_n(\beta) \cos(2\pi (f_c + n f_0) t).   (65)

This equation may look difficult to understand but we can break it down as follows: we know that \cos(2\pi f_i t) will have a frequency domain representation consisting of peaks at f = \pm f_i, each with an amplitude of 1/2. Since this is symmetrical for positive and negative frequencies, let's only worry about the positive frequencies.



The \cos(2\pi (f_c + n f_0) t) term in Eq. 65 has f_i = f_c + n f_0, so in the positive frequencies, this means peaks at

f_c + n f_0, \quad n = \ldots, -2, -1, 0, 1, 2, \ldots

As shown in Fig. 91, this means a peak at the carrier f_c, and another peak every f_0 Hz in both directions. The J_n(\beta) term in Eq. 65 tells us how high each of these peaks will be in
Figure 91: Peaks of height 1/2 at the carrier f_c and at every multiple of f_0 on either side (f_c \pm f_0, f_c \pm 2 f_0, \ldots), before weighting by J_n(\beta).

the spectrum of s(t). The function J_n(\beta) is called a Bessel function. Since \beta is fixed for a given modulator and message, this is really just a function of n. If you look back at Eq. 15, which describes a Fourier cosine series, you can see that J_n(\beta) acts in the same way that a_n does there. In fact, we can make Eq. 65 easier to look at by re-writing it as

s(t) = A \sum_{n=-\infty}^{\infty} c_n \cos(2\pi (f_c + n f_0) t).   (66)

Bessel functions do not have a closed form solution, and typically need to be looked up or evaluated on a computer. To give a sense of what these coefficients look like, Fig. 92 shows the coefficients c_n = J_n(\beta) for various values of \beta. A property of Bessel functions is that |J_n(\beta)| = |J_{-n}(\beta)|. If we are only interested in the magnitude of the coefficients, these plots can simply be reflected about n = 0 to give us the coefficients for negative n.

You can see in the plots that for smaller \beta, the coefficients quickly go to zero; they take longer to do so as \beta increases. It is these coefficients that tell us the amplitudes of the peaks shown in Fig. 91 which come from the cosine term in Eq. 65. Fig. 91 is thus modified - examples of this modification are shown in Fig. 93. The actual height of the peaks will depend on \beta and A according to A J_n(\beta)/2. Figure 93 plots the spectrum S(f) for positive frequencies only and for \beta values of 1, 2, and 5. The increasing values correspond to an increase in k_f, which increases \Delta f and thus \beta. You can see that the peaks are spaced apart by f_0 as in Fig. 91 and that the amplitudes are given by the coefficients plotted in Fig. 92. If you look at the plot for \beta = 5, you will see that it is exactly the plot we found by treating FM as AM of m_1(t) and m_2(t). The advantage of the Bessel function approach is that it formalizes the procedure and gives us values for the amplitudes of the peaks in the frequency spectrum.
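If SciPy is available, its scipy.special.jv routine evaluates J_n(\beta) directly, and the predicted peak heights A J_n(\beta)/2 from Eq. 65 can be checked against an FFT of the single tone FM signal. The sketch below uses illustrative values of A, \beta, f_0, and f_c, not values from the text.

```python
import numpy as np
from scipy.special import jv   # Bessel function of the first kind, J_n(beta)

A, beta, f0, fc = 1.0, 5.0, 100.0, 2_000.0   # illustrative values
fs = 50_000
T = 1.0                                      # one second = 100 periods of the tone
t = np.arange(0, T, 1 / fs)
s = A * np.cos(2 * np.pi * fc * t + beta * np.sin(2 * np.pi * f0 * t))  # Eq. 64

# Spectrum via FFT; with an integer number of tone periods the peaks land on bins
S = np.fft.rfft(s) / len(s)
f = np.fft.rfftfreq(len(s), 1 / fs)

# Compare the measured peak heights at fc + n*f0 with A*J_n(beta)/2 (Eq. 65)
for n in range(-8, 9):
    k = int(round((fc + n * f0) * T))        # FFT bin index of fc + n*f0
    print(n, abs(S[k]), abs(A * jv(n, beta) / 2))
```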

Figure 92: The Bessel coefficients c_n = J_n(\beta) plotted against n (for n \geq 0) for \beta = 0.1, 0.3, 0.5, 1, 2, and 5. For small \beta the coefficients drop to zero quickly; as \beta increases they remain significant out to larger n.


Figure 93: The single tone FM spectrum S(f) (positive frequencies) for \beta = 1, 2, and 5. The peaks are centered at f_c and spaced f_0 apart, with heights given by the coefficients in Fig. 92; a span of 2\beta f_0 (i.e. 2(1 f_0), 2(2 f_0), and 2(5 f_0)) is marked in each plot.

After all this work, it is best to summarize single tone FM as follows: if you are given m(t) = a\cos(2\pi f_0 t), first use the frequency sensitivity k_f to calculate \beta = k_f a / f_0. Look at a plot like the one in Fig. 92 to determine the shape of the spectrum for the \beta you found. The spectrum S(f) will consist of the peaks from Fig. 92, centered at f_c and spaced apart by a frequency f_0. The peaks from Fig. 92 will of course be mirrored around f_c as well. Look back at Fig. 90 and make sure you understand how it can be computed using the procedure above.
Our goal in all of this is to determine the effective bandwidth of the signal. This bandwidth could be defined as the width over which the spectrum has a significant amplitude - it never quite gets to zero, but we can neglect values of c_n which are very small. In Fig. 93, a width equal to 2\beta f_0 is indicated in each plot. You can see that most of the significant (non-zero) part of the spectrum is within this range. If we extend the range by an extra f_0 on either side, we come very close to capturing all the significant peaks in the spectrum. This has led to an empirical equation, known as Carson's rule, which states that

Bandwidth \approx 2(\beta f_0 + f_0).   (67)

This says that the effective signal bandwidth is given by the span specified in the figure, plus an extra two peaks. You can see that it is not exact - but it is simple to use and is a good rule of thumb for bandwidth calculations. Using the definition \beta = \Delta f / f_0, Carson's rule is often written as 2(\Delta f + f_0).
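As a quick worked example (not from the text): commercial FM broadcasting uses a peak frequency deviation of roughly \Delta f = 75 kHz and a maximum audio frequency of about f_0 = 15 kHz, so Carson's rule gives

Bandwidth \approx 2(\Delta f + f_0) = 2(75\ \text{kHz} + 15\ \text{kHz}) = 180\ \text{kHz},

which is consistent with the roughly 200 kHz channel spacing used for broadcast FM.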
Remember that this derivation was for a purely sinusoidal m(t) = a\cos(2\pi f_0 t). For an

arbitrary message, there will be no analytical expression for the signal bandwidth. However, if we know that the message has a bandwidth of 2W and is centered at f = 0 (see, e.g., Fig. 41), then the highest frequency wave present in the message has a frequency f_0 = W. The maximum possible frequency deviation for an arbitrary m(t) will be \Delta f = k_f \max |m(t)|. Inserting these worst case values into Carson's rule gives

Bandwidth \approx 2(k_f \max |m(t)| + W).   (68)

This equation gives the worst case scenario: it is not likely that the maximum signal amplitude will actually be associated with the highest frequency component in the signal. However, it is again the quickest way of calculating the approximate bandwidth. If a more accurate measure is needed, analyzing the spectrum of s(t), either using hardware measurements or computer experiments, would be necessary.

