
RLC Circuit Analysis (Series And Parallel)

February 22, 2020 by Electrical4U

In an RLC circuit, the three most fundamental elements, a resistor, an inductor and a capacitor, are connected across a voltage supply. All of these elements are linear and passive in nature: passive components are ones that consume energy rather than producing it, and linear elements are those which have a linear relationship between voltage and current. There are a number of ways of connecting these elements across a voltage supply, but the most common method is to connect them either in series or in parallel. The RLC circuit exhibits the property of resonance in the same way as an LC circuit does, but here the oscillations die out more quickly than in an LC circuit because of the resistor in the circuit.
Series RLC Circuit
When a resistor, an inductor and a capacitor are connected in series with the voltage supply, the circuit so formed is called a series RLC circuit. Since all of these components are connected in series, the current in each element remains the same.

Let V_R be the voltage across the resistor, R;
V_L be the voltage across the inductor, L;
V_C be the voltage across the capacitor, C;
X_L be the inductive reactance; and
X_C be the capacitive reactance.
The total voltage in the RLC circuit is not the algebraic sum of the voltages across the resistor, the inductor and the capacitor; it is a vector sum, because in the resistor the voltage is in phase with the current, in the inductor the voltage leads the current by 90°, and in the capacitor the voltage lags behind the current by 90°. The voltages across the components are therefore not in phase with one another and cannot be added arithmetically. The figure below shows the phasor diagram of the series RLC circuit. When drawing the phasor diagram for a series RLC circuit, the current is taken as the reference, because in a series circuit the current in each element is the same, and the voltage vector of each component is drawn relative to this common current vector.
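As a rough illustration of this vector addition, the sketch below (plain Python, with example values chosen arbitrarily and not taken from the article) represents each voltage as a complex phasor relative to the common current, and shows that the magnitude of the total is not the arithmetic sum of the individual magnitudes.

```python
import math

# Assumed example values (not from the article): series current and reactances
I = 2.0        # A, common series current, taken as the 0 deg reference
R = 10.0       # ohm
XL = 15.0      # ohm, inductive reactance
XC = 5.0       # ohm, capacitive reactance

VR = I * R            # in phase with the current
VL = I * XL * 1j      # leads the current by 90 deg
VC = I * XC * -1j     # lags the current by 90 deg

V_total = VR + VL + VC
print("arithmetic sum:", I * (R + XL + XC))      # 60 V, not the actual supply voltage
print("vector sum magnitude:", abs(V_total))     # sqrt(VR^2 + (VL - VC)^2), about 28.3 V
print("phase angle (deg):", math.degrees(math.atan2(V_total.imag, V_total.real)))
```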

The Impedance for a Series RLC Circuit


The impedance Z of a series RLC circuit is the total opposition to the flow of current due to the circuit resistance R, the inductive reactance X_L and the capacitive reactance X_C. If the inductive reactance is greater than the capacitive reactance (X_L > X_C), the circuit has a lagging phase angle; if the capacitive reactance is greater than the inductive reactance (X_C > X_L), the circuit has a leading phase angle; and if the two reactances are equal (X_L = X_C), the circuit behaves as a purely resistive circuit.
We know that the impedance is
Z = √(R² + (X_L - X_C)²),
where X_L = 2πfL and X_C = 1/(2πfC).
Substituting these values gives
Z = √(R² + (2πfL - 1/(2πfC))²).
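A minimal numeric check of these formulas, assuming example component values that are not from the article:

```python
import math

# Assumed example values: 50 Hz supply, R = 10 ohm, L = 0.1 H, C = 100 uF
f, R, L, C = 50.0, 10.0, 0.1, 100e-6

XL = 2 * math.pi * f * L          # inductive reactance
XC = 1 / (2 * math.pi * f * C)    # capacitive reactance
Z = math.sqrt(R**2 + (XL - XC)**2)
phase = math.degrees(math.atan2(XL - XC, R))

print(f"XL = {XL:.2f} ohm, XC = {XC:.2f} ohm, Z = {Z:.2f} ohm")
print("lagging" if XL > XC else "leading" if XC > XL else "purely resistive",
      f"phase angle = {phase:.1f} deg")
```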

Parallel RLC Circuit


In a parallel RLC circuit the resistor, inductor and capacitor are connected in parallel across a voltage supply. The parallel RLC circuit is, in this sense, the opposite of the series RLC circuit: the applied voltage remains the same across all components, and it is the supply current that gets divided. The total current drawn from the supply is not the arithmetic sum of the currents flowing in the individual components but the vector sum of all of them, because the currents flowing in the resistor, inductor and capacitor are not in phase with each other and so cannot be added arithmetically.
In the phasor diagram of the parallel RLC circuit:
I_R is the current flowing in the resistor, R, in amps;
I_C is the current flowing in the capacitor, C, in amps;
I_L is the current flowing in the inductor, L, in amps; and
I_S is the supply current, in amps.
In the parallel RLC circuit, all the components are connected in parallel, so the voltage across each element is the same. Therefore, for drawing the phasor diagram, the voltage is taken as the reference vector and all the other currents, i.e. I_R, I_C and I_L, are drawn relative to this voltage vector. The current through each element can be found using Kirchhoff's Current Law, which states that the sum of the currents entering a junction or node is equal to the sum of the currents leaving that node.
In the impedance equation of a parallel RLC circuit, each element appears through the reciprocal of its impedance (1/Z), i.e. its admittance, Y. The total admittance is
Y = 1/Z = √((1/R)² + (1/X_C - 1/X_L)²),
so in a parallel RLC circuit it is convenient to use admittance instead of impedance.
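The same idea in a short sketch: the branch currents are computed from a common supply voltage (example values assumed, not from the article), added as phasors, and compared with the admittance form.

```python
import math

# Assumed example values for a parallel RLC circuit
V, f = 230.0, 50.0            # supply voltage (reference phasor) and frequency
R, L, C = 50.0, 0.2, 20e-6

XL = 2 * math.pi * f * L
XC = 1 / (2 * math.pi * f * C)

IR = V / R                    # in phase with V
IL = V / XL                   # lags V by 90 deg
IC = V / XC                   # leads V by 90 deg

Is = math.sqrt(IR**2 + (IC - IL)**2)               # vector sum of the branch currents
Y  = math.sqrt((1 / R)**2 + (1 / XC - 1 / XL)**2)  # total admittance, Y = 1/Z

print(f"IR = {IR:.2f} A, IL = {IL:.2f} A, IC = {IC:.2f} A")
print(f"supply current Is = {Is:.2f} A, V * Y = {V * Y:.2f} A")   # the two agree
```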
Resonance in RLC Circuit
In a circuit containing an inductor and a capacitor, energy is stored in two different ways:

1. When a current flows through an inductor, energy is stored in its magnetic field.
2. When a capacitor is charged, energy is stored in its static electric field.
The magnetic field in the inductor is built up by the current supplied by the discharging capacitor. Similarly, the capacitor is charged by the current produced by the collapsing magnetic field of the inductor, and this process repeats over and over, causing electrical energy to oscillate between the magnetic field and the electric field. At a certain frequency, known as the resonant frequency, the inductive reactance of the circuit becomes equal to the capacitive reactance, and the electrical energy oscillates freely between the electric field of the capacitor and the magnetic field of the inductor. This forms a harmonic oscillator for current. In an RLC circuit, the presence of the resistor causes these oscillations to die out over a period of time; this is called the damping effect of the resistor.
Formula for Resonant Frequency
During resonance, at a certain frequency called the resonant frequency f_r, the inductive reactance equals the capacitive reactance, X_L = X_C, i.e. 2πf_rL = 1/(2πf_rC). Solving for f_r gives
f_r = 1/(2π√(LC)).
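A quick numeric check of this expression, with assumed example values for L and C:

```python
import math

# Assumed example values: L = 10 mH, C = 1 uF
L, C = 10e-3, 1e-6

fr = 1 / (2 * math.pi * math.sqrt(L * C))     # resonant frequency
XL = 2 * math.pi * fr * L                     # inductive reactance at fr
XC = 1 / (2 * math.pi * fr * C)               # capacitive reactance at fr

print(f"fr = {fr:.1f} Hz")                      # about 1591.5 Hz
print(f"XL = {XL:.2f} ohm, XC = {XC:.2f} ohm")  # equal at resonance
```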

Digital electronics, digital technology or digital (electronic) circuits are electronics that operate on digital signals. In contrast, analog circuits manipulate analog signals whose performance is more subject to manufacturing tolerance, signal attenuation and noise.
Digital techniques are helpful because it is much easier to get an electronic device to switch
into one of a number of known states than to accurately reproduce a continuous range of
values.
Digital electronic circuits are usually made from large assemblies of logic gates (often printed on integrated circuits), simple electronic representations of Boolean logic functions.[1]

History
The binary number system was refined by Gottfried Wilhelm Leibniz (published in 1705)
and he also established that by using the binary system, the principles of arithmetic and
logic could be joined. Digital logic as we know it was the brain-child of George Boole in the
mid 19th century. In an 1886 letter, Charles Sanders Peirce described how logical
operations could be carried out by electrical switching circuits.[2] Eventually, vacuum
tubes replaced relays for logic operations. Lee De Forest's modification, in 1907, of
the Fleming valve can be used as an AND gate. Ludwig Wittgenstein introduced a version
of the 16-row truth table as proposition 5.101 of Tractatus Logico-
Philosophicus (1921). Walther Bothe, inventor of the coincidence circuit, shared the
1954 Nobel Prize in physics, for the first modern electronic AND gate in 1924.
Mechanical analog computers started appearing in the first century and were later used in
the medieval era for astronomical calculations. In World War II, mechanical analog
computers were used for specialized military applications such as calculating torpedo
aiming. During this time the first electronic digital computers were developed. Originally
they were the size of a large room, consuming as much power as several hundred
modern personal computers (PCs).[3]
The Z3 was an electromechanical computer designed by Konrad Zuse. Finished in 1941, it was the world's first working programmable, fully automatic digital computer.[4] It was built from electromechanical relays rather than vacuum tubes; the vacuum tube, invented in 1904 by John Ambrose Fleming, is what later made purely electronic digital computers possible.
At the same time that digital calculation replaced analog, purely electronic circuit elements
soon replaced their mechanical and electromechanical equivalents. John
Bardeen and Walter Brattain invented the point-contact transistor at Bell Labs in 1947,
followed by William Shockley inventing the bipolar junction transistor at Bell Labs in 1948.[5][6]

At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of vacuum tubes.[7] Their first transistorised computer, the first in the world, was operational by 1953, and a second version was completed there in April 1955. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors were smaller, more reliable, had indefinite lifespans and required less power, thereby giving off less heat and allowing much denser concentrations of circuits, up to tens of thousands in a relatively compact space.
While working at Texas Instruments in July 1958, Jack Kilby recorded his initial ideas concerning the integrated circuit (IC), then successfully demonstrated the first working integrated circuit on 12 September 1958.[8] Kilby's chip was made of germanium. The following
year, Robert Noyce at Fairchild Semiconductor invented the silicon integrated circuit. The
basis for Noyce's silicon IC was the planar process, developed in early 1959 by Jean
Hoerni, who was in turn building on Mohamed Atalla's silicon surface passivation method
developed in 1957.[9] This new technique, the integrated circuit, allowed for quick, low-cost
fabrication of complex circuits by having a set of electronic circuits on one small plate
("chip") of semiconductor material, normally silicon.

Digital revolution and digital age


Further information: Digital Revolution and Digital Age
The metal–oxide–semiconductor field-effect transistor (MOSFET), also known as the MOS transistor, was invented by Mohamed Atalla and Dawon Kahng at Bell Labs in 1959.[10][11][12] The MOSFET's advantages include high scalability,[13] affordability,[14] low power consumption, and high transistor density.[15] Its rapid on–off electronic switching speed also makes it ideal for generating pulse trains,[16] the basis for electronic digital signals,[17][18] in contrast to BJTs, which more slowly generate analog signals resembling sine waves.[16] Along with MOS large-scale integration (LSI), these factors make the MOSFET an important switching device for digital circuits.[19] The MOSFET revolutionized the electronics industry[20][21] and is the most common semiconductor device.[11][22] MOSFETs were the fundamental building blocks of digital electronics during the Digital Revolution of the late 20th to early 21st centuries,[12][23][24] which paved the way for the Digital Age of the early 21st century.[12]
In the early days of integrated circuits, each chip was limited to only a few transistors, and
the low degree of integration meant the design process was relatively simple.
Manufacturing yields were also quite low by today's standards. The wide adoption of the
MOSFET transistor by the early 1970s led to the first large-scale integration (LSI) chips
with more than 10,000 transistors on a single chip.[25] Following the wide adoption of CMOS,
a type of MOSFET logic, by the 1980s, millions and then billions of MOSFETs could be
placed on one chip as the technology progressed,[26] and good designs required thorough
planning, giving rise to new design methods. As of 2013, billions of MOSFETs are
manufactured every day.[11]
The wireless revolution, the introduction and proliferation of wireless networks, began in the
1990s and was enabled by the wide adoption of MOSFET-based RF power
amplifiers (power MOSFET and LDMOS) and RF circuits (RF CMOS).[27][28][29] Wireless
networks allowed for public digital transmission without the need for cables, leading
to digital television (digital TV), GPS, satellite radio, wireless Internet and mobile
phones through the 1990s–2000s.
Discrete cosine transform (DCT) coding, a data compression technique first proposed
by Nasir Ahmed in 1972,[30] enabled practical digital media transmission,[31][32][33] with image
compression formats such as JPEG (1992), video coding formats such as H.26x (1988
onwards) and MPEG (1993 onwards),[34] audio coding standards such as Dolby
Digital (1991)[35][36] and MP3 (1994),[34] and digital TV standards such as video-on-
demand (VOD)[31] and high-definition television (HDTV).[37] Internet video was popularized
by YouTube, an online video platform founded by Chad Hurley, Jawed Karim and Steve
Chen in 2005, which enabled the video streaming of MPEG-4 AVC (H.264) user-generated
content from anywhere on the World Wide Web.[38]

Properties
An advantage of digital circuits when compared to analog circuits is that signals
represented digitally can be transmitted without degradation caused by noise.[39] For
example, a continuous audio signal transmitted as a sequence of 1s and 0s, can be
reconstructed without error, provided the noise picked up in transmission is not enough to
prevent identification of the 1s and 0s.
In a digital system, a more precise representation of a signal can be obtained by using
more binary digits to represent it. While this requires more digital circuits to process the
signals, each digit is handled by the same kind of hardware, resulting in an
easily scalable system. In an analog system, additional resolution requires fundamental
improvements in the linearity and noise characteristics of each step of the signal chain.
With computer-controlled digital systems, new functions can be added through a software revision, with no hardware changes. Often this can be done outside of the factory by updating the product's software, so design errors can be corrected even after the product is in a customer's hands.
Information storage can be easier in digital systems than in analog ones. The noise
immunity of digital systems permits data to be stored and retrieved without degradation. In
an analog system, noise from aging and wear degrades the information stored. In a digital
system, as long as the total noise is below a certain level, the information can be recovered
perfectly. Even when more significant noise is present, the use of redundancy permits the
recovery of the original data provided too many errors do not occur.
In some cases, digital circuits use more energy than analog circuits to accomplish the same tasks, thus producing more heat, which adds to the complexity of the circuits, for example through the inclusion of heat sinks. In portable or battery-powered systems this can limit the use of digital
systems. For example, battery-powered cellular telephones often use a low-power analog
front-end to amplify and tune in the radio signals from the base station. However, a base
station has grid power and can use power-hungry, but very flexible software radios. Such
base stations can be easily reprogrammed to process the signals used in new cellular
standards.
Many useful digital systems must translate from continuous analog signals to discrete
digital signals. This causes quantization errors. Quantization error can be reduced if the
system stores enough digital data to represent the signal to the desired degree of fidelity.
The Nyquist–Shannon sampling theorem provides an important guideline as to how much
digital data is needed to accurately portray a given analog signal.
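A small sketch of this trade-off (plain Python, values assumed for illustration): quantizing the same sampled sine wave with more bits shrinks the worst-case quantization error roughly by half for each extra bit of resolution.

```python
import math

def max_quantization_error(num_bits, num_samples=1000):
    """Quantize one cycle of a unit sine wave and return the worst-case error."""
    levels = 2 ** num_bits
    step = 2.0 / (levels - 1)                 # full scale spans -1 .. +1
    worst = 0.0
    for n in range(num_samples):
        x = math.sin(2 * math.pi * n / num_samples)
        q = round((x + 1.0) / step) * step - 1.0   # snap to the nearest level
        worst = max(worst, abs(x - q))
    return worst

for bits in (4, 8, 12, 16):
    print(bits, "bits -> max quantization error", f"{max_quantization_error(bits):.6f}")
```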
In some systems, if a single piece of digital data is lost or misinterpreted, the meaning of
large blocks of related data can completely change. For example, a single-bit error in audio
data stored directly as linear pulse code modulation causes, at worst, a single click.
By contrast, many people use audio compression to save storage space and download time, even though a single bit error may then cause a larger disruption.
Because of the cliff effect, it can be difficult for users to tell if a particular system is right on
the edge of failure, or if it can tolerate much more noise before failing. Digital fragility can
be reduced by designing a digital system for robustness. For example, a parity bit or
other error management method can be inserted into the signal path. These schemes help
the system detect errors, and then either correct the errors, or request retransmission of the
data.
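A minimal sketch of the parity-bit idea mentioned above (illustrative only, not tied to any particular transmission scheme): a single parity bit appended to a data word lets the receiver detect any single-bit error, although on its own it cannot locate or correct the flipped bit.

```python
def add_even_parity(bits):
    """Append a parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def check_even_parity(word):
    """Return True if the received word (data + parity) still has even parity."""
    return sum(word) % 2 == 0

data = [1, 0, 1, 1, 0, 0, 1]
sent = add_even_parity(data)
print(check_even_parity(sent))        # True: no error detected

corrupted = sent.copy()
corrupted[3] ^= 1                     # flip one bit "in transit"
print(check_even_parity(corrupted))   # False: single-bit error detected
```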

Construction

A binary clock, hand-wired on breadboards

A digital circuit is typically constructed from small electronic circuits called logic gates that
can be used to create combinational logic. Each logic gate is designed to perform a
function of boolean logic when acting on logic signals. A logic gate is generally created from one or more electrically controlled switches, usually transistors, though thermionic valves have seen historical use. The output of a logic gate can, in turn, control or feed into
more logic gates.
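As a toy illustration of gates feeding other gates, the sketch below builds AND, OR and XOR purely out of a NAND primitive, the kind of simple switch network described above (Python booleans stand in for logic levels; this is only a software model, not a description of any particular device):

```python
def nand(a, b):
    # A NAND gate: output is low only when both inputs are high.
    return not (a and b)

def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))

def xor(a, b):
    # Four NAND gates in the classic arrangement.
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), "->", int(and_(a, b)), int(or_(a, b)), int(xor(a, b)))
```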
Another form of digital circuit is constructed from lookup tables (many sold as "programmable logic devices", though other kinds of PLDs exist). Lookup tables can
perform the same functions as machines based on logic gates, but can be easily
reprogrammed without changing the wiring. This means that a designer can often repair
design errors without changing the arrangement of wires. Therefore, in small volume
products, programmable logic devices are often the preferred solution. They are usually
designed by engineers using electronic design automation software.
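The lookup-table approach can be sketched in a few lines: the same full-adder function is expressed once as gate-style logic and once as a table indexed by the input bits, and "reprogramming" the device amounts to rewriting the table rather than the wiring (names and values here are illustrative, not tied to any particular PLD).

```python
from itertools import product

def full_adder_gates(a, b, cin):
    """Full adder described with boolean operators (gate-style description)."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

# The same function as a lookup table: key = input bits, value = output bits.
full_adder_lut = {(a, b, cin): full_adder_gates(a, b, cin)
                  for a, b, cin in product((0, 1), repeat=3)}

print(full_adder_lut[(1, 1, 0)])   # (0, 1): sum 0, carry 1
# "Reprogramming" the device means overwriting table entries, not rewiring gates.
```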
Integrated circuits consist of multiple transistors on one silicon chip, and are the least expensive way to make large numbers of interconnected logic gates. Integrated circuits are usually interconnected on a printed circuit board, a board that holds electrical components and connects them together with copper traces.
Design
Engineers use many methods to minimize logic redundancy in order to reduce the circuit
complexity. Reduced complexity reduces component count and potential errors and
therefore typically reduces cost. Logic redundancy can be removed by several well-known
techniques, such as binary decision diagrams, Boolean algebra, Karnaugh maps,
the Quine–McCluskey algorithm, and the heuristic computer method. These operations are
typically performed within a computer-aided design system.
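As a tiny illustration of what removing logic redundancy means, the sketch below checks by exhaustive truth-table comparison that the expression (A AND B) OR (A AND NOT B) can be replaced by just A. A Karnaugh map or the Quine–McCluskey algorithm would find this simplification systematically; the code here only verifies the equivalence (pure Python, no CAD tool assumed).

```python
from itertools import product

def original(a, b):
    return (a and b) or (a and not b)   # redundant form

def simplified(a, b):
    return a                            # minimized form

# Two combinational functions are interchangeable iff their truth tables match.
equivalent = all(original(a, b) == simplified(a, b)
                 for a, b in product((False, True), repeat=2))
print("equivalent:", equivalent)        # True: the redundancy can be removed
```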
Embedded systems with microcontrollers and programmable logic controllers are often
used to implement digital logic for complex systems that don't require optimal performance.
These systems are usually programmed by software engineers or by electricians,
using ladder logic.

Representation
Representations are crucial to an engineer's design of digital circuits. To choose
representations, engineers consider types of digital systems.
The classical way to represent a digital circuit is with an equivalent set of logic gates. Each
logic symbol is represented by a different shape. The actual set of shapes was introduced
in 1984 under IEEE/ANSI standard 91-1984 and is now in common use by integrated circuit
manufacturers.[40] Another way is to construct an equivalent system of electronic switches
(usually transistors). This can be represented as a truth table.
Most digital systems divide into combinational and sequential systems. A combinational
system always presents the same output when given the same inputs. A sequential system
is a combinational system with some of the outputs fed back as inputs. This makes the
digital machine perform a sequence of operations. The simplest sequential system is
probably a flip flop, a mechanism that represents a binary digit or "bit". Sequential systems
are often designed as state machines. In this way, engineers can design a system's gross
behavior, and even test it in a simulation, without considering all the details of the logic
functions.
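A minimal sketch of "outputs fed back as inputs" creating memory: two cross-coupled NOR functions form an SR latch, the kind of simple flip-flop mentioned above, and the stored bit persists after the inputs are removed (illustrative Python, repeatedly evaluating the feedback until it settles; not a model of any specific logic family).

```python
def sr_latch_step(s, r, q, qbar):
    """One settling step of a cross-coupled NOR latch."""
    new_q = not (r or qbar)
    new_qbar = not (s or q)
    return new_q, new_qbar

def sr_latch(s, r, q, qbar, steps=4):
    for _ in range(steps):                     # iterate until the feedback settles
        q, qbar = sr_latch_step(s, r, q, qbar)
    return q, qbar

q, qbar = False, True                          # start in the reset state
q, qbar = sr_latch(True, False, q, qbar)       # assert Set
print(int(q))                                  # 1
q, qbar = sr_latch(False, False, q, qbar)      # inputs removed, the bit is remembered
print(int(q))                                  # 1
q, qbar = sr_latch(False, True, q, qbar)       # assert Reset
print(int(q))                                  # 0
```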
Sequential systems divide into two further subcategories. "Synchronous" sequential
systems change state all at once, when a "clock" signal changes state. "Asynchronous"
sequential systems propagate changes whenever inputs change. Synchronous sequential
systems are made of well-characterized asynchronous circuits such as flip-flops, that
change only when the clock changes, and which have carefully designed timing margins.
For logic simulation, digital circuit representations have digital file formats that can be
processed by computer programs.

Synchronous systems

A 4-bit ring counter using D-type flip flops is an example of synchronous logic. Each device is connected to the clock signal, and they update together.

Main article: synchronous logic


The usual way to implement a synchronous sequential state machine is to divide it into a
piece of combinational logic and a set of flip flops called a "state register." Each time a
clock signal ticks, the state register captures the feedback generated from the previous
state of the combinational logic, and feeds it back as an unchanging input to the
combinational part of the state machine. The fastest rate of the clock is set by the most
time-consuming logic calculation in the combinational logic.
The state register is just a representation of a binary number. If the states in the state
machine are numbered (easy to arrange), the logic function is some combinational logic
that produces the number of the next state.
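A compact sketch of this structure (illustrative Python): the state register is just a number held between clock ticks, and the combinational logic is a pure function that maps the present state number to the next one, here a modulo-4 counter.

```python
def next_state(state):
    """Combinational logic: purely a function of the present state (a mod-4 counter)."""
    return (state + 1) % 4

state_register = 0                      # the state register holds a binary number
for tick in range(8):                   # each iteration models one clock edge
    print("clock", tick, "state =", state_register)
    state_register = next_state(state_register)   # captured on the clock edge
```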

Asynchronous systems
As of 2014, most digital logic is synchronous because it is easier to create and verify a
synchronous design. However, asynchronous logic has the advantage of its speed not
being constrained by an arbitrary clock; instead, it runs at the maximum speed of its logic
gates.[a] Building an asynchronous system using faster parts makes the circuit faster.
Nevertheless, most systems need circuits that allow external unsynchronized signals to
enter synchronous logic circuits. These are inherently asynchronous in their design and
must be analyzed as such. Examples of widely used asynchronous circuits include
synchronizer flip-flops, switch debouncers and arbiters.
Asynchronous logic components can be hard to design because all possible states, in all
possible timings must be considered. The usual method is to construct a table of the
minimum and maximum time that each such state can exist, and then adjust the circuit to
minimize the number of such states. Then the designer must force the circuit to periodically
wait for all of its parts to enter a compatible state (this is called "self-resynchronization").
Without such careful design, it is easy to accidentally produce asynchronous logic that is
"unstable," that is, real electronics will have unpredictable results because of the
cumulative delays caused by small variations in the values of the electronic components.

Register transfer systems

Example of a simple circuit with a toggling output. The inverter forms the combinational logic in this
circuit, and the register holds the state.

Many digital systems are data flow machines. These are usually designed using
synchronous register transfer logic, using hardware description languages such
as VHDL or Verilog.
In register transfer logic, binary numbers are stored in groups of flip flops called registers.
The outputs of each register are a bundle of wires called a "bus" that carries that number to
other calculations. A calculation is simply a piece of combinational logic. Each calculation
also has an output bus, and these may be connected to the inputs of several registers.
Sometimes a register will have a multiplexer on its input, so that it can store a number from
any one of several buses. Alternatively, the outputs of several items may be connected to a
bus through buffers that can turn off the output of all of the devices except one. A
sequential state machine controls when each register accepts new data from its input.
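A rough model of this register-transfer view (illustrative Python, with made-up register names and an assumed 8-bit bus width): registers hold numbers between clock edges, buses carry a register's contents to a piece of combinational logic, and a control schedule decides which register accepts new data on each clock edge.

```python
# Registers hold binary numbers between clock edges.
regs = {"A": 3, "B": 5, "SUM": 0}

def adder(x, y):
    """Combinational logic block whose inputs and output travel on buses."""
    return (x + y) & 0xFF          # 8-bit bus width assumed

# Control schedule: which register captures which bus value on each clock edge.
schedule = [
    ("SUM", lambda r: adder(r["A"], r["B"])),   # SUM <- A + B
    ("A",   lambda r: r["SUM"]),                # A   <- SUM (mux selects the SUM bus)
    ("SUM", lambda r: adder(r["A"], r["B"])),   # SUM <- A + B again
]

for dest, bus_value in schedule:       # one entry per clock edge
    regs[dest] = bus_value(regs)       # only the enabled register accepts new data
    print(regs)
```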
Asynchronous register-transfer systems (such as computers) have a general solution. In
the 1980s, some researchers discovered that almost all synchronous register-transfer
machines could be converted to asynchronous designs by using first-in-first-out
synchronization logic. In this scheme, the digital machine is characterized as a set of data
flows. In each step of the flow, an asynchronous "synchronization circuit" determines when
the outputs of that step are valid, and presents a signal that says, "grab the data" to the
stages that use that stage's inputs. It turns out that just a few relatively simple
synchronization circuits are needed.

Computer design

Intel 80486DX2 microprocessor

The most general-purpose register-transfer logic machine is a computer. This is basically


an automatic binary abacus. The control unit of a computer is usually designed as
a microprogram run by a microsequencer. A microprogram is much like a player-piano roll.
Each table entry or "word" of the microprogram commands the state of every bit that
controls the computer. The sequencer then counts, and the count addresses the memory
or combinational logic machine that contains the microprogram. The bits from the
microprogram control the arithmetic logic unit, memory and other parts of the computer,
including the microsequencer itself. A "specialized computer" is usually a conventional
computer with special-purpose control logic or microprogram.
In this way, the complex task of designing the controls of a computer is reduced to a
simpler task of programming a collection of much simpler logic machines.
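The player-piano-roll analogy can be sketched very simply (illustrative Python, not modeled on any real processor): the microprogram is a table of control words, and the microsequencer is a counter that steps through the table, with each word's bits switching parts of the machine on or off.

```python
# Each control word lists the control signals asserted during that microstep
# (signal names are invented for illustration).
microprogram = [
    {"load_a": 1, "load_b": 0, "alu_add": 0, "write_out": 0},
    {"load_a": 0, "load_b": 1, "alu_add": 0, "write_out": 0},
    {"load_a": 0, "load_b": 0, "alu_add": 1, "write_out": 0},
    {"load_a": 0, "load_b": 0, "alu_add": 0, "write_out": 1},
]

sequencer = 0                          # the microsequencer is just a counter
while sequencer < len(microprogram):
    word = microprogram[sequencer]     # the count addresses the microprogram store
    active = [name for name, bit in word.items() if bit]
    print("step", sequencer, "asserts", active)
    sequencer += 1                     # count on to the next control word
```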
Almost all computers are synchronous. However, true asynchronous computers have also
been designed. One example is the Aspida DLX core.[42] Another was offered by ARM
Holdings. Speed advantages have not materialized, because modern computer designs
already run at the speed of their slowest component, usually memory. These do use
somewhat less power because a clock distribution network is not needed. An unexpected
advantage is that asynchronous computers do not produce spectrally-pure radio noise, so
they 
