
PLC News from WIKI.

Programmable logic controller

Siemens Simatic S7-400 system in a rack, left-to-right: power supply unit (PS), CPU, interface
module (IM) and communication processor (CP).
A programmable logic controller, PLC, or programmable controller is a digital computer used for
automation of typically industrial electromechanical processes, such as control of machinery on
factory assembly lines, amusement rides, or light fixtures. PLCs are used in many machines, in
many industries. PLCs are designed for multiple arrangements of digital and analog inputs and
outputs, extended temperature ranges, immunity to electrical noise, and resistance to vibration
and impact. Programs to control machine operation are typically stored in battery-backed-up
or non-volatile memory. A PLC is an example of a "hard" real-time system since output results
must be produced in response to input conditions within a limited time, otherwise unintended
operation will result.
Before the PLC, control, sequencing, and safety interlock logic for manufacturing automobiles
was mainly composed of relays, cam timers, drum sequencers, and dedicated closed-loop
controllers. Since these could number in the hundreds or even thousands, the process for updating
such facilities for the yearly model change-over was very time consuming and expensive, as
electricians needed to individually rewire the relays to change their operational characteristics.
Digital computers, being general-purpose programmable devices, were soon applied to control
industrial processes. Early computers required specialist programmers, and stringent operating
environmental control for temperature, cleanliness, and power quality. Using a general-purpose
computer for process control required protecting the computer from the plant floor conditions. An
industrial control computer would have several attributes: it would tolerate the shop-floor
environment, it would support discrete (bit-form) input and output in an easily extensible manner,
it would not require years of training to use, and it would permit its operation to be monitored.
The response time of any computer system must be fast enough to be useful for control; the
required speed varies according to the nature of the process.[1] Since many industrial processes
have timescales easily addressed by millisecond response times, modern (fast, small, reliable)
electronics greatly facilitate building reliable controllers, especially because performance can be
traded off for reliability.
In 1968 GM Hydra-Matic (the automatic transmission division of General Motors) issued a
request for proposals for an electronic replacement for hard-wired relay systems based on a white

paper written by engineer Edward R. Clark. The winning proposal came from Bedford Associates
of Bedford, Massachusetts. The first PLC, designated the 084 because it was Bedford Associates'
eighty-fourth project, was the result.[2] Bedford Associates started a new company dedicated to
developing, manufacturing, selling, and servicing this new product: Modicon, which stood for
MOdular DIgital CONtroller. One of the people who worked on that project was Dick Morley,
who is considered to be the "father" of the PLC.[3] The Modicon brand was sold in 1977 to
Gould Electronics, later acquired by the German company AEG, and then by the French company Schneider
Electric, the current owner.
One of the very first 084 models built is now on display at Modicon's headquarters in North
Andover, Massachusetts. It was presented to Modicon by GM, when the unit was retired after
nearly twenty years of uninterrupted service. Modicon used the 84 moniker at the end of its
product range until the 984 made its appearance.
The automotive industry is still one of the largest users of PLCs.
Contents

1 Development
  1.1 Programming
2 Functionality
  2.1 Programmable logic relay (PLR)
3 PLC topics
  3.1 Features
  3.2 Scan time
  3.3 System scale
  3.4 User interface
  3.5 Communications
  3.6 Programming
  3.7 Security
  3.8 Simulation
  3.9 Redundancy
4 PLC compared with other control systems
5 Discrete and analog signals
  5.1 Example
6 See also
7 References
8 Further reading

Development
Early PLCs were designed to replace relay logic systems. These PLCs were programmed in
"ladder logic", which strongly resembles a schematic diagram of relay logic. This program
notation was chosen to reduce training demands for the existing technicians. Other early PLCs
used a form of instruction list programming, based on a stack-based logic solver.
Modern PLCs can be programmed in a variety of ways, from the relay-derived ladder logic to
programming languages such as specially adapted dialects of BASIC and C. Another method is
state logic, a very high-level programming language designed to program PLCs based on state
transition diagrams.
Many early PLCs did not have accompanying programming terminals that were capable of
graphical representation of the logic, and so the logic was instead represented as a series of logic
expressions in some version of Boolean format, similar to Boolean algebra. As programming
terminals evolved, it became more common for ladder logic to be used, for the aforementioned
reasons and because it was a familiar format used for electromechanical control panels. Newer
formats such as state logic and Function Block (which is similar to the way logic is depicted
when using digital integrated logic circuits) exist, but they are still not as popular as ladder logic.
A primary reason for this is that PLCs solve the logic in a predictable and repeating sequence,
and ladder logic allows the programmer (the person writing the logic) to see any issues with the
timing of the logic sequence more easily than would be possible in other formats.
Programming
Early PLCs, up to the mid-1990s, were programmed using proprietary programming panels or
special-purpose programming terminals, which often had dedicated function keys representing
the various logical elements of PLC programs.[2] Some proprietary programming terminals
displayed the elements of PLC programs as graphic symbols, but plain ASCII character
representations of contacts, coils, and wires were common. Programs were stored on cassette tape
cartridges. Facilities for printing and documentation were minimal due to lack of memory
capacity. The oldest PLCs used non-volatile magnetic core memory.
More recently, PLCs are programmed using application software on personal computers, which
now represent the logic in graphic form instead of character symbols. The computer is connected
to the PLC through Ethernet, RS-232, RS-485, or RS-422 cabling. The programming software
allows entry and editing of the ladder-style logic. Generally the software provides functions for
debugging and troubleshooting the PLC software, for example, by highlighting portions of the
logic to show current status during operation or via simulation. The software will upload and
download the PLC program, for backup and restoration purposes. In some models of
programmable controller, the program is transferred from a personal computer to the PLC

through a programming board which writes the program into a removable chip such as an
EPROM.
Functionality
The functionality of the PLC has evolved over the years to include sequential relay control,
motion control, process control, distributed control systems, and networking. The data handling,
storage, processing power, and communication capabilities of some modern PLCs are
approximately equivalent to those of desktop computers. PLC-like programming combined with remote
I/O hardware allows a general-purpose desktop computer to overlap some PLCs in certain
applications. Desktop computer controllers have not been generally accepted in heavy industry
because the desktop computers run on less stable operating systems than do PLCs, and because
the desktop computer hardware is typically not designed to the same levels of tolerance to
temperature, humidity, vibration, and longevity as the processors used in PLCs. Operating
systems such as Windows do not lend themselves to deterministic logic execution, with the result
that the controller may not always respond to changes of input status with the consistency in
timing expected from PLCs. Desktop logic applications find use in less critical situations, such as
laboratory automation and use in small facilities where the application is less demanding and
critical, because they are generally much less expensive than PLCs.
Programmable logic relay (PLR)
In more recent years, small products called PLRs (programmable logic relays), and also by
similar names, have become more common and accepted. These are much like PLCs, and are
used in light industry where only a few points of I/O (i.e. a few signals coming in from the real
world and a few going out) are needed, and low cost is desired. These small devices are typically
made in a common physical size and shape by several manufacturers, and branded by the makers
of larger PLCs to fill out their low end product range. Popular names include PICO Controller,
NANO PLC, and other names implying very small controllers. Most of these have 8 to 12
discrete inputs, 4 to 8 discrete outputs, and up to 2 analog inputs. Size is usually about 4" wide,
3" high, and 3" deep. Most such devices include a tiny postage-stamp-sized LCD screen for
viewing simplified ladder logic (only a very small portion of the program being visible at a given
time) and status of I/O points, and typically these screens are accompanied by a 4-way rocker
push-button plus four more separate push-buttons, similar to the key buttons on a VCR remote
control, and used to navigate and edit the logic. Most have a small plug for connecting via
RS-232 or RS-485 to a personal computer so that programmers can use simple Windows applications
for programming instead of being forced to use the tiny LCD and push-button set for this
purpose. Unlike regular PLCs that are usually modular and greatly expandable, the PLRs are
usually not modular or expandable, but their price can be two orders of magnitude less than a
PLC's, and they still offer robust design and deterministic execution of the logic.
PLC topics
Features

Control panel with PLC (grey elements in the center). The unit consists of separate elements,
from left to right: power supply, controller, and relay units for input and output.

The main difference from other computers is that PLCs are armored for severe conditions (such
as dust, moisture, heat, cold), and have the facility for extensive input/output (I/O) arrangements.
These connect the PLC to sensors and actuators. PLCs read limit switches, analog process
variables (such as temperature and pressure), and the positions of complex positioning systems.
Some use machine vision.[4] On the actuator side, PLCs operate electric motors, pneumatic or
hydraulic cylinders, magnetic relays, solenoids, or analog outputs. The input/output arrangements
may be built into a simple PLC, or the PLC may have external I/O modules attached to a
computer network that plugs into the PLC.
Scan time
A PLC program is generally executed repeatedly as long as the controlled system is running. The
status of physical input points is copied to an area of memory accessible to the processor,
sometimes called the "I/O Image Table". The program is then run from its first instruction rung
down to the last rung. It takes some time for the processor of the PLC to evaluate all the rungs
and update the I/O image table with the status of outputs.[5] This scan time may be a few
milliseconds for a small program or on a fast processor, but older PLCs running very large
programs could take much longer (say, up to 100 ms) to execute the program. If the scan time
were too long, the response of the PLC to process conditions would be too slow to be useful.
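The scan cycle just described can be sketched in a few lines of Python. This is an illustrative model only, not any vendor's runtime; read_physical_inputs, solve_ladder_logic, and write_physical_outputs are hypothetical placeholders for the I/O driver and the compiled user program.

import time

def plc_scan_loop(read_physical_inputs, solve_ladder_logic, write_physical_outputs):
    """Illustrative model of a PLC scan cycle (not a vendor implementation)."""
    input_image = {}     # "I/O image table": snapshot of the physical inputs
    output_image = {}    # output states produced by the last logic solve

    while True:
        started = time.monotonic()

        # 1. Copy the status of the physical inputs into the image table.
        input_image = read_physical_inputs()

        # 2. Run the program from the first rung to the last against the frozen image.
        output_image = solve_ladder_logic(input_image, output_image)

        # 3. Update the physical outputs from the output image.
        write_physical_outputs(output_image)

        scan_time = time.monotonic() - started   # a few milliseconds on typical hardware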
As PLCs became more advanced, methods were developed to change the sequence of ladder
execution, and subroutines were implemented.[6] This simplified programming could be used to
save scan time for high-speed processes; for example, parts of the program used only for setting
up the machine could be segregated from those parts required to operate at higher speed.
Special-purpose I/O modules may be used where the scan time of the PLC is too long to allow
predictable performance. Precision timing modules, or counter modules for use with shaft
encoders, are used where the scan time would be too long to reliably count pulses or detect the
sense of rotation of an encoder. The relatively slow PLC can still interpret the counted values to
control a machine, but the accumulation of pulses is done by a dedicated module that is
unaffected by the speed of the program execution.
System scale
A small PLC will have a fixed number of connections built in for inputs and outputs. Typically,
expansions are available if the base model has insufficient I/O.
Modular PLCs have a chassis (also called a rack) into which are placed modules with different
functions. The processor and selection of I/O modules are customized for the particular
application. Several racks can be administered by a single processor, and may have thousands of
inputs and outputs. Either a special high speed serial I/O link or comparable communication
method is used so that racks can be distributed away from the processor, reducing the wiring
costs for large plants. Options are also available to mount I/O points directly to the machine and
utilize quick disconnecting cables to sensors and valves, saving time for wiring and replacing
components.
User interface
See also: User interface and List of human-computer interaction topics

PLCs may need to interact with people for the purpose of configuration, alarm reporting, or
everyday control. A human-machine interface (HMI) is employed for this purpose. HMIs are also
referred to as man-machine interfaces (MMIs) and graphical user interfaces (GUIs). A simple
system may use buttons and lights to interact with the user. Text displays are available as well as
graphical touch screens. More complex systems use programming and monitoring software
installed on a computer, with the PLC connected via a communication interface.
Communications
PLCs have built-in communications ports, usually 9-pin RS-232, RS-422, RS-485, or Ethernet.
Various protocols are usually included; many of these protocols are vendor specific.
Most modern PLCs can communicate over a network to some other system, such as a computer
running a SCADA (Supervisory Control And Data Acquisition) system or web browser.
PLCs used in larger I/O systems may have peer-to-peer (P2P) communication between
processors. This allows separate parts of a complex process to have individual control while
allowing the subsystems to co-ordinate over the communication link. These communication links
are also often used for HMI devices such as keypads or PC-type workstations.
Formerly, some manufacturers offered dedicated communication modules as an add-on function
where the processor had no network connection built-in.
Programming
PLC programs are typically written in a special application on a personal computer, then
downloaded by a direct-connection cable or over a network to the PLC. The program is stored in
the PLC either in battery-backed-up RAM or some other non-volatile flash memory. Often, a
single PLC can be programmed to replace thousands of relays.[7]
Under the IEC 61131-3 standard, PLCs can be programmed using standards-based programming
languages. A graphical programming notation called Sequential Function Charts is available on
certain programmable controllers. Initially most PLCs utilized Ladder Logic Diagram
Programming, a model which emulated electromechanical control panel devices (such as the
contact and coils of relays) which PLCs replaced. This model remains common today.
IEC 61131-3 currently defines five programming languages for programmable control systems:
function block diagram (FBD), ladder diagram (LD), structured text (ST; similar to the Pascal
programming language), instruction list (IL; similar to assembly language), and sequential
function chart (SFC).[8] These techniques emphasize logical organization of operations.[7]
While the fundamental concepts of PLC programming are common to all manufacturers,
differences in I/O addressing, memory organization, and instruction sets mean that PLC
programs are never perfectly interchangeable between different makers. Even within the same
product line of a single manufacturer, different models may not be directly compatible.
Security
Prior to the discovery of the Stuxnet computer worm in June 2010, security of PLCs received
little attention. PLCs generally contain a real-time operating system such as OS-9 or VxWorks,

and exploits for these systems exist much as they do for desktop computer operating systems
such as Microsoft Windows. PLCs can also be attacked by gaining control of a computer they
communicate with.[9]
Simulation

PLCLogix PLC Simulation Software


In order to properly understand the operation of a PLC, it is necessary to spend considerable
time programming, testing, and debugging PLC programs. PLC systems are inherently expensive,
and down-time is often very costly. In addition, if a PLC is programmed incorrectly it can result
in lost productivity and dangerous conditions. PLC simulation software such as PLCLogix can
save time in the design of automated control applications and can also increase the level of safety
associated with equipment since various "what if" scenarios can be tried and tested before the
system is activated.[10]
Redundancy
Some special processes need to work permanently with minimal unwanted downtime. Therefore, it is
necessary to design a system which is fault-tolerant and capable of handling the process with
faulty modules. In such cases, redundant CPU or I/O modules with the same functionality can be
added to the hardware configuration, increasing system availability and preventing total or
partial process shutdown due to hardware component failure.
PLC compared with other control systems

Allen-Bradley PLC installed in a control panel


PLCs are well adapted to a range of automation tasks. These are typically industrial processes in
manufacturing where the cost of developing and maintaining the automation system is high
relative to the total cost of the automation, and where changes to the system would be expected
during its operational life. PLCs contain input and output devices compatible with industrial pilot
devices and controls; little electrical design is required, and the design problem centers on
expressing the desired sequence of operations. PLC applications are typically highly customized
systems, so the cost of a packaged PLC is low compared to the cost of a specific custom-built
controller design. On the other hand, in the case of mass-produced goods, customized control
systems are economical. This is due to the lower cost of the components, which can be optimally
chosen instead of a "generic" solution, and where the non-recurring engineering charges are
spread over thousands or millions of units.
For high volume or very simple fixed automation tasks, different techniques are used. For
example, a consumer dishwasher would be controlled by an electromechanical cam timer costing
only a few dollars in production quantities.
A microcontroller-based design would be appropriate where hundreds or thousands of units will
be produced and so the development cost (design of power supplies, input/output hardware, and

necessary testing and certification) can be spread over many sales, and where the end-user would
not need to alter the control. Automotive applications are an example; millions of units are built
each year, and very few end-users alter the programming of these controllers. However, some
specialty vehicles such as transit buses economically use PLCs instead of custom-designed
controls, because the volumes are low and the development cost would be uneconomical.[11]
Very complex process control, such as used in the chemical industry, may require algorithms and
performance beyond the capability of even high-performance PLCs. Very high-speed or precision
controls may also require customized solutions; for example, aircraft flight controls. Single-board
computers using semi-customized or fully proprietary hardware may be chosen for very
demanding control applications where the high development and maintenance cost can be
supported. "Soft PLCs" running on desktop-type computers can interface with industrial I/O
hardware while executing programs within a version of commercial operating systems adapted
for process control needs.[11]
Programmable controllers are widely used in motion control, positioning control, and torque
control. Some manufacturers produce motion control units to be integrated with a PLC so that
G-code (involving a CNC machine) can be used to instruct machine movements.
PLCs may include logic for a single-variable feedback analog control loop, i.e. a proportional-
integral-derivative (PID) controller. A PID loop could be used to control the temperature of a
manufacturing process, for example. Historically PLCs were usually configured with only a few
analog control loops; where processes required hundreds or thousands of loops, a distributed
control system (DCS) would instead be used. As PLCs have become more powerful, the
boundary between DCS and PLC applications has become less distinct.
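As a rough sketch of such a PID loop, the discrete form evaluated once per scan might look as follows; the gains, setpoint, and update() signature are illustrative assumptions, not any vendor's PID instruction.

class PID:
    """Minimal discrete PID controller (illustrative sketch only)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.previous_error = 0.0

    def update(self, setpoint, process_value):
        """Evaluate one PID step; typically called once per scan."""
        error = setpoint - process_value
        self.integral += error * self.dt
        derivative = (error - self.previous_error) / self.dt
        self.previous_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: a temperature loop evaluated every 100 ms (all values are made up).
loop = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
heater_output = loop.update(setpoint=80.0, process_value=72.5)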
PLCs have similar functionality as remote terminal units (RTU). An RTU, however, usually does
not support control algorithms or control loops. As hardware rapidly becomes more powerful and
cheaper, RTUs, PLCs, and DCSs are increasingly beginning to overlap in responsibilities, and
many vendors sell RTUs with PLC-like features, and vice versa. The industry has standardized
on the IEC 61131-3 functional block language for creating programs to run on RTUs and PLCs,
although nearly all vendors also offer proprietary alternatives and associated development
environments.
In recent years "safety" PLCs have started to become popular, either as standalone models or as
functionality and safety-rated hardware added to existing controller architectures (Allen Bradley
Guardlogix, Siemens F-series etc.). These differ from conventional PLC types as being suitable
for use in safety-critical applications for which PLCs have traditionally been supplemented with
hard-wired safety relays. For example, a safety PLC might be used to control access to a robot
cell with trapped-key access, or perhaps to manage the shutdown response to an emergency stop
on a conveyor production line. Such PLCs typically have a restricted regular instruction set
augmented with safety-specific instructions designed to interface with emergency stops, light
screens, and so forth. The flexibility that such systems offer has resulted in rapid growth of
demand for these controllers.
Discrete and analog signals
Discrete signals behave as binary switches, yielding simply an On or Off signal (1 or 0, True or
False, respectively). Push buttons, limit switches, and photoelectric sensors are examples of

devices providing a discrete signal. Discrete signals are sent using either voltage or current,
where a specific range is designated as On and another as Off. For example, a PLC might use 24
V DC I/O, with values above 22 V DC representing On, values below 2 V DC representing Off,
and intermediate values undefined. Initially, PLCs had only discrete I/O.
Analog signals are like volume controls, with a range of values between zero and full-scale.
These are typically interpreted as integer values (counts) by the PLC, with various ranges of
accuracy depending on the device and the number of bits available to store the data. As PLCs
typically use 16-bit signed binary processors, the integer values are limited between -32,768 and
+32,767. Pressure, temperature, flow, and weight are often represented by analog signals. Analog
signals can use voltage or current with a magnitude proportional to the value of the process
signal. For example, an analog 0 to 10 V or 4-20 mA input would be converted into an integer
value of 0 to 32767.
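The conversion from raw counts to engineering units implied by that example is a simple linear scaling. The sketch below assumes the 0-32767 count range given above; the function name and the 0-100 engineering span are illustrative.

def counts_to_engineering(raw_counts, raw_min=0, raw_max=32767,
                          eng_min=0.0, eng_max=100.0):
    """Linearly scale a raw analog input count into engineering units.

    Defaults assume a 4-20 mA input reported as 0..32767 counts and an
    arbitrary 0..100 engineering span (e.g. percent or degrees C).
    """
    span = (eng_max - eng_min) / (raw_max - raw_min)
    return eng_min + (raw_counts - raw_min) * span

# A mid-scale signal (about 12 mA) arrives as roughly 16384 counts -> about 50.0 units.
print(counts_to_engineering(16384))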
Current inputs are less sensitive to electrical noise (e.g. from welders or electric motor starts)
than voltage inputs.
Example
As an example, say a facility needs to store water in a tank. The water is drawn from the tank by
another system, as needed, and our example system must manage the water level in the tank by
controlling the valve that refills the tank. Shown is a "ladder diagram" which shows the control
system. A ladder diagram is a method of drawing control circuits which pre-dates PLCs. The
ladder diagram resembles the schematic diagram of a system built with electromechanical relays.
Shown are:

Two inputs (from the low and high level switches) represented by contacts of the float
switches

An output to the fill valve, labelled as the fill valve which it controls

An "internal" contact, representing the output signal to the fill valve which is created in
the program.

A logical control scheme created by the interconnection of these items in software

In the ladder diagram, the contact symbols represent the state of bits in processor memory, which
corresponds to the state of physical inputs to the system. If a discrete input is energized, the
memory bit is a 1, and a "normally open" contact controlled by that bit will pass a logic "true"
signal on to the next element of the ladder. Therefore, the contacts in the PLC program that
"read" or look at the physical switch contacts in this case must be "opposite" or open in order to
return a TRUE for the closed physical switches. Internal status bits, corresponding to the state of
discrete outputs, are also available to the program.
In the example, the physical state of the float switch contacts must be considered when choosing
"normally open" or "normally closed" symbols in the ladder diagram. The PLC has two discrete
inputs from float switches (Low Level and High Level). Both float switches (normally closed)
open their contacts when the water level in the tank is above the physical location of the switch.

When the water level is below both switches, the float switch physical contacts are both closed,
and a true (logic 1) value is passed to the Fill Valve output. Water begins to fill the tank. The
internal "Fill Valve" contact latches the circuit so that even when the "Low Level" contact opens
(as the water passes the lower switch), the fill valve remains on. Since the High Level is also
normally closed, water continues to flow as the water level remains between the two switch
levels. Once the water level rises enough so that the "High Level" switch is off (opened), the PLC
will shut the inlet to stop the water from overflowing; this is an example of seal-in (latching)
logic. The output is sealed in until a high level condition breaks the circuit. After that the fill
valve remains off until the level drops so low that the Low Level switch is activated, and the
process repeats again.

|  (N.C. physical   (N.C. physical
|   Switch)          Switch)
|   Low Level        High Level                 Fill Valve
|------[ ]------+------[ ]----------------------(OUT)---------|
|               |
|   Fill Valve  |
|------[ ]------+
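For readers less used to ladder notation, the seal-in rung above can be translated into conventional code. The sketch below is only a translation of this example, not code for any particular controller: both float-switch inputs are normally closed contacts (True while the water is below that switch), and the fill-valve output bit seals itself in.

def solve_rung(low_level_nc, high_level_nc, fill_valve):
    """One scan of the tank-fill rung shown above.

    low_level_nc / high_level_nc: True while the normally closed float switch
    is still closed, i.e. the water is below that switch.
    fill_valve: state of the output bit from the previous scan (the seal-in branch).
    Returns the new state of the fill-valve output.
    """
    return (low_level_nc or fill_valve) and high_level_nc

# Tank empty: both N.C. switches closed -> the valve turns on and seals in.
valve = solve_rung(low_level_nc=True, high_level_nc=True, fill_valve=False)    # True
# Water above the low switch but below the high switch -> the seal-in keeps it on.
valve = solve_rung(low_level_nc=False, high_level_nc=True, fill_valve=valve)   # True
# Water reaches the high switch -> its contact opens and the valve shuts off.
valve = solve_rung(low_level_nc=False, high_level_nc=False, fill_valve=valve)  # False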

A complete program may contain thousands of rungs, evaluated in sequence. Typically the PLC
processor will alternately scan all its inputs and update outputs, then evaluate the ladder logic;
input changes during a program scan will not be effective until the next I/O update. A complete
program scan may take only a few milliseconds, much faster than changes in the controlled
process.
Programmable controllers vary in their capabilities for a "rung" of a ladder diagram. Some only
allow a single output bit. There are typically limits to the number of series contacts in line, and
the number of branches that can be used. Each element of the rung is evaluated sequentially. If
elements change their state during evaluation of a rung, hard-to-diagnose faults can be generated,
although sometimes (as above) the technique is useful. Some implementations forced evaluation
from left-to-right as displayed and did not allow reverse flow of a logic signal (in multi-branched
rungs) to affect the output.
See also

Industrial control systems

Industrial safety systems

PLC Technician

References
1. E. A. Parr, Industrial Control Handbook, Industrial Press Inc., 1999, ISBN 0-8311-3085-7.
2. M. A. Laughton, D. J. Warne (ed), Electrical Engineer's Reference Book, 16th edition, Newnes, 2003, Chapter 16: Programmable Controller.
3. "The father of invention: Dick Morley looks back on the 40th anniversary of the PLC". Manufacturing Automation, 12 September 2008.
4. Harms, Toni M. & Kinner, Russell H., P.E., Enhancing PLC Performance with Vision Systems. 18th Annual ESD/HMI International Programmable Controllers Conference Proceedings, 1989, pp. 387-399.
5. Maher, Michael J., Real-Time Control and Communications. 18th Annual ESD/SMI International Programmable Controllers Conference Proceedings, 1989, pp. 431-436.
6. Kinner, Russell H., P.E., Designing Programmable Controller Application Programs Using More than One Designer. 14th Annual International Programmable Controllers Conference Proceedings, 1985, pp. 97-110.
7. W. Bolton, Programmable Logic Controllers, Fifth Edition, Newnes, 2009, ISBN 978-1-85617-751-1, Chapter 1.
8. Keller, William L. Jr., Grafcet, A Functional Chart for Sequential Processes. 14th Annual International Programmable Controllers Conference Proceedings, 1984, pp. 71-96.
9. [1]
10. PLC simulation reference.
11. Gregory K. McMillan, Douglas M. Considine (ed), Process/Industrial Instruments and Controls Handbook, Fifth Edition, McGraw-Hill, 1999, ISBN 0-07-012582-1, Section 3: Controllers.
Further reading

Daniel Kandray, Programmable Automation Technologies, Industrial Press, 2010, ISBN 978-0-8311-3346-7, Chapter 8: Introduction to Programmable Logic Controllers.


Remote terminal unit


A remote terminal unit (RTU) is a microprocessor-controlled electronic device that interfaces
objects in the physical world to a distributed control system or SCADA (supervisory control and
data acquisition) system by transmitting telemetry data to a master system, and by using
messages from the master supervisory system to control connected objects.[1] Another term that
may be used for RTU is remote telecontrol unit.

Contents

1 Architecture
  1.1 Power supply
  1.2 Digital or status inputs
  1.3 Analog inputs
  1.4 Digital (control) outputs
  1.5 Analog outputs
  1.6 Software and logic control
  1.7 Communications
    1.7.1 IED communications
    1.7.2 Master communications
2 Comparison with other control systems
3 Applications
4 See also
5 References

Architecture
An RTU monitors field digital and analog parameters and transmits data to a central monitoring
station. It contains setup software to connect data input streams to data output streams, define
communication protocols, and troubleshoot installation problems.
An RTU may consist of one complex circuit card containing the various sections needed to do a
custom-fitted function, or may consist of many circuit cards, including a CPU or processing card
with communications interface(s) and one or more of the following: (AI) analog input, (DI) digital
input, (DO/CO) digital or control (relay) output, or (AO) analog output card(s).
Power supply
A form of power supply will be included for operation from the AC mains for various CPU,
status wetting voltages and other interface cards. This may consist of AC to DC converters where
operated from a station battery system.
RTUs may include a battery and charger circuitry to continue operation in the event of AC power
failure, for critical applications where a station battery is not available.
Digital or status inputs

Most RTUs incorporate an input section or input status cards to acquire two-state real-world
information. This is usually accomplished by using an isolated voltage or current source to sense
the position of a remote contact (open or closed) at the RTU site. This contact position may
represent many different devices, including electrical breakers, liquid valve positions, alarm
conditions, and mechanical positions of devices.
Analog inputs
An RTU can monitor analog inputs of different types, including 0-1 mA, 4-20 mA current loop,
0-10 V, 2.5 V, 5.0 V, etc. Many RTU inputs buffer larger quantities via transducers to convert
and isolate real-world quantities from the sensitive RTU input levels. An RTU can also receive
analog data via a communication system from a master or an IED (intelligent electronic device)
sending data values to it.
The RTU or host system translates and scales this raw data into the appropriate units, such as
gallons of water left, degrees of temperature, or megawatts, before presenting the data to the
user via the HMI.
Digital (control) outputs
RTUs may drive high current capacity relays to a digital output (or "DO") board to switch power
on and off to devices in the field. The DO board switches voltage to the coil in the relay, which
closes the high current contacts, which completes the power circuit to the device.
RTU outputs may also consist of driving a sensitive logic input on an electronic PLC, or other
electronic device using a sensitive 5 V input.
Analog outputs
While not as commonly used, analog outputs may be included to control devices that require
varying quantities, such as graphic recording instruments (strip charts). Summed or massaged
data quantities may be generated in a master SCADA system and output for display locally or
remotely, wherever needed.
Software and logic control
Modern RTUs are usually capable of executing simple programs autonomously without
involving the host computers of the DCS or SCADA system to simplify deployment and to
provide redundancy for safety reasons. An RTU in a modern water management system will
typically have code to modify its behavior when physical override switches on the RTU are
toggled during maintenance by maintenance personnel. This is done for safety reasons; a
miscommunication between the system operators and the maintenance personnel could cause
system operators to mistakenly enable power to a water pump when it is being replaced, for
example.
Maintenance personnel should have any equipment they are working on disconnected from
power and locked out to prevent damage and/or injury.
Communications

An RTU may be interfaced to multiple master stations and IEDs (intelligent electronic devices)
with different communication media (usually serial (RS-232, RS-485, RS-422) or Ethernet). An
RTU may support standard protocols (Modbus, IEC 60870-5-101/103/104, DNP3, IEC 60870-6/ICCP,
IEC 61850, etc.) to interface with any third-party software.
Data transfer may be initiated from either end using various techniques to ensure synchronization
with minimal data traffic. The master may poll its subordinate unit (master to RTU, or the RTU
may poll an IED) for changes of data on a periodic basis. Analog value changes will usually only
be reported on changes outside a set limit from the last transmitted value. Digital (status)
values observe a similar technique and only transmit groups (bytes) when one included point (bit)
changes. Another method is for a subordinate unit to initiate an update of data upon a
predetermined change in analog or digital data. Complete data transmission must still be used
periodically, with either method, to ensure full synchronization and eliminate stale data. Most
communication protocols support both methods, programmable by the installer.
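The report-by-exception scheme described above, where an analog value is only transmitted when it moves outside a set limit (deadband) around the last reported value, can be sketched as follows. The class name and the deadband figure are illustrative, not part of any particular RTU protocol.

class AnalogPoint:
    """Report-by-exception for one analog value (illustrative sketch only)."""

    def __init__(self, deadband):
        self.deadband = deadband        # set limit around the last transmitted value
        self.last_reported = None

    def poll(self, new_value):
        """Return the value to transmit, or None if it stayed inside the deadband."""
        if (self.last_reported is None
                or abs(new_value - self.last_reported) > self.deadband):
            self.last_reported = new_value
            return new_value
        return None

tank_level = AnalogPoint(deadband=0.5)   # engineering units
print(tank_level.poll(10.0))   # 10.0 -> transmitted (first sample)
print(tank_level.poll(10.3))   # None -> inside the deadband, not transmitted
print(tank_level.poll(10.8))   # 10.8 -> change exceeds the deadband, transmitted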
Multiple RTUs or multiple IEDs may share a communications line, in a multi-drop scheme, as
units are addressed uniquely and only respond to their own polls and commands.
IED communications
IED communications transfer data between the RTU and an IED. This can eliminate the need for
many hardware status inputs, analog inputs, and relay outputs in the RTU. Communications are
accomplished by copper or fibre optics lines. Multiple units may share communication lines.
Master communications
Master communications are usually to a larger control system in a control room or a data
collection system incorporated into a larger system. Data may be moved using a copper, fibre
optic or radio frequency communication system. Multiple units may share communication lines.
Comparison with other control systems
RTUs differ from programmable logic controllers (PLCs) in that RTUs are more suitable for wide
geographical telemetry, often using wireless communications, while PLCs are more suitable for
local area control (plants, production lines, etc.) where the system utilizes physical media for
control. The IEC 61131 programming tool is more popular for use with PLCs, while RTUs often
use proprietary programming tools.
RTUs, PLCs and DCS are increasingly beginning to overlap in responsibilities, and many
vendors sell RTUs with PLC-like features and vice versa. The industry has standardized on the
IEC 61131-3 functional block language for creating programs to run on RTUs and PLCs,
although nearly all vendors also offer proprietary alternatives and associated development
environments.
In addition, some vendors now supply RTUs with comprehensive functionality pre-defined,
sometimes with PLC extensions and/or interfaces for configuration.
Some suppliers of RTUs have created simple graphical user interfaces (GUIs) to enable customers
to configure their RTUs easily. Dataloggers are used in some similar applications.

A programmable automation controller (PAC) is a compact controller that combines the features
and capabilities of a PC-based control system with that of a typical PLC. PACs are deployed in
SCADA systems to provide RTU and PLC functions. In many electrical substation SCADA
applications, "distributed RTUs" use information processors or station computers to communicate
with digital protective relays, PACs, and other devices for I/O, and communicate with the
SCADA master in lieu of a traditional RTU.
Applications

Remote monitoring of functions and instrumentation for:

Oil and gas (offshore platforms, onshore oil wells)

Networks of pump stations (wastewater collection, or for water supply)

Environmental monitoring systems (pollution, air quality, emissions monitoring)

Mine sites

Air traffic equipment such as navigation aids (DVOR, DME, ILS and GP)

Remote monitoring and control of functions and instrumentation for:

Hydro-graphic (water supply, reservoirs, sewage systems)

Electrical power transmission networks and associated equipment

Natural gas networks and associated equipment

Outdoor warning sirens

See also

Telemetric

Digital protective relay

IED

SCADA

References
1. Gordon R. Clarke, Deon Reynders, Edwin Wright, Practical Modern SCADA Protocols: DNP3, 60870.5 and Related Systems, Newnes, 2004, ISBN 0-7506-5799-5, pp. 19-21.

Modbus

Modbus is a serial communications protocol originally published by Modicon (now Schneider


Electric) in 1979 for use with its programmable logic controllers (PLCs). Simple and robust, it
has since become a de facto standard communication protocol, and it is now a commonly
available means of connecting industrial electronic devices.[1] The main reasons for the use of
Modbus in the industrial environment are:

developed with industrial applications in mind

openly published and royalty-free

easy to deploy and maintain

moves raw bits or words without placing many restrictions on vendors

Modbus enables communication among many devices connected to the same network, for
example a system that measures temperature and humidity and communicates the results to a
computer. Modbus is often used to connect a supervisory computer with a remote terminal unit
(RTU) in supervisory control and data acquisition (SCADA) systems. Many of the data types are
named from their use in driving relays: a single-bit physical output is called a coil, and a single-bit
physical input is called a discrete input or a contact.
The development and update of Modbus protocols has been managed by the Modbus
Organization[2] since April 2004, when Schneider Electric transferred rights to that organization.
[3] The Modbus Organization is an association of users and suppliers of Modbus compliant
devices that seeks to drive the adoption and evolution of Modbus.[4]
Contents

1 Protocol versions
2 Communication and devices
3 Frame format
4 Supported function codes
5 Format of data of requests and responses for main function codes
  5.1 Function code 1 (read coils) and function code 2 (read discrete inputs)
  5.2 Function code 5 (force/write single coil)
  5.3 Function code 15 (force/write multiple coils)
  5.4 Function code 4 (read input registers) and function code 3 (read holding registers)
  5.5 Function code 6 (preset/write single holding register)
  5.6 Function code 16 (preset/write multiple holding registers)
  5.7 Exception responses
  5.8 Main Modbus exception codes
6 Coil, discrete input, input register, holding register numbers and addresses
  6.1 JBUS mapping
7 Implementations
8 Limitations
9 Trade group
10 Modbus Plus
11 References
12 External links

Protocol versions
Versions of the Modbus protocol exist for serial port and for Ethernet and other protocols that
support the Internet protocol suite. There are many variants of Modbus protocols:

Modbus RTU - Used in serial communication; makes use of a compact, binary representation of the
data for protocol communication. The RTU format follows the commands/data with a cyclic
redundancy check checksum as an error check mechanism to ensure the reliability of data. Modbus
RTU is the most common implementation available for Modbus. A Modbus RTU message must be
transmitted continuously without inter-character hesitations. Modbus messages are framed
(separated) by idle (silent) periods.

Modbus ASCII - Used in serial communication; makes use of ASCII characters for protocol
communication. The ASCII format uses a longitudinal redundancy check checksum. Modbus ASCII
messages are framed by a leading colon (':') and trailing newline (CR/LF).

Modbus TCP/IP or Modbus TCP - A Modbus variant used for communications over TCP/IP networks,
connecting over port 502.[5] It does not require a checksum calculation, as lower layers already
provide checksum protection.

Modbus over TCP/IP or Modbus over TCP or Modbus RTU/IP - A Modbus variant that differs from
Modbus TCP in that a checksum is included in the payload, as with Modbus RTU.[6]

Modbus over UDP - Some have experimented with using Modbus over UDP on IP networks, which
removes the overheads required for TCP.[7]

Modbus Plus (Modbus+, MB+ or MBP) - Modbus over Fieldbus also exists, but remains proprietary to
Schneider Electric. It requires a dedicated co-processor to handle its fast HDLC-like token
rotation. It uses twisted pair at 1 Mbit/s and includes transformer isolation at each node, which
makes it transition/edge-triggered instead of voltage/level-triggered. Special interfaces are
required to connect Modbus Plus to a computer, typically a card made for the ISA (SA85), PCI or
PCMCIA bus.

Modbus PEMEX - An extension of standard Modbus with support for historical and flow data. It was
designed for process control and never gained widespread adoption.[8]

Enron Modbus - An extension of standard Modbus with support for 32-bit integer and floating
point variables, and historical and flow data. Data types are mapped using standard addresses.[9]
The historical data is used to meet an American Petroleum Institute (API) industry standard for
how data should be stored.[10]

Data model and function calls are identical for the first four variants of protocols; only the
encapsulation is different. However, the variants are not interoperable, as the frame formats are
different.
Communication and devices
Each device intended to communicate using Modbus is given a unique address. In serial and
MB+ networks, only the node assigned as the Master may initiate a command. On Ethernet, any
device can send out a Modbus command, although usually only one master device does so. A
Modbus command contains the Modbus address of the device it is intended for (1 to 247). Only
the intended device will act on the command, even though other devices might receive it (an
exception is specific broadcastable commands sent to node 0 which are acted on but not
acknowledged). All Modbus commands contain checksum information, to allow the recipient to
detect transmission errors. The basic Modbus commands can instruct an RTU to change the value
in one of its registers, control or read an I/O port, and command the device to send back one or
more values contained in its registers.
There are many modems and gateways that support Modbus, as it is a very simple protocol and
often copied. Some of them were specifically designed for this protocol. Different
implementations use wireline or wireless communication, such as in the ISM band, and even short
message service (SMS) or General Packet Radio Service (GPRS). One of the more common
designs of wireless networks makes use of Mesh networking. Typical problems that designers
have to overcome include high latency and timing issues.
Frame format
A Modbus frame is composed of an Application Data Unit (ADU) which encloses a Protocol
Data Unit (PDU):[11]

ADU = Address + PDU + Error check

PDU = Function code + Data

All Modbus variants choose one of the following frame formats.[1]

Modbus RTU frame format (primarily used on 8-bit asynchronous lines like EIA-485)

Name      Length (bits)   Function
Start     28              At least 3 1/2 character times of silence (mark condition)
Address   8               Station address
Function  8               Indicates the function code; e.g., read coils/holding registers
Data      n x 8           Data + length will be filled depending on the message type
CRC       16              Cyclic redundancy check
End       28              At least 3 1/2 character times of silence between frames

Note about the CRC:

Polynomial: x^16 + x^15 + x^2 + 1 (CRC-16-ANSI, also known as CRC-16-IBM; the normal hexadecimal
algebraic polynomial is 8005, reversed A001)

Initial value: 65,535 (FFFF in hexadecimal)

Example of frame in hexadecimal: 01 04 02 FF FF B8 80 (the CRC-16-ANSI calculation from 01 to FF
gives 80B8, which is transmitted least significant byte first)
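The CRC described in this note (reversed polynomial A001, initial value FFFF, least significant byte transmitted first) can be computed with a short routine. The sketch below is written from the parameters given here rather than taken from any Modbus library, and it reproduces the 80B8 value of the example frame.

def modbus_crc16(frame: bytes) -> int:
    """CRC-16 as used by Modbus RTU: reversed polynomial 0xA001, initial value 0xFFFF."""
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc

# Example frame from above: the CRC over 01 04 02 FF FF is 0x80B8,
# appended to the frame least significant byte first as B8 80.
crc = modbus_crc16(bytes([0x01, 0x04, 0x02, 0xFF, 0xFF]))
print(hex(crc))                          # 0x80b8
print(crc.to_bytes(2, "little").hex())   # b880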
Modbus ASCII frame format (primarily used on 7- or 8-bit asynchronous serial lines)

Name      Length (bytes)   Function
Start     1                Starts with a colon ':' (ASCII hex value 3A)
Address   2                Station address
Function  2                Indicates the function codes like read coils/inputs
Data      n x 2            Data + length will be filled depending on the message type
LRC       2                Checksum (longitudinal redundancy check)
End       2                Carriage return - line feed (CR/LF) pair (ASCII values 0D and 0A)

Address, function, data, and LRC are all capital hexadecimal readable pairs of characters
representing 8-bit values (0-255). For example, 122 (7 x 16 + 10) will be represented as 7A.
The LRC is calculated as the sum of the 8-bit values, negated (two's complement) and encoded as
an 8-bit value. Example: if address, function, and data encode as 247, 3, 19, 137, 0, and 10,
their sum is 416. The two's complement (-416) trimmed to 8 bits is 96 (i.e. 256 x 2 - 416), which
is represented as 60 in hexadecimal. Hence the following frame: F7031389000A60<CR><LF>
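The LRC calculation in the paragraph above (sum the 8-bit values, take the two's complement, keep 8 bits) is equally short in code; this sketch reproduces the 60 value of the worked example.

def modbus_lrc(data: bytes) -> int:
    """Longitudinal redundancy check used by Modbus ASCII: two's complement of the byte sum."""
    return (-sum(data)) & 0xFF

# Worked example from the text: address 247, function 3, data 19, 137, 0, 10 -> LRC 0x60.
print(hex(modbus_lrc(bytes([247, 3, 19, 137, 0, 10]))))   # 0x60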
Modbus TCP frame format (primarily used on Ethernet networks)

Name                     Length (bytes)   Function
Transaction identifier   2                For synchronization between messages of server and client
Protocol identifier      2                Zero for Modbus/TCP
Length field             2                Number of remaining bytes in this frame
Unit identifier          1                Slave address (255 if not used)
Function code            1                Function codes as in other variants
Data bytes               n                Data as response or commands

Unit identifier is used with Modbus/TCP devices that are composites of several Modbus devices,
e.g. on Modbus/TCP to Modbus RTU gateways. In such a case, the unit identifier tells the Slave
Address of the device behind the gateway. Natively Modbus/TCP-capable devices usually ignore
the Unit Identifier.
The byte order for values in Modbus data frames is Big-Endian (MSB, Most Significant Byte of
a value received first).
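Putting the table and the big-endian byte order together, a request ADU can be assembled with a few packed fields. The sketch below builds a read-holding-registers request (function code 3, described later); the transaction identifier, unit identifier, address, and count are arbitrary example values, and a real client would still have to send the bytes over TCP port 502 and parse the response.

import struct

def build_tcp_read_holding_registers(transaction_id, unit_id, start_address, count):
    """Assemble a Modbus TCP ADU for function code 3 (read holding registers).

    All multi-byte fields are big-endian, as described above.
    """
    pdu = struct.pack(">BHH", 3, start_address, count)   # function code + data
    header = struct.pack(">HHHB",
                         transaction_id,   # transaction identifier
                         0,                # protocol identifier: zero for Modbus/TCP
                         len(pdu) + 1,     # length: unit identifier + PDU
                         unit_id)          # unit identifier
    return header + pdu

# Read 2 holding registers starting at address 0 (i.e. register number 40001).
print(build_tcp_read_holding_registers(1, 1, 0, 2).hex())   # 000100000006010300000002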
Supported function codes
The various reading, writing and other operations are categorised as follows.[12] The most
primitive reads and writes are shown in bold. A number of sources use alternative terminology,
for example Force Single Coil where the standard uses Write Single Coil.[13]
Prominent entities within a Modbus slave are:

Coils: readable and writable, 1 bit (off/on)

Discrete Inputs: readable, 1 bit (off/on)

Input Registers: readable, 16 bits (0 to 65,535), essentially measurements and statuses

Holding Registers: readable and writable, 16 bits (0 to 65,535), essentially configuration values
Modbus function codes

Function type                                        Function name                      Function code
Data Access
  Bit access
    Physical Discrete Inputs                         Read Discrete Inputs               2
    Internal Bits or Physical Coils                  Read Coils                         1
                                                     Write Single Coil                  5
                                                     Write Multiple Coils               15
  16-bit access
    Physical Input Registers                         Read Input Registers               4
    Internal Registers or Physical Output Registers  Read Multiple Holding Registers    3
                                                     Write Single Holding Register      6
                                                     Write Multiple Holding Registers   16
                                                     Read/Write Multiple Registers      23
                                                     Mask Write Register                22
                                                     Read FIFO Queue                    24
  File Record Access                                 Read File Record                   20
                                                     Write File Record                  21
Diagnostics                                          Read Exception Status              7
                                                     Diagnostic                         8
                                                     Get Com Event Counter              11
                                                     Get Com Event Log                  12
                                                     Report Slave ID                    17
                                                     Read Device Identification         43
Other                                                Encapsulated Interface Transport   43

Format of data of requests and responses for main function codes


Requests and responses follow frame formats described above. This section gives details of data
formats of most used function codes.
Function code 1 (read coils) and function code 2 (read discrete inputs)
Request:

Address of first coil/discrete input to read (16-bit)

Number of coils/discrete inputs to read (16-bit)

Normal response:

Number of bytes of coil/discrete input values to follow (8-bit)

Coil/discrete input values (8 coils/discrete inputs per byte)

Value of each coil/discrete input is binary (0 for off, 1 for on). First requested coil/discrete input
is stored as least significant bit of first byte in reply.
If number of coils/discrete inputs is not a multiple of 8, most significant bit(s) of last byte will be
stuffed with zeros.
For example, if eleven coils are requested, two bytes of values are needed. Suppose states of
those successive coils are on, off, on, off, off, on, on, on, off, on, on, then data part of the
response will be 02E506 in hexadecimal.
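The packing convention in this example (first coil in the least significant bit, last byte padded with zeros) can be checked with a few lines of code; the sketch below reproduces the 02E506 response data for the eleven coils listed above.

def pack_coils(states):
    """Pack coil states into response data: byte count followed by LSB-first value bytes."""
    byte_count = (len(states) + 7) // 8
    data = bytearray([byte_count]) + bytearray(byte_count)
    for i, on in enumerate(states):
        if on:
            data[1 + i // 8] |= 1 << (i % 8)   # first requested coil -> least significant bit
    return bytes(data)

# Eleven coils: on, off, on, off, off, on, on, on, off, on, on -> 02 E5 06
print(pack_coils([1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1]).hex())   # 02e506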
Function code 5 (force/write single coil)
Request:

Address of coil (16-bit)

Value to force/write: 0 for off and 65,280 (FF00 in hexadecimal) for on

Normal response: same as request.


Function code 15 (force/write multiple coils)
Request:

Address of first coil to force/write (16-bit)

Number of coils to force/write (16-bit)

Number of bytes of coil values to follow (8-bit)

Coil values (8 coil values per byte)

Value of each coil is binary (0 for off, 1 for on). First requested coil is stored as least significant
bit of first byte in request.
If number of coils is not a multiple of 8, most significant bit(s) of last byte should be stuffed with
zeros. See example for function codes 1 and 2.

Normal response:

Address of first coil (16-bit)

number of coils (16-bit)

Function code 4 (read input registers) and function code 3 (read holding registers)
Request:

Address of first register to read (16-bit)

Number of registers to read (16-bit)

Normal response:

Number of bytes of register values to follow (8-bit)

Register values (16 bits per register)

Because the number of bytes for register values is 8-bit wide, only 128 registers can be read at
once.
Function code 6 (preset/write single holding register)
Request:

Address of holding register to preset/write (16-bit)

New value of the holding register (16-bit)

Normal response: same as request.


Function code 16 (preset/write multiple holding registers)
Request:

Address of first holding register to preset/write (16-bit)

Number of holding registers to preset/write (16-bit)

Number of bytes of register values to follow (8-bit)

New values of holding registers (16 bits per register)

Because the number of bytes for register values is 8-bit wide, only 128 holding registers can be
preset/written at once.
Normal response:

Address of first preset/written holding register (16-bit)

number of preset/written holding registers (16-bit)

Exception responses
For a normal response, the slave repeats the function code. Should a slave want to report an
error, it will reply with the requested function code plus 128 (3 becomes 131, or 83 in
hexadecimal), and will include only one byte of data, known as the exception code.
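In code, this convention amounts to checking whether the returned function code equals the requested one plus 128 (i.e. the top bit is set); a minimal sketch, with a hypothetical check_response helper:

def check_response(request_function, response: bytes) -> bytes:
    """Return the response data, or raise if the slave sent an exception reply."""
    if response[0] == request_function + 0x80:
        # e.g. a failed function 3 request comes back as 131 (0x83) plus one exception code byte
        raise IOError(f"Modbus exception code {response[1]}")
    return response[1:]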
Main Modbus exception codes

Code   Text                                      Details
1      Illegal Function                          Function code received in the query is not recognized or allowed by slave
2      Illegal Data Address                      Data address of some or all the required entities are not allowed or do not exist in slave
3      Illegal Data Value                        Value is not accepted by slave
4      Slave Device Failure                      Unrecoverable error occurred while slave was attempting to perform requested action
5      Acknowledge                               Slave has accepted the request and is processing it, but a long duration of time is required. This response is returned to prevent a timeout error from occurring in the master. The master can next issue a Poll Program Complete message to determine if processing is completed
6      Slave Device Busy                         Slave is engaged in processing a long-duration command; master should retry later
7      Negative Acknowledge                      Slave cannot perform the programming functions; master should request diagnostic or error information from slave
8      Memory Parity Error                       Slave detected a parity error in memory; master can retry the request, but service may be required on the slave device
10     Gateway Path Unavailable                  Specialized for Modbus gateways; indicates a misconfigured gateway
11     Gateway Target Device Failed to Respond   Specialized for Modbus gateways; sent when slave fails to respond

Coil, discrete input, input register, holding register numbers and addresses[edit]
Some conventions govern how Modbus entities (coils, discrete inputs, input registers, holding
registers) are referenced.
It is important to make a distinction between entity number and entity address:

Entity numbers combine entity type and entity location within their description table

Entity address is the starting address, a 16-bit value in the data part of the Modbus frame.
As such its range goes from 0 to 65,535
In the traditional standard, numbers for those entities start with a type-identifying digit, followed by
four more digits giving the location in the range 1 - 9,999:

coils numbers start with a zero and then span from 00001 to 09999

discrete input numbers start with a one and then span from 10001 to 19999

input register numbers start with a three and then span from 30001 to 39999

holding register numbers start with a four and then span from 40001 to 49999

This translates into addresses between 0 and 9,998 in data frames.


For example, in order to read holding registers starting at number 40001, the corresponding address
in the data frame will be 0, with a function code of 3 (as seen above). For holding registers
starting at number 40100, the address will be 99, and so on.
This limits the number of addresses to 9,999 for each entity. A de facto referencing convention extends this
to a maximum of 65,536.[14]

It simply consists of adding one digit to the previous list:

coil numbers span from 000001 to 065536

discrete input numbers span from 100001 to 165536

input register numbers span from 300001 to 365536

holding register numbers span from 400001 to 465536

When using the extended referencing, all number references must be exactly six digits. This
avoids confusion between coils and other entities: to distinguish coil #40001 from holding
register #40001, the coil must be written as #040001.
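As an illustrative Python sketch of these numbering conventions (the table and function names here are for illustration, not taken from the specification), the leading digit of an entity number selects the entity type and its read function code, and the remaining digits minus one give the data-frame address:

# Map a Modbus entity number (traditional 5-digit or extended 6-digit form) to
# (entity type, read function code, data-frame address).
ENTITY_TYPES = {0: ('coil', 1), 1: ('discrete input', 2), 3: ('input register', 4), 4: ('holding register', 3)}

def entity_to_address(number, extended=False):
    digits = 6 if extended else 5              # leading type digit plus 5 or 4 location digits
    s = str(number).zfill(digits)
    entity, read_fc = ENTITY_TYPES[int(s[0])]
    return entity, read_fc, int(s[1:]) - 1

print(entity_to_address(40001))                     # ('holding register', 3, 0)
print(entity_to_address(40100))                     # ('holding register', 3, 99)
print(entity_to_address(165536, extended=True))     # ('discrete input', 2, 65535)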
JBUS mapping[edit]
Another de facto protocol closely related to Modbus[15] appeared later: JBUS, defined by the PLC
brand April Automates, the result of a collaborative effort between the French companies Renault
Automation and Merlin Gerin et Cie in 1985.[16] The differences between Modbus and JBUS at
that time (number of entities, slave stations) are now irrelevant, as the protocol almost
disappeared with the April PLC series, which AEG Schneider Automation bought in 1994 and then
made obsolete. However, the name JBUS has survived to some extent.
JBUS supports function codes 1, 2, 3, 4, 5, 6, 15, and 16 and thus all the entities described above.
However numbering is different with JBUS:[17]

Number and address coincide: entity #x has address x in the data frame

Consequently, entity number does not include the entity type. For example, holding
register #40010 in Modbus will be holding register #9, located at address 9 in JBUS

Number 0 (and thus address 0) is not supported. Slave should not implement any real data
at this number and address and it can return a null value or throw an error when requested
Implementations[edit]
Almost all implementations have variations from the official standard. Because of these differences,
equipment from different suppliers may not communicate correctly. Some of the most common
variations are:

Data types

Floating point IEEE

32-bit integer

8-bit data

Mixed data types

Bit fields in integers

Multipliers to change data to/from integer. 10, 100, 1000, 256 ...

Protocol extensions

16-bit slave addresses

32-bit data size (1 address = 32 bits of data returned)

Word swapped data

Limitations[edit]

Since Modbus was designed in the late 1970s to communicate with programmable logic
controllers, the number of data types is limited to those understood by PLCs at the time. Large
binary objects are not supported.

No standard way exists for a node to find the description of a data object, for example, to
determine if a register value represents a temperature between 30 and 175 degrees.

Since Modbus is a master/slave protocol, there is no way for a field device to "report by
exception" (except over Ethernet TCP/IP, called open-mbus)- the master node must routinely poll
each field device, and look for changes in the data. This consumes bandwidth and network time
in applications where bandwidth may be expensive, such as over a low-bit-rate radio link.

Modbus is restricted to addressing 254 devices on one data link, which limits the number
of field devices that may be connected to a master station (once again Ethernet TCP/IP being an
exception).

Modbus transmissions must be contiguous, which limits the types of remote communications
devices to those that can buffer data to avoid gaps in the transmission.

The Modbus protocol itself provides no security against unauthorized commands or interception
of data.[18]
Trade group[edit]
Modbus Organization, Inc. is a trade association for the promotion and development of Modbus
protocol.[2]
Modbus Plus[edit]
Despite the name, Modbus Plus[19] is not a variant of Modbus. It is a different protocol,
involving token passing.
It is a proprietary specification of Schneider Electric, though it is unpublished rather than
patented. It is normally implemented using a custom chipset available only to partners of
Schneider.
References[edit]
1. Drury, Bill (2009). Control Techniques Drives and Controls Handbook (2nd ed.). Institution of Engineering and Technology. p. 508. (subscription required).
2. "Modbus home page". Modbus. Modbus Organization, Inc. Retrieved 2 August 2013.
3. "Modbus FAQ". Modbus. Modbus Organization, Inc. Retrieved 1 November 2012.
4. "About Modbus Organization". Modbus. Modbus Organization, Inc. Retrieved 8 November 2012.
5. Modbus Messaging on TCP/IP Implementation Guide V1.0b, s3.1.3.
6. Remote Modbus Network Monitoring.
7. Java implementation.
8. http://www.rtaautomation.com/modbus/
9. http://www.simplymodbus.ca/Enron.htm
10. http://www.calscan.net/pdf/ModBus%20Driver%20Development%20Guide%201v13.pdf
11. "Modbus Messaging On TCP/IP Implementation Guide" (PDF). Modbus Organization. Modbus-IDA.
12. "Modbus Application Protocol V1.1b3" (PDF). Modbus. Modbus Organization, Inc. Retrieved 2 August 2013.
13. Clarke, Gordon; Reynders, Deon (2004). Practical Modern Scada Protocols: Dnp3, 60870.5 and Related Systems. Newnes. pp. 47-51. ISBN 0-7506-5799-5.
14. "Modbus 101 - Introduction to Modbus". Control Solutions, Inc.
15. "Differences between JBUS and MODBUS protocols". Schneider Electric.
16. "RENAULT AUTOMATION MERLIN GERIN ET CIE". French Corporate.
17. "900 Series JBUS and MODBUS Digital Communications Handbook". Eurotherm Control.
18. Palmer; Shenoi, Sujeet, eds. (23-25 March 2009). Critical Infrastructure Protection III. Third IFIP WG 11.10 International Conference. Hanover, New Hampshire: Springer. p. 87. ISBN 3-642-04797-1.
19. "Modbus Plus - Modbus Plus Network - Products overview - Schneider Electric United States". Schneider-electric.com. Retrieved 2014-01-03.
External links[edit]
Official

Modbus Organization with protocol specifications

Modbus Protocol; Modicon; 74 pages; 2000.

Other

Free Modbus Guide for Field Technician

Cost free Modbus RTU Device Testing Software

Free Modbus RTU source code on Protocessor website (requires signing up)

Perl module for Modbus/TCP

Android based Modbus TCP Master

Pymodbus: Full Modbus protocol implementation in Python, free software

MinimalModbus: Light RTU only Modbus implementation in Python

modbus-tk: Fast Modbus Implementation in Python

Interesting Performance comparison of the 3 above mentioned python modules

Tcl based Modbus RTU driver

Open Source C library of Modbus protocol for Linux, Mac OS X, FreeBSD, QNX and
Win32

Freeware Modbus Slave Simulator Application

Free PeakHMI RTU, TCP/IP and ACSII slave simulators

Jamod - Java library of Modbus protocol

ModSlave - Modbus TCP Slave Device written in Python, Free Software

digitalpetri Modbus - A modern, asynchronous Modbus implementation for Java

Electricity generation


Turbo generator

Electricity generation is the process of generating electric power from other sources of primary
energy. The fundamental principles of electricity generation were discovered during the 1820s
and early 1830s by the British scientist Michael Faraday. His basic method is still used today:
electricity is generated by the movement of a loop of wire, or disc of copper between the poles of
a magnet.[1] For electric utilities, it is the first process in the delivery of electricity to consumers.
The other processes, electricity transmission, distribution, and electrical power storage and
recovery using pumped-storage methods are normally carried out by the electric power industry.
Electricity is most often generated at a power station by electromechanical generators, primarily
driven by heat engines fueled by chemical combustion or nuclear fission but also by other means
such as the kinetic energy of flowing water and wind. Other energy sources include solar
photovoltaics and geothermal power and electrochemical batteries.

History[edit]

Main article: Electrification

Diagram of an electric power system, generation system in red


Central power stations became economically practical with the development of alternating
current power transmission, using power transformers to transmit power at high voltage and with
low loss. Electricity has been generated at central stations since 1882. The first power plants were
run on water power[2] or coal,[3] and today, rely mainly on coal, nuclear, natural
gas, hydroelectric, wind generators, and petroleum, with a small amount from solar energy, tidal
power, and geothermal sources. Power lines and power poles have been significantly important
in the distribution of electricity.
Methods of generating electricity[edit]

U.S. 2014 Electricity Generation By Type.[4]

Sources of electricity in France in 2006;[5] nuclear power was the main source.
There are seven fundamental methods of directly transforming other forms of energy into
electrical energy:

Static electricity, from the physical separation and transport of charge (examples:
triboelectric effect and lightning)

Electromagnetic induction, where an electrical generator, dynamo or alternator transforms
kinetic energy (energy of motion) into electricity. This is the most used form for generating
electricity and is based on Faraday's law. It can be demonstrated by simply rotating a magnet
within closed loops of a conducting material (e.g. copper wire)

Electrochemistry, the direct transformation of chemical energy into electricity, as in a
battery, fuel cell or nerve impulse

Photovoltaic effect, the transformation of light into electrical energy, as in solar cells

Thermoelectric effect, the direct conversion of temperature differences to electricity, as in
thermocouples, thermopiles, and thermionic converters.

Piezoelectric effect, from the mechanical strain of electrically anisotropic molecules or
crystals. Researchers at the US Department of Energy's Lawrence Berkeley National Laboratory
(Berkeley Lab) have developed a piezoelectric generator sufficient to operate a liquid crystal
display using thin films of M13 bacteriophage.[6]

Nuclear transformation, the creation and acceleration of charged particles (examples:
betavoltaics or alpha particle emission)

Static electricity was the first form discovered and investigated, and the electrostatic generator is
still used even in modern devices such as the Van de Graaff generator and MHD generators.
Charge carriers are separated and physically transported to a position of increased electric
potential. Almost all commercial electrical generation is done using electromagnetic induction, in
which mechanical energy forces an electrical generator to rotate. There are many different
methods of developing the mechanical energy, including heat engines, hydro, wind and tidal
power. The direct conversion of nuclear potential energy to electricity by beta decay is used only
on a small scale. In a full-size nuclear power plant, the heat of a nuclear reaction is used to run a
heat engine. This drives a generator, which converts mechanical energy into electricity by
magnetic induction. Most electric generation is driven by heat engines. The combustion of fossil
fuels supplies most of the heat to these engines, with a significant fraction from nuclear fission
and some from renewable sources. The modern steam turbine (invented by Sir Charles Parsons in
1884) currently generates about 80% of the electric power in the world using a variety of heat
sources.
Turbines[edit]

Large dams such as Three Gorges Dam in China can provide large amounts of hydroelectric
power; it has a 22.5 GW capability.
Almost all electrical power on Earth is generated with a turbine of some type. Turbines are
commonly driven by wind, water, steam or burning gas. The turbine drives an electric generator.
Power sources include:

Steam

Water is boiled by coal burned in a thermal power plant; about 40% of all electricity is
generated this way.[7]

Nuclear fission heat created in a nuclear reactor creates steam. Less than 15% of
electricity is generated this way.

Renewables. The steam is generated by:

Biomass

Solar thermal energy (the sun as the heat source): solar parabolic troughs and solar power
towers concentrate sunlight to heat a heat transfer fluid, which is then used to produce steam.

Geothermal power. Either steam under pressure emerges from the ground and drives a
turbine or hot water evaporates a low boiling liquid to create vapor to drive a turbine.

Large dams such as Hoover Dam can provide large amounts of hydroelectric power; it has 2.07
GW capability.

Gas: Natural gas is burned in a gas turbine; the turbine is driven directly by the gases produced
by combustion. Combined-cycle plants are driven by both steam and natural gas: they generate power
by burning natural gas in a gas turbine and use the residual heat to generate steam. At least 20% of
the world's electricity is generated by natural gas.

Water: Energy is captured from the movement of water, from falling water (dams), the rise
and fall of tides, or ocean thermal currents, each driving a water turbine, to produce approximately
16% of the world's electricity.

Wind: The windmill was a very early wind turbine. In a solar updraft tower, wind is
artificially produced. Before 2010 less than 2% of the world's electricity was produced from
wind.
Reciprocating engines[edit]
Small electricity generators are often powered by reciprocating engines burning diesel, biogas or
natural gas. Diesel engines are often used for back up generation, usually at low voltages.
However most large power grids also use diesel generators, originally provided as emergency
back up for a specific facility such as a hospital, to feed power into the grid during certain
circumstances. Biogas is often combusted where it is produced, such as a landfill or wastewater
treatment plant, with a reciprocating engine or a microturbine, which is a small gas turbine.

A coal-fired power plant in Laughlin, Nevada U.S.A. Owners of this plant ceased operations after
declining to invest in pollution control equipment to comply with pollution regulations.[8]
Photovoltaic panels[edit]
Unlike the solar heat concentrators mentioned above, photovoltaic panels convert sunlight
directly to electricity. Although sunlight is free and abundant, solar electricity is still usually more
expensive to produce than large-scale mechanically generated power due to the cost of the
panels. Low-efficiency silicon solar cells have been decreasing in cost and multijunction cells
with close to 30% conversion efficiency are now commercially available. Over 40% efficiency
has been demonstrated in experimental systems.[9] Until recently, photovoltaics were most
commonly used in remote sites where there is no access to a commercial power grid, or as a
supplemental electricity source for individual homes and businesses. Recent advances in
manufacturing efficiency and photovoltaic technology, combined with subsidies driven by
environmental concerns, have dramatically accelerated the deployment of solar panels. Installed
capacity is growing by 40% per year led by increases in Germany, Japan, and the United States.
Electrochemical[edit]
Electrochemical electricity generation is important in portable and mobile applications.
Currently, most electrochemical power comes from closed electrochemical cells ("batteries").[10]
Primary cells, such as the common zinc-carbon batteries, act as power sources directly, but many
types of cells are used as storage systems rather than primary generation systems.
Open electrochemical systems, known as fuel cells, have been undergoing a great deal of
research and development in the last few years. Fuel cells can be used to extract power either

from natural fuels or from synthesized fuels (mainly electrolytic hydrogen) and so can be viewed
as either generation systems or storage systems depending on their use.
Other generation methods[edit]

Wind turbines usually provide electrical generation in conjunction with other methods of
producing power.
Various other technologies have been studied and developed for power generation.
Solid-state generation (without moving parts) is of particular interest in portable applications.
This area is largely dominated by thermoelectric (TE) devices, though thermionic (TI) and
thermophotovoltaic (TPV) systems have been developed as well. Typically, TE devices are used
at lower temperatures than TI and TPV systems.
Piezoelectric devices are used for power generation from mechanical strain, particularly in power
harvesting.
Betavoltaics are another type of solid-state power generator which produces electricity from
radioactive decay. Fluid-based magnetohydrodynamic (MHD) power generation has been studied
as a method for extracting electrical power from nuclear reactors and also from more
conventional fuel combustion systems. Finally, osmotic power is another possibility at places
where salt water and fresh water merge (e.g. river deltas).
Economics of generation and production of electricity[edit]
See also: Cost of electricity by source
The selection of electricity production modes and their economic viability varies in accordance
with demand and region. The economics vary considerably around the world, resulting in
widely varying selling prices; e.g. the price in Venezuela is 3 cents per kWh while in Denmark it is
40 cents per kWh. Hydroelectric plants, nuclear power plants, thermal power plants and
renewable sources have their own pros and cons, and selection is based upon the local power
requirement and the fluctuations in demand. All power grids have varying loads on them but the
daily minimum is the base load, supplied by plants which run continuously. Nuclear, coal, oil and
gas plants can supply base load.
Thermal energy is economical in areas of high industrial density, as the high demand cannot be
met by renewable sources. The effect of localized pollution is also minimized as industries are
usually located away from residential areas. These plants can also withstand variation in load and
consumption by adding more units or temporarily decreasing the production of some units.
Nuclear power plants can produce a huge amount of power from a single unit. However, recent
disasters in Japan have raised concerns over the safety of nuclear power, and the capital cost of
nuclear plants is very high. Hydroelectric power plants are located in areas where the potential
energy from falling water can be harnessed for moving turbines and the generation of power. It is
not an economically viable source of production where the load varies too much during the
annual production cycle and the ability to store the flow of water is limited.

Due to advancements in technology, and with mass production, renewable sources other than
hydroelectricity (solar power, wind energy, tidal power, etc.) experienced decreases in cost of
production, and the energy is now in many cases cost-comparative with fossil fuels. Many
governments around the world provide subsidies to offset the higher cost of any new power
production, and to make the installation of renewable energy systems economically feasible.
However, their use is frequently limited by their intermittent nature. If natural gas prices are
below $3 per million British thermal units, generating electricity from natural gas is cheaper than
generating power by burning coal.[11]
Production[edit]

The production of electricity in 2009 was 20,053 TWh. Sources of electricity were fossil fuels
67%, renewable energy 16% (mainly hydroelectric, wind, solar and biomass), and nuclear power
13%, and other sources were 3%. The majority of fossil fuel usage for the generation of
electricity was coal and gas. Oil was 5.5%, as it is the most expensive common commodity used
to produce electrical energy. Ninety-two percent of renewable energy was hydroelectric followed
by wind at 6% and geothermal at 1.8%. Solar photovoltaic was 0.06%, and solar thermal was
0.004%. Data are from OECD 2011-12 Factbook (2009 data).[12]
Source of Electricity (World total year 2008)

                                     Coal     Oil     Natural Gas   Nuclear   Renewables   Other   Total
Average electric power (TWh/year)    8,263    1,111   4,301         2,731     3,288        568     20,261
Average electric power (GW)          942.6    126.7   490.7         311.6     375.1        64.8    2,311.4
Proportion                           41%      5%      21%           13%       16%          3%      100%

Data source: IEA/OECD.

Energy Flow of Power Plant


Total energy consumed at all power plants for the generation of electricity was 4,398,768 ktoe
(kilo ton of oil equivalent) which was 36% of the total for primary energy sources (TPES) of
2008.
Electricity output (gross) was 1,735,579 ktoe (20,185 TWh), efficiency was 39%, and the balance
of 61% was generated heat. A small part (145,141 ktoe, which was 3% of the input total) of the
heat was utilized at co-generation heat and power plants. The in-house consumption of electricity
and power transmission losses were 289,681 ktoe. The amount supplied to the final consumer
was 1,445,285 ktoe (16,430 TWh) which was 33% of the total energy consumed at power plants
and heat and power co-generation (CHP) plants.[13]
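As a rough arithmetic check of these figures (a Python sketch using the standard conversion 1 ktoe = 11.63 GWh; the input values are the ones quoted above):

# Rough consistency check of the 2008 power-plant energy-flow figures quoted above.
KTOE_TO_TWH = 11.63 / 1000             # 1 ktoe = 11.63 GWh = 0.01163 TWh

input_ktoe  = 4_398_768                # energy consumed at power plants
output_ktoe = 1_735_579                # gross electricity output
final_ktoe  = 1_445_285                # electricity supplied to the final consumer

print(round(output_ktoe * KTOE_TO_TWH))        # about 20,185 TWh of gross output
print(round(100 * output_ktoe / input_ktoe))   # about 39 % conversion efficiency
print(round(100 * final_ktoe / input_ktoe))    # about 33 % reaches the final consumer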

Historical results of production of electricity[edit]

Production by country[edit]
Main article: World energy resources and consumption
See also: Electricity consumption
The United States has long been the largest producer and consumer of electricity, with a global
share in 2005 of at least 25%, followed by China, Japan, Russia, and India. As of Jan-2010, total
electricity generation for the 2 largest generators was as follows: USA: 3992 billion kWh (3992
TWh) and China: 3715 billion kWh (3715 TWh).
List of countries with source of electricity 2008[edit]
Data source of values (electric power generated) is IEA/OECD.[14] Listed countries are top 20
by population or top 20 by GDP (PPP) and Saudi Arabia based on CIA World Factbook 2009.
[15]
Composition of Electricity by Resource (TWh per year 2008)

[The per-country table could not be recovered from this extraction. Its columns were: Coal, Oil, Gas and fossil-fuel subtotal; Nuclear; Hydro, Geothermal, Solar PV, Solar Thermal, Wind/Tide, Bio/other and renewable subtotal; other; total; and ranks. The world-total row read: Coal 8,263 (41%), Oil 1,111 (5.5%), Gas 4,301 (21%), fossil subtotal 13,675 (67%), Nuclear 2,731 (13%), Hydro 3,288 (16%), all renewables 3,584 (18%), other 271 (1.3%), total 20,261 TWh.]

Solar PV* is Photovoltaics. Bio other* = 198 TWh (Biomass) + 69 TWh (Waste) + 4 TWh (other).
Cogeneration[edit]
Main article: Cogeneration
See also: Electrification
Co-generation is the practice of using exhaust or extracted steam from a turbine for heating
purposes, such as drying paper, distilling petroleum in a refinery or for building heat. Before

central power stations were widely introduced it was common for industries, large hotels and
commercial buildings to generate their own power and use low pressure exhaust steam for
heating.[16] This practice carried on for many years after central stations became common and is
still in use in many industries.
Environmental concerns[edit]
Main article: Environmental impact of electricity generation
See also: Global warming and Coal phase out
How countries generate their electrical power shapes the associated environmental concerns.
In France only 10% of electricity is generated from fossil fuels; the US is higher at 70% and
China is at 80%.[14] The cleanliness of electricity depends on its source. Most scientists agree
that emissions of pollutants and greenhouse gases from fossil fuel-based electricity generation
account for a significant portion of world greenhouse gas emissions; in the United States,
electricity generation accounts for nearly 40% of emissions, the largest of any source.
Transportation emissions are close behind, contributing about one-third of U.S. production of
carbon dioxide.[17] In the United States, fossil fuel combustion for electric power generation is
responsible for 65% of all emissions of sulfur dioxide, the main component of acid rain.[18]
Electricity generation is the fourth highest combined source of NOx, carbon monoxide, and
particulate matter in the US.[19] In July 2011, the UK parliament tabled a motion that "levels of
(carbon) emissions from nuclear power were approximately three times lower per kilowatt hour
than those of solar, four times lower than clean coal and 36 times lower than conventional coal".
[20]
Main article: Life-cycle greenhouse-gas emissions of energy sources
Lifecycle greenhouse gas emissions by electricity source.[21]

Technology       Description                                          50th percentile (g CO2/kWhe)
Hydroelectric    reservoir
Wind             onshore                                              12
Nuclear          various generation II reactor types                  16
Biomass          various                                              18
Solar thermal    parabolic trough                                     22
Geothermal       hot dry rock                                         45
Solar PV         Polycrystalline silicon                              46
Natural gas      various combined cycle turbines without scrubbing    469
Coal             various generator types without scrubbing            1001

Water consumption[edit]
Most large-scale thermoelectric power stations consume considerable amounts of water for
cooling purposes and boiler water make-up: 1 L/kWh for once-through cooling (e.g. river cooling) and
1.7 L/kWh for cooling-tower cooling.[22] Water abstraction for cooling water accounts for about
40% of European total water abstraction, although most of this water is returned to its source,
albeit slightly warmer. Different cooling systems have different consumption vs. abstraction
characteristics. Cooling towers withdraw a small amount of water from the environment and
evaporate most of it. Once-through systems withdraw a large amount but return it to the
environment immediately, at a higher temperature.
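As a rough back-of-the-envelope example using the per-kWh figures above (the 1 GW plant is hypothetical, not a figure from the source):

# Daily cooling/boiler make-up water for a hypothetical 1 GW plant running at full output,
# using the per-kWh consumption figures quoted above.
plant_output_kw = 1_000_000            # 1 GW expressed in kW
kwh_per_day = plant_output_kw * 24     # 24,000,000 kWh generated per day

print(kwh_per_day * 1.0 / 1e6)         # once-through cooling: about 24 million litres per day
print(kwh_per_day * 1.7 / 1e6)         # cooling-tower cooling: about 40.8 million litres per day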
See also[edit]

Energy portal

Renewable energy portal

Infrastructure portal

Cost of electricity by source

Directive on Electricity Production from Renewable Energy Sources

Distributed generation

Electrification

Emissions & Generation Resource Integrated Database

Droop speed control

Electric power transmission

Electric utility

Eurelectric

Electric power distribution

Electricity retailing

Energy development

Environmental concerns with electricity generation

Eugene Green Energy Standard

Generating Availability Data System

Load profile

List of countries by electricity production

List of countries by electricity production from renewable sources

Mains electricity

Parallel generation

Power quality

Virtual power plant

Voltage drop

World energy consumption

References[edit]
1. "Page not found". Retrieved 15 May 2015.
2. In 1881, under the leadership of Jacob Schoellkopf, the first hydroelectric generating station was built on Niagara Falls.
3. "Pearl Street Station". Retrieved 15 May 2015.
4. http://www.eia.gov/electricity/monthly/epm_table_grapher.cfm?t=epmt_1_01
5. DGEMP / Observatoire de l'énergie (April 2007). "L'Electricité en France en 2006 : une analyse statistique" (PDF) (in French). Retrieved 2007-05-23.
6. "piezoelectric generator". The Times Of India. Retrieved 2012-05-20.
7. http://www.worldcoal.org/coal/uses-of-coal/coal-electricity/
8. Reuters News Service (2005-12-30). "Mohave Power Plant in Nevada to Close as Expected". Planet Ark. Retrieved 2007-07-16.
9. New World Record Achieved in Solar Cell Technology (press release, 2006-12-05), U.S. Department of Energy.
10. World's Largest Utility Battery System Installed in Alaska (press release, 2003-09-24), U.S. Department of Energy. "13,670 nickel-cadmium battery cells to generate up to 40 megawatts of power for about 7 minutes, or 27 megawatts of power for 15 minutes."
11. Smith, Karl (22 March 2013). "Will Natural Gas Stay Cheap Enough To Replace Coal And Lower Us Carbon Emissions". Forbes. Retrieved 20 June 2015.
12. OECD 2011-12 Factbook (2009 data).
13. International Energy Agency, "2008 Energy Balance for World", 2011.
14. IEA Statistics and Balances, retrieved 2011-5-8.
15. CIA World Factbook 2009, retrieved 2011-5-8.
16. Hunter & Bryant 1991.
17. Borenstein, Seth (2007-06-03). "Carbon-emissions culprit? Coal". The Seattle Times.
18. "Sulfur Dioxide". US Environmental Protection Agency.
19. "AirData". US Environmental Protection Agency.
20. "Early day motion 2061". UK Parliament. Retrieved 15 May 2015.
21. Moomaw, W., P. Burgherr, G. Heath, M. Lenzen, J. Nyboer, A. Verbruggen, 2011: Annex II: Methodology. In IPCC Special Report on Renewable Energy Sources and Climate Change Mitigation. http://srren.ipcc-wg3.de/report/IPCC_SRREN_Annex_II.pdf (see page 10).
22. AAAS Annual Meeting, 17-21 Feb 2011, Washington DC. "Sustainable or Not? Impacts and Uncertainties of Low-Carbon Energy Technologies on Water." Evangelos Tzimas, European Commission, JRC Institute for Energy, Petten, Netherlands.
External links[edit]

Electricity - A Visual Primer

Power Technologies Energy Data Book

NOW on PBS: Power Struggle

This Week in Energy (TWiEpodcast)

Electricity: From Table-top to Powerplant

The Power Sector in Lebanon via Carboun


SCADA


SCADA (supervisory control and data acquisition) is a system for remote monitoring and control
that operates with coded signals over communication channels (using typically one
communication channel per remote station). The control system may be combined with a data
acquisition system by adding the use of coded signals over communication channels to acquire
information about the status of the remote equipment for display or for recording functions.[1] It
is a type of industrial control system (ICS). Industrial control systems are computer-based
systems that monitor and control industrial processes that exist in the physical world. SCADA
systems historically distinguish themselves from other ICS by being used for large-scale
processes that can span multiple sites and large distances.[2] These processes include
industrial, infrastructure, and facility-based processes, as described below:


Industrial processes include those of manufacturing, production, power generation,
fabrication, and refining, and may run in continuous, batch, repetitive, or discrete modes.

Infrastructure processes may be public or private, and include water treatment and
distribution, wastewater collection and treatment, oil and gas pipelines, electrical power
transmission and distribution, wind farms, civil defense siren systems, and large communication
systems.

Facility processes occur both in public facilities and private ones, including buildings,
airports, ships, and space stations. They monitor and control heating, ventilation, and air
conditioning systems (HVAC), access, and energy consumption.

Common system components[edit]


A SCADA system usually consists of the following subsystems:


Remote terminal units (RTUs) connect to sensors in the process and convert sensor
signals to digital data. They have telemetry hardware capable of sending digital data to the
supervisory system, as well as receiving digital commands from the supervisory system. RTUs
often have embedded control capabilities such as ladder logic in order to accomplish boolean
logic operations.

Programmable logic controllers (PLCs) connect to sensors in the process and convert
sensor signals to digital data. PLCs have more sophisticated embedded control capabilities
(typically one or more IEC 61131-3 programming languages) than RTUs. PLCs do not have
telemetry hardware, although this functionality is typically installed alongside them. PLCs are
sometimes used in place of RTUs as field devices because they are more economical, versatile,
flexible, and configurable.

A telemetry system is typically used to connect PLCs and RTUs with control centers, data
warehouses, and the enterprise. Examples of wired telemetry media used in SCADA systems
include leased telephone lines and WAN circuits. Examples of wireless telemetry media used in
SCADA systems include satellite (VSAT), licensed and unlicensed radio, cellular and
microwave.

A data acquisition server is a software service which uses industrial protocols to connect
software services, via telemetry, with field devices such as RTUs and PLCs. It allows clients to
access data from these field devices using standard protocols.

A human-machine interface or HMI is the apparatus or device which presents processed
data to a human operator, and through which the human operator monitors and interacts with the
process. The HMI is a client that requests data from a data acquisition server; in most
installations the HMI is also the graphical user interface for the operator, collecting all data from
external devices, creating reports, performing alarming, sending notifications, etc.

A Historian is a software service which accumulates time-stamped data, boolean events,
and boolean alarms in a database which can be queried or used to populate graphic trends in the
HMI. The historian is a client that requests data from a data acquisition server.

A supervisory (computer) system, gathering (acquiring) data on the process and sending
commands (control) to the SCADA system.

Communication infrastructure connecting the supervisory system to the remote terminal units.

Various processes and analytical instrumentation.

Systems concepts[edit]
The term SCADA (Supervisory Control and Data Acquisition) usually refers to centralized
systems which monitor and control entire sites, or complexes of systems spread out over large
areas (anything from an industrial plant to a nation). Most control actions are performed
automatically by RTUs or by PLCs. Host control functions are usually restricted to basic
overriding or supervisory level intervention. For example, a PLC may control the flow of cooling
water through part of an industrial process, but the SCADA system may allow operators to
change the set points for the flow, and enable alarm conditions, such as loss of flow and high

temperature, to be displayed and recorded. The feedback control loop passes through the RTU or
PLC, while the SCADA system monitors the overall performance of the loop.

SCADA's schematic overview


Data acquisition begins at the RTU or PLC level and includes meter readings and equipment
status reports that are communicated to SCADA as required. Data is then compiled and formatted
in such a way that a control room operator using the HMI can make supervisory decisions to
adjust or override normal RTU (PLC) controls. Data may also be fed to a Historian, often built on
a commodity Database Management System, to allow trending and other analytical auditing.
SCADA systems typically implement a distributed database, commonly referred to as a tag
database, which contains data elements called tags or points. A point represents a single input or
output value monitored or controlled by the system. Points can be either "hard" or "soft". A hard
point represents an actual input or output within the system, while a soft point results from logic
and math operations applied to other points. (Most implementations conceptually remove the
distinction by making every property a "soft" point expression, which may, in the simplest case,
equal a single hard point.) Points are normally stored as value-timestamp pairs: a value, and the
timestamp when it was recorded or calculated. A series of value-timestamp pairs gives the history
of that point. It is also common to store additional metadata with tags, such as the path to a field
device or PLC register, design time comments, and alarm information.
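A minimal Python sketch of such a tag database (the class and field names are hypothetical, purely to illustrate hard points, soft points and value-timestamp history as described above):

import time

class Point:
    """A single monitored or controlled value with its value-timestamp history."""
    def __init__(self, name, expression=None, metadata=None):
        self.name = name
        self.expression = expression      # None for a "hard" point; a function of other points for a "soft" point
        self.metadata = metadata or {}    # e.g. PLC register path, design-time comments, alarm information
        self.history = []                 # list of (timestamp, value) pairs

    def update(self, value, timestamp=None):
        self.history.append((timestamp or time.time(), value))

    def value(self):
        if self.expression:               # soft point: derived from other points
            return self.expression()
        return self.history[-1][1] if self.history else None

# Hypothetical example: a hard flow point and a soft point derived from it.
flow = Point('cooling_flow', metadata={'plc_register': '40001', 'units': 'L/s'})
flow.update(12.5)
flow_low = Point('cooling_flow_low', expression=lambda: flow.value() < 5.0)
print(flow.value(), flow_low.value())     # 12.5 False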
SCADA systems are critically important in national infrastructure such as
electric grids, water supplies and pipelines. However, SCADA systems may have security
vulnerabilities, so the systems should be evaluated to identify risks and solutions implemented to
mitigate those risks.[3]
Humanmachine interface[edit]

Typical basic SCADA animations

More complex SCADA animation


A human-machine interface (HMI) is the input-output device through which the human operator
controls the process, and which presents process data to that operator.
The HMI is usually linked to the SCADA system's databases and software
programs, to provide trending, diagnostic data, and management information such as scheduled
maintenance procedures, logistic information, detailed schematics for a particular sensor or
machine, and expert-system troubleshooting guides.
The HMI system usually presents the information to the operating personnel graphically, in the
form of a mimic diagram. This means that the operator can see a schematic representation of the
plant being controlled. For example, a picture of a pump connected to a pipe can show the
operator that the pump is running and how much fluid it is pumping through the pipe at the

moment. The operator can then switch the pump off. The HMI software will show the flow rate
of the fluid in the pipe decrease in real time. Mimic diagrams may consist of line graphics and
schematic symbols to represent process elements, or may consist of digital photographs of the
process equipment overlain with animated symbols.
The HMI package for the SCADA system typically includes a drawing program that the
operators or system maintenance personnel use to change the way these points are represented in
the interface. These representations can be as simple as an on-screen traffic light, which
represents the state of an actual traffic light in the field, or as complex as a multi-projector
display representing the position of all of the elevators in a skyscraper or all of the trains on a
railway.
An important part of most SCADA implementations is alarm handling. The system monitors
whether certain alarm conditions are satisfied, to determine when an alarm event has occurred.
Once an alarm event has been detected, one or more actions are taken (such as the activation of
one or more alarm indicators, and perhaps the generation of email or text messages so that
management or remote SCADA operators are informed). In many cases, a SCADA operator may
have to acknowledge the alarm event; this may deactivate some alarm indicators, whereas other
indicators remain active until the alarm conditions are cleared. Alarm conditions can be explicit
(for example, an alarm point is a digital status point that has either the value NORMAL or
ALARM, where that value is calculated by a formula based on the values in other analogue and digital
points) or implicit: the SCADA system might automatically monitor whether the value in an analogue
point lies outside high and low limit values associated with that point. Examples of alarm
indicators include a siren, a pop-up box on a screen, or a coloured or flashing area on a screen
(that might act in a similar way to the "fuel tank empty" light in a car); in each case, the role of
the alarm indicator is to draw the operator's attention to the part of the system 'in alarm' so that
appropriate action can be taken. In designing SCADA systems, care must be taken when a
cascade of alarm events occurs in a short time, otherwise the underlying cause (which might not
be the earliest event detected) may get lost in the noise. Unfortunately, when used as a noun, the
word 'alarm' is used rather loosely in the industry; thus, depending on context it might mean an
alarm point, an alarm indicator, or an alarm event.
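An implicit high/low limit check of the kind described above might, in outline, look like the following Python sketch (the thresholds are illustrative only):

# Evaluate an implicit alarm condition for an analogue point against high/low limits,
# returning ALARM or NORMAL as the value of a digital alarm point.
def evaluate_alarm(value, low_limit, high_limit):
    return 'ALARM' if (value < low_limit or value > high_limit) else 'NORMAL'

readings = [72.0, 95.5, 40.2]              # e.g. temperature samples from an analogue point
for reading in readings:
    state = evaluate_alarm(reading, low_limit=50.0, high_limit=90.0)
    if state == 'ALARM':
        print('raise alarm indicator for reading', reading)   # operator must acknowledge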
Hardware solutions[edit]
SCADA solutions often have Distributed Control System (DCS) components. Use of "smart"
RTUs or PLCs, which are capable of autonomously executing simple logic processes without
involving the master computer, is increasing. A standardized control programming language, IEC
61131-3 (a suite of 5 programming languages including Function Block, Ladder, Structured Text,
Sequence Function Charts and Instruction List), is frequently used to create programs which run
on these RTUs and PLCs. Unlike a procedural language such as the C programming language or
FORTRAN, IEC 61131-3 has minimal training requirements by virtue of resembling historic
physical control arrays. This allows SCADA system engineers to perform both the design and
implementation of a program to be executed on an RTU or PLC. A Programmable Automation
Controller (PAC) is a compact controller that combines the features and capabilities of a PC-based control system with those of a typical PLC. PACs are deployed in SCADA systems to
provide RTU and PLC functions. In many electrical substation SCADA applications, "distributed
RTUs" use information processors or station computers to communicate with digital protective

relays, PACs, and other devices for I/O, and communicate with the SCADA master in lieu of a
traditional RTU.
Since about 1998, virtually all major PLC manufacturers have offered integrated HMI/SCADA
systems, many of them using open and non-proprietary communications protocols. Numerous
specialized third-party HMI/SCADA packages, offering built-in compatibility with most major
PLCs, have also entered the market, allowing mechanical engineers, electrical engineers and
technicians to configure HMIs themselves, without the need for a custom-made program written
by a software programmer. The Remote Terminal Unit (RTU) connects to physical equipment.
Typically, an RTU converts the electrical signals from the equipment to digital values such as the
open/closed status from a switch or a valve, or measurements such as pressure, flow, voltage or
current. By converting and sending these electrical signals out to equipment the RTU can control
equipment, such as opening or closing a switch or a valve, or setting the speed of a pump.
Supervisory station[edit]
The term supervisory station refers to the servers and software responsible for communicating
with the field equipment (RTUs, PLCs, SENSORS etc.), and then to the HMI software running
on workstations in the control room, or elsewhere. In smaller SCADA systems, the master station
may be composed of a single PC. In larger SCADA systems, the master station may include
multiple servers, distributed software applications, and disaster recovery sites. To increase the
integrity of the system, the multiple servers will often be configured in a dual-redundant or hot-standby formation, providing continuous control and monitoring in the event of a server
malfunction or breakdown.
Operational philosophy[edit]
For some installations, the costs that would result from the control system failing are extremely
high. Hardware for some SCADA systems is ruggedized to withstand temperature, vibration, and
voltage extremes. In the most critical installations, reliability is enhanced by having redundant
hardware and communications channels, up to the point of having multiple fully equipped control
centres. A failing part can be quickly identified and its functionality automatically taken over by
backup hardware. A failed part can often be replaced without interrupting the process. The
reliability of such systems can be calculated statistically and is stated as the mean time to failure,
which is a variant of Mean Time Between Failures (MTBF). The calculated mean time to failure
of such high reliability systems can be on the order of centuries.
Communication infrastructure and methods[edit]
SCADA systems have traditionally used combinations of radio and direct wired connections,
although SONET/SDH is also frequently used for large systems such as railways and power
stations. The remote management or monitoring function of a SCADA system is often referred to
as telemetry. Some users want SCADA data to travel over their pre-established corporate
networks or to share the network with other applications. The legacy of the early low-bandwidth
protocols remains, though.
SCADA protocols are designed to be very compact. Many are designed to send information only
when the master station polls the RTU. Typical legacy SCADA protocols include Modbus RTU,
RP-570, Profibus and Conitel. These communication protocols are all SCADA-vendor specific

but are widely adopted and used. Standard protocols are IEC 60870-5-101 or 104, IEC 61850 and
DNP3. These communication protocols are standardized and recognized by all major SCADA
vendors. Many of these protocols now contain extensions to operate over TCP/IP. Although the
use of conventional networking specifications, such as TCP/IP, blurs the line between traditional
and industrial networking, they each fulfill fundamentally differing requirements.[4]
With increasing security demands (such as North American Electric Reliability Corporation
(NERC) and Critical Infrastructure Protection (CIP) in the US), there is increasing use of
satellite-based communication. This has the key advantages that the infrastructure can be self-contained (not using circuits from the public telephone system), can have built-in encryption, and
can be engineered to the availability and reliability required by the SCADA system operator.
Earlier experiences using consumer-grade VSAT were poor. Modern carrier-class systems
provide the quality of service required for SCADA.[5]
RTUs and other automatic controller devices were developed before the advent of industry wide
standards for interoperability. The result is that developers and their management created a
multitude of control protocols. Among the larger vendors, there was also the incentive to create
their own protocol to "lock in" their customer base. A list of automation protocols is compiled
here.
Recently, OLE for process control (OPC) has become a widely accepted solution for
intercommunicating different hardware and software, allowing communication even between
devices originally not intended to be part of an industrial network.
SCADA architectures[edit]

The United States Army's Training Manual 5-601 covers "SCADA Systems for C4ISR
Facilities".
SCADA systems have evolved through four generations as follows:[6][7][8][9]
First generation: "Monolithic"[edit]
Early SCADA system computing was done by large minicomputers. Common network services
did not exist at the time SCADA was developed. Thus SCADA systems were independent
systems with no connectivity to other systems. The communication protocols used were strictly
proprietary at that time. The first-generation SCADA system redundancy was achieved using a
back-up mainframe system connected to all the Remote Terminal Unit sites and was used in the
event of failure of the primary mainframe system. Some first generation SCADA systems were
developed as "turn key" operations that ran on minicomputers such as the PDP-11 series made by
the Digital Equipment Corporation.
Second generation: "Distributed"[edit]
SCADA information and command processing was distributed across multiple stations which
were connected through a LAN. Information was shared in near real time. Each station was
responsible for a particular task thus making the size and cost of each station less than the one
used in First Generation. The network protocols used were still not standardized. Since the

protocols were proprietary, very few people beyond the developers knew enough to determine
how secure a SCADA installation was. Security of the SCADA installation was usually
overlooked.
Third generation: "Networked"[edit]
Similar to a distributed architecture, any complex SCADA can be reduced to simplest
components and connected through communication protocols. In the case of a networked design,
the system may be spread across more than one LAN network called a process control network
(PCN) and separated geographically. Several distributed architecture SCADAs running in
parallel, with a single supervisor and historian, could be considered a network architecture. This
allows for a more cost effective solution in very large scale systems.
Fourth generation: "Internet of Things"[edit]
With the commercial availability of cloud computing, SCADA systems have increasingly
adopted Internet of Things technology to significantly reduce infrastructure costs and increase
ease of maintenance and integration. As a result, SCADA systems can now report state in near
real-time and use the horizontal scale available in cloud environments to implement more
complex control algorithms than are practically feasible to implement on traditional
programmable logic controllers.[10] Further, the use of open network protocols such as TLS,
inherent in Internet of Things technology, provides a more readily comprehensible and
manageable security boundary than the heterogeneous mix of proprietary network protocols
typical of many decentralized SCADA implementations. One example of this technology is
an approach to rainwater harvesting through the implementation of real-time controls (RTC).
This decentralization of data also requires a different approach to SCADA than traditional PLC-based
programs. When a SCADA system is used locally, the preferred methodology involves
binding the graphics on the user interface to data stored at specific PLC memory addresses.
However, when the data comes from a disparate mix of sensors, controllers, and databases (which
may be local or at varied connected locations), the typical one-to-one mapping becomes problematic.
A solution to this is data modeling, a concept derived from object-oriented programming.[11]
In a data model, a virtual representation of each device is constructed in the SCADA software.
These virtual representations (models) can contain not just the address mapping of the device
represented, but also any other pertinent information (web-based information, database entries, media
files, etc.) that may be used by other facets of the SCADA/IoT implementation. As the increased
complexity of the Internet of Things renders traditional SCADA increasingly house-bound, and
as communication protocols evolve to favor platform-independent, service-oriented architectures
(such as OPC UA), it is likely that more SCADA software developers will implement some form
of data modeling.
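As a rough illustration of the idea, the hedged Python sketch below models a device as an object whose tags may be backed by a PLC address, a database query, or a web resource. The class and field names (DeviceModel, TagBinding, the example addresses) are hypothetical and are not drawn from any particular SCADA product.

```python
from dataclasses import dataclass, field

@dataclass
class TagBinding:
    """Where a tag's value comes from: a PLC register, a database row, a URL, etc."""
    source_type: str   # e.g. "plc", "database", "http" (illustrative categories only)
    address: str       # e.g. a PLC address string, an SQL query, or a URL

@dataclass
class DeviceModel:
    """Virtual representation of one field device in the SCADA/IoT layer."""
    name: str
    tags: dict[str, TagBinding] = field(default_factory=dict)
    metadata: dict[str, str] = field(default_factory=dict)  # manuals, media links, notes

    def bind(self, tag: str, source_type: str, address: str) -> None:
        self.tags[tag] = TagBinding(source_type, address)

# Example: the HMI refers to "pump1.flow" without knowing where the value lives.
pump1 = DeviceModel(name="pump1", metadata={"datasheet": "https://example.invalid/pump1.pdf"})
pump1.bind("flow", "plc", "DB10.DBD12")
pump1.bind("runtime_hours", "database", "SELECT hours FROM runtime WHERE asset='pump1'")
print(pump1.tags["flow"])
```

In such a scheme the user interface binds to the model, not to a PLC address, so swapping the backing source of a tag does not require touching the graphics.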
Security issues
SCADA systems that tie together decentralized facilities such as power, oil, and gas pipelines and
water distribution and wastewater collection systems were designed to be open, robust, and easily
operated and repaired, but not necessarily secure.[12] The move from proprietary technologies to
more standardized and open solutions together with the increased number of connections

between SCADA systems, office networks, and the Internet has made them more vulnerable to
types of network attacks that are relatively common in computer security. For example, United
States Computer Emergency Readiness Team (US-CERT) released a vulnerability advisory[13]
that allowed unauthenticated users to download sensitive configuration information including
password hashes on an Inductive Automation Ignition system utilizing a standard attack type
leveraging access to the Tomcat Embedded Web server. Security researcher Jerry Brown
submitted a similar advisory regarding a buffer overflow vulnerability[14] in a Wonderware
InBatch Client ActiveX control. Both vendors made updates available prior to public vulnerability
release. Mitigation recommendations were standard patching practices and requiring VPN access
for secure connectivity. Consequently, the security of some SCADA-based systems has come into
question as they are seen as potentially vulnerable to cyber attacks.[15][16][17]
In particular, security researchers are concerned about:
the lack of concern about security and authentication in the design, deployment and operation of some existing SCADA networks
the belief that SCADA systems have the benefit of security through obscurity through the use of specialized protocols and proprietary interfaces
the belief that SCADA networks are secure because they are physically secured
the belief that SCADA networks are secure because they are disconnected from the Internet.
SCADA systems are used to control and monitor physical processes, examples of which are
transmission of electricity, transportation of gas and oil in pipelines, water distribution, traffic
lights, and other systems used as the basis of modern society. The security of these SCADA
systems is important because compromise or destruction of these systems would impact multiple
areas of society far removed from the original compromise. For example, a blackout caused by a
compromised electrical SCADA system would cause financial losses to all the customers that
received electricity from that source. How security will affect legacy SCADA and new
deployments remains to be seen.
There are many threat vectors to a modern SCADA system. One is the threat of unauthorized
access to the control software, whether it be human access or changes induced intentionally or
accidentally by virus infections and other software threats residing on the control host machine.
Another is the threat of packet access to the network segments hosting SCADA devices. In many
cases, the control protocol lacks any form of cryptographic security, allowing an attacker to
control a SCADA device by sending commands over a network. In many cases SCADA users
have assumed that having a VPN offered sufficient protection, unaware that security can be
trivially bypassed with physical access to SCADA-related network jacks and switches. Industrial
control vendors suggest approaching SCADA security like information security, with a defense-in-depth
strategy that leverages common IT practices.[18]
The reliable function of SCADA systems in our modern infrastructure may be crucial to public
health and safety. As such, attacks on these systems may directly or indirectly threaten public
health and safety. Such an attack has already occurred, carried out on Maroochy Shire Council's
sewage control system in Queensland, Australia.[19] Shortly after a contractor installed a

SCADA system in January 2000, system components began to function erratically. Pumps did
not run when needed and alarms were not reported. More critically, sewage flooded a nearby
park and contaminated an open surface-water drainage ditch and flowed 500 meters to a tidal
canal. The SCADA system was directing sewage valves to open when the design protocol should
have kept them closed. Initially this was believed to be a system bug. Monitoring of the system
logs revealed the malfunctions were the result of cyber attacks. Investigators reported 46 separate
instances of malicious outside interference before the culprit was identified. The attacks were
made by a disgruntled ex-employee of the company that had installed the SCADA system. The
ex-employee was hoping to be hired by the utility full-time to maintain the system.
In April 2008, the Commission to Assess the Threat to the United States from Electromagnetic
Pulse (EMP) Attack issued a Critical Infrastructures Report which discussed the extreme
vulnerability of SCADA systems to an electromagnetic pulse (EMP) event. After testing and
analysis, the Commission concluded: "SCADA systems are vulnerable to EMP insult. The large
numbers and widespread reliance on such systems by all of the Nation's critical infrastructures
represent a systemic threat to their continued operation following an EMP event. Additionally,
the necessity to reboot, repair, or replace large numbers of geographically widely dispersed
systems will considerably impede the Nation's recovery from such an assault."[20]
Many vendors of SCADA and control products have begun to address the risks posed by
unauthorized access by developing lines of specialized industrial firewall and VPN solutions for
TCP/IP-based SCADA networks, as well as external SCADA monitoring and recording
equipment. The International Society of Automation (ISA) started formalizing SCADA security
requirements in 2007 with a working group, WG4. WG4 "deals specifically with unique
technical requirements, measurements, and other features required to evaluate and assure security
resilience and performance of industrial automation and control systems devices".[21]
The increased interest in SCADA vulnerabilities has resulted in vulnerability researchers
discovering vulnerabilities in commercial SCADA software and more general offensive SCADA
techniques presented to the general security community.[22] In electric and gas utility SCADA
systems, the vulnerability of the large installed base of wired and wireless serial communications
links is addressed in some cases by applying bump-in-the-wire devices that employ
authentication and Advanced Encryption Standard encryption rather than replacing all existing
nodes.[23]
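As a hedged sketch of the bump-in-the-wire idea (not any vendor's actual product), the Python fragment below wraps a raw serial frame in authenticated AES-GCM encryption using the widely available cryptography package; key provisioning is omitted, the frame bytes are invented, and the link identifier used as associated data is an assumption for illustration.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)    # in practice provisioned out-of-band per link
aesgcm = AESGCM(key)

def wrap(frame: bytes, link_id: bytes) -> bytes:
    """Encrypt and authenticate one serial frame before it leaves the device."""
    nonce = os.urandom(12)                   # 96-bit nonce, unique per frame
    return nonce + aesgcm.encrypt(nonce, frame, link_id)

def unwrap(blob: bytes, link_id: bytes) -> bytes:
    """Verify and decrypt on the receiving bump-in-the-wire unit."""
    nonce, ciphertext = blob[:12], blob[12:]
    return aesgcm.decrypt(nonce, ciphertext, link_id)

plaintext = b"\x01\x03\x00\x00\x00\x0a\xc5\xcd"   # an arbitrary example frame
protected = wrap(plaintext, b"link-7")
assert unwrap(protected, b"link-7") == plaintext
```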
In June 2010, anti-virus security company VirusBlokAda reported the first detection of malware
that attacks SCADA systems (Siemens' WinCC/PCS 7 systems) running on Windows operating
systems. The malware is called Stuxnet and uses four zero-day attacks to install a rootkit which
in turn logs into the SCADA's database and steals design and control files.[24][25] The malware
is also capable of changing the control system and hiding those changes. The malware was found
on 14 systems, the majority of which were located in Iran.[26]
In October 2013 National Geographic released a docudrama titled, "American Blackout" which
dealt with a large-scale cyber attack on SCADA and the United States' electrical grid.
SCADA in the workplace
SCADA is one of many tools that can be used while working in an environment where
operational duties need to be monitored through electronic communication instead of locally. For

example, an operator can position a valve to open or close through SCADA without leaving the
control station or the computer. The SCADA system can also switch a pump or motor on or off,
and can place a motor in "Hand", "Off", or "Automatic" operating status. "Hand"
refers to operating the equipment locally, while "Automatic" has the equipment operate according
to set points the operator provides on a computer that communicates with the equipment
through SCADA.
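The Hand-Off-Auto selection described above can be sketched as a small state check; the minimal Python below is illustrative only, and the names (HOA, setpoint, level, local_run) are assumptions rather than any standard API.

```python
from enum import Enum

class HOA(Enum):
    HAND = "hand"   # operated locally at the equipment
    OFF = "off"     # forced off regardless of process conditions
    AUTO = "auto"   # follows the operator's setpoint via SCADA

def pump_command(mode: HOA, level: float, setpoint: float, local_run: bool) -> bool:
    """Return True if the pump should run, given the selector mode."""
    if mode is HOA.HAND:
        return local_run                 # a local push-button decides
    if mode is HOA.OFF:
        return False
    return level > setpoint              # AUTO: simple on/off control on a level setpoint

print(pump_command(HOA.AUTO, level=3.2, setpoint=2.5, local_run=False))  # True
```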
See also
BACnet
LonWorks
Modbus
Telemetry
EPICS
Stuxnet - the first known custom-made virus designed to specifically infiltrate SCADA systems
Industrial Internet
Process control network
References
1. "Cyber Security Dictionary". 2 Jan 2012. Retrieved 23 March 2014.
2. Boys, Walt (18 August 2009). "Back to Basics: SCADA". Automation TV: Control Global - Control Design.
3. Boyer, Stuart A. (2010). SCADA: Supervisory Control and Data Acquisition. USA: ISA - International Society of Automation. p. 179. ISBN 978-1-936007-09-7.
4. "Introduction to Industrial Control Networks" (PDF). IEEE Communications Surveys and Tutorials. 2012.
5. Bergan, Christian (August 2011). "Demystifying Satellite for the Smart Grid: Four Common Misconceptions". Electric Light & Power. Utility Automation & Engineering T&D (Tulsa, OK: PennWell) 16 (8). Retrieved 2 May 2012. "satellite is a cost-effective and secure solution that can provide backup communications and easily support core smart grid applications like SCADA, telemetry, AMI backhaul and distribution automation".
6. Office of the Manager, National Communications System (October 2004). "Supervisory Control and Data Acquisition (SCADA) Systems" (PDF). National Communications System.
7. "SCADA Systems April 2014".
8. J. Russel. "A Brief History of SCADA/EMS (2015)".
9. N. Gribakin. "SCADA System Architecture".
10. "How The "Internet Of Things" Is Turning Cities Into Living Organisms". Retrieved September 16, 2013.
11. "The History of Data Modeling". Exforsys Inc. 11 January 2007.
12. Boyes, Walt (2011). Instrumentation Reference Book, 4th Edition. USA: Butterworth-Heinemann. p. 27. ISBN 0-7506-8308-2.
13. "ICSA-11-231-01: Inductive Automation Ignition Information Disclosure Vulnerability" (PDF). 19 Aug 2011. Retrieved 21 Jan 2013.
14. "ICSA-11-094-01: Wonderware InBatch Client ActiveX Buffer Overflow" (PDF). 13 Apr 2011. Retrieved 26 Mar 2013.
15. D. Maynor and R. Graham (2006). "SCADA Security and Terrorism: We're Not Crying Wolf" (PDF).
16. Robert Lemos (26 July 2006). "SCADA system makers pushed toward security". SecurityFocus. Retrieved 9 May 2007.
17. "Cyberthreats, Vulnerabilities and Attacks on SCADA Networks" (PDF). Rosa Tang, berkeley.edu. Retrieved 1 August 2012.
18. "Industrial Security Best Practices" (PDF). Rockwell Automation. Retrieved 26 Mar 2013.
19. Slay, J.; Miller, M. (November 2007). "Chpt 6: Lessons Learned from the Maroochy Water Breach". Critical Infrastructure Protection. Springer Boston. pp. 73-82. ISBN 978-0-387-75461-1. Retrieved 2 May 2012.
20. http://www.empcommission.org/docs/A2473-EMP_Commission-7MB.pdf
21. "Security for all". InTech. June 2008. Retrieved 2 May 2012.
22. "SCADA Security - Generic Electric Grid Malware Design".
23. KEMA, Inc. (November 2006). "Substation Communications: Enabler of Automation / An Assessment of Communications Technologies". UTC United Telecom Council: 321.
24. Mills, Elinor (21 July 2010). "Details of the first-ever control system malware (FAQ)". CNET. Retrieved 21 July 2010.
25. "SIMATIC WinCC / SIMATIC PCS 7: Information concerning Malware / Virus / Trojan". Siemens. 21 July 2010. Retrieved 22 July 2010. "malware (trojan) which affects the visualization system WinCC SCADA".
26. "Siemens: Stuxnet worm hit industrial systems". Retrieved 16 September 2010.

External links
UK SCADA security guidelines
BBC NEWS | Technology | Spies 'infiltrate US power grid'


Jerk (physics)

In physics, jerk, also known as jolt, surge, or lurch, is the rate of change of acceleration; that is,
the derivative of acceleration with respect to time, and as such the second derivative of velocity,
or the third derivative of position. Jerk is defined by any of the following equivalent expressions:

\mathbf{j} = \frac{d\mathbf{a}}{dt} = \frac{d^2\mathbf{v}}{dt^2} = \frac{d^3\mathbf{r}}{dt^3}

where
\mathbf{a} is acceleration,
\mathbf{v} is velocity,
\mathbf{r} is position, and
t is time.
Jerk is a vector, and there is no generally used term to describe its scalar magnitude (more
precisely, its norm, e.g. "speed" as the norm of the velocity vector).
According to the result of dimensional analysis of jerk, [length/time³], the SI units are m/s³ (or
m·s⁻³). There is no universal agreement on the symbol for jerk, but j is commonly used.
Newton's notation for the time derivative (\dot{\mathbf{a}}) is also applied.
The fourth derivative of position, equivalent to the first derivative of jerk, is jounce.
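Since jerk is simply the third time derivative of position, it can be estimated numerically from sampled position data. The short Python sketch below does this with repeated finite differences on an invented trajectory, purely as an illustration; the sample interval and the position function are assumptions.

```python
import numpy as np

dt = 1e-3                                     # sample interval, seconds
t = np.arange(0.0, 2.0, dt)
x = 0.5 * 3.0 * t**2 + 0.2 * np.sin(5 * t)    # invented position record, metres

v = np.gradient(x, dt)                        # first derivative: velocity
a = np.gradient(v, dt)                        # second derivative: acceleration
j = np.gradient(a, dt)                        # third derivative: jerk

# The quadratic term contributes zero jerk; the sine term contributes -0.2*5**3*cos(5*t).
print(j[len(j) // 2])                         # numerical estimate near t = 1 s
```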
Because they involve third derivatives, differential equations of the form

J\!\left(\dddot{x}, \ddot{x}, \dot{x}, x\right) = 0

are called jerk equations in mathematics. It has been shown that a jerk equation, which is equivalent to a system
of three first-order, ordinary, non-linear differential equations, is in a certain sense the minimal
setting for solutions showing chaotic behaviour. This motivates mathematical interest in jerk
systems. Systems involving a fourth or higher derivative are accordingly called hyperjerk
systems.
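To make the equivalence mentioned above concrete, a jerk equation written in explicit form can be rewritten as three coupled first-order equations. The sketch below uses generic state variables and is the standard reduction, not anything specific to a particular chaotic system.

```latex
\dddot{x} = f\!\left(\ddot{x}, \dot{x}, x\right)
\quad\Longleftrightarrow\quad
\begin{cases}
\dot{y}_1 = y_2, & y_1 = x,\\
\dot{y}_2 = y_3, & y_2 = \dot{x},\\
\dot{y}_3 = f(y_3, y_2, y_1), & y_3 = \ddot{x}.
\end{cases}
```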
Contents
1 Physiological effects and human perception of physical jerk
2 Forces and path derivatives
2.1 Position itself, zeroth derivative
2.2 Speed v, magnitude of the first derivative
2.3 Acceleration a, magnitude of the second derivative
2.4 Higher derivatives
3 Jerk in an idealized setting
4 Jerk in rotation
5 Jerk in elastically deformable matter
6 Applied considerations of jerk
6.1 Geometric design of roads and tracks
6.2 Motion control
6.3 Jerk in manufacturing
7 See also
8 Notes
9 References
10 External links

Physiological effects and human perception of physical jerk


The smooth movement and also the rest state of an alert human body is achieved by balancing
the forces of several antagonistic muscles which are controlled across neural paths by the brain
(for directed movement) or sometimes across reflex arcs. In balancing some given force (holding
or pulling up a weight, e.g.) the postcentral gyrus establishes a control loop to achieve this
equilibrium by adjusting the muscular tension according to the sensed position of the actuator. If
the load changes faster than the current state of this control loop is capable of supplying a
suitable, adaptive response, the balance cannot be upheld, because the tensioned muscles cannot
relax or build up tension fast enough and overshoot in either direction, until the neural control
loop manages to take control again. Of course the time to react is limited from below by
physiological bounds and also depends on the attention level of the brain: an expected change
will be stabilized faster than a sudden drop or increase of load.
Passengers in transportation therefore need this time to adapt to stress changes and to adjust their
muscle tension, or else they suffer conditions such as whiplash; they can be safely subjected only to both
a limited maximum acceleration and a limited maximum jerk,[1] so as to avoid losing control
over their body motion and endangering their physical integrity. Even where occupant safety
is not an issue, excessive jerk may result in an uncomfortable ride on elevators, trams, and the
like, and engineers expend considerable design effort to minimize "jerky motion".
Since forces changing at a sufficient rate in time (that is, sufficient jerk) are a cause of vibrations,
and vibrations significantly impair the quality of transportation, there is good reason to
minimize jerk in transportation vehicles.
As an everyday example, driving in a car can show effects of acceleration and jerk. The more
experienced drivers accelerate smoothly, but beginners provide a jerky ride.

Changing gears, especially with a foot-operated clutch, offers well-known examples:
although the accelerating force is bounded by the engine power, an inexperienced driver produces
severe jerk because of intermittent force closure over the clutch.

High-powered sports cars offer the feeling of being pressed into the cushioning, but this is
the force of the acceleration. Only in the very first moments, when the torque of the engine grows
with the rotational speed, does the acceleration grow noticeably, and a slight whiplash effect is
noticeable in the neck, mostly masked by the jerk of gear switching.

The beginning of emergency braking lets the body whip forward faster than the
achieved acceleration value alone would suggest, and a collision does so to an even greater
degree. Quantitative testing on living humans (and, for some effects, on animals) runs afoul of ethical
concerns, with the effect that cadavers or crash test dummies must be substituted, which, of
course, do not show the physiological reactions to jerk caused by the active control loop described
above.

A highly reproducible experiment to demonstrate jerk is as follows: brake a car starting at
a modest speed in two different ways:
1. apply a constant, modest force on the pedal until the car comes to a halt, and only then release the pedal;
2. apply the same constant, modest force on the pedal, but just before the halt reduce the force on the pedal, optimally releasing the pedal fully exactly when the car stops.
The reason for the much larger jerk in the first way of braking is a discontinuity of the
acceleration, which is initially at a constant value, due to the constant force on the pedal, and
drops to zero immediately when the wheels stop rotating. Note that there would be no jerk if the
car started to move backwards with the same acceleration. Every experienced driver knows how
to start and how to stop braking with low jerk. See also below in the motion profile, segment 7:
deceleration ramp-down.
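In the first braking variant the acceleration drops from a constant value to zero at the instant the wheels stop. Idealized, this step in acceleration corresponds to a Dirac delta in the jerk, as the sketch below makes explicit (the stopping time t_s and the constant deceleration magnitude a_0 are assumed quantities):

```latex
a(t) = \begin{cases} -a_0, & t < t_s \\ 0, & t \ge t_s \end{cases}
\qquad\Rightarrow\qquad
j(t) = \frac{da}{dt} = a_0\,\delta(t - t_s)
```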
For some remarks on how the human perception of various motions is organized in the
proprioceptors, the vestibular organ and by visual impressions, and how to deceive it, see the
article on motion simulators.
Forces and path derivatives
Position itself, zeroth derivative
The most prominent force associated with the position of a particle relates, via Hooke's law, to the
stiffness of a spring:

\mathbf{F} = -k\,\mathbf{x}

This is a force opposing the increase in displacement.
Speed v, magnitude of the first derivative
A particle moving in a viscous fluid experiences a drag force that, depending on
the Reynolds number and its area, ranges from being proportional to the speed v up to being proportional
to v², according to the drag equation

F_d = \tfrac{1}{2}\,\rho\,v^2\,C_d\,A

where
\rho is the density of the fluid,
v is the speed of the object relative to the fluid,
A is the cross-sectional area, and
C_d is the drag coefficient, a dimensionless number.
The drag coefficient depends on the scalable shape of the object and on the Reynolds number,
which itself depends on the speed.
Acceleration a, magnitude of the second derivative
According to Newton's second law, the acceleration is bound to a force via the proportionality given by the mass:

\mathbf{F} = m\,\mathbf{a}
Higher derivatives
In the classical mechanics of rigid bodies there are no forces associated with the higher derivatives of
the path; nevertheless, not only the physiological effects of jerk, but also oscillations and
deformation propagation along and in non-ideally rigid bodies, require various techniques for
controlling motion to avoid the resulting destructive forces. It has been reported that
NASA, in designing the Hubble Space Telescope, limited not only the jerk in its requirement
specification, but also the next higher derivative, the jounce.
For a recoil force on accelerating charged particles emitting radiation, which is proportional to
their jerk and the square of their charge, see the Abraham-Lorentz force. A more advanced
theory, applicable in a relativistic and quantum environment and accounting for self-energy, is
provided in the Wheeler-Feynman absorber theory.
Jerk in an idealized setting
In real-world environments, because of deformation, granularity (at least at the Planck scale, i.e.
quantum effects), and other reasons, discontinuities in acceleration do not occur. However,
frequently used idealized settings (rigid bodies, smooth representations of paths, no friction, and
the like), applied to an also idealized point mass moving along a piecewise smooth and as a whole
continuous path, suffice for the phenomenon of a jump-discontinuity in acceleration at the points
where the path is not smooth, and accordingly for an unbounded jerk in this simplified model of
classical mechanics (see the two examples below). Extrapolating from the idealized settings, the
effect of jerk in real situations can be qualitatively described, explained, and predicted.
The jump-discontinuity in acceleration may be modeled by a Dirac delta in the jerk, scaled with
the height of the jump. Integrating jerk over time generally gives the corresponding acceleration;
doing so across such a Dirac delta reconstructs exactly the jump discontinuity in the acceleration
belonging to the Dirac delta in the jerk.
Assume a path along a circular arc with radius R, which tangentially connects to a straight line.
The whole path is continuous and its pieces are smooth. Now let a point particle move with
constant speed v along this path, so its tangential acceleration is zero, and consider the acceleration
orthogonal to the path: it is zero along the straight part and v²/R along the circle (centripetal
acceleration). This gives a jump-discontinuity in the magnitude of the acceleration of v²/R, and the
particle undergoes a jerk measured by a Dirac delta scaled with this value, for purely geometric
reasons, when it passes the connection of the pieces. See below for a more concrete application.
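Written out, the normal acceleration and jerk at the junction, crossed at an assumed time t_0, look as follows:

```latex
a_n(t) = \begin{cases} 0, & t < t_0 \quad\text{(straight segment)}\\[2pt] \dfrac{v^2}{R}, & t \ge t_0 \quad\text{(circular arc)} \end{cases}
\qquad\Rightarrow\qquad
j_n(t) = \frac{v^2}{R}\,\delta(t - t_0)
```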
If we assume an idealized spring and idealized kinetic frictional forces, proportional to the
normal force and directed opposite to the velocity, there is another example of discontinuous
acceleration. Let a mass, connected to an ideal spring, oscillate on a flat, idealized surface with
friction. Each time the velocity changes sign (at the maxima of displacement), the magnitude of
the force on the mass, which is the vectorial sum of the spring force and the kinetic frictional
force, changes by twice the magnitude of the frictional force, since the spring force is continuous
and the frictional force reverses its direction when the velocity does. Therefore the acceleration
jumps by this amount divided by the mass. That is, the mass experiences a discontinuous
acceleration and the jerk contains a Dirac delta each time the mass passes through the
(decreasing) maximal displacements, until it comes to a halt, because the static friction force
adapts to the residual spring force, establishing equilibrium with zero net force and zero velocity.
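With kinetic friction modeled as F_f = μmg (an assumed Coulomb friction law, with μ the kinetic friction coefficient and g the gravitational acceleration), the size of each acceleration jump follows directly:

```latex
\Delta a \;=\; \frac{2 F_f}{m} \;=\; \frac{2\,\mu m g}{m} \;=\; 2\,\mu g
```

so the jerk contains a Dirac delta of weight 2μg at every instant at which the velocity changes sign.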
The car example relies on the way the brakes operate on a rotating drum or on a disc. As long as
the disc rotates the brake pads act to decelerate the vehicle via the kinetic frictional forces which
create a constant braking torque on the disk. This decreases the rotation linearly to zero with
constant angular acceleration, but when the rotation reaches exactly zero, this hitherto constant
frictional force suddenly drops to zero, as well as the torque, and the associated acceleration of
the car. This, of course, neglects all effects of tire sliding, dipping of suspension, real deflection
of all ideally rigid mechanisms, etc. A sudden drop in acceleration indicates a Dirac delta in the
physical jerk, which is smoothed down by the real environment, the cumulative effects of which
are analogous to damping, to the physiologically perceived jerk.
Another example of significant jerk, analogous to the first setting, is given by cutting the rope
twirling a particle around a center. When the rope is cut, the circular path with non-zero
centripetal acceleration changes abruptly to a straight path with suddenly no force in the direction
to the former center. Imagine a monomolecular fiber, cut by a laser and you arrive at very high
rates of jerk, because of the extremely short cutting time.

Jerk in rotation

Animation showing a four-position external Geneva drive in operation

Timing diagram over one revolution for angle, angular velocity, angular acceleration, and angular jerk

Consider the rotational movement of a rigid body about a fixed axis in an inertial frame. The
orientation of the body can be expressed by an angle θ, the angular position, from which one can
express:
the angular speed ω as the time derivative of θ,
the angular acceleration α as the time derivative of ω.
Differentiating α with respect to time defines an angular jerk ζ:

\zeta = \frac{d\alpha}{dt} = \frac{d^2\omega}{dt^2} = \frac{d^3\theta}{dt^3}

The angular acceleration corresponds to the quotient of the torque acting on the body and the
moment of inertia of the body with respect to the momentary axis of rotation. An abrupt change
of torque results in a significant angular jerk.
The general case of rigid body movement in space can be modeled by a kinematic screw, which
specifies at each instant one axial vector, the angular velocity ω(t), and one polar vector, the linear
velocity v(t). From this, the angular acceleration is defined as

\boldsymbol{\alpha}(t) = \frac{d\boldsymbol{\omega}(t)}{dt}

and thus the angular jerk as

\boldsymbol{\zeta}(t) = \frac{d\boldsymbol{\alpha}(t)}{dt}
Consider for example a Geneva drive, a device for creating an intermittent rotation of the driven
wheel (blue) from a continuous rotation of the driving wheel (red). On one cycle of the driving
wheel there is a variation of the angular position of the driven wheel by one quarter of a cycle,
and a constant angular position on the remainder of the cycle.
Because of the necessarily finite thickness of the fork making up the slot for the driving pin, this
device generates a discontinuity in the angular acceleration α, and therefore an unbounded angular
jerk in the driven wheel.
This does not preclude the mechanism from being used in, e.g., movie projectors to transport the
film stepwise with high reliability (very long life) and only slight noise, since the load is very
low: the system drives just the part of the film that is within the corridor of projection, so a
very low mass (a few centimeters of thin plastic film), with low friction, at a moderate speed (2.4
m/s, 8.6 km/h) is affected.
Dual cam drives: 1/6 per revolution and 1/3 per revolution

To avoid the jerk inherent in a single-cam device, a dual-cam device can be used instead; it is bulkier
and more expensive, but also quieter. It operates two cams on one axis in continuous rotation,
shifting another axle by a fraction of a full revolution. The pictures show step drives of
one sixth and one third of a rotation, respectively, per full revolution of the driving axle. Note that two
of the arms of the stepped wheel are always in contact with the double cam, so there is no radial
clearance. To follow the detailed operation of the dual-cam devices it is advisable to look
at the enlarged pictures.
Generally, combined contacts may be used to avoid the jerk (and also the wear and noise) associated
with a single follower that glides along a slot and thereby changes its contact point from one
side of the slot to the other; this is done by using two followers, each always sliding along the same side of the slot.
Jerk in elastically deformable matter

Compression wave patterns: plane wave and cylindrical symmetry
A force or acceleration acting on an elastically deformable mass produces a deformation that
depends on its stiffness and on the acceleration applied. If the change of this force is slow, the jerk is
small, and the propagation of this deformation through the body may be considered
instantaneous compared to the change in acceleration; the distorted body acts as if it were in a
quasi-static regime. The common thread is that only a changing force, i.e. a non-zero jerk, can
cause mechanical waves (or, for a charged particle, electromagnetic waves) to be radiated. So for
non-zero to high jerk, a shock wave and its propagation through the body have to be considered. The left
picture shows the propagation of a deformation as a compressional, plane wave through an
elastically deformable material. For angular jerk the deformation waves are arranged circularly
and cause shear stress, as shown in the picture to the right, which might also cause other modes of
vibration. As usual with waves, one has to consider their reflections along all boundaries and the
emerging interference patterns, i.e. destructive as well as constructive interference, which may
lead to exceeding the limits of structural integrity. As a rough estimate, the deformation waves
result in vibrations of the whole device and, generally, vibrations cause noise, wear, and,
especially in resonance cases, even disruption.

Pole with massive top

The picture to the left shows a massive top bending the elastic pole to which it is connected to
the left when the bottom block is accelerated to the right. When the block stops accelerating, the
top of the pole starts a (damped) oscillation governed by the stiffness of the pole. This
makes it plausible that a larger (periodic) jerk might excite a larger oscillation amplitude,
because small oscillations are damped before they are reinforced by the next shock-wave
amplitude.

Sinusoidal acceleration profile


One can also argue that a steeper slope of the acceleration, i.e. a bigger jerk, excites bigger wave
components in the shockwave with higher frequencies, belonging to higher Fourier coefficients,
and so an increased probability of exciting a resonant mode.
As a general rule, to reduce the amplitude of excited stress waves and the vibrations they cause, any motion
of massive parts has to be shaped by limiting the jerk, i.e. making the acceleration continuous
and keeping its slopes as flat as possible. Since the described effects are hardly amenable to
abstract models anymore, the various suggested algorithms for reducing vibrations include still
higher derivatives, such as the jounce, or suggest continuous regimes not only for the acceleration
but also for the jerk. One concept is, for example, shaping the acceleration and deceleration sinusoidally with
zero acceleration in between (see the profile to the right), making the speed look sinusoidal with
constant maximal speed too. The jerk, however, remains discontinuous at the points where the
acceleration enters and leaves its zero phases.
Applied considerations of jerk
Although jerk is not directly involved in Newton's laws, it has to be considered in engineering in
various places; normally, only speed and acceleration are used for analysis. For example, the jerk
produced in falling from outer space toward the Earth is not particularly significant, given that the gravitational
acceleration changes very slowly. Sometimes, however, the analysis has to include jerk for a particular
reason.
Geometric design of roads and tracks

Easement curve
The principles of geometric design apply to the jerk oriented orthogonally to the path of motion,
considering the centripetal acceleration, whereas the velocity along the path is assumed to be
constant, so the tangential jerk is zero. Any change in the curvature of the path implies non-zero
jerk, arising for purely geometric reasons. To avoid the unbounded (centripetal) jerk when
moving from a straight path into a curve or vice versa, track transition curves are constructed,
which limit the jerk by gradually increasing the centripetal acceleration, i.e. the curvature, to the
value that belongs to the radius of the circle and the speed of travel. The theoretical optimum is
achieved by the Euler spiral, which increases the acceleration linearly, i.e. with minimal constant jerk.
As a design rule, a maximum value of 0.5 m/s³, and for convenience purposes a value of 0.35
m/s³, are recommended in railway design. The picture shows a piece of an Euler spiral leading as a
track transition curve from a straight line to an arc of a circle. In a real scenario the plane of the
track is inclined in the course of the curve, so the vertical acceleration from the necessary
lifting of the center of mass of the rail car also has to be considered; the wear on the
embankment and the tracks is minimized by following a slightly different curve. This has been patented as the
Wiener Kurve (Viennese Curve).[2][3]
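For a vehicle travelling at constant speed v along a transition curve whose curvature κ grows linearly with arc length s (the defining property of the Euler spiral, with an assumed constant of proportionality c), the lateral jerk is constant. This follows from the chain rule and is a standard result, not part of the patented Wiener Kurve:

```latex
a_n = v^2\,\kappa(s), \qquad \kappa(s) = c\,s
\quad\Rightarrow\quad
j_n = \frac{da_n}{dt} = v^2\,\frac{d\kappa}{ds}\,\frac{ds}{dt} = c\,v^3 = \text{constant}
```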
Roller coasters[1] are also subject to these design considerations when rolling into a
loop. The acceleration values in this environment range up to 4g, and it would not be possible to
ride loops without track transitions, just as one cannot smoothly drive along a figure-eight
consisting of two circles. Any S-shaped curve must contain some jerk-reducing transition.
Motion control
In motion control the focus is on straight linear motion, where the need is to move a system from
one steady position to another (point-to-point motion). So effectively, the jerk resulting from
tangential acceleration is under control. Prominent applications are elevators in people
transportation, and the support of tools in machining. It is reported[4] that most passengers rate a
vertical jerk of 2.0 m/s³ in a lift ride as acceptable and 6.0 m/s³ as intolerable, and that for a hospital
environment 0.7 m/s³ is suggested. In any case, limiting jerk is considered essential for riding
convenience.[5] ISO 18738[6] defines how to measure elevator ride quality with respect to jerk,
acceleration, vibration, and noise, but does not venture into defining different levels of
elevator ride quality.
Achieving the shortest possible transition time, while not exceeding given limit magnitudes for
speed, acceleration, and jerk, results in a third-order motion profile, with quadratic ramping
and de-ramping phases in the velocity, as illustrated below:

This motion profile consists of up to seven segments, defined by the following:
1. acceleration build-up: limit jerk implies a linear increase of acceleration to the limit acceleration and a quadratic increase of speed
2. limit acceleration: implies zero jerk and a linear increase of speed
3. acceleration ramp-down: approaching the desired limit velocity with negative limit jerk, i.e. a linear decrease of acceleration and a (negative) quadratic increase of speed
4. limit speed: implies zero jerk and zero acceleration
5. deceleration build-up: negative limit jerk implies a linear decrease of acceleration to the negative limit acceleration and a (negative) quadratic decrease of speed
6. limit deceleration: implies zero jerk and a linear decrease of speed
7. deceleration ramp-down: limit jerk implies a linear increase of acceleration to zero and a quadratic decrease of speed, approaching the desired position at zero speed and zero acceleration
The duration of segment 4, at constant velocity, is varied to suit the distance between the two
positions. If the initial and final positions are so close together that omitting this fourth segment
entirely does not suffice, segments 2 and 6, with constant acceleration, are reduced equally, and the
speed limit is not reached in this variant of the profile. If this still does not reduce the distance
covered sufficiently, the ramping segments 1, 3, 5, and 7 are shortened by an equal amount in a
next step, and the acceleration limit is not reached either.
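The hedged Python sketch below builds such a seven-segment profile by integrating a piecewise-constant jerk signal numerically. The segment durations and the jerk limit are chosen by hand for illustration (they are assumptions, not computed from given speed, acceleration, and jerk limits), so it demonstrates the shape of the profile rather than a production trajectory planner.

```python
import numpy as np

dt = 1e-3
j_max = 2.0   # jerk limit, m/s^3 (illustrative value)
# Seven segments: (duration in s, applied jerk in m/s^3)
segments = [(0.5, +j_max), (1.0, 0.0), (0.5, -j_max),   # acceleration build-up / hold / ramp-down
            (2.0, 0.0),                                  # cruise at limit speed
            (0.5, -j_max), (1.0, 0.0), (0.5, +j_max)]    # mirrored deceleration phases

jerk = np.concatenate([np.full(round(T / dt), j) for T, j in segments])
acc = np.cumsum(jerk) * dt                 # integrate jerk -> acceleration
vel = np.cumsum(acc) * dt                  # integrate acceleration -> velocity
pos = np.cumsum(vel) * dt                  # integrate velocity -> position

print(f"peak acceleration {acc.max():.2f} m/s^2, "
      f"peak velocity {vel.max():.2f} m/s, travel {pos[-1]:.2f} m")
```

With symmetric segment durations, the acceleration returns to zero before the cruise phase and the velocity returns to zero at the end, giving the quadratic velocity ramps described above.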
There are also other strategies to design a motion profile, e.g. minimizing the square of the jerk
for a given transition time, to be selected according to the varying applications in machines,
people movers, chain hoists, automotive industries, robot design, and many more. For a
sinusoidal-shaped acceleration profile, with sinusoidal-shaped speed and bounded jerk also, see
above.
Jerk in manufacturing
Jerk is also important to consider in manufacturing processes. Rapid changes in acceleration of a
cutting tool can lead to premature tool wear and result in uneven cuts. This is why modern
motion controllers include jerk limitation features. In mechanical engineering, jerk is considered,
in addition to velocity and acceleration, in the development of cam profiles because of
tribological implications and the ability of the actuated body to follow the cam profile without
chatter.[7] Jerk must often be considered when the excitation of vibrations is a concern. A device
that measures jerk is called a "jerkmeter".
See also
Jounce, the derivative of jerk
Geomagnetic jerk
Abraham-Lorentz force, a force in electrodynamics whose magnitude is proportional to jerk
Shock (mechanics)
Wheeler-Feynman absorber theory
Notes
1. "How Things Work: Roller Coasters - The Tartan Online". Thetartan.org. 2007-04-16. Retrieved 2013-09-15.
2. //depatisnet.dpma.de/DepatisNet/depatisnet?window=1&space=menu&content=treffer&action=pdf&docid=AT000000412975B
3. http://www.mplusm.at/ifg/download/Presle-05.pdf
4. Howkins, Roger E. "Elevator Ride Quality - The Human Ride Experience". VFZ-Verlag für Zielgruppeninformationen GmbH & Co. KG. Retrieved 31 December 2014.
5. http://www.schindler.com/content/ie/internet/en/mobilitysolutions/products/elevators/schindler5300/_jcr_content/rightPar/downloadlist/downloadList/3_1340031711862.download.asset.3_1340031711862/05SML9039_Inform_Sheet_EN.pdf
6. ISO 18738-1:2012. "Measurement of ride quality -- Part 1: Lifts (elevators)". International Organization for Standardization. Retrieved 31 December 2014.
7. Blair, G., "Making the Cam", Race Engine Technology 10, September/October 2005.
References
Sprott JC (2003). Chaos and Time-Series Analysis. Oxford University Press. ISBN 0-19-850839-5.
Sprott JC (1997). "Some simple chaotic jerk functions" (PDF). Am J Phys 65 (6): 537-43. Bibcode:1997AmJPh..65..537S. doi:10.1119/1.18585. Retrieved 2009-09-28.
Blair G (2005). "Making the Cam" (PDF). Race Engine Technology (010). Retrieved 2009-09-29.
External links
What is the term used for the third derivative of position?, description of jerk in the Usenet Physics FAQ
Mathematics of Motion Control Profiles
Elevator ride quality
Elevator manufacturer brochure
Patent of Wiener Kurve
(in German) Description of Wiener Kurve


Kinematics
"Kinematic" redirects here. For the Australian band, see Kinematic (band).
Classical mechanics

Second law of motion

History

Timeline

Branches[hide]

Applied

Celestial

Continuum

Dynamics

Kinematics

Kinetics

Statics

Statistical

Fundamentals[show]

Formulations[show]

Core topics[show]

Rotation[show]

Scientists[show]

Kinematics is the branch of classical mechanics which describes the motion of points, bodies
(objects), and systems of bodies (groups of objects) without consideration of the causes of
motion.[1][2][3] Kinematics as a field of study is often referred to as the "geometry of motion".
[4][5][6] For further details, see Analytical dynamics.
To describe motion, kinematics studies the trajectories of points, lines, other geometric objects,
and their differential properties such as velocity and acceleration. Kinematics is used in
astrophysics to describe the motion of celestial bodies and systems, and in mechanical
engineering, robotics, and biomechanics[7] to describe the motion of systems composed of joined
parts (multi-link systems) such as an engine, a robotic arm or the skeleton of the human body.
The study of kinematics can be abstracted into purely mathematical functions. For instance,
rotation can be represented by elements of the unit circle in the complex plane. Other planar
algebras are used to represent the shear mapping of classical motion in absolute time and space
and to represent the Lorentz transformations of relativistic space and time. By using time as a
parameter in geometry, mathematicians have developed a science of kinematic geometry.
The use of geometric transformations, also called rigid transformations, to describe the
movement of components of a mechanical system simplifies the derivation of its equations of
motion, and is central to dynamic analysis.
Kinematic analysis is the process of measuring the kinematic quantities used to describe motion.
In engineering, for instance, kinematic analysis may be used to find the range of movement for a
given mechanism, and, working in reverse, kinematic synthesis designs a mechanism for a
desired range of motion.[8] In addition, kinematics applies algebraic geometry to the study of the
mechanical advantage of a mechanical system or mechanism.
Contents
1 Etymology of the term
2 Kinematics of a particle trajectory
2.1 Velocity and speed
2.2 Acceleration
2.3 Relative position vector
2.4 Relative velocity
3 Particle trajectories under constant acceleration
4 Particle trajectories in cylindrical-polar coordinates
4.1 Constant radius
4.2 Planar circular trajectories
5 Point trajectories in a body moving in the plane
5.1 Displacements and motion
5.2 Matrix representation
6 Pure translation
7 Rotation of a body around a fixed axis
8 Point trajectories in body moving in three dimensions
8.1 Position
8.2 Velocity
8.3 Acceleration
9 Kinematic constraints
9.1 Kinematic coupling
9.2 Rolling without slipping
9.3 Inextensible cord
9.4 Kinematic pairs
9.5 Kinematic chains
10 See also
11 References
12 Further reading
13 External links
Etymology of the term
The term kinematic is the English version of A.M. Ampère's cinématique,[9] which he
constructed from the Greek κίνημα kinema ("movement, motion"), itself derived from κινεῖν
kinein ("to move").[10][11]
Kinematic and cinématique are related to the French word cinéma, but neither is directly
derived from it. However, they do share a root word in common, as cinéma came from the
shortened form of cinématographe, "motion picture projector and camera", once again from the
Greek word for movement but also from the Greek word for writing.[12]
Kinematics of a particle trajectory

Kinematic quantities of a classical particle: mass m, position r, velocity v, acceleration a.

Position vector r always points radially from the origin.

Velocity vector v is always tangent to the path of motion.

Acceleration vector a is not parallel to the radial motion but offset by the angular and Coriolis
accelerations, nor tangent to the path but offset by the centripetal and radial accelerations.

Kinematic vectors in plane polar coordinates. Note that the setup is not restricted to 2D space, but may be a
plane in any higher dimension.
Particle kinematics is the study of the properties of the trajectory of a particle. The position of a
particle is defined to be the coordinate vector from the origin of a coordinate frame to the
particle. For example, consider a tower 50 m south from your home, where the coordinate frame
is located at your home, such that East is the x-direction and North is the y-direction, then the
coordinate vector to the base of the tower is r=(0, -50, 0). If the tower is 50 m high, then the
coordinate vector to the top of the tower is r=(0, -50, 50).
Usually a three-dimensional coordinate system is used to define the position of a particle.
However, if the particle is constrained to lie in a plane or on a sphere, a two-dimensional
coordinate system can be used. All observations in physics are incomplete without the reference
frame being specified.
The position vector of a particle is a vector drawn from the origin of the reference frame to the
particle. It expresses both the distance of the point from the origin and its direction from the
origin. In three dimensions, the position of point P can be expressed as

\mathbf{P} = x_P\,\mathbf{i} + y_P\,\mathbf{j} + z_P\,\mathbf{k}

where xP, yP, and zP are the Cartesian coordinates and i, j and k are the unit vectors along the x,
y, and z coordinate axes, respectively. The magnitude of the position vector |P| gives the distance
between the point P and the origin:

|\mathbf{P}| = \sqrt{x_P^2 + y_P^2 + z_P^2}
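Applying this to the tower example above gives a quick numerical check:

```latex
\mathbf{P}_{\text{top}} = (0,\,-50,\,50)\ \text{m}
\quad\Rightarrow\quad
|\mathbf{P}_{\text{top}}| = \sqrt{0^2 + (-50)^2 + 50^2} \approx 70.7\ \text{m}
```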

The direction cosines of the position vector provide a quantitative measure of direction. It is
important to note that the position vector of a particle isn't unique. The position vector of a given
particle is different relative to different frames of reference.
The trajectory of a particle is a vector function of time, P(t), which defines the curve traced by
the moving particle, given by

\mathbf{P}(t) = x_P(t)\,\mathbf{i} + y_P(t)\,\mathbf{j} + z_P(t)\,\mathbf{k}

where the coordinates xP, yP, and zP are each functions of time.

The distance travelled is always greater than or equal to the displacement.


Velocity and speed
The velocity of a particle is a vector quantity that describes the direction and magnitude of the
rate of change of the position vector, that is, how the position of a point changes with each instant
of time. Consider the ratio of the difference of two positions of a particle to the time
interval; this is called the average velocity over that time interval and is defined as

\bar{\mathbf{V}} = \frac{\Delta \mathbf{P}}{\Delta t}

where ΔP is the change in the position vector over the time interval Δt.
In the limit as the time interval Δt becomes smaller and smaller, the average velocity becomes the
time derivative of the position vector,

\mathbf{V} = \frac{d\mathbf{P}}{dt} = \dot{\mathbf{P}}

Thus, velocity is the time rate of change of position, and the dot denotes the derivative with
respect to time. Furthermore, the velocity is tangent to the trajectory of the particle.
Since the position vector itself is frame dependent, the velocity is also dependent on the
reference frame.
The speed of an object is the magnitude |V| of its velocity. It is a scalar quantity:

|\mathbf{V}| = \frac{ds}{dt}

where s is the arc length measured along the trajectory of the particle. This arc length travelled by
a particle over time is a non-decreasing quantity. Hence, ds/dt is non-negative, which implies that
speed is also non-negative.

Acceleration
The acceleration of a particle is the vector defined by the rate of change of the velocity vector.
The average acceleration of a particle over a time interval is defined as the ratio

\bar{\mathbf{A}} = \frac{\Delta \mathbf{V}}{\Delta t}

where ΔV is the difference in the velocity vector and Δt is the time interval.
The acceleration of the particle is the limit of the average acceleration as the time interval
approaches zero, which is the time derivative,

\mathbf{A} = \frac{d\mathbf{V}}{dt} = \frac{d^2\mathbf{P}}{dt^2}

Thus, acceleration is the second derivative of the position vector that defines the trajectory of a
particle.
Relative position vector
A relative position vector is a vector that defines the position of a particle relative to another
particle. It is the difference in position of the two particles.
If point A has position PA = (xA, yA, zA) and point B has position PB = (xB, yB, zB), the
displacement RB/A of B from A is given by

\mathbf{R}_{B/A} = \mathbf{P}_B - \mathbf{P}_A

Geometrically, the relative position vector RB/A is the vector from point A to point B. The values
of the coordinate vectors of points vary with the choice of coordinate frame; however, the relative
position vector between a pair of points has the same length no matter which coordinate frame is
used, and is said to be frame invariant.
To describe the motion of a particle B relative to another particle A, we note that the position of B
can be formulated as the position of A plus the position of B relative to A, that is

\mathbf{P}_B = \mathbf{P}_A + \mathbf{R}_{B/A}
Relative velocity
Main article: Relative velocity

Relative velocities between two particles in classical mechanics.

The relations between relative position vectors become relations between relative velocities by
computing the time derivative. The second time derivative yields relations for relative
accelerations.
For example, let particle B move with velocity VB and particle A move with velocity VA in a
given reference frame. Then the velocity of B relative to A is given by

\mathbf{V}_{B/A} = \mathbf{V}_B - \mathbf{V}_A

This can be obtained by computing the time derivative of the relative position vector RB/A.
This equation provides a formula for the velocity of B in terms of the velocity of A and its
relative velocity,

\mathbf{V}_B = \mathbf{V}_A + \mathbf{V}_{B/A}

At large velocities V, where the ratio V/c is significant (c being the speed of light), another
scheme of relative velocity called rapidity, which depends on this ratio, is used in special relativity.
Particle trajectories under constant acceleration
Newton's laws state that a constant force acting on a particle generates a constant acceleration.
For example, a particle in a parallel gravity field experiences a force acting downwards that is
proportional to the constant acceleration of gravity, and no force in the horizontal direction; this
is called projectile motion.
If the acceleration vector A of a particle P is constant in magnitude and direction, the particle is
said to be undergoing uniformly accelerated motion. In this case, the trajectory P(t) of the particle
can be obtained by integrating the acceleration A with respect to time.
The first integral yields the velocity of the particle,

\mathbf{V}(t) = \mathbf{A}\,t + \mathbf{V}_0

A second integration yields its trajectory,

\mathbf{P}(t) = \tfrac{1}{2}\,\mathbf{A}\,t^2 + \mathbf{V}_0\,t + \mathbf{P}_0

Additional relations between displacement, velocity, acceleration, and time can be derived. Since
A = (V - V0)/t,

\mathbf{P}(t) = \mathbf{P}_0 + \left(\frac{\mathbf{V} + \mathbf{V}_0}{2}\right)t

By using the definition of an average, this equation states that when the acceleration is constant,
the average velocity times time equals displacement.
A relationship without explicit time dependence may also be derived using the relation A t = V - V0,

\left(\mathbf{P} - \mathbf{P}_0\right)\cdot\mathbf{A}\,t = \left(\frac{\mathbf{V} + \mathbf{V}_0}{2}\right)\cdot\left(\mathbf{V} - \mathbf{V}_0\right)t

where · denotes the dot product. Dividing both sides by t and expanding the dot products gives

2\left(\mathbf{P} - \mathbf{P}_0\right)\cdot\mathbf{A} = |\mathbf{V}|^2 - |\mathbf{V}_0|^2

In the case of straight-line motion, where P and P0 are parallel to A, this equation becomes

2\left(\mathbf{P} - \mathbf{P}_0\right)\mathbf{A} = \mathbf{V}^2 - \mathbf{V}_0^2

This can be simplified using the notation |A| = a, |V| = v, and |P| = r, so

v^2 = v_0^2 + 2a\left(r - r_0\right)

This relation is useful when time is not known explicitly.
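As a brief worked example of the time-free relation above, consider a vehicle decelerating from v_0 = 20 m/s at a constant a = -5 m/s² (numbers chosen purely for illustration):

```latex
0 = v_0^2 + 2a\,(r - r_0)
\quad\Rightarrow\quad
r - r_0 = \frac{-v_0^2}{2a} = \frac{-(20\ \text{m/s})^2}{2\,(-5\ \text{m/s}^2)} = 40\ \text{m}
```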

Figure 2: Velocity and acceleration for nonuniform circular motion: the velocity vector is
tangential to the orbit, but the acceleration vector is not radially inward because of its tangential
component a_θ that increases the rate of rotation: dω/dt = |a_θ|/R.
Particle trajectories in cylindrical-polar coordinates
See also: Generalized coordinates, Curvilinear coordinates, Orthogonal coordinates and Frenet-Serret formulas
It is often convenient to formulate the trajectory of a particle P(t) = (X(t), Y(t), Z(t)) using
polar coordinates in the X-Y plane. In this case, its velocity and acceleration take a convenient
form.
Recall that the trajectory of a particle P is defined by its coordinate vector P measured in a fixed
reference frame F. As the particle moves, its coordinate vector P(t) traces its trajectory, which is a
curve in space, given by

\mathbf{P}(t) = X(t)\,\mathbf{i} + Y(t)\,\mathbf{j} + Z(t)\,\mathbf{k}

where i, j, and k are the unit vectors along the X, Y and Z axes of the reference frame F,
respectively.
Consider a particle P that moves on the surface of a circular cylinder; it is possible to align the Z
axis of the fixed frame F with the axis of the cylinder. Then the angle θ around this axis in the X-Y
plane can be used to define the trajectory as

\mathbf{P}(t) = R\cos\theta(t)\,\mathbf{i} + R\sin\theta(t)\,\mathbf{j} + Z(t)\,\mathbf{k}

The cylindrical coordinates for P(t) can be simplified by introducing the radial and tangential unit
vectors,

\mathbf{e}_r = \cos\theta(t)\,\mathbf{i} + \sin\theta(t)\,\mathbf{j}, \qquad \mathbf{e}_t = -\sin\theta(t)\,\mathbf{i} + \cos\theta(t)\,\mathbf{j}

Using this notation, P(t) takes the form

\mathbf{P}(t) = R\,\mathbf{e}_r + Z(t)\,\mathbf{k}

where R is constant.
In general, the trajectory P(t) is not constrained to lie on a circular cylinder, so the radius R varies
with time, and the trajectory in cylindrical-polar coordinates becomes

\mathbf{P}(t) = R(t)\,\mathbf{e}_r + Z(t)\,\mathbf{k}

The velocity vector VP is the time derivative of the trajectory P(t), which yields

\mathbf{V}_P = \dot{R}\,\mathbf{e}_r + R\dot{\theta}\,\mathbf{e}_t + \dot{Z}\,\mathbf{k}

where

\frac{d\mathbf{e}_r}{dt} = \dot{\theta}\,\mathbf{e}_t

In this case, the acceleration AP, which is the time derivative of the velocity VP, is given by

\mathbf{A}_P = \bigl(\ddot{R} - R\dot{\theta}^2\bigr)\mathbf{e}_r + \bigl(R\ddot{\theta} + 2\dot{R}\dot{\theta}\bigr)\mathbf{e}_t + \ddot{Z}\,\mathbf{k}
Constant radius
If the trajectory of the particle is constrained to lie on a cylinder, then the radius R is constant and
the velocity and acceleration vectors simplify. The velocity VP is the time derivative of the
trajectory P(t),

\mathbf{V}_P = R\dot{\theta}\,\mathbf{e}_t + \dot{Z}\,\mathbf{k}

The acceleration vector becomes

\mathbf{A}_P = -R\dot{\theta}^2\,\mathbf{e}_r + R\ddot{\theta}\,\mathbf{e}_t + \ddot{Z}\,\mathbf{k}
Planar circular trajectories

Each particle on the wheel travels in a planar circular trajectory (Kinematics of Machinery, 1876).[13]

A special case of a particle trajectory on a circular cylinder occurs when there is no movement
along the Z axis:

\mathbf{P}(t) = R\,\mathbf{e}_r + Z_0\,\mathbf{k}

where R and Z0 are constants. In this case, the velocity VP is given by

\mathbf{V}_P = R\dot{\theta}\,\mathbf{e}_t

where

\dot{\theta}

is the angular velocity of the unit vector e_t around the z axis of the cylinder.
The acceleration AP of the particle P is now given by

\mathbf{A}_P = -R\dot{\theta}^2\,\mathbf{e}_r + R\ddot{\theta}\,\mathbf{e}_t

The components

a_r = -R\dot{\theta}^2, \qquad a_t = R\ddot{\theta}

are called, respectively, the radial and tangential components of acceleration.
The notation for angular velocity and angular acceleration is often defined as

\omega = \dot{\theta}, \qquad \alpha = \ddot{\theta}

so the radial and tangential acceleration components for circular trajectories are also written as

a_r = -R\omega^2, \qquad a_t = R\alpha
Point trajectories in a body moving in the plane

The movement of components of a mechanical system is analyzed by attaching a reference frame
to each part and determining how the reference frames move relative to each other. If the
structural strength of the parts is sufficient, their deformation can be neglected and rigid
transformations can be used to define this relative movement. This brings geometry into the study of
mechanical movement.
Geometry is the study of the properties of figures that remain the same while the space is
transformed in various ways; more technically, it is the study of invariants under a set of
transformations.[14] Perhaps best known is high-school Euclidean geometry, where planar
triangles are studied under congruent transformations (also called isometries or rigid
transformations). These transformations displace the triangle in the plane without changing the
angle at each vertex or the distances between vertices. Kinematics is often described as applied
geometry, where the movement of a mechanical system is described using the rigid
transformations of Euclidean geometry.
The coordinates of points in the plane are two-dimensional vectors in R2, so rigid
transformations are those that preserve the distance measured between any two points. The
Euclidean distance formula is simply the Pythagorean theorem. The set of rigid transformations
in an n-dimensional space is called the special Euclidean group on Rn, and denoted SE(n).
Displacements and motion

The movement of each of the components of the Boulton & Watt Steam Engine (1784) is
modeled by a continuous set of rigid displacements.
The position of one component of a mechanical system relative to another is defined by
introducing a reference frame, say M, on one that moves relative to a fixed frame, F, on the other.
The rigid transformation, or displacement, of M relative to F defines the relative position of the
two components. A displacement consists of the combination of a rotation and a translation.
The set of all displacements of M relative to F is called the configuration space of M. A smooth
curve from one position to another in this configuration space is a continuous set of
displacements, called the motion of M relative to F. The motion of a body consists of a
continuous set of rotations and translations.
Matrix representation
The combination of a rotation and translation in the plane R2 can be represented by a certain type
of 3x3 matrix known as a homogeneous transform. The 3x3 homogeneous transform is
constructed from a 2x2 rotation matrix A(φ) and the 2x1 translation vector d = (dx, dy), as:

T(φ, d) = | cos φ  −sin φ  dx |
          | sin φ   cos φ  dy |
          |   0       0     1 |
These homogeneous transforms perform rigid transformations on the points in the plane z=1, that
is on points with coordinates p=(x, y, 1).

In particular, let p define the coordinates of points in a reference frame M coincident with a fixed
frame F. Then, when the origin of M is displaced by the translation vector d relative to the origin
of F and rotated by the angle φ relative to the x-axis of F, the new coordinates in F of points in M
are given by:

P = T(φ, d) p = A(φ) p + d.
Homogeneous transforms represent affine transformations. This formulation is necessary because
a translation is not a linear transformation of R2. However, using projective geometry, so that R2
is considered to be a subset of R3, translations become affine linear transformations.[15]
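A minimal Python sketch of this construction (the helper name homogeneous_transform is introduced here, not taken from the source) builds the 3x3 matrix from an angle and a translation and applies it to a point written in homogeneous coordinates:

```python
import numpy as np

def homogeneous_transform(phi, dx, dy):
    """3x3 planar homogeneous transform: a 2x2 rotation plus a translation."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s, dx],
                     [s,  c, dy],
                     [0.0, 0.0, 1.0]])

# A point given in the moving frame M, in homogeneous coordinates (x, y, 1).
p = np.array([1.0, 0.0, 1.0])

# Displace M by a 90-degree rotation plus the translation d = (2, 3).
T = homogeneous_transform(np.pi / 2, 2.0, 3.0)
print(np.round(T @ p, 6))    # [2. 4. 1.]  -> the point lands at (2, 4) in the fixed frame F

# Displacements compose by plain matrix multiplication.
T2 = homogeneous_transform(np.pi, 0.0, 0.0) @ T
print(np.round(T2 @ p, 6))   # [-2. -4.  1.]
```

The last two lines are the practical payoff of the homogeneous form: chaining displacements never needs a separate treatment of the rotation and translation parts.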
Pure translation
If a rigid body moves so that its reference frame M does not rotate relative to the fixed frame F,
the motion is said to be pure translation. In this case, the trajectory of every point in the body is
an offset of the trajectory d(t) of the origin of M, that is:

P(t) = d(t) + p.

Thus, for bodies in pure translation, the velocity and acceleration of every point P in the body are
given by:

V_P = V_O = ḋ(t),   A_P = A_O = d̈(t),
where the dot denotes the derivative with respect to time and VO and AO are the velocity and
acceleration, respectively, of the origin of the moving frame M. Recall the coordinate vector p in
M is constant, so its derivative is zero.
Rotation of a body around a fixed axis
Main article: Circular motion

Figure 1: The angular velocity vector Ω points up for counterclockwise rotation and down for
clockwise rotation, as specified by the right-hand rule. Angular position θ(t) changes with time at
a rate ω(t) = dθ/dt.
Rotational or angular kinematics is the description of the rotation of an object.[16] The
description of rotation requires some method for describing orientation. Common descriptions
include Euler angles and the kinematics of turns induced by algebraic products.
In what follows, attention is restricted to simple rotation about an axis of fixed orientation. The z-axis has been chosen for convenience.
Position
This allows the description of a rotation as the angular position of a planar reference frame M
relative to a fixed F about this shared z-axis. Coordinates p=(x, y) in M are related to coordinates
P=(X, Y) in F by the matrix equation:

P(t) = A(t) p,

where

A(t) = | cos θ(t)  −sin θ(t) |
       | sin θ(t)   cos θ(t) |

is the rotation matrix that defines the angular position of M relative to F.


Velocity
If the point p does not move in M, its velocity in F is given by

V_P = Ȧ(t) p.

It is convenient to eliminate the coordinates p and write this as an operation on the trajectory P(t),

V_P = Ȧ(t) A(t)⁻¹ P(t) = [Ω] P(t),

where the matrix

[Ω] = | 0  −ω |
      | ω   0 |

is known as the angular velocity matrix of M relative to F. The parameter ω is the time derivative
of the angle θ, that is:

ω = dθ/dt.
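A short numerical check (added here; the angle function is an arbitrary choice) confirms that Ȧ Aᵀ is the skew-symmetric angular velocity matrix with ω = dθ/dt in the off-diagonal entries:

```python
import numpy as np

theta = lambda t: 0.3 + 1.7 * t          # angular position, rad (arbitrary choice)

def A(t):
    c, s = np.cos(theta(t)), np.sin(theta(t))
    return np.array([[c, -s], [s, c]])   # planar rotation matrix

t, h = 0.8, 1e-6
Adot = (A(t + h) - A(t - h)) / (2 * h)   # numerical dA/dt
Omega = Adot @ A(t).T                    # angular velocity matrix [Omega] = Adot A^T

print(np.round(Omega, 6))                # ~[[0, -1.7], [1.7, 0]], since d(theta)/dt = 1.7
```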
Acceleration
The acceleration of P(t) in F is obtained as the time derivative of the velocity,

A_P = d([Ω] P(t))/dt = [dΩ/dt] P(t) + [Ω] dP(t)/dt,

which becomes

A_P = [dΩ/dt] P(t) + [Ω][Ω] P(t),

where

[dΩ/dt] = | 0  −α |
          | α   0 |

is the angular acceleration matrix of M on F, and

α = dω/dt = d²θ/dt².
The description of rotation then involves these three quantities:

Angular position: the oriented distance from a selected origin on the rotational axis to a
point of an object is a vector r(t) locating the point. The vector r(t) has some projection (or,
equivalently, some component) r⊥(t) on a plane perpendicular to the axis of rotation. Then the
angular position of that point is the angle θ from a reference axis (typically the positive x-axis) to
the vector r⊥(t) in a known rotation sense (typically given by the right-hand rule).

Angular velocity: the angular velocity ω is the rate at which the angular position θ
changes with respect to time t:

ω = dθ/dt.
The angular velocity is represented in Figure 1 by a vector Ω pointing along the axis of rotation
with magnitude ω and sense determined by the direction of rotation as given by the right-hand
rule.

Angular acceleration: the magnitude of the angular acceleration α is the rate at which the
angular velocity ω changes with respect to time t:

α = dω/dt.
The equations of translational kinematics can easily be extended to planar rotational kinematics
for constant angular acceleration with simple variable exchanges:

ω_f = ω_i + α t,
θ_f = θ_i + ω_i t + ½ α t²,
ω_f² = ω_i² + 2 α (θ_f − θ_i).

Here θ_i and θ_f are, respectively, the initial and final angular positions, ω_i and ω_f are, respectively,
the initial and final angular velocities, and α is the constant angular acceleration. Although
position in space and velocity in space are both true vectors (in terms of their properties under
rotation), as is angular velocity, angle itself is not a true vector.
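As a worked example (arbitrary values, added here for illustration), the constant-angular-acceleration formulas can be evaluated and then checked against the time-free relation:

```python
theta_i = 0.0    # initial angular position, rad
omega_i = 2.0    # initial angular velocity, rad/s
alpha = 0.5      # constant angular acceleration, rad/s^2
t = 4.0          # elapsed time, s

omega_f = omega_i + alpha * t
theta_f = theta_i + omega_i * t + 0.5 * alpha * t**2
print(omega_f, theta_f)    # 4.0  12.0

# Time-free check: omega_f^2 = omega_i^2 + 2*alpha*(theta_f - theta_i)
print(abs(omega_f**2 - (omega_i**2 + 2 * alpha * (theta_f - theta_i))) < 1e-12)   # True
```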
Point trajectories in a body moving in three dimensions
Important formulas in kinematics define the velocity and acceleration of points in a moving body
as they trace trajectories in three-dimensional space. This is particularly important for the center
of mass of a body, which is used to derive equations of motion using either Newton's second law
or Lagrange's equations.
Position
In order to define these formulas, the movement of a component B of a mechanical system is
defined by the set of rotations [A(t)] and translations d(t) assembled into the homogeneous
transformation [T(t)] = [A(t), d(t)]. If p is the coordinates of a point P in B measured in the moving
reference frame M, then the trajectory of this point traced in F is given by:

P(t) = [T(t)] p = [A(t)] p + d(t).
This notation does not distinguish between P = (X, Y, Z, 1), and P = (X, Y, Z), which is hopefully
clear in context.
This equation for the trajectory of P can be inverted to compute the coordinate vector p in M as:

p = [A(t)]ᵀ (P(t) − d(t)).

This expression uses the fact that the transpose of a rotation matrix is also its inverse, that is:

[A(t)]ᵀ [A(t)] = I.
Velocity

The velocity of the point P along its trajectory P(t) is obtained as the time derivative of this
position vector,

V_P = [Ȧ(t)] p + ḋ(t).
The dot denotes the derivative with respect to time; because p is constant, its derivative is zero.
This formula can be modified to obtain the velocity of P by operating on its trajectory P(t)
measured in the fixed frame F. Substituting the inverse transform for p into the velocity equation
yields:

V_P = [Ṫ(t)][T(t)]⁻¹ P(t) = [S] P(t).

The matrix [S] is given by:

[S] = | [Ω]   ḋ − [Ω] d |
      |  0        0     |

where

[Ω] = [Ȧ][A]ᵀ
is the angular velocity matrix.


Multiplying by the operator [S], the formula for the velocity VP takes the form:

V_P = ω × R_P/O + V_O,

where the vector ω is the angular velocity vector obtained from the components of the matrix [Ω];
the vector

R_P/O = P(t) − d(t)

is the position of P relative to the origin O of the moving frame M; and

V_O = ḋ(t)
is the velocity of the origin O.

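A small Python sketch (all values are arbitrary; the variable names mirror the symbols above) evaluates V_P = ω × R_P/O + V_O for one point of a moving body:

```python
import numpy as np

omega = np.array([0.0, 0.0, 3.0])   # angular velocity vector, rad/s (about the z axis)
d = np.array([1.0, 2.0, 0.0])       # position of the moving-frame origin O, m
V_O = np.array([0.5, 0.0, 0.0])     # velocity of O, m/s
P = np.array([2.0, 2.0, 0.0])       # current position of the body point P, m

R_PO = P - d                        # position of P relative to O
V_P = np.cross(omega, R_PO) + V_O
print(V_P)                          # [0.5 3.  0. ]
```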

Acceleration
The acceleration of a point P in a moving body B is obtained as the time derivative of its velocity
vector:

A_P = dV_P/dt = d(ω × R_P/O + V_O)/dt.

This equation can be expanded firstly by computing

d(ω × R_P/O)/dt = α × R_P/O + ω × dR_P/O/dt = α × R_P/O + ω × (V_P − V_O)

and

dV_O/dt = A_O.

The formula for the acceleration AP can now be obtained as:

A_P = α × R_P/O + ω × (V_P − V_O) + A_O,

or

A_P = α × R_P/O + ω × (ω × R_P/O) + A_O,

where α is the angular acceleration vector obtained from the derivative of the angular velocity
matrix;

R_P/O = P(t) − d(t)

is the relative position vector; and

A_O = d̈(t)
is the acceleration of the origin of the moving frame M.


Kinematic constraints
Kinematic constraints are constraints on the movement of components of a mechanical system.
Kinematic constraints can be considered to have two basic forms, (i) constraints that arise from
hinges, sliders and cam joints that define the construction of the system, called holonomic
constraints, and (ii) constraints imposed on the velocity of the system such as the knife-edge
constraint of ice-skates on a flat plane, or rolling without slipping of a disc or sphere in contact
with a plane, which are called non-holonomic constraints. The following are some common
examples.
Kinematic coupling
A kinematic coupling exactly constrains all 6 degrees of freedom.
Rolling without slipping
An object that rolls against a surface without slipping obeys the condition that the velocity of its
center of mass is equal to the cross product of its angular velocity with a vector from the point of
contact to the center of mass:

V_CM = Ω × r.

For the case of an object that does not tip or turn, this reduces to v = rω.
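A minimal numeric illustration of the rolling constraint (wheel radius and spin rate chosen arbitrarily):

```python
import numpy as np

R = 0.35                              # wheel radius, m
Omega = np.array([0.0, 10.0, 0.0])    # angular velocity about the axle (the y axis), rad/s
r = np.array([0.0, 0.0, R])           # vector from the contact point up to the centre of mass

V_cm = np.cross(Omega, r)
print(V_cm)        # [3.5 0.  0. ]  -> the centre advances at R*|Omega| = 3.5 m/s along x
```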
Inextensible cord
This is the case where bodies are connected by an idealized cord that remains in tension and
cannot change length. The constraint is that the sum of lengths of all segments of the cord is the
total length, and accordingly the time derivative of this sum is zero.[17][18][19] A dynamic
problem of this type is the pendulum. Another example is a drum turned by the pull of gravity
upon a falling weight attached to the rim by the inextensible cord.[20] An equilibrium problem
(i.e. not kinematic) of this type is the catenary.[21]
Kinematic pairs
Main article: Kinematic pair

Reuleaux called the ideal connections between components that form a machine kinematic pairs.
He distinguished between higher pairs which were said to have line contact between the two
links and lower pairs that have area contact between the links. J. Phillips shows that there are
many ways to construct pairs that do not fit this simple classification.[22]
Lower pair
A lower pair is an ideal joint, or holonomic constraint, that maintains contact between a point,
line or plane in a moving solid (three-dimensional) body and a corresponding point, line or plane in
the fixed solid body. There are the following cases:

A revolute pair, or hinged joint, requires a line, or axis, in the moving body to remain colinear with a line in the fixed body, and a plane perpendicular to this line in the moving body
maintain contact with a similar perpendicular plane in the fixed body. This imposes five
constraints on the relative movement of the links, which therefore has one degree of freedom,
which is pure rotation about the axis of the hinge.

A prismatic joint, or slider, requires that a line, or axis, in the moving body remain colinear with a line in the fixed body, and a plane parallel to this line in the moving body maintain
contact with a similar parallel plane in the fixed body. This imposes five constraints on the
relative movement of the links, which therefore has one degree of freedom. This degree of
freedom is the distance of the slide along the line.

A cylindrical joint requires that a line, or axis, in the moving body remain co-linear with a
line in the fixed body. It is a combination of a revolute joint and a sliding joint. This joint has two
degrees of freedom. The position of the moving body is defined by both the rotation about and
slide along the axis.

A spherical joint, or ball joint, requires that a point in the moving body maintain contact
with a point in the fixed body. This joint has three degrees of freedom.

A planar joint requires that a plane in the moving body maintain contact with a plane in the
fixed body. This joint has three degrees of freedom.
Higher pairs
Generally speaking, a higher pair is a constraint that requires a curve or surface in the moving
body to maintain contact with a curve or surface in the fixed body. For example, the contact
between a cam and its follower is a higher pair called a cam joint. Similarly, the contact between
the involute curves that form the meshing teeth of two gears is also a cam joint.
Kinematic chains

Illustration of a four-bar linkage, from http://en.wikisource.org/wiki/The_Kinematics_of_Machinery (The Kinematics of Machinery, 1876)
Rigid bodies ("links") connected by kinematic pairs ("joints") are known as kinematic chains.
Mechanisms and robots are examples of kinematic chains. The degree of freedom of a kinematic
chain is computed from the number of links and the number and type of joints using the mobility
formula. This formula can also be used to enumerate the topologies of kinematic chains that have
a given degree of freedom, which is known as type synthesis in machine design.
Examples
The planar one degree-of-freedom linkages assembled from N links and j hinged or sliding joints
are:

N=2, j=1 : a two-bar linkage that is the lever;

N=4, j=4 : the four-bar linkage;

N=6, j=7 : a six-bar linkage. This must have two links ("ternary links") that support three
joints. There are two distinct topologies that depend on how the two ternary links are
connected. In the Watt topology, the two ternary links have a common joint; in the Stephenson
topology, the two ternary links do not have a common joint and are connected by binary links.
[23]

N=8, j=10 : eight-bar linkage with 16 different topologies;

N=10, j=13 : ten-bar linkage with 230 different topologies;

N=12, j=16 : twelve-bar linkage with 6,856 topologies.

For larger chains and their linkage topologies, see R. P. Sunkari and L. C. Schmidt, "Structural
synthesis of planar kinematic chains by adapting a McKay-type algorithm", Mechanism and
Machine Theory 41, pp. 1021–1030 (2006).
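The one-degree-of-freedom counts listed above can be reproduced with the planar Chebychev–Grübler–Kutzbach mobility formula M = 3(N − 1) − 2j, assuming every joint is a hinge or slider removing two degrees of freedom (the helper function below is added here for illustration):

```python
def planar_mobility(n_links, n_joints):
    """Chebychev-Grubler-Kutzbach mobility of a planar chain whose joints
    (hinges or sliders) each remove two degrees of freedom."""
    return 3 * (n_links - 1) - 2 * n_joints

# The examples listed above:
for n, j in [(2, 1), (4, 4), (6, 7), (8, 10), (10, 13), (12, 16)]:
    print(f"N={n:2d}, j={j:2d} -> mobility {planar_mobility(n, j)}")   # every case gives 1
```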
See also

Acceleration

Analytical mechanics

Applied mechanics

Celestial mechanics

Centripetal force

Classical mechanics

Distance

Dynamics (physics)

Fictitious force

Forward kinematics

Four-bar linkage

Inverse kinematics

Jerk (physics)

Kepler's laws

Kinematic diagram

Kinetics (physics)

Motion (physics)

Orbital mechanics

Statics

Velocity

Chebychev–Grübler–Kutzbach criterion

Kinematic coupling

References
1. Edmund Taylor Whittaker (1904). A Treatise on the Analytical Dynamics of Particles and Rigid Bodies. Cambridge University Press. Chapter 1. ISBN 0-521-35883-3.
2. Joseph Stiles Beggs (1983). Kinematics. Taylor & Francis. p. 1. ISBN 0-89116-355-7.
3. Thomas Wallace Wright (1896). Elements of Mechanics Including Kinematics, Kinetics and Statics. E and FN Spon. Chapter 1.
4. Russell C. Hibbeler (2009). "Kinematics and kinetics of a particle". Engineering Mechanics: Dynamics (12th ed.). Prentice Hall. p. 298. ISBN 0-13-607791-9.
5. Ahmed A. Shabana (2003). "Reference kinematics". Dynamics of Multibody Systems (2nd ed.). Cambridge University Press. ISBN 978-0-521-54411-5.
6. P. P. Teodorescu (2007). "Kinematics". Mechanical Systems, Classical Models: Particle Mechanics. Springer. p. 287. ISBN 1-4020-5441-6.
7. A. Biewener (2003). Animal Locomotion. Oxford University Press. ISBN 019850022X.
8. J. M. McCarthy and G. S. Soh (2010). Geometric Design of Linkages. Springer, New York.
9. Ampère, André-Marie. Essai sur la Philosophie des Sciences. Chez Bachelier.
10. Merz, John (1903). A History of European Thought in the Nineteenth Century. Blackwood, London. p. 5.
11. O. Bottema & B. Roth (1990). Theoretical Kinematics. Dover Publications. Preface, p. 5. ISBN 0-486-66346-9.
12. Harper, Douglas. "cinema". Online Etymology Dictionary.
13. Reuleaux, F.; Kennedy, Alex B. W. (1876). The Kinematics of Machinery: Outlines of a Theory of Machines. London: Macmillan.
14. Geometry: the study of properties of given elements that remain invariant under specified transformations. "Definition of geometry". Merriam-Webster on-line dictionary.
15. Paul, Richard (1981). Robot Manipulators: Mathematics, Programming, and Control; the Computer Control of Robot Manipulators. MIT Press, Cambridge, MA. ISBN 978-0-262-16082-7.
16. R. Douglas Gregory (2006). Chapter 16. Cambridge, England: Cambridge University Press. ISBN 0-521-82678-0.
17. William Thomson Kelvin & Peter Guthrie Tait (1894). Elements of Natural Philosophy. Cambridge University Press. p. 4. ISBN 1-57392-984-0.
18. William Thomson Kelvin & Peter Guthrie Tait (1894). Elements of Natural Philosophy. p. 296.
19. M. Fogiel (1980). "Problem 17-11". The Mechanics Problem Solver. Research & Education Association. p. 613. ISBN 0-87891-519-2.
20. Irving Porter Church (1908). Mechanics of Engineering. Wiley. p. 111. ISBN 1-110-36527-6.
21. Morris Kline (1990). Mathematical Thought from Ancient to Modern Times. Oxford University Press. p. 472. ISBN 0-19-506136-5.
22. Phillips, Jack (2007). Freedom in Machinery, Volumes 1–2 (reprint ed.). Cambridge University Press. ISBN 978-0-521-67331-0.
23. Tsai, Lung-Wen (2001). Mechanism Design: Enumeration of Kinematic Structures According to Function (illustrated ed.). CRC Press. p. 121. ISBN 978-0-8493-0901-4.
Further reading

Koetsier, Teun (1994), "8.3 Kinematics", in Grattan-Guinness, Ivor, Companion Encyclopedia of the History and Philosophy of the Mathematical Sciences, Vol. 2, Routledge, pp. 994–1001, ISBN 0-415-09239-6.

Moon, Francis C. (2007). The Machines of Leonardo Da Vinci and Franz Reuleaux, Kinematics of Machines from the Renaissance to the 20th Century. Springer. ISBN 978-1-4020-5598-0.

Eduard Study (1913), D. H. Delphenich translator, "Foundations and goals of analytical kinematics".
External links

Look up kinematics in Wiktionary, the free dictionary.

Java applet of 1D kinematics

Physclips: Mechanics with animations and video clips from the University of New South
Wales.

Kinematic Models for Design Digital Library (KMODDL), featuring movies and photos
of hundreds of working models of mechanical systems at Cornell University and an e-book
library of classic texts on mechanical design and engineering.

Micro-Inch Positioning with Kinematic Components


Speed of gravity

In classical theories of gravitation, the speed of gravity is the speed at which changes in a
gravitational field propagate. This is the speed at which a change in the distribution of energy and
momentum of matter results in subsequent alteration, at a distance, of the gravitational field
which it produces. In a more physically correct sense, the "speed of gravity" refers to the speed of
a gravitational wave, which should be the same speed as the speed of light (c).

Introduction
The speed of gravitational waves in the general theory of relativity is equal to the speed of light
in vacuum, c.[1] Within the theory of special relativity, the constant c is not exclusively about
light; instead it is the highest possible speed for any interaction in nature. Formally, c is a
conversion factor for changing the unit of time to the unit of space.[2] This makes it the only
speed which does not depend either on the motion of an observer or a source of light and/or
gravity. Thus, the speed of "light" is also the speed of gravitational waves and any other massless
particle. Such particles include the gluon (carrier of the strong force), the photons that make up
light, and the theoretical gravitons, which make up the associated field particles of gravity
(however, a theory of the graviton requires a theory of quantum gravity).
Static fields
The speed of physical changes in a gravitational or electromagnetic field should not be confused
with "changes" in the behavior of static fields that are due to pure observer-effects. These
changes in direction of a static field, because of relativistic considerations, are the same for an
observer when a distant charge is moving, as when an observer (instead) decides to move with
respect to a distant charge. Thus, constant motion of an observer with regard to a static charge
and its extended static field (either a gravitational or electric field) does not change the field. For
static fields, such as the electrostatic field connected with electric charge, or the gravitational
field connected to a massive object, the field extends to infinity, and does not propagate. Motion
of an observer does not cause the direction of such a field to change, and by symmetrical
considerations, changing the observer frame so that the charge appears to be moving at a constant
rate, also does not cause the direction of its field to change, but requires that it continue to "point"
in the direction of the charge, at all distances from the charge.
The consequence of this is that static fields (either electric or gravitational) always point directly
to the actual position of the bodies that they are connected to, without any delay that is due to any
"signal" traveling (or propagating) from the charge, over a distance to an observer. This remains
true if the charged bodies and their observers are made to "move" (or not), by simply changing
reference frames. This fact sometimes causes confusion about the "speed" of such static fields,

which sometimes appear to change infinitely quickly when the changes in the field are mere
artifacts of the motion of the observer, or of observation.
In such cases, nothing actually changes infinitely quickly, save the point of view of an observer
of the field. For example, when an observer begins to move with respect to a static field that
already extends over light years, it appears as though "immediately" the entire field, along with
its source, has begun moving at the speed of the observer. This, of course, includes the extended
parts of the field. However, this "change" in the apparent behavior of the field source, along with
its distant field, does not represent any sort of propagation that is faster than light.
Newtonian gravitation
Isaac Newton's formulation of a gravitational force law requires that each particle with mass
respond instantaneously to every other particle with mass irrespective of the distance between
them. In modern terms, Newtonian gravitation is described by the Poisson equation, according to
which, when the mass distribution of a system changes, its gravitational field instantaneously
adjusts. Therefore the theory assumes the speed of gravity to be infinite. This assumption was
adequate to account for all phenomena with the observational accuracy of that time. It was not
until the 19th century that an anomaly in astronomical observations which could not be
reconciled with the Newtonian gravitational model of instantaneous action was noted: the French
astronomer Urbain Le Verrier determined in 1859 that the elliptical orbit of Mercury precesses at
a significantly different rate from that predicted by Newtonian theory.[3]
Laplace
The first attempt to combine a finite gravitational speed with Newton's theory was made by
Laplace in 1805. Based on Newton's force law he considered a model in which the gravitational
field is defined as a radiation field or fluid. Changes in the motion of the attracting body are
transmitted by some sort of waves.[4] Therefore, the movements of the celestial bodies should be
modified in the order v/c, where v is the relative speed between the bodies and c is the speed of
gravity. The effect of a finite speed of gravity goes to zero as c goes to infinity, but not as 1/c2 as
it does in modern theories. This led Laplace to conclude that the speed of gravitational
interactions is at least 7106 times the speed of light. This velocity was used by many in the 19th
century to criticize any model based on a finite speed of gravity, like electrical or mechanical
explanations of gravitation.
From a modern point of view, Laplace's analysis is incorrect. Not knowing about Lorentz'
invariance of static fields, Laplace assumed that when an object like the Earth is moving around
the Sun, the attraction of the Earth would not be toward the instantaneous position of the Sun, but
toward where the Sun had been if its position was retarded using the relative velocity (this
retardation actually does happen with the optical position of the Sun, and is called annual solar
aberration). With the Sun immobile at the origin, the Earth moving in an orbit of
radius R with velocity v, and the gravitational influence presumed to move with velocity c, the
Sun's true position is ahead of its optical position by an amount equal to vR/c, which is the
travel time of gravity from the Sun to the Earth multiplied by the relative velocity of the Sun and
the Earth.
displaced in the direction of the Earth's velocity, so that the Earth would always be pulled toward
the optical position of the Sun, rather than its actual position. This would cause a pull ahead of

the Earth, which would cause the orbit of the Earth to spiral outward. Such an outspiral would be
suppressed by an amount v/c compared to the force which keeps the Earth in orbit; and since the
Earth's orbit is observed to be stable, Laplace's c must be very large. As is now known, it may be
considered to be infinite in the limit of straight-line motion, since as a static influence, it is
instantaneous at distance, when seen by observers at constant transverse velocity. For orbits in
which velocity (direction of speed) changes slowly, it is almost infinite.
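A back-of-the-envelope sketch of the numbers behind Laplace's aberration argument (the rounded orbital values are supplied here for illustration and are not part of the original text):

```python
# If gravity aberrated like light, the apparent pull on the Earth would be offset
# by roughly the angle v/c, and the Sun's true position would lead its retarded
# (optical) position by about v*R/c along the orbit.
v = 2.98e4       # Earth's orbital speed, m/s (rounded)
R = 1.496e11     # Earth-Sun distance, m
c = 2.998e8      # speed of light, m/s

print(f"v/c  = {v / c:.2e} rad  (~{v / c * 206265:.0f} arcseconds)")
print(f"vR/c = {v * R / c / 1e3:.0f} km offset between true and retarded positions")
```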
The attraction toward an object moving with a steady velocity is towards its instantaneous
position with no delay, for both gravity and electric charge. In a field equation consistent with
special relativity (i.e., a Lorentz invariant equation), the attraction between static charges moving
with constant relative velocity is always toward the instantaneous position of the charge (in this
case, the "gravitational charge" of the Sun), not the time-retarded position of the Sun. When an
object is moving in orbit at a steady speed but changing velocity v, the effect on the orbit is of order
v²/c², and the effect preserves energy and angular momentum, so that orbits do not decay.
Electrodynamical analogies
Early theories
At the end of the 19th century, many tried to combine Newton's force law with the established
laws of electrodynamics, like those of Wilhelm Eduard Weber, Carl Friedrich Gauss, Bernhard
Riemann and James Clerk Maxwell. Those theories are not invalidated by Laplace's critique,
because although they are based on finite propagation speeds, they contain additional terms
which maintain the stability of the planetary system. Those models were used to explain the
perihelion advance of Mercury, but they could not provide exact values. One exception was
Maurice Lévy in 1890, who succeeded in doing so by combining the laws of Weber and
Riemann, whereby the speed of gravity is equal to the speed of light. So those hypotheses were
rejected.[5][6]
However, a more important variation of those attempts was the theory of Paul Gerber, who
derived in 1898 the identical formula, which was also derived later by Einstein for the perihelion
advance. Based on that formula, Gerber calculated a propagation speed for gravity of 305 000
km/s, i.e. practically the speed of light. But Gerber's derivation of the formula was faulty, i.e., his
conclusions did not follow from his premises, and therefore many (including Einstein) did not
consider it to be a meaningful theoretical effort. Additionally, the value it predicted for the
deflection of light in the gravitational field of the Sun was too high by a factor of 3/2.[7][8][9]
Lorentz
In 1900 Hendrik Lorentz tried to explain gravity on the basis of his ether theory and the Maxwell
equations. After proposing (and rejecting) a Le Sage type model, he assumed, like Ottaviano
Fabrizio Mossotti and Johann Karl Friedrich Zöllner, that the attraction of oppositely charged
particles is stronger than the repulsion of equally charged particles. The resulting net force is
exactly what is known as universal gravitation, in which the speed of gravity is that of light. This
leads to a conflict with the law of gravitation by Isaac Newton, in which it was shown by Pierre-Simon Laplace that a finite speed of gravity leads to some sort of aberration and therefore makes
the orbits unstable. However, Lorentz showed that the theory is not affected by Laplace's
critique, because due to the structure of the Maxwell equations only effects of the order v²/c²

arise. But Lorentz calculated that the value for the perihelion advance of Mercury was much too
low. He wrote:[10]
The special form of these terms may perhaps be modified. Yet, what has been said is sufficient to
show that gravitation may be attributed to actions which are propagated with no greater velocity
than that of light.
In 1908 Henri Poincaré examined the gravitational theory of Lorentz and classified it as
compatible with the relativity principle, but (like Lorentz) he criticized the inaccurate value it
gave for the perihelion advance of Mercury.[11]
Lorentz covariant models
Henri Poincaré argued in 1904 that a propagation speed of gravity which is greater than c would
contradict the concept of local time (based on synchronization by light signals) and the principle
of relativity. He wrote:[12]
What would happen if we could communicate by signals other than those of light, the velocity of
propagation of which differed from that of light? If, after having regulated our watches by the
optimal method, we wished to verify the result by means of these new signals, we should observe
discrepancies due to the common translatory motion of the two stations. And are such signals
inconceivable, if we take the view of Laplace, that universal gravitation is transmitted with a
velocity a million times as great as that of light?
However, in 1905 Poincaré calculated that changes in the gravitational field can propagate with
the speed of light if it is presupposed that such a theory is based on the Lorentz transformation.
He wrote:[13]
Laplace showed in effect that the propagation is either instantaneous or much faster than that of
light. However, Laplace examined the hypothesis of finite propagation velocity ceteris non
mutatis; here, on the contrary, this hypothesis is conjoined with many others, and it may be that
between them a more or less perfect compensation takes place. The application of the Lorentz
transformation has already provided us with numerous examples of this.
Similar models were also proposed by Hermann Minkowski (1907) and Arnold Sommerfeld
(1910). However, those attempts were quickly superseded by Einstein's theory of general
relativity.[14] Whitehead's theory of gravitation (1922) explains gravitational red shift, light
bending, perihelion shift and Shapiro delay.[15]
General relativity
Background
General relativity predicts that gravitational radiation should exist and propagate as a wave at
lightspeed: a slowly evolving and weak gravitational field will produce, according to general
relativity, effects like those of Newtonian gravitation.
Suddenly displacing one of two gravitoelectrically interacting particles would, after a delay
corresponding to lightspeed, cause the other to feel the displaced particle's absence: accelerations
due to the change in quadrupole moment of star systems, like the Hulse–Taylor binary, have
removed much energy (almost 2% of the energy of our own Sun's output) as gravitational waves,
which would theoretically travel at the speed of light.
Two gravitoelectrically interacting particle ensembles, e.g., two planets or stars moving at
constant velocity with respect to each other, each feel a force toward the instantaneous position
of the other body without a speed-of-light delay because Lorentz invariance demands that what a
moving body in a static field sees and what a moving body that emits that field sees be
symmetrical.
A moving body's seeing no aberration in a static field emanating from a "motionless body"
therefore causes Lorentz invariance to require that in the previously moving body's reference
frame the (now moving) emitting body's field lines must not at a distance be retarded or aberred.
Moving charged bodies (including bodies that emit static gravitational fields) exhibit static field
lines that do not bend with distance and show no speed-of-light delay effects, as seen from bodies
moving with regard to them.
In other words, since the gravitoelectric field is, by definition, static and continuous, it does not
propagate. If such a source of a static field is accelerated (for example stopped) with regard to its
formerly constant velocity frame, its distant field continues to be updated as though the charged
body continued with constant velocity. This effect causes the distant fields of unaccelerated
moving charges to appear to be "updated" instantly for their constant velocity motion, as seen
from distant positions, in the frame where the source-object is moving at constant velocity.
However, as discussed, this is an effect which can be removed at any time, by transitioning to a
new reference frame in which the distant charged body is now at rest.
The static and continuous gravitoelectric component of a gravitational field is not a
gravitomagnetic component (gravitational radiation); see Petrov classification. The
gravitoelectric field is a static field and therefore cannot superluminally transmit quantized
(discrete) information, i.e., it could not constitute a well-ordered series of impulses carrying a
well-defined meaning (this is the same for gravity and electromagnetism).
Aberration of field direction in general relativity, for a weakly accelerated observer
Main article: Liénard–Wiechert potential
The finite speed of gravitational interaction in general relativity does not lead to the sorts of
problems with the aberration of gravity that Newton was originally concerned with, because there
is no such aberration in static field effects. Because the acceleration of the Earth with regard to
the Sun is small (meaning, to a good approximation, the two bodies can be regarded as traveling
in straight lines past each other with unchanging velocity) the orbital results calculated by general
relativity are the same as those of Newtonian gravity with instantaneous action at a distance,
because they are modelled by the behavior of a static field with constant-velocity relative motion,
and no aberration for the forces involved.[16] Although the calculations are considerably more
complicated, one can show that a static field in general relativity does not suffer from aberration
problems as seen by an unaccelerated observer (or a weakly accelerated observer, such as the
Earth). Analogously, the "static term" in the electromagnetic Liénard–Wiechert potential theory
of the fields from a moving charge does not suffer from either aberration or positional retardation. Only the term corresponding to acceleration and electromagnetic emission in the
Liénard–Wiechert potential shows a direction toward the time-retarded position of the emitter.

It is in fact not very easy to construct a self-consistent gravity theory in which gravitational
interaction propagates at a speed other than the speed of light, which complicates discussion of
this possibility.[17]
Formulaic conventions
In general relativity the metric tensor symbolizes the gravitational potential, and Christoffel
symbols of the spacetime manifold symbolize the gravitational force field. The tidal gravitational
field is associated with the curvature of spacetime.
Possible experimental measurements
The speed of gravity (more correctly, the speed of gravitational waves) can be calculated from
observations of the orbital decay rate of binary pulsars PSR 1913+16 (the Hulse–Taylor binary
system noted above) and PSR B1534+12. The orbits of these binary pulsars are decaying due to
loss of energy in the form of gravitational radiation. The rate of this energy loss ("gravitational
damping") can be measured, and since it depends on the speed of gravity, comparing the
measured values to theory shows that the speed of gravity is equal to the speed of light to within
1%.[18] However, in the parameterized post-Newtonian (PPN) formalism, measuring the speed of gravity by
comparing theoretical results with experimental results depends on the theory used; a theory
other than general relativity could in principle yield a different speed, although the
existence of gravitational damping at all implies that the speed cannot be infinite.[citation
needed]
In September 2002, Sergei Kopeikin and Edward Fomalont announced that they had made an
indirect measurement of the speed of gravity, using their data from a VLBI measurement of the
retarded position of Jupiter on its orbit during Jupiter's transit across the line of sight of the
bright radio source, the quasar QSO J0842+1835. Kopeikin and Fomalont concluded that the speed of
gravity is between 0.8 and 1.2 times the speed of light, which would be fully consistent with the
theoretical prediction of general relativity that the speed of gravity is exactly the same as the
speed of light.[19]
Several physicists, including Clifford M. Will and Steve Carlip, have criticized these claims on
the grounds that they have allegedly misinterpreted the results of their measurements. Notably,
prior to the actual transit, Hideki Asada in a paper to the Astrophysical Journal Letters theorized
that the proposed experiment was essentially a roundabout confirmation of the speed of light
instead of the speed of gravity.[20] However, Kopeikin and Fomalont continue to vigorously
argue their case and to defend the manner in which the result was presented at the AAS press
conference, which was offered after the results of the Jovian experiment had been peer reviewed
by the experts of the AAS scientific organizing committee. In a later publication, Kopeikin and
Fomalont, using a bi-metric formalism that splits the space-time null cone in two (one for gravity
and another for light), argued that Asada's claim was theoretically unsound.[21] The
two null cones overlap in general relativity, which makes tracking the speed-of-gravity effects
difficult and requires a special mathematical technique of gravitational retarded potentials, which
was worked out by Kopeikin and co-authors[22][23] but was never properly employed by Asada
or the other critics.

Stuart Samuel also suggested that the experiment did not actually measure the speed of gravity
because the effects were too small to have been measured.[24] A response by Kopeikin and
Fomalont challenges this opinion.[25]
It is important to understand that none of the participants in this controversy are claiming that
general relativity is "wrong". Rather, the debate concerns whether or not Kopeikin and Fomalont
have really provided yet another verification of one of its fundamental predictions. A
comprehensive review of the definition of the speed of gravity and its measurement with high-precision astrometric and other techniques appears in the textbook Relativistic Celestial
Mechanics in the Solar System.[26]
References
1. Hartle, J. B. (2003). Gravity: An Introduction to Einstein's General Relativity. Addison-Wesley. p. 332. ISBN 981-02-2749-3.
2. Taylor, Edwin F. and Wheeler, John Archibald, Spacetime Physics, 2nd edition, 1991, p. 12.
3. U. Le Verrier, Lettre de M. Le Verrier à M. Faye sur la théorie de Mercure et sur le mouvement du périhélie de cette planète, C. R. Acad. Sci. 49 (1859), 379–383.
4. Laplace, P. S. (1805). A Treatise in Celestial Mechanics, Volume IV, Book X, Chapter VII, translated by N. Bowditch (Chelsea, New York, 1966).
5. Zenneck, J. (1903). "Gravitation". Encyklopädie der mathematischen Wissenschaften mit Einschluss ihrer Anwendungen (in German) 5: 25–67. doi:10.1007/978-3-663-16016-8_2.
6. Roseveare, N. T. (1982). Mercury's Perihelion, from Le Verrier to Einstein. Oxford: University Press. ISBN 0-19-858174-2.
7. Gerber, P. (1898). "Die räumliche und zeitliche Ausbreitung der Gravitation". Zeitschrift für mathematische Physik (in German) 43: 93–104.
8. Zenneck, pp. 49–51.
9. "Gerber's Gravity". Mathpages. Retrieved 2 Dec 2010.
10. Lorentz, H. A. (1900). "Considerations on Gravitation". Proc. Acad. Amsterdam 2: 559–574.
11. Poincaré, H. (1908). "La dynamique de l'électron" (PDF). Revue générale des sciences pures et appliquées 19: 386–402. Reprinted in Poincaré, Oeuvres, tome IX, pp. 551–586, and in "Science and Method" (1908).
12. Poincaré, Henri (1904). "L'état actuel et l'avenir de la physique mathématique". Bulletin des Sciences Mathématiques 28 (2): 302–324. English translation in Poincaré, Henri (1905). "The Principles of Mathematical Physics". In Rogers, Howard J., Congress of Arts and Science, Universal Exposition, St. Louis, 1904, Vol. 1. Boston and New York: Houghton, Mifflin and Company. pp. 604–622. Reprinted in "The Value of Science", Ch. 7–9.
13. Poincaré, H. (1906). "Sur la dynamique de l'électron" (PDF). Rendiconti del Circolo Matematico di Palermo (in French) 21 (1): 129–176. doi:10.1007/BF03013466. See also the English translation.
14. Walter, Scott (2007). Renn, J., ed. "Breaking in the 4-vectors: the four-dimensional movement in gravitation, 1905–1910" (PDF). The Genesis of General Relativity (Berlin: Springer) 3: 193–252.
15. Will, Clifford & Gibbons, Gary. "On the Multiple Deaths of Whitehead's Theory of Gravity", to be submitted to Studies in History and Philosophy of Modern Physics (2006).
16. Carlip, S. (2000). "Aberration and the Speed of Gravity". Phys. Lett. A 267 (2–3): 81–87. arXiv:gr-qc/9909087. Bibcode:2000PhLA..267...81C. doi:10.1016/S0375-9601(00)00101-8.
17. Carlip, S. (2004). "Model-Dependence of Shapiro Time Delay and the 'Speed of Gravity/Speed of Light' Controversy". Class. Quant. Grav. 21: 3803–3812. arXiv:gr-qc/0403060.
18. Will, C. (2001). "The confrontation between general relativity and experiment". Living Rev. Relativity 4: 4. arXiv:gr-qc/0103036. Bibcode:2001LRR.....4....4W.
19. Fomalont, Ed & Kopeikin, Sergei (2003). "The Measurement of the Light Deflection from Jupiter: Experimental Results". The Astrophysical Journal 598 (1): 704–711. arXiv:astro-ph/0302294. Bibcode:2003ApJ...598..704F. doi:10.1086/378785.
20. Asada, Hideki (2002). "Light Cone Effect and the Shapiro Time Delay". The Astrophysical Journal Letters 574 (1): L69. arXiv:astro-ph/0206266. Bibcode:2002ApJ...574L..69A. doi:10.1086/342369.
21. Kopeikin, S. M. & Fomalont, E. B. (2006). "Aberration and the Fundamental Speed of Gravity in the Jovian Deflection Experiment". Foundations of Physics 36 (8): 1244–1285. arXiv:astro-ph/0311063. Bibcode:2006FoPh...36.1244K. doi:10.1007/s10701-006-9059-7.
22. Kopeikin, S. M. & Schaefer, G. (1999). "Lorentz covariant theory of light propagation in gravitational fields of arbitrary-moving bodies". Physical Review D 60 (12): 124002. arXiv:gr-qc/9902030. Bibcode:1999PhRvD..60l4002K. doi:10.1103/PhysRevD.60.124002.
23. Kopeikin, S. M. & Mashhoon, B. (2002). "Gravitomagnetic effects in the propagation of electromagnetic waves in variable gravitational fields of arbitrary-moving and spinning bodies". Physical Review D 65 (6): 064025. arXiv:gr-qc/0110101. Bibcode:2002PhRvD..65f4025K. doi:10.1103/PhysRevD.65.064025.
24. http://www.lbl.gov/Science-Articles/Archive/Phys-speed-of-gravity.html
25. Kopeikin, Sergei & Fomalont, Edward (2006). "On the speed of gravity and relativistic v/c corrections to the Shapiro time delay". Physics Letters A 355 (3): 163–166. arXiv:gr-qc/0310065. Bibcode:2006PhLA..355..163K. doi:10.1016/j.physleta.2006.02.028.
26. S. Kopeikin, M. Efroimsky and G. Kaplan, Relativistic Celestial Mechanics in the Solar System, Wiley-VCH, 2011. XXXII, 860 pages, 65 figures, 6 tables.

Kopeikin, Sergei M. (2001). "Testing Relativistic Effect of Propagation of Gravity by Very-Long Baseline Interferometry". Astrophys. J. 556 (1): L1–L6. arXiv:gr-qc/0105060. Bibcode:2001ApJ...556L...1K. doi:10.1086/322872.

Asada, Hideki (2002). "The Light-cone Effect on the Shapiro Time Delay". Astrophys. J. 574 (1): L69. arXiv:astro-ph/0206266. Bibcode:2002ApJ...574L..69A. doi:10.1086/342369.

Will, Clifford M. (2003). "Propagation Speed of Gravity and the Relativistic Time Delay". Astrophys. J. 590 (2): 683–690. arXiv:astro-ph/0301145. Bibcode:2003ApJ...590..683W. doi:10.1086/375164.

Fomalont, E. B. & Kopeikin, Sergei M. (2003). "The Measurement of the Light Deflection from Jupiter: Experimental Results". Astrophys. J. 598 (1): 704–711. arXiv:astro-ph/0302294. Bibcode:2003ApJ...598..704F. doi:10.1086/378785.

Kopeikin, Sergei M. (Feb 21, 2003). "The Measurement of the Light Deflection from Jupiter: Theoretical Interpretation". arXiv:astro-ph/0302462.

Kopeikin, Sergei M. (2003). "The Post-Newtonian Treatment of the VLBI Experiment on September 8, 2002". Phys. Lett. A 312 (3–4): 147–157. arXiv:gr-qc/0212121. Bibcode:2003PhLA..312..147K. doi:10.1016/S0375-9601(03)00613-3.

Faber, Joshua A. (Mar 14, 2003). "The speed of gravity has not been measured from time delays". arXiv:astro-ph/0303346.

Kopeikin, Sergei M. (2004). "The Speed of Gravity in General Relativity and Theoretical Interpretation of the Jovian Deflection Experiment". Classical and Quantum Gravity 21 (13): 3251–3286. arXiv:gr-qc/0310059. Bibcode:2004CQGra..21.3251K. doi:10.1088/0264-9381/21/13/010.

Samuel, Stuart (2003). "On the Speed of Gravity and the v/c Corrections to the Shapiro Time Delay". Phys. Rev. Lett. 90 (23): 231101. arXiv:astro-ph/0304006. Bibcode:2003PhRvL..90w1101S. doi:10.1103/PhysRevLett.90.231101. PMID 12857246.

Kopeikin, Sergei & Fomalont, Edward (2006). "On the speed of gravity and relativistic v/c corrections to the Shapiro time delay". Physics Letters A 355 (3): 163–166. arXiv:gr-qc/0310065. Bibcode:2006PhLA..355..163K. doi:10.1016/j.physleta.2006.02.028.

Asada, Hideki (Aug 20, 2003). "Comments on 'Measuring the Gravity Speed by VLBI'". arXiv:astro-ph/0308343.

Kopeikin, Sergei & Fomalont, Edward (2006). "Aberration and the Fundamental Speed of Gravity in the Jovian Deflection Experiment". Foundations of Physics 36 (8): 1244–1285. arXiv:astro-ph/0311063. Bibcode:2006FoPh...36.1244K. doi:10.1007/s10701-006-9059-7.

Carlip, Steven (2004). "Model-Dependence of Shapiro Time Delay and the 'Speed of Gravity/Speed of Light' Controversy". Class. Quant. Grav. 21 (15): 3803–3812. arXiv:gr-qc/0403060. Bibcode:2004CQGra..21.3803C. doi:10.1088/0264-9381/21/15/011.

Kopeikin, Sergei M. (2005). "Comment on 'Model-dependence of Shapiro time delay and the "speed of gravity/speed of light" controversy'". Class. Quant. Grav. 22 (23): 5181–5186. arXiv:gr-qc/0510048. Bibcode:2005CQGra..22.5181K. doi:10.1088/0264-9381/22/23/N01.

Pascual-Sánchez, J.-F. (2004). "Speed of gravity and gravitomagnetism". Int. J. Mod. Phys. D 13 (10): 2345–2350. arXiv:gr-qc/0405123. Bibcode:2004IJMPD..13.2345P. doi:10.1142/S0218271804006425.

Kopeikin, Sergei (2006). "Gravitomagnetism and the speed of gravity". Int. J. Mod. Phys. D 15 (3): 305–320. arXiv:gr-qc/0507001. Bibcode:2006IJMPD..15..305K. doi:10.1142/S0218271806007663.

Samuel, Stuart (2004). "On the Speed of Gravity and the Jupiter/Quasar Measurement". Int. J. Mod. Phys. D 13 (9): 1753–1770. arXiv:astro-ph/0412401. Bibcode:2004IJMPD..13.1753S. doi:10.1142/S0218271804005900.

Kopeikin, Sergei (2006). "Comments on the paper by S. Samuel 'On the speed of gravity and the Jupiter/Quasar measurement'". Int. J. Mod. Phys. D 15 (2): 273–288. arXiv:gr-qc/0501001. Bibcode:2006IJMPD..15..273K. doi:10.1142/S021827180600853X.

Kopeikin, Sergei & Fomalont, Edward (2007). "Gravimagnetism, Causality, and Aberration of Gravity in the Gravitational Light-Ray Deflection Experiments". General Relativity and Gravitation 39 (10): 1583–1624. arXiv:gr-qc/0510077. Bibcode:2007GReGr..39.1583K. doi:10.1007/s10714-007-0483-6.

Kopeikin, Sergei & Fomalont, Edward (2008). "Radio interferometric tests of general relativity". "A Giant Step: from Milli- to Micro-arcsecond Astrometry", Proceedings of the International Astronomical Union, IAU Symposium 248 (S248): 383–386. Bibcode:2008IAUS..248..383F. doi:10.1017/S1743921308019613.

Zhu, Yin (2011). "Measurement of the Speed of Gravity". arXiv:1108.3761.

External links

Does Gravity Travel at the Speed of Light? in The Physics FAQ (also here).

Measuring the Speed of Gravity at MathPages

Hazel Muir, First speed of gravity measurement revealed, a New Scientist article on
Kopeikin's original announcement.

Clifford M. Will, Has the Speed of Gravity Been Measured?.

Kevin Carlson, MU physicist defends Einstein's theory and 'speed of gravity' measurement.

Gravitational field

In physics, a gravitational field is a model used to explain the influence that a massive body
extends into the space around itself, producing a force on another massive body.[1] Thus, a
gravitational field is used to explain gravitational phenomena, and is measured in newtons per
kilogram (N/kg). In its original concept, gravity was a force between point masses. Following
Newton, Laplace attempted to model gravity as some kind of radiation field or fluid, and since
the 19th century explanations for gravity have usually been taught in terms of a field model,
rather than a point attraction.
In a field model, rather than two particles attracting each other, the particles distort spacetime via
their mass, and this distortion is what is perceived and measured as a "force". In such a model
one states that matter moves in certain ways in response to the curvature of spacetime,[2] and
that there is either no gravitational force,[3] or that gravity is a fictitious force.[4]

Classical mechanics
In classical mechanics, a gravitational field is a physical quantity.[5] A gravitational
field can be defined using Newton's law of universal gravitation. Determined in this way, the
gravitational field g around a single particle of mass M is a vector field consisting at every point
of a vector pointing directly towards the particle. The magnitude of the field at every point is
calculated applying the universal law, and represents the force per unit mass on any object at that
point in space. Because the force field is conservative, there is a scalar potential energy per unit
mass, Φ, at each point in space associated with the force field; this is called the gravitational
potential.[6] The gravitational field equation is[7]

g = F/m = d²R/dt² = −(GM/|R|²) R̂ = −∇Φ,

where F is the gravitational force, m is the mass of the test particle, R is the position of the test
particle, R̂ is a unit vector in the direction of R, t is time, G is the gravitational constant, and ∇ is
the del operator.
This includes Newton's law of gravitation, and the relation between gravitational potential and
field acceleration. Note that d²R/dt² and F/m are both equal to the gravitational acceleration g
(equivalent to the inertial acceleration, so same mathematical form, but also defined as
gravitational force per unit mass[8]). The negative signs are inserted since the force acts
antiparallel to the displacement. The equivalent field equation in terms of mass density ρ of the
attracting mass is:

∇·g = −∇²Φ = −4πGρ,
which contains Gauss's law for gravity, and Poisson's equation for gravity. Newton's and Gauss's
law are mathematically equivalent, and are related by the divergence theorem. Poisson's equation
is obtained by taking the divergence of both sides of the previous equation. These classical
equations are differential equations of motion for a test particle in the presence of a gravitational
field, i.e. setting up and solving these equations allows the motion of a test mass to be determined
and described.
The field around multiple particles is simply the vector sum of the fields around each individual
particle. An object in such a field will experience a force that equals the vector sum of the forces
it would feel in these individual fields. This is mathematically:[9]

g_j = Σ_{i≠j} g_i = G Σ_{i≠j} m_i R̂_ij / |R_i − R_j|²,

i.e. the gravitational field on mass mj is the sum of all gravitational fields due to all other masses
mi, except the mass mj itself. The unit vector R̂_ij is in the direction of Ri − Rj.
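A minimal Python sketch of this superposition (the function name and the example masses are illustrative assumptions, not from the source):

```python
import numpy as np

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def field_at(point, masses, positions):
    """Net gravitational field g at `point` from a set of point masses, summed
    as in the expression above (each term points toward its mass)."""
    g = np.zeros(3)
    for m_i, r_i in zip(masses, positions):
        sep = r_i - point                       # vector from the field point toward mass i
        g += G * m_i * sep / np.linalg.norm(sep) ** 3
    return g

# Two arbitrary point masses with the field evaluated between them.
masses = [5.0e24, 7.0e22]
positions = [np.array([0.0, 0.0, 0.0]), np.array([4.0e8, 0.0, 0.0])]
print(field_at(np.array([2.0e8, 0.0, 0.0]), masses, positions))   # net pull toward the larger mass
```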
General relativity
See also: Gravitational acceleration § General relativity, and Gravitational potential § General
relativity
In general relativity the gravitational field is determined by solving the Einstein field equations,
[10]

G = (8πG/c⁴) T.

Here T is the stress–energy tensor, G is the Einstein tensor, and c is the speed of light.
These equations are dependent on the distribution of matter and energy in a region of space,
unlike Newtonian gravity, which is dependent only on the distribution of matter. The fields
themselves in general relativity represent the curvature of spacetime. General relativity states that
being in a region of curved space is equivalent to accelerating up the gradient of the field. By
Newton's second law, this will cause an object to experience a fictitious force if it is held still
with respect to the field. This is why a person will feel himself pulled down by the force of
gravity while standing still on the Earth's surface. In general the gravitational fields predicted by
general relativity differ in their effects only slightly from those predicted by classical mechanics,
but there are a number of easily verifiable differences, one of the most well known being the
bending of light in such fields.

See also

Classical mechanics

Gravitation

Gravitational potential

Newton's law of universal gravitation

Newton's laws of motion

Potential energy

Speed of gravity

Tests of general relativity

Defining equation (physics)

Notes
1. Richard Feynman (1970). The Feynman Lectures on Physics, Vol. I. Addison Wesley Longman. ISBN 978-0-201-02115-8.
2. Geroch, Robert (1981). General Relativity from A to B. University of Chicago Press. ISBN 0-226-28864-1. Chapter 7, page 181.
3. Grøn, Øyvind; Hervik, Sigbjørn (2007). Einstein's General Theory of Relativity: With Modern Applications in Cosmology. Springer. ISBN 0-387-69199-5. Chapter 10, page 256.
4. Foster, J.; Nightingale, J. D. (2006). A Short Course in General Relativity (3rd ed.). Springer Science & Business Media. ISBN 0-387-26078-1. Chapter 2, page 55.
5. Richard Feynman (1970). The Feynman Lectures on Physics, Vol. II. Addison Wesley Longman. ISBN 978-0-201-02115-8. "A field is any physical quantity which takes on different values at different points in space."
6. Dynamics and Relativity, J. R. Forshaw, A. G. Smith, Wiley, 2009, ISBN 978-0-470-01460-8.
7. Encyclopaedia of Physics (2nd ed.), R. G. Lerner, G. L. Trigg, VHC Publishers, Hans Warlimont, Springer, 2005.
8. Essential Principles of Physics, P. M. Whelan, M. J. Hodgeson, 2nd Edition, 1978, John Murray, ISBN 0-7195-3382-1.
9. Classical Mechanics (2nd Edition), T. W. B. Kibble, European Physics Series, McGraw-Hill (UK), 1973, ISBN 0-07-084018-0.
10. Gravitation, J. A. Wheeler, C. Misner, K. S. Thorne, W. H. Freeman & Co, 1973, ISBN 0-7167-0344-0.
Classical mechanics

For the textbooks, see Classical Mechanics (Goldstein book) and Classical Mechanics (Kibble
and Berkshire book).
Diagram of orbital motion of a satellite around the earth, showing perpendicular velocity and
acceleration (force) vectors.
In physics, classical mechanics and quantum mechanics are the two major sub-fields of
mechanics. Classical mechanics is concerned with the set of physical laws describing the motion
of bodies under the influence of a system of forces. The study of the motion of bodies is an
ancient one, making classical mechanics one of the oldest and largest subjects in science,
engineering and technology. It is also widely known as Newtonian mechanics.
Classical mechanics describes the motion of macroscopic objects, from projectiles to parts of
machinery, as well as astronomical objects, such as spacecraft, planets, stars, and galaxies.
Besides this, many specializations within the subject deal with solids, liquids and gases, and other
specific sub-topics. Classical mechanics also provides extremely accurate results as long as the
domain of study is restricted to large objects and the speeds involved do not approach the speed
of light. When the objects being dealt with become sufficiently small, it becomes necessary to
introduce the other major sub-field of mechanics, quantum mechanics, which reconciles the
macroscopic laws of physics with the atomic nature of matter and handles the wave-particle duality of atoms and molecules. When neither quantum mechanics nor classical mechanics applies, such as at the quantum level with high speeds, quantum field theory (QFT) becomes
applicable.
The term classical mechanics was coined in the early 20th century to describe the system of
physics begun by Isaac Newton and many contemporary 17th century natural philosophers,
building upon the earlier astronomical theories of Johannes Kepler, which in turn were based on
the precise observations of Tycho Brahe and the studies of terrestrial projectile motion of Galileo.
Since these aspects of physics were developed long before the emergence of quantum physics and relativity, some sources exclude Einstein's theory of relativity from this category. However, a
number of modern sources do include relativistic mechanics, which in their view represents
classical mechanics in its most developed and most accurate form.[note 1]
The initial stage in the development of classical mechanics is often referred to as Newtonian
mechanics, and is associated with the physical concepts employed by and the mathematical
methods invented by Newton himself, in parallel with Leibniz, and others. This is further
described in the following sections. Later, more abstract and general methods were developed,
leading to reformulations of classical mechanics known as Lagrangian mechanics and
Hamiltonian mechanics. These advances were largely made in the 18th and 19th centuries, and
they extend substantially beyond Newton's work, particularly through their use of analytical
mechanics.
Description of the theory

The analysis of projectile motion is a part of classical mechanics.


The following introduces the basic concepts of classical mechanics. For simplicity, it often
models real-world objects as point particles, objects with negligible size. The motion of a point
particle is characterized by a small number of parameters: its position, mass, and the forces
applied to it. Each of these parameters is discussed in turn.
In reality, the kind of objects that classical mechanics can describe always have a non-zero size.
(The physics of very small particles, such as the electron, is more accurately described by
quantum mechanics.) Objects with non-zero size have more complicated behavior than
hypothetical point particles, because of the additional degrees of freedom: a baseball can spin
while it is moving, for example. However, the results for point particles can be used to study such
objects by treating them as composite objects, made up of a large number of interacting point
particles. The center of mass of a composite object behaves like a point particle.
Classical mechanics uses common-sense notions of how matter and forces exist and interact. It
assumes that matter and energy have definite, knowable attributes such as where an object is in
space and its speed. It also assumes that objects may be directly influenced only by their
immediate surroundings, known as the principle of locality. In quantum mechanics, by contrast, an object cannot have both its position and its velocity (momentum) sharply defined at the same time.
Position and its derivatives
Main article: Kinematics
The SI derived "mechanical" (that is, not electromagnetic or thermal) units expressed in kg, m and s:
position: m
angular position / angle: unitless (radian)
velocity: m s^-1
angular velocity: s^-1
acceleration: m s^-2
angular acceleration: s^-2
jerk: m s^-3
"angular jerk": s^-3
specific energy: m^2 s^-2
absorbed dose rate: m^2 s^-3
moment of inertia: kg m^2
momentum: kg m s^-1
angular momentum: kg m^2 s^-1
force: kg m s^-2
torque: kg m^2 s^-2
energy: kg m^2 s^-2
power: kg m^2 s^-3
pressure and energy density: kg m^-1 s^-2
surface tension: kg s^-2
spring constant: kg s^-2
irradiance and energy flux: kg s^-3
kinematic viscosity: m^2 s^-1
dynamic viscosity: kg m^-1 s^-1
density (mass density): kg m^-3
density (weight density): kg m^-2 s^-2
number density: m^-3
action: kg m^2 s^-1
The position of a point particle is defined with respect to an arbitrary fixed reference point, O, in
space, usually accompanied by a coordinate system, with the reference point located at the origin
of the coordinate system. It is defined as the vector r from O to the particle. In general, the point
particle need not be stationary relative to O, so r is a function of t, the time elapsed since an
arbitrary initial time. In pre-Einstein relativity (known as Galilean relativity), time is considered
an absolute, i.e., the time interval between any given pair of events is the same for all observers.
[1] In addition to relying on absolute time, classical mechanics assumes Euclidean geometry for
the structure of space.[2]
Velocity and speed
Main articles: Velocity and Speed
The velocity, or the rate of change of position with time, is defined as the derivative of the position with respect to time:

v = dr/dt

In classical mechanics, velocities are directly additive and subtractive. For example, if one car traveling east at 60 km/h passes another car traveling east at 50 km/h, then from the perspective of the slower car, the faster car is traveling east at 60 - 50 = 10 km/h. Whereas, from the perspective of the faster car, the slower car is moving 10 km/h to the west. Velocities are directly additive as vector quantities; they must be dealt with using vector analysis.

Mathematically, if the velocity of the first object in the previous discussion is denoted by the vector u = u d and the velocity of the second object by the vector v = v e, where u is the speed of the first object, v is the speed of the second object, and d and e are unit vectors in the directions of motion of each particle respectively, then the velocity of the first object as seen by the second object is

u' = u - v

Similarly, the velocity of the second object as seen by the first object is

v' = v - u

When both objects are moving in the same direction, this equation can be simplified to

u' = (u - v) d

Or, by ignoring direction, the difference can be given in terms of speed only:

u' = u - v
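The two-car example above can be checked in a few lines of Python; this is only an illustrative sketch that represents each velocity as an (east, north) pair of components in km/h:

u = (60.0, 0.0)   # faster car, heading east
v = (50.0, 0.0)   # slower car, heading east

u_rel = (u[0] - v[0], u[1] - v[1])   # faster car as seen from the slower car: u - v
v_rel = (v[0] - u[0], v[1] - u[1])   # slower car as seen from the faster car: v - u

print(u_rel)  # (10.0, 0.0): 10 km/h to the east
print(v_rel)  # (-10.0, 0.0): 10 km/h to the west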

Acceleration
Main article: Acceleration
The acceleration, or rate of change of velocity, is the derivative of the velocity with respect to time (the second derivative of the position with respect to time):

a = dv/dt = d^2r/dt^2

Acceleration represents the velocity's change over time: either of the velocity's magnitude or direction, or both. If only the magnitude v of the velocity decreases, this is sometimes referred to as deceleration, but generally any change in the velocity with time, including deceleration, is simply referred to as acceleration.
Frames of reference
Main articles: Inertial frame of reference and Galilean transformation
While the position, velocity and acceleration of a particle can be referred to any observer in any
state of motion, classical mechanics assumes the existence of a special family of reference frames
in terms of which the mechanical laws of nature take a comparatively simple form. These special
reference frames are called inertial frames. An inertial frame is such that when an object without
any force interactions (an idealized situation) is viewed from it, it appears either to be at rest or in
a state of uniform motion in a straight line. This is the fundamental definition of an inertial
frame. They are characterized by the requirement that all forces entering the observer's physical
laws originate in identifiable sources (charges, gravitational bodies, and so
forth). A non-inertial reference frame is one accelerating with respect to an inertial one, and in
such a non-inertial frame a particle is subject to acceleration by fictitious forces that enter the
equations of motion solely as a result of its accelerated motion, and do not originate in
identifiable sources. These fictitious forces are in addition to the real forces recognized in an
inertial frame. A key concept of inertial frames is the method for identifying them. For practical

purposes, reference frames that are unaccelerated with respect to the distant stars (an extremely
distant point) are regarded as good approximations to inertial frames.
Consider two reference frames S and S'. For observers in each of the reference frames an event
has space-time coordinates of (x,y,z,t) in frame S and (x',y',z',t') in frame S'. Assuming time is
measured the same in all reference frames, and if we require x = x' when t = 0, then the relation
between the space-time coordinates of the same event observed from the reference frames S' and
S, which are moving at a relative velocity of u in the x direction is:
x' = x - ut
y' = y
z' = z
t' = t.
This set of formulas defines a group transformation known as the Galilean transformation
(informally, the Galilean transform). This group is a limiting case of the Poincaré group used in
special relativity. The limiting case applies when the velocity u is very small compared to c, the
speed of light.
The transformations have the following consequences:

v' = v - u (the velocity v' of a particle from the perspective of S' is slower by u than its
velocity v from the perspective of S)

a' = a (the acceleration of a particle is the same in any inertial reference frame)

F' = F (the force on a particle is the same in any inertial reference frame)

the speed of light is not a constant in classical mechanics, nor does the special position
given to the speed of light in relativistic mechanics have a counterpart in classical mechanics.
For some problems, it is convenient to use rotating coordinates (reference frames). Thereby one
can either keep a mapping to a convenient inertial frame, or introduce additionally a fictitious
centrifugal force and Coriolis force.
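As a quick numerical check that the acceleration, and hence the force, is the same in two inertial frames related by a Galilean transformation, the following Python sketch (with arbitrarily chosen illustrative values of u, a and v0) applies x' = x - u*t to a uniformly accelerated trajectory and estimates the acceleration in both frames by finite differences:

u = 5.0    # relative velocity of frame S' along x, m/s
a = 2.0    # acceleration of the particle in frame S, m/s^2
v0 = 3.0   # initial velocity of the particle in frame S, m/s

def x_S(t):        # trajectory observed in frame S
    return v0 * t + 0.5 * a * t * t

def x_Sprime(t):   # the same events described in frame S'
    return x_S(t) - u * t

def second_difference(f, t, h=1e-3):
    """Finite-difference estimate of the second derivative (the acceleration)."""
    return (f(t + h) - 2.0 * f(t) + f(t - h)) / (h * h)

t = 1.7
print(second_difference(x_S, t))       # about 2.0 in frame S
print(second_difference(x_Sprime, t))  # about 2.0 in frame S' as well: a' = a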
Forces; Newton's second law
Main articles: Force and Newton's laws of motion
Newton was the first to mathematically express the relationship between force and momentum. Some physicists interpret Newton's second law of motion as a definition of force and mass, while others consider it a fundamental postulate, a law of nature. Either interpretation has the same mathematical consequences, historically known as "Newton's Second Law":

F = d(mv)/dt

The quantity mv is called the (canonical) momentum. The net force on a particle is thus equal to the rate of change of the momentum of the particle with time. Since the definition of acceleration is a = dv/dt, the second law can be written in the simplified and more familiar form:

F = ma
So long as the force acting on a particle is known, Newton's second law is sufficient to describe the motion of a particle. Once independent relations for each force acting on a particle are available, they can be substituted into Newton's second law to obtain an ordinary differential equation, which is called the equation of motion.
As an example, assume that friction is the only force acting on the particle, and that it may be modeled as a function of the velocity of the particle, for example:

F_R = -λv

where λ is a positive constant. Then the equation of motion is

-λv = m dv/dt

This can be integrated to obtain

v = v0 e^(-λt/m)

where v0 is the initial velocity. This means that the velocity of this particle decays exponentially to zero as time progresses. In this case, an equivalent viewpoint is that the kinetic energy of the particle is absorbed by friction (which converts it to heat energy in accordance with the conservation of energy), and the particle is slowing down. This expression can be further integrated to obtain the position r of the particle as a function of time.
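The friction example can also be integrated numerically. The Python sketch below (all parameter values are illustrative, not from the source) steps the equation of motion with a simple explicit Euler update and compares the result with the analytic solution v = v0 e^(-λt/m):

import math

m = 2.0       # mass, kg
lam = 0.5     # friction coefficient lambda, kg/s
v0 = 10.0     # initial speed, m/s
dt = 1e-4     # time step, s
T = 5.0       # total simulated time, s

v = v0
t = 0.0
while t < T:
    v += dt * (-lam * v / m)   # Euler step of m*dv/dt = -lambda*v
    t += dt

print(v)                            # numerical result
print(v0 * math.exp(-lam * T / m))  # analytic result, about 2.865 m/s

Both numbers agree to within the discretisation error of the Euler scheme, and shrinking dt brings them closer together.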
Important forces include the gravitational force and the Lorentz force for electromagnetism. In
addition, Newton's third law can sometimes be used to deduce the forces acting on a particle: if it
is known that particle A exerts a force F on another particle B, it follows that B must exert an
equal and opposite reaction force, -F, on A. The strong form of Newton's third law requires that F
and -F act along the line connecting A and B, while the weak form does not. Illustrations of the
weak form of Newton's third law are often found for magnetic forces.
Work and energy
Main articles: Work (physics), Kinetic energy and Potential energy
If a constant force F is applied to a particle that achieves a displacement Δr,[note 2] the work done by the force is defined as the scalar product of the force and displacement vectors:

W = F · Δr

More generally, if the force varies as a function of position as the particle moves from r1 to r2 along a path C, the work done on the particle is given by the line integral

W = ∫_C F · dr

If the work done in moving the particle from r1 to r2 is the same no matter what path is taken, the force is said to be conservative. Gravity is a conservative force, as is the force due to an idealized spring, as given by Hooke's law. The force due to friction is non-conservative.
The kinetic energy Ek of a particle of mass m travelling at speed v is given by

Ek = (1/2) m v^2

For extended objects composed of many particles, the kinetic energy of the composite body is the sum of the kinetic energies of the particles.
The work-energy theorem states that for a particle of constant mass m the total work W done on the particle from position r1 to r2 is equal to the change in kinetic energy Ek of the particle:

W = ΔEk = (1/2) m (v2^2 - v1^2)

Conservative forces can be expressed as the gradient of a scalar function, known as the potential energy and denoted Ep:

F = -∇Ep

If all the forces acting on a particle are conservative, and Ep is the total potential energy (defined as the work of the involved forces to rearrange the mutual positions of bodies), obtained by summing the potential energies corresponding to each force, then

F · Δr = -∇Ep · Δr = -ΔEp, so -ΔEp = ΔEk, and hence Δ(Ek + Ep) = 0.

This result is known as conservation of energy and states that the total energy,

ΣE = Ek + Ep

is constant in time. It is often useful, because many commonly encountered forces are conservative.
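The constancy of the total energy can be checked numerically for a conservative force. The Python sketch below (an illustration with arbitrary values of m and k, not taken from the source) integrates a mass on an ideal Hooke's-law spring, F = -k*x, with the velocity-Verlet scheme and monitors Ek + Ep:

m, k = 1.0, 4.0          # mass (kg) and spring constant (N/m), illustrative
x, v = 1.0, 0.0          # initial position (m) and velocity (m/s)
dt, steps = 1e-3, 10000  # time step (s) and number of steps

def energy(x, v):
    return 0.5 * m * v * v + 0.5 * k * x * x   # total energy E = Ek + Ep

E0 = energy(x, v)
a = -k * x / m
for _ in range(steps):
    x += v * dt + 0.5 * a * dt * dt   # position update
    a_new = -k * x / m                # acceleration from the conservative force
    v += 0.5 * (a + a_new) * dt       # velocity update
    a = a_new

print(E0, energy(x, v))   # the two values agree to high accuracy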
Beyond Newton's laws
Classical mechanics also includes descriptions of the complex motions of extended non-pointlike
objects. Euler's laws provide extensions to Newton's laws in this area. The concepts of angular
momentum rely on the same calculus used to describe one-dimensional motion. The rocket
equation extends the notion of rate of change of an object's momentum to include the effects of
an object "losing mass".
There are two important alternative formulations of classical mechanics: Lagrangian mechanics
and Hamiltonian mechanics. These, and other modern formulations, usually bypass the concept
of "force", instead referring to other physical quantities, such as energy, speed and momentum,
for describing mechanical systems in generalized coordinates.
The expressions given above for momentum and kinetic energy are only valid when there is no
significant electromagnetic contribution. In electromagnetism, Newton's second law for current-carrying wires breaks down unless one includes the electromagnetic field contribution to the
momentum of the system as expressed by the Poynting vector divided by c2, where c is the speed
of light in free space.
Limits of validity

Domain of validity for Classical Mechanics

Many branches of classical mechanics are simplifications or approximations of more accurate forms; two of the most accurate being general relativity and relativistic statistical mechanics.
Geometric optics is an approximation to the quantum theory of light, and does not have a
superior "classical" form.
When neither quantum mechanics nor classical mechanics applies, such as at the quantum level with many degrees of freedom, quantum field theory (QFT) becomes applicable. QFT deals with small distances and large speeds with many degrees of freedom, as well as the possibility of any change in the number of particles throughout the interaction. To deal with large degrees of freedom at the macroscopic level, statistical mechanics becomes valid. Statistical mechanics
explores the large number of particles and their interactions as a whole in everyday life.
Statistical mechanics is mainly used in thermodynamics. In the case of high velocity objects
approaching the speed of light, classical mechanics is enhanced by special relativity. General
relativity unifies special relativity with Newton's law of universal gravitation, allowing physicists
to handle gravitation at a deeper level.
The Newtonian approximation to special relativity
In special relativity, the momentum of a particle is given by

p = mv / sqrt(1 - v^2/c^2)

where m is the particle's rest mass, v its velocity, and c is the speed of light.
If v is very small compared to c, v^2/c^2 is approximately zero, and so

p ≈ mv

Thus the Newtonian equation p = mv is an approximation of the relativistic equation for bodies moving with low speeds compared to the speed of light.
For example, the relativistic cyclotron frequency of a cyclotron, gyrotron, or high-voltage magnetron is given by

f = f_c / (1 + T/(m0 c^2))

where f_c is the classical frequency of an electron (or other charged particle) with kinetic energy T and (rest) mass m0 circling in a magnetic field. The rest mass energy of an electron is 511 keV, so the frequency correction is 1% for a magnetic vacuum tube with a 5.11 kV direct-current accelerating voltage.
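Both the low-speed limit of the momentum formula and the quoted 1% cyclotron-frequency correction can be reproduced with a short calculation. The Python sketch below is purely illustrative and uses standard reference values for c and the electron rest mass:

import math

c = 2.998e8        # speed of light, m/s
m_e = 9.109e-31    # electron rest mass, kg

def p_newton(m, v):
    return m * v

def p_relativistic(m, v):
    return m * v / math.sqrt(1.0 - (v / c) ** 2)

for v in (3.0e5, 3.0e6, 3.0e7, 1.0e8):
    print(v, p_newton(m_e, v), p_relativistic(m_e, v))   # nearly equal at low v

# Relativistic cyclotron frequency: f = f_c / (1 + T/(m0*c^2))
rest_energy_keV = 511.0
T_keV = 5.11            # kinetic energy gained from a 5.11 kV accelerating voltage
print(1.0 / (1.0 + T_keV / rest_energy_keV))   # about 0.99, i.e. a 1% frequency shift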
The classical approximation to quantum mechanics
The ray approximation of classical mechanics breaks down when the de Broglie wavelength is not much smaller than other dimensions of the system. For non-relativistic particles, this wavelength is

λ = h/p

where h is Planck's constant and p is the momentum.

Again, this happens with electrons before it happens with heavier particles. For example, the
electrons used by Clinton Davisson and Lester Germer in 1927, accelerated by 54 volts, had a
wavelength of 0.167 nm, which was long enough to exhibit a single diffraction side lobe when
reflecting from the face of a nickel crystal with atomic spacing of 0.215 nm. With a larger
vacuum chamber, it would seem relatively easy to increase the angular resolution from around a
radian to a milliradian and see quantum diffraction from the periodic patterns of integrated circuit
computer memory.
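The 0.167 nm figure quoted above follows directly from λ = h/p for a non-relativistic electron accelerated through 54 V (54 eV is far below the 511 keV rest energy). The following Python sketch is illustrative and uses standard reference values for the constants:

import math

h = 6.626e-34      # Planck's constant, J s
m_e = 9.109e-31    # electron mass, kg
e = 1.602e-19      # elementary charge, C
V = 54.0           # accelerating voltage, volts

p = math.sqrt(2.0 * m_e * e * V)   # momentum from Ek = p^2 / (2*m) = e*V
wavelength = h / p
print(wavelength)                  # about 1.67e-10 m, i.e. roughly 0.167 nm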
More practical examples of the failure of classical mechanics on an engineering scale are
conduction by quantum tunneling in tunnel diodes and very narrow transistor gates in integrated
circuits.
Classical mechanics is the same extreme high frequency approximation as geometric optics. It is
more often accurate because it describes particles and bodies with rest mass. These have more
momentum and therefore shorter de Broglie wavelengths than massless particles, such as light,
with the same kinetic energies.
History
Main article: History of classical mechanics
See also: Timeline of classical mechanics
Some Greek philosophers of antiquity, among them Aristotle, founder of Aristotelian physics,
may have been the first to maintain the idea that "everything happens for a reason" and that
theoretical principles can assist in the understanding of nature. While to a modern reader, many
of these preserved ideas come forth as eminently reasonable, there is a conspicuous lack of both
mathematical theory and controlled experiment, as we know it. These both turned out to be
decisive factors in forming modern science, and they started out with classical mechanics.
In his Elementa super demonstrationem ponderum, the medieval mathematician Jordanus de Nemore introduced the concept of "positional gravity" and the use of component forces.

Three stage Theory of impetus according to Albert of Saxony.


The first published causal explanation of the motions of planets was Johannes Kepler's
Astronomia nova published in 1609. He concluded, based on Tycho Brahe's observations of the
orbit of Mars, that the orbits were ellipses. This break with ancient thought was happening
around the same time that Galileo was proposing abstract mathematical laws for the motion of
objects. He may (or may not) have performed the famous experiment of dropping two
cannonballs of different weights from the tower of Pisa, showing that they both hit the ground at
the same time. The reality of this experiment is disputed, but, more importantly, he did carry out
quantitative experiments by rolling balls on an inclined plane. His theory of accelerated motion
derived from the results of such experiments, and forms a cornerstone of classical mechanics.

Sir Isaac Newton (1643-1727), an influential figure in the history of physics and whose three
laws of motion form the basis of classical mechanics

As foundation for his principles of natural philosophy, Isaac Newton proposed three laws of
motion: the law of inertia, his second law of acceleration (mentioned above), and the law of
action and reaction; and hence laid the foundations for classical mechanics. Both Newton's
second and third laws were given the proper scientific and mathematical treatment in Newton's
Philosophiæ Naturalis Principia Mathematica, which distinguishes them from earlier attempts at
explaining similar phenomena, which were either incomplete, incorrect, or given little accurate
mathematical expression. Newton also enunciated the principles of conservation of momentum
and angular momentum. In mechanics, Newton was also the first to provide the first correct
scientific and mathematical formulation of gravity in Newton's law of universal gravitation. The
combination of Newton's laws of motion and gravitation provide the fullest and most accurate
description of classical mechanics. He demonstrated that these laws apply to everyday objects as
well as to celestial objects. In particular, he obtained a theoretical explanation of Kepler's laws of
motion of the planets.
Newton had previously invented the calculus and used it to perform the mathematical calculations. For acceptability, his book, the Principia, was formulated entirely in
terms of the long-established geometric methods, which were soon eclipsed by his calculus.
However, it was Leibniz who developed the notation of the derivative and integral preferred[3]
today.

Hamilton's greatest contribution is perhaps the reformulation of Newtonian mechanics, now called Hamiltonian mechanics.
Newton, and most of his contemporaries, with the notable exception of Huygens, worked on the
assumption that classical mechanics would be able to explain all phenomena, including light, in
the form of geometric optics. Even when discovering the so-called Newton's rings (a wave interference phenomenon), his explanation remained rooted in his own corpuscular theory of light.
After Newton, classical mechanics became a principal field of study in mathematics as well as
physics. Several re-formulations progressively allowed finding solutions to a far greater number
of problems. The first notable re-formulation was in 1788 by Joseph Louis Lagrange. Lagrangian
mechanics was in turn re-formulated in 1833 by William Rowan Hamilton.
Some difficulties were discovered in the late 19th century that could only be resolved by more
modern physics. Some of these difficulties related to compatibility with electromagnetic theory,
and the famous Michelson-Morley experiment. The resolution of these problems led to the
special theory of relativity, often included in the term classical mechanics.
A second set of difficulties were related to thermodynamics. When combined with
thermodynamics, classical mechanics leads to the Gibbs paradox of classical statistical
mechanics, in which entropy is not a well-defined quantity. Black-body radiation was not
explained without the introduction of quanta. As experiments reached the atomic level, classical
mechanics failed to explain, even approximately, such basic things as the energy levels and sizes
of atoms and the photo-electric effect. The effort at resolving these problems led to the
development of quantum mechanics.

Since the end of the 20th century, the place of classical mechanics in physics has been no longer
that of an independent theory. Instead, classical mechanics is now considered an approximate
theory to the more general quantum mechanics. Emphasis has shifted to understanding the
fundamental forces of nature as in the Standard model and its more modern extensions into a
unified theory of everything.[4] Classical mechanics is a theory for the study of the motion of
non-quantum mechanical, low-energy particles in weak gravitational fields. In the 21st century
classical mechanics has been extended into the complex domain and complex classical
mechanics exhibits behaviors very similar to quantum mechanics.[5]
Branches
Classical mechanics was traditionally divided into three main branches:

Statics, the study of equilibrium and its relation to forces

Dynamics, the study of motion and its relation to forces

Kinematics, dealing with the implications of observed motions without regard for
circumstances causing them
Another division is based on the choice of mathematical formalism:

Newtonian mechanics

Lagrangian mechanics

Hamiltonian mechanics

Alternatively, a division can be made by region of application:

Celestial mechanics, relating to stars, planets and other celestial bodies

Continuum mechanics, for materials modelled as a continuum, e.g., solids and fluids (i.e.,
liquids and gases).

Relativistic mechanics (i.e. including the special and general theories of relativity), for
bodies whose speed is close to the speed of light.

Statistical mechanics, which provides a framework for relating the microscopic properties
of individual atoms and molecules to the macroscopic or bulk thermodynamicproperties of
materials.
See also

Physics portal

Dynamical systems

History of classical mechanics

List of equations in classical mechanics

List of publications in classical mechanics

Molecular dynamics

Newton's laws of motion

Special theory of relativity

Quantum Mechanics

Quantum Field Theory

Notes
1. The notion of "classical" may be somewhat confusing, insofar as this term usually refers to the era of classical antiquity in European history. While many discoveries within the mathematics of that period remain in full force today, and of the greatest use, much of the science that emerged then has since been superseded by more accurate models. This in no way detracts from the science of that time, though, as most of modern physics is built directly upon the important developments, especially within technology, which took place in antiquity and during the Middle Ages in Europe and elsewhere. However, the emergence of classical mechanics was a decisive stage in the development of science, in the modern sense of the term. What characterizes it, above all, is its insistence on mathematics (rather than speculation), and its reliance on experiment (rather than observation). With classical mechanics it was established how to formulate quantitative predictions in theory, and how to test them by carefully designed measurement. The emerging globally cooperative endeavor increasingly provided for much closer scrutiny and testing, both of theory and experiment. This was, and remains, a key factor in establishing certain knowledge, and in bringing it to the service of society. History shows how closely the health and wealth of a society depends on nurturing this investigative and critical approach.
2. The displacement Δr is the difference of the particle's initial and final positions: Δr = r_final - r_initial.
References
1. Mughal, Muhammad Aurang Zeb (2009). "Time, absolute". In Birx, H. James (ed.), Encyclopedia of Time: Science, Philosophy, Theology, and Culture, Vol. 3. Thousand Oaks, CA: Sage, pp. 1254-1255.
2. MIT physics 8.01 lecture notes (page 12) (PDF).
3. Jesseph, Douglas M. (1998). "Leibniz on the Foundations of the Calculus: The Question of the Reality of Infinitesimal Magnitudes". Perspectives on Science 6.1&2: 6-40. Retrieved 31 December 2011.
4. Page 2-10 of the Feynman Lectures on Physics says "For already in classical mechanics there was indeterminability from a practical point of view." The past tense here implies that classical physics is no longer fundamental.
5. Bender, Carl M.; Hook, Daniel W.; Kooner, Karta. "Complex Elliptic Pendulum". In Asymptotics in Dynamics, Geometry and PDEs; Generalized Borel Summation, Vol. I.
Further reading

Alonso, M.; Finn, J. (1992). Fundamental University Physics. Addison-Wesley.

Feynman, Richard (1999). The Feynman Lectures on Physics. Perseus Publishing. ISBN
0-7382-0092-1.

Feynman, Richard; Phillips, Richard (1998). Six Easy Pieces. Perseus Publishing. ISBN
0-201-32841-0.

Goldstein, Herbert; Charles P. Poole; John L. Safko (2002). Classical Mechanics (3rd
ed.). Addison Wesley. ISBN 0-201-65702-3.

Kibble, Tom W.B.; Berkshire, Frank H. (2004). Classical Mechanics (5th ed.). Imperial
College Press. ISBN 978-1-86094-424-6.

Kleppner, D.; Kolenkow, R. J. (1973). An Introduction to Mechanics. McGraw-Hill. ISBN 0-07-035048-5.

Landau, L.D.; Lifshitz, E.M. (1972). Course of Theoretical Physics, Vol. 1: Mechanics. Franklin Book Company. ISBN 0-08-016739-X.

Morin, David (2008). Introduction to Classical Mechanics: With Problems and Solutions (1st ed.). Cambridge, UK: Cambridge University Press. ISBN 978-0-521-87622-3.

Sussman, Gerald Jay; Wisdom, Jack (2001). Structure and Interpretation of Classical Mechanics. MIT Press. ISBN 0-262-19455-4.

O'Donnell, Peter J. (2015). Essential Dynamics and Relativity. CRC Press. ISBN 978-1-4665-8839-4.

Thornton, Stephen T.; Marion, Jerry B. (2003). Classical Dynamics of Particles and
Systems (5th ed.). Brooks Cole. ISBN 0-534-40896-6.
External links

Wikimedia Commons has media related to Classical mechanics.

Wikiquote has quotations related to: Classical mechanics

Crowell, Benjamin. Newtonian Physics (an introductory text, uses algebra with optional
sections involving calculus)

Fitzpatrick, Richard. Classical Mechanics (uses calculus)

Hoiland, Paul (2004). Preferred Frames of Reference & Relativity

Horbatsch, Marko, "Classical Mechanics Course Notes".

Rosu, Haret C., "Classical Mechanics". Physics Education. 1999. [arxiv.org: physics/9909035]

Shapiro, Joel A. (2003). Classical Mechanics

Sussman, Gerald Jay & Wisdom, Jack & Mayer, Meinhard E. (2001). Structure and
Interpretation of Classical Mechanics

Tong, David. Classical Dynamics (Cambridge lecture notes on Lagrangian and Hamiltonian formalism)

Kinematic Models for Design Digital Library (KMODDL). Movies and photos of hundreds of working mechanical-systems models at Cornell University. Also includes an e-book library of classic texts on mechanical design and engineering.

MIT OpenCourseWare 8.01: Classical Mechanics. Free videos of actual course lectures with links to lecture notes, assignments and exams.
