
Internet Message Access Protocol

From Wikipedia, the free encyclopedia


"IMAP" redirects here. For the antipsychotic,
see Fluspirilene.
Internet message access protocol (IMAP) is one of the
two most prevalent Internet standard protocols for email retrieval, the other being the Post Office
Protocol (POP).[1] Virtually all modern e-mail
clients and mail servers support both protocols as a
means of transferring e-mail messages from a server.

[edit]E-mail protocols
The Internet Message Access Protocol (commonly known
as IMAP) is an Application Layer Internet protocol that
allows an e-mail client to access e-mail on a remote mail
server. The current version, IMAP version 4 revision 1
(IMAP4rev1), is defined by RFC 3501. An IMAP server
typically listens on well-known port 143. IMAP
over SSL (IMAPS) is assigned well-known port number
993.
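As a rough illustration of these defaults (not part of the standard text), the following Python sketch uses the standard imaplib module to open an IMAPS connection on port 993; the host name and credentials are placeholders:

import imaplib

# IMAP over SSL uses port 993; plain IMAP would use imaplib.IMAP4(host, 143).
imap = imaplib.IMAP4_SSL("imap.example.org", 993)   # placeholder host
imap.login("user", "password")                       # placeholder credentials
status, mailboxes = imap.list()                      # ask the server for its mailboxes
print(status, mailboxes)
imap.logout()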
IMAP supports both on-line and off-line modes of
operation. E-mail clients using IMAP generally leave
messages on the server until the user explicitly deletes
them. This and other characteristics of IMAP operation
allow multiple clients to manage the same mailbox. Most
e-mail clients support IMAP in addition to POP to retrieve
messages; however, fewer email services support
IMAP.[2] IMAP offers access to the mail storage. Clients may store local copies of the messages, but these are considered to be a temporary cache.
Incoming e-mail messages are sent to an e-mail server
that stores messages in the recipient's email box. The
user retrieves the messages with an e-mail client that uses
one of a number of e-mail retrieval protocols. Some clients
and servers preferentially use vendor-specific, proprietary
protocols, but most support the Internet standard
protocols, SMTP for sending e-mail and POP and IMAP
for retrieving e-mail, allowing interoperability with other
servers and clients. For
example, Microsoft's Outlook client uses a proprietary
protocol to communicate with a Microsoft Exchange
Server, as does IBM's Notes client when
communicating with a Domino server, but all of these
products also support POP, IMAP, and outgoing SMTP.
Support for the Internet standard protocols allows many email clients such as Pegasus Mail or Mozilla
Thunderbird (see comparison of e-mail clients) to access
these servers, and allows the clients to be used with other
servers (see list of mail servers).
[edit]History
IMAP was designed by Mark Crispin in 1986 as a remote
mailbox protocol, in contrast to the widely used POP, a
protocol for retrieving the contents of a mailbox.[3]
IMAP was previously known as Internet Mail Access
Protocol, Interactive Mail Access Protocol (RFC 1064),
and Interim Mail Access Protocol.[4]
[edit]Original IMAP

The original Interim Mail Access Protocol was


implemented as a Xerox Lisp machine client and a TOPS20 server.
No copies of the original interim protocol specification or
its software exist.[citation needed] Although some of its
commands and responses were similar to IMAP2, the
interim protocol lacked command/response tagging and
thus its syntax was incompatible with all other versions of
IMAP.
[edit]IMAP2
The interim protocol was quickly replaced by
the Interactive Mail Access Protocol (IMAP2), defined
in RFC 1064 (in 1988) and later updated by RFC 1176 (in
1990). IMAP2 introduced command/response tagging and
was the first publicly distributed version.
[edit]IMAP3
IMAP3 is an extinct and extremely rare variant of
IMAP.[5] It was published as RFC 1203 in 1991. It was
written specifically as a counter proposal to RFC 1176,
which itself proposed modifications to IMAP2.[6] IMAP3
was never accepted by the
marketplace.[7][8] The IESG reclassified RFC1203
"Interactive Mail Access Protocol - Version 3" as a Historic
protocol in 1993. The IMAP Working Group used
RFC1176 (IMAP2) rather than RFC1203 (IMAP3) as its
starting point.[9][10]
[edit]IMAP2bis
With the advent of MIME, IMAP2 was extended to support
MIME body structures and add mailbox management functionality (create, delete, rename, message upload) that was absent in IMAP2. This experimental revision was
called IMAP2bis; its specification was never published in
non-draft form. An internet draft of IMAP2bis was
published by the IETF IMAP Working Group in October
1993. This draft was based upon the following earlier
specifications: unpublished IMAP2bis.TXT document,
RFC1176, and RFC1064
(IMAP2).[11] The IMAP2bis.TXT draft documented the state
of extensions to IMAP2 as of December 1992.[12] Early
versions of Pine were widely distributed with IMAP2bis
support[5] (Pine 4.00 and later supports IMAP4rev1).
[edit]IMAP4
An IMAP Working Group formed in the IETF in the early
1990s took over responsibility for the IMAP2bis design.
The IMAP WG decided to rename IMAP2bis to IMAP4 to
avoid confusion with a competing IMAP3 proposal from
another group that never got off the ground.[citation needed] The expansion of the IMAP acronym also changed to the Internet Message Access Protocol.
[edit]Advantages over POP
[edit]Connected and disconnected modes of operation
When using POP, clients typically connect to the e-mail
server briefly, only as long as it takes to download new
messages. When using IMAP4, clients often stay
connected as long as the user interface is active and
download message content on demand. For users with
many or large messages, this IMAP4 usage pattern can
result in faster response times.

[edit]Multiple clients simultaneously connected to the same mailbox
The POP protocol requires the currently connected client
to be the only client connected to the mailbox. In contrast,
the IMAP protocol specifically allows simultaneous access
by multiple clients and provides mechanisms for clients to
detect changes made to the mailbox by other, concurrently
connected, clients. See for example RFC3501 section 5.2
which specifically cites "simultaneous access to the same
mailbox by multiple agents" as an example.
[edit]Access to MIME message parts and partial fetch
Usually all Internet e-mail is transmitted in MIME format,
allowing messages to have a tree structure where the leaf
nodes are any of a variety of single part content types and
the non-leaf nodes are any of a variety of multipart types.
The IMAP4 protocol allows clients to separately retrieve
any of the individual MIME parts and also to retrieve
portions of either individual parts or the entire message.
These mechanisms allow clients to retrieve the text portion
of a message without retrieving attached files or
to stream content as it is being fetched.
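A hedged sketch of this capability with Python's standard imaplib (host, credentials, and message number are placeholders; the exact part numbers depend on the message's MIME tree):

import imaplib

imap = imaplib.IMAP4_SSL("imap.example.org")
imap.login("user", "password")
imap.select("INBOX", readonly=True)
# BODYSTRUCTURE describes the MIME tree without downloading any content.
typ, structure = imap.fetch("1", "(BODYSTRUCTURE)")
# BODY.PEEK[1]<0.1024> asks for only the first 1024 octets of MIME part 1,
# without marking the message as read.
typ, partial = imap.fetch("1", "(BODY.PEEK[1]<0.1024>)")
imap.logout()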
[edit]Message state information
Through the use of flags defined in the IMAP4 protocol,
clients can keep track of message state; for example,
whether or not the message has been read, replied to, or
deleted. These flags are stored on the server, so different
clients accessing the same mailbox at different times can
detect state changes made by other clients. POP provides
no mechanism for clients to store such state information
on the server, so if a single user accesses a mailbox with two different POP clients, state information (such as whether a message has been accessed) cannot be synchronized between the clients. The IMAP4 protocol
supports both pre-defined system flags and client defined
keywords. System flags indicate state information such as
whether a message has been read. Keywords, which are
not supported by all IMAP servers, allow messages to be
given one or more tags whose meaning is up to the client.
Adding user created tags to messages is an operation
supported by some web-based email services, such
as Gmail.
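For example, a client might set flags with the STORE command; the sketch below (Python imaplib, placeholder server and message number) sets the \Seen system flag and a client-defined keyword, which not every server will accept:

import imaplib

imap = imaplib.IMAP4_SSL("imap.example.org")
imap.login("user", "password")
imap.select("INBOX")
imap.store("1", "+FLAGS", r"\Seen")     # standard system flag: message has been read
imap.store("1", "+FLAGS", "ProjectX")   # client-defined keyword; support varies by server
imap.logout()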
[edit]Multiple mailboxes on the server
IMAP4 clients can create, rename, and/or delete
mailboxes (usually presented to the user as folders) on the
server, and move messages between mailboxes. Multiple
mailbox support also allows servers to provide access to
shared and public folders. The IMAP4 Access Control List
(ACL) Extension (RFC 4314) may be used to regulate
access rights.
[edit]Server-side searches
IMAP4 provides a mechanism for a client to ask the server
to search for messages meeting a variety of criteria. This
mechanism avoids requiring clients to download every
message in the mailbox in order to perform these
searches.
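A minimal sketch of the SEARCH command through Python's imaplib (placeholder host, sender, and date); only the matching message numbers cross the network:

import imaplib

imap = imaplib.IMAP4_SSL("imap.example.org")
imap.login("user", "password")
imap.select("INBOX", readonly=True)
# The server evaluates the criteria and returns matching message numbers only.
typ, msg_nums = imap.search(None, "UNSEEN", 'FROM "alice@example.org"', 'SINCE "01-Jan-2024"')
print(msg_nums)
imap.logout()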
[edit]Built-in extension mechanism
Reflecting the experience of earlier Internet protocols,
IMAP4 defines an explicit mechanism by which it may be
extended. Many extensions to the base protocol have been proposed and are in common use. IMAP2bis did not have an extension mechanism, and POP now has one defined by RFC 2449.
[edit]Disadvantages
While IMAP remedies many of the shortcomings of POP,
this inherently introduces additional complexity. Much of
this complexity (e.g., multiple clients accessing the same
mailbox at the same time) is compensated for by server-side workarounds such as Maildir or database backends.
The IMAP specification has been criticised for being
insufficiently strict and allowing behaviours that effectively
negate its usefulness. For instance, the specification
states that each message stored on the server has a
"unique id" to allow the clients to identify the messages
they have already seen between sessions. However, the
specification also allows these UIDs to be invalidated with
no restrictions, practically defeating their purpose.[13]
Unless the mail storage and searching algorithms on the
server are carefully implemented, a client can potentially
consume large amounts of server resources when
searching massive mailboxes.
IMAP4 clients need to maintain a TCP/IP connection to
the IMAP server in order to be notified of the arrival of new
mail. Notification of mail arrival is done through in-band
signaling, which contributes to the complexity of client-side
IMAP protocol handling somewhat.[14] A private
proposal, push IMAP, would extend IMAP to
implement push e-mail by sending the entire message
instead of just a notification. However, push IMAP has not been generally accepted, and current IETF work has addressed the problem in other ways (see the Lemonade Profile for more information).
Unlike some proprietary protocols which combine sending
and retrieval operations, sending a message and saving a
copy in a server-side folder with a base-level IMAP client
requires transmitting the message content twice, once to
SMTP for delivery and a second time to IMAP to store in a
sent mail folder. This is remedied by a set of extensions
defined by the IETF LEMONADE Working Group for
mobile devices: URLAUTH (RFC 4467) and CATENATE
(RFC 4469) in IMAP and BURL (RFC 4468) in SMTP SUBMISSION. POP servers don't support server-side
folders so clients have no choice but to store sent items on
the client. Many IMAP clients can be configured to store
sent mail in a client-side folder, or to BCC oneself and
then filter the incoming mail instead of saving a copy in a
folder directly. In addition to the LEMONADE "trio", Courier
Mail Server offers a non-standard method of sending
using IMAP by copying an outgoing message to a
dedicated outbox folder.
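The base-level pattern described above can be sketched with Python's standard smtplib and imaplib: the same message bytes are handed to SMTP for delivery and then APPENDed to a server-side mailbox. Host names, credentials, and the "Sent" mailbox name are placeholders and vary between providers:

import imaplib, smtplib, time
from email.message import EmailMessage

msg = EmailMessage()
msg["From"], msg["To"], msg["Subject"] = "me@example.org", "you@example.org", "Hello"
msg.set_content("Message body.")

# First transmission: submit the message over SMTP.
with smtplib.SMTP("smtp.example.org", 587) as smtp:
    smtp.starttls()
    smtp.login("user", "password")
    smtp.send_message(msg)

# Second transmission: store a copy in the server-side Sent mailbox via IMAP APPEND.
imap = imaplib.IMAP4_SSL("imap.example.org")
imap.login("user", "password")
imap.append("Sent", r"\Seen", imaplib.Time2Internaldate(time.time()), msg.as_bytes())
imap.logout()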
Like POP, IMAP is an e-mail only protocol. As a result,
items such as contacts, appointments or tasks cannot be
managed or accessed using IMAP.
[edit]See also
List of mail servers
Comparison of e-mail clients
Post Office Protocol (POP)
Push-IMAP
Simple Mail Access Protocol
Webmail
IMAP IDLE

MIME
From Wikipedia, the free encyclopedia
This article is about the email content type system. For the
World Wide Web content type system, see Internet media
type. For mime as an art form, see Mime artist. For the
British engineering society, see Institution of Mechanical
Engineers.
Multipurpose Internet Mail Extensions (MIME) is
an Internet standard that extends the format of email to
support:
Text in character sets other than ASCII
Non-text attachments
Message bodies with multiple parts
Header information in non-ASCII character sets
MIME's use, however, has grown beyond describing the
content of email to describe content type in general,
including for the web (see Internet media type) and as a
storage for rich content in some commercial products
(e.g., IBM Lotus Domino and IBM Lotus Quickr).
Virtually all human-written Internet email and a fairly large
proportion of automated email is transmitted via SMTP in
MIME format. Internet email is so closely associated with
the SMTP and MIME standards that it is sometimes
called SMTP/MIME email.[1]

The content types defined by MIME standards are also of importance outside of email, such as in communication protocols like HTTP for the World Wide Web. HTTP requires that data be transmitted in the context of email-like messages, although the data most often is not actually email.
MIME is specified in six linked RFC memoranda: RFC
2045, RFC 2046, RFC 2047, RFC 4288, RFC
4289 and RFC 2049, which together define the
specifications.
[edit]Introduction
The basic Internet email transmission protocol, SMTP,
supports only 7-bit ASCII characters (see also 8BITMIME).
This effectively limits Internet email to messages
which, when transmitted, include only the characters
sufficient for writing a small number of languages,
primarily English. Other languages based on the Latin
alphabet typically include diacritics and are not supported
in 7-bit ASCII, meaning text in these languages cannot be
correctly represented in basic email.
MIME defines mechanisms for sending other kinds of
information in email. These include text in languages other
than English using character encodings other than ASCII,
and 8-bit binary content such as files
containing images, sounds, movies, and computer
programs. Parts of MIME are also reused in
communication protocols such as HTTP, which requires
that data be transmitted in the context of email-like
messages even though the data might not (and usually doesn't) actually have anything to do with email, and the message body can actually be binary. Mapping messages
into and out of MIME format is typically done automatically
by an email client or by mail servers when sending or
receiving Internet (SMTP/MIME) email.
The basic format of Internet email is defined in RFC 5322,
which is an updated version of RFC 2822 and RFC 822.
These standards specify the familiar formats for text
email headers and body and rules pertaining to commonly
used header fields such as "To:", "Subject:", "From:", and
"Date:". MIME defines a collection of email headers for
specifying additional attributes of a message
including content type, and defines a set of transfer
encodings which can be used to represent 8-bit binary
data using characters from the 7-bit ASCII character set.
MIME also specifies rules for encoding non-ASCII
characters in email message headers, such as "Subject:",
allowing these header fields to contain non-English
characters.
MIME is extensible. Its definition includes a method to
register new content types and other MIME attribute
values.
The goals of the MIME definition included requiring no
changes to existing email servers and allowing plain text
email to function in both directions with existing clients.
These goals were achieved by using additional RFC 822-style headers for all MIME message attributes and by
making the MIME headers optional with default values
ensuring a non-MIME message is interpreted correctly by
a MIME-capable client. A simple MIME text message is therefore likely to be interpreted correctly by a non-MIME client even if it has email headers which the non-MIME client won't know how to interpret. Similarly, if the quoted-printable transfer encoding (see below) is used, the ASCII part of the message will be intelligible to users with non-MIME clients.
[edit]MIME headers
[edit]MIME-Version
The presence of this header indicates the message is
MIME-formatted. The value is typically "1.0" so this header
appears as
MIME-Version: 1.0
According to MIME co-creator Nathaniel Borenstein, the
intention was to allow MIME to change, to advance to
version 2.0 and so forth, but this decision led to the
opposite outcome, making it nearly impossible to create a
new version of the standard.
"We did not adequately specify how to handle a future
MIME version," Borenstein said. "So if you write something
that knows 1.0, what should you do if you encounter 2.0 or
1.1? I sort of thought it was obvious but it turned out
everyone implemented that in different ways. And the
result is that it would be just about impossible for the
Internet to ever define a 2.0 or a 1.1."[2]
[edit]Content-ID
The Content-ID header is primarily of use in multi-part
messages (as discussed below); a Content-ID is a permanently globally unique identifier for a message part, allowing each part to be universally referred to by its
Content-ID (e.g., in IMG tags of an HTML message
allowing the inline display of attached images).[3] The
content ID is contained within angle brackets in the
Content-ID header. Here is an example:
Content-ID: <5.31.32252.1057009685@server01.example.net>
The standards don't really have a lot to say about exactly
what is in a Content-ID; they're only supposed to be
globally and permanently unique (meaning that no two are
ever the same, even when generated by different people
in different times and places). To achieve this, some
conventions have been adopted; one of them is to include
an at sign (@), with the hostname of the computer which
created the content ID to the right of it. This ensures the
content ID is different from any created by other
computers (well, at least it is when the originating
computer has a unique Internet hostname; if, as
sometimes happens, an anonymous machine inserts
something generic like localhost, uniqueness is no longer
guaranteed). Then, the part to the left of the at sign is
designed to be unique within that machine; a good way to
do this is to append several constantly-changing strings
that programs have access to. In this case, four different
numbers were inserted, with dots between them: the
rightmost one is a timestamp of the number of seconds
since January 1, 1970, known as the Unix epoch; to the left of it is the process ID of the program that generated the message (on servers running Unix or Linux, each
process has a number which is unique among the
processes in progress at any moment, though they do
repeat over time); to the left of that is a count of the
number of messages generated so far by the current
process; and the leftmost number is the number of parts in
the current message that have been generated so far. Put
together, these guarantee that the content ID will never
repeat; even if multiple messages are generated within the
same second, they either have different process IDs or a
different count of messages generated by the same
process.
That's just an example of how a unique content ID can be
generated; different programs do it differently. It's only
necessary that they remain unique, a requirement that is
necessary to ensure that, even if a bunch of different
messages are joined together as part of a bigger multi-part
message (as happens when a message is forwarded as
an attachment, or assembled into a MIME-format digest),
you won't have two parts with the same content ID, which
would be likely to confuse mail programs greatly.
There's a similar header called Message-ID which assigns
a unique identifier to the message as a whole; this is not
actually part of the MIME standards, since it can be used
on non-MIME as well as MIME messages. If the
originating mail program doesn't add a message ID, a
server handling the message later on probably will, since a
number of programs (both clients and servers) want every
message to have one to keep track of them. Some headers discussed in the Other Headers article make use of message IDs.
When referenced in the form of a Web URI, content IDs
and message IDs are placed within the URI schemes cid
and mid respectively, without the angle brackets:
cid:5.31.32252.1057009685@server01.example.net
[edit]Content-Type
This header indicates the Internet media type of the
message content, consisting of a type and subtype, for
example
Content-Type: text/plain
Through the use of the multipart type, MIME allows
messages to have parts arranged in a tree structure where
the leaf nodes are any non-multipart content type and the
non-leaf nodes are any of a variety of multipart types. This
mechanism supports:

simple text messages using text/plain (the default value for "Content-Type: ")
text plus attachments (multipart/mixed with
a text/plain part and other non-text parts). A MIME
message including an attached file generally indicates
the file's original name with the "Content-disposition:"
header, so the type of file is indicated both by the MIME
content-type and the (usually OS-specific) filename
extension

reply with original attached (multipart/mixed with a text/plain part and the original message as a message/rfc822 part)
alternative content, such as a message sent in both
plain text and another format such
as HTML (multipart/alternative with the same content
in text/plain and text/html forms)
image, audio, video and application (for
example, image/jpeg, audio/mp3, video/mp4,
and application/msword and so on)
many other message constructs
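The tree structures listed above can be sketched with Python's standard email library; the addresses and attachment bytes below are placeholders. Starting from a text/plain body, adding an HTML alternative produces multipart/alternative, and adding an attachment wraps the result in multipart/mixed:

from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"], msg["From"], msg["To"] = "Example", "me@example.org", "you@example.org"
msg.set_content("Hello in plain text.")                               # leaf: text/plain
msg.add_alternative("<p>Hello in <b>HTML</b>.</p>", subtype="html")   # -> multipart/alternative
msg.add_attachment(b"binary data", maintype="application",
                   subtype="octet-stream", filename="data.bin")       # -> multipart/mixed
print(msg.get_content_type())   # multipart/mixed at the root of the tree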
[edit]Content-Disposition
The original MIME specifications only described the
structure of mail messages. They did not address the
issue of presentation styles. The content-disposition
header field was added in RFC 2183 to specify the
presentation style. A MIME part can have:

an inline content-disposition, which means that it should be automatically displayed when the message is displayed, or
an attachment content-disposition, in which case it is
not displayed automatically and requires some form of
action from the user to open it.
In addition to the presentation style, the content-disposition header also provides fields for specifying the
name of the file, the creation date and modification date,
which can be used by the reader's mail user agent to store
the attachment.

The following example is taken from RFC 2183, where the header is defined:
Content-Disposition: attachment; filename=genome.jpeg;
modification-date="Wed, 12 February 1997 16:29:51 -0500";
The filename may be encoded as defined by RFC 2231.
As of 2010, a good majority of mail user agents do not
follow this prescription fully. The widely used Mozilla
Thunderbird mail client makes its own decisions about
which MIME parts should be automatically displayed,
ignoring the content-disposition headers in the
messages. Thunderbird prior to version 3 also sends out
newly composed messages with inline content-disposition
for all MIME parts. Most users are unaware of how to set
the content-disposition to attachment.[4] Many mail user
agents also send messages with the file name in
the name parameter of the content-type header instead
of the filename parameter of the content-disposition header. This practice is discouraged.[5]
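As a sketch of the recommended practice, Python's standard email library places the file name in the filename parameter of the Content-Disposition header when an attachment is added (the attachment content and name here are placeholders):

from email.message import EmailMessage

msg = EmailMessage()
msg.set_content("See the attached image.")
msg.add_attachment(b"...jpeg bytes...", maintype="image", subtype="jpeg",
                   filename="genome.jpeg")   # placeholder content and name
# The attachment part now carries a header of the form:
#   Content-Disposition: attachment; filename="genome.jpeg"
for part in msg.iter_attachments():
    print(part.get("Content-Disposition"))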
[edit]Content-Transfer-Encoding
In June 1992, MIME (RFC 1341, since made obsolete
by RFC 2045) defined a set of methods for representing
binary data in ASCII text format. The content-transfer-encoding: MIME header has 2-sided significance:

It indicates whether or not a binary-to-text encoding scheme has been used on top of the original encoding as specified within the Content-Type header:
1. If such a binary-to-text encoding method has been used, it states which one.
2. If not, it provides a descriptive label for the format of content, with respect to the presence of 8-bit or binary content.
The RFC and the IANA's list of transfer encodings define
the values shown below, which are not case sensitive.
Note that '7bit', '8bit', and 'binary' mean that no binary-to-text encoding on top of the original encoding was used. In
these cases, the header is actually redundant for the email
client to decode the message body, but it may still be
useful as an indicator of what type of object is being sent.
Values 'quoted-printable' and 'base64' tell the email client
that a binary-to-text encoding scheme was used and that
appropriate initial decoding is necessary before the
message can be read with its original encoding (e.g. UTF-8).

Suitable for use with normal SMTP:
7bit: up to 998 octets per line of the code range 1..127 with CR and LF (codes 13 and 10 respectively) only allowed to appear as part of a CRLF line ending. This is the default value.
quoted-printable: used to encode arbitrary octet sequences into a form that satisfies the rules of 7bit. Designed to be efficient and mostly human readable when used for text data consisting primarily of US-ASCII characters but also containing a small proportion of bytes with values outside that range.
base64: used to encode arbitrary octet sequences into a form that satisfies the rules of 7bit. Designed to be efficient for non-text 8-bit and binary data. Sometimes used for text data that frequently uses non-US-ASCII characters.
Suitable for use with SMTP servers that support the 8BITMIME SMTP extension:
8bit: up to 998 octets per line with CR and LF (codes 13 and 10 respectively) only allowed to appear as part of a CRLF line ending.
Suitable only for use with SMTP servers that support the BINARYMIME SMTP extension (RFC 3030):
binary: any sequence of octets.
There is no encoding defined which is explicitly designed
for sending arbitrary binary data through SMTP transports
with the 8BITMIME extension. Thus base64 or quoted-printable (with their associated inefficiency) must sometimes still be used. This restriction does not apply to other uses of MIME such as Web Services with MIME attachments or MTOM.
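The two binary-to-text schemes can be compared directly with Python's standard base64 and quopri modules; the sample string is arbitrary:

import base64, quopri

data = "Café menu\n".encode("utf-8")            # contains non-ASCII bytes
print(base64.b64encode(data))                   # b'Q2Fmw6kgbWVudQo='
print(quopri.encodestring(data))                # b'Caf=C3=A9 menu\n'
# Decoding restores the original octets, which are then read as UTF-8.
assert base64.b64decode(base64.b64encode(data)) == data
assert quopri.decodestring(quopri.encodestring(data)) == data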
[edit]Encoded-Word
Since RFC 2822, conforming message header names and
values should be ASCII characters; values that contain
non-ASCII data should use the MIME encodedword syntax (RFC 2047) instead of a literal string. This
syntax uses a string of ASCII characters indicating both
the original character encoding (the "charset") and the
content-transfer-encoding used to map the bytes of the
charset into ASCII characters.
The form is: "=?charset?encoding?encoded text?=".

charset may be any character set registered with IANA. Typically it would be the same charset as the message body.
encoding can be either "Q" denoting Q-encoding that is
similar to the quoted-printable encoding, or "B"
denoting base64 encoding.
encoded text is the Q-encoded or base64-encoded text.
An encoded-word may not be more than 75 characters
long, including charset, encoding, encoded text, and
delimiters. If it is desirable to encode more text than will
fit in an encoded-word of 75 characters,
multiple encoded-words (separated by CRLF SPACE)
may be used.
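A sketch of producing and parsing this syntax with Python's standard email.header module (the Subject text mirrors the example in the next subsection):

from email.header import Header, decode_header

# Encode a non-ASCII header value as an encoded-word.
h = Header("¡Hola, señor!", charset="iso-8859-1")
print(h.encode())                       # prints an =?iso-8859-1?q?...?= encoded-word

# Decode an encoded-word back into text.
raw = "=?iso-8859-1?Q?=A1Hola,_se=F1or!?="
text, charset = decode_header(raw)[0]   # returns (bytes, charset) pairs
print(text.decode(charset))             # ¡Hola, señor!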
[edit]Difference between Q-encoding and quoted-printable
The ASCII codes for the question mark ("?") and equals
sign ("=") may not be represented directly as they are
used to delimit the encoded-word. The ASCII code for
space may not be represented directly because it could
cause older parsers to split up the encoded word
undesirably. To make the encoding smaller and easier to
read the underscore is used to represent the ASCII code
for space creating the side effect that underscore cannot
be represented directly. Use of encoded words in certain
parts of headers imposes further restrictions on which
characters may be represented directly.
For example,
Subject: =?iso-8859-1?Q?=A1Hola,_se=F1or!?=

is interpreted as "Subject: ¡Hola, señor!".
The encoded-word format is not used for the names of the headers (for example Subject). These header names are
always in English in the raw message. When viewing a
message with a non-English email client, the header
names are usually translated by the client.
[edit]Multipart messages
A MIME multipart message contains a boundary in the
"Content-Type: " header; this boundary, which must not
occur in any of the parts, is placed between the parts, and
at the beginning and end of the body of the message, as
follows:
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary=frontier

This is a message with multiple parts in MIME format.
--frontier
Content-Type: text/plain

This is the body of the message.
--frontier
Content-Type: application/octet-stream
Content-Transfer-Encoding: base64

PGh0bWw+CiAgPGhlYWQ+CiAgPC9oZWFkPgogIDxib2R5PgogICAgPHA+VGhpcyBpcyB0aGUg
Ym9keSBvZiB0aGUgbWVzc2FnZS48L3A+CiAgPC9ib2R5Pgo8L2h0bWw+Cg==
--frontier--
Each part consists of its own content header (zero or more Content- header fields) and a body. Multipart content
can be nested. The content-transfer-encoding of a
multipart type must always be "7bit", "8bit" or "binary" to
avoid the complications that would be posed by multiple
levels of decoding. The multipart block as a whole does
not have a charset; non-ASCII characters in the part
headers are handled by the Encoded-Word system, and
the part bodies can have charsets specified if appropriate
for their content-type.
Notes:
Before the first boundary is an area that is ignored by
MIME-compliant clients. This area is generally used to
put a message to users of old non-MIME clients.
It is up to the sending mail client to choose a boundary
string that doesn't clash with the body text. Typically this
is done by inserting a long random string.
The last boundary must have two hyphens at the end.
[edit]Multipart subtypes
The MIME standard defines various multipart-message
subtypes, which specify the nature of the message parts
and their relationship to one another. The subtype is
specified in the "Content-Type" header of the overall
message. For example, a multipart MIME message using
the digest subtype would have its Content-Type set as
"multipart/digest".

The RFC initially defined 4 subtypes: mixed, digest, alternative and parallel. A minimally compliant application
must support mixed and digest; other subtypes are
optional. Applications must treat unrecognised subtypes
as "multipart/mixed". Additional subtypes, such as signed
and form-data, have since been separately defined in
other RFCs.
The following is a list of the most commonly used
subtypes; it is not intended to be a comprehensive list.
[edit]Mixed
Multipart/mixed is used for sending files with different
"Content-Type" headers inline (or as attachments). If
sending pictures or other easily readable files, most mail
clients will display them inline (unless otherwise specified
with the "Content-disposition" header). Otherwise it will
offer them as attachments. The default content-type for
each part is "text/plain".
Defined in RFC 2046, Section 5.1.3
[edit]Digest
Multipart/digest is a simple way to send multiple text
messages. The default content-type for each part is
"message/rfc822".
Defined in RFC 2046, Section 5.1.5
[edit]Message
A message/rfc822 part contains an email message,
including any headers. Rfc822 is a misnomer, since the
message may be a full MIME message. This is used for
digests as well as for email forwarding.

Defined in RFC 2046.


[edit]Alternative
The multipart/alternative subtype indicates that each part
is an "alternative" version of the same (or similar) content,
each in a different format denoted by its "Content-Type"
header. The formats are ordered by how faithful they are
to the original, with the least faithful first and the most
faithful last. Systems can then choose the "best"
representation they are capable of processing; in general,
this will be the last part that the system can understand,
although other factors may affect this.
Since a client is unlikely to want to send a version that is
less faithful than the plain text version, this structure
places the plain text version (if present) first. This makes
life easier for users of clients that do not understand
multipart messages.
Most commonly, multipart/alternative is used for email with
two parts, one plain text (text/plain) and one HTML
(text/html). The plain text part provides backwards
compatibility while the HTML part allows use of formatting
and hyperlinks. Most email clients offer a user option to
prefer plain text over HTML; this is an example of how
local factors may affect how an application chooses which
"best" part of the message to display.
While it is intended that each part of the message
represent the same content, the standard does not require
this to be enforced in any way. At one time, anti-spam
filters would only examine the text/plain part of a
message,[citation needed] because it is easier to parse than the

text/html part. But spammers eventually took advantage of this, creating messages with an innocuous-looking text/plain part and advertising in the text/html part. Anti-spam software eventually caught up on this trick,
penalizing messages with very different text in a
multipart/alternative message.[citation needed]
Defined in RFC 2046, Section 5.1.4
[edit]Related
A multipart/related is used to indicate that each message
part is a component of an aggregate whole. It is for
compound objects consisting of several inter-related components whose proper display cannot be achieved by individually displaying the constituent parts. The message consists of a root part (by default, the first) which references other parts inline, which may in turn reference
other parts. Message parts are commonly referenced by
the "Content-ID" part header. The syntax of a reference is
unspecified and is instead dictated by the encoding or
protocol used in the part.
One common usage of this subtype is to send a web page
complete with images in a single message. The root part
would contain the HTML document, and use image tags to
reference images stored in the latter parts.
Defined in RFC 2387
[edit]Report
Multipart/report is a message type that contains data
formatted for a mail server to read. It is split between a
text/plain part (or some other easily readable content type) and a message/delivery-status part, which contains the data formatted for the mail server to read.
Defined in RFC 6522
[edit]Signed
A multipart/signed message is used to attach a digital
signature to a message. It has two parts, a body part and
a signature part. The whole of the body part, including
mime headers, is used to create the signature part. Many
signature types are possible, like application/pgp-signature
(RFC 3156) and application/pkcs7-signature (S/MIME).
Defined in RFC 1847, Section 2.1
[edit]Encrypted
A multipart/encrypted message has two parts. The first
part has control information that is needed to decrypt the
application/octet-stream second part. Similar to signed
messages, there are different implementations which are
identified by their separate content types for the control
part. The most common types are "application/pgp-encrypted" (RFC 3156) and "application/pkcs7-mime"
(S/MIME).
Defined in RFC 1847, Section 2.2
[edit]Form Data
As its name implies, multipart/form-data is used to express
values submitted through a form. Originally defined as part
of HTML 4.0, it is most commonly used for submitting files
via HTTP.
Defined in RFC 2388
[edit]Mixed-Replace (experimental)

The content type multipart/x-mixed-replace was developed as part of a technology to emulate server push and streaming over HTTP.
All parts of a mixed-replace message have the same
semantic meaning. However, each part invalidates ("replaces") the previous parts as soon as it is received
completely. Clients should process the individual parts as
soon as they arrive and should not wait for the whole
message to finish.
Originally developed by Netscape,[6] it is still supported
by Mozilla, Firefox, Chrome,[7] Safari (but not in Safari on
the iPhone)[citation needed] and Opera, but traditionally ignored
by Microsoft. It is commonly used in IP cameras as the
MIME type for MJPEG streams.[8]
[edit]Byteranges
The multipart/byteranges type is used to represent noncontiguous byte ranges of a single message. It is used by HTTP when a server returns multiple byte ranges and is defined in the HTTP specification.

100BASE-T
In 100 Mbps (megabits per second) Ethernet (known as Fast
Ethernet), there are three types of physical wiring that can
carry signals:

100BASE-T4 (four pairs of telephone twisted pair wire)
100BASE-TX (two pairs of data grade twisted-pair wire)
100BASE-FX (a two-strand optical fiber cable)
This designation is an Institute of Electrical and Electronics
Engineers shorthand identifier. The "100" in the media type
designation refers to the transmission speed of 100 Mbps. The
"BASE" refers to baseband signalling, which means that only
Ethernet signals are carried on the medium. The "T4," "TX," and
"FX" refer to the physical medium that carries the signal.
(Through repeaters, media segments of different physical types
can be used in the same system.)

The TX and FX types together are sometimes referred to as "100BASE-X." (The designation for "100BASE-T" is also sometimes seen as "100BaseT.")
100Base-T
An Ethernet standard that transmits at 100 Mbps.
Called "Fast Ethernet" when first deployed in 1995
and officially the IEEE 802.3u standard, it is a 100
Mbps version of 10Base-T (10 Mbps Ethernet). Like
10Base-T, 100Base-T is a shared media LAN when
used with a hub (all nodes share the 100 Mbps) and provides 100 Mbps between each pair of nodes when used with a switch. Most Ethernet adapters and
switches are 10/100 devices, which support both
100Base-T and 10Base-T (see 10/100 adapter).
100Base-T, 100Base-T4 and 100Base-TX
100Base-TX, the common form of 100Base-T, uses two pairs of wires in Category 5 UTP cable. 100Base-T4 uses all four wire pairs in older Category 3 cables. See 10Base-T and 100Base-FX.

"Twisted Pair" Ethernet


All stations in a 10Base-T and 100Base-T Ethernet are wired to
a central hub or switch using twisted pair wires and RJ-45
connectors.

Post Office Protocol


From Wikipedia, the free encyclopedia
In computing, the Post Office Protocol (POP) is
an application-layer Internet standard protocol used by
local e-mail clients to retrieve e-mail from a
remote server over a TCP/IP connection.[1] POP
and IMAP (Internet Message Access Protocol) are the two
most prevalent Internet standard protocols for e-mail
retrieval.[2] Virtually all modern e-mail clients
and servers support both. The POP protocol has been
developed through several versions, with version 3
(POP3) being the current standard. Most webmail service
providers such as Hotmail, Gmail and Yahoo! Mail also
provide IMAP and POP3 service.
[edit]Overview
POP supports simple download-and-delete requirements
for access to remote mailboxes (termed maildrop in the
POP RFC's).[3] Although most POP clients have an option
to leave mail on the server after download, e-mail clients using POP generally connect, retrieve all messages, store them on the user's PC as new messages, delete them from the
server, and then disconnect. Other protocols, notably
IMAP (Internet Message Access Protocol), provide more
complete and complex remote access to typical mailbox
operations. Many e-mail clients support POP as well as
IMAP to retrieve messages; however, fewer Internet
Service Providers (ISPs) support IMAP[citation needed].
A POP3 server listens on well-known
port 110. Encrypted communication for POP3 is either
requested after protocol initiation, using
the STLS command, if supported, or by POP3S, which
connects to the server using Transport Layer
Security (TLS) or Secure Sockets Layer (SSL) on well-known TCP port 995.
Available messages to the client are fixed when a POP
session opens the maildrop, and are identified by
message-number local to that session or, optionally, by a
unique identifier assigned to the message by the POP
server. This unique identifier is permanent and unique to
the maildrop and allows a client to access the same
message in different POP sessions. Mail is retrieved and
marked for deletion by message-number. When the client
exits the session, the mail marked for deletion is removed
from the maildrop.
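This connect/retrieve/delete/quit cycle can be sketched with Python's standard poplib module (placeholder host and credentials; real clients would parse and store each message before deleting it):

import poplib

pop = poplib.POP3_SSL("pop.example.org", 995)   # placeholder host
pop.user("user")
pop.pass_("password")
count, size = pop.stat()                        # number of messages and total size
for i in range(1, count + 1):
    resp, lines, octets = pop.retr(i)           # download message i as a list of lines
    pop.dele(i)                                 # mark it for deletion
pop.quit()                                      # deletions take effect only at QUIT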
[edit]History
POP (POP1) is specified in RFC 918 (1984), POP2
by RFC 937 (1985). The original specification of POP3
is RFC 1081 (1988). Its current specification is RFC 1939, updated with an extension mechanism, RFC 2449, and an authentication mechanism in RFC 1734.
POP2 has been assigned well-known port 109.
The original POP3 specification supported only an
unencrypted USER/PASS login mechanism or
Berkeley .rhosts access control. POP3 currently supports
several authentication methods to provide varying levels of
protection against illegitimate access to a user's e-mail.
Most are provided by the POP3 extension mechanisms.
POP3 clients support SASL authentication methods via
the AUTH extension. MIT Project Athena also produced
a Kerberized version.
RFC 1460 introduced APOP into the core protocol. APOP
is a challenge/response protocol which uses
the MD5 hash function in an attempt to avoid replay
attacks and disclosure of the shared secret. Clients
implementing APOP include Mozilla Thunderbird, Opera
Mail, Eudora, KMail, Novell Evolution,
RimArts' Becky!,[4] Windows Live Mail, PowerMail, Apple
Mail, and Mutt.
An informal proposal had been outlined for
a "POP4" specification, complete with a working server
implementation. This "POP4" proposal added basic folder
management, multipart message support, as well as
message flag management, allowing for a light protocol
which supports some popular IMAP features which POP3
currently lacks. However, in doing so, it shared with IMAP the embedding of a specific mailbox model in the communication protocol, a model which, although common, is not universal. No progress has been observed on this "POP4" proposal since 2003.[5]
[edit]Extensions
An extension mechanism was proposed in RFC 2449 to
accommodate general extensions as well as announce in
an organized manner support for optional commands,
such as TOP and UIDL. The RFC did not intend to
encourage extensions, and reaffirmed that the role of
POP3 is to provide simple support for mainly download-and-delete requirements of mailbox handling.
The extensions are termed capabilities and are listed by
the CAPA command. Except for APOP, the optional
commands were included in the initial set of capabilities.
Following the lead of ESMTP (RFC 5321), capabilities
beginning with an X signify local capabilities.
[edit]STARTTLS
The STARTTLS extension allows the use of Transport
Layer Security (TLS) or Secure Sockets Layer (SSL) to be
negotiated using the STLS command, on the standard
POP3 port, rather than an alternate. Some clients and
servers instead use the alternate-port method, which uses
TCP port 995 (POP3S).
[edit]SDPS
Demon Internet introduced extensions to POP3 that allow
multiple accounts per domain, and has become known
as Standard Dial-up POP3 Service (SDPS).[1] To access
each account, the username includes the hostname,
as john@hostname or john+hostname.

Google Apps uses the same method.


[edit]Comparison with IMAP
Clients that leave mail on servers generally use the UIDL
command to get the current association of message-numbers to messages identified by their unique identifiers. The
unique identifier is arbitrary, and might be repeated if the
mailbox contains identical messages. In contrast, IMAP
uses a 32-bit unique identifier (UID) that is assigned to
messages in ascending (although not necessarily
consecutive) order as they are received. When retrieving
new messages, an IMAP client requests the UIDs greater
than the highest UID among all previously-retrieved
messages, whereas a POP client must fetch the entire
UIDL map. For large mailboxes, this can require significant
processing.
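A rough sketch of the difference, using Python's standard poplib and imaplib (placeholder hosts and credentials; 4000 stands in for the highest UID the client has already seen):

import imaplib, poplib

# POP: the client must fetch the entire message-number -> unique-id map.
pop = poplib.POP3_SSL("pop.example.org")
pop.user("user"); pop.pass_("password")
resp, uidl_map, octets = pop.uidl()     # one "msgnum uid" line per message in the maildrop
pop.quit()

# IMAP: the client asks only for UIDs above the last one it has seen.
imap = imaplib.IMAP4_SSL("imap.example.org")
imap.login("user", "password")
imap.select("INBOX", readonly=True)
typ, new_uids = imap.uid("SEARCH", "UID", "4001:*")   # UIDs 4001 and up
imap.logout()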
MIME serves as the standard for attachments and non-ASCII text in e-mail. Although neither POP3 nor SMTP
require MIME-formatted e-mail, essentially all non-ASCII
Internet e-mail comes MIME-formatted, so POP clients
must also understand and use MIME. IMAP, by design,
assumes MIME-formatted e-mail.

IPv6
From Wikipedia, the free encyclopedia
IPv6 (Internet Protocol version 6) is a version of
the Internet Protocol (IP) that is intended to succeed IPv4,
which is the communications protocol currently used to direct almost all Internet traffic.[1] IPv6 will allow the Internet to support many more devices by greatly
increasing the number of possible addresses.

Logo for IPv6 released by the Internet Society.


The Internet operates by transferring data between hosts
in packets that are routed across networks as specified
by routing protocols. These packets require an addressing
scheme, such as IPv4 or IPv6, to specify their source and
destination. Each host, computer or other device on the
Internet must be assigned an IP address in order to
communicate. The growth of the Internet has created a
need for more addresses than are possible with IPv4,
which allows 32 bits for an IP address, and therefore has
2^32 (4 294 967 296) possible addresses. IPv6, which was
developed by the Internet Engineering Task Force (IETF)
to deal with this long-anticipated IPv4 address exhaustion,
uses 128-bit addresses, allowing
2^128 (approximately 3.4×10^38) addresses. This expansion
can accommodate vastly more devices and users on the
internet as well as providing greater flexibility in allocating
addresses and efficiency for routing traffic. It also eliminates the primary need for network address translation (NAT), which has gained widespread
deployment as an effort to alleviate IPv4 address
exhaustion.
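The arithmetic behind these figures can be checked with Python's standard ipaddress module (2001:db8::/32 is the documentation prefix, used here only as an example):

import ipaddress

print(2 ** 32)     # 4294967296 possible IPv4 addresses
print(2 ** 128)    # roughly 3.4e38 possible IPv6 addresses
print(ipaddress.ip_network("0.0.0.0/0").num_addresses)       # whole IPv4 space
print(ipaddress.ip_network("::/0").num_addresses)            # whole IPv6 space
# Even one documentation /32 prefix dwarfs the entire IPv4 address space:
print(ipaddress.ip_network("2001:db8::/32").num_addresses)   # 2**96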
Like IPv4, IPv6 is an internet-layer protocol for packet-switched internetworking and provides end-to-end datagram transmission across multiple IP networks. It
is described in Internet standard document RFC 2460,
published in December 1998.[2] In addition to offering more
addresses, IPv6 also implements features not present in
IPv4. It simplifies aspects of address assignment
(stateless address autoconfiguration), network
renumbering and router announcements when changing
network connectivity providers. The IPv6 subnet size has
been standardized by fixing the size of the host identifier
portion of an address to 64 bits to facilitate an automatic
mechanism for forming the host identifier from link-layer media addressing information (MAC
address). Network security is also integrated into the
design of the IPv6 architecture, including the option
of IPsec.
The last top level (/8) blocks of 16 million free IPv4
addresses were assigned in February 2011 by the Internet
Assigned Numbers Authority (IANA) to the five Regional
Internet registries (RIRs). However, many free addresses
still remain within most assigned blocks, and each RIR will
continue with standard address allocation policy until it is
at its last /8 block. After that, only blocks of 1024
addresses (a /22) will be made available from the RIR to
each Local Internet registry (LIR). As of February 2011, only the Asia-Pacific Network Information Centre (APNIC) had reached this stage.[3]
For the Internet to make use of the advantages of IPv6
over IPv4, most hosts on the Internet, as well as the
networks connecting them, need to deploy the IPv6
protocol. However, IPv6 deployment has been slow. While
deployment of IPv6 is accelerating, especially in the Asia-Pacific region and some European countries, areas such
as the Americas and Africa are comparatively lagging in
deployment of IPv6. IPv6 does not implement
interoperability features with IPv4, and creates essentially
a parallel, independent network. Exchanging traffic
between the two networks requires special translator
gateways, but modern computer operating systems
implement dual-protocol software for transparent access
to both networks either natively or using a tunneling
protocol such as 6to4, 6in4, or Teredo. In December 2010,
despite marking its 12th anniversary as a Standards Track
protocol, IPv6 was only in its infancy in terms of general
worldwide deployment.
[edit]Motivation and origin
[edit]IPv4
Main article: IPv4

Decomposition of an IPv4 address to its binary value.


Internet Protocol Version 4 (IPv4) was the first publicly
used version of the Internet Protocol. IPv4 addresses are
typically displayed as four numbers, each in the range 0 to
255, or 8 bits per number, for a total of 32 bits. Thus IPv4
provides an addressing capability of 2^32 or approximately
4.3 billion addresses. Address exhaustion was not initially
a concern in IPv4 as this version was originally presumed
to be an internal test within ARPA, and not intended for
public use.
The decision to put a 32-bit address space on there was
the result of a year's battle among a bunch of engineers
who couldn't make up their minds about 32, 128, or
variable-length. And after a year of fighting, I said, I'm
now at ARPA, I'm running the program, I'm paying for this
stuff, I'm using American tax dollars, and I wanted some
progress because we didn't know if this was going to work.
So I said: OK, it's 32-bits. That's enough for an
experiment; it's 4.3 billion terminations. Even the Defense
Department doesn't need 4.3 billion of everything and
couldn't afford to buy 4.3 billion edge devices to do a test
anyway. So at the time I thought we were doing an
experiment to prove the technology and that if it worked
we'd have opportunity to do a production version of it.
Well, it just escaped! It got out and people started to use it,
and then it became a commercial thing. So this [IPv6] is
the production attempt at making the network scalable.
Vint Cerf, Google IPv6 Conference 2008[4]

During the first decade of operation of the Internet (by the late 1980s), it became apparent that methods had to be
developed to conserve address space. In the early 1990s,
even after the redesign of the addressing system using
a classless network model, it became clear that this would
not suffice to prevent IPv4 address exhaustion, and that
further changes to the Internet infrastructure were
needed.[5]
[edit]Working-group proposal
By the beginning of 1992, several proposals appeared and
by the end of 1992, the IETF announced a call for white
papers.[6] In September 1993, the IETF created a
temporary, ad-hoc IP Next Generation (IPng) area to deal
specifically with IPng issues. The new area was led by
Allison Mankin and Scott Bradner, and had a directorate
with 15 engineers from diverse backgrounds for direction-setting and preliminary document review:[5][7] The working-group members were J. Allard (Microsoft), Steve
Bellovin (AT&T), Jim Bound (Digital Equipment
Corporation), Ross Callon (Wellfleet), Brian
Carpenter (CERN), Dave Clark (MIT), John
Curran (NEARNET), Steve Deering (Xerox), Dino
Farinacci (Cisco), Paul Francis (NTT), Eric Fleischmann
(Boeing), Mark Knopper (Ameritech), Greg Minshall
(Novell), Rob Ullmann (Lotus), and Lixia Zhang
(Xerox).[citation needed]
The Internet Engineering Task Force adopted the IPng
model on July 25, 1994, with the formation of several IPng
working groups.[5] By 1996, a series of RFCs was released
defining Internet Protocol version 6 (IPv6), starting with RFC 1883. (Version 5 was used by the experimental Internet Stream Protocol.)
It is widely expected that the Internet will use IPv4
alongside IPv6 for the foreseeable future. IPv4-only and
IPv6-only nodes cannot communicate directly, and need
assistance from an intermediary gateway or must use
other transition mechanisms.
[edit]Exhaustion of IPv4 addresses
Main article: IPv4 address exhaustion
On February 3, 2011, in a ceremony in Miami, the Internet
Assigned Numbers Authority (IANA) assigned the last
batch of five /8 address blocks to the Regional Internet
Registries,[8] officially depleting the global pool of
completely fresh blocks of addresses.[9] Each /8 address
block represents approximately 16.7 million possible
addresses, for a total of over 80 million potential
addresses combined.
At the time, it was anticipated that these addresses could
well be fully consumed within three to six months at then-current rates of allocation.[10] APNIC was the first RIR to
exhaust its regional pool on 15 April 2011, except for a
small amount of address space reserved for the transition
to IPv6, which will be allocated in a much more restricted
way.[11]
In 2003, the director of Asia-Pacific Network Information
Centre (APNIC), Paul Wilson, stated that, based on then-current rates of deployment, the available space would
last for one or two decades.[12] In September 2005, a
report by Cisco Systems suggested that the pool of

available addresses would be exhausted in as little as 4 to 5 years.[13] In 2008, a policy process started for the endgame and post-exhaustion era.[14] In 2010, a daily updated report projected the global address pool exhaustion by the first quarter of 2011, and depletion at the five regional Internet registries before the end of 2011.[15]
[edit]Comparison to IPv4
IPv6 specifies a new packet format, designed to minimize
packet header processing by routers.[2][16] Because the
headers of IPv4 packets and IPv6 packets are significantly
different, the two protocols are not interoperable.
However, in most respects, IPv6 is a conservative
extension of IPv4. Most transport and application-layer
protocols need little or no change to operate over IPv6;
exceptions are application protocols that embed internet-layer addresses, such as FTP and NTPv3, where the new
address format may cause conflicts with existing protocol
syntax.
[edit]Larger address space

Decomposition of an IPv6 address into its binary form


The main advantage of IPv6 over IPv4 is its larger address
space. The length of an IPv6 address is 128 bits,

compared to 32 bits in IPv4.[2] The address space therefore has 2^128 or approximately 3.4×10^38 addresses. By comparison, this amounts to approximately 4.8×10^28 addresses for each of the seven
billion people alive in 2011.[17] In addition, the IPv4
address space is poorly allocated, with approximately 14%
of all available addresses utilized.[18] While these numbers
are large, it was not the intent of the designers of the IPv6
address space to assure geographical saturation with
usable addresses. Rather, the longer addresses simplify
allocation of addresses, enable efficientroute aggregation,
and allow implementation of special addressing features.
In IPv4, complex Classless Inter-Domain Routing (CIDR)
methods were developed to make the best use of the
small address space. The standard size of a subnet in
IPv6 is 2^64 addresses, the square of the size of the entire
IPv4 address space. Thus, actual address space
utilization rates will be small in IPv6, but network
management and routing efficiency is improved by the
large subnet space and hierarchical route aggregation.
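A quick check of that claim, again with the standard ipaddress module and a documentation prefix as the example:

import ipaddress

subnet = ipaddress.ip_network("2001:db8:abcd:12::/64")   # example /64 prefix
print(subnet.num_addresses)                    # 18446744073709551616, i.e. 2**64
print(subnet.num_addresses == (2 ** 32) ** 2)  # True: the square of the IPv4 space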
Renumbering an existing network for a new connectivity
provider with different routing prefixes is a major effort with
IPv4.[19][20] With IPv6, however, changing the prefix
announced by a few routers can in principle renumber an
entire network, since the host identifiers (the least-significant 64 bits of an address) can be independently
self-configured by a host.[21]
[edit]Multicasting
Multicasting, the transmission of a packet to multiple
destinations in a single send operation, is part of the base specification in IPv6. In IPv4 this is an optional although commonly implemented feature.[22] IPv6 multicast
addressing shares common features and protocols with
IPv4 multicast, but also provides changes and
improvements by eliminating the need for certain
protocols. IPv6 does not implement traditional IP
broadcast, i.e. the transmission of a packet to all hosts on
the attached link using a special broadcast address, and
therefore does not define broadcast addresses. In IPv6,
the same result can be achieved by sending a packet to
the link-local all nodes multicast group at
address ff02::1, which is analogous to IPv4 multicast to
address224.0.0.1. IPv6 also provides for new multicast
implementations, including embedding rendezvous point
addresses in an IPv6 multicast group address, which
simplifies the deployment of inter-domain solutions.[23]
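A small sketch with the standard ipaddress module confirming how these addresses are classified (sending real multicast traffic would additionally need a configured interface and sockets):

import ipaddress

all_nodes_v6 = ipaddress.ip_address("ff02::1")     # IPv6 link-local all-nodes group
all_hosts_v4 = ipaddress.ip_address("224.0.0.1")   # the IPv4 analogue
print(all_nodes_v6.is_multicast)   # True
print(all_hosts_v4.is_multicast)   # True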
In IPv4 it is very difficult for an organization to get even
one globally routable multicast group assignment, and the
implementation of inter-domain solutions is very
arcane.[24] Unicast address assignments by a local Internet
registry for IPv6 have at least a 64-bit routing prefix,
yielding the smallest subnet size available in IPv6 (also 64
bits). With such an assignment it is possible to embed the
unicast address prefix into the IPv6 multicast address
format, while still providing a 32-bit block, the least
significant bits of the address, or approximately 4.2 billion
multicast group identifiers.[citation needed] Thus each user of an
IPv6 subnet automatically has available a set of globally
routable source-specific multicast groups for multicast
applications.[25]

[edit]Stateless address autoconfiguration (SLAAC)


See also: IPv6 address
IPv6 hosts can configure themselves automatically when
connected to a routed IPv6 network using the Neighbor
Discovery Protocol via Internet Control Message Protocol
version 6 (ICMPv6) router discovery messages. When first
connected to a network, a host sends a link-local router
solicitation multicast request for its configuration
parameters; if configured suitably, routers respond to such
a request with a router advertisement packet that contains
network-layer configuration parameters.[21]
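The following is a minimal Python sketch of the address-forming step of SLAAC: deriving a modified EUI-64 interface identifier from a MAC address and appending it to an advertised /64 prefix. The prefix and MAC address used here are illustrative values, not taken from the article.

import ipaddress

def eui64_interface_id(mac: str) -> int:
    # Build the modified EUI-64 identifier from a 48-bit MAC address.
    octets = bytearray(int(x, 16) for x in mac.split(":"))
    octets[0] ^= 0x02                                        # flip the universal/local bit
    eui = octets[:3] + bytearray([0xFF, 0xFE]) + octets[3:]  # insert ff:fe in the middle
    return int.from_bytes(eui, "big")

def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    net = ipaddress.IPv6Network(prefix)
    assert net.prefixlen == 64, "SLAAC needs a /64 prefix"
    return net[eui64_interface_id(mac)]

# Illustrative prefix and MAC address:
print(slaac_address("2001:db8:1:2::/64", "00:02:b3:1e:83:29"))
# -> 2001:db8:1:2:202:b3ff:fe1e:8329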
If IPv6 stateless address autoconfiguration is unsuitable
for an application, a network may use stateful
configuration with the Dynamic Host Configuration
Protocol version 6 (DHCPv6) or hosts may be configured
statically.
Routers present a special case of requirements for
address configuration, as they often are sources for
autoconfiguration information, such as router and prefix
advertisements. Stateless configuration for routers can be
achieved with a special router renumbering protocol.[26]
[edit]Mandatory network-layer security
Internet Protocol Security (IPsec) was originally developed
for IPv6, but found widespread deployment first in IPv4,
into which it was back-engineered. Earlier, IPsec was an
integral part of the base IPv6 protocol suite,[2][27] but has
since been made optional.[28]
[edit]Simplified processing by routers

In IPv6, the packet header and the process of packet forwarding have been simplified. Although IPv6 packet
headers are at least twice the size of IPv4 packet headers,
packet processing by routers is generally more
efficient,[2][16] thereby extending the end-to-end principle of
Internet design. Specifically:
The packet header in IPv6 is simpler than that used in
IPv4, with many rarely used fields moved to separate
optional header extensions.
IPv6 routers do not perform fragmentation. IPv6 hosts
are required to either perform path MTU discovery,
perform end-to-end fragmentation, or to send packets
no larger than the IPv6 default minimum MTU size of
1280 octets.
The IPv6 header is not protected by a checksum;
integrity protection is assumed to be assured by both
link-layer and higher-layer (TCP, UDP, etc.) error
detection. UDP/IPv4 may actually have a checksum of
0, indicating no checksum; IPv6 requires UDP to have
its own checksum. Therefore, IPv6 routers do not need
to recompute a checksum when header fields (such as
the time to live (TTL) or hop count) change. This
improvement may have been made less necessary by
the development of routers that perform checksum
computation at link speed using dedicated hardware,
but it is still relevant for software-based routers.
The TTL field of IPv4 has been renamed to Hop Limit,
reflecting the fact that routers are no longer expected to
compute the time a packet has spent in a queue.
[edit]Mobility

Unlike mobile IPv4, mobile IPv6 avoids triangular routing and is therefore as efficient as native IPv6. IPv6
routers may also allow entire subnets to move to a new
router connection point without renumbering.[29]
[edit]Options extensibility
The IPv6 protocol header has a fixed size (40 octets).
Options are implemented as additional extension headers
after the IPv6 header, which limits their size only by the
size of an entire packet. The extension header mechanism
makes the protocol extensible in that it allows future
services for quality of service, security, mobility, and
others to be added without redesign of the basic
protocol.[2]
[edit]Jumbograms
IPv4 limits packets to 65535 (2¹⁶−1) octets of payload. An IPv6 node can optionally handle packets over this limit, referred to as jumbograms, which can be as large as 4294967295 (2³²−1) octets. The use of jumbograms may improve performance over high-MTU links. The use of jumbograms is indicated by the Jumbo Payload Option header.[30]
[edit]Privacy
Like IPv4, IPv6 supports globally unique static IP
addresses, which can be used to track a single device's
Internet activity. Most devices are used by a single user,
so a device's activity is often assumed to be equivalent to
a user's activity. This is a cause for concern to anyone who wishes to keep their Internet activity private.

Activity tracking based on IP address is a potential privacy issue for all IP-enabled devices. However, device activity
can be particularly simple to track when the host identifier
portion of the IPv6 address is automatically generated
from the network interface's MAC address.
Privacy extensions for IPv6 have been defined to address
these privacy concerns.[31] When privacy extensions are
enabled, the operating system generates ephemeral IP
addresses by concatenating a randomly generated host
identifier with the assigned network prefix. These
ephemeral addresses, instead of trackable static IP
addresses, are used to communicate with remote hosts.
The use of ephemeral addresses makes it difficult to
accurately track a user's Internet activity by scanning
activity streams for a single IPv6 address.[32]
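A short Python sketch of the idea behind these ephemeral addresses: the host keeps the advertised prefix but replaces the MAC-derived identifier with a random 64-bit one. The prefix below is an illustrative documentation value; real implementations follow the privacy-extension algorithm referenced above rather than this simplification.

import secrets
import ipaddress

prefix = ipaddress.IPv6Network("2001:db8:1:2::/64")   # illustrative advertised prefix
iid = secrets.randbits(64)                            # random, ephemeral interface identifier
temporary_address = prefix[iid]
print(temporary_address)   # changes every run; nothing is derived from the MAC address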
Privacy extensions are enabled by default in Windows,
Mac OS X (since 10.7), and iOS (since version
4.3).[33] Some Linux distributions have enabled privacy
extensions as well.[34]
Privacy extensions do not protect the user from other
forms of activity tracking, such as tracking cookies.
Privacy extensions do little to protect the user from
tracking if only one or two hosts are using a given network
prefix, and the activity tracker is privy to this information. In
this scenario, the network prefix is the unique identifier for
tracking. Network prefix tracking is less of a concern if the
user's ISP assigns a dynamic network prefix via
DHCP.[35][36]
[edit]Packet format

Main article: IPv6 packet

IPv6 packet header


An IPv6 packet has two parts: a header and payload.
The header consists of a fixed portion with minimal
functionality required for all packets and may contain
optional extensions to implement special features.
The fixed header occupies the first 40 octets (320 bits) of
the IPv6 packet. It contains the source and destination
addresses, traffic classification options, a hop counter, and
a pointer for extension headers, if any. The Next
Header field, present in each extension, points to the next
element in the chain of extensions. The last field points to
the upper-layer protocol that is carried in the
packet's payload.
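A minimal Python sketch of the fixed header layout just described, unpacking the 40 octets into their fields; it assumes a well-formed packet buffer and is only meant to make the field boundaries concrete.

import struct
import ipaddress

def parse_ipv6_fixed_header(packet: bytes) -> dict:
    # First 8 octets: version (4 bits), traffic class (8), flow label (20),
    # payload length (16), next header (8), hop limit (8).
    vtf, payload_len, next_header, hop_limit = struct.unpack("!IHBB", packet[:8])
    return {
        "version":        vtf >> 28,
        "traffic_class":  (vtf >> 20) & 0xFF,
        "flow_label":     vtf & 0xFFFFF,
        "payload_length": payload_len,
        "next_header":    next_header,   # e.g. 6 = TCP, 17 = UDP, 58 = ICMPv6
        "hop_limit":      hop_limit,
        "source":         ipaddress.IPv6Address(packet[8:24]),
        "destination":    ipaddress.IPv6Address(packet[24:40]),
    }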
Extension headers carry options that are used for special treatment of a packet in the network, e.g., for routing, fragmentation, and for security using the IPsec framework. Without special options, a payload must be less than 64 KB. With a Jumbo Payload option (in a Hop-by-Hop Options extension header), the payload must be less than 4 GB.

Unlike in IPv4, routers never fragment a packet. Hosts are expected to use Path MTU Discovery to make their
packets small enough to reach the destination without
needing to be fragmented. See IPv6
Packet#Fragmentation.
[edit]Addressing
Main article: IPv6 address
Compared to IPv4, the most obvious advantage of IPv6 is its larger address space. IPv4 addresses are 32 bits long and number about 4.3×10⁹ (4.3 billion).[37] IPv6 addresses are 128 bits long and number about 3.4×10³⁸. IPv6's addresses are deemed enough for the foreseeable future.[38]
IPv6 addresses are written in eight groups of
four hexadecimal digits separated by colons, such
as 2001:0db8:85a3:0000:0000:8a2e:0370:7334.
IPv6 unicast addresses other than those that start with
binary 000 are logically divided into two parts: a 64-bit
(sub-)network prefix, and a 64-bit interface identifier.[39]
For stateless address autoconfiguration (SLAAC) to work,
subnets require a /64 address block, as defined in RFC
4291 section 2.5.1. Local Internet registries get assigned
at least /32 blocks, which they divide among ISPs.[40] The
obsolete RFC 3177 recommended the assignment of a /48
to end-consumer sites. This was replaced by RFC 6177,
which "recommends giving home sites significantly more
than a single /64, but does not recommend that every
home site be given a /48 either". /56s are specifically
considered. It remains to be seen if ISPs will honor this recommendation; for example, during initial trials, Comcast customers were given a single /64 network.[41]
IPv6 addresses are classified by three types of networking
methodologies: unicast addresses identify each network
interface, anycast addresses identify a group of interfaces,
usually at different locations of which the nearest one is
automatically selected, and multicast addresses are used
to deliver one packet to many interfaces.
The broadcast method is not implemented in IPv6. Each
IPv6 address has a scope, which specifies in which part of
the network it is valid and unique. Some addresses are
unique only on the local (sub-)network. Others are globally
unique.
Some IPv6 addresses are reserved for special purposes,
such as loopback, 6to4 tunneling, and Teredo tunneling.
See RFC 5156. Also, some address ranges are
considered special, such as link-local addresses for use
on the local link only, Unique Local addresses (ULA) as
described in RFC 4193, and solicited-node multicast
addresses used in the Neighbor Discovery Protocol.
[edit]IPv6 in the Domain Name System
Main article: IPv6 address#IPv6 addresses in the Domain Name System
In the Domain Name System, hostnames are mapped to IPv6 addresses by AAAA resource records, so-called quad-A records. For reverse resolution, the IETF reserved the domain ip6.arpa, where the name space is hierarchically divided by the one-digit hexadecimal representation of nibble units (4 bits) of the IPv6 address. This scheme is defined in RFC 3596.
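A small Python sketch of the nibble-by-nibble reverse-mapping scheme, using the standard library's ipaddress module and the documentation address that appears later in this article.

import ipaddress

addr = ipaddress.IPv6Address("2001:0db8:85a3:0000:0000:8a2e:0370:7334")
print(addr.reverse_pointer)
# 4.3.3.7.0.7.3.0.e.2.a.8.0.0.0.0.0.0.0.0.3.a.5.8.8.b.d.0.1.0.0.2.ip6.arpa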
[edit]Address format
An IPv6 address is represented by 8 groups of 16-bit
hexadecimal values separated by colons (:). For example:
2001:0db8:85a3:0000:0000:8a2e:0370:7334
The hexadecimal digits are case-insensitive.
An IPv6 address can be abbreviated with the following
rules:
1. Omit leading zeroes in a 16-bit value.
2. Replace one or more groups of consecutive zeroes by
a double colon.
Below is an example of these rules:
Address:       fe80:0000:0000:0000:0202:b3ff:fe1e:8329
After Rule 1:  fe80:0:0:0:202:b3ff:fe1e:8329
After Rule 2:  fe80::202:b3ff:fe1e:8329
Below are the text representations of these addresses:
fe80:0000:0000:0000:0202:b3ff:fe1e:8329
fe80:0:0:0:202:b3ff:fe1e:8329
fe80::202:b3ff:fe1e:8329

Another interesting example is the loopback address:[37]
0:0:0:0:0:0:0:1
::1
As IPv6 addresses may have more than one
representation, which can lead to confusion,
there is a proposed standard for representing
them in text.[42]
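A short Python sketch showing that the spellings above are one and the same address, and how the standard library produces a single canonical compressed form of the kind the proposed standard recommends.

import ipaddress

forms = [
    "fe80:0000:0000:0000:0202:b3ff:fe1e:8329",
    "fe80:0:0:0:202:b3ff:fe1e:8329",
    "fe80::202:b3ff:fe1e:8329",
]
addresses = {ipaddress.IPv6Address(f) for f in forms}
print(len(addresses))                  # 1 - all three spellings are the same address
print(addresses.pop().compressed)      # fe80::202:b3ff:fe1e:8329
print(ipaddress.IPv6Address("0:0:0:0:0:0:0:1").compressed)   # ::1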
[edit]Transition mechanisms
IPv6 transition mechanisms
Standards Track: 4in6, 6in4, 6over4, DS-Lite, 6rd, 6to4, ISATAP, NAT64 / DNS64, Teredo, SIIT
Experimental: TSP
Informational: IVI, TRT
Drafts: 4rd, AYIYA, dIVI
Deprecated: NAT-PT, NAPT-PT
Until IPv6 completely supplants IPv4, a number of transition mechanisms[43] are
needed to enable IPv6-only hosts to reach
IPv4 services and to allow isolated IPv6 hosts
and networks to reach the IPv6 Internet over
the IPv4 infrastructure. People have made
various proposals for this transition period:

RFC 2185, Routing Aspects of IPv6 Transition
RFC 2766, Network Address Translation - Protocol Translation (NAT-PT), obsoleted as explained in RFC 4966, Reasons to Move the Network Address Translator - Protocol Translator (NAT-PT) to Historic Status
RFC 3053, IPv6 Tunnel Broker
RFC 3056, 6to4. Connection of IPv6
Domains via IPv4 Clouds
RFC 3142, An IPv6-to-IPv4 Transport Relay
Translator
RFC 4213, Basic Transition Mechanisms for
IPv6 Hosts and Routers
RFC 4380, Teredo: Tunneling IPv6 over
UDP through Network Address Translations
NATs
RFC 4798, Connecting IPv6 Islands over
IPv4 MPLS Using IPv6 Provider Edge
Routers (6PE)
RFC 5214, Intra-Site Automatic Tunnel
Addressing Protocol ISATAP
RFC 5569, IPv6 Rapid Deployment on IPv4
Infrastructures (6rd)
RFC 5572, IPv6 Tunnel Broker with the
Tunnel Setup Protocol (TSP)
RFC 6180, Guidelines for Using IPv6
Transition Mechanisms during IPv6
Deployment
RFC 6343, Advisory Guidelines for 6to4
Deployment
[edit]Dual IP stack implementation
The dual-stack protocol implementation in an operating system is a fundamental IPv4-to-IPv6 transition technology. It implements IPv4 and IPv6 protocol stacks either independently or in a hybrid form. The hybrid form is commonly implemented in modern operating systems that implement IPv6. Dual-stack hosts are described in RFC 4213.
Modern hybrid dual-stack implementations of IPv4 and IPv6 allow programmers to write networking code that works transparently on IPv4 or IPv6. The software may use hybrid sockets designed to accept both IPv4 and IPv6 packets. When used in IPv4 communications, hybrid stacks use an IPv6 application programming interface and represent IPv4 addresses in a special address format, the IPv4-mapped IPv6 address.
[edit]IPv4-mapped IPv6 addresses
Hybrid dual-stack IPv6/IPv4 implementations
recognize a special class of addresses, the
IPv4-mapped IPv6 addresses. In these
addresses, the first 80 bits are zero, the next
16 bits are one, and the remaining 32 bits are
the IPv4 address. You may see these
addresses with the first 96 bits written in the
standard IPv6 format, and the remaining 32
bits written in the customary dot-decimal
notation of IPv4. For
example, ::ffff:192.0.2.128 represents
the IPv4 address 192.0.2.128. A
deprecated format for IPv4-compatible IPv6
addresses was ::192.0.2.128.[44]
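A minimal Python sketch confirming the bit layout described above for an IPv4-mapped address: 80 zero bits, 16 one bits, then the 32-bit IPv4 address.

import ipaddress

mapped = ipaddress.IPv6Address("::ffff:192.0.2.128")
print(mapped.ipv4_mapped)                 # 192.0.2.128
print(int(mapped) >> 32 == 0xFFFF)        # True: 80 zero bits, then 16 one bits
print(int(mapped) & 0xFFFFFFFF == int(ipaddress.IPv4Address("192.0.2.128")))   # True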

Because of the significant internal differences between IPv4 and IPv6, some of the lower-level functionality available to programmers in the IPv6 stack does not work identically with IPv4-mapped addresses. Some common IPv6 stacks do not implement the IPv4-mapped address feature, either because the IPv6 and IPv4 stacks are separate implementations (e.g., Microsoft Windows 2000, XP, and Server 2003), or because of security concerns (OpenBSD).[45] On these operating systems, a program must open a separate socket for each IP protocol it uses. On some systems, e.g., the Linux kernel, NetBSD, and FreeBSD, this feature is controlled by the socket option IPV6_V6ONLY, as specified in RFC 3493.[46]
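A minimal Python sketch of a hybrid (dual-stack) listener using the IPV6_V6ONLY option mentioned above; the port number is arbitrary, and whether clearing the option is permitted depends on the operating system.

import socket

listener = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
# 0 = also accept IPv4 clients (they appear as IPv4-mapped IPv6 addresses);
# 1 = IPv6 only, so IPv4 would need a separate AF_INET socket.
listener.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
listener.bind(("::", 8080))               # arbitrary example port
listener.listen(5)

conn, addr = listener.accept()
print("client:", addr[0])                 # e.g. ::ffff:192.0.2.55 for an IPv4 client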
[edit]Tunneling
In order to reach the IPv6 Internet, an isolated
host or network must use the existing IPv4
infrastructure to carry IPv6 packets. This is
done using a technique known as tunneling,
which encapsulates IPv6 packets within IPv4,
in effect using IPv4 as a link layer for IPv6.
IP protocol 41 indicates IPv4 packets that encapsulate IPv6 datagrams. Some routers or network address translation devices may block protocol 41. To pass through these devices, UDP packets may be used to encapsulate IPv6 datagrams. Other encapsulation schemes, such as AYIYA or Generic Routing Encapsulation, are also popular.
Conversely, on IPv6-only internet links, when access to IPv4 network facilities is needed, tunneling of IPv4 over IPv6 occurs, using IPv6 as a link layer for IPv4.
[edit]Automatic tunneling
Automatic tunneling refers to a technique
where the routing infrastructure automatically
determines the tunnel endpoints. Some
automatic tunneling techniques are below.
6to4 is recommended by RFC 3056. It uses protocol 41 encapsulation.[47] Tunnel endpoints are determined by using a well-known IPv4 anycast address on the remote side, and embedding IPv4 address information within IPv6 addresses on the local side. 6to4 is widely deployed today.
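A short Python sketch of the address-embedding step: the site's public IPv4 address (an illustrative value here) is placed after the 2002::/16 prefix, giving the site its 6to4 /48 prefix.

import ipaddress

public_v4 = ipaddress.IPv4Address("192.0.2.4")        # illustrative public IPv4 address
site_prefix = ipaddress.IPv6Network("2002:%02x%02x:%02x%02x::/48" % tuple(public_v4.packed))
print(site_prefix)   # 2002:c000:204::/48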
Teredo is an automatic tunneling technique that uses UDP encapsulation and can allegedly cross multiple NAT boxes.[48] IPv6, including 6to4 and Teredo tunneling, is enabled by default in Windows Vista[49] and Windows 7. Most Unix systems implement only 6to4, but Teredo can be provided by third-party software such as Miredo.

ISATAP[50] treats the IPv4 network as a virtual IPv6 local link, with mappings from each IPv4 address to a link-local IPv6 address. Unlike 6to4 and Teredo, which are inter-site tunnelling mechanisms, ISATAP is an intra-site mechanism, meaning that it is designed to provide IPv6 connectivity between nodes within a single organisation.
[edit]Configured and automated tunneling
(6in4)
In configured tunneling, the tunnel endpoints
are explicitly configured, either by an
administrator manually or the operating
system's configuration mechanisms, or by an
automatic service known as a tunnel
broker;[51] this is also referred to as automated
tunneling. Configured tunneling is usually
more deterministic and easier to debug than
automatic tunneling, and is therefore
recommended for large, well-administered
networks. Automated tunneling provides a
compromise between the ease of use of
automatic tunneling and the deterministic
behaviour of configured tunneling.
Raw encapsulation of IPv6 packets
using IPv4 protocol number 41 is
recommended for configured tunneling; this is
sometimes known as 6in4 tunneling. As with
automatic tunneling, encapsulation within UDP may be used in order to cross NAT boxes and firewalls.
[edit]Proxying and translation for IPv6-only
hosts
Main article: IPv6 transition mechanisms
After the regional Internet registries have
exhausted their pools of available IPv4
addresses, it is likely that hosts newly added
to the Internet might only have IPv6
connectivity. For these clients to have
backward-compatible connectivity to existing
IPv4-only resources, suitable IPv6 transition
mechanisms must be deployed.
One form of address translation is the use of a
dual-stack application-layer proxy server, for
example a web proxy.
NAT-like techniques for application-agnostic translation at the lower layers in routers and gateways have been proposed. The NAT-PT standard was dropped due to a number of criticisms;[52] however, more recently the continued low adoption of IPv6 has prompted a new standardization effort under the name NAT64.
[edit]Application transition
RFC 4038, Application Aspects of IPv6
Transition, is an informational RFC that covers
the topic of IPv4 to IPv6 application transition

mechanisms. Other RFCs that pertain to IPv6


at the application level are:
RFC 3493, Basic Socket Interface
Extensions for IPv6
RFC 3542, Advanced Sockets Application
Program Interface (API) for IPv6
Similar to the OS-level WAN stack, applications can be:
IPv4 only
IPv6 only
dual stack, with separate IPv4-only and IPv6-only code paths
hybrid IPv4 and IPv6
[edit]IPv6 readiness

Compatibility with IPv6 networking is mainly a


software or firmware issue. However, much of
the older hardware that could in principle be
upgraded is likely to be replaced instead.
The American Registry for Internet
Numbers (ARIN) suggested that all Internet
servers be prepared to serve IPv6-only clients
by January 2012.[53] Sites will be accessible over NAT64 only if they do not use IPv4 literals.
[edit]Software
Most personal computers running
recent operating system versions are IPv6ready. Most popular applications with network
capabilities are ready, and most others could

be easily upgraded with help from the


developers. Java applications adhering to
Java 1.4 (February 2002) standards work with
IPv6.[54]
[edit]Hardware and embedded systems
Low-level equipment such as network
adapters and network switches may not be
affected by the change, since they transmit
link-layer frames without inspecting the
contents. However, networking devices that
obtain IP addresses or perform routing of IP
packets do need to understand IPv6.
Most equipment would be IPv6 capable with a
software or firmware update if the device has
sufficient storage and memory space for the
new IPv6 stack. However, manufacturers may
be reluctant to spend on software
development costs for hardware they have
already sold when they are poised for new
sales from IPv6-ready equipment.[citation needed]
In some cases, non-compliant equipment
needs to be replaced because the
manufacturer no longer exists or software
updates are not possible, for example,
because the network stack is implemented in
permanent read-only memory.
The CableLabs consortium published the 160
Mbit/s DOCSIS 3.0 IPv6-ready specification
for cable modems in August 2006. The widely used DOCSIS 2.0 does not support IPv6. The new 'DOCSIS 2.0 + IPv6' standard supports IPv6, which may on the cable modem side require only a firmware upgrade.[55][56] It is expected that only 60% of cable modem head-end servers and 40% of cable modems will be DOCSIS 3.0 by 2011.[57] However, most ISPs
that support DOCSIS 3.0 do not support IPv6
across their networks.
Other equipment which is typically not IPv6-ready ranges from Voice over Internet
Protocol devices to laboratory equipment and
printers.[citation needed]
[edit]Deployment
Main article: IPv6 deployment
The introduction of Classless Inter-Domain
Routing (CIDR) in the Internet routing and IP
address allocation methods in 1993 and the
extensive use of network address
translation (NAT) delayed the inevitable IPv4
address exhaustion, but the final phase of
exhaustion started on February 3,
2011.[15] However, despite a decade-long
development and implementation history as a
Standards Track protocol, general worldwide
deployment is still in its infancy. As of October
2011, about 3% of domain names and 12% of
the networks on the internet have IPv6
protocol support.[58]

IPv6 has been implemented on all major


operating systems in use in commercial,
business, and home consumer environments.
Since 2008, the domain name system can be
used in IPv6. IPv6 was first used in a major
world event during the 2008 Summer Olympic
Games,[59] the largest showcase of IPv6
technology since the inception of
IPv6.[60] Countries such as China, as well as the U.S. federal government, are also starting to require IPv6 capability in their equipment.
Finally, modern cellular telephone
specifications mandate IPv6 operation and
deprecate IPv4 as an optional capability.[61]

IPv4

Internet Protocol version 4 (IPv4) is the fourth revision in
the development of the Internet Protocol (IP) and the first
version of the protocol to be widely deployed. Together
with IPv6, it is at the core of standards-based
internetworking methods of the Internet. As of 2012 IPv4 is
still the most widely deployed Internet Layer protocol.

IPv4 is described in IETF publication RFC 791 (September


1981), replacing an earlier definition (RFC 760, January
1980).
IPv4 is a connectionless protocol for use on packet-switched Link Layer networks (e.g., Ethernet). It operates
on a best effort delivery model, in that it does not
guarantee delivery, nor does it assure proper sequencing
or avoidance of duplicate delivery. These aspects,
including data integrity, are addressed by an upper layer
transport protocol, such as the Transmission Control
Protocol (TCP).
IPv4 uses 32-bit (four-byte) addresses, which limits the address space to 4294967296 (2³²) addresses.
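A small Python sketch making the 32-bit limit concrete: the four dotted-decimal octets pack into a single 32-bit integer, so the space holds exactly 2³² addresses. The address used is a documentation example.

import ipaddress

print(2 ** 32)                               # 4294967296 possible addresses
addr = ipaddress.IPv4Address("192.0.2.128")  # documentation example address
print(int(addr))                             # 3221226112, the packed 32-bit value
print(ipaddress.IPv4Address(3221226112))     # 192.0.2.128 again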

Clients and Servers


In general, all of the machines on the Internet can be
categorized as two types: servers and clients. Those
machines that provide services (like Web servers or FTP
servers) to other machines are servers. And the machines
that are used to connect to those services are clients.
When you connect to Yahoo! at www.yahoo.com to read a
page, Yahoo! is providing a machine (probably a cluster of
very large machines), for use on the Internet, to service
your request. Yahoo! is providing a server. Your machine,
on the other hand, is probably providing no services to
anyone else on the Internet. Therefore, it is a user
machine, also known as a client. It is possible and

common for a machine to be both a server and a client,


but for our purposes here you can think of most machines
as one or the other.
A server machine may provide one or more services on
the Internet. For example, a server machine might have
software running on it that allows it to act as a Web server,
an e-mail server and an FTP server. Clients that come to a
server machine do so with a specific intent, so clients
direct their requests to a specific software server running
on the overall server machine. For example, if you are
running a Web browser on your machine, it will most likely
want to talk to the Web server on the server machine.
Your Telnet application will want to talk to the Telnet
server, your e-mail application will talk to the e-mail server,
and so on...

client/server
Client/server describes the relationship between two computer
programs in which one program, the client, makes a service
request from another program, the server, which fulfills the
request. Although the client/server idea can be used by
programs within a single computer, it is a more important idea
in a network. In a network, the client/server model provides a
convenient way to interconnect programs that are distributed
efficiently across different locations. Computer transactions

using the client/server model are very common. For example,


to check your bank account from your computer, a client
program in your computer forwards your request to a server
program at the bank. That program may in turn forward the
request to its own client program that sends a request to
a database server at another bank computer to retrieve your
account balance. The balance is returned back to the bank data
client, which in turn serves it back to the client in your personal
computer, which displays the information for you.
The client/server model has become one of the central ideas of
network computing. Most business applications being written
today use the client/server model. So does the Internet's main
program, TCP/IP. In marketing, the term has been used to
distinguish distributed computing by smaller dispersed
computers from the "monolithic" centralized computing
of mainframe computers. But this distinction has largely
disappeared as mainframes and their applications have also
turned to the client/server model and become part of network
computing.
In the usual client/server model, one server, sometimes called
a daemon, is activated and awaits client requests. Typically,
multiple client programs share the services of a common server
program. Both client programs and server programs are often
part of a larger program or application. Relative to the Internet,
your Web browser is a client program that requests services
(the sending of Web pages or files) from a Web server (which
technically is called a Hypertext Transport Protocol
or HTTP server) in another computer somewhere on the

Internet. Similarly, your computer with TCP/IP installed allows


you to make client requests for files from File Transfer Protocol
(FTP) servers in other computers on the Internet.
Other program relationship models included master/slave, with one program being in charge of all other programs, and peer-to-peer, with either of two programs able to initiate a transaction.
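A minimal Python sketch of the client/server model described above, with a TCP server that waits for a request and a client that sends one; running both on the local machine and the chosen port are simplifications for the example.

import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5050   # example values; a real server lives on another machine

def run_server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)                          # the server waits for client requests
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024)          # read the client's request
            conn.sendall(b"echo: " + request)  # fulfil it with a response

threading.Thread(target=run_server, daemon=True).start()
time.sleep(0.5)                                # give the server a moment to start listening

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
    client.connect((HOST, PORT))               # the client initiates the request
    client.sendall(b"hello server")
    print(client.recv(1024))                   # b'echo: hello server'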
Getting started with client/servers
To explore how client/servers are used in the enterprise, here
are some additional resources:
How server virtualization improves efficiency in a client-server
model: Here an experienced network expert explains how
virtualization can optimize productivity in this expert response.
Slow response times from client/server PCs running IFS applications: How do you find out the culprit of network delays when you see little bandwidth being used? Read this Q&A to see what an experienced network administration expert has to say.

The Difference Between 3G and 4G, 2nd Edition


Answer:
With a "tsunami" or flood of information about the arrival of 4G internet technology, many customers and users are wondering what the difference really is between the new 4G networks and the 3G networks that have already won a place in the hearts of die-hard wireless technology users. If we pay attention, advertisements about higher and faster speeds are often displayed without giving users an accurate explanation. Internet providers such as Maxis, Celcom and others have clearly taken a wait-and-see approach to adopting 4G technology and appear less than convinced that users need to explore it. Mobile phone manufacturers, meanwhile, frequently compare themselves against their competitors using a technology that is not yet "mature" and still vaguely defined. The simplest way to explain the difference between 3G and 4G is to look briefly at the main purpose for which each technology was created.

Third-generation (3G) wireless internet was designed on top of technology originally created for making telephone calls. Because phone calls require far less data to be transferred than browsing the internet, 3G internet technology was only intended to handle data transfers at moderate speeds, not the large volumes of data needed, for example, to watch streaming video on the internet. This is why 3G phones on the market sometimes struggle when users try to access streaming video or data-heavy web pages. To meet the demands of today's wireless internet usage and of heavy internet users, 4G wireless internet was created, designed for communication that involves transferring far more data than phone calls. Handling phone calls is almost trivial with 4G technology, because it was designed from the start for internet communication, video streaming, watching television and every task that previously could only be done if the user had a DSL-speed connection; all of this can now be done on a phone over a 4G network.

One of the most noticeable differences between 3G and 4G networks is the download speed on offer. Download speeds on a 3G network are generally around 0.6 to 1.4 megabits per second, with a maximum of about 3.1 megabits per second at certain times. A 4G network, on the other hand, offers download speeds of between 3 and 6 megabits per second, with a maximum of about 10 megabits per second, roughly 3 to 5 times the maximum speed achievable on a third-generation (3G) network.
Another difference worth considering is the upload speed of the two networks. For example, to upload a video or a file from a computer to a website, a 3G network offers a speed of only about 0.5 megabits per second, compared with 1 megabit per second on a 4G network. In other words, a 4G network can upload at roughly twice the speed of 3G.

As another example of 4G network speed, downloading a television programme takes about 19 minutes on a 3G network, but less than 5 minutes on a 4G network. Likewise, making a dinner reservation takes only half a second on a 4G network versus 5 seconds on 3G, and there are many more examples of how fast 4G technology is.
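A short calculation sketch of the download-time comparison above, using representative rates quoted earlier in the post (about 1 Mbps for 3G and 5 Mbps for 4G) and an assumed 200 MB television episode; the file size is an assumption, not a figure from the post.

size_megabits = 200 * 8                       # assumed 200 MB episode, in megabits

for name, rate_mbps in [("3G", 1.0), ("4G", 5.0)]:
    minutes = size_megabits / rate_mbps / 60
    print(f"{name}: about {minutes:.0f} minutes")
# 3G: about 27 minutes, 4G: about 5 minutes - the same order of magnitude as above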
All of this means that 4G wireless internet lets users access the internet on their phones at the same speed they would get at home over DSL. In other words, 4G on a phone is approaching the speed of an ordinary computer connection. Perhaps in the future we will no longer need to rely on a computer to browse the internet quickly; a phone equipped with 4G technology will be enough. For those who want to know more about this technology, refer to the tables below.

Generation | Dates | Cool new features
1G | 1970s to 1980s | Mobile phones introduced, mainly for voice only.
2G | 1990s to 2000 | Improved performance by allowing multiple users on a single channel. More and more mobile phones used for data transfer as well as voice.
2.5G | 2001-2004 | The internet shifts its focus toward data delivery. Enhanced multimedia and video streaming become a reality. Phones can browse the web, but in a limited way.
3G | 2004-2005 | The ability to use enhanced multimedia and video streaming keeps increasing. Standards are created to allow universal access and portability across different devices (phones, PDAs, etc.).
4G | 2006+ | Speeds reach up to 40 Mbps. The capacity for enhanced multimedia and video streaming keeps increasing. Devices are equipped for worldwide roaming.
Generation | Technology
1G | Analog: CMRT, AMPS
2G | Digital circuit switched: GSM, CDMA, D-AMPS
2.5G | Digital packet switched: GPRS, EDGE
3G | Digital packet switched: UMTS (W-CDMA), CDMA2000
4G | Digital broadband: 802.11
Generation | Data rate
1G | 9.6 Kbps to 14.4 Kbps
2G | D-AMPS 9.6 to 14.4 Kbps; GSM 9.6 to 14.4 Kbps; IS-95A 9.6 to 14.4 Kbps; IS-95B 115 Kbps
2.5G | 56 Kbps to 144 Kbps
3G | UMTS 2+ Mbps, up to 384 Kbps; W-CDMA 384 Kbps (wide area access), 2 Mbps (local area access); CDMA2000 614 Kbps
4G | 20-40 Mbps

The differences between social media and social networking are just about as vast as night and day. There are some key differences, and knowing what they are can help you gain a better understanding of how to leverage them for your brand and business.
1. By Any Definition
Social media is a way to transmit, or share information
with a broad audience. Everyone has the opportunity to
create and distribute. All you really need is an internet
connection and you're off to the races.
On the other hand, social networking is an act of
engagement. Groups of people with common interests, or
like-minds, associate together on social networking sites
and build relationships through community.
2. Communication Style
Social media is more akin to a communication channel. It's
a format that delivers a message. Like television, radio or
newspaper, social media isn't a location that you visit.

Social media is simply a system that disseminates information to others.
With social networking, communication is two-way.
Depending on the topic, subject matter or atmosphere,
people congregate to join others with similar experiences
and backgrounds. Conversations are at the core of social
networking and through them relationships are developed.
3. Return on Investment
It can be difficult to obtain precise numbers for determining
the ROI from social media. How do you put a numeric
value on the buzz and excitement of online conversations
about your brand, product or service? This doesn't mean
that ROI is null, it just means that the tactics used to
measure are different. For instance, influence, or the
depth of conversation and what the conversations are
about, can be used to gauge ROI.
Social networking's ROI is a bit more obvious. If the
overall traffic to your website is on the rise and you're
diligently increasing your social networking base, you
probably could attribute the rise in online visitors to your
social efforts.
4. Timely Responses
Social media is hard work and it takes time. You can't
automate individual conversations and unless you're a
well-known and established brand, building a following
doesn't happen overnight. Social media is definitely a
marathon and not a sprint.

Because social networking is direct communication between you and the people that you choose to connect with, your conversations are richer, more purposeful and more personal. Your network grows exponentially as you meet and get introduced to others.
5. Asking or Telling
A big no-no with social media is skewing or manipulating comments, likes, diggs, stumbles or other data for your own benefit (personal or business). Asking friends, family, co-workers or anyone else to cast a vote just to cast it doesn't do anyone much good, and it can quickly become a PR nightmare if word leaks out about dishonest practices.
With social networking, you can tell your peers about your
new business or blog and discuss how to make it a
success. The conversations that you create can convert
many people into loyal fans, so it's worth investing the
time.
Social media and social networking do have some
overlap, but they really aren't the same thing. Knowing that
they're two separate marketing concepts can make a
difference in how you position your business going
forward.

Institute of Electrical and Electronics Engineers


From Wikipedia, the free encyclopedia
"IEEE" redirects here. It is not to be confused with
the Institution of Electrical Engineers (IEE).
The Institute of Electrical and Electronics
Engineers (IEEE, read I-Triple-E) is a nonprofit professional association headquartered in New York City that is dedicated to advancing technological innovation
and excellence. It has more than 400,000 members in
more than 160 countries, about 51.4% of whom reside in
the United States.[2][3]
[edit]History

The IEEE corporate office is on the 17th floor of 3 Park


Avenue in New York City
The IEEE is incorporated under the Not-for-Profit
Corporation Law of the state of New York in the United

States.[4] It was formed in 1963 by the merger of


the Institute of Radio Engineers (IRE, founded 1912) and
the American Institute of Electrical Engineers (AIEE,
founded 1884).
The major interests of the AIEE were wire communications
(telegraphy and telephony) and light and power systems.
The IRE concerned mostly radio engineering, and was
formed from two smaller organizations, the Society of
Wireless and Telegraph Engineers and the Wireless
Institute. With the rise of electronics in the 1930s,
electronics engineers usually became members of the
IRE, but the applications of electron tube technology
became so extensive that the technical boundaries
differentiating the IRE and the AIEE became difficult to
distinguish. After World War II, the two organizations
became increasingly competitive, and in 1961, the
leadership of both the IRE and the AIEE resolved to
consolidate the two organizations. The two organizations
formally merged as the IEEE on January 1, 1963.
Notable Presidents of IEEE and its founding organizations include Elihu Thomson (AIEE, 1889–1890), Alexander Graham Bell (AIEE, 1891–1892), Charles Proteus Steinmetz (AIEE, 1901–1902), Lee De Forest (IRE, 1930), Frederick E. Terman (IRE, 1941), William R. Hewlett (IRE, 1954), Ernst Weber (IRE, 1959; IEEE, 1963), and Ivan Getting (IEEE, 1978).
IEEE's Constitution defines the purposes of the
organization as "scientific and educational, directed toward
the advancement of the theory and practice
of Electrical, Electronics, Communications and Computer

Engineering, as well as Computer Science, the allied branches of engineering and the related arts and sciences."[1] In pursuing these goals, the
IEEE serves as a major publisher of scientific journals and
organizer of conferences, workshops, and symposia
(many of which have associated published proceedings). It
is also a leading standards development organization for
the development of industrial standards (having developed
over 900 active industry technical standards) in a broad
range of disciplines, including electric
power and energy,biomedical technology and
healthcare, information technology, information assurance,
telecommunications, consumer electronics, transportation,
aerospace, and nanotechnology. IEEE develops and
participates in educational activities such
as accreditation of electrical engineering programs in
institutes of higher learning. The IEEE logo is a diamond-shaped design which illustrates the right-hand grip rule embedded in Benjamin Franklin's kite, and it was created at the time of the 1963 merger.[5]
IEEE has a dual complementary regional and technical
structure with organizational units based on geography
(e.g., the IEEE Philadelphia Section, IEEE South Africa
Section [1]) and technical focus (e.g., the IEEE Computer
Society). It manages a separate organizational unit (IEEE-USA) which recommends policies and implements
programs specifically intended to benefit the members, the
profession and the public in the United States.

The IEEE includes 38 technical Societies, organized


around specialized technical fields, with more than 300
local organizations that hold regular meetings.
The IEEE Standards Association is in charge of the
standardization activities of the IEEE.
[edit]Publications
Main article: List of IEEE publications
IEEE produces 30% of the world's literature in the
electrical and electronics engineering and computer
science fields, publishing well over 100 peer-reviewed
journals.[6]
The published content in these journals as well as the
content from several hundred annual conferences
sponsored by the IEEE are available in the IEEE online
digital library for subscription-based access and individual
publication purchases.[7]
In addition to journals and conference proceedings, the
IEEE also publishes tutorials and the standards that are
produced by its standardization committees.
[edit]Educational activities

An IEEE office at the District University of Bogotá, Colombia.
The IEEE provides learning opportunities within the
engineering sciences, research, and technology. The goal
of the IEEE education programs is to ensure the growth of
skill and knowledge in the electricity-related technical
professions and to foster individual commitment to
continuing education among IEEE members, the
engineering and scientific communities, and the general
public.
IEEE offers educational opportunities such as IEEE
eLearning Library,[8] the Education Partners
Program,[9] Standards in Education[10] and Continuing
Education Units (CEUs).[11]
IEEE eLearning Library is a collection of online
educational courses designed for self-paced learning.
Education Partners, exclusive for IEEE members, offers
on-line degree programs, certifications and courses at a
10% discount. The Standards in Education website
explains what standards are and the importance of
developing and using them. The site includes tutorial
modules and case illustrations to introduce the history of
standards, the basic terminology, their applications and
impact on products, as well as news related to standards,
book reviews and links to other sites that contain
information on standards. Currently, twenty-nine states in
the United States require Professional Development Hours
(PDH) to maintain a Professional Engineering license,
encouraging engineers to seek Continuing Education
Units (CEUs) for their participation in continuing education

programs. CEUs readily translate into Professional


Development Hours (PDHs), with 1 CEU being equivalent
to 10 PDHs. Countries outside the United States, such as
South Africa, similarly require continuing professional
development (CPD) credits, and it is anticipated that IEEE
Expert Now courses will feature in the CPD listing for
South Africa.
IEEE also sponsors a website[12] designed to help young
people understand better what engineering means, and
how an engineering career can be made part of their
future. Students of age 8–18, parents, and teachers can
explore the site to prepare for an engineering career, ask
experts engineering-related questions, play interactive
games, explore curriculum links, and review lesson plans.
This website also allows students to search for accredited
engineering degree programs in Canada and the United
States; visitors are able to search by
state/province/territory, country, degree field, tuition
ranges, room and board ranges, size of student body, and
location (rural, suburban, or urban).
[edit]Standards and development process
Main article: IEEE Standards Association
IEEE is one of the leading standards-making organizations
in the world. IEEE performs its standards making and
maintaining functions through the IEEE Standards
Association (IEEE-SA). IEEE standards affect a wide
range of industries including: power and energy,
biomedical and healthcare, Information Technology (IT),
telecommunications, transportation, nanotechnology,

information assurance, and many more. In 2005, IEEE


had close to 900 active standards, with 500 standards
under development. One of the more notable IEEE
standards is the IEEE 802 LAN/MAN group of standards
which includes the IEEE 802.3 Ethernet standard and
the IEEE 802.11 Wireless Networking standard.
[edit]Membership and member grades
Most IEEE members are electrical and electronics
engineers, but the organization's wide scope of interests
has attracted people in other disciplines as well
(e.g., computer science, mechanical and civil engineering)
as well as biologists, physicists, and mathematicians.
An individual can join the IEEE as a student member,
professional member, or associate member. In order to
qualify for membership, the individual must fulfil certain
academic or professional criteria and abide by the code of
ethics and bylaws of the organization. There are several
categories and levels of IEEE membership and affiliation:

Student Members: Student membership is available for


a reduced fee to those who are enrolled in an
accredited institution of higher education as
undergraduate or graduate students in technology or
engineering.
Members: Ordinary or
professional Membership requires that the individual
have graduated from a technology or engineering
program of an appropriately accredited institution of
higher education or have demonstrated professional
competence in technology or engineering through at

least six years of professional work experience. An


associate membership is available to individuals whose
area of expertise falls outside the scope of the IEEE or
who do not, at the time of enrollment, meet all the
requirements for full membership. Students and
Associates have all the privileges of members, except
the right to vote and hold certain offices.
Society Affiliates: Some IEEE Societies also allow a
person who is not an IEEE member to become
a Society Affiliate of a particular Society within the IEEE,
which allows a limited form of participation in the work of
a particular IEEE Society.
Senior Members: Upon meeting certain requirements,
a professional member can apply for Senior
Membership, which is the highest level of recognition
that a professional member can directly apply for.
Applicants for Senior Member must have at least three
letters of recommendation from Senior, Fellow, or
Honorary members and fulfill other rigorous
requirements of education, achievement, remarkable
contribution, and experience in the field. The Senior
Members are a selected group, and certain IEEE officer
positions are available only to Senior (and Fellow)
Members. Senior Membership is also one of the
requirements for those who are nominated and elevated
to the grade IEEE Fellow, a distinctive honor.
Fellow Members: The Fellow grade of membership is
the highest level of membership, and cannot be applied
for directly by the member; instead, the candidate must
be nominated by others. This grade of membership is
conferred by the IEEE Board of Directors in recognition

of a high level of demonstrated extraordinary


accomplishment.
Honorary Members: Individuals who are not IEEE
members but have demonstrated exceptional
contributions, such as being a recipient of an IEEE
Medal of Honor, may receive Honorary
Membership from the IEEE Board of Directors.
Life Members and Life Fellows: Members who have
reached the age of 65 and whose number of years of
membership plus their age in years adds up to at least
100 are recognized as Life Members and, in the case
of Fellow members, as Life Fellows.
[edit]Awards
Through its awards program, the IEEE recognizes
contributions that advance the fields of interest to the
IEEE. For nearly a century, the IEEE Awards Program has
paid tribute to technical professionals whose exceptional
achievements and outstanding contributions have made a
lasting impact on technology, society and the engineering
profession.
Funds for the awards program, other than those provided
by corporate sponsors for some awards, are administered
by the IEEE Foundation.
[edit]Medals

IEEE Medal of Honor


IEEE Edison Medal
IEEE Founders Medal (for leadership, planning, and
administration)
IEEE James H. Mulligan, Jr. Education Medal

IEEE Alexander Graham Bell Medal (for


communications engineering)
IEEE Simon Ramo Medal (for systems engineering)
IEEE Medal for Engineering Excellence
IEEE Medal for Environmental and Safety Technologies
IEEE Medal in Power Engineering
IEEE Richard W. Hamming Medal (for information
technology)
IEEE Heinrich Hertz Medal (for electromagnetics)
IEEE John von Neumann Medal (for computer-related
technology)
IEEE Jack S. Kilby Signal Processing Medal
IEEE Dennis J. Picard Medal for Radar Technologies
and Applications
IEEE Robert N. Noyce Medal (for microelectronics)
IEEE Medal for Innovations in Healthcare Technology
IEEE/RSE Wolfson James Clerk Maxwell Award
IEEE Centennial Medal
[edit]Technical field awards

IEEE Biomedical Engineering Award


IEEE Cledo Brunetti Award (for nanotechnology and
miniaturization)
IEEE Components, Packaging, and Manufacturing
Technologies Award
IEEE Control Systems Award
IEEE Electromagnetics Award
IEEE James L. Flanagan Speech and Audio Processing
Award
IEEE Andrew S. Grove Award (for solid-state devices)

IEEE Herman Halperin Electric Transmission and


Distribution Award
IEEE Masaru Ibuka Consumer Electronics Award
IEEE Internet Award
IEEE Reynold B. Johnson Data Storage Device
Technology Award
IEEE Reynold B. Johnson Information Storage Systems
Award
IEEE Richard Harold Kaufmann Award (for industrial
systems engineering)
IEEE Joseph F. Keithley Award in Instrumentation and
Measurement
IEEE Gustav Robert Kirchhoff Award (for electronic
circuits and systems)
IEEE Leon K. Kirchmayer Graduate Teaching Award
IEEE Koji Kobayashi Computers and Communications
Award
IEEE William E. Newell Power Electronics Award
IEEE Daniel E. Noble Award (for emerging
technologies)
IEEE Donald O. Pederson Award in Solid-State Circuits
IEEE Frederik Philips Award (for management
of research and development)
IEEE Photonics Award
IEEE Emanuel R. Piore Award (for information
processing systems in computer science)
IEEE Judith A. Resnik Award (for space engineering)
IEEE Robotics and Automation Award

IEEE Frank Rosenblatt Award (for biologically and


linguistically motivated computational paradigms such
as neural networks)
IEEE David Sarnoff Award (for electronics)
IEEE Charles Proteus Steinmetz
Award (for standardization)
IEEE Marie Sklodowska-Curie Award (for nuclear and
plasma engineering)
IEEE Eric E. Sumner Award (for communications
technology)
IEEE Undergraduate Teaching Award
IEEE Nikola Tesla Award (for power technology)
IEEE Kiyo Tomiyasu Award (for technologies holding
the promise of innovative applications)
[edit]Recognitions

IEEE Haraden Pratt Award


IEEE Richard M. Emberson Award
IEEE Corporate Innovation Recognition
IEEE Ernst Weber Engineering Leadership Recognition
IEEE Honorary Membership
[edit]Prize paper awards

IEEE Donald G. Fink Prize Paper Award


IEEE W.R.G. Baker Award
[edit]Scholarships

IEEE Life Members Graduate Study Fellowship in


Electrical Engineering was established by the IEEE in
2000. The fellowship is awarded annually to a first-year, full-time graduate student pursuing a master's degree in the area of electrical engineering at an engineering school/program of recognized standing worldwide.[13]
IEEE Charles LeGeyt Fortescue Graduate
Scholarship was established by the IRE in 1939 to
commemorate Charles Legeyt Fortescue's contributions
to electrical engineering. The scholarship is awarded for one year of full-time graduate work toward a master's degree in electrical engineering at an engineering school of recognized standing in the United States.[14]
[edit]Societies

IEEE is supported by 38 societies, each one focused on a


certain knowledge area. They provide specialized
publications, conferences, business networking and
sometimes other services.[15][16]
IEEE Aerospace and Electronic Systems Society
IEEE Antennas & Propagation Society
IEEE Broadcast Technology Society
IEEE Circuits and Systems Society
IEEE Communications Society
IEEE Components, Packaging & Manufacturing Technology Society
IEEE Computational Intelligence Society
IEEE Computer Society
IEEE Consumer Electronics Society
IEEE Control Systems Society
IEEE Dielectrics & Electrical Insulation Society
IEEE Education Society
IEEE Electromagnetic Compatibility Society
IEEE Electron Devices Society
IEEE Engineering in Medicine and Biology Society
IEEE Geoscience and Remote Sensing Society
IEEE Industrial Electronics Society
IEEE Industry Applications Society
IEEE Information Theory Society
IEEE Instrumentation and Measurement Society
IEEE Intelligent Transportation Systems Society
IEEE Magnetics Society
IEEE Microwave Theory and Techniques Society
IEEE Nuclear and Plasma Sciences Society
IEEE Oceanic Engineering Society
IEEE Photonics Society
IEEE Power Electronics Society
IEEE Power & Energy Society
IEEE Product Safety Engineering Society
IEEE Professional Communication Society
IEEE Reliability Society
IEEE Robotics and Automation Society
IEEE Signal Processing Society
IEEE Society on Social Implications of Technology
IEEE Solid-State Circuits Society
IEEE Systems, Man, and Cybernetics Society
IEEE Ultrasonics, Ferroelectrics, and Frequency Control Society
IEEE Vehicular Technology Society
[edit]Technical councils

IEEE technical councils are collaborations of several


IEEE societies on a broader knowledge area. There are
currently seven technical councils:[15][17]
IEEE Biometrics Council
IEEE Council on Electronic Design Automation
IEEE Nanotechnology Council
IEEE Sensors Council
IEEE Council on Superconductivity
IEEE Systems Council
IEEE Technology Management Council
[edit]Technical committees

To allow a quick response to new innovations, IEEE can


also organize technical committees on top of
their societies and technical councils. There are currently
two such technical committees:[15]
IEEE Committee on Earth Observation (ICEO)
IEEE Technical Committee on RFID (CRFID)
[edit]Organizational units

Technical Activities Board (TAB)


[edit]IEEE Foundation

The IEEE Foundation is a charitable foundation


established in 1973 to support and promote technology
education, innovation and excellence.[18] It is incorporated
separately from the IEEE, although it has a close
relationship to it. Members of the Board of Directors of the
foundation are required to be active members of IEEE,
and one third of them must be current or former members
of the IEEE Board of Directors.
Initially, the IEEE Foundation's role was to accept and
administer donations for the IEEE Awards program, but
donations increased beyond what was necessary for this
purpose, and the scope was broadened. In addition to
soliciting and administering unrestricted funds, the
foundation also administers donor-designated funds
supporting particular educational, humanitarian, historical
preservation, and peer recognition programs of the
IEEE.[18] As of the end of 2009, the foundation's total
assets were $27 million, split equally between unrestricted
and donor-designated funds.[19]
[edit]Copyright policy
The IEEE requires authors to transfer their copyright for
works they submit for publication.[20][21]
The IEEE generally does not create its own research. It is
a professional organization that coordinates journal peer-review activities and holds subject-specific conferences in
which authors present their research. The IEEE then
publishes the authors' papers in journals and
other proceedings, and authors are required to give up
their exclusive rights to their works.[20]

Section 6.3.1 IEEE Copyright Policies subsections 7 and


8 state that "all authors shall transfer to the IEEE in
writing any copyright they hold for their individual papers",
but that the IEEE will grant the authors permission to
make copies and use the papers they originally authored,
so long as such use is permitted by the Board of Directors.
The guidelines for what the Board considers a "permitted"
use are not entirely clear, although posting a copy on a
personally controlled website is allowed. The author is
also not allowed to change the work absent explicit
approval from the organization. The IEEE justifies this
practice in the first paragraph of that section, by stating
that they will "serve and protect the interests of its authors
and their employers".[20][21]
The IEEE places research papers and other publications
such as IEEE standards behind a "pay wall"[20], although
the IEEE explicitly allows authors to make a copy of the
papers that they authored freely available on their own
website. As of September 2011, the IEEE also provides
authors for most new journal papers with the option to pay
to allow free download of their papers by the public from
the IEEE publication website.[22]
IEEE publications have received a Green[23] rating from the SHERPA/RoMEO guide[24] for affirming "authors
and/or their companies shall have the right to post their
IEEE-copyrighted material on their own servers without
permission" (IEEE Publication Policy 8.1.9.D[25]).
This open access policy effectively allows authors, at their
choice, to make their article openly available. Roughly 1/3
of the IEEE authors take this route[citation needed].

Some other professional associations do not impose the same requirements on authors. For example, the USENIX association[20] requires only that the author give up the right to publish the paper elsewhere for 12 months (in addition to allowing authors to post copies of the paper on their own website during that time). The organization operates successfully even though all of its publications are freely available online.[20]
Institute of Electrical and Electronics Engineers (IEEE)
The IEEE Standards Association (IEEE-SA) is a leading developer of global industry standards across a broad range of industries, including Information Technology, Power and Energy, Telecommunications, Transportation, Medicine and Health, and standards for new and emerging technologies such as Nanotechnology. In addition, to advance the theory and practice of electrical, electronics and computer engineering and computer science, the IEEE sponsors conferences, symposia and meetings, and publishes important technical papers and standards. The IEEE developed the prominent 802 standards for wired and wireless Local and Metropolitan Area Networks. The 802 standards are available for free download from the Get IEEE 802 website.
The IEEE has 42 technical societies and many technical councils whose groups actively cooperate to develop standards. The following is a list of some of the major groups that have publicly viewable websites.
IEEE Communications Society
It is a community made up of a diverse range of industry professionals with a common interest in advancing all communications technologies. The society sponsors publications, conferences, educational programs, local activities, technical committees, and standards.
IEEE Computer Society
Among its many technical activities, it has a very active standards development organization, led by its Standards Activities Board, which provides an organizational framework and a conducive environment in which to develop widely accepted, sound, timely, and technically excellent standards that advance the theory and practice of computer and information processing science and technology.
IEEE EMC Society
The IEEE Electromagnetic Compatibility (EMC) Society is the leading international developer of fundamental test and measurement standards for EMC.
IEEE Power Engineering Society
Like its sister societies, it sponsors publications, conferences, educational activities, technical committees and standards. Its standards focus on the generation, transmission and distribution of electric power.
IEEE 802 Standards Committee
It develops Local Area Network and Metropolitan Area Network standards. The most widely used standards are for the Ethernet family, Token Ring, Wireless LAN, Bridging and Virtual Bridged LANs. An individual Working Group provides the focus for each area.
The IEEE also offers a variety of materials on the standards development process. The following are some useful Web resources:
IEEE Standards Development Online (provides a complete step-by-step guide to developing an IEEE Standard)
IEEE-SA Training Presentations

4G
From Wikipedia, the free encyclopedia
This article is about the mobile telecommunications
standard. For other uses, see 4G (disambiguation).
In telecommunications, 4G is the fourth generation of cell phone mobile communications standards. It is a successor to the third generation (3G) standards. A 4G system provides mobile ultra-broadband Internet access, for example to laptops with USB wireless modems, to smartphones, and to other mobile devices. Conceivable applications include improved mobile web access, IP telephony, gaming services, high-definition mobile TV, video conferencing and 3D television.
Two 4G candidate systems are commercially deployed: the Mobile WiMAX standard (first in South Korea in 2006) and the first-release Long Term Evolution (LTE) standard (in Scandinavia since 2009). It has, however, been debated whether these first-release versions should be considered 4G or not; see the technical definition below. In the U.S., Sprint Nextel has deployed Mobile WiMAX networks since 2008, and MetroPCS was the first operator to offer LTE service in 2010. USB wireless modems have been available since the start, while WiMAX smartphones have been available since 2010 and LTE smartphones since 2011. Equipment made for different continents is not always compatible because of different frequency bands. Mobile WiMAX and LTE smartphones are currently (April 2012) not available for the European market.
[edit]Technical definition
In March 2008, the International Telecommunication Union Radiocommunication Sector (ITU-R) specified a set of requirements for 4G standards, named the International Mobile Telecommunications Advanced (IMT-Advanced) specification, setting peak speed requirements for 4G service at 100 megabits per second (Mbit/s) for high mobility communication (such as from trains and cars) and 1 gigabit per second (Gbit/s) for low mobility communication (such as pedestrians and stationary users).[1]
Since the above-mentioned first-release versions of Mobile WiMAX and LTE support much less than 1 Gbit/s peak bit rate, they are not fully IMT-Advanced compliant, but are
often branded 4G by service providers. On December 6,
2010, ITU-R recognized that these two technologies, as
well as other beyond-3G technologies that do not fulfill the
IMT-Advanced requirements, could nevertheless be
considered "4G", provided they represent forerunners to
IMT-Advanced compliant versions and "a substantial level
of improvement in performance and capabilities with
respect to the initial third generation systems now
deployed".[2]

Mobile WiMAX Release 2 (also known as WirelessMAN-Advanced or IEEE 802.16m) and LTE Advanced (LTE-A) are IMT-Advanced compliant, backwards compatible versions of the above two systems, standardized during spring 2011,[citation needed] and promising peak bit rates on the order of 1 Gbit/s. Services are expected in 2013.[3]
As opposed to earlier generations, a 4G system does not support traditional circuit-switched telephony service, but all-Internet Protocol (IP) based communication such as IP telephony. As seen below, the spread spectrum radio technology used in 3G systems is abandoned in all 4G candidate systems and replaced by OFDMA multi-carrier transmission and other frequency-domain equalization (FDE) schemes, making it possible to transfer very high bit rates despite extensive multi-path radio propagation (echoes). The peak bit rate is further improved by smart antenna arrays for multiple-input multiple-output (MIMO) communications.
Finally, the term "generation" used to name successive
evolutions of radio networks in general is arbitrary. There
are several interpretations of it, and no official definition
despite the large consensus behind ITU-R's labels. As you
can read along this article, a comment is made about the
legitimate use of the term almost each time it is used.
From the point of view of ITU-R, 4G is equivalent to IMTAdvanced which has specific performance requirements
as explained below. But from the point of view of
operators, a generation of network refers to the
deployment of a new non-backward compatible
technology. This usually corresponds to a huge

investment with its own depreciation period, marketing


strategy (if any), and deployment phases. It can even be
different among operators. From the end user point of
view, only performance makes sense. We expect that the
next generation of network performs better than the
previous one which is not that simple to state. Indeed
while a new generation of network arrives, the previous
one keeps evolving to a point where it outperforms the first
version of the new generation. In many countries, GSM,
UMTS and LTE networks still coexist. It is thus much less
ambiguous to use the name of the technology/standard,
possibly followed by its version number, than a subjective
arbitrary generation number which is destined to be
challenged endlessly.
[edit]Background
The nomenclature of the generations generally refers to a change in the fundamental nature of the service, non-backwards-compatible transmission technology, higher peak bit rates, new frequency bands, wider channel frequency bandwidth in Hertz, and higher capacity for many simultaneous data transfers (higher system spectral efficiency in bit/second/Hertz/site).
New mobile generations have appeared about every ten years since the first move from 1981 analog (1G) to digital (2G) transmission in 1992. This was followed, in 2001, by 3G multi-media support, spread spectrum transmission and at least 200 kbit/s peak bit rate, expected in 2011/2012 to be followed by "real" 4G, which refers to all-Internet Protocol (IP) packet-switched networks giving mobile ultra-broadband (gigabit speed) access.

While the ITU has adopted recommendations for technologies that would be used for future global communications, it does not actually perform the standardization or development work itself, instead relying on the work of other standards bodies such as IEEE, the WiMAX Forum and 3GPP.
In the mid-1990s, the ITU-R standardization organization released the IMT-2000 requirements as a framework for what standards should be considered 3G systems, requiring a 200 kbit/s peak bit rate. In 2008, ITU-R specified the IMT-Advanced (International Mobile Telecommunications Advanced) requirements for 4G systems.
The fastest 3G-based standard in the UMTS family is the HSPA+ standard, which was commercially available in 2009 and offers 28 Mbit/s downstream (22 Mbit/s upstream) without MIMO, i.e. only with one antenna, and in 2011 was accelerated up to a 42 Mbit/s peak bit rate downstream using either DC-HSPA+ (simultaneous use of two 5 MHz UMTS carriers)[4] or 2x2 MIMO. In theory 672 Mbit/s is possible, but this has not been deployed. The fastest 3G-based standard in the CDMA2000 family is EV-DO Rev. B, which was available in 2010 and offers 15.67 Mbit/s downstream.[citation needed]
[edit]IMT-Advanced Requirements
This article uses 4G to refer to IMT-Advanced (International Mobile Telecommunications Advanced), as defined by ITU-R. An IMT-Advanced cellular system must fulfill the following requirements:[5]
Based on an all-IP packet switched network.
Peak data rates of up to approximately 100 Mbit/s for high mobility such as mobile access and up to approximately 1 Gbit/s for low mobility such as nomadic/local wireless access.
Dynamically share and use the network resources to support more simultaneous users per cell.
Scalable channel bandwidth of 5–20 MHz, optionally up to 40 MHz.[6][7]
Peak link spectral efficiency of 15 bit/s/Hz in the downlink, and 6.75 bit/s/Hz in the uplink (meaning that 1 Gbit/s in the downlink should be possible over less than 67 MHz of bandwidth; a short calculation illustrating this follows the list).
System spectral efficiency of up to 3 bit/s/Hz/cell in the downlink and 2.25 bit/s/Hz/cell for indoor usage.[6]
Smooth handovers across heterogeneous networks.
Ability to offer high quality of service for next generation multimedia support.
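The bandwidth implied by the peak-rate and spectral-efficiency targets above can be checked with a short calculation. The following minimal Python sketch is illustrative only; the helper name min_bandwidth_hz is chosen here for clarity and is not part of the ITU-R specification.

# Minimal sketch: relate IMT-Advanced peak rate, spectral efficiency and bandwidth.
# Figures are taken from the requirements listed above; the function is illustrative.

def min_bandwidth_hz(peak_rate_bps: float, spectral_eff_bps_per_hz: float) -> float:
    """Smallest channel bandwidth that can carry peak_rate_bps at the given
    peak link spectral efficiency (bit/s = bit/s/Hz * Hz)."""
    return peak_rate_bps / spectral_eff_bps_per_hz

# Downlink: 1 Gbit/s at 15 bit/s/Hz -> about 66.7 MHz, i.e. "less than 67 MHz".
print(min_bandwidth_hz(1e9, 15) / 1e6)    # ~66.7 (MHz)

# Uplink: at 6.75 bit/s/Hz the same 1 Gbit/s would need ~148 MHz,
# which is one reason the uplink peak-rate target is lower.
print(min_bandwidth_hz(1e9, 6.75) / 1e6)  # ~148.1 (MHz)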
In September 2009, the technology proposals were submitted to the International Telecommunication Union (ITU) as 4G candidates.[8] Basically all proposals are based on two technologies:
LTE Advanced, standardized by the 3GPP
802.16m, standardized by the IEEE (i.e. WiMAX)
Implementations of Mobile WiMAX and first-release LTE are largely considered a stopgap solution that will offer a considerable boost until WiMAX 2 (based on the 802.16m spec) and LTE Advanced are deployed. The latter standard versions were ratified in spring 2011, but are still far from being implemented.[5]
The first set of 3GPP requirements on LTE Advanced was
approved in June 2008.[9] LTE Advanced was to be
standardized in 2010 as part of Release 10 of the 3GPP
specification. LTE Advanced will be based on the existing
LTE specification Release 10 and will not be defined as a
new specification series. A summary of the technologies
that have been studied as the basis for LTE Advanced is
included in a technical report.[10]
First release LTE and Mobile WiMAX implementations are
in some sources considered pre-4G or near-4G, as they
do not fully comply with the planned requirements of
1 Gbit/s for stationary reception and 100 Mbit/s for mobile.
Confusion has been caused by some mobile carriers who have launched products advertised as 4G but which, according to some sources, are pre-4G versions, commonly referred to as '3.9G', which do not follow the ITU-R defined principles for 4G standards, but which today can be called 4G according to ITU-R. A common argument for branding 3.9G systems as new-generation is that they use different frequency bands from 3G technologies; that they are based on a new radio-interface paradigm; and that the standards are not backwards compatible with 3G, whilst some of the standards are forwards compatible with IMT-2000 compliant versions of the same standards.
[edit]System standards
[edit]IMT-2000 compliant 4G standards

Recently, ITU-R Working Party 5D approved two industry-developed technologies (LTE Advanced and WirelessMAN-Advanced)[11] for inclusion in the ITU's International Mobile Telecommunications Advanced (IMT-Advanced) program, which is focused on global communication systems that would be available several years from now.
[edit]LTE Advanced
See also: 3GPP Long Term Evolution (LTE) below
LTE Advanced (Long Term Evolution Advanced) is a candidate for the IMT-Advanced standard, formally submitted by the 3GPP organization to ITU-T in fall 2009, and expected to be released in 2012. The target of 3GPP LTE Advanced is to reach and surpass the ITU requirements.[12] LTE Advanced is essentially an enhancement to LTE. It is not a new technology but rather an improvement on the existing LTE network. This upgrade path makes it more cost effective for vendors to offer LTE and then upgrade to LTE Advanced, which is similar to the upgrade from WCDMA to HSPA. LTE and LTE Advanced will also make use of additional spectrum and multiplexing to achieve higher data speeds. Coordinated Multi-point Transmission will also allow more system capacity to help handle the enhanced data speeds. Release 10 of LTE is expected to achieve the IMT-Advanced speeds. Release 8 currently supports up to 300 Mbit/s download speeds, which is still short of the IMT-Advanced standards.[13]

Data speeds of LTE Advanced
Peak download: 1 Gbit/s
Peak upload: 500 Mbit/s
[edit]IEEE 802.16m or WirelessMAN-Advanced
The IEEE 802.16m or WirelessMAN-Advanced evolution of 802.16e is under development, with the objective of fulfilling the IMT-Advanced criteria of 1 Gbit/s for stationary reception and 100 Mbit/s for mobile reception.[14]
[edit]Forerunner versions
[edit]3GPP Long Term Evolution (LTE)
See also: LTE Advanced above

Telia-branded Samsung LTE modem


The pre-4G technology 3GPP Long Term Evolution (LTE) is often branded "4G-LTE", but the first LTE release does not fully comply with the IMT-Advanced requirements. LTE has a theoretical net bit rate capacity of up to 100 Mbit/s in the downlink and 50 Mbit/s in the uplink if a 20 MHz channel is used, and more if multiple-input multiple-output (MIMO), i.e. antenna arrays, are used.
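As a rough check of these figures, the implied single-stream spectral efficiencies can be computed directly. The following sketch is illustrative only and ignores real-world protocol overhead:

# Minimal sketch: implied single-stream spectral efficiency of the first-release
# LTE figures quoted above (100 Mbit/s down, 50 Mbit/s up in a 20 MHz channel).
# Illustrative only; actual LTE throughput depends on modulation, coding and overhead.

channel_hz = 20e6
rates_bps = {"downlink": 100e6, "uplink": 50e6}

for direction, rate in rates_bps.items():
    print(f"{direction}: {rate / channel_hz:.1f} bit/s/Hz in a 20 MHz channel")
# downlink: 5.0 bit/s/Hz, uplink: 2.5 bit/s/Hz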
The physical radio interface was at an early stage
named High Speed OFDM Packet Access (HSOPA),
now named Evolved UMTS Terrestrial Radio
Access (E-UTRA). The first LTE USB dongles do not
support any other radio interface.
The world's first publicly available LTE service was opened in the two Scandinavian capitals, Stockholm (Ericsson and Nokia Siemens Networks systems) and Oslo (a Huawei system), on 14 December 2009, and branded 4G. The user terminals were manufactured by Samsung.[15] Currently, the three publicly available LTE services in the United States are provided by MetroPCS,[16] Verizon Wireless,[17] and AT&T. As of April 2012, US Cellular[18] also offers 4G LTE. Sprint Nextel has also stated that it is considering switching from WiMAX to LTE in the near future.[17]
T-Mobile Hungary launched a public beta test (called a friendly user test) on 7 October 2011, and has offered commercial 4G LTE service since 1 January 2012.[citation needed]
In South Korea, SK Telecom and LG U+ have
enabled access to LTE service since 1 July 2011 for
data devices, slated to go nationwide by 2012.[19]
Data speeds of LTE
Peak download: 100 Mbit/s
Peak upload: 50 Mbit/s
[edit]Mobile WiMAX (IEEE 802.16e)


The Mobile WiMAX (IEEE 802.16e-2005) mobile wireless broadband access (MWBA) standard (also known as WiBro in South Korea) is sometimes branded 4G, and offers peak data rates of 128 Mbit/s downlink and 56 Mbit/s uplink over 20 MHz wide channels[citation needed].
In June 2006, the world's first commercial mobile
WiMAX service was opened by KT in Seoul, South
Korea.[20]
Sprint Nextel has begun using Mobile WiMAX, as of September 29, 2008, branded as a "4G" network even though the current version does not fulfil the IMT-Advanced requirements on 4G systems.[21]
In Russia, Belarus and Nicaragua, WiMAX broadband internet access is offered by the Russian company Scartel and is also branded 4G, Yota.
Data speeds of WiMAX
Peak download: 128 Mbit/s
Peak upload: 56 Mbit/s
[edit]TD-LTE for the Chinese market
While both Long-Term Evolution (LTE) and WiMAX were being vigorously promoted in the global telecommunications industry, the former (LTE), also the leading 4G mobile communication technology, rose rapidly and quickly took hold in the Chinese market. TD-LTE, backed by Qualcomm and Yota, is not yet mature, but many domestic and international wireless carriers are turning to it one after another. IBM data show that 67% of operators are considering LTE, because this is the main source of their future market; the news above also confirms this statement by IBM. Only 8% of operators are considering the use of WiMAX. WiMAX can provide the fastest network transmission on the market to its customers, but it is still no rival to LTE. TD-LTE is not the first 4G wireless mobile broadband data standard; it is China's 4G standard, amended and published by China's largest telecom operator, China Mobile. After a series of field trials, it is expected to enter the commercial phase within the next two years. Ulf Ewaldsson, an Ericsson vice president, said: "the Chinese Ministry of Industry and China Mobile will hold a large-scale field test in the fourth quarter of this year; by then, Ericsson will lend a hand." Judging from the current development trend, however, whether this standard advocated by China Mobile will be widely recognized by the international market is still debatable.
[edit]Discontinued candidate systems
[edit]UMB (formerly EV-DO Rev. C)
Main article: Ultra Mobile Broadband
UMB (Ultra Mobile Broadband) was the brand name
for a discontinued 4G project within
the 3GPP2 standardization group to improve
the CDMA2000 mobile phone standard for next

generation applications and requirements. In


November 2008, Qualcomm, UMB's lead sponsor,
announced it was ending development of the
technology, favouring LTE instead.[22] The objective
was to achieve data speeds over 275 Mbit/s
downstream and over 75 Mbit/s upstream.
[edit]Flash-OFDM
At an early stage the Flash-OFDM system was
expected to be further developed into a 4G standard.
[edit]iBurst and MBWA (IEEE 802.20) systems
The iBurst system (or HC-SDMA, High Capacity
Spatial Division Multiple Access) was at an early
stage considered as a 4G predecessor. It was later
further developed into the Mobile Broadband Wireless
Access (MBWA) system, also known as IEEE 802.20.
[edit]Data rate comparison
The following table shows a comparison of 4G
candidate systems as well as other competing
technologies.
Comparison of Mobile Internet Access methods
Common Name | Family | Primary Use | Radio Tech | Downstream (Mbit/s) | Upstream (Mbit/s) | Notes
HSPA+ | 3GPP | Used in 4G | CDMA/FDD, MIMO | 21, 42, 84, 672 | 5.8, 11.5, 22, 168 | HSPA+ is widely deployed. Revision 11 of the 3GPP states that HSPA+ is expected to have a throughput capacity of 672 Mbit/s.
LTE | 3GPP | General 4G | OFDMA/MIMO/SC-FDMA | 100 Cat3, 150 Cat4, 300 Cat5 (in 20 MHz FDD)[23] | 50 Cat3/4, 75 Cat5 (in 20 MHz FDD)[23] | LTE-Advanced update expected to offer peak rates up to 1 Gbit/s fixed speeds and 100 Mbit/s to mobile users.
WiMax rel 1 | 802.16 | WirelessMAN | MIMO-SOFDMA | 37 (10 MHz TDD) | 17 (10 MHz TDD) | With 2x2 MIMO.[24]
WiMax rel 1.5 | 802.16-2009 | WirelessMAN | MIMO-SOFDMA | 83 (20 MHz TDD), 141 (2x20 MHz FDD) | 46 (20 MHz TDD), 138 (2x20 MHz FDD) | With 2x2 MIMO. Enhanced with 20 MHz channels in 802.16-2009.[24]
WiMAX rel 2 | 802.16m | WirelessMAN | MIMO-SOFDMA | 2x2 MIMO: 110 (20 MHz TDD), 183 (2x20 MHz FDD); 4x4 MIMO: 219 (20 MHz TDD), 365 (2x20 MHz FDD) | 2x2 MIMO: 70 (20 MHz TDD), 188 (2x20 MHz FDD); 4x4 MIMO: 140 (20 MHz TDD), 376 (2x20 MHz FDD) | Also, low mobility users can aggregate multiple channels for a DL throughput of up to 1 Gbit/s.[24]
Flash-OFDM | Flash-OFDM | Mobile Internet, mobility up to 200 mph (350 km/h) | Flash-OFDM | 5.3, 10.6, 15.9 | 1.8, 3.6, 5.4 | Mobile range 30 km (18 miles), extended range 55 km (34 miles).
HIPERMAN | HIPERMAN | Mobile Internet | OFDM | 56.9 | — | —
Wi-Fi | 802.11 (11n) | Mobile Internet | OFDM/MIMO | 288.8 (using 4x4 configuration in 20 MHz bandwidth) or 600 (using 4x4 configuration in 40 MHz bandwidth) | — | Antenna, RF front end enhancements and minor protocol timer tweaks have helped deploy long range P2P networks compromising on radial coverage, throughput and/or spectral efficiency (310 km & 382 km).
iBurst | 802.20 | Mobile Internet | HC-SDMA/TDD/MIMO | 95 | 36 | Cell radius: 3–12 km; speed: 250 km/h; spectral efficiency: 13 bit/s/Hz/cell; spectrum reuse factor: "1".
EDGE Evolution | GSM | Mobile Internet | TDMA/FDD | 1.6 | 0.5 | 3GPP Release 7.
UMTS W-CDMA, HSDPA+HSUPA | UMTS/3GSM | General 3G | CDMA/FDD; CDMA/FDD/MIMO | 0.384, 14.4 | 0.384, 5.76 | HSDPA is widely deployed. Typical downlink rates today 2 Mbit/s, ~200 kbit/s uplink; HSPA+ downlink up to 56 Mbit/s.
UMTS-TDD | UMTS/3GSM | Mobile Internet | CDMA/TDD | 16 | — | Reported speeds according to IPWireless using 16QAM modulation, similar to HSDPA+HSUPA.
EV-DO Rel. 0, Rev. A, Rev. B | CDMA2000 | Mobile Internet | CDMA/FDD | 2.45, 3.1, 4.9xN | 0.15, 1.8, 1.8xN | Rev. B note: N is the number of 1.25 MHz chunks of spectrum used. EV-DO is not designed for voice, and requires a fallback to 1xRTT when a voice call is placed or received.
Notes: All speeds are theoretical maximums and will vary by a number of factors, including the use of external antennae, distance from the tower and the ground speed (e.g. communications on a train may be poorer than when standing still). Usually the
bandwidth is shared between several terminals. The
performance of each technology is determined by a
number of constraints, including the spectral
efficiency of the technology, the cell sizes used, and
the amount of spectrum available. For more
information, see Comparison of wireless data
standards.
For more comparison tables, see bit rate progress
trends, comparison of mobile phone
standards, spectral efficiency comparison
table and OFDM system comparison table.
[edit]Principal technologies in all candidate systems
[edit]Key features
The following key features can be observed in all suggested 4G technologies:
Physical layer transmission techniques are as follows:[25]
MIMO: To attain ultra high spectral efficiency by means of spatial processing including multi-antenna and multi-user MIMO
Frequency-domain equalization, for example multi-carrier modulation (OFDM) in the downlink or single-carrier frequency-domain equalization (SC-FDE) in the uplink: To exploit the frequency selective channel property without complex equalization
Frequency-domain statistical multiplexing, for example OFDMA or single-carrier FDMA (SC-FDMA, a.k.a. linearly precoded OFDMA, LP-OFDMA) in the uplink: Variable bit rate by assigning different sub-channels to different users based on the channel conditions
Turbo principle error-correcting codes: To minimize the required SNR at the reception side
Channel-dependent scheduling: To use the time-varying channel
Link adaptation: Adaptive modulation and error-correcting codes
Mobile IP utilized for mobility
IP-based femtocells (home nodes connected to fixed Internet broadband infrastructure)
As opposed to earlier generations, 4G systems do not support circuit-switched telephony. Most[which?] 4G standards lack soft-handover support, also known as cooperative relaying.
[edit]Multiplexing and Access schemes

The migration to 4G standards incorporates elements of many earlier technologies, and one often reads about solutions that use code (a cypher), frequency or time as the basis for multiplexing the spectrum more efficiently. While spectrum is considered finite, Cooper's Law has shown that we have developed more efficient ways of using spectrum, just as Moore's law has shown our ability to increase processing power.
As the wireless standards evolved, the access techniques used also exhibited increases in efficiency, capacity and scalability. The first generation wireless standards used TDMA and FDMA. In wireless channels, TDMA proved to be less efficient in handling high data rate channels, as it requires large guard periods to alleviate the multipath impact. Similarly, FDMA consumed more bandwidth for guard bands to avoid inter-carrier interference. So in second generation systems, one set of standards used the combination of FDMA and TDMA and the other set introduced an access scheme called CDMA. Usage of CDMA increased the system capacity, but as a theoretical drawback placed a soft limit on it rather than a hard limit (i.e. a CDMA network setup does not inherently reject new clients when it approaches its limits, resulting in a denial of service to all clients when the network overloads; though this outcome is avoided in practical implementations by admission control of circuit switched or fixed bitrate communication services). The data rate is also increased, as this access scheme (provided the network is not reaching its capacity) is efficient enough to handle the multipath channel. This enabled third generation systems, such as IS-2000, UMTS, HSxPA, 1xEV-DO, TD-CDMA and TD-SCDMA, to use CDMA as the access scheme. However, the issue with CDMA is that it suffers from poor spectral flexibility and computationally intensive time-domain equalization (a high number of multiplications per second) for wideband channels.
Recently, new access schemes like Orthogonal FDMA (OFDMA), Single Carrier FDMA (SC-FDMA), Interleaved FDMA and Multi-carrier CDMA (MC-CDMA) are gaining more importance for the next generation systems. These are based on efficient FFT algorithms and frequency-domain equalization, resulting in a lower number of multiplications per second. They also make it possible to control the bandwidth and form the spectrum in a flexible way. However, they require advanced dynamic channel allocation and traffic adaptive scheduling.
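As an illustration of the FFT-based processing these schemes rely on, the following minimal Python/NumPy sketch builds one OFDM symbol and recovers it. The subcarrier count and cyclic-prefix length are arbitrary illustrative values, not the parameters of LTE, WiMAX or any other standard.

# Minimal sketch of the OFDM idea behind OFDMA/SC-FDMA: data is mapped onto
# orthogonal subcarriers with an IFFT, and a cyclic prefix absorbs multipath so
# the receiver can equalize with one complex multiplication per subcarrier.
import numpy as np

n_subcarriers = 64          # IFFT size (illustrative)
cp_len = 16                 # cyclic prefix length (illustrative)

# QPSK symbols, one per subcarrier.
bits = np.random.randint(0, 2, size=(n_subcarriers, 2))
symbols = (2 * bits[:, 0] - 1 + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# Transmitter: IFFT to the time domain, then prepend the cyclic prefix.
time_signal = np.fft.ifft(symbols)
tx = np.concatenate([time_signal[-cp_len:], time_signal])

# Receiver (ideal channel here): drop the prefix, FFT back; with a real channel,
# per-subcarrier equalization would simply divide by the channel's frequency response.
rx_symbols = np.fft.fft(tx[cp_len:])
print(np.allclose(rx_symbols, symbols))  # True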
WiMAX uses OFDMA in both the downlink and the uplink. For the next generation of UMTS, OFDMA is used for the downlink. By contrast, single-carrier FDE is used for the uplink, since OFDMA contributes more to PAPR-related issues and results in nonlinear operation of amplifiers. IFDMA provides less power fluctuation and thus reduces the need for energy-inefficient linear amplifiers. Similarly, MC-CDMA is in the proposal for the IEEE 802.20 standard. These access schemes offer the same efficiencies as older technologies like CDMA. Apart from this, scalability and higher data rates can be achieved.
The other important advantage of the above-mentioned access techniques is that they require less complexity for equalization at the receiver. This is an added advantage especially in MIMO environments, since the spatial multiplexing transmission of MIMO systems inherently requires high-complexity equalization at the receiver.
In addition to improvements in these multiplexing systems, improved modulation techniques are being used. Whereas earlier standards largely used phase-shift keying, more efficient systems such as 64QAM are being proposed for use with the 3GPP Long Term Evolution standards.
[edit]IPv6 support
Main articles: Network layer, Internet protocol,
and IPv6
Unlike 3G, which is based on two parallel infrastructures consisting of circuit switched and packet switched network nodes, respectively, 4G will be based on packet switching only. This will require low-latency data transmission.
By the time that 4G was deployed, the process
of IPv4 address exhaustion was expected to be in its
final stages. Therefore, in the context of
4G, IPv6 support is essential to support a large
number of wireless-enabled devices. By increasing

the number of IP addresses, IPv6 removes the need


for network address translation (NAT), a method of
sharing a limited number of addresses among a
larger group of devices, although NAT will still be
required to communicate with devices that are on
existing IPv4 networks.
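A small back-of-the-envelope comparison of the two address spaces, which is the quantitative reason NAT becomes unnecessary with IPv6 (the figures are exact powers of two):

# Minimal sketch: the address-space difference that motivates IPv6 for 4G.
ipv4_addresses = 2 ** 32    # about 4.3 billion addresses
ipv6_addresses = 2 ** 128   # about 3.4e38 addresses

print(f"IPv4: {ipv4_addresses:,} addresses")
print(f"IPv6: {ipv6_addresses:.3e} addresses")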
As of June 2009, Verizon has
posted specifications that require any 4G devices on
its network to support IPv6.[26]
[edit]Advanced antenna systems
Main articles: MIMO and MU-MIMO
The performance of radio communications depends on the antenna system, termed a smart or intelligent antenna. Recently, multiple antenna technologies have been emerging to achieve the goals of 4G systems, such as high rate, high reliability, and long range communications. In the early 1990s, to cater for the
growing data rate needs of data communication,
many transmission schemes were proposed. One
technology, spatial multiplexing, gained importance
for its bandwidth conservation and power efficiency.
Spatial multiplexing involves deploying multiple
antennas at the transmitter and at the receiver.
Independent streams can then be transmitted
simultaneously from all the antennas. This
technology, called MIMO (as a branch of intelligent
antenna), multiplies the base data rate by (the smaller
of) the number of transmit antennas or the number of
receive antennas. Apart from this, the reliability in
transmitting high speed data in the fading channel

can be improved by using more antennas at the


transmitter or at the receiver. This is
called transmit or receive diversity. Both transmit/receive diversity and transmit spatial multiplexing are categorized as space-time coding techniques, which do not necessarily require channel knowledge at the transmitter. The other category is closed-loop multiple antenna technologies, which require channel knowledge at the transmitter.
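A minimal sketch of the spatial-multiplexing rule of thumb described above (ideal peak rate scaling with the smaller of the transmit and receive antenna counts); the base rate used here is illustrative, not tied to any particular standard:

# Minimal sketch: ideal spatial-multiplexing gain of MIMO.
def mimo_peak_rate(base_rate_bps: float, n_tx: int, n_rx: int) -> float:
    """Ideal peak rate with spatial multiplexing over n_tx x n_rx antennas:
    the single-antenna rate multiplied by min(n_tx, n_rx) parallel streams."""
    return base_rate_bps * min(n_tx, n_rx)

base = 100e6  # single-antenna (SISO) peak rate, illustrative
for n_tx, n_rx in [(1, 1), (2, 2), (4, 2), (4, 4)]:
    print(f"{n_tx}x{n_rx} MIMO: {mimo_peak_rate(base, n_tx, n_rx) / 1e6:.0f} Mbit/s")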
[edit]Open-wireless Architecture and Software-defined radio (SDR)
One of the key technologies for 4G and beyond is
called Open Wireless Architecture (OWA), supporting
multiple wireless air interfaces in an open architecture
platform.
SDR is one form of open wireless architecture (OWA). Since 4G is a collection of wireless standards, the final form of a 4G device will incorporate various standards. This can be efficiently realized using SDR technology, which is categorized under the area of radio convergence.
[edit]History of 4G and pre-4G technologies
The 4G system was originally envisioned by the Defense Advanced Research Projects Agency (DARPA).[citation needed] DARPA selected the distributed architecture and end-to-end Internet Protocol (IP), and believed at an early stage in peer-to-peer networking, in which every mobile device would be both a transceiver and a router for other devices in the network, eliminating the spoke-and-hub weakness of 2G and 3G cellular systems.[27] Since the 2.5G GPRS system, cellular systems have provided dual infrastructures: packet switched nodes for data services, and circuit switched nodes for voice calls. In 4G systems, the circuit-switched infrastructure is abandoned and only a packet-switched network is provided, while 2.5G and 3G systems require both packet-switched and circuit-switched network nodes, i.e. two infrastructures in parallel. This means that in 4G, traditional voice calls are replaced by IP telephony.

In 2002, the strategic vision for 4G, which ITU designated as IMT-Advanced, was laid out.
In 2005, OFDMA transmission technology was chosen as the candidate for the HSOPA downlink, later renamed the 3GPP Long Term Evolution (LTE) air interface E-UTRA.
In November 2005, KT demonstrated mobile
WiMAX service in Busan, South Korea.[28]
In April 2006, KT started the world's first
commercial mobile WiMAX service in Seoul, South
Korea.[29]
In mid-2006, Sprint Nextel announced that it would invest about US$5 billion in a WiMAX technology buildout over the next few years[30] ($5.76 billion in real terms[31]). Since that time Sprint has faced many setbacks that have resulted in steep quarterly losses. On May 7, 2008, Sprint, Imagine, Google, Intel, Comcast, Bright House, and Time Warner announced a pooling of an average of 120 MHz of spectrum; Sprint merged its Xohm WiMAX division with Clearwire to form a company which would take the name "Clear".
In February 2007, the Japanese company NTT
DoCoMo tested a 4G communication system
prototype with 4x4 MIMO called VSF-OFCDM at
100 Mbit/s while moving, and 1 Gbit/s while
stationary. NTT DoCoMo completed a trial in which
they reached a maximum packet transmission rate
of approximately 5 Gbit/s in the downlink with
12x12 MIMO using a 100 MHz frequency
bandwidth while moving at 10 km/h,[32] and is
planning on releasing the first commercial network
in 2010.
In September 2007, NTT Docomo demonstrated e-UTRA data rates of 200 Mbit/s with power consumption below 100 mW during the test.[33]
In January 2008, a U.S. Federal Communications
Commission (FCC) spectrum auction for the
700 MHz former analog TV frequencies began. As
a result, the biggest share of the spectrum went to
Verizon Wireless and the next biggest to
AT&T.[34] Both of these companies have stated
their intention of supporting LTE.
In January 2008, EU commissioner Viviane Reding suggested re-allocation of 500–800 MHz spectrum for wireless communication, including WiMAX.[35]

On 15 February 2008, Skyworks Solutions released a front-end module for e-UTRAN.[36][37][38]
In 2008, ITU-R established the detailed
performance requirements of IMT-Advanced, by
issuing a Circular Letter calling for candidate Radio
Access Technologies (RATs) for IMT-Advanced.[39]
In April 2008, just after receiving the circular letter, the 3GPP organized a workshop on IMT-Advanced where it was decided that LTE Advanced, an evolution of the current LTE standard, would meet or even exceed the IMT-Advanced requirements following the ITU-R agenda.
In April 2008, LG and Nortel demonstrated e-UTRA
data rates of 50 Mbit/s while travelling at
110 km/h.[40]
On 12 November 2008, HTC announced the first
WiMAX-enabled mobile phone, the Max 4G[41]
In December 2008, San Miguel Corporation, southeast Asia's largest food and beverage conglomerate, signed a memorandum of understanding with Qatar Telecom QSC (Qtel) to build wireless broadband and mobile communications projects in the Philippines. The joint venture formed wi-tribe Philippines, which offers 4G in the country.[42] Around the same time Globe Telecom rolled out the first WiMAX service in the Philippines.
On 3 March 2009, Lithuania's LRTC announced the first operational "4G" mobile WiMAX network in the Baltic states.[43]

In December 2009, Sprint began advertising "4G"


service in selected cities in the United States,
despite average download speeds of only 3
6 Mbit/s with peak speeds of 10 Mbit/s (not
available in all markets).[44]
On 14 December 2009, the first commercial LTE deployment was in the Scandinavian capitals Stockholm and Oslo, by the Swedish-Finnish network operator TeliaSonera and its Norwegian brand name NetCom (Norway). TeliaSonera branded the network "4G". The modem devices on offer were manufactured by Samsung (dongle GT-B3710), and the network infrastructure created by Huawei (in Oslo) and Ericsson (in Stockholm). TeliaSonera plans to roll out nationwide LTE across Sweden, Norway and Finland.[45][46] TeliaSonera used a spectral bandwidth of 10 MHz and single-input single-output transmission, which should provide physical layer net bit rates of up to 50 Mbit/s downlink and 25 Mbit/s in the uplink. Introductory tests showed a TCP throughput of 42.8 Mbit/s downlink and 5.3 Mbit/s uplink in Stockholm.[47]
On 25 February 2010, Estonia's EMT opened an LTE "4G" network working in test regime.[48]
On 4 June 2010, Sprint Nextel released the first
WiMAX smartphone in the US, the HTC Evo 4G.[49]
In July 2010, Uzbekistan's MTS deployed LTE
in Tashkent.[50]

On 25 August 2010, Latvia's LMT opened an LTE "4G" network working in test regime over 50% of the territory.
On 6 December 2010, at the ITU World
Radiocommunication Seminar 2010, the ITU stated
that LTE, WiMax and similar "evolved 3G
technologies" could be considered "4G".[2]
On 12 December 2010, VivaCell-MTS launched a 4G/LTE commercial test network in Armenia, with a live demo conducted in Yerevan.[51]
On 28 April 2011, Lithuania's Omnitel opened an LTE "4G" network working in the 5 biggest cities.[52]
In September 2011, all three Saudi telecom companies, STC, Mobily and Zain, announced that they would offer 4G LTE for high speed USB sticks for mobile computers, with further development for telephones by 2013.[53]
In 2011, Argentina's Claro launched a 4G HSPA+ network in the country.
In 2011, Thailand's Truemove-H launched a 4G HSPA+ network with nationwide availability.
On February 10, 2011, the Samsung Galaxy Indulge, offered by MetroPCS, became the first commercially available LTE smartphone.[54][55]
On March 17, 2011, the HTC ThunderBolt, offered by Verizon in the U.S., became the second LTE smartphone to be sold commercially.[56][57]
On 31 January 2012, Thailand's AIS and its subsidiary DPC, in cooperation with CAT Telecom for the 1800 MHz frequency band and with TOT for the 2300 MHz frequency band, launched the first LTE field trial in Thailand, with authorization from the NBTC.[58]
In February 2012, Ericsson demonstrated mobile TV over LTE, utilizing the new eMBMS service (enhanced Multimedia Broadcast Multicast Service).[59]
On 10 April 2012, Bharti Airtel launched 4G (LTE) in Kolkata, the first in India, making India one of the first countries in the world to deploy this cutting-edge technology commercially.[60]
On 20 May 2012, Azerbaijan's biggest mobile
operator Azercell launched 4G [LTE].[61]
[edit]Deployment plans
In May 2005, Digiweb, an Irish fixed and wireless
broadband company, announced that they had
received a mobile communications license from the
Irish telecoms regulator ComReg. This service will be
issued the mobile code 088 in Ireland and will be
used for the provision of 4G mobile
communications.[62][63] Digiweb launched a mobile
broadband network using FLASH-OFDM technology
at 872 MHz.

On September 20, 2007, Verizon


Wireless announced plans for a joint effort with
the Vodafone Group to transition its networks to the
4G standard LTE. On December 9, 2008, Verizon
Wireless announced their intentions to build and
begin to roll out an LTE network by the end of 2009.
Since then, Verizon Wireless has said that they will
start their rollout by the end of 2010.
On July 7, 2008, South Korea announced plans to
spend 60 billion won, or US$58,000,000, on
developing 4G and even 5G technologies, with the
goal of having the highest mobile phone market share
by 2012, and the hope of an international standard.[64]
Telus and Bell Canada, the major
Canadian cdmaOne and EV-DO carriers, have
announced that they will be cooperating towards
building a fourth generation (4G) LTE wireless
broadband network in Canada. As a transitional
measure, they are implementing 3G UMTS that went
live in November 2009.[65]
Sprint Nextel offers a 3G/4G connection plan, currently available in select cities in the United States.[44] It delivers rates up to 10 Mbit/s. Sprint has announced that it will launch an LTE network in early 2012.[66]
In the United Kingdom and in Ireland, O2 UK and O2 Ireland (both subsidiaries of Telefónica Europe) are to use Slough as a guinea pig in testing the 4G network and have called upon Huawei to install LTE technology in six masts across the town to allow people to talk to each other via HD video conferencing and play PlayStation games while on the move.[67] On February 29, 2012, the first commercial 4G LTE service in the UK launched in the Borough of Southwark, London.[68] Ofcom is in the process of auctioning off the UK-wide 4G spectrum. This will use the airspace made available following the country's analogue television signal switch-off.[69]
Verizon Wireless has announced that it plans to
augment its CDMA2000-based EV-DO 3G network in
the United States with LTE, and is supposed to
complete a rollout of 175 cities by the end of 2011,
two thirds of the US population by mid-2012, and
cover the existing 3G network by the end of
2013.[70] AT&T, along with Verizon Wireless, has
chosen to migrate toward LTE from 2G/GSM and
3G/HSPA by 2011.[71]
Sprint Nextel has deployed WiMAX technology which
it has labeled 4G as of October 2008. It is currently
deploying to additional markets and is the first US
carrier to offer a WiMAX phone.[72]
The U.S. FCC is exploring the possibility of deployment and operation of a nationwide 4G public safety network which would allow first responders to seamlessly communicate between agencies and across geographies, regardless of devices. In June 2010 the FCC released a comprehensive white paper which indicates that the 10 MHz of dedicated spectrum currently allocated from the 700 MHz spectrum for public safety will provide adequate capacity and performance necessary for normal communications as well as serious emergency situations.[73]
TeliaSonera started deploying LTE (branded "4G") in
Stockholm and Oslo November 2009 (as seen
above), and in several Swedish, Norwegian, and
Finnish cities during 2010. In June 2010, Swedish
television companies used 4G to broadcast live
television from the Swedish Crown Princess' Royal
Wedding.[74]
Safaricom, a telecommunication company in East&
Central Africa, began its setup of a 4G network in
October 2010 after the now retired& Kenya Tourist
Board Chairman, Michael Joseph, regarded their 3G
network as a white elephant i.e. it failed to perform to
expectations. Huawei was given the contract the
network is set to go fully commercial by the end of Q1
of 2011
Telstra announced on 15 February 2011, that
it intends to upgrade its current Next G network to 4G
with Long Term Evolution (LTE) technology in the
central business districts of all Australian capital cities
and selected regional centers by the end of
2011.[75][when?]
Sri Lanka Telecom Mobitel and Dialog Axiata announced that, for the first time in South Asia, Sri Lanka had successfully tested and demonstrated 4G technology, on 6 May 2011 (Sri Lanka Telecom Mobitel) and 7 May 2011 (Dialog Axiata), and began the setup of their 4G networks in Sri Lanka.[76][77] Mobitel was able to reach a speed of 96 Mbit/s while Dialog Axiata reached 128 Mbit/s in their demonstrations.
In mid-September 2011,[5] Mobily of Saudi Arabia announced that its 4G LTE networks were ready after months of testing and evaluation.
In December 2011, UAE's Etisalat announced the commercial launch of 4G LTE services covering over 70% of the country's urban areas.[citation needed] As of May 2012, only a few areas have been covered.[citation needed]
In India, on 10 April 2012, the telecom company Bharti Airtel launched India's first 4G service, in Kolkata, using TD-LTE technology.[78] Only 14 months before the official launch in Kolkata, a group consisting of China Mobile, Bharti Airtel and SoftBank Mobile had come together in Barcelona as the GTI (Global TD-LTE Initiative) and signed a commitment to TD-LTE (Time-Division Long-Term Evolution) standards for the Asian region.
On 27 April 2012, Brazil's telecoms regulator Agência Nacional de Telecomunicações (Anatel) announced that the six host cities for the 2013 Confederations Cup to be held there will be the first to have their networks upgraded to 4G.[79]
On 21 June 2012, SFR will launch 4G in Marseille. It
will be the first 4G commercial launch in France.

[edit]Beyond 4G research
Main article: 5G
A major issue in 4G systems is making the high bit rates available in a larger portion of the cell, especially to users in an exposed position in between several base stations. In current research, this issue is addressed by macro-diversity techniques, also known as group cooperative relay, and also by Beam-Division Multiple Access (BDMA).[80]
Pervasive networks are an amorphous and at present
entirely hypothetical concept where the user can be
simultaneously connected to several wireless access
technologies and can seamlessly move between
them (See vertical handoff, IEEE 802.21). These
access technologies can be Wi-Fi, UMTS, EDGE, or
any other future access technology. Included in this
concept is also smart-radio (also known as cognitive
radio) technology to efficiently manage spectrum use
and transmission power as well as the use of mesh
routing protocols to create a pervasive network.
