The Infinite Bit: An Inside Story of Digital Technology
Ebook · 915 pages · 13 hours

About this ebook

This book is the story of digital technology from a scientific and engineering perspective. It brings to the reader the wonders of science and the ingenuity in engineering. It shows that technology is not just a tool but also an interesting process. It explores technology’s creation from the perspective of needs, problems, and ideas that shaped the digital revolution.

Developments are seen through the eyes and thoughts of the inventors, and their work is placed within the historical context of the times in which they lived. The narration focuses on the joy of discovery and the impact of invention. Recent technology of the twenty-first century is traced to its beginnings, giving a perspective of evolution from simple origins to the complex systems of today. Controversies and engineering blunders also have an important place in this story.

For the layperson, the book will serve as a readable introduction to terms we often encounter in everyday language but may not necessarily understand—Twitter, SMS, email, digital encoding, online security, megapixels, gigabytes, resolution, HDTV, MP3, data modem, ADSL broadband, smartphones, bandwidth, and bit rate. These terms are merged into a continuous narrative that uses little technical jargon or mathematics.

The book starts with telegraphy and telephony. Telephony is perhaps the most important technology of the twentieth century; many others, including the modern Web and cellular communications, evolved from it. Going beyond telecommunications, the development of computing machines is narrated in detail. The early history of the Internet, which later incorporated the World Wide Web, is given due importance. Condensed histories of famous corporations form a necessary part of the narrative—AT&T, Hewlett-Packard, IBM, Microsoft, Cisco, and Google.

The book will benefit all users of digital technology who are curious to learn about its inventors, the inventions, and the contexts in which the technology evolved. Beyond students and practising engineers, the book’s non-technical style will appeal to many not necessarily from an engineering background.

Language: English
Release date: Oct 13, 2013
ISBN: 9781301381098



    The Infinite Bit - Arvind Padmanabhan

    The Infinite Bit: An Inside Story of Digital Technology

    Published by Arvind Padmanabhan.

    Smashwords Edition.

    Copyright © Arvind Padmanabhan, 2013.

    Cover Design © Arvind Padmanabhan, 2013.

    Chapter Cartoons © Boopathy Srinivasan, 2013.

    Pictures under Creative Commons license are released as such.

    Pictures in public domain remain so.

    Other Illustrations © Arvind Padmanabhan, 2013.

    Thank you for downloading this free ebook. Although this is a free book, it remains the copyrighted property of the author, and may not be reproduced, copied, or distributed for commercial or non-commercial purposes. If you enjoyed this book, please encourage your friends to download their own copy at Smashwords.com, where they can also discover other works by this author. Thank you for your support.

    ARVIND PADMANABHAN graduated from the National University of Singapore with a master’s degree in electrical engineering. He has worked extensively on various wireless technologies. His interests include cryptography, Internet technology, and natural language processing. This is his first book on digital technology and it aims to simplify the subject for the layperson. He lives in Bangalore, India.

    Mapping of Book Chapters to a Typical Digital Communication System

    Table of Contents

    Preface

    0001 Once Upon A Time

    0010 The Science of Engineering

    0011 Appreciating Noise

    0100 A Measure of Information

    0101 All in a Few Words

    0110 Reaching for the Limit

    0111 For Your Eyes Only

    1000 In the Land of Ones and Zeros

    1001 The Goodness of Being Soft

    1010 Beyond Borders

    1011 Bits on Wings

    1100 From Carbon to Silicon

    Acknowledgements

    Notes

    Bibliography
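A playful detail in the table of contents above: the chapter numbers are four-bit binary values, 0001 through 1100. A quick sketch (mine, not the book's) decoding a few of them:

```python
# The chapter headings number the chapters in binary. Decode each
# heading's leading bits to its decimal chapter number.
toc = [
    "0001 Once Upon A Time",
    "0010 The Science of Engineering",
    "1100 From Carbon to Silicon",
]
for entry in toc:
    bits, title = entry.split(" ", 1)      # separate bits from the title
    print(f"Chapter {int(bits, 2)}: {title}")  # int(bits, 2) parses base 2
# → Chapter 1: Once Upon A Time
# → Chapter 2: The Science of Engineering
# → Chapter 12: From Carbon to Silicon
```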

    Preface

    Some eight years ago, I received an email from a customer complaining that our solution wasn’t working for him. We used to supply equipment for testing cellular mobile phones before they were released to the market. The customer experienced failures often enough that he decided to capture a series of screenshots and attach them to the email so that I could debug the problem.

    The problem was that each image was 2.5 megabytes (MB) in size and he had ten of them.¹ Worse still, his mail server had some sort of upper limit on attachment sizes. He got around it by sending me a series of five emails, each with two attachments. At my end, the corporate mail servers and systems were configured to scan all emails and attachments for viruses. In the end, the entire process became dead slow. The necessity of downloading 25 MB of data from a remote mail server held up more urgent emails for a quarter of an hour. When I finally got down to analysing the screenshots, I found that the error messages were textual, with simple error codes. The screenshots, at least in this case, added little value.
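The arithmetic of the anecdote, sketched out using only the figures given above:

```python
image_mb = 2.5    # size of each screenshot, as stated
num_images = 10
per_email = 2     # attachments per email, to stay under the server's limit

total_mb = image_mb * num_images         # total data to download
emails_needed = num_images // per_email  # separate emails the customer sent

print(f"{total_mb} MB across {emails_needed} emails")
# → 25.0 MB across 5 emails
```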

    The problem with today’s technology is that it can be easily misused. Technology has not reached a point where it can assess and decide what the user wants. It is as yet unable to find the best match across user needs, content, and delivery. Sometimes when it tries to do so, it gets it wrong. Until technology becomes smart, until it learns to learn and adapt, users need to know something about the tools they work with. Only then can they use them in the right manner. Every engineer’s dream is to build a system whose technology is transparent to the user. In other words, the common person on the street need not know anything about it to use it. But this dream is yet to be realized.

    Since that email incident, things have changed so much that the old paradigms survive only in pockets of almost obsolete systems. The order of the day is networking and ubiquitous connectivity. Broadband data speeds have increased dramatically. Systems are usually connected to the Internet. Large data farms have grown the world over, riding on waves of miniaturization, specialization, distributed computing, and the ensuing economies of scale.

    Today, the same customer might send me a notification by email or Twitter while the screenshots themselves sit in a web cloud. I could pick up the files from the cloud when I want them. In other words, the culture of pushing content to collaborators has been transformed into one of pulling. The customer’s email system might be more intelligent, compressing large images before sending them out. Files amounting to 25 MB can be brought down to just 1 MB. The customer can dispense with images altogether and send only the necessary error codes. For interactive debugging, given the necessary privileges, I can even log into the customer’s system halfway across the world, debug, and perhaps fix the problem in a matter of minutes.

    But not everyone is tech-savvy. People often learn from their mistakes, but learning by trial and error is expensive. Learning by reading about technology is usually difficult since most books are written in textbook style, riddled with equations and technical jargon. This book aims to simplify the concepts of digital technology from a broad perspective. With such an understanding, it is hoped that readers will appreciate the complexity of technology that is often taken for granted.

    If technology is perceived to be simple, it is a tribute to the engineers. It is the engineer’s mission to make complex things simple to use. Where engineering is successful, external simplicity often belies an underlying complexity. This has the unfortunate consequence that engineers are often not appreciated for their contributions. By telling their stories, this book hopes to set the record straight. While the numerous stories behind discoveries and inventions are often interesting, we restrict ourselves to the best and the most important of them. The effort is to weave together a coherent account rather than a comprehensive one. After all, this is not an encyclopedia. Neither is this a textbook. It has few equations. It uses little mathematics but is not unmathematical. Mathematics, when its essence is conveyed in plain words, can add value and bring clarity.

    Today’s digital world is often equated with the Internet.² The Internet, however, is only one of many things that make up this digital world. A better approach to understanding digital technology is to look at the general framework of a digital communication system. The need of every user is to communicate and to experience. Communication in the beginning was private. When it became public, it was in the form of broadcasting. Recent years have seen a steady shift in the dynamics, where individual expression in the public domain is as important as standard news articles and announcements. Experience comes with a rich diversity that includes interaction, learning, and entertainment. Digital systems attempt to satisfy these needs based on three core principles:

    1. Efficiency—since resources are expensive and finite, we must make the best of them.

    2. Correctness—preserve data integrity since corrupt data is useless or even misleading.

    3. Security—keep our identities intact, protect our data from eavesdropping, and enable confidence in systems that drive e-commerce.

    To these traditional core principles, recent decades have seen the emerging importance of three secondary principles:

    1. Connectivity—inter-network systems to make transparent access to distributed services possible.

    2. Mobility—enable access beyond systems tethered to fixed locations so that users can avail themselves of services worldwide on the move.

    3. Usability—build smart systems that need minimal manual intervention to achieve the best user experience.

    The enablers of the above principles are science and engineering. This book is about these twin enablers as much as it is about the men and women behind them. Readers can relate by direct experience since many aspects of this technology are integrated into our lives and culture—secure e-commerce transactions, mobile telephony, Internet radio, 10-megapixel digital cameras, DVD movies, MP3 songs, HDTV on LCD flat screens, Skype calls, JPEG images attached to emails, broadband modems, Wi-Fi hotspots, or laser printing. Needless to say, all these terms of modern technology take some getting used to.

    A Framework for Digital Technology

    A Caltech colleague once asked the eminent physicist Richard Feynman to explain certain concepts relating to Fermi-Dirac statistics. Feynman had by then made discoveries of his own in the field of quantum mechanics and was one of the most respected physicists of his time. Responding to his colleague’s request, ever the enthusiastic educator, he decided to prepare a freshman lecture on the topic. He came back a few days later and told the faculty member, “You know, I couldn’t do it. I couldn’t reduce it to the freshman level. That means we really don’t understand it.”³

    To simplify a subject as complex and vast as digital technology is a daunting task. Almost every branch of the field leads to many sub-branches, many of which have in time matured into formidable branches of knowledge and advancement in their own right. Then there is the complexity of inter-branch influences and cross-application of concepts. To do justice to this immense tree of digital technology, as far as it is within my reach, I have taken the approach of breadth of coverage rather than depth.

    If I have failed to bring clarity in some areas, I must appeal to Feynman’s sentiment: I don’t really understand it at a fundamental level. Nonetheless, the task of writing this book has brought me much new knowledge and clarified the old.

    0001 Once Upon A Time

    One winter evening in 1819, students at the University of Copenhagen assembled for a lecture-cum-demonstration. The subject was electricity, still in its infancy. Researchers had been experimenting on electricity since the early seventeenth century. There was still much to be understood and much more to be explained. This lecture was therefore nothing short of the state-of-the-art knowledge and investigative work of the time. The lecturer for the evening was the university’s Professor of Physics, Hans Christian Oersted.

    Professor Oersted had spent the entire afternoon assembling the equipment for the lecture. It was customary for him to demonstrate a few standard experiments for which the results were known in advance. When ideas occurred to him, he usually added new experiments in the presence of his inquisitive audience, often enlisting them to assist him in his endeavours.

    The lecture proceeded as intended. The apparatus was explained. The expected effects were observed and the underlying theory was put forward to the students. Just as his audience proceeded to disperse, something unexpected happened. A magnetic needle had been placed accidentally, or perhaps by fortunate chance, near a wire. When one of his assistants closed the circuit by mistake, the needle turned. The professor exclaimed in surprise. The audience moved forward for a closer look. The professor repeated the experiment, opening and then closing the circuit repeatedly. The results were unmistakable. The flow of current in the wire affected the magnet. Thus was born the science of electromagnetism.

    The above story is nothing more than a popular account of Oersted’s discovery of electromagnetism. The public loves nothing more than the picture of an eccentric scientist struggling with the elements for years, until nature takes pity and decides to give up her secrets. If the scientist is an astute observer, he will not miss this rare chance, and the discovery is his for all eternity. Historians of science, looking at the evidence before them, often incomplete and sometimes contradictory, have quite a different view of Oersted’s discovery. For this, we need to travel further back, to the start of the nineteenth century.

    The year 1800 is well remembered as the year of the birth of electric current. Galvanic current, as it was later called, was named in tribute to the Italian Luigi Galvani, who in 1786 had experimented with electric discharges through the bodily fluid and muscle tissue of a dead frog laid bare on a metallic table. Professor Galvani could neither offer a satisfactory explanation for the convulsions of the frog nor translate his experiment into more useful effects. That was done partially by his contemporary and compatriot Alessandro Volta, a professor at the University of Pavia.

    Until the time of Volta, electricity was known only in its static form. In other words, electric charge could be accumulated, often in great quantities, and discharged. Discharge happened in an instant and often in spectacular fashion. It was man-made lightning, only on a much smaller scale. Among the early experimenters was the American Benjamin Franklin, who in 1752 trapped charges from lightning by flying a kite in a thunderstorm. By this process, he was able to store large amounts of charge in Leyden jars, an early form of today’s capacitors.⁴ Using this knowledge, he would later invent the lightning rod, something we use to this day.

    Through much of the eighteenth century, static electricity did not have any significant application. It was used for entertainment, such as Francis Hauksbee’s glowing glass spheres. It was used at times for the shock treatment of patients, though there was no scientific basis for this. No one understood electricity well, much less the effect of electric discharge through the human body. Still, those were curious times and there were many who did not hesitate to try new things.

    Until prehistoric man tamed fire, he had been in awe of it. As the eighteenth century drew to a close, man had some control over static electric discharges but he had not yet tamed them. If anything, the European Renaissance had brought about a new outlook in scientific enquiry. Prehistoric man had been a poor scientific being. He had accepted nature as it was. Nineteenth-century man was more inquisitive. He was not a mere observer of nature’s workings. He wanted to pry open her secrets. He was, in short, passionately involved with her.

    It was in such a time that Volta tamed electricity. Taking the cue from Galvani’s results, he hypothesized that physical contact of two dissimilar metals results in charge separation and charge flow. Building on this hypothesis, his experiments led him to invent what we now call the Voltaic pile, a stack of pairs of zinc and copper plates separated by cloths soaked in brine. Soon Volta invented a variant of the pile, which came to be called Volta’s crown of cups. With these inventions, Volta could produce a form of current that was no longer instantly discharged. Current was now brought to a form that was continuous.

    By the year 1800, we thus had three distinct fields of knowledge—magnetism, in which early pioneering work had been done by William Gilbert in the court of Queen Elizabeth I of England; electricity, then a term that referred only to static or frictional electricity that could be created by rubbing dissimilar materials; galvanism, the continuous current produced by Volta’s invention of the cell, soon to be improved by other scientists in the field. More than these branches of knowledge, scientists became interested in their similarities and interrelationships.

    It was not difficult to see that static electricity and galvanism were one and the same. Both related to flow of electric charges. Their difference was only one of form—one uncontrolled and impulsive, the other controlled and continuous. The relationship between these and magnetism was less obvious. It was not even clear if the two were connected in some way.

    By the time Oersted entered the field, scientists had been seeking a way to unify electricity and magnetism for nearly two hundred years. In a series of carefully constructed experiments, William Gilbert had established two fundamental facts about magnetism—that like poles repel and opposite poles attract. In 1785, performing even more delicate experiments involving accurate measurement apparatus that he himself had constructed, the French scientist Charles Coulomb did the same for electric charges—that like charges repel and opposite charges attract. While Gilbert had experimented on electrostatics as well, he had not known about charges repelling. Seventeenth-century observers who followed these scientific advancements noted that, at times when lightning struck, iron needles lying nearby became magnetized. The suggestion that electricity and magnetism had an underlying unifying force, a common origin, was not outrageous even to careful sceptics.

    But not everyone was convinced. The French mathematician André-Marie Ampère for one expressed his views in 1802, stating that electricity and magnetism are two fluids of rather distinct nature. The Englishman Thomas Young in 1807 also saw no obvious connection between the two. Ampère would later make pioneering discoveries and lay the foundations of the science of electrodynamics. Young for his part had already proposed that light travels as a wave, challenging the Newtonian view that light travels as particles. In time, Ampère’s friend Augustin Fresnel would bring Ampère over to the wave theory camp.⁵ The commonality between Ampère’s and Young’s contributions would be electromagnetism. Light was, after all, an electromagnetic wave. A conclusive proof of this would come only in the 1880s at the hands of an ingenious German physicist (Chapter 11).

    Scientists of the age were influenced as much by philosophy as by the work of their colleagues. Those enquiring into the observable nature of the world were known as natural philosophers only because the word scientist was yet to be coined. The twin aspects of philosophical theorizing and active experimentation often, but not always, went hand in hand. When science had so little foundation to build upon, this was a necessity. Scientists were part-philosophers. In this environment grew the German school of Romantic Naturphilosophie of Friedrich W. J. Schelling, whose ideas influenced Oersted. Schelling believed in the unity of the forces of nature, manifested in various forms in our world of experience. He also held that it is impossible to know this unity by experiment alone; we must take some concepts as a priori, from which speculative physics is possible and acceptable.

    Oersted believed that Schelling’s idea of unity meant that electricity and magnetism must be related, somehow. As is often the case in science, belief in something is the starting point for experimentation and discovery. But the line of attack was not obvious. No experiments suggested themselves to Oersted or others. As if to elevate the challenge, a new element was thrown into the mix—chemistry.

    Ever since Volta invented the cell, investigation into electrical phenomena had taken a new turn. Within a few weeks of Volta’s discovery, the English chemists William Nicholson and Anthony Carlisle decomposed water by passing galvanic current through it. Hydrogen bubbles formed at one end and oxygen bubbles at the other. A little later, in 1807, Humphry Davy in England decomposed metal salts to isolate potassium and sodium. A year later, he isolated many new metals from their oxides—magnesium, strontium, barium, and calcium. Thus was born the new field of electrochemistry. Galvanic current directly propelled chemistry to new heights. Electrochemistry suggested that electric forces could be used to break chemical forces, and thus to break apart a chemical compound.

    When the English chemists passed electricity through water, galvanism might have inspired them to do so. They possibly applied Schelling’s belief in the unity of all things. Volta had shown that a chemical reaction could produce an electrical one—although Volta himself did not advance the chemical explanation. To Nicholson and Carlisle, this possibly suggested that an electrical reaction could produce a chemical one.

    These developments were not lost on Oersted. He enquired whether electric forces and chemical forces were simply different manifestations of the same force. Oersted’s first communication on this matter was published in a French journal in 1806.⁷ One would have expected things to move quickly from this point onwards, but Oersted remained silent for six years. He clarified his views in 1812 in a German publication, followed a year later by a French one.⁸ Ideas were now beginning to take definitive shape in Oersted’s mind, but much of it was still speculative in the manner of Schelling’s Naturphilosophie. Nonetheless, it was an important starting point in the evolution of scientific thinking.

    Oersted was seeking to identify the unifying force behind all chemical processes. With this, all of chemistry would be explained with reference to primary forces. He also conjectured that galvanic electricity perhaps had a greater affinity to magnetism than frictional electricity. On the physical reality of electricity, borrowing from Young’s wave theory, he came closer to the truth. He proposed that electricity was not a fluid but a disturbance of equilibrium in matter, a series of continual losses and replenishments of charge, which gives the effect of a continuous flow of galvanic current. This idea was directly influenced by chemical reactions. Thus, electricity propagated like a wave and did not flow like a fluid, about which he stated,

    One could express this succession of opposed forces which exists in the transmission of electricity by saying that electricity is always propagated in an undulatory manner.

    Following these remarks in 1813, Oersted seems to have moved on to other things and set aside further investigation, until the winter of 1819. By the time of the memorable lecture of 1819, he had come to regard galvanism as a transition between the extremes of static electricity and magnetism. Static electricity was transient and momentarily discharged. Magnetism was ever-present. The earth’s magnetic field always existed. Lodestones and magnets rarely lost their power to attract or repel. Galvanism, with its continuous flow of current so long as the Voltaic pile remained chemically potent, seemed to fit snugly between static electricity and magnetism.

    Historians do not agree on the dates of Oersted’s discovery. Was it in the winter of 1819 or was it in the spring of 1820 that Oersted made the discovery in front of an audience? Some claim that it happened only in July 1820 when full details of his experiments became public. Oersted’s own account of 1820 mentions the winter of 1819 in insufficient detail. However, modern commentaries that include English translations of Oersted’s 1821 publications state clearly that the discovery was made in April 1820.¹⁰ Historiography of science is not an easy science. What concerns us more is the nature of the discovery.

    Oersted’s main impediment seems to have been a Newtonian view of the world, in which forces are central and act at a distance.¹¹ Even the very terminology of the day reflected the overbearing influence of Newton. Scientists talked and wrote about magnetic forces, electric forces, and chemical forces. Volta was a living person and voltage as a word was not yet in the vocabulary of researchers in electrics. Force was the key operative word, and it followed directly from Newtonian mechanics and gravitational force laws. What we today call voltage was for a good part of the nineteenth century called electromotive force or emf, a term that persists in some modern textbooks. Against the well-established theories of Newton and everything that followed in agreement, it was difficult to conceive or propose contrary views. Scientists had great faith in Newtonian theories and this prevented them from thinking differently. It was in this environment of constrained, selective thinking that Oersted made his first mistake.

    When a magnet is placed in the east-west direction near a magnetic needle, the needle, which normally points to earth’s magnetic north, reorients itself east-west to align with the magnet. Oersted extended this idea and placed a current-carrying wire in the east-west direction in place of the magnet. If magnetism proceeded from galvanism, the effect of the current would be similar to that of the magnet. The poles of the magnet due to the flowing current would be located somewhere along the wire. Nothing happened to the needle, which kept pointing to earth’s magnetic north. Oersted concluded that the effect was perhaps too small to overcome earth’s magnetism; battery technology in the 1810s was, after all, primitive.

    Electric discharge gave off light and heat. An idea occurred to Oersted. If all forces of nature were related, then light and heat would possibly be related to electricity and magnetism. He added a platinum wire to his galvanic circuit so that he now had an incandescent wire that carried current. The results were the same. The magnetic needle remained unaffected. Apparently, even a glowing wire did not result in magnetism strong enough to deflect the needle. All this while, Oersted had been reluctant to place the wire in the north-south direction.

    He then made an incremental change in the setup. He placed the wire perpendicular to the plane of the needle, perhaps vaguely inspired by lightning affecting magnetic compasses.¹² He noticed some effect on the needle but the results were not consistent. Nothing could be concluded. He tried bending the wire into different shapes. He obtained consistent results but his attempts to locate the poles of magnet along the wire failed.

    It was this that he demonstrated to his audience. The fact that he failed to locate the poles did not impress the audience and they proceeded to leave. By now, Oersted was in a mixed state of desperation and disappointment. All his scientific lines of attack following the tradition of Newton had failed. In a last desperate attempt, he ditched Newton, placed the wire in north-south direction, and saw the needle turn. The departing visitors were called back and the effect was demonstrated.

    In the days that followed, Oersted performed a series of experiments, as many as sixty of them, to establish some basic facts of electromagnetism.¹³ He placed the magnetic needle above the galvanic current-carrying wire. Then he placed the needle below the wire. He also reversed the direction of current flow. The result of these experiments was to cast doubt on Newton’s central force theory. Magnetism was not inside the conductor; it was outside it. It was also circular, looping all around the conductor with decreasing effect as the distance from the conductor increased. The question of identifying the poles became irrelevant. Circularity meant that the poles were not concentrated within the wire and that the magnetic needle’s deflection depended strictly on its placement relative to the direction of the current. In Oersted’s own words,¹⁴

    It is sufficiently evident from the preceding facts that the electric conflict is not confined to the conductor, but dispersed pretty widely in the circumjacent space. From the preceding facts we may likewise collect that this conflict performs circles; for without this condition, it seems impossible that the one part of the uniting wire, when placed below the magnetic pole, should drive it towards the east, and when placed above it towards the west; for it is the nature of a circle that the motions in opposite parts should have an opposite direction.

    Oersted’s Experiments on Electromagnetism

    (a) Magnetic needle aligns itself tangential to a circular magnetic field centred on the wire. (b) Effect of galvanism on the needle is not seen because the needle already points north. (c) Oersted’s crucial discovery when wire is placed in north-south direction. In this case, the needle deflects to east-west. (d) Changing the current direction changes deflection by 180 degrees. (e) Placing the wire below the needle changes deflection by 180 degrees.

    Oersted summarized the results in a short Latin paper of July 1820. The paper was dispatched to many leading scientific institutions across Europe, where it immediately triggered feverish experimental work. Ampère came to know of Oersted’s work only in September that year. Within a week, he had verified many of Oersted’s experiments and established his own current laws, which are now fundamental to the science of electrodynamics. By September 1820, Johann Schweigger in Germany had created a device to multiply the magnetic effect by looping the wire many times over. With its increased sensitivity, it could be used for accurate quantitative measurements of current flow in a circuit.¹⁵ Ampère would later name this device the galvanometer, heralding the start of instrumentation.¹⁶

    Although Newton’s theory was challenged and the circular nature of magnetism was established, there was no satisfactory scientific explanation. Experience is one thing, but without explanation it could not stand up to critical scrutiny. It would take a lot more than observation and experimental results to topple Newton’s central force theory. A proper scientific explanation had to wait for two of the greatest names in the field—Michael Faraday and James Clerk Maxwell (Chapter 11). In the decades that followed, central force theory was limited to Newton’s gravitational laws; but in the 1910s, a German-born physicist named Albert Einstein would rewrite the books. It is therefore fair to say that today’s knowledge is only a partial truth, its validity not absolute but relative to the limitations of our current understanding.

    Oersted’s discovery must not be dismissed as either trivial or obvious. For nearly two hundred years, the link between electricity and magnetism had been suspected but no one had managed to prove it. In the years 1776 and 1777, the Bavarian Academy of Sciences announced a prize to anyone who could find the missing link. Needless to say, there were no winners. For twenty years since the birth of the Voltaic pile, no scientist conceived of a suitable experiment though galvanic circuits had been in regular use in laboratories all across Europe and America. Given this scientific landscape, Oersted’s discovery is remarkable. If it was accidental, accident had a minor role to play when considered against a backdrop of numerous experimental failures and evolving thought processes that led to the discovery. A few decades later, microbiologist Louis Pasteur made a philosophical remark that seems relevant to Oersted’s discovery: Chance favours the prepared mind.

    Electromagnetism is one of the fundamental principles of modern science. The twenty-first-century world as we have built it would be quite different without an understanding of this principle. Steam turbines that convert heat into electricity rely on electromagnetism. A kitchen blender would not work without it. A computer hard disk would not exist. Satellite TV and mobile phones that depend on wireless transmission exploit it. The earliest forms of communication by electricity would not have been conceived. It is to this form of communication that we now turn our attention.

    

    One remarkable property of electricity that everyone noticed from the early years was that it was instantaneous. If there was a limit to the speed at which it travelled, it was imperceptible. In 1746, Abbe Nollet arranged a group of two hundred Carthusian monks in a mile-long circuit for an unprecedented experiment. The monks, clad in their simple robes and tunics, held hands and an iron wire as if in fervent communal prayer, quite unsure what was going to happen next. They had unwittingly consented to stand in as eighteenth-century lab mice. When Nollet discharged a Leyden jar through the circuit, the shock caused the monks to jump up and shriek at the same time. It was no doubt an experiment designed to impress spectators.¹⁷ If current flowed thus without delay, it seemed possible to use it to pass information quickly from one place to another.

    The idea of transmitting information using electricity was a conceptual leap in scientific thinking. Communication need no longer be linked to transportation. The days of carrier pigeons and horseback messengers were numbered. However, the separation of communication from transportation was not as novel as it sounds. Smoke signals had been used in primitive communities. For centuries, people had been using blazing beacons and flickering lanterns from hilltops and ridges to communicate, particularly at night. The idea of using light to communicate was applied to reflecting mirrors and waving flags. Both the British and the French fleets made extensive use of flag signalling at the 1805 Battle of Trafalgar. Well into the twentieth century, navies around the world continued to use flag signalling. Even today, software engineers use the term flag to signal control operations from one code block to another. The problem with these optical means of communication was that they required line of sight. Any obstruction in the path had to be cleared away or alternative sites located. Fog or inclement weather resulted in a communication blackout. But there was an alternative.

    Sound too had been a means of distant communication for centuries. Using bugles, bells, and drums, generals had given orders to their lieutenants fighting on battlefronts. From the eighteenth century onwards, Europeans exploring sub-Saharan Africa discovered an ingenious system of African talking drums. These drums were capable of conveying long phrases and entire sentences in rich rhythmic tones and overtones. Their inspiration had been human speech itself, from which they had evolved an entire phonetic vocabulary for the drums. Europeans took a long time to understand the advanced system of the Africans. An authoritative insight was published only in 1949 by John Carrington.¹⁸

    Electrical communication—the idea occurred to many, some using Leyden jar discharges and others using the continuous current of Volta. In Germany, Samuel von Sömmering used the still-new science of electrochemistry in 1809 to convey information at a short distance. In his apparatus, galvanic electricity was passed through water to create bubbles. The bubbles were trapped in a column, whose volume was measured and interpreted according to conventions agreed between the sender and the receiver. Following the tradition of Nollet, Francis Ronalds managed to construct a message transmission system in 1816 over a simulated 8-mile distance. Two wooden frames supported a cage of wires in his back garden in a London suburb.¹⁹ The sender and the receiver ends had synchronized dials using a clockwork mechanism. When a message needed to be sent, an electrically charged wire common to both parties was grounded. This discharged a pair of separated pith balls attached to the ends of silk threads suspended from the wire. With the loss of charge, the balls came together. At that exact moment, an operator at the receiver read the marking indicated on the dial. The idea of using static discharges and pith balls had been suggested anonymously in 1753.²⁰

    The Ronalds system was slow and depended on the two dials being always in synchronization. With the benefit of hindsight, we today realize that any system based on loosely synchronized clock dials was doomed to fail. Synchronization cannot be assumed a priori. In almost all modern communication systems, both parties first establish synchronization before commencing message exchanges. They are also required by design to maintain synchronization at all times, tracking and correcting drift as often as possible. Thus, synchronization precedes communication, and communication subsequently assists in maintaining synchronization. In some cases, an external common source of synchronization may be employed. Such is the case with mobile cellular systems that use Global Positioning System (GPS) satellites as the common clock reference.

    From such early developments was born electric telegraphy. By 1820, Oersted had, in a sense, traced the separate lineages of electricity, galvanism, and magnetism to discover electromagnetism at their root. Within a year, Ampère proposed that it might be possible to transmit information at a distance using deflections of a magnetic needle. He was of course perfectly correct but as every engineer knows, there is a vast difference between scientific theory and reduction to practice. The earliest known implementation of Ampère’s suggestion comes to us sixteen years later. Called Alexander’s telegraph, it was exhibited in Edinburgh in 1837. At best, it was a working prototype. Thirty magnetic needles were arranged in a grid of 5 x 6, so that an operator could signal all letters of the English alphabet plus four punctuation marks. The entire setup required thirty pairs of conducting wires. Using that many wires simply wasn’t practical. Meanwhile, a completely different form of telegraphy had already ventured beyond the confines of the laboratory and into the open world.

    For some time, Frenchman Claude Chappe had been intrigued by the possibilities of communication at a distance. But he was a clergyman and did not take the decisive step of putting his ideas into practice. Then came the French Revolution with the storming of the Bastille in 1789. Chappe lost his clerical position and returned to his hometown of Brûlon. There, with apparently nothing else to do, he turned his attention to communication. With the assistance of his four brothers, he conceived of a primitive system using a couple of synchronized pendulum clocks with identical dials. Messages were exchanged by nothing more inventive than banging on casseroles. The limitation of using sound was obvious: even with casseroles of the best acoustic properties, they could not communicate over more than 400 metres. Experiments with static electricity failed from the start due to lack of proper insulators for the wires. The problem with static electricity had always been unreliability due to leakages. Therefore, the Chappe brothers fell back upon the ancient method of using optical signals. Though optical signalling had been in use for centuries, the Chappe brothers brought an essential improvement.

    It sometimes happens that just when a system is thought to have attained maturity, a new technology enters the scene and propels the system to new heights. For optical communication, this technology was the invention of the telescope. Dutchman Hans Lippershey invented it in 1608 and Galileo Galilei improved it the following year. By the time of the French Revolution, the telescope had seen numerous innovations—some using only refracting lenses, others only reflecting mirrors, and yet others a combination of the two in various configurations. Telescopes had become more precise, portable, and affordable. The popular one then was the achromatic refracting telescope patented by John Dollond in 1758. Beyond the possibility of increasing viewing distance by using a Dollond telescope, Chappe believed that he could find much better ways of sending messages from one place to another.

    The fact was that in ancient times messages were agreed upon in advance between communicating parties. By mutual agreement, if someone waved a red flag, it might mean danger. A white flag might mean surrender. But what if someone wanted to say something more complex: It is dangerous to attack now. Wait till after dusk. Unless this message had been agreed upon earlier, there was no way to signal arbitrary messages. In other words, one had to talk within the constraint of possible messages that both parties had worked out in advance. Clearly, this was a big constraint. One either had to have a large number of possible messages and a suitable means of signalling all those messages, or often resort to approximations by selecting a message that compromised only a little on the meaning. Aeneas (350 BC) and Greek historian Polybius (150 BC) had written about such signalling systems of fixed message sets.

    It was this grand problem Claude Chappe was intending to solve. He wanted to convey any arbitrary message without prior agreement between the sender and the receiver. Using the same pendulum clocks, but this time replacing the clumsy casseroles with telescopes and rotating panels, the Chappe brothers demonstrated a working prototype to municipal officers on March 2, 1791. The world’s first ever telegraph message was sent from Brûlon to Parcé, a distance of 12 miles, a phrase of nine words communicated in just four minutes. The message read, "Si vous réussissez vous serez bientôt couvert de gloire (If you succeed you will soon bask in glory)."²¹ History was made that day. Suddenly, the world did not seem as big as it had been assumed. Unfortunately, the details of translating those nine pertinent words into optical signals are not preserved. It was clear, however, that Claude Chappe had solved the translation problem to handle any message. All that was needed now was to refine the method.

    The Chappe brothers did not bask in this initial success, for they wanted glory itself. Claude Chappe recognized many areas of potential improvements. The pendulum clocks had to go. Something simpler in form yet more powerful in capability had to be invented. This had to happen quickly before competition came in. The government, still in its uneasy formative period, had to be convinced of its value.

    In the ensuing months, the Chappe brothers experimented with both the transmission apparatus as well as the method of conversion from message to optical signals. In modern computing terminology, we would call the former hardware and the latter software. More precisely, the method of conversion is what engineers term encoding, which is simply the process of representing the original message in a form that is more suitable for transmission. It is encoding that forms the revolutionary aspect of the Chappe telegraph.

    In 1792, a new apparatus composed of movable shutters was installed in Belleville, northeast of Paris. Before any trials could be done, a French mob, suspecting Royalist involvement, destroyed it. Then in 1793 came the Reign of Terror and King Louis XVI went to the guillotine. In such troubling times, Claude Chappe pulled off another successful trial, this time with a new and improved system. The National Convention, convinced of the power that telegraph would bring in such revolutionary times, sanctioned close to sixty thousand francs for the construction of the first optical telegraph line. But the sanctioned money would not come on time and Claude Chappe had to deal with labour problems and delays.

    Despite all odds, in July 1794, less than a year after the sanction, the line was completed. Optical telegraphy was born. It also went by the names aerial telegraphy or semaphore line. Operation commenced on the first semaphore line, from Paris to Lille, a distance of 120 miles covered by eighteen telegraph stations. Its first, and one may say exclusive, use was for the military, who craved rapid news from the frontiers to the capital. In fact, the news of the recapture of Le Quesnoy from the Austrians and the Prussians travelled from Lille to Paris within two hours of the victory.²² This victory was in some sense a victory for telegraphy. In the decades to come, the French optical telegraph system would grow to become the most advanced and well-managed system in the world, until the coming of electric telegraph. The system extended into neighbouring countries under Napoleonic control, reaching as far as Algeria, Morocco, and Egypt. Some countries followed the French system with variations while others succumbed to the not-invented-here syndrome.

    Murray’s Shutter Telegraphy

    With six shutters, each in two possible positions, an alphabet of 64 messages could be signalled. This was among the earliest binary systems in communication technology.

    In 1795, the English adopted a system of six shutters proposed by Lord George Murray. Each shutter took one of two possible positions—vertical (closed) or horizontal (open). This meant that a total of 2⁶ or 64 distinct messages could be signalled. It is claimed that Murray got the idea from Abraham Edelcrantz’s system of ten shutters, in operation in Sweden since 1794. However, there is no doubt that the shutter system had been partially attempted by Claude Chappe himself in 1792 before the fateful riot. Today we can recognize in these shutters the earliest form of a binary system used for communication, a method of representation built from only two possibilities—open or closed, a zero or a one.
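    The binary character of Murray’s shutters can be made concrete with a short sketch. The following Python snippet is an illustration of the counting argument only, not a historical reconstruction—the function name and the bit ordering are my own choices:

```python
def shutters_to_number(shutters):
    """Read six shutter states (True = open) in a fixed order as a 6-bit number."""
    value = 0
    for state in shutters:
        value = (value << 1) | (1 if state else 0)
    return value

# Six shutters, each open or closed, give 2**6 = 64 distinct signals.
assert 2 ** 6 == 64
print(shutters_to_number([True, False, True, False, False, True]))  # 41
```

    Reading the shutters in an agreed order from an agreed end, each distinct combination of open and closed panels maps to exactly one of the 64 numbers, and each number back to exactly one combination.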

    Murray’s shutters did not catch on because they posed practical difficulties. Chappe was a true engineer of the field, not a scientist of the laboratory. His first-hand experience told him that shutters were not easy to view from a distance. When the sun reflected off the panels at particular angles or if the sky was too bright, operators made mistakes. An open shutter could be wrongly interpreted as being closed, and vice versa. Moreover, the use of only two states, though efficient, slowed transmission due to the inherent difficulties in converting a letter or a number to the correct orientation of each of the shutter panels. What was needed was a method that compromised a little on efficiency but increased the speed of transmission. What was needed was a system that would be easier to operate for the sender. What was needed was a system that would be easier to read through a telescope some six miles away.

    The end result of much deliberation was that Chappe came up with a system that would stand the test of time. His apparatus consisted of a central regulator, to which were hinged two smaller indicators that could be folded or extended as required. The indicators were balanced with counterweights. The regulator itself was mounted on a ladder and linked by a system of brass chains and pulleys to operator controls. One key design feature was that the controls mimicked the arrangement of the regulator and the twin indicators. Thus the operator knew exactly what signal had been written in the air without ever going outside to see what he had written. The word telegraphy itself comes from its Greek roots, tele (far) and graphein (write), and therefore stands for far writing.

    The regulator and indicators were wooden and painted black to stand out against the bright sky. To improve stability in the strong winds common at the installations, which were mostly on hilltops and tall buildings, the regulator and indicators were louvred. This had the added advantage of a lighter mechanism, which was easier to operate. In addition, the regulator and indicators were divided into segments so that the louvres of alternate segments were offset by ninety degrees. This gave better visibility whatever the angle of the sun. The overall design resembled a human communicating with outstretched arms. Chappe argued that this gave much better visibility than shutters. The chance of making a mistake was much lower.

    Chappe Telegraphy

    Positions of regulator and twin indicators signal a particular symbol. As an example, the illustration shows signalling of the French word bonjour.²³

    As for the encoding scheme, everything depended on the angles of the regulator and its indicators. The regulator could be in vertical or horizontal position. Diagonal positions of the regulator were special positions to indicate that the operator was in the process of setting the signal and the receiving operator should wait for it to be ready. The indicators could be vertical, horizontal, or at 45-degree angles. Barring the position when the indicator was aligned to the regulator, this meant each indicator had seven possible positions. Overall, the system was capable of indicating 2 x 7 x 7 = 98 distinct symbols in the air. A system capable of handling only 98 symbols was not very impressive, particularly given the fact that constructing and installing the signalling apparatus through towns and countryside involved a substantial investment. Chappe had to figure out a way to signal hundreds of messages using only 98 symbols.
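    The count of 98 symbols follows directly from multiplying the independent position counts. A small sketch in Python makes this explicit (the position labels are mine, chosen only for counting, and are not Chappe’s notation):

```python
from itertools import product

regulator = ["vertical", "horizontal"]  # 2 message-carrying regulator positions
indicator = list(range(7))              # 7 usable positions per indicator

# Every combination of (regulator, left indicator, right indicator)
# is one distinct symbol in the air.
symbols = list(product(regulator, indicator, indicator))
print(len(symbols))  # 98
```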

    Chappe understood one key aspect of communication—the separation of message symbols and control symbols. Message symbols are those that carry information from sender to receiver. Control symbols are those that facilitate transmission of message symbols. If things go wrong, control symbols help to resume transmission and ensure proper, error-free communication. Control symbols are like watchdogs, ensuring that message symbols are transmitted as intended by the sender. Chappe’s motivation for this was simple. It is impossible to guarantee proper communication with only message symbols. Where human operators are involved, mistakes can happen. Foggy weather can introduce errors. To overcome these limitations, the system ought to have built-in ability to catch errors, recover from mistakes, and confirm proper message reception. Control symbols enable this.

    To be fair, Chappe was not a genius for he had borrowed this idea from one of Newton’s contemporaries, Robert Hooke. Way back in 1684, speaking at a lecture at the Royal Society of London, Hooke put forward the basic principles of a communication system. He stated that any communication system must adopt a set of symbols that can be arbitrary but an efficient representation of the language alphabet. With the regulator and indicators, Chappe had done just this and assigned meaning to each of his new symbols. Hooke also proposed the separation of message and control symbols. With control symbols, the designer could define and implement a necessary set of rules. These rules would enable both parties to coordinate their actions to ensure smooth communication. To clarify, Hooke gave explicit details for such control signalling,²⁴

    I am ready to communicate [synchronization]. I am ready to observe [synchronization]. I shall be ready presently [delay]. I see plainly what you shew [acknowledgement]. Shew the last again [error detection and retransmission]. Not too fast [rate control]. Shew faster [rate control]. Answer me presently [request acknowledgement]. Dixi [end of message]. Make haste to communicate this to the next correspondent [message priority]. I stay for an answer [stop transmission and wait for reply].

    This last aspect, critical for any communication system, is what we today call protocol. A protocol is nothing more than a set of rules and control messages that allow two computers or devices to talk to each other without running into confusion. Protocols are at the heart of all message exchanges on the modern Internet. Therefore, before we pat ourselves on the back for the wonders of the Internet and the ingenuity of our own generation, let us pause to note that the basic principles had already been laid down by Robert Hooke more than three centuries ago.
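    To make the idea concrete, here is a toy sketch in Python—entirely my own framing, not from Hooke or from any historical system—that casts a few of Hooke’s phrases as a modern protocol enumeration with a readiness handshake:

```python
from enum import Enum, auto

class Control(Enum):
    READY_TO_SEND = auto()     # "I am ready to communicate"
    READY_TO_RECEIVE = auto()  # "I am ready to observe"
    ACK = auto()               # "I see plainly what you shew"
    REPEAT_LAST = auto()       # "Shew the last again"
    SLOW_DOWN = auto()         # "Not too fast"
    END_OF_MESSAGE = auto()    # "Dixi"

def handshake(sender_signal, receiver_signal):
    """Message symbols flow only after both sides have signalled readiness."""
    return (sender_signal is Control.READY_TO_SEND
            and receiver_signal is Control.READY_TO_RECEIVE)

print(handshake(Control.READY_TO_SEND, Control.READY_TO_RECEIVE))  # True
```

    The point of the sketch is only that control symbols form a small, fixed vocabulary distinct from the message vocabulary, and that rules over them—here, a readiness check before transmission—are what turn raw signalling into a protocol.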

    Hooke, as great a scientific figure as Newton himself, did not manage to make his proposal into a working system of any commercial success. His proposal lay neglected for more than a hundred years. Whether Chappe discovered Hooke’s proposal or did the proposal discover Chappe, either way, it was a fruitful union from which optical telegraphy was born. Claude Chappe thus became the world’s first communication engineer. The official title given to him at the time was Ingénieur Télégraphe.

    Inspired by the ideas of Hooke, Chappe reserved six symbols for control, leaving only 92 for messages.²⁵ If a single symbol is used per message, the system can handle only 92 messages. However, by combining two symbols per message, the operator could signal 92 x 92 = 8464 possible messages. This can easily be understood within the context of the modern decimal number system. We have only the digits 0-9, but if we use two digits to represent a number, we can represent any number from 0-99. Any higher number can be represented by allowing further extension of the basic digits. The key concept here is the encoding of a larger set of messages or numbers in terms of a smaller set of symbols or digits. Engineers would later exploit this rather simple-looking concept. In fact, it would become the most important tool in their engineering toolkit; but the discovery would come independently from an unexpected quarter (Chapter 2).
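    The two-symbol scheme is exactly base-92 positional notation. A minimal sketch in Python (the function names are mine; the numbers come from the text):

```python
MESSAGE_SYMBOLS = 92  # 98 symbols minus 6 reserved for control

def encode(message_id):
    """Split a message number into a (first, second) symbol pair, base 92."""
    assert 0 <= message_id < MESSAGE_SYMBOLS ** 2
    return divmod(message_id, MESSAGE_SYMBOLS)

def decode(first, second):
    """Recombine a symbol pair into the original message number."""
    return first * MESSAGE_SYMBOLS + second

# Two symbols of 92 values each address 92 * 92 = 8464 messages.
assert MESSAGE_SYMBOLS ** 2 == 8464
assert decode(*encode(8463)) == 8463
```

    Just as two decimal digits address a hundred numbers, two base-92 symbols address 8464 messages; the pattern generalizes to any number of symbols.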

    To handle the enlarged set of possible messages, Chappe prepared a codebook. While the first symbol referred to line number, the second symbol referenced a particular page in the codebook. In addition, the use of a single symbol for a message was retained for more frequent messages. In essence, letters of the French alphabet, numbers, common words, and phrases used single symbols. This optimized the speed of transmission. By 1799, code designers added more messages to enhance the capability of the system. In later years, more control codes were added based on Edelcrantz’s system in Sweden. The fact that the Swedish system had better control codes might have come about out of necessity, due to the higher error rates in using shutters rather than Chappe’s better visual design. In fact, the spirit of Chappe’s design was not unlike the modern sign language used by the deaf.

    Chappe’s optical telegraphy established the first principles based on which future developments would take shape. Speed of communication was revolutionized. A communication network was established with an effective system of relay stations manned by trained operators. Some intermediate stations did more than simply relay the message. They interpreted them with the aid of codebooks and asked for retransmission if errors were detected. In 1833, there were 1000 uniformed operators, 34 inspectors, and 20 directors. By the 1840s, the network had more than 530 relay stations spanning almost 5000 kilometres.²⁶ To oversee operations, special civic bodies were set up. In fact, laws had been passed earlier to enable Chappe to cut down trees and access any property across the land to bring up stations. Most importantly, the need for codes for efficient signalling inspired engineering innovation.

    Despite these successes, optical telegraphy had its problems. Communication by night was not possible. Bad weather increased error rates. In winter months, transmission was prone to errors and only one in three signals arrived correctly. The mechanical nature of the transmitting devices meant breakdowns and repairs. The entire network was costly to operate and was never opened for private messaging. The government used it to relay military commands, lottery results, and financial news. Thus from the outset, it was the public that owned telegraphy in France. It was all too important and powerful to be left in private hands. Countries across Europe adopted the model. This would go on to influence electric telegraphy and even telephony of the future. On the other hand, the US government would fail to see the potential of these new technologies and leave it to private players. To this day, private companies dominate telecommunications in the US. Only towards the end of the twentieth century would European governments embark on the path of privatization.

    If only Oersted’s discovery had happened thirty years earlier, and had scientists focused on tapping this new power for communication as a priority, electric telegraphy might have displaced optical telegraphy early on. But nature had decreed that the secrets of electromagnetism would be more difficult to unravel. In fact, the road to electric telegraphy was fraught with misleading signposts along the way. The first of these came in 1824, when Englishman Peter Barlow famously declared that electric telegraphy could not work for distances more than 200 feet. To transmit electricity over long distances was indeed a first-class problem worthy of the best scientists. Any of the leading European minds could have solved this problem had they only persevered and not been biased by Barlow’s statement. The honour was left to an American.

    Joseph Henry was born of Scottish parents and was apprenticed to a watchmaker. At the age of 22, he got interested in science and enrolled at the Albany Academy in New York. His diligence and commitment to science paid off when he eventually became a professor at the Academy. The work of European scientists kindled his interest in electromagnetism. In 1827, he began research in that field in earnest. In 1825, William Sturgeon had invented a powerful electromagnet that could lift weights. He had done this by first bending a soft iron bar into the shape of a horseshoe and winding conducting wire around the bar. His experiments had shown that this increased the magnetic strength at the poles.

    Henry combined Sturgeon’s work and Schweigger’s multiplier to increase significantly the strength of the magnet. He first insulated the wire with silk, but rather than loosely wind the wire, he tightly covered the iron core pole to pole with many turns. Henry published this work in 1828 but this was just a start. After performing numerous experiments, his seminal work on the subject appeared in 1831. In those days, the terms current and voltage had not yet been established. Respectively, the equivalent terms of the day were quantity and intensity. Henry’s research led him to conceive of two types of electromagnets—quantity magnet and intensity magnet.

    What Henry discovered was that when he increased the number of turns, magnetic strength increased as expected, but sometimes, when he used a different cell, the result was reversed. He surmised that the length of the winding must have some effect and decided to connect in parallel separate smaller windings, collectively still covering the entire horseshoe. He argued that this would give the current multiple paths of passage. The result was more dramatic than he had expected. While a single small winding covering a ninth of the core could lift only 7 pounds, nine such windings in parallel could lift 650 pounds of weight.²⁷ This astonishing result was accomplished by using only a small galvanic cell. He also discovered that if the small windings were connected in series, in a sense similar to a single winding covering the entire core, more lifting power could be obtained by adding more cells to the galvanic battery.

    The principle Henry had discovered is what we today call impedance matching. In simple words, the power transferred to a load is maximized when its resistance equals the battery’s internal resistance. In other words, the resistances of load and battery should be matched.²⁸ But in those early days, no one knew anything about resistance let alone the intricacies of matchmaking. George Ohm had implicitly defined resistance and published his famous law only a few years earlier. His law, first published in German, was at the time neither well known nor widely accepted. An English translation of Ohm’s work appeared only in 1838.²⁹ Henry had independently arrived at the idea of resistance but had not put it through quantitative analysis. Nonetheless, Henry had established the key fact that wires connected in parallel result in a lower resistance than the same connected in series.
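    The matching principle can be checked numerically. In the sketch below (the resistance and voltage values are illustrative, not Henry’s), the power delivered to a load peaks when the load resistance equals the source’s internal resistance:

```python
def load_power(emf, r_internal, r_load):
    """Power dissipated in the load of a simple series circuit."""
    current = emf / (r_internal + r_load)
    return current ** 2 * r_load

emf, r_int = 1.5, 2.0  # a small cell: volts and internal ohms (illustrative)

# Sweep candidate load resistances; the matched load delivers the most power.
best_power, best_load = max(
    (load_power(emf, r_int, r), r) for r in [0.5, 1.0, 2.0, 4.0, 8.0]
)
print(best_load)  # 2.0
```

    A long telegraph line is a high-resistance load, so it pairs best with a high-internal-resistance intensity battery (many cells in series); a short, fat parallel winding is a low-resistance load, so a single cell suffices—which is the pattern Henry found experimentally.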

    From these remarkable studies, Henry drew important conclusions. An intensity battery, one supplying large voltage using many cells in series, can be used to drive an intensity magnet miles away. It did not matter if this circuit had a higher resistance due to the long wire. Cells in series resulted in higher battery resistance and hence were matched to the circuit. This intensity magnet, by opening or closing a secondary circuit, can trigger a quantity magnet. The quantity magnet needs only a small cell, but its magnetic strength is derived from tightly wound coils connected in parallel. Thus, mechanical effects at a distance can be obtained by a combination of these two types of magnet. He demonstrated this to his students by passing current through a mile of wire and thereby ringing a bell or crashing heavy weights. The basic principle was that an electrical circuit could affect, via electromagnetic action, a neighbouring circuit. Henry’s device that utilized this phenomenon was later named the relay. In the early twentieth century, the relay would play a key role in the workings of the world’s first computers.

    Henry’s Experimental Setup Using a Quantity Magnet

    In this setup, Henry shows how a quantity magnet (A) with many parallel windings is strong enough to lift heavy weights even when the power source is a single cell (B & C). Source: (Henry 1831, p. 408).

    The relay was independently invented by William Cooke and Charles Wheatstone in England. Unlike Henry, who remained a scientist all his life, Cooke was an enterprising businessman. Although by profession a maker of anatomical models, his interest in electric telegraphy came about after being introduced to an early device of P. L. Schilling, a Russian diplomat in Germany. Schilling had used magnetic needles, as many as six, to signal messages. In later years, he reduced his system to use a single needle and came pretty close to something similar to Morse code. His premature death in 1837 forestalled further development.

    Teaming up with Wheatstone, then a professor at King’s College, London, Cooke invented a form of telegraph that used five magnetic needles. Since a needle could be made to turn left or right, one is tempted to think that his telegraph could signal 2⁵ or 32 symbols. But the design was such that some orientations of the needles were invalid. Though the design could accommodate only 20 symbols, by selecting a pair of needles per symbol, the arrangement made it easier to read the symbols directly from the receiving instrument’s dashboard. The fact that letters C, J, Q, U,
