
Invention of Dr.A.B.Rajib Hazarika's Devices

INVENTIONS OF Dr. A.B.RAJIB


HAZARIKA’S DEVICES

By Dr.A.B.Rajib Hazarika, AES


CONTENTS

INVENTIONS OF Dr. A.B.RAJIB HAZARIKA ON FUTURE DEVICES

1. PREFACE ABOUT AUTHOR


2. INTRODUCTION
3. HISTORY AND BACKGROUND
3.1 TOKAMAK AND TYPES OF TOKAMAKS
3.2 TOKAMAK WORLD RECORDS
3.3 TABLETOP NUCLEAR FUSION DEVICE
4. INVENTIONS NAME
4.1. LASER AMPLIFIED QUIESCENT UNIQUE TECHNOLOGY (LAQUIT)
4.2. DOUBLE TOKAMAK COLLIDER (DTC)
4.3. MAGNETIC CONFINEMENT TOKAMAK COLLIDER (MCTC) HUB
4.4. DUO TRIAD TOKAMAK COLLIDER (DTTC) HUB & VASIMR DANISHA SPACE ROCKET
4.5. DIFFUSION ASSOCIATED NEOCLASSICAL INDIGENOUS SYSTEM OF HALL ASSEMBLY (DANISHA)
4.6. FUZZY DIFFERENTIAL INCLUSION (FDI) SIMULATION CODE
4.7. PARABOLIC COORDINATE STUDY FOR ITER
4.8. GREEN FUNCTION SOLUTION
4.9. R-T INSTABILITY DUE TO LOW FREQUENCY
5. APPLICATIONS
5.1. AUTOMOBILES
5.2. MICROWAVE
5.3. FM RADIO
5.4. TV AND RADIO TRANSMISSION
5.5. CAMERAS AND IMAGING SYSTEMS FOR SATELLITES
5.6. HYBRID FUSION ENERGY GENERATION
5.7. INKJET PRINTERS
5.8. MOBILE PHONES
5.9. TOUCH SCREEN TECHNOLOGY
5.10. LCDs AND OLEDs
5.11. ROCKETS FOR SPACECRAFT TO MOON, MARS AND JUPITER
5.12. COMPUTER CHIPS
5.13. PYROELECTRIC APPLIANCES
5.14. COMPUTER SCREEN
5.15. HI- FI SOUND SYSTEM
5.16. HYPERPLANES WITH THE SPEED OF MACH 7-8
5.17. SPACE TELECOMMUNICATION
5.18. SUBMARINES
5.19. NANOTECHNOLOGY
5.20. MISSILES
5.21. THERMONUCLEAR TESTING DEVICE
5.22. THEORY FOR DOUBLE TRIOS STAR: Saiph star
5.23. CYCLONE PATTERN STUDY
5.24. DRABRH GRAY CODE
6. PATENTS FOR DESIGN AND INTERNATIONAL APPLICATION
7. BIBLIOGRAPHY
8. SUBJECT INDEX



Dr.A.B.Rajib Hazarika, A.E.S.

MSc, PhD, MIAMP (Germany), FRAS (Lond.), MWASET, MFFS (USA), MIBC (UK), MNPSS (USA)

Assistant Professor, Dept. of Mathematics, Diphu Govt. College, Diphu, Karbi Anglong, Assam, India, Pin 782462. M: 9435166881

Res: "Anjena Manzil", Kadomtola, Modhupur, P.O. Modhupur, Dist: Nagaon, Assam, India, Pin 782001. Ph: 03672-256327
************************************************************************



PREFACE ABOUT THE AUTHOR

As available on website: http://wpedia.goo.ne.jp/enwiki/User:Drabrh/Dr.A.B.Rajib_Hazarika


Dr.A.B.Rajib Hazarika

[Photo: Dr.A.B.Rajib Hazarika with Laquit (son) and Danisha (daughter)]

Born: Azad Bin Rajib Hazarika, July 2, 1970 (age 40), Jammu, Jammu and Kashmir, India
Residence: Nagaon, Assam, India
Nationality: Indian
Ethnicity: Assamese Muslim
Citizenship: India
Education: PhD, PDF, FRAS
Alma mater: University of Jodhpur (Jai Narayan Vyas University); Institute of Advanced Study in Science & Technology (http://www.iasst.in/); Kendriya Vidyalaya [1]; Poona College (http://www.akipoonacollege.com/)
Occupation: Assistant Professor (Lecturer), Diphu Govt. College, Diphu, Assam, India
Years active: 2004 onwards
Employer: Diphu Government College, Government of Assam, Assam Education Service
Known for: Fusion, Astronomy; Lecturer, Assistant Professor, Mathematician, Academician
Home town: Nagaon, Assam, India
Salary: Rs 40000 per month
Height: 6 feet 2 inches
Weight: 100 kg
Title: Doctorate, Dr., FRAS (London), Assam Education Service (AES)
Board member of: Scientific and Technical Committee & Editorial Review Board of Natural and Applied Sciences, World Academy of Science, Engineering & Technology (http://www.waset.org/NaturalandAppliedSciences.php?page=45)
Religion: Sunni Islam
Spouse: Helmin Begum Hazarika
Children: Laquit Ali Hazarika (son), Danisha Begum Hazarika (daughter)
Parents: Rosmat Ali Hazarika @ Rostam Ali Hazarika @ Roufat Ali Hazarika, and Anjena Begum Hazarika
Call-sign: Drabrh or Raja
Website:
http://www.facebook.com/Drabrajib
http://in.linkedin.com/pub/dr-a-b-rajib-hazarika/25/506/549
http://en.wikipedia.org/wiki/Special:Contributions/Drabrh
http://www.diphugovtcollege.org/
http://www.karbianglong.nic.in/diphugovtcollege.org/teaching.html


Dr.A.B.Rajib Hazarika, PhD, FRAS, AES (born July 2, 1970, in Jammu, Jammu and Kashmir, India) is Assistant Professor (Lecturer) at Diphu Government College, Diphu, in the Karbi Anglong district, run by the Government of Assam [2], [3]. He is also a Fellow of the Royal Astronomical Society [4], London, and a member of the International Association of Mathematical Physics, the World Academy of Science, Engineering & Technology, the Focus Fusion Society, Dense Plasma Focus, the Plasma Science Society of India, the International Biographical Centre, the Assam Science Society, the Assam Academy of Mathematics, the International Atomic Energy Agency, the Nuclear and Plasma Society, the Society of Industrial and Applied Mathematics, the German Academy of Mathematics and Mechanics, the Fusion Science & Technology Society, the Indian National Science Academy, the Indian Science Congress Association, the Advisory Committee of Mathematical Education, and the Royal Society.

Contents

• 1 Early life
  o 1.1 Early career
    - 1.1.1 Currently working
• 2 Career
• 3 Research
• 4 Patent & Innovation
• 5 Research Guidance
• 6 Personal life
• 7 Quotes
• 8 Awards and recognition
• 9 References
• 10 External links

Early life

Dr.A.B.Rajib Hazarika was born into the famous Hazarika family, a prominent family belonging to Dhing's
wealthy Muslim Assamese community of Nagaon district. He was born to Anjena Begum Hazarika and
Rusmat Ali Hazarika. He is eldest of two childrens of his parents younger one is a Shamim Ara
Rahman(nee Hazarika)daughter .

Early career

Dr.A.B.Rajib Hazarika completed his PhD degree in Mathematics at J N Vyas University, Jodhpur, in 1995, with specialization in plasma instability; the thesis was adjudged "best thesis" by the Association of Indian Universities in 1998. He then held a Post-Doctoral Fellowship at the Institute of Advanced Study in Science & Technology [5] in Guwahati, Assam, in 1998, as Research Associate in the theory group of the Plasma Physics Division, studying the sheath phenomenon. He worked as a part-time lecturer at Nowgong College, Assam, before joining his present position at Diphu Government College, Diphu, in the Karbi Anglong district [6], [7]. He is a member of Wikipedia [8], [9].
He is a Fellow of the Royal Astronomical Society [10], a member of the International Association of Mathematical Physics [11], the World Academy of Science, Engineering & Technology [12], [13], the Plasma Science Society of India [14], [15], the Focus Fusion Society forum [16], Dense Plasma Focus [17], the Assam Science Society [18], and the Assam Academy of Mathematics [19].

Currently working


He joined Diphu Government College [20] in July 2004 as Lecturer in Mathematics (gazetted officer) through the Assam Public Service Commission [21] in the Assam Education Service [22], AES-I [23], a post now redesignated as Assistant Professor.

Career

In May 1993, Dr.A.B.Rajib Hazarika was awarded a Junior Research Fellowship of the University Grants Commission, having cleared the National Eligibility Test with eligibility for lecturership, Govt. of India, and worked as JRF (UGC, NET) in the Department of Mathematics and Statistics of J N Vyas University, Jodhpur. In May 1995 he received a Senior Research Fellowship (UGC, NET) and continued research until the completion of his PhD on 27 December 1995. From 1993 onwards he taught at Kamala Nehru College for Women, Jodhpur, and in the Faculty of Science of J N Vyas University, Jodhpur, up to the completion of the PhD. In May 1998 he joined the Plasma Physics Division of the Institute of Advanced Study in Science & Technology, Guwahati, as Research Associate (PDF) in the theory group, to study the sheath phenomena under the National Fusion Programme [24] of the Govt. of India. He then joined Nowgong College as a part-time lecturer, after which, in July 2004, he took up his present position of Lecturer at Diphu Government College, since redesignated as Assistant Professor.

Research

During the PhD (publications):
http://www.iopscience.iop.org/1402-4896/51/6/012/pdf/physcr_51_6_012.pdf
http://www.iopsciences.iop.org/1402-4896/53/1/011/pdf/1402-4896_53_1_011.pdf
http://www.niscair.res.in/sciencecommunication/abstractingjournals/isa_1jul08.asp
http://adsabs.harvard.edu/abs/1996PhyS..53...578
During the PDF the research was based on astronomy, astrophysics and geophysics, on plasma instability, with the thesis titled "Some Problems of instabilities in partially ionized and fully ionized plasmas", which in 1998 was assessed as the best thesis of the year by the Association of Indian Universities, New Delhi. His current interest lies in astronomy, astrophysics, geophysics, fusion plasma, the innovation and design of fusion devices, simulation codes, and theoretical mathematical modeling. He is known for his theoretical research work on gravitational instability and gravitational collapse, with M = 2^(3/2) M_sun as a new formula for the Chandrasekhar limit, now known as the Bhatia-Hazarika limit, at which rotating neutron stars and pulsars are formed. When the mass of the star is more than this limit, a neutron star shrinks due to gravitational collapse up to a point size in space. It is known that when an old star of mass more than about three times that of the sun passes this limit of size, it passes the Schwarzschild radius and is thereafter a black hole, from which we can receive no more information, as its gravitational field is too intense to permit anything, even photons, to escape. Research at Diphu Govt. College:
http://en.wikipedia.org/wiki/Special:Contributions/Drabrh/File:Drabrhdouble_trios_saiph_star01.pdf
http://en.wikipedia.org/wiki/File:Drabrh_bayer_rti.pdf
http://en.wikipedia.org/wiki/File:Columb_drabrh.pdf
http://en.wikipedia.org/wiki/File:Drabrh_double_trios.pdf
http://en.wikipedia.org/wiki/File:Drabrhiterparabolic2007.pdf
http://en.wikipedia.org/wiki/File:Drabrh_mctc_feedbackloop.pdf
http://en.wikipedia.org/wiki/File:Drabrh_tasso_07.pdf
http://en.wikipedia.org/wiki/File:Abstracts.pdf?page=2

Patent & Innovation

He has applied for a patent with the US Patent and Trademarks Office and has innovated three future fusion devices: the Double Tokamak Collider (DTC), the Magnetic Confinement Tokamak Collider (MCTC) hub, and the Duo Triad Tokamak Collider (DTTC) hub. A Hall thruster, the Diffusion Associated Neoclassical Indigenous System of Hall Assembly (DANISHA), has been designed and filed as international application No. PCT/IB2009/008024 with the World Intellectual Property Organisation [25]. He has also innovated a new simulation code, the Fuzzy Differential Inclusion Code, in 2003 for the fusion process. [26], [27]

Research Guidance

Research guidance has been given to two students in Mathematics for the MPhil degree.

Personal life

Dr.A.B.Rajib Hazarika has a metallic scarlet-red Tata Indigo CS, of Tata Motors make, and loves to drive it himself.

Quotes

• "Fakir(saint) and lakir(line) stops at nothing but at destination"


• "Expert criticizes the wrong but demonstrates the right thing"
• “Intellectuals are measured by their brain not by their age and experience”
• “Two type of persons are happy in life one who knows everything another who doesn’t know
anything”
• “Implosion in device to prove every notion wrong for fusion”
• “Meditation gives fakir(saint) long life and fusion devices the long lasting confinement”

Awards and recognition

Dr.A.B.Rajib Hazarika has received:

• Junior Research Fellowship, Government of India
• Senior Research Fellowship, Government of India
• Research Associateship, DST, Government of India
• Fellow of the Royal Astronomical Society [28]
• Member of the Advisory Committee of Mathematical Education, Royal Society, London
• Member of the Scientific and Technical Committee & Editorial Review Board on Natural and Applied Sciences of the World Academy of Science, Engineering & Technology [29]
• Leading Professional of the World 2010, as a noted and eminent professional, from the International Biographical Centre, Cambridge

References

1. ^ http://www.kvafsdigaru.org/
Poona College of Arts, Science & Commerce

External links

Wikimedia Commons has media related to: Drabrh/Dr.A.B.Rajib Hazarika

• Dr.A.B.Rajib Hazarika's profile on the LinkedIn website






Inventions of Dr.A.B.Rajib Hazarika

Chapter-1: Introduction of Plasma & Plasma Instabilities.


1.1 INTRODUCTION
Research workers in physics are familiar with the phenomena of the discharge of electricity through rarefied gases, and the study of the properties of ionised gases has proved to be a most fruitful branch of physics. The study, which was initiated at the beginning of the twentieth century, has paved the way for the development of modern physics, which has given us insight into the nature and structure of atoms, molecules and nuclei. An atom or molecule becomes ionised when it loses one or more of its electrons, and this ionised atom or molecule has properties which are not observed in the case of neutral atoms or molecules. Matter exists in three states of aggregation, namely solid, liquid and gas. In a solid the important factor that holds the atoms together is called the binding energy. If external energy, say in the form of heat, is applied to a solid so that the energy imparted per atom is greater than the binding energy, the solid is converted to a liquid. Further addition of heat will convert the liquid to a gas, and if more heat is applied to a gas so that the outermost electrons are detached from the gas atoms, the gas is said to have become ionized. The energy necessary to remove an electron from an atom is known as the ionization potential. There are various other sources besides heat, such as ultraviolet light, X-rays, the radiations from radioactive sources, flames, chemical action and so on, which can ionize a gas. When the gas becomes ionized, its dynamical behaviour is influenced by external electric and magnetic fields; the separation of charges within the ionised gas brings in new forces, and its properties become different from those of neutral atoms and molecules. Such a state of matter has been designated as plasma, a term first used by Langmuir in 1928. In analogy with the three states, namely solid, liquid and gas, plasma has been called the fourth state of matter. With the opening up of a new horizon in the field of atomic physics with the discovery of the electron, and the other epoch-making discoveries which followed in succession, the attention of physicists was diverted away from the study of the nature of the plasma itself. A group of workers, such as Townsend in England, Tonks and Langmuir in America, and Steenbeck in Germany, however, continued their work in this line, especially on the breakdown of gases and other related problems in arc and spark plasmas.

Though plasma is not the normal state of matter on our planet, in the sun and in the stars it is the only form in which matter can exist, owing to the extremely high temperature prevalent there. The existence of the ionosphere, at an approximate height of fifty kilometres above the surface of the earth, which has been formed by the ionization of gas molecules in the upper atmosphere by the ultraviolet radiation from the sun, and which is responsible for the propagation of radio waves round the earth, is a typical example of a plasma state. The aurora provides another example. The extensive study which has been undertaken to understand the formation and structure of the different ionospheric layers and the nature of their seasonal and diurnal variation has considerably helped in elucidating the nature of the upper atmosphere. The discovery of the high-energy radiation belts in the upper atmosphere, known as the Van Allen belts, is a typical example of a plasma which consists of electrons and positive ions trapped in the earth's magnetic field. In this case the charged-particle density is 10^3 per cc, kT_e is 1 keV, kT_i is 1 eV and the magnetic field is 500 x 10^-5 gauss, where T_e and T_i are the electron and ion temperatures respectively. With the development of more refined techniques in the field of experimental physics, especially after the Second World War, the study of the properties of ionised gases received a new impetus, and a new branch of physics called Gaseous Electronics has developed and is providing useful basic data for the study of the physics of the plasma state. In 1929, F. Houtermans and R. Atkinson suggested that the source of energy in the sun and in the stars is the thermonuclear or fusion reaction among the nuclei of the light elements, particularly through processes in which hydrogen is converted to helium. The question of energy production was solved successfully by Bethe (1939), Bethe and Critchfield (1938) and von Weizsäcker (1938). The sequence of events leading to energy production has been visualized as follows. Owing to the extremely high temperature obtaining in the sun, two protons combine with the formation of a deuteron according to the following equation, in which beta decay takes place:

H1 + H1 = H2 + β+ + neutrino

As the mean half-life for this step is about 10 seconds, the deuteron will react within a very short time with a proton:

H2 + H1 = He3 + γ radiation

Dr.A.B.Rajib Hazarika,PhD,FRAS,AES
Invention of Dr.A.B.Rajib Hazarika’s Devices 11

The He3 thus formed will react with another He3 nucleus according to the following reaction:

He3 + He3 = He4 + 2H1

The energy released in the process, in which a He4 nucleus is formed from the proton reactions, is about 28 MeV.

This theory of energy production which has been successful in explaining the emission of energy
from the sun can be utilized, it was thought, for the generation of energy in the earth by simulating the
conditions that are present in the sun and the stars. The immense possibility of this form of energy
production has stimulated a great interest in research in plasma physics in recent years.

NUCLEAR FUSION

There are two types of nuclear reaction in which large amounts of energy may be liberated. In both types, the rest mass of the products is less than the original rest mass. The fission of uranium, already described, is an example of one type. The other involves the combination of two light nuclei to form a nucleus that is more complex but whose rest mass is less than the sum of the rest masses of the original nuclei. Examples of such energy-liberating reactions are as follows:
1H1 + 1H1 ---------------------> 1H2 + 1e0
1H2 + 1H1 ---------------------> 2He3 + γ radiation
2He3 + 2He3 ------------------> 2He4 + 1H1 + 1H1
In the first, two protons combine to form a deuteron and a positron (a positively charged electron). In the second, a proton and a deuteron unite to form the light isotope of helium. For the third reaction to occur, the first two reactions must occur twice, in which case two nuclei of light helium unite to form ordinary helium. This chain is believed to take place in the interior of the sun and also in many other stars that are known to be composed mainly of hydrogen.
The positrons produced during the first step of the proton-proton chain collide with electrons; annihilation takes place, and their energy is converted into gamma radiation. The net effect of the chain, therefore, is the combination of four hydrogen nuclei into a helium nucleus, plus gamma radiation. The net amount of energy released may be calculated from the mass balance as follows:
Mass of four hydrogen atoms (including electrons) = 4.03132 u
Mass of one helium atom plus two additional electrons = 4.00370 u
Difference in mass = 0.02762 u
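The mass balance can be checked numerically. A minimal sketch, using the two masses as quoted and assuming the standard conversion 1 u = 931.494 MeV/c^2 (a value not given in the text):

```python
# Energy released per proton-proton chain, from the mass balance above.
U_TO_MEV = 931.494  # 1 atomic mass unit in MeV/c^2 (standard value, assumed)

m_four_hydrogen = 4.03132   # mass of four hydrogen atoms (incl. electrons), u
m_helium_plus_2e = 4.00370  # mass of one helium atom plus two extra electrons, u

delta_m = m_four_hydrogen - m_helium_plus_2e   # mass defect, u
energy_mev = delta_m * U_TO_MEV                # energy released, MeV

print(f"mass defect = {delta_m:.5f} u")           # 0.02762 u
print(f"energy released = {energy_mev:.1f} MeV")  # about 25.7 MeV per chain
```

The subtraction of the two quoted masses gives 0.02762 u, i.e. roughly 25.7 MeV liberated for each helium nucleus formed.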

In the case of the sun, 1 g of its mass contains about 2 x 10^23 protons. Hence, if all of these protons were fused into helium, the energy released would be about 57,000 kWh. If the sun were to continue to radiate at its present rate, it would take about 30 billion years to exhaust its supply of protons.
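The 57,000 kWh figure can be reproduced from the numbers above; a short sketch, taking the energy per chain as about 25.7 MeV (implied by the mass balance rather than stated here):

```python
MEV_TO_J = 1.602e-13             # 1 MeV in joules (standard value, assumed)

protons_per_gram = 2e23          # figure quoted in the text for 1 g of solar material
energy_per_chain_mev = 25.7      # four protons fused -> one helium nucleus

chains = protons_per_gram / 4            # each chain consumes four protons
energy_j = chains * energy_per_chain_mev * MEV_TO_J
energy_kwh = energy_j / 3.6e6            # 1 kWh = 3.6e6 J

print(f"{energy_kwh:.0f} kWh")           # roughly 57,000 kWh, as quoted
```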

For fusion to occur, the two nuclei must come together to within the range of the nuclear force, typically of the order of 2 x 10^-15 m. To do this they must overcome the electrical repulsion of their positive charges; for two protons at this distance the corresponding potential energy is of the order of 1.1 x 10^-13 J or 0.7 MeV, which thus represents the initial kinetic energy the fusing nuclei must have.

Such energies are available at extremely high temperatures. The average translational kinetic energy of a gas molecule at temperature T is 3kT/2, where k is Boltzmann's constant. For this to be equal to 1.1 x 10^-13 J, the temperature must be of the order of 5 x 10^9 K. Of course, not all the nuclei have to have this energy, but this calculation shows that the temperature must be of the order of millions of kelvin if any appreciable fraction of the nuclei are to have enough kinetic energy to surmount the electrical repulsion and achieve fusion.
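Both estimates can be verified numerically. A sketch using standard values of the physical constants (assumed, not from the text):

```python
E_CHARGE = 1.602e-19   # elementary charge, C
K_COULOMB = 8.988e9    # Coulomb constant, N m^2 / C^2
K_BOLTZ = 1.381e-23    # Boltzmann constant, J/K

r = 2e-15              # separation ~ range of the nuclear force, m
U = K_COULOMB * E_CHARGE**2 / r    # Coulomb barrier between two protons, J
T = 2 * U / (3 * K_BOLTZ)          # temperature at which (3/2)kT equals U

print(f"U = {U:.2e} J = {U/1.602e-13:.2f} MeV")  # ~1.2e-13 J, ~0.72 MeV
print(f"T = {T:.1e} K")                          # ~5.6e9 K
```

Both numbers agree with the order-of-magnitude values quoted in the text (1.1 x 10^-13 J or 0.7 MeV, and 5 x 10^9 K).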

Such temperatures occur in stars as a result of gravitational contraction and its associated liberation of gravitational potential energy. When the temperature gets high enough, the reactions occur, more energy is liberated, and the pressure of the resulting radiation prevents further contraction. Only after most of the hydrogen has been converted into helium will further contraction and an accompanying increase of temperature result. Conditions are then suitable for the formation of heavier elements.

Dr.A.B.Rajib Hazarika,PhD,FRAS,AES
Invention of Dr.A.B.Rajib Hazarika’s Devices 12

Temperatures and pressures similar to those in the interior of stars may be achieved on earth at the moment of explosion of a uranium or plutonium fission bomb. If the fission bomb is surrounded by portions of the hydrogen isotopes, these may be caused to combine into helium and liberate still more energy. This combination of uranium and hydrogen is called a "hydrogen bomb".
Intensive efforts are under way in many laboratories to achieve controlled fusion reactions, which potentially represent an enormous new energy resource. In one kind of experiment, a plasma is heated to extremely high temperature by an electrical discharge while being contained by an appropriately shaped magnetic field. In another, pellets of the material to be fused are heated by a high-intensity laser beam. Reactions being studied include the following:

1H2 + 1H2 ----------> 1H3 + 1H1 + 4 MeV --------------> (1)
1H3 + 1H2 ----------> 2He4 + 0n1 + 17.6 MeV ----------> (2)
1H2 + 1H2 ----------> 2He3 + 0n1 + 3.3 MeV -----------> (3)
2He3 + 1H2 ---------> 2He4 + 1H1 + 18.3 MeV ----------> (4)

In the first, two deuterons combine to form tritium and a proton. In the second, the tritium nucleus combines with another deuteron to form helium and a neutron. The result of both of these reactions together is the conversion of three deuterons into a helium-4 nucleus, a proton and a neutron, with the liberation of 21.6 MeV of energy. Reactions (3) and (4) together achieve the same conversion. In a plasma containing deuterium, the two pairs of reactions occur with roughly equal probability. No one has yet succeeded in producing these reactions under controlled conditions in such a way as to yield a net surplus of usable energy, but the practical problems do not appear to be insurmountable.
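The 21.6 MeV bookkeeping for the two routes can be checked directly from the Q-values quoted in reactions (1)-(4):

```python
# Q-values quoted for reactions (1)-(4), in MeV
q = {1: 4.0, 2: 17.6, 3: 3.3, 4: 18.3}

chain_a = q[1] + q[2]   # route (1) then (2): three deuterons -> He-4 + p + n
chain_b = q[3] + q[4]   # route (3) then (4): the same net conversion

print(chain_a, chain_b)  # both routes release 21.6 MeV
```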

The basic conditions for the occurrence of a thermonuclear reaction can be listed as follows:

(a) An extremely high energy in the form of heat has to be supplied to overcome the Coulomb repulsion between the nuclear charges; if the collisions occur with sufficiently high energy there is a reasonable chance of an energy-producing nuclear reaction.
(b) In the sun and in the stars the gravitational force is capable of keeping the interacting nuclei together for a time sufficient to produce a nuclear reaction. On earth, the mass of a fusion reactor will obviously be too small to utilize this principle; consequently, methods have to be devised to confine the plasma and isolate it from the surroundings, so that the high energies of the particles are not dissipated by coming into contact with the cold wall of the vessel in which the plasma has been generated.

In recent years various methods have been suggested for the creation of high temperatures and the confinement of the plasma, which we shall discuss in the text. The goal of the present trend of research is the production of a controlled release of energy by the thermonuclear process.

In plasma physics we are dealing with a large ensemble of interacting particles, and two methods can be adopted for the description of processes occurring in a plasma: we can investigate the motion of a single particle and its interaction with other particles, or the plasma can be regarded as a fluid; and though there is no net charge in a plasma, the electrons and positive ions will be affected by external electric and magnetic fields, and consequently the equations of magnetohydrodynamics can be utilized for the study of plasma dynamics.

To start with, therefore, we shall confine our attention to the methods that are used for the formation of a plasma, namely the methods used for the ionization of a gas. In order to understand the processes occurring in an ionised gas, the basic principles that are operating should be clearly stated and their physical interpretation should be realised. Hence in the present text more emphasis will be laid on the presentation of basic principles, supplemented by mathematical formulation wherever necessary. Both the theoretical and the associated experimental work will be dealt with. The goal of present-day research work in plasma physics is the successful implementation of the process of energy production from the thermonuclear process in a controlled fashion. We shall see how far and to what extent this has been realised and what attempts are being made to reach the goal.

1.2 Basic Concepts about plasma

Dr.A.B.Rajib Hazarika,PhD,FRAS,AES
Invention of Dr.A.B.Rajib Hazarika’s Devices 13

A neutral gas differs from an ionised gas, or plasma, in that an externally applied electric or magnetic field has little effect on the properties of the former, whereas those of the ionised gas are profoundly affected by either an electric or a magnetic field, or by the simultaneous presence of both these fields. In the case of an ordinary gas the molecules are always moving with random velocities, which can be closely represented by the Maxwell-Boltzmann distribution law. In a plasma, instead of the single species of particles in a gas, we have at least three types of particles: electrons, positive ions and neutral molecules. The velocity or energy distribution of electrons in an ionised gas, in the case of molecular gases such as oxygen, hydrogen and nitrogen, can be represented by the Maxwell-Boltzmann distribution law to a fair degree of accuracy, but especially in the case of the rare gases the distribution can be represented by what is known as the Druyvesteyn distribution, which holds if the ionization takes place due to excitation by a d.c. electric field.
In his analysis, Druyvesteyn considered the loss of energy by electrons in elastic impacts with molecules but ignored the effect of inelastic collisions. His result for the energy distribution was

P(ε) = C ε^(1/2) exp( -3δε² / (2λ²e²E²) )    (Druyvesteyn distribution)

where C is a constant, ε the energy, δ = 2m/M, m is the electronic mass, M is the mass of the molecule, λ is the electronic mean free path, e the electronic charge and E is the electric field.

The distribution curves for both the Maxwellian and the Druyvesteyn distributions are shown in the figure.
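The shapes of the two distributions can be compared numerically. A minimal sketch, working in units of the mean electron energy and assuming the standard normalized Druyvesteyn form with the constant 0.548 (a value not given in the text):

```python
import math

def maxwell_shape(eps, mean_eps):
    # Unnormalized Maxwellian energy distribution: sqrt(eps) * exp(-eps/kT),
    # with kT = (2/3) of the mean energy.
    return math.sqrt(eps) * math.exp(-eps / (2.0 * mean_eps / 3.0))

def druyvesteyn_shape(eps, mean_eps):
    # Unnormalized Druyvesteyn distribution: sqrt(eps) * exp(-0.548 (eps/<eps>)^2).
    # The 0.548 constant fixes the mean energy in the standard normalization.
    return math.sqrt(eps) * math.exp(-0.548 * (eps / mean_eps) ** 2)

mean = 1.0  # work in units of the mean electron energy
# Compare the high-energy tails, each relative to its value at the mean energy
# (ratios are independent of the overall normalization):
tail_m = maxwell_shape(4 * mean, mean) / maxwell_shape(mean, mean)
tail_d = druyvesteyn_shape(4 * mean, mean) / druyvesteyn_shape(mean, mean)
print(f"Maxwellian tail: {tail_m:.2e}, Druyvesteyn tail: {tail_d:.2e}")
```

The Druyvesteyn tail falls off as exp(-ε²) rather than exp(-ε), so it is strongly depleted of fast electrons compared with a Maxwellian of the same mean energy.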

1.3 PLASMA PRODUCTION BY LASER

After the development of the laser, a new method for the generation of plasma was achieved. It was first reported by Maker, Terhune and Savage in 1964 that when radiation from a powerful Q-switched laser was brought to a focus in a gas, the gas, which is normally insulating and transparent to radiation at ordinary intensities, was rapidly converted to a highly conducting, self-luminous hot plasma. A typical shock wave accompanies the generation of the plasma, and this wave propagates through the surrounding gas. From Poynting's theorem in electrodynamics it can be shown that if I is the intensity of the laser beam in watts/cm², then the electric field associated with the wave is E = 19.3 I^(1/2), where E is expressed in volts/cm. By analogy with the breakdown of gases, the breakdown with the laser beam and the consequent production of plasma is intimately connected with the electric field. For a laser with an output intensity of 10^11 watt/cm², the associated electric field is calculated to be about 7 x 10^6 V/cm. Typical laser plasmas have been produced by beams from ruby (λ = 0.6943 μm) and neodymium (λ = 1.06 μm) lasers for a flash duration τ = 100 ns. The whole process of plasma formation can be divided into three distinct phases: firstly initiation, secondly formative growth and the onset of breakdown, and thirdly plasma formation with the generation of shock waves and their propagation. It is generally taken that the breakdown of the gas takes place when the electron concentration reaches a value of 10^13 electrons per cc. The gas will remain heated for a substantially longer time than the duration of the laser flash, and then the energy is dissipated by the processes of recombination, radiation and conduction, and local thermodynamic equilibrium will be attained in a time of the order of 10^-5 sec.
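The quoted field strength follows directly from the E = 19.3 √I relation; a quick numerical check:

```python
# Electric field at the focus of a Q-switched laser, from E = 19.3 * sqrt(I)
# (E in volts/cm for I in watts/cm^2), the relation given in the text.
I = 1e11                     # laser output intensity, W/cm^2
E = 19.3 * I ** 0.5          # associated electric field, V/cm
print(f"E = {E:.2e} V/cm")   # ~6.1e6 V/cm, i.e. of the order of the 7e6 V/cm quoted
```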

A distinguishing feature of the production of plasma by laser radiation is the fact that although the quantum of energy hν associated with, for example, a neodymium laser is only 1.17 eV, gases with high ionization and excitation potentials can be readily ionised by the laser light. In this respect laser ionization is different from the process of photo-ionization, where the absorption of a single photon by an atom causes the gas to be ionised. For example, in the case of helium, with an ionization potential of 24.66 eV, it is necessary for the helium atom to absorb as many as twenty-four photons to get ionised when irradiated by a beam from a neodymium laser. This process is known as "multiphoton absorption"; if Vi is the ionization potential of the gas, then an atom will require Vi/hν quanta to ionise it.

1.4 RADIATION FROM PLASMA

There are various processes by which energy is lost from a plasma, and we shall consider here those by which energy is lost in the form of radiation. The processes are as follows.

(i) Radiation emitted by excited atoms and ions:-

If the ions in a plasma are not completely stripped, an electron attached to such an ion can absorb energy from a freely moving electron and thus be raised to an excited state. When the electron returns to the lower quantum level, the excitation energy is emitted in the form of optical radiation.

(ii) Bremsstrahlung losses:-

One of the main causes of the loss of energy from plasma is Bremsstrahlung. This radiation is
due to the interaction between an electron and a positive ion, in which the magnitude and direction of the velocity of the
electron change; that is, the electron is either accelerated or decelerated.

(iii) Cyclotron or betatron emission:-

It will be seen that in order to confine plasma a magnetic field is used. In such a field the ions and
electrons gyrate about the magnetic field with a certain frequency, which is eH/mc, where H is the magnetic
field. Consequently, under the action of the magnetic field they radiate energy. This type of radiation of
energy is called cyclotron or synchrotron radiation.
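The gyro-frequency eH/mc quoted above can be sketched numerically (Gaussian units; the field value is a hypothetical example):

```python
E_CHARGE   = 4.8032e-10   # electron charge, statcoulomb
C_LIGHT    = 2.9979e10    # speed of light, cm/s
M_ELECTRON = 9.1094e-28   # g
M_PROTON   = 1.6726e-24   # g

def gyro_frequency(h_gauss, mass_g):
    """Angular gyro-frequency e*H/(m*c) in rad/s, Gaussian units."""
    return E_CHARGE * h_gauss / (mass_g * C_LIGHT)

# Example: a 10 kilogauss (1 tesla) confining field.
w_ce = gyro_frequency(1.0e4, M_ELECTRON)   # electrons
w_ci = gyro_frequency(1.0e4, M_PROTON)     # protons (ions)
```

The electron gyrates some 1836 times faster than the proton in the same field, which is why the ions may often be treated as stationary on electron time scales.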

(iv) Recombination radiation:-

When an electron passes close to an ion it may be captured by the ion resulting in the process of
recombination and formation of the bound state. The energy which is librated is evidently the sum of the
kinetic energy of the electron and its binding energy.

Plasma oscillations and waves:-

Introduction:- A wide variety of oscillatory motions is possible in plasma. The first observation
that periodic fluctuations could occur in an apparently stable plasma was made by Appleton and Webb (1923)
and shortly afterwards by Penning (1926). They found that the current through a D.C. discharge had
superimposed upon it a small a.c. component, depending upon the discharge conditions, the frequency of
which could vary from 10³ to 10⁸ cycles/sec.

These oscillations could be picked up by metal strips stuck on the outside of the discharge tube,
which may serve as a condenser forming part of a resonance system. These oscillations were
associated with the striations in a discharge. A systematic investigation was undertaken by Donahue and
Dieke (1951), who used a photomultiplier and a scope to measure the discharge current, voltage and light
intensity. Using the Langmuir probe method, Emeleus (1956) and his co-workers have studied the phenomena
and have concluded that the oscillations in plasma do occur in a wide variety of ways, covering a wide range
of frequencies.

In general, except for high current discharges, there are a large number of neutral molecules in
addition to the charged particles in a plasma, and thermal equilibrium is maintained mainly by collisions of
charged particles with neutral molecules. This equilibrium may be disturbed by local variations. For
example, there may be a momentary charge separation in a small volume. The mutual electric field of the
charges in a displaced volume produces a restoring force which could lead to a consequent oscillatory motion
of the charges.
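The restoring force described above leads to oscillation at the electron plasma frequency ω_pe = (4πne²/m)^(1/2) in Gaussian units; a brief check (the density value is a hypothetical example):

```python
import math

E_CHARGE   = 4.8032e-10   # electron charge, statcoulomb
M_ELECTRON = 9.1094e-28   # electron mass, g

def electron_plasma_frequency(n_cm3):
    """Angular electron plasma frequency omega_pe = sqrt(4*pi*n*e^2/m_e),
    in rad/s, for electron density n in cm^-3 (Gaussian units)."""
    return math.sqrt(4 * math.pi * n_cm3 * E_CHARGE**2 / M_ELECTRON)

# For n = 1e12 electrons/cc the oscillation frequency f = omega/(2*pi).
f_pe = electron_plasma_frequency(1.0e12) / (2 * math.pi)   # cycles/sec
```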


1.5 Particle description of plasma:-

In considering the physics of plasma two methods have generally been used. One, in which the
motion of the individual particles composing the plasma, such as electrons and positive ions, is considered, is
called the particle description of the plasma. In the second method, the plasma may be considered as two
interacting charged fluids, the negatively charged electron fluid and the positively charged ion fluid, and the
equations of magnetohydrodynamics apply. In a dense plasma, the motion of the charged particles is strongly
affected by the field created by the neighbouring particles and is also governed by any externally applied
field, such as an electric or magnetic field. In a rarefied plasma, however, the influence of the internal field may
be neglected and the motion is mainly governed by the externally applied field. Leaving aside for the
moment the magnetohydrodynamic approach, we shall confine our attention to the particle description of
the plasma. This method also gives a clear physical description of the processes involved. We shall consider a
number of physically possible configurations of the applied electric and magnetic fields.

1.6 Theory of simple oscillations:-

The first simple treatment of plasma oscillations was given by Tonks and Langmuir (1929), who
considered an idealised plasma with no thermal motion of ions and electrons, in which two types of oscillations
are basically possible: first, the electron oscillations, which are so fast that the ions can be regarded as stationary,
and secondly the ion oscillations, which are so slow that the electrons at all times adjust their energy and
density so as to remain in equilibrium and satisfy the Boltzmann distribution. The disturbed regions are
presumed to contain a large number of particles and to have large dimensions compared with the distances
between atoms.

1.7 Hydromagnetic waves:-

When a magnetic field is present in plasma, another type of wave is propagated. The properties
of these waves were studied by Alfvén (1950) and they are also known as Alfvén waves.

1.8 Magnetosonic wave:-

Another type of hydromagnetic wave can be produced in plasma, in which the particle velocity
V is parallel to the direction of propagation, both being perpendicular to B, the magnetic field. This is a
longitudinal wave and, in analogy with a sound wave, it is called a magnetosonic wave.
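The two wave speeds can be illustrated numerically (Gaussian units; the density, field and sound-speed values are hypothetical examples, and the perpendicular fast-mode expression is the standard one):

```python
import math

def alfven_speed(b_gauss, rho):
    """Alfven speed v_A = B/sqrt(4*pi*rho) in cm/s (Gaussian units),
    for mass density rho in g/cm^3."""
    return b_gauss / math.sqrt(4 * math.pi * rho)

def magnetosonic_speed(b_gauss, rho, c_s):
    """Fast magnetosonic speed sqrt(v_A^2 + c_s^2) for propagation
    perpendicular to B, where c_s is the sound speed."""
    return math.sqrt(alfven_speed(b_gauss, rho)**2 + c_s**2)

# Hypothetical hydrogen plasma: n = 1e14 cm^-3 (rho ~ 1.67e-10 g/cm^3), B = 10 kG.
v_a  = alfven_speed(1.0e4, 1.67e-10)
v_ms = magnetosonic_speed(1.0e4, 1.67e-10, 1.0e7)
```

The magnetosonic speed always exceeds the Alfvén speed, since it carries both magnetic and pressure restoring forces.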

We have already discussed the various types of plasma oscillations and waves, their process of
generation and their mode of propagation. Normally, due to some local departure from charge equilibrium,
electron and ion plasma oscillations are generated. These are stationary oscillations and there is no
propagation of waves. If we take into account the pressure variation that results from electron or ion
oscillations, then in the case of the plasma electron oscillation a wave propagates which is called the electron acoustic
wave, and in the case of ions it is designated the ion acoustic wave. When a magnetic field is present, both the
electron plasma frequency and the ion plasma frequency are modified.

Hydromagnetic waves are generated in plasma when a steady magnetic field is present. When the
magnetic field is in the direction of propagation the waves are called Alfvén waves. When the magnetic
field is perpendicular to the wave vector the waves are known as magnetosonic waves.

Though many experimental investigations have been carried out demonstrating the existence of
these oscillations, and the experimental results are in quantitative agreement with the theoretical deductions, there
is scope for further experimental investigation in this line. Apart from the physical understanding of the
phenomena, these waves and oscillations produce effects which are intimately connected with the stability
theory of the plasma.

1.9 Plasma production:-


Several methods exist for the production of plasma in the laboratory, such as collision ionization, photo-ionization,
thermal ionization, and production by the breakdown of gases by either d.c., a.c., radio frequency
or microwave fields. A method that has been developed recently is the technique of the plasma gun.

1.10 Plasma gun:


A plasma gun is a device for producing and accelerating, in vacuum, bursts of plasma with a
velocity in excess of 10⁷ cm/sec. This method has the advantage that the plasma can be produced in an
external system and then transferred into another region where it is to be studied. Ashby in 1968 has
described a plasma gun in which a certain quantity of gas, say 1 cc of deuterium at atmospheric pressure, is
admitted to the system by means of a special valve very rapidly, say in the order of 100 microseconds.

Another type of plasma gun has been described by Marshall (1960). It consists of two coaxial
cylinders which serve as electrodes, the length of the cylinders varying from 30 to 100 cm. No magnetic
field is used.

1.11 Plasma instability:

Plasma instability can be defined as a small disturbance (perturbation) given to a system which
is at rest or in a static position, thereby causing a linear or non-linear shift from the static position. It can
be explained by the ball experiment, whose four cases are: (a) linearly unstable (explosive); (b) linearly
unstable (non-explosive); (c) non-linearly unstable; and (d) stable.

An instability is called convective when the wave number is complex and absolute when the wave
number is real. Plasma instabilities are accordingly classified as explosive (absolute) or non-explosive
(convective); each class may be linearly unstable, non-linearly unstable or stable; macro- or
micro-instabilities; electrodynamic or electrostatic; magnetised or unmagnetised; collisional or
non-collisional; and viscous or non-viscous.

Thus it often happens that positive caesium ions are emitted and at equilibrium a plasma is formed in the
interelectrode space consisting of neutral caesium atoms, positive caesium ions and electrons. These
positive ions effectively neutralise the space charge. The formation of the plasma, however, produces a
sheath close to each electrode, and the actual potential distribution can be expressed by a curve. With the
reduction of the space charge the diode can act as a constant-current device.

Besides the neutralization of the space charge, caesium atoms, when they are adsorbed on the tungsten
surface, lower the work function of both the emitter and the collector. The lowering of the work function of the
emitter enables one to obtain a higher current at a lower temperature. The lowering of the work function of the
collector increases the efficiency of the device, but at the same time it should be seen that electrons are not
emitted from the collector, so the temperature of the collector should be kept at a much lower value.
The pressure of caesium vapour that should be maintained within the diode is rather critical for
obtaining good efficiency. At high pressure caesium is condensed on the surface of the cathode at a high rate,
decreasing its work function drastically, and the conductivity of the plasma increases due to the increase in
volume ionization; but at the same time it is essential that electrons released from the cathode surface should
reach the collector without suffering many collisions and consequent loss, which implies a low pressure. In
practice it is found that a pressure of 1 torr will meet both ends to a large extent.
In order to calculate the efficiency of the diode converter it should be remembered that a thermionic
converter works along the same lines as a Carnot engine with a hot and a cold source. The loss of heat is
mainly due to radiation loss from the cathode, loss of heat due to conduction through the plasma, and loss due
to conduction through the electric circuits and insulators. If we consider the radiation loss as the major
contributing factor, the input power is

Pin = JsVc + CxT⁴

where Js is the current density, C is the Stefan-Boltzmann constant, x the emissivity of the cathode surface and
T is the temperature of the cathode. The output power is

Pout = Js(Vc - VA)

where VA is the total effect of voltage loss across the wires.
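The efficiency Pout/Pin implied by the two expressions above can be sketched as follows; every operating value here is hypothetical, chosen only for illustration, with C taken as the Stefan-Boltzmann constant in W cm⁻² K⁻⁴:

```python
SIGMA_SB = 5.67e-12   # Stefan-Boltzmann constant C, in W cm^-2 K^-4

def converter_efficiency(j_s, v_c, v_a, x, t):
    """Efficiency P_out/P_in of the diode converter when radiation is the
    dominant loss, from P_in = J_s*V_c + C*x*T^4 and P_out = J_s*(V_c - V_A)."""
    p_in = j_s * v_c + SIGMA_SB * x * t**4
    p_out = j_s * (v_c - v_a)
    return p_out / p_in

# Hypothetical operating point: 5 A/cm^2 at V_c = 1 V, lead losses V_A = 0.2 V,
# emissivity x = 0.3, cathode temperature T = 1800 K.
eta = converter_efficiency(5.0, 1.0, 0.2, 0.3, 1800.0)
```

At this operating point the radiation term dominates the input power, which is why lowering the cathode temperature (via the caesium-reduced work function) raises the efficiency.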
Instead of caesium, some noble gases like argon and helium can be used for the production of
plasma; this results in the operation of the diode at a lower temperature and a reduction of losses due to the
small collision cross-section of the noble gases, but a difficulty arises from the fact that, as the ionization
potential of the noble gases is high, it becomes difficult to ionize them.

To study instability we use two different methods:

1) Normal mode technique: In the normal mode technique we use a perturbation in the temporal mode and
solve the dispersion relation to get the growth rate.
2) Energy principle (or variational principle): This is used to study a symmetric system; the variational
principle is used to solve the electromagnetic equations and get the dispersion relation for the growth
rate.
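As a minimal sketch of the normal mode technique, the textbook symmetric two-stream dispersion relation (not a result from this book; the dimensionless parameters kV and ω_p are assumptions chosen for the example) can be solved for the growth rate:

```python
import math

def growth_rate(k_v, w_p=1.0):
    """Normal mode sketch: assume perturbations ~ exp(i*omega*t) and solve
    the symmetric two-stream dispersion relation
        1 = (w_p^2 / 2) * (1/(w - k_v)^2 + 1/(w + k_v)^2)
    which multiplies out to the biquadratic
        w^4 - (2*k_v^2 + w_p^2)*w^2 + k_v^4 - w_p^2*k_v^2 = 0.
    Return Im(w) of the unstable root, or 0 if all roots are real (stable)."""
    b = 2 * k_v**2 + w_p**2
    c = k_v**4 - w_p**2 * k_v**2
    w2 = (b - math.sqrt(b * b - 4 * c)) / 2.0   # smaller root for w^2
    return math.sqrt(-w2) if w2 < 0 else 0.0

gamma = growth_rate(0.5)   # k*V < w_p: unstable, positive growth rate
```

The mode is unstable exactly when the constant term is negative, i.e. kV < ω_p, illustrating how the sign of a dispersion-relation root decides between oscillation and growth.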
1.12. Types of instability:
A large number of plasma instabilities are known; they are as follows:

1. Absolute
2. Alfven Wave
3. Magneto pause
4. Beam cyclotron
5. Bounce resonance
6. Buneman
7. Buneman- Farley
8. Collisional helical
9. Cerenkov
10. Convective
11. Counter streaming
12. Decay
13. Dory-Guest Harris
14. Drift (loss) cone
15. Drift cyclotron
16. Drift Wave
17. Anisotropic velocity
18. Humped distribution
19. Two humped distribution


20. E × B
21. Electro magnetic cyclotron wave
22. Electro Static
23. Electrostatic ion cyclotron
24. Explosive
25. Fish bone
26. Gravitational
27. Harris
28. Helical
29. Hose (fire hose)
30. Hydromagnetic
31. Flute
32. Kelvin-Helmholtz
33. Kink
34. Kruskal-Shafranov
35. Macroscopic (macro)
36. Microscopic (micro)
37. Mirror
38. Modulation
39. Negative dissipative
40. Negative energy
41. Non Convective
42. Plasma cloud
43. Auroral Sheet
44. Current Pinch
45. Tearing mode
46. Field aligned current
47. Geomagnetic tail
48. Ion acoustic waves (IAW)
49. Dust ion acoustic wave (DIAW)
50. Dust acoustic wave (DAW)
51. Ionosphere
52. Magneto spherical
53. Partially ionized plasma
54. Radiation belt
55. Ring current
56. Rayleigh-Taylor
57. Plateau-Rayleigh
58. Richtmyer-Meshkov
59. Sausage
60. Simon
61. Two stream
62. Universal
63. Weibel
64. Soliton
65. Parametric
66. Tokomak
67. Ballooning
68. Reverse field pinch


CHAPTER -3

3.1. Tokomak Discoveries, 'Firsts' and 'Onlys'

Not primarily intended for the non-technical audience! Notable discoveries include:

Phenomenon | Discovered at | Date
First tokomak known | TMP, USSR (sometimes incorrectly given as TMB) | 1954
First use of a metallic liner inside the torus | T-1, USSR | 1958 (approx.)
First tokomak to demonstrate plasma disruptions | TM-3, USSR | 1963
First full carbon first wall | TM-3, USSR | 1963
First tokomak to study runaway electrons systematically | LT-3, Australia | 1960s
Identification of magnetic islands (although this terminology was not used at the time) | TM-2, USSR | 1966
First machine to produce 'significant' neutrons | TM-3, USSR | 1971
First injection of frozen fuel pellets | Ormak, USA | 1972 (approx.)
First tokomak where NBI heating exceeded ohmic heating | Ormak, USA | 1972 (approx.)
Discovery of ion-ion hybrid resonance scenarios | TM-1VCh | 1973 (approx.)
First demonstration of feedback control of plasma position using variable vertical magnetic field | TO-1, USSR or CLEO, UK | 1973 (approx.)
First use of a poloidal divertor | JFT-2a (DIVA), Japan | 1975 (approx.)
First demonstration of impurity control using a divertor | JFT-2a, Japan | 1975 (approx.)
First machine designed to use Lower Hybrid waves as a current drive mechanism | Versator II, USA | 1977
First use of Nb3Sn superconductor on a tokomak | TRIAM-1M, Japan | 1978 (approx.)
First neutral beam heating | CLEO, UK | 1970s (approx.)
First tokomak to use superconducting coils (TF only) | T-7, USSR | 1970s


Development of the Hugill Diagram | DITE, UK | Late 1970s
First observation of RF-driven current using the Lower-Hybrid Wave | JFT-2, Japan | 1980
First demonstration of a beta limit (before theory predicted it) | ISX-B, USA | 1980 (approx.)
First observation of fishbone instability | PDX, USA | 1980 (approx.)
First use of beryllium as a plasma facing material, reducing Zeff by a factor of two | UNITOR, Germany | 1982
Development of boronisation (with di-borane) as a vessel conditioning system | TEXTOR, Germany | 1980s
First experimental observation of ballooning modes | TFTR, USA | 1980s (approx.)
First experimental observation of the kinetic Alfvén wave, an important validation of kinetic theory | TCA, Switzerland | 1980s (approx.)
First demonstration of H-mode | ASDEX, Germany | 1982
First current drive from breakdown by LH | PLT, USA | 1985
Discovery of the boot-strap current | TFTR, USA | 1986
First observation of 'monster' sawteeth, stabilised by fast ions | JET, EU | 1986
First 'stable AC operation' of a tokomak | STOR-1M | 1987
First observation of the 'snake' (an m=1 density perturbation seen by the soft X-ray cameras following pellet injection) | JET, EU | 1988
First use of saddle coils to simulate error fields | COMPASS-D, UK | 1989
First machine capable of studying synergy between ECRH and LH | FTU, Italy | 1990
First D-T plasma (using trace quantities of tritium) | JET, EU | 1991
First magnetic fusion experiment to use the fusion power plant fuel mixture of 50% deuterium (D) and 50% tritium (T) | TFTR, USA | 1993
First demonstration of radio frequency heating of a D-T plasma using second harmonic tritium resonance | TFTR, USA | 1994
First experimental observation of the "enhanced reversed shear" confinement mode | TFTR, USA | 1994
First production of an Internal Transport Barrier (ITB) in high beta H-mode plasmas | JT-60, Japan | 1994
First observation of neoclassical tearing modes | TFTR, USA | 1995
First unambiguous measurements of self-heating by alpha particles in a DT fusion plasma | TFTR, USA | 1995
First machine to operate in multi-cycle flat top mode | ISTTOK, Portugal | 1990s (approx.)
First fully remote exchange of complete divertor | JET, EU | 1998
First fully superconducting tokomak (TF and all PF coils) | EAST, China | 2006
First observation of 3D-feature geodesic acoustic mode (GAM) zonal flows | HL-2A, China | 2006
100% bootstrap current achieved | TCV, Switzerland | 2006
Only machine capable of adjusting TF ripple | JET, EU | Present day


Only tritium compatible tokomak today | JET, EU | Present day
Only tokomak capable of achieving negative triangularity | TCV, Switzerland | Present day
Only tokomak capable of AC operation on a regular basis | ISTTOK, Portugal | Present day

If you would like to suggest another notable discovery to add to this table, please contact me with evidence
to support your claim.

3.2. Tokomak World Records

Not primarily intended for the non-technical audience! World records include:

Record | Perhaps held by . . .
Highest toroidal field | Alcator C-MOD, USA
Smallest conventional tokomak to demonstrate all the characteristics of H-mode | (Probably) COMPASS-D, UK. This record might be considered controversial: which characteristics count? Note that START achieved H-mode, but is a spherical tokomak. Also note that the Canadian T de V is the smallest machine to have contributed H-modes to the International Multi-Tokomak Database, and that this has significantly affected extrapolations to ITER-size machines.
Largest major radius (5 m) | ET, USA
Greatest increase in pulse duration by using AC operation (35 ms to 220 ms) | ISTTOK, Portugal
Greatest fusion power output (16.1 MW) | JET, EU (in divertor configuration)
Largest plasma volume | JET, EU (in limiter configuration)
Highest plasma current (7 MA) | JET, EU (in limiter configuration)
Largest DC flywheel generator | JFT-2M, Japan
Record NBI power injection | JT-60U, Japan
Highest ion temperature (5.2 × 10⁸ °C) | JT-60U, Japan
Highest fusion triple product | JT-60U, Japan
Longest confinement time | JT-60U, Japan
Highest proportion of boot-strap current (100%, achieved in 2006) | TCV, Switzerland
First plasma current fully driven by ECCD alone (210 kA in 2000) | TCV, Switzerland
Most desirable second-hand tokomak (?) (Iran offered $90 million!) | T de V, Canada
Longest pulse duration (5 hours 16 minutes) | TRIAM-1M, Japan
Highest injected/extracted energy (1.1 GJ in a pulse) | Tore Supra, France
Highest beta achieved in a tokomak (40%) | START, UK

If you would like to suggest another world record to add to this table, please contact me with evidence to
support your claim.

3.3 Tabletop nuclear fusion device developed


February 13th, 2006

[Photo: An internal view of the vacuum chamber containing the fusion device, showing two pyroelectric crystals that generate a powerful electric field when heated or cooled. Photo by Rensselaer/Yaron Danon]

Researchers at Rensselaer Polytechnic Institute have developed a tabletop accelerator that
produces nuclear fusion at room temperature, providing confirmation of an earlier experiment
conducted at the University of California, Los Angeles (UCLA), while offering substantial
improvements over the original design.
The device, which uses two opposing crystals to generate a powerful electric
field, could potentially lead to a portable, battery-operated neutron generator for a variety of
applications, from non-destructive testing to detecting explosives and scanning luggage at
airports. The new results are described in the Feb. 10 issue of Physical Review Letters.
“Our study shows that ‘crystal fusion’ is a mature technology with considerable commercial
potential,” says Yaron Danon, associate professor of mechanical, aerospace, and nuclear
engineering at Rensselaer. “This new device is simpler and less expensive than the previous
version, and it has the potential to produce even more neutrons.”
The device is essentially a tabletop particle accelerator. At its heart are two opposing
“pyroelectric” crystals that create a strong electric field when heated or cooled. The device is filled
with deuterium gas — a more massive cousin of hydrogen with an extra neutron in its nucleus.
The electric field rips electrons from the gas, creating deuterium ions and accelerating them into a
deuterium target on one of the crystals. When the particles smash into the target, neutrons are
emitted, which is the telltale sign that nuclear fusion has occurred, according to Danon.
A research team led by Seth Putterman, professor of physics at UCLA, reported on a similar
apparatus in 2005, but two important features distinguish the new device: “Our device uses two
crystals instead of one, which doubles the acceleration potential,” says Jeffrey Geuther, a
graduate student in nuclear engineering at Rensselaer and lead author of the paper. “And our
setup does not require cooling the crystals to cryogenic temperatures — an important step that
reduces both the complexity and the cost of the equipment.”
The new study also verified the fundamental physics behind the original experiment. This
suggests that pyroelectric crystals are in fact a viable means of producing nuclear fusion, and that
commercial applications may be closer than originally thought, according to Danon.
“Nuclear fusion has been explored as a potential source of power, but we are not looking at this
as an energy source right now,” Danon says. Rather, the most immediate application may come
in the form of a battery-operated, portable neutron generator. Such a device could be used to
detect explosives or to scan luggage at airports, and it could also be an important tool for a wide
range of laboratory experiments.


The concept could also lead to a portable x-ray generator, according to Danon. “There is already
a commercial portable pyroelectric x-ray product available, but it does not produce enough
energy to provide the 50,000 electron volts needed for medical imaging,” he says. “Our device is
capable of producing about 200,000 electron volts, which could meet these requirements and
could also be enough to penetrate several millimeters of steel.”
In the more distant future, Danon envisions a number of other medical applications of pyroelectric
crystals, including a wearable device that could provide safe, continuous cancer treatment.
Frank Saglime, a graduate student in nuclear engineering at Rensselaer, also contributed to the
research.
Source: Rensselaer Polytechnic Institute


Chapter-4
INVENTIONS

4.1. STEADY STATE SUPERCONDUCTING TOKOMAK COLLIDERS (SSSTC)


LASER AMPLIFIED QUINCENT UNIQUE INDIGENOUS TECHNOLOGY (LAQUIT)
“A voyage of vision in fusion with life to enlighten the minds, household & technology”
This is a pioneering effort to get the energy of the stars, galaxies and black holes of space in the
universe. An attempt is made to obtain in the laboratory, using a tokomak of indigenous type, the enormous
energy stored in black holes formed by the collision of two neutron stars of galaxies. Here we have assumed
the tokomak to be like the source (star/galaxy) which stores energy in itself; the fusion of two will naturally
enhance the energy stored. The collision in space gives energy due to the superflated structure obtained
after the collision. The black hole thus formed is a whirlpool of stars in which the gravitational field is
enormous and attracts anything which comes in contact with the whirlpool (eddy current). The
hypothesis prompts us to study the behaviour of such a whirlpool (eddy current) in the laboratory. Though this
sounds somewhat difficult, here I attempt to make it simpler to understand and to work out,
with the plan to get that much equivalent energy in the device, which I have named the "steady state
superconducting double tokomak collider (SSSDTC)". As this technology is attempted for the first time in the
world it is indigenous in nature, and a unique state-of-the-art technology is developed to attain such large
energy (of the order of terawatts). So far in India we have been able to obtain energy by fusion using a
tokomak (of the order of gigawatts). That is the reason why we call it "Laser Amplified Quincent Indigenous
Technology (LAQUIT)": Quincent because it is a silent device which does not produce audible
sound, and laser plasma is used in the fusion process. It is called a collider because in this device I have made an attempt
to have two tokomaks fused together; the common region between the two tokomaks, which is the main
area of interest, is the almost straight linear accelerator which receives the laser-produced plasma, either from
opposite sides (flowing past each other) or in the same direction. Here I shall be studying the problems
of superposed plasmas of the tokomaks.
It is the steady state that we are interested in, i.e. where the electrons and protons are
considered equal to each other, in a stable or equilibrium condition without instability; the moment
we study the neutral particles, the device behaves in a different manner, as a thermonuclear device which
can be used for fusion technology. A stage is reached, as in an ordinary tokomak, where the
superconducting phase is achieved at near room temperature to avoid the loss of heat and charge (energy).
The aspect ratio plays a crucial role, as in tokomaks, but the angle subtended at the centre of the
tokomak by the fused portion also plays an important role. More energy can be obtained if the size and shape of
both tokomaks are the same; if they vary in size and shape, the amount of energy obtained is reduced substantially.
The theory which works behind this is from space plasma: when two galaxies, say one spiralling in a
clockwise direction and another in an anticlockwise direction, come closer and collide, they try to
supersede each other to have the resultant energy or rotation their way. Whichever star or galaxy is more
powerful and bigger has its own gravitational field, which gives the resultant field direction. Certainly there
shall be rotational and viscous (gyroviscosity, ion viscosity, electron viscosity) effects which come into
play. Here particles try to attain the Chandrasekhar limit (with viscous effect) and the Bhatia-Hazarika
limit (with viscous and rotational effects) for the black hole in the universe.
For the magnetic field we have a toroidal field around each tokomak and the collider portion, surrounded by
another magnetic field, i.e. the poloidal field, again around the collider also. Let 'R' be the radius
of each tokomak (same size) and 'r' the inner radius of the tubular portion; the radius of the linear part is also 'r'.

Total distance between the centres of both tokomaks = 2R cos θ
Distance covered by a particle travelling one full complete circuit = 2πR − 2Rθ + 2πR = 2R(2π − θ)
Length of the straight collider portion = 2R sin θ
Radius of the collider portion = r
Volume of the collider portion = (2R sin θ)πr²
Volume of the whole system = (2R sin θ)πr² + 2πr²R(2π − θ) = 2πRr²[sin θ + 2π − θ]
Volume at θ = π/2: 2πRr²(3π/2 + 1); volume at θ = 0: 4π²Rr²
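The volume formula can be evaluated numerically at the two endpoint angles quoted in the text; the radii chosen here are hypothetical:

```python
import math

def dtc_volume(big_r, r, theta):
    """Plasma volume of the Double Tokomak Collider,
    V = 2*pi*R*r^2*(sin(theta) + 2*pi - theta)."""
    return 2 * math.pi * big_r * r**2 * (math.sin(theta) + 2 * math.pi - theta)

R, r = 2.0, 0.5   # hypothetical major and minor radii, metres
v_half = dtc_volume(R, r, math.pi / 2)   # theta = pi/2: 2*pi*R*r^2*(3*pi/2 + 1)
v_zero = dtc_volume(R, r, 0.0)           # theta = 0:    4*pi^2*R*r^2
```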

4.1.1 PRESENT WORK

It is based on the DOUBLE TOKAMAK COLLIDER (DTC) with low-β plasma having low frequency
fluctuations which are stabilized by sheared velocity, finite conductivity and other parameters.
The induced RTI is suppressed by the above-mentioned parameters and, as a whole, the classical transport
phenomena are taken into consideration. The heat conductivity is calculated, and the banana (Hazarika's) regime is
calculated, where an important result for the DOUBLE TOKAMAK COLLIDER (DTC) is

D_H = D_ps / [(q + 1){(2π − 2φ)sec φ + tan φ} − 2ε sin φ]²

i.e., the term in the bracket is better off than the Pfirsch-Schluter regime. After the Bohm diffusion, the
Hazarika diffusion coefficient is calculated. The Bohm diffusion also gets changed as

D_B = D_H [2R cos φ {(q + 1){(2π − 2φ)sec φ + tan φ} − 2ε sin φ}/r]^(3/2).

Here we see that first comes the Bohm diffusion, then the classical plateau and Pfirsch-Schluter regimes, and then
comes the Hazarika regime for the DOUBLE TOKAMAK COLLIDER (DTC). For the transport phenomena one
new result is found:

v⊥ = q²v_cl / [(q + 1){(2π − 2φ)sec φ + tan φ} − 2ε sin φ]².

The above facts compel one to study the classical phenomena along with the collisional transport phenomena;
the mirror effect decreases drastically. Toroidal and poloidal beta are calculated. Earlier, Bhatia and
Hazarika (1995) studied the effect of self-gravitating superposed plasmas flowing past each other,
which is of use in the Double Tokomak Collider (DTC)'s collider region. The two torii meet at the collider
region, which is the source region of collision or stability in the DOUBLE TOKAMAK COLLIDER (DTC).
This may be considered of interest to particle physicists, quantum theory researchers and so on.
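A short numerical sketch of the DTC result above; the parameter values q, φ and ε are hypothetical choices for illustration, not values taken from the text:

```python
import math

def dtc_factor(q, phi, eps):
    """Bracketed factor in the DTC banana-regime result:
    (q + 1)*((2*pi - 2*phi)*sec(phi) + tan(phi)) - 2*eps*sin(phi)."""
    return ((q + 1) * ((2 * math.pi - 2 * phi) / math.cos(phi) + math.tan(phi))
            - 2 * eps * math.sin(phi))

def d_hazarika(d_ps, q, phi, eps):
    """Hazarika diffusion coefficient D_H = D_ps / factor^2 for the DTC."""
    return d_ps / dtc_factor(q, phi, eps)**2

# Hypothetical parameters: q = 3, phi = pi/6, eps = 0.1; D_ps normalised to 1.
ratio = d_hazarika(1.0, 3.0, math.pi / 6, 0.1)   # D_H / D_ps, well below 1
```

Since the factor exceeds unity for these parameters, D_H comes out much smaller than D_ps, which is the sense in which the regime is "better off" than the Pfirsch-Schluter value.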

[Chart: Comparison of Hazarika's (banana) regime for DTC and tokamak]

4.1.2 PRESENT WORK

It is based on MAGNETIC CONFINEMENT TOKAMAK COLLIDER (MCTC) HUB with Low- β


plasma having low frequency fluctuation which is being stabilized for sheared velocity, finite conductivity

Dr.A.B.Rajib Hazarika,PhD,FRAS,AES
Invention of Dr.A.B.Rajib Hazarika’s Devices 26

and other parameters. The induced RTI is suppressed by the above-mentioned parameters, and the classical transport phenomena are taken into consideration as a whole. The heat conductivity is calculated, and the Banana (Hazarika's) regime is obtained, in which an important result for the MAGNETIC CONFINEMENT TOKAMAK COLLIDER (MCTC) HUB is

D_H = \frac{D_{PS}}{[4 + q(1 + \sin 3\phi \sin\theta)]^2},

i.e., the bracketed term improves on the Pfirsch-Schlüter regime. After the Bohm diffusion, Hazarika's diffusion coefficient is calculated; the Bohm diffusion is also modified as

D_B = D_H \left[\frac{R(1 + \sin 3\phi \sin\theta)}{r}\right]^{3/2}.

Here we see that first comes the Bohm diffusion, then the classical plateau and Pfirsch-Schlüter regimes, and then Hazarika's regime for the MAGNETIC CONFINEMENT TOKAMAK COLLIDER (MCTC) HUB. For the transport phenomena one new result is found:

v_\perp = \frac{q^2 v_{cl}}{[4 + q(1 + \sin 3\phi \sin\theta)]}.

The above facts compel one to study the classical phenomena along with collisional transport phenomena; the mirror effect decreases drastically. The toroidal and poloidal beta are calculated. Earlier, Bhatia and Hazarika (1995) studied the effect of self-gravitating superposed plasmas flowing past each other, which is of use in the MAGNETIC CONFINEMENT TOKAMAK COLLIDER (MCTC) Hub's collider region. The two tori meet at the collider region, which is the source region of collision or stability in the MCTC HUB. This may be of interest to particle physicists, quantum theory researchers, and others.
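The two modified diffusion coefficients above can be checked numerically. The following is a minimal sketch; the values of q, the angles, and the aspect ratio are illustrative assumptions, not parameters fixed by the text.

```python
import math

def hazarika_diffusion_mctc(d_ps, q, phi, theta):
    """Hazarika's diffusion coefficient for the MCTC hub:
    D_H = D_PS / [4 + q(1 + sin 3phi sin theta)]^2 (angles in radians)."""
    bracket = 4.0 + q * (1.0 + math.sin(3.0 * phi) * math.sin(theta))
    return d_ps / bracket**2

def bohm_diffusion_mctc(d_h, big_r, r, phi, theta):
    """Modified Bohm diffusion: D_B = D_H [R(1 + sin 3phi sin theta)/r]^(3/2)."""
    factor = big_r * (1.0 + math.sin(3.0 * phi) * math.sin(theta)) / r
    return d_h * factor**1.5

# Illustrative numbers only (d_ps normalized to 1; q, angles, R, r assumed).
d_ps = 1.0
d_h = hazarika_diffusion_mctc(d_ps, q=2.5, phi=0.2, theta=0.1)
d_b = bohm_diffusion_mctc(d_h, big_r=1.5, r=1.0, phi=0.2, theta=0.1)
print(d_h < d_ps)  # the bracketed factor suppresses D_H below D_PS
```

For any q > 0, the bracketed factor exceeds 4, so D_H is always well below the Pfirsch-Schlüter value, which is the claim the text makes for the MCTC hub.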

Schematic diagram of the Magnetic Confinement Tokamak Collider (MCTC)

4.1.3 DUO TRIAD TOKAMAK COLLIDER (DTTC) HUB


A low-β plasma having low-frequency fluctuations is stabilized by sheared velocity, finite conductivity, and other parameters. The induced RTI is suppressed by the above-mentioned parameters, and the classical transport phenomena are taken into consideration as a whole. The heat conductivity is calculated, and the Banana (Hazarika's) regime is obtained, in which an important result for the DUO TRIAD TOKAMAK COLLIDER (DTTC) HUB is

D_H = \frac{D_{PS}}{[6 + s C_h]^2},

i.e., the bracketed term improves on the Pfirsch-Schlüter regime. After the Bohm diffusion, Hazarika's diffusion coefficient is calculated; the Bohm diffusion is also modified as

D_B = D_H \left[\frac{R C_h}{r}\right]^{3/2}.

Here we see that first comes the Bohm diffusion, then the classical plateau and Pfirsch-Schlüter regimes, and then Hazarika's regime for the DUO TRIAD TOKAMAK COLLIDER (DTTC) HUB. For the transport phenomena one new result is found:

v_\perp = \frac{q^2 v_{cl}}{[6 + s C_h]}.

The above facts compel one to study the classical phenomena along with collisional transport phenomena; the mirror effect decreases drastically. The toroidal and poloidal beta are calculated. Earlier, Bhatia and Hazarika (1995) studied the effect of self-gravitating superposed plasmas flowing past each other, which is of use in the DUO TRIAD TOKAMAK COLLIDER (DTTC) Hub's collider region. The two tori meet at the collider region, which is the source region of collision or stability in the DTTC HUB. This may be of interest to particle physicists, quantum theory researchers, and others.
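The three DTTC-hub relations can be bundled into one small numerical sketch. The shear s and the Hazarika constant C_h are not given numeric values in the text, so the figures below are purely illustrative assumptions.

```python
def dttc_regime(d_ps, v_cl, q, s, c_h, big_r, r):
    """Sketch of the DTTC-hub transport relations stated in the text:
    D_H = D_PS/[6 + s*C_h]^2,  D_B = D_H*[R*C_h/r]^(3/2),
    v_perp = q^2 * v_cl / [6 + s*C_h]."""
    bracket = 6.0 + s * c_h
    d_h = d_ps / bracket**2
    d_b = d_h * (big_r * c_h / r)**1.5
    v_perp = q**2 * v_cl / bracket
    return d_h, d_b, v_perp

# Illustrative values only: s = 1, C_h = 2 are assumptions.
d_h, d_b, v_perp = dttc_regime(d_ps=1.0, v_cl=1.0, q=2.5,
                               s=1.0, c_h=2.0, big_r=1.5, r=1.0)
print(round(d_h, 4), round(v_perp, 4))
```

With these assumed inputs the bracket is 8, so D_H = D_PS/64 and v_perp = q²v_cl/8, showing how strongly the [6 + sC_h] factor suppresses both quantities.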

[Radar plot: Hazarika's regime (banana), Series 1]

Fig. 1. The particles are trapped in the shaded region of Hazarika's (banana) regime, which is calculated from the skin depth in Eqn. (11); q = 2.5, R/r = 1.5, r_L = 3.5, θ = 0.1, φ = 0.2 (in radians).

PARTICLE TRAPPING IN HAZARIKA’S (BANANA) REGIME


Here we can observe that the particle trapping exhibited by Hazarika's (banana) regime is broader than in the Tokamak case (Fig. 1).

[Radar plot: Comparison of Tokamak and Duo Triad Tokamak Collider (DTTC); Series 1: Tokamak, Series 2: DTTC (HUB)]

FIG. 2. Comparison of Hazarika's (banana) regime for the DTTC (HUB) and a Tokamak, shown for θ = 0.1, φ = 0.2 (in radians), q = 2.5, R = 1.5, in Eqn. (14).
It is observed from the above graph that the confinement time required for the DTTC (HUB) is much less than in the Tokamak case.
Condition for particle trapping: if v^2 \le 2rg, the motion of the particle is oscillatory and the particle never loses contact with the circular path. If v^2 > 2rg, the particle leaves the circle and then describes a parabolic path. If v^2 = 2rg, the motion of the particle becomes oscillatory and goes onto the diametrical path, performing the banana (Hazarika's) regime path.

4.1.4. DIFFUSION ASSOCIATED NEOCLASSICAL INDIGENOUS SYSTEM OF HALL ASSEMBLY (DANISHA): A HALL THRUSTER

DIFFUSION ASSOCIATED NEOCLASSICAL INDIGENOUS SYSTEM OF HALL ASSEMBLY (DANISHA) FOR HALL EFFECT THRUSTER AND SUPPRESSION OF FLR & SHEARED AXIAL FLOW ON RTI

Suppression of the Rayleigh-Taylor instability by sheared axial flow and finite Larmor radius (FLR) with the Diffusion Associated Neoclassical Indigenous System of Hall Assembly (DANISHA) is studied in toroidal geometry coordinates, using a derived magnetohydrodynamic formulation, to obtain the thrust effect from such a magnetic device used for the first time. The DANISHA Hall thruster works for 22000 (twenty-two thousand) hours instead of 8000 hours in the case of the SPT-100. The sheared axial flow and the FLR effect are introduced into MHD via

\frac{\partial}{\partial t} \rightarrow -i\left(\omega + i k_\perp^2 \rho_i^2 \Omega_i\right).

The sheared axial flow with a lower peak velocity suppresses the RT instability. It is observed that the FLR suppresses the RT instability more strongly than the sheared axial flow. The results are the same as in the case of slab geometry.

The induced RTI is suppressed by the above-mentioned parameters, and the classical transport phenomena are taken into consideration as a whole. The heat conductivity is calculated, and the Banana (Hazarika's) regime is obtained, in which an important result for the DUO TRIAD TOKAMAK COLLIDER (DANISHA) HUB is

D_H = \frac{D_{PS}}{[6 + s C_h]^2},

i.e., the bracketed term improves on the Pfirsch-Schlüter regime. After the Bohm diffusion, Hazarika's diffusion coefficient is calculated; the Bohm diffusion is also modified as

D_B = D_H \left[\frac{R C_h}{r}\right]^{3/2}.

Here we see that first comes the Bohm diffusion, then the classical plateau and Pfirsch-Schlüter regimes, and then Hazarika's regime for the DUO TRIAD TOKAMAK COLLIDER (DANISHA) HUB. For the transport phenomena one new result is found:

v_\perp = \frac{q^2 v_{cl}}{[6 + s C_h]}.

The above facts compel one to study the classical phenomena along with collisional transport phenomena; the mirror effect decreases drastically. The toroidal and poloidal beta are calculated. Earlier, Bhatia and Hazarika (1995) studied the effect of self-gravitating superposed plasmas flowing past each other, which is of use in the DANISHA Hub's collider region. The two tori meet at the collider region, which is the source region of collision or stability in the DANISHA HUB. This may be of interest to particle physicists, quantum theory researchers, and others. The present work has been divided into 10 sections.

Schematic diagram of DANISHA



Cross-sectional view of DANISHA Hall thruster



Lateral view of DANISHA Hall Thruster

BASIC EQUATIONS for VASIMR (DANISHA) ©

\frac{\partial E_+}{\partial x} = \frac{\omega}{c}\, B_+ C_h   (1)

C_h \frac{\partial B_+}{\partial x} = \frac{\omega_{pi}^2\, E_+}{c\,\omega_{ci}\, C_h} - \frac{4\pi i}{c}\, n e V_+   (2)

V_+ \frac{\partial V_+}{\partial x} = \frac{e E_+}{m_i} + i(\omega - C_h \omega_{ci}) V_+ + g L_n^{-1}   (3)

V_+ \frac{\partial V_+}{\partial x} + \frac{e B_{0x}}{8\pi m_e n C_h B_+}\, \frac{\partial^2 (C_h B_+)}{\partial x^2} = -\frac{V_+^2}{B_+ C_h}\, \frac{\partial B_{0x}}{\partial x}   (4)

\frac{C_h B_+ n V_x}{B_0} = j = \text{constant}   (5)

E_+(x) = C_h^{1/2}\, \frac{E_x + i E_y}{B_{0x}}\, B_+\, e^{-i\omega t}   (6)

Boundary conditions:

V_+(-\infty) = 0, \qquad E_+(+\infty) = 0, \qquad B_+(-\infty) = 0

\gamma = \frac{V_x}{l^2 \omega\, C_h} \left(\frac{l\,\omega_{pi}}{c}\right)^4 + g L_n^{-1}

is the growth rate for the VASIMR DANISHA© with the axial velocity in consideration.

\frac{d\gamma}{dV_x} = \frac{1}{l^2 C_h} \left(\frac{l\,\omega_{pi}}{c}\right)^4

The derivative of the growth rate with respect to the axial velocity is positive, showing that the axial velocity stabilizes the system.

\gamma = \frac{\nu}{l^2 C_h} \left(\frac{l\,\omega_{pi}}{c}\right)^4 + g L_n^{-1}

is the growth rate for the VASIMR DANISHA© with the FLR in consideration for the system.


\frac{d\gamma}{d\nu} = \frac{1}{l^2 C_h} \left(\frac{l\,\omega_{pi}}{c}\right)^4

Similarly, the derivative of the growth rate with respect to the FLR is positive, giving a stabilizing effect for the system.

P = \frac{p R C_h \gamma}{\sigma}

RF power dissipation is given by

j_x E_x \approx \frac{\omega_{pe}^2\, I_{ce}^2}{\nu_e\, c^4}

The power lost via gas excitation and subsequent line radiation can be estimated as

P_{rad} \approx 8\pi \left(\frac{T_e}{m_e}\right)^{3/2} \frac{m_e n_0 n e^4}{T_e E_{exe}} \exp\left[\frac{-E_{exe}}{T_e}\right]

I_{ce} \approx c^2\, \frac{m_e}{L} \left(\frac{4\Lambda L C_h n_0 e^4 \omega_{ce}}{3 T_e E_{exe}\,\omega}\right)^{1/2} \frac{c}{L\omega} \exp\left[\frac{-E_{exe}}{T_e}\right]

where

\exp\left[\frac{-E_{exe}}{T_e}\right] \approx \left(\frac{8\pi n_0^3 L^3 m_i \sigma_i e^6}{m_e E_{exe}^4}\right)^{1/2}

I_{ce} \approx 4 n_0 c^3 e^{10} \left(\frac{2\Lambda \omega_{ce} L m_i \sigma_i C_h}{3\omega^3 T_e^5 E_{exe}}\right)^{1/2},

which is the square root of the Hazarika constant times the VASIMR value.

VASIMR DANISHA©

\gamma = \frac{1}{\tau}

This provides the confinement time for the VASIMR DANISHA© as

\tau = \frac{1}{\gamma} = \left[\frac{V_x}{C_h\, l^2 \omega} \left(\frac{l\,\omega_{pi}}{c}\right)^4 + g L_n^{-1}\right]^{-1} \text{ (in seconds)},

which is 7 times the VASIMR value, giving 7 × 8000 hrs = 56000 hrs = 2333.33 days = 6.392 years.
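The unit conversion in the lifetime claim above is easy to verify. The sketch below reproduces it; note the text's "6.392 years" figure implies a 365-day year.

```python
hours = 7 * 8000       # seven times the VASIMR's 8000-hour baseline
days = hours / 24
years = days / 365     # the text's 6.392-year figure uses a 365-day year

print(hours, round(days, 2), round(years, 3))
```

This confirms 56000 hours ≈ 2333.33 days ≈ 6.39 years, matching the figures quoted in the text.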

For the power of DANISHA©:

P_{DANISHA} = m_e n C_h^3 \left(\frac{\omega_c}{\omega_{pi}}\right)^3 \left(\frac{c}{l\,\omega_{pi}}\right)^7

where C_h is the Hazarika constant for the DANISHA VASIMR.

The first bracketed term is a velocity component; the second bracketed term is a constant non-dimensional quantity. Considering initially that the VASIMR has 1400 N/m as the power, we get for DANISHA©:

P(DANISHA©) = 18.525 times the VASIMR = 18.525 × 1400 N/m = 25935 N/m
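The power figure is a single multiplication, checked below; the 1400 N/m baseline and the 18.525 ratio are the numbers quoted in the text.

```python
vasimr_power = 1400.0    # N/m, the VASIMR baseline quoted in the text
danisha_factor = 18.525  # the DANISHA-to-VASIMR ratio stated in the text
danisha_power = danisha_factor * vasimr_power
print(danisha_power)     # 25935.0, matching the text's figure
```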

REFERENCES
1. Hazarika, A.B.R.: Submitted to Physics of Plasmas (2009a); 13th National Symposium on Plasma Science & Technology, Rajkot (1998); 16th National Symposium on Plasma Science & Technology, Guwahati (2001)
2. Hazarika, A.B.R.: Submitted to Physics of Plasmas (2009b); 18th National Symposium on Plasma Science & Technology, Ranchi (2003); 19th National Symposium on Plasma Science & Technology, Bhopal (2004)
3. Hazarika, A.B.R.: Submitted to Physics of Plasmas (2009c); Proceedings of the 20th National Symposium on Plasma Science & Technology, Cochin Univ. of Sci. & Technology, Cochin (2005)
4. Hazarika, A.B.R.: Submitted to Physics of Plasmas (2009d); Proceedings of the 3rd Technical Meeting of the International Atomic Energy Agency on Theory of Plasma Instabilities, Univ. of York, York, UK (2007), 31 pp.
5. Pfirsch, D.: Theoretical and Computational Plasma Physics (1978), IAEA-SMR-31/21, p. 59
6. Pfirsch, D., Schlüter, A.: Max-Planck-Institut für Physik und Astrophysik, Munich, Rep. MPI/PA/7/62 (1962)
7. Kerner, W.: Z. Naturforsch. 33a, 792 (1978)
8. Samain, A., Werkoff, F.: Nucl. Fusion 17, 53 (1977)
9. Bhatia, P.K. and Hazarika, A.B.Rajib: Phys. Scr. 53, 57 (1996)
10. Hazarika, A.B.R.: Proceedings of the National Symposium on Plasma Science and Technology (2009), Hamirpur (HP)
11. Bhatia, P.K. and Hazarika, A.B.Rajib: J. Ind. Acad. Maths. 29(1), 141 (2007)
12. Guangde, J., Lin, H. and Xiao-Ming, Q.: Plasma Sci. & Tech. 7(3), 2805 (2005)
13. Xiao-Ming, Q., Lin, H., Guangde, J.: Plasma Sci. & Tech. 4(5), 1429 (2002)
14. Ning, Z., Yu, D., Li, H. and Yan, G.: Plasma Sci. & Tech. 11(2), 194 (2009)
15. Qiu, X.M., Huang, L., Jian, G.: Plasma Sci. & Tech. 5, 1429 (2002)
16. De Groot, J.S., Toor, A., Goldberg, S.M. et al.: Phys. Plasmas 4, 1519 (1997)
17. Haines, M.G.: IEEE Transactions on Plasma Sci. 26, 1275 (1998)
18. Shumlak, U. and Hartman, C.W.: Phys. Rev. Lett. 75, 3285 (1995)
19. Arber, T.D., Coppins, M., Scheffel, J.: Phys. Rev. Lett. 77, 1766 (1996)
20. Ganguly, G.: Phys. Plasmas 4, 2322 (1997)
21. Qiu, X.M., Huang, L. and Jian, G.D.: Chin. Phys. Lett. 19, 217 (2002)
22. Turchi, P.J. and Baker, W.L.: J. Appl. Phys. 44, 4936 (1973)
23. Morozov, A.I.: Introduction to Plasma Kinetics, Fizmat, Moscow (2006)
24. Choueiri, E.Y.: Physics of Plasmas 8, 1411 (2001)

4.3. MAGNETIC CONFINEMENT TOKAMAK COLLIDER HUB (MCTC): A CONCEPTUAL


DEVICE

PART-A
TOROIDAL COORDINATES

FEEDBACK STABILIZATION OF RAYLEIGH-TAYLOR INSTABILITY IN MAGNETIC


CONFINEMENT TOKAMAK COLLIDER HUB (MCTC): A CONCEPTUAL DEVICE

A low-β and high-aspect-ratio Magnetic Confinement Tokamak Collider (MCTC) hub is taken into consideration for the low-frequency stabilization process, with toroidal coordinates playing the vital role, as the configuration is governed by the transport phenomena which subside the effect on the unstable mode. The present study stabilizes such a system when the density gradient (∇n) acts against gravity in the upward direction, thereby causing the R-T instability. Here the conductivity causes the implosion in the system, which can be stabilized by the sheared flow, the density gradient (∇n), and the Hall current, whereas the finite resistivity, feedback loop current, and current diffusivity stabilize the system. The solution is expressed in terms of the feedback loop current, and the differential equation is solved by using a trivial solution in the form of a sine wave; a peak value for the feedback loop current is assumed, with a phase difference of π/2, to find the appropriate solution of the differential equation. The above study is done theoretically to obtain the growth rate for the stabilizing process. The transport phenomenon decreases by (1 + \sin 3\theta \sin\phi)^{-1/4} over what one considers in the classical Tokamak case.

A conceptual device for greater energy is considered hypothetically, i.e., the Magnetic Confinement Tokamak Collider (MCTC) hub, Hazarika (2003, 2004), for RTI stabilization in low-β plasma, and a Fuzzy Differential Inclusion (FDI) simulation is done to obtain the diffusion phenomena; the new regime (Hazarika's regime) for the skin depth is seen to be very sharp, with a new-moon-like crescent, giving an advantage over the Tokamak. It is shown that the velocity drift is also much greater than that of the Tokamak and the BETA machine, Hazarika (2005, 2007a). Low-frequency thermal conductivity is studied by Hazarika (2007b), feedback stabilization in the DTC by Hazarika (2007c), and the classical transport phenomena in the Double Tokamak Collider (DTC) by Hazarika (2007d). The Magnetic Confinement Tokamak Collider (MCTC) Hub is studied for the neoclassical theory of transport phenomena by Hazarika (2007e).

4.3.1 Magnetic Confinement Tokamak Collider (MCTC) Hub

It is based on the MAGNETIC CONFINEMENT TOKAMAK COLLIDER (MCTC) HUB with a low-β plasma whose low-frequency fluctuations are stabilized by sheared velocity, finite conductivity, and other parameters. The induced RTI is suppressed by the above-mentioned parameters, and the classical transport phenomena are taken into consideration as a whole. The heat conductivity is calculated, and the Banana (Hazarika's) regime is obtained, in which an important result for the MAGNETIC CONFINEMENT TOKAMAK COLLIDER (MCTC) HUB is

D_H = \frac{D_{PS}}{[4 + q(1 + \sin 3\phi \sin\theta)]^2},

i.e., the bracketed term improves on the Pfirsch-Schlüter regime. After the Bohm diffusion, Hazarika's diffusion coefficient is calculated; the Bohm diffusion is also modified as

D_B = D_H \left[\frac{R(1 + \sin 3\phi \sin\theta)}{r}\right]^{3/2}.

Here we see that first comes the Bohm diffusion, then the classical plateau and Pfirsch-Schlüter regimes, and then Hazarika's regime for the MAGNETIC CONFINEMENT TOKAMAK COLLIDER (MCTC) HUB. For the transport phenomena one new result is found:

v_\perp = \frac{q^2 v_{cl}}{[4 + q(1 + \sin 3\phi \sin\theta)]}.

The above facts compel one to study the classical phenomena along with collisional transport phenomena; the mirror effect decreases drastically. The toroidal and poloidal beta are calculated. Earlier, Bhatia and Hazarika (1995) studied the effect of self-gravitating superposed plasmas flowing past each other, which is of use in the MAGNETIC CONFINEMENT TOKAMAK COLLIDER (MCTC) Hub's collider region. The two tori meet at the collider region, which is the source region of collision or stability in the MCTC HUB. This may be of interest to particle physicists, quantum theory researchers, and others. The present work has been divided into 10 sections.

Schematic diagram of the Magnetic Confinement Tokamak Collider (MCTC)

4.3.2. BASIC EQUATIONS

The basic equations which govern the MAGNETIC CONFINEMENT TOKAMAK COLLIDER (MCTC) HUB are the same as (4.2.1)-(4.2.9).

Initially we consider the stress term to be absent, i.e., \nabla \cdot \Pi = 0.

Here η is the finite conductivity, \Pi_i the stress tensor, T_e the electron temperature, q_{e\perp} the perpendicular heat conductivity of the electrons, E the electric field, v_{\perp i} the perpendicular ion velocity, \chi the magnetic diffusivity, \mu the viscosity, p_e the electron pressure, \vec B the magnetic field, p_i the ion pressure, and q the safety factor.


According to the geometry of the considered device, the magnetic field also changes. The magnetic field coils are arranged around the MCTC hub in the toroidal way; the toroidal magnetic field becomes 4B_\theta, and the poloidal magnetic field becomes B_\phi(1 + \sin 3\phi \sin\theta). The total magnetic field is given by

B = 4B_\theta + B_\phi(1 + \sin 3\phi \sin\theta)
B = B_\theta[4 + q(1 + \sin 3\phi \sin\theta)], \quad \text{where } q \text{ is the safety factor.}   (4.3.1)

Therefore the beta parameter for the MCTC hub is

\beta = \frac{8\pi n T}{B_\theta^2 [4 + q(1 + \sin 3\phi \sin\theta)]^2},

the toroidal beta is

\beta_\theta = \frac{8\pi n T}{B_\theta^2 [4 + q(1 + \sin 3\phi \sin\theta)]^2},

and the poloidal beta is

\beta_\phi = \frac{8\pi n T q^2}{B_\phi^2 [4 + q(1 + \sin 3\phi \sin\theta)]^2},

where D_H = (1 + \sin 3\phi \sin\theta) is Hazarika's factor for the MCTC hub.
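The total-field and beta relations above are straightforward to evaluate. The sketch below does so in Gaussian units as in the text; all numerical inputs (normalized B_θ, n, T, q, and the angles) are illustrative assumptions.

```python
import math

def mctc_total_field(b_theta, q, phi, theta):
    """B = B_theta * [4 + q(1 + sin 3phi sin theta)], Eqn. (4.3.1)."""
    return b_theta * (4.0 + q * (1.0 + math.sin(3.0 * phi) * math.sin(theta)))

def mctc_beta(n, t, b_theta, q, phi, theta):
    """beta = 8*pi*n*T / (B_theta^2 [4 + q(1 + sin 3phi sin theta)]^2),
    Gaussian units as in the text."""
    bracket = 4.0 + q * (1.0 + math.sin(3.0 * phi) * math.sin(theta))
    return 8.0 * math.pi * n * t / (b_theta**2 * bracket**2)

# Illustrative, normalized inputs (all assumptions).
b = mctc_total_field(b_theta=1.0, q=2.5, phi=0.2, theta=0.1)
beta = mctc_beta(n=1.0, t=1.0, b_theta=1.0, q=2.5, phi=0.2, theta=0.1)
print(b > 4.0, 0.0 < beta < 1.0)
```

With normalized inputs, beta × B² recovers 8πnT exactly, which is the consistency one expects between (4.3.1) and the beta definition.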
Here we have, for the equilibrium condition,

\nabla p = \vec J \times \vec B   (4.3.2)

\nabla p = -\frac{1}{R^2(1 + \sin 3\phi \sin\theta)^2}\, \frac{1}{2}\nabla\!\left[16 R^2 (1 + \sin 3\phi \sin\theta)^2 B_\theta^2\right] + \left(1 + \frac{B_\phi^2 (1 + \sin 3\phi \sin\theta)^2}{16 B_\theta^2}\right) \frac{1}{R(1 + \sin 3\phi \sin\theta)^2} + 2U\left[R(1 + \sin 3\phi \sin\theta)^2\, \nabla\phi \times \vec B_\phi\right]   (4.3.3)

This is Hazarika's MAGNETIC CONFINEMENT TOKAMAK COLLIDER (MCTC) HUB formula for equilibrium, where U is the feedback loop voltage, which is considered absent in the present study.

m
The resistivity η can be expressed by electron-ion collision frequency η = ν ei with this
e2n
ν ei mTc 2
we get Hazarika’s diffusion term as D H = = ν ei rL2 , rL is the finite ion
e B [4 + q (1 + sin 3φ sin θ )]
2 2 2

ηc 2
larmor radius for MCTC hub. Dm = is the magnetic diffusion coefficient describing the skin

effect.

4.3.3. BANANA REGIME

If we do not consider collisions, all the particles in a MAGNETIC CONFINEMENT TOKAMAK COLLIDER (MCTC) HUB plasma could still move freely round the quad (four) tori along the field lines. The magnetic field differs and varies along the field lines over a length of the order qR(1 + \sin 3\phi \sin\theta); a particle sees magnetic mirrors at a distance of qR(1 + \sin 3\phi \sin\theta). The strength of the mirrors, the ratio (\Delta B/B), is given by the inverse aspect ratio:

\frac{\Delta B}{B} \approx \frac{r}{R(1 + \sin 3\phi \sin\theta)}   (4.3.4)
Particles are trapped between such mirrors; according to the law of energy conservation,

\mu B + \frac{1}{2} m v_c^2 = \text{constant}, \quad \text{or} \quad \mu\,\Delta B + \Delta\!\left(\frac{1}{2} m v_c^2\right) = 0,

which holds for \Delta \frac{1}{2} m v_c^2 = \left(\frac{1}{2} m v_c^2\right)_{max}. Here \mu = \frac{m v_\perp^2}{2B} is the magnetic moment, which gives

\frac{\Delta B}{B} = -\frac{v_c^2}{v_\perp^2} = \frac{r}{R(1 + \sin 3\phi \sin\theta)} \ll 1   (4.3.5)

The drift is in the vertical direction, with velocity

v_{drift} = \frac{m v_\perp^2}{eB_\theta[4 + q(1 + \sin 3\phi \sin\theta)]\, R(1 + \sin 3\phi \sin\theta)} = \frac{v_\perp^2\, \tau_{PA}^{-1}}{R(1 + \sin 3\phi \sin\theta)[4 + q(1 + \sin 3\phi \sin\theta)]},

where \tau_{PA} = eB_\theta/m is the cyclotron frequency.

The time required for particles to fly from one mirror to another is qR(1 + \sin 3\phi \sin\theta)/v_c. The particle moves a distance, given by the skin depth \delta, out of a magnetic surface in the vertical direction.

4.3.4. SKIN DEPTH

\delta = v_{drift}\, \frac{qR(1 + \sin 3\phi \sin\theta)}{v_c} = \frac{m v_\perp^2\, q}{e B v_c [4 + q(1 + \sin 3\phi \sin\theta)]} = r_L \left(\frac{v_\perp}{v_c}\right) \frac{q}{[4 + q(1 + \sin 3\phi \sin\theta)]}

\delta = r_L\, \frac{q R^{1/2}(1 + \sin 3\phi \sin\theta)^{1/2}}{[4 + q(1 + \sin 3\phi \sin\theta)]\, r^{1/2}}

is the Hazarika skin depth, where

r_L = \frac{m v_\perp}{eB[4 + q(1 + \sin 3\phi \sin\theta)]}

is the finite Larmor radius (FLR) for the MCTC hub. Here we see that the skin depth is a factor R^{1/2}(1 + \sin 3\phi \sin\theta)^{1/2} larger than in the Tokamak. This thickness of banana-like orbits we may call the crescent of a moon. If we consider collisions, then reversal of v_c occurs, with v_c \ll v_\perp. This means that a part of a banana thickness replaces the gyro radius of plane geometry; the trapped-particle collision frequency is then given by

\nu_t = \frac{v_\perp^2}{v_c^2}\,\nu \approx \frac{R(1 + \sin 3\phi \sin\theta)}{r}\,\nu,

and the number of trapped particles is proportional to the v_c interval given by the trapping condition, i.e.,


n_t = n\,\frac{v_c}{v} = n\left[\frac{r}{R(1 + \sin 3\theta \sin\phi)}\right]^{1/2}   (4.3.6)
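The mirror ratio (4.3.4) and the trapped fraction (4.3.6) are simple functions of the geometry. The sketch below evaluates both; the aspect ratio and angles are the illustrative values used for Fig. 1 (assumptions, not measured parameters).

```python
import math

def mirror_ratio(r, big_r, phi, theta):
    """Delta B / B ~ r / (R (1 + sin 3phi sin theta)), Eqn. (4.3.4)."""
    return r / (big_r * (1.0 + math.sin(3.0 * phi) * math.sin(theta)))

def trapped_fraction(r, big_r, theta, phi):
    """n_t/n = sqrt(r / (R (1 + sin 3theta sin phi))), Eqn. (4.3.6),
    with the square root following from the trapping condition (4.3.5)."""
    return math.sqrt(r / (big_r * (1.0 + math.sin(3.0 * theta) * math.sin(phi))))

# Illustrative geometry: R/r = 1.5, theta = 0.1, phi = 0.2 (assumptions).
eps = mirror_ratio(r=1.0, big_r=1.5, phi=0.2, theta=0.1)
frac = trapped_fraction(r=1.0, big_r=1.5, theta=0.1, phi=0.2)
print(eps < 1.0, 0.0 < frac < 1.0)
```

Both quantities stay below unity for any R > r, consistent with the small-inverse-aspect-ratio assumption used throughout this section.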

4.3.5. HAZARIKA’S DIFFUSION COEFFICIENT

A stochastic process with δ as step size then yields the diffusion coefficient
⎡ R (1 + sin 3φ sin θ ) ⎤
1/ 2
n
D B = δ vt t = rL2ν t q 2 ⎢
2
⎥ This is Bohm diffusion
n ⎣ r ⎦
DH = rL νq , Hazarika’s diffusion coefficient
2 2

D ps
DH = , Hazarika’s diffusion coefficient, DPS is Pfrisch-Schluter diffusion
[4 + q(1 + sin 3φ sin θ )]2
coefficient, now the Bohm diffusion becomes
⎡ R (1 + sin 3φ sin θ ) ⎤
3/ 2

DB = DH ⎢ ⎥ (4.3.7)
⎣ r ⎦
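The coefficients of Section 4.3.5 can be sketched numerically. Here D_H is built from r_L, ν, and q as in the text, and then Eqn. (4.3.7) is applied; the values of r_L, ν, and the geometry are illustrative assumptions.

```python
import math

def hazarika_coeff(r_l, nu, q):
    """D_H = r_L^2 * nu * q^2, Hazarika's diffusion coefficient."""
    return r_l**2 * nu * q**2

def bohm_from_hazarika(d_h, big_r, r, phi, theta):
    """D_B = D_H * [R(1 + sin 3phi sin theta)/r]^(3/2), Eqn. (4.3.7)."""
    return d_h * (big_r * (1.0 + math.sin(3.0 * phi) * math.sin(theta)) / r)**1.5

# Illustrative numbers: r_L = 0.01, nu = 1e3, q = 2.5 (all assumptions).
d_h = hazarika_coeff(r_l=0.01, nu=1.0e3, q=2.5)
d_b = bohm_from_hazarika(d_h, big_r=1.5, r=1.0, phi=0.2, theta=0.1)
print(d_b > d_h)  # the geometric factor exceeds 1 whenever R > r
```

Since the bracketed geometric factor exceeds unity for R > r, D_B always sits above D_H, matching the ordering (Bohm first, then Hazarika's regime) described in the text.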
4.3.6. HAZARIKA’S REGIME

This condition stands valid for trapping the particle inhibited by collision i.e.
vt qR(1 + sin 3φ sin θ )
〈1 (4.3.8)
vc
1/ 2
v 2 R 2 (1 + sin 3φ sin θ ) qR(1 + sin 3φ sin θ ) 3 / 2 ⎛ v ⎞
2
ν 3q = A Where A = ⎜ ⎟
vc r2 rλ D ⎜v ⎟
⎝ c⎠
Or λ D 〉 A 3 / 2 qR(1 + sin 3φ sin θ ) where λ D is the mean free path thus, the left regime is

qR(1 + sin 3φ sin θ )〈 λ D 〈 A 3 / 2 qR(1 + sin 3φ sin θ ) (4.3.9)


1
D B , D H , D PS ≈
λD
One has ( )
DB λ D = A3 / 2 qR(1 + sin 3φ sin θ ) = DH [λ D = qR(1 + sin 3φ sin θ )] where Bohm
diffusion is DB , D PS (λ D = qR )


The inner part is the plateau regime (flat region), followed by a smooth transition from the banana to Hazarika's regime.

It culminates in two effects of importance:

(I) Bootstrap current
(II) Ware effect

4.3.6.1. BOOTSTRAP CURRENT

The induction effect of the high diffusion velocity leads to a current density in the toroidal direction:

J_B = \frac{4 v_B}{\eta c} B_\theta = \frac{-c}{4 B_\theta} \frac{dp}{dr} \left[\frac{r}{R(1 + \sin 3\phi \sin\theta)}\right]^{1/2} \frac{1}{[4 + q(1 + \sin 3\phi \sin\theta)]}   (4.3.10)

As the poloidal current is absent, we get terms with the toroidal field only:

\frac{4}{r} \frac{d}{dr}(r B_\theta) = \frac{-\pi c}{B_\theta} \frac{dp}{dr} \left[\frac{r}{R(1 + \sin 3\phi \sin\theta)}\right]^{1/2} \frac{1}{[4 + q(1 + \sin 3\phi \sin\theta)]}

The high diffusion velocity leading to a current density in the toroidal direction gives the toroidal beta as

\beta_\theta = \frac{p}{B_\phi^2} = \frac{8\pi p}{B_\theta^2 [4 + q(1 + \sin 3\phi \sin\theta)]^2}   (4.3.11)


Since the diffusion velocity should not exceed the magnetic diffusion velocity in a plasma with finite resistivity, for the banana regime

\beta < \frac{1}{A^{3/2} q^2}, \qquad \beta = \frac{1}{q^2 A^2}\,\beta_{pol},

which are in agreement with earlier results. The Pfirsch-Schlüter diffusion is expressed by v_D \approx q^2 v_{cl}; the classical diffusion velocity is given by v_{cl} = \frac{1}{2}\beta v_{mag}, with v_{mag} the magnetic diffusion velocity. Since v_D < v_{mag}, we get the plasma beta as

\beta < \frac{2 v_{cl}}{[4 + q(1 + \sin 3\phi \sin\theta)]\, v_D} \;\Rightarrow\; v_D \approx \frac{q^2 v_{cl}}{[4 + q(1 + \sin 3\phi \sin\theta)]},

which is known as Hazarika's diffusion expression. From this we get \beta_\theta < 1, and therefore

\beta < \frac{A^2}{[4 + q(1 + \sin 3\phi \sin\theta)]},

which is considerably different from the earlier results obtained by other authors.

4.3.6.2 WARE EFFECT

Here the usual E/B drift is replaced by

v_D = \frac{cE}{B_\theta [4 + q(1 + \sin 3\phi \sin\theta)]}

for the Ware effect in the MCTC hub.

4.3.7. CONFINEMENT TIME

\beta_{pol} < A^{1/2} holds for impurity transport as long as the temperature profile is flatter than that given by T n^2, but it is modified by the Hazarika factor (1 + \sin 3\phi \sin\theta). If we put \sin 3\phi \sin\theta = 0 in

v_{thH}^2\, \tau_{DH}\, \tau_{MH} > q^2 R^2 (1 + \sin 3\theta \sin\phi)^2,

we get

v_{thH}^2\, \tau_{DH}\, \tau_{MH} > q^2 R^2,

which was given by Samain and Werkoff (1977). Here \tau_{DH} is the deflection time and \tau_{MH} is the Maxwellian time for hydrogen ions.

\tau_{Ee} = \frac{0.97 \times 10^{-16}\, n_e\, r^3 R (1 + \sin 3\theta \sin\phi)^2 B_\phi}{T_e^{1/2}\, I_p}

for experimental purposes as well.

From the Maxwell’s equation we get the generalized MHD model

GENERALIZED MHD MODEL:


The generalized MHD equations are considered which are derived from the above basic
equations.
n 0 mi c ⎛ ∂ ⎞
⎜⎜ − mi n +
c
[φ ,−mi n]⎟⎟ = B0 ∇ c jc + ∇p × 2∇r cosθ • zˆ
B0 ⎝ ∂t B0 ⎠ c R(1 + sin 3φ sin θ )
(4.3.12)
c 2
− µmi n0 ∇ ⊥ mi n + mi gδφ • zˆ + mi g (φ − 1) • zˆ
B0
∂A
= −∇ cφ − η c jc + λ∇ 2⊥ jc (4.3.13)
c∂t

Dr.A.B.Rajib Hazarika,PhD,FRAS,AES
Invention of Dr.A.B.Rajib Hazarika’s Devices 42

∂p c
+ [φ , p ] = χ ⊥ ∇ 2⊥ p (4.3.14)
∂t B 0
c 2 ˆ
Where jc = − ∇ ⊥ A term is responsible for feedback loop current, Â is electromagnetic potential, φ

is electrostatic potential and [ A, B ] = zˆ • ∇A × ∇B , Poisson’s

Eigen mode equation is obtained by linearizing (4.3.12)-(4.3.14) in Toroidal coordinates


( )
r ,θ , ξ for Rayleigh- Taylor instability (RTI) as

φ (∆r , θ , ζ ) = ∑ φˆ(θ + 2πl ) exp[inq ′∆r (θ + 2πl )t − inq 0 rmn


−1
∆rρ sin θ + in(q 0θ − ζ )]

(4.3.15)

The normalizations are (hats denote dimensionless quantities):

\hat\delta = \frac{\delta\,\tau_{PA}}{n_0 m_i r_{mn}^2 [gL_n \ln gL_n]}, \qquad \hat\mu = \frac{\mu\,\tau_{PA}}{n_0 m_i r_{mn}^2 [gL_n \ln gL_n]}

\beta = \frac{\hat p\,\tau_{PA}}{(n_0 m_i r_{mn}^2\, gL_n \ln gL_n)^{1/2}}, \qquad \tau_{PA} = \left[\frac{4\pi n_0 m_i r_{mn}^2}{B_\theta^2}\right]^{1/2}

\hat\chi = \frac{\tau_{PA}\,\chi_\perp}{r_{mn}^2}, \qquad \hat\phi = \frac{c\,\tau_{PA}\,\phi}{r_{mn}^2 B_0}, \qquad \hat\lambda = \frac{\tau_{PA}\,\lambda c^2}{4\pi r_{mn}^2}, \qquad \hat p = \frac{8\pi p}{B_0^2}

\hat t = \frac{t}{\tau_{PA}}, \qquad \hat A = \frac{A}{r_{mn} B_\theta}, \qquad \hat\eta = \eta n^2 q^2, \qquad \hat g = \frac{g c_s^2}{R(1 + \sin 3\phi \sin\theta)}

\hat\rho = \rho\left[\kappa - \varepsilon(s\theta - \rho \sin\theta)\cos\theta\right], \qquad \kappa = -\left(1 - \frac{1}{q^2}\right)\varepsilon, \qquad \rho = \beta L_p^{-1}

n = n_0 \exp\!\left[-\frac{r}{L_n}\right], \qquad s = \frac{B_\phi}{B_\theta}, \qquad a_0 = \frac{(3\pi/4)^{4/5}\, \lambda^{3/5} s^{4/5} (2l + 1)^{4/5}}{\eta\,(\rho - \mu + \delta)^{1/5}}

L_n^{-1} = -\frac{d(\ln n_0)}{dr_{mn}}, \qquad L_p^{-1} = -\frac{d(\ln p_0)}{dr_{mn}}, \qquad \varepsilon = \frac{r}{R(1 + \sin 3\phi \sin\theta)}

For low frequency, \omega_{pi}^2 \tau_{PA}^2 \ll 1, and for the RT mode, \rho < 2 L_n^{-1}:

\nabla\phi = -\gamma \hat A + \eta \nabla_\perp^2 \hat A - \lambda \nabla_\perp^4 \hat A   (4.3.16)

Differential equation is given in terms of feedback loop current


\left\{\lambda C_1 D^6 + \lambda C_2 D^5 + (\lambda C_3 - \eta C_1) D^4 - \eta C_2 D^3 + (\gamma C_1 - \eta C_3) D^2 + \gamma C_2 D + \gamma C_3\right\} \hat A = 0   (4.3.19)

where

D = \frac{d}{d\theta}; \qquad C_1 = \frac{\gamma f^2}{\gamma + f^2(\eta + \lambda f^2)}; \qquad C_2 = \frac{\gamma f'\left[2f(\eta - 1) + 2f^2(2\lambda - \eta) - 2\lambda f^5\right]}{\left\{\gamma + f^2(\eta + \lambda f^2)\right\}^2}

C_3 = f^2 \gamma^2 - \mu f^4 \gamma + \frac{\gamma\rho}{\gamma + \chi f^2}\left[\kappa + \cos\theta + (s\theta - \rho\sin\theta)\sin\theta\right]

Let the trivial solution be \hat A = A \sin(\varpi t + \theta)   (4.3.17)

where A is the peak value of the loop current, \varpi is the frequency, and \theta is the phase difference.

D\hat A = A\cos(\varpi t + \theta) = -D^3 \hat A = D^5 \hat A; \qquad D^2 \hat A = -A\sin(\varpi t + \theta) = -D^4 \hat A = D^6 \hat A   (4.3.18)

Initially, at \theta = 0, r = 0, t = 0:

D^5 \hat A = D\hat A = A; \quad D^3 \hat A = -A; \quad D^6 \hat A = D^4 \hat A = D^2 \hat A = 0   (4.3.19)

Either A = 0 or \gamma = \frac{\lambda C_2 + \eta C_1}{-C_2}   (4.3.20)

At \theta = \pi/2, r = r_{mn}, t = 0:

D^6 \hat A = D^2 \hat A = -A; \quad D^4 \hat A = \hat A = A   (4.3.21)

and the growth rate is given by

\gamma = -(\lambda + \eta)   (4.3.22)

\gamma = -(F + \eta f^2)^{-1}\left[\lambda F^4 + \eta f^2(\eta + \lambda f)\right]   (4.3.23)

where

F = f'\left[2f(\eta - 1) + 2f^2(2\lambda - \eta) - 2\lambda f^5\right]   (4.3.24)

For low frequency, \gamma \ll \omega_c; for the RT mode, gL_n^{-1} \ll 1; and for low β, m_e/m_i \ll 1.

As we are interested in the effect of finite conductivity along with the other parameters, we take the derivative of the growth rate with respect to the finite conductivity; a positive derivative indicates destabilization and a negative one stabilization.
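The stabilization argument above can be sketched directly from the closed-form result γ = −(λ + η) at θ = π/2: the growth rate is negative (stable) for positive λ and η, and its derivative with respect to η is −1, i.e., increasing finite conductivity is stabilizing. The parameter values below are those used for the resistivity scan of Fig. 4.3.2.

```python
def growth_rate(eta, lam):
    """gamma = -(lambda + eta), Eqn. (4.3.22): the growth rate at
    theta = pi/2. A negative gamma means the mode is stabilized."""
    return -(lam + eta)

def d_growth_d_eta(eta, lam):
    """d(gamma)/d(eta) for Eqn. (4.3.22) is identically -1: negative,
    so increasing resistivity stabilizes, as the text argues."""
    return -1.0

# Sweep eta with lambda = 1, as in Fig. 4.3.2.
for eta in (1.0, 3.0, 5.0):
    print(eta, growth_rate(eta, lam=1.0))
```

The printed γ values grow more negative as η increases, reproducing the monotonic stabilizing trend shown in Fig. 4.3.2.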

STABILITY FOR PLASMA BETA AND LARGE ASPECT RATIO

The stability condition of the MAGNETIC CONFINEMENT TOKAMAK COLLIDER (MCTC) HUB using the plasma beta and aspect ratio is studied, which is as follows: \gamma \propto -\left(L_n^{-1}\right)^{-1/2}, meaning the density gradient scale length stabilizes the system.

The growth is studied analytically as well as numerically. For the analytical case, the derivative with respect to the density gradient scale length term gives a negative quantity, showing the stabilizing character. Numerically, we observe that the growth decreases for larger values of the density gradient scale length term; hence one may opt for larger values of the density gradient scale length term, which is exhibited in Fig. 4.3.1.

STABILITY FOR DENSITY GRADIENT SCALE LENGTH


The growth is studied analytically as well as numerically. The growth is plotted against the number density gradient scale length for different values; we see that as the density gradient scale length increases, the growth rate decreases, thereby showing the stabilizing effect for the considered system, which is exhibited in Fig. 4.3.1.

Fig. 4.3.1

[Plot of number density gradient scale length vs growth rate, Series 1]

STABILITY FOR FINITE RESISTIVITY


The growth is studied analytically as well as numerically. The growth is plotted against the finite resistivity for different values; we see that as the finite resistivity increases, the growth rate decreases, thereby showing the stabilizing effect for the considered system, which is exhibited in Fig. 4.3.2.

[Plot of finite resistivity vs growth rate, Series 1]
Fig.4.3.2. Series1 : λ = 1, f = 1, f ′ = 1

[Plot of fluctuations vs growth rate, Series 1]


Fig 4.3.3. Series1 : η = 3, λ = 1, f ′ = 1

STABILITY FOR FLUCTUATIONS


The growth is studied analytically as well as numerically. The growth is plotted against the fluctuations for different values; we see that as the fluctuations increase, the growth rate decreases, showing the stabilizing effect for the considered system when the finite conductivity, current diffusivity, and derivative of the fluctuations remain constant, which is exhibited in Fig. 4.3.3.

[Plot of the fluctuation (s − ρ cos θ) vs angle, Series 1]

Fig. 4.3.4. Series 1: η = 3, λ = 1; variation of the fluctuation (s − ρ cos θ) with angle (in radians).

[Plot of current diffusivity vs growth rate, Series 1]


Fig. 4.3.5. Series 1: η = 3, f = 1, f′ = 1. The given growth rate for the (MCTC) HUB is stabilized.

STABILITY FOR CURRENT DIFFUSIVITY


The growth is studied analytically as well as numerically. The growth is plotted against the current diffusivity for different values; we see that as the current diffusivity increases, the growth rate decreases, thereby showing the stabilizing effect for the considered system, which is exhibited in Fig. 4.3.5.

[Plot: Comparison of Tokamak and MCTC growth rate vs wave number; Series 1: Tokamak, Series 2: MCTC]

Fig. 4.3.6. Series 1: Tokamak; Series 2: MCTC, for R = 1, θ = 60°, φ = 30°.

The growth varies as \gamma \propto (1 + \sin 3\theta \sin\phi)^{-1/4}.

Fig. 4.3.6 compares the finite-conductivity-governed growth in the MAGNETIC CONFINEMENT TOKAMAK COLLIDER (MCTC) HUB and the Tokamak. We see that the growth rate of the MCTC HUB is more stabilized than that of the Tokamak for the parameter finite conductivity, as shown in Fig. 4.3.6.

In the present study it is shown that the MCTC hub is better than the Tokamak case, as depicted in Fig. 4.3.7 and Fig. 4.3.8. Fig. 4.3.7 shows how the MCTC hub is broader than the Tokamak case in particle trapping; Fig. 4.3.8 shows that it takes less confinement time than the Tokamak case.
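The claimed scaling γ ∝ (1 + sin 3θ sin φ)^{-1/4} can be evaluated directly. The sketch below computes the MCTC-to-Tokamak growth-rate ratio at θ = φ = 45° (the angles used for Fig. 4.3.8; the choice of angles is the only input assumed here).

```python
import math

def mctc_suppression(theta_deg, phi_deg):
    """Ratio gamma_MCTC / gamma_Tokamak = (1 + sin 3theta sin phi)^(-1/4),
    the scaling quoted alongside Fig. 4.3.6."""
    theta = math.radians(theta_deg)
    phi = math.radians(phi_deg)
    return (1.0 + math.sin(3.0 * theta) * math.sin(phi)) ** -0.25

ratio = mctc_suppression(45.0, 45.0)
print(ratio)  # below 1 whenever sin 3theta sin phi > 0
```

At θ = φ = 45° the factor is 1.5^{-1/4} ≈ 0.90, i.e., a modest growth-rate reduction relative to the Tokamak; note the suppression vanishes wherever sin 3θ sin φ = 0.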

PARTICLE TRAPPING IN HAZARIKA’S (BANANA) REGIME


Here we can observe that the particle trapping exhibited by Hazarika's (banana) regime is broader than in the Tokamak case (Fig. 4.3.7).

[Radar plot: Banana (Hazarika's) regime, Series 1]

Fig. 4.3.7. The particles are trapped in the shaded region of Hazarika's (banana) regime.


[Radar plot: Comparison of Hazarika's (banana) regime for MCTC (HUB) and Tokamak; Series 1: Tokamak, Series 2: MCTC (HUB)]


FIG. 4.3.8. Comparison of Hazarika's (banana) regime for the MCTC (HUB) and a Tokamak, shown for θ = 45°, φ = 45°.
It is observed from the above graph that the confinement time required for the MCTC (HUB) is much less than in the Tokamak case.

Condition for particle trapping: if v^2 \le 2rg, the motion of the particle is oscillatory and the particle never loses contact with the circular path. If v^2 > 2rg, the particle leaves the circle and then describes a parabolic path. If v^2 = 2rg, the motion of the particle becomes oscillatory and it goes onto the diametrical path, performing the banana (Hazarika's) regime path.

These results are relevant to the earlier studies of Pfirsch (1978), Pfirsch and Schlüter (1962), and Samain and Werkoff (1977). If we substitute sin 3φ sin θ = 0 in the major radius, so that only R remains, we recover the results of Pfirsch (1978). The present study contains enhancements in the skin depth, the banana regime, the bootstrap current, the Ware effect, the diffusion coefficient (as Hazarika's diffusion coefficient), and Hazarika's factor for the MCTC hub.

4.3.8. APPLICATIONS:

In the MAGNETIC CONFINEMENT TOKAMAK COLLIDER (MCTC) HUB one can observe several cases that govern the system as the polarity of the magnetic field changes: (I) electricity generation, (II) rockets and missiles, (III) hybrid technology.

4.3.8.1 Case I: FOR ELECTRICITY GENERATION


As the polarity of the magnetic field changes, the flow of plasma also changes. If the magnetic field in all four tori is in the clockwise direction, there will be a collisional effect in the collider region of the MAGNETIC CONFINEMENT TOKAMAK COLLIDER (MCTC) HUB, which gives rise to more heat and friction and slows the motion of the plasma in the collider region. Afterwards the plasma becomes consistent in every cycle of flow, which can be observed in this region as well as in the MCTC HUB as a whole. Bhatia and Hazarika (1995) showed that in space, self-gravitating superposed plasmas flowing past each other stabilize the system. This can be used to obtain an enormous current density, which is useful for the generation of electricity.


POWER LAW:
Here the definition of power is used to derive the power law.
Power = rate of change of work done: P = dW/dt
Work done = force × distance
F = p/(σA), where p is the pressure and σA is the cross-sectional area of the MAGNETIC CONFINEMENT TOKAMAK COLLIDER (MCTC) HUB
γ = d/dt is the growth rate.
W = (pR/σA)(1 + sin 3θ sin φ), hence we get the power as
P = (γpR/σA)(1 + sin 3θ sin φ) in MW

4.3.8.2 Case II: ROCKET AND MISSILES


When the polarity of the magnetic field changes, say running anti-clockwise in one torus and clockwise in the other, the flow of plasma is accelerated in the collider region of the MAGNETIC CONFINEMENT TOKAMAK COLLIDER (MCTC) HUB and may or may not become turbulent; this is useful as a propulsion system for rockets, missiles, spacecraft, etc. The velocity drift in this case is (1 + sin 3θ sin φ)² times that of the Tokamak case. Here the plasmas act as superposed flows, one over the other, enhancing the velocity of the resultant plasma, as observed by several researchers in the past.
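The quoted drift enhancement can be sketched directly; the angle values below are illustrative:

```python
import math

def drift_enhancement(theta, phi):
    """(1 + sin(3*theta)*sin(phi))**2, the factor by which the text says the
    velocity drift in the MCTC collider region exceeds the Tokamak value."""
    return (1.0 + math.sin(3.0 * theta) * math.sin(phi)) ** 2

# At theta = pi/6, phi = pi/2 the factor is (1 + 1)**2 = 4.
factor = drift_enhancement(math.pi / 6, math.pi / 2)
```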

4.3.8.3 Case III: HYBRID TECHNOLOGY


As in Case II, the same type of system results in a different technology, prevalent in many places, known as hybrid technology. The accelerated neutrons that can be extracted from the MAGNETIC CONFINEMENT TOKAMAK COLLIDER (MCTC) HUB can be used in a fission chamber, where those neutrons are needed; for fusion purposes the fast neutrons are waste products that heat the plasma chamber. They can therefore be collected through neutron-collecting blankets and channelled to uranium- or plutonium-based nuclear reactors.

4.3.8.4 Case IV: COMPUTERS AND TELEVISION


The growth rate is measured per second (Hz), which gives the speed of compilation or formation of the plasma. Used in computer chips, it sets the processing speed of the microprocessor. The speed of a normally used microprocessor could thus be enhanced by 1.5 times: if the present speed is 2.8 GHz, the microprocessor speed becomes 4.2 GHz, and its calculation speed becomes 4.2 gigaflops (4.2 giga floating-point operations per second). Applied to a supercomputer with a calculation speed of 1.73 teraflops, the resultant will be nearly 150 teraflops (150×10¹² floating-point operations per second). The resolution of computer monitors and plasma TVs can likewise be enhanced, and the confinement time reduced, with better resolution. The resolution is 24.75% better than the best presently available computer monitor or plasma TV. One particular brand of plasma and LCD TVs projects a 1:10,000 resolution; in this particular case it would be 1:15,000. There would be no blurred images, only a crystal-clear screen viewable from a 120-degree wide angle without any diminishing of the image from a side viewing angle.
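The clock-speed arithmetic in this paragraph is a plain 1.5x scaling; a one-line sketch (the function name is my own):

```python
def enhanced_speed(base_ghz, factor=1.5):
    """The 1.5x speed enhancement claimed in the text, applied to a base clock rate."""
    return base_ghz * factor

# The text's example: a 2.8 GHz microprocessor becomes 4.2 GHz.
speed = enhanced_speed(2.8)
```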

REFERENCES
15. Doyle, E.J, Groebner, R. J. et al : Phys Fluids B 3,230(1991)
16. Itoh, S.I and Itoh, K: Phys Rev. Lett. 60,2276(1983)


17. Hassam, A.B: Comments on Plasma Phys. Contr. Fusion 14,275(1991) and Phys. Fluids
B4,485(1992)
18. Sen, S and Weiland, J: Phys Fluids B4,485(1992)
19. Bhatia, P.K. and Hazarika,A.B.R.: Physica Scripta 53,57(1995)
20. Hazarika,A.B.R: National symposium on plasma Science &Technology, Rajkot(1998)
21. Hazarika,A.B.R: National symposium on plasma Science &Technology, Guwahati(2001)
22. Hazarika,A.B.R: National symposium on plasma Science &Technology, Ranchi(2003)
23. Hazarika,A.B.R: National symposium on plasma Science &Technology, Bhopal(2004)
24. Hazarika,A.B.R: Proceeding of National symposium on plasma Science &Technology, Cochin Univ.
of Sci. & Technology, Cochin(2005)
25. Hazarika,A.B.R. :Proceeding of 3rd Technical meeting of International Atomic Energy Agency on
Theory of Plasma Instabilities, Univ. of York, York, UK(2007a),31pp
26. Diamond, P.H : Plasma Physics and Controlled Nuclear Fusion Research
(IAEA,Vienna,1992)2,97(1992)
27. Hazarika,A.B.R.: Submitted in Nuclear Fusion (2007b)
28. Hazarika,A.B.R.: Submitted in Plasma Physics and controlled fusion(2007c)
29. Hazarika,A.B.R.: Submitted in Plasma Source Science and Technology(2007d)
30. Hazarika,A.B.R.: Submitted in Physica Scripta (2007e)
31. Pfirsch, D: Theoretical and computational plasma physics (1978), IAEA-SMR-31/21, pp59.
32. Pfirsch, D., Schluter, A.: Max-Planck-Institut fur Physik und Astrophysik, Munich, Rep. MPI/PA/7/62(1962).
33. Kerner, W: Z. Naturforsch. 33a,792(1978)
34. Samain,A., Wekoff, F: Nuc. Fus. 17,53(1977)

4.3. PART-B
PARABOLIC COORDINATES
FEEDBACK STABILIZATION OF RAYLEIGH-TAYLOR INSTABILITY IN MAGNETIC
CONFINEMENT TOKAMAK COLLIDER HUB (MCTC): A CONCEPTUAL DEVICE

A low-β, high-aspect-ratio Magnetic Confinement Tokamak Collider (MCTC) hub is considered for a low-frequency stabilization process, with toroidal coordinates playing the vital role, as the configuration is governed by transport phenomena that subside the effect on the unstable mode. The present study stabilizes such a system when the density gradient (∇n) acts against gravity in the upward direction, thereby causing the R-T instability. Here the conductivity causes implosion in the system, which can be stabilized by sheared flow, the density gradient (∇n), and the Hall current, whereas finite resistivity, the feedback loop current, and current diffusivity stabilize the system. The solution is expressed in terms of the feedback loop current; the differential equation is solved using a trivial sine-wave solution, with the peak value of the feedback loop current assumed at a phase difference of π/2, to find the appropriate solution. The study is carried out theoretically to obtain the growth rate for the stabilizing process. The transport phenomenon decreases by (1 + sin 3θ sin φ)^(−1/4) relative to the classical Tokamak case.

Parabolic coordinates are taken into consideration:
ξ = r − z = r(1 − cos ϑ)
η = r + z = r(1 + cos ϑ)
ϕ = φ
The eigenmode equation is obtained by linearizing (4.3.12)-(4.3.14) in the parabolic axisymmetric coordinates (ξ, η, ϕ) for the Rayleigh-Taylor instability (RTI) as
φ(ξ, η, ϕ) = exp i[lϑ + mϕ − ωt]{C₁X₁(ξ) + C₂X₂(η)}     (4.3.25)
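The coordinate transformation above satisfies two identities, ξ + η = 2r and ξη = r² sin²ϑ, which a short sketch can verify (the sample values are illustrative):

```python
import math

def parabolic_coords(r, theta):
    """xi = r - z, eta = r + z with z = r*cos(theta), as defined in the text."""
    z = r * math.cos(theta)
    return r - z, r + z   # (xi, eta)

xi, eta = parabolic_coords(2.0, math.pi / 3)
# xi + eta = 2r and xi*eta = (r*sin(theta))**2 follow directly from the definitions.
```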


The normalized quantities are:
δ = δτ_PA / (n₀m_i r_mn²[gL_n ln gL_n]),  µ = µτ_PA / (n₀m_i r_mn²[gL_n ln gL_n])
β = p̂τ_PA² / (n₀m_i r_mn² gL_n ln gL_n)^(1/2),  τ_PA² = 4πn₀m_i r_mn² / B_θ²
χ = χ⊥τ_PA / r_mn²,  φ = cτ_PAφ / (r_mn²B₀),  λ = λc²τ_PA / (4πr_mn²),  p̂ = 8πp / B₀²
t = t/τ_PA,  A = A/(r_mn B_θ),  η̂ = ηn²q²,  g = gc²s / R
ρ̂ = ρ[κ − ε(sθ − ρ sin θ) cos θ],  κ = −(1 − 1/q²)ε,  ρ = βL_p⁻¹
n = n₀[−r/L_n],  s = B_φ/B_θ,  a₀ = (3π/4)^(4/5) λ^(3/5) s^(4/5) (2l + 1)^(4/5) / [η(ρ − µ + δ)^(1/5)]
L_n⁻¹ = −d(ln n₀)/dr_mn,  L_p⁻¹ = −d(ln p₀)/dr_mn,  ε = r/R

For low frequency, ω_pi²τ_PA² << 1, and for the RT mode, ρ < 2L_n⁻¹.

∇φ = −γÂ + η∇⊥²Â − λ∇⊥⁴Â     (4.3.26)
∇ = [4/(ξ + η)][ξ ∂/∂ξ + η ∂/∂η + (1/ξη) ∂/∂ϕ]
∇² = [4/(ξ + η)][∂/∂ξ(ξ ∂/∂ξ) + ∂/∂η(η ∂/∂η) + (1/ξη) ∂²/∂ϕ²]
∇⊥ = [4/(ξ + η)][ξ ∂/∂ξ + η ∂/∂η],  ∇_∥ = [4/((ξ + η)ξη)] ∂/∂ϕ
B = B_ϕ[ê_ϕ + (r/(Rq(r))) ê_ϑ],  1/q = 1/q₀ − (R₀/(L_s r₀))(r − r₀)
ζ = L_s r₀/(R₀ξ),  ∇r = r − r₀


p ≡ (p(ξ), 0, 0),  n ≡ (n(ξ), 0, 0),  ϕ ≡ (0, ϕ(η), 0),  A ≡ (A(ξ), 0, 0)

Equation (4.3.12) becomes
(n₀m_i c/B₀)(∂/∂t(−m_i n) + (c/B₀)[(∂n/∂ξ)(∂ϕ/∂η), −m_i n]) = (B/c)∇_∥ j_∥ + ∇p × (2∇r cos θ · ẑ)/(R(1 + sin 3θ sin φ)) − µm_i n₀(c/B₀)∇⊥² m_i n + m_i gδφ · ẑ + m_i g(φ − 1) · ẑ
and hence
−ω + [4µξ/(ξ + η)](ω_p²/ω_c)L_n − L_n(∂ϕ/∂η)ω = −(c²/ω_p²)∇_∥∇⊥²Â + 2ω_c L_p ε cos ϑ − gL_n(ω_c/ω_p²)     (4.3.27)
And
∇_∥∇⊥²Â = [16/(ξη(ξ + η)²)] ∂/∂ϕ[ξ ∂Â/∂ξ + η ∂Â/∂η]     (4.3.28)
Equation (4.3.13) becomes
−ωÂ = [−4/(ξ + η)] ∂φ/∂ϕ + η̂[4/(ξ + η)][ξ ∂/∂ξ + η ∂/∂η]Â + λ∇⊥⁴Â     (4.3.29)
Equation (4.3.14) becomes
−ω − (ω_p²/ω_c)L_p(∂φ/∂η) = 4χ⊥[ξ/(ξ + η)]L_p     (4.3.30)
Solving (4.3.27)-(4.3.30) we get
ω = [4(ω_c²/ω_p²)(k_∥/k_⊥)(1/(η + λk_⊥²)) − (4ζω_c/(k_⊥qω_p²L_p(η + λk_⊥²)))(1 + L_n/L_p)]⁻¹ × [2ω_c L_p ε cos ϑ − gL_n(ω_c/ω_p²) − (4k_ξ L_n/k_⊥)(µ + χ⊥) − 4ω_c k_∥ L_φ/(ω_p²k_⊥(η + λk_⊥²)) − 16χ⊥ω_c k_ξ/(k_⊥²ω_p²(η + k_⊥²λ))]     (4.3.31)

The growth rate is given by the above equation, and the stability conditions are given below:
4(ω_c²/ω_p²)(k_∥/k_⊥)(1/(η + λk_⊥²)) < 1 + L_n/L_p
[gL_n(ω_c/ω_p²) + 4(ω_c²/ω_p²)(k_∥/k_⊥)(L_φ/(η + λk_⊥²)) + (4k_ξ L_n/k_⊥)(µ + χ⊥)] < 2ω_c L_p ε cos ϑ
For low frequency γ << ω_c, for the RT mode gL_n⁻¹ << 1, and for low β: m_e/m_i << 1.


Since we are interested in the effect of finite conductivity along with the other parameters, we take the derivative of the growth rate with respect to finite conductivity; a positive or negative derivative indicates destabilization or stabilization, respectively.

The growth is studied analytically as well as numerically. In the analytical case, the derivative with respect to the density-gradient scale length is negative, showing the stabilizing character. Numerically, the growth stabilizes for larger values of the density-gradient scale length, so one may opt for larger values of that scale length, as exhibited in Fig.4.3.9.

STABILITY FOR DENSITY GRADIENT SCALE LENGTH

The growth rate is plotted against the number-density-gradient scale length for different values of that scale length; as the scale length increases, the growth rate decreases, showing the stabilizing effect for the considered system, as exhibited in Fig.4.3.9.

[Plot: density gradient scale length (L_n) vs. growth rate]
Fig.4.3.9. L_p = 5, λ = 3, L_φ = 5.5, χ⊥ = 2.2, µ = 3.5, η = 2, ω_c/ω_p² = 4.5

[Plot: finite resistivity vs. growth rate]
Fig.4.3.10. L_n = 4.34, L_p = 5, L_φ = 5.5, χ⊥ = 2.2, µ = 3.5, λ = 3, ω_c/ω_p² = 4.5
STABILITY FOR FINITE RESISTIVITY
The growth rate is plotted against finite resistivity for different values of the finite resistivity; as the finite resistivity increases, the growth rate decreases, showing the stabilizing effect for the considered system, as exhibited in Fig.4.3.10.

[Plot: low-frequency fluctuation vs. growth rate]
Fig.4.3.10. L_n = 4.34, L_p = 5, L_φ = 5.5, χ⊥ = 2.2, µ = 3.5, η = 2, λ = 3

STABILITY FOR FLUCTUATIONS

The growth rate is plotted against the fluctuations for different values of the fluctuations; as the fluctuations increase, the growth rate decreases, showing the stabilizing effect for the considered system when the finite conductivity, the current diffusivity, and the derivative of the fluctuations remain constant, as exhibited in Fig.4.3.11. In contrast, the pressure gradient shows a destabilizing nature, as the growth rate increases with it in Fig.4.3.12.

STABILITY FOR CURRENT DIFFUSIVITY

The growth rate is plotted against the current diffusivity for different values of the current diffusivity; as the current diffusivity increases, the growth rate decreases, showing the stabilizing effect for the considered system, as exhibited in Fig.4.3.13.


[Plot: low-frequency fluctuation (×10) vs. growth rate]
Fig.4.3.11. L_n = 4.34, L_p = 5, L_φ = 5.5, χ⊥ = 2.2, µ = 3.5, η = 2, λ = 3

[Plot: pressure gradient scale length vs. growth rate]
Fig.4.3.12. L_n = 4.34, λ = 3, L_φ = 5.5, χ⊥ = 2.2, µ = 3.5, η = 2, ω_c/ω_p² = 4.5

[Plot: current diffusivity vs. growth rate]
Fig.4.3.13. L_n = 4.34, L_p = 5, L_φ = 5.5, χ⊥ = 2.2, µ = 3.5, η = 2, ω_c/ω_p² = 4.5

REFERENCES
35. Doyle,E.J, Groebner,R. J. et al : Phys Fluids B 3,230(1991)
36. Shaing, K.C and Crume, E.C.: Phys Rev. Lett. 63,2369(1989)
37. Itoh, S.I and Itoh, K: Phys Rev. Lett. 60,2276(1983)
38. Hassam,A.B.:Comments on Plasma Phys. Contr. Fusion 14,275(1991) and Phys. Fluids B4,485(1992)
39. Sen,S and Weiland,J:Phys Fluids B4,485(1992)
40. Bhatia,P.K. and Hazarika,A.B.R.: Physica Scripta 52, 947(1995)
41. Hazarika,A.B.R: National symposium on plasma Science &Technology, Rajkot(1998)
42. Hazarika,A.B.R: National symposium on plasma Science &Technology, Guwahati(2001)
43. Hazarika,A.B.R: National symposium on plasma Science &Technology, Ranchi(2003)
44. Hazarika,A.B.R: National symposium on plasma Science &Technology, Bhopal(2004)
45. Hazarika,A.B.R: Proceeding of National symposium on plasma Science &Technology, Cochin Univ. of Sci. & Technology, Cochin(2005)
46. Hazarika,A.B.R. :Proceeding of 3rd Technical meeting of International Atomic Energy Agency on
Theory of Plasma Instabilities, Univ. of York, York, UK(2007a)
47. Diamond, P.H : Plasma Physics and Controlled Nuclear Fusion Research
(IAEA,Vienna,1992)2,97(1992)
48. Hazarika,A.B.R.: Submitted in Nuclear Fusion (2007b)
49. Hazarika,A.B.R.: Submitted in Plasma Physics and controlled fusion(2007c)
50. Hazarika,A.B.R.: Submitted in Plasma Source Science and Technology(2007d)
51. Hazarika,A.B.R.: Submitted in Physica Scripta (2007e)

4.4. DIFFUSION ASSOCIATED NEOCLASSICAL INDIGENOUS SYSTEM OF HALL

ASSEMBLY (DANISHA): A HALL THRUSTER

DIFFUSION ASSOCIATED NEOCLASSICAL INDIGENOUS SYSTEM OF HALL ASSEMBLY

(DANISHA) FOR HALL EFFECT THRUSTER AND SUPPRESSION OF FLR & SHEARED
AXIAL FLOW ON RTI

Suppression of the Rayleigh-Taylor instability by sheared axial flow and finite Larmor radius (FLR) in the Diffusion Associated Neoclassical Indigenous System of Hall Assembly (DANISHA) is studied in toroidal geometry coordinates with a derived magnetohydrodynamic formulation, to obtain the thrust effect from such a magnetic device, used here for the first time. The DANISHA Hall thruster works for 22,000 (twenty-two thousand) hours instead of 8,000 hours in the case of the SPT-100. The sheared axial flow and the FLR effect are introduced into the MHD equations via ∂/∂t → −i(ω + ik⊥²ρᵢ²Ωᵢ). A sheared axial flow with a lower peak velocity suppresses the RT instability. It is observed that the FLR suppresses the RT instability more strongly than the sheared axial flow does. The results are the same as in the slab-geometry case.

INTRODUCTION
The present work is divided into eleven sections: 1. Introduction, 2. Basic equations, 3. Banana regime, 4. Skin depth, 5. Hazarika's diffusion coefficient, 6. Hazarika's regime, 7. Confinement time, 8. Discussion, 9. Conclusions, 10. Applications, and 11. References.


The classical transport phenomena were first studied by Pfirsch and Schlüter (1962), who gave the Pfirsch-Schlüter regime and the associated constants; Pfirsch (1978) later studied collisional transport phenomena. Kerner (1978) computationally studied the MHD stability of the tokamak class with fixed boundaries. Fluctuations are suppressed when the E × B shear is above a critical value; a sheared-velocity external source term with finite conductivity was included for stabilization of the RTI by Hazarika (1998, 2001) in the BETA machine.
A conceptual device for greater energy, the Magnetic Confinement Tokamak Collider (MCTC) hub, was considered hypothetically by Hazarika (2003, 2004) for RTI stabilization in low-β plasma, and a Fuzzy Differential Inclusion (FDI) simulation was done to obtain the diffusion phenomena; a new regime (Hazarika's regime) for skin depth appears, very sharp, like the crescent of a new moon, having an advantage over the Tokamak. The velocity drift is also much greater than in the Tokamak and the BETA machine (Hazarika 2005, 2007). Low-frequency thermal conductivity was studied by Hazarika (2009a), feedback stabilization in the DTC by Hazarika (2009b), the classical transport phenomena in the Double Tokamak Collider (DTC) by Hazarika (2009c), and the neoclassical theory of transport phenomena in the MCTC hub by Hazarika (2009d). Bhatia and Hazarika (2007) showed the Hall and FLR effects on the R-T instability, where the Hall effect enhances the instability and the FLR suppresses it; this forms the subject of the present paper. Similar effects were shown by Yaun et al. (2009), Gaungde et al. (2005), and Xiao-Ming et al. (2002).

It is based on the DUO TRIAD TOKAMAK COLLIDER (DANISHA) HUB with low-β plasma having low-frequency fluctuations, which are stabilized by sheared velocity, finite conductivity, and the other parameters. The induced RTI is suppressed by the above-mentioned parameters, and as a whole the classical transport phenomena are taken into consideration. The heat conductivity is calculated, and the banana (Hazarika's) regime is calculated, where an important result is the regime for the DUO TRIAD TOKAMAK COLLIDER (DANISHA) HUB, D_H = D_ps/[6 + sC_h]², i.e., the term in brackets improves on the Pfirsch-Schlüter regime. After the Bohm diffusion, Hazarika's diffusion coefficient is calculated; the Bohm diffusion also changes, to D_B = D_H[RC_h/r]^(3/2). Here we see that first comes the Bohm diffusion, then the classical plateau and the Pfirsch-Schlüter regime, and then Hazarika's regime for the DANISHA HUB transport phenomena. One new result is found: v⊥ = q²v_cl/[6 + sC_h]. These facts compel one to study the classical phenomena along with the collisional transport phenomena; the mirror effect decreases drastically. The toroidal and poloidal betas are calculated. Earlier, Bhatia and Hazarika (1995) studied the effect of self-gravitating superposed plasmas flowing past each other, which is of use in the DANISHA hub's collider region. The two tori meet at the collider region, which is the source region of collision or stability in the DANISHA HUB. This may be of interest to particle physicists, quantum theory researchers, and so on.

[Schematic diagram of DANISHA]
[Cross-sectional view of DANISHA Hall thruster]
[Lateral view of DANISHA Hall thruster, showing the xenon gas chamber]

BASIC EQUATIONS for VASIMR (DANISHA) ©

∂E₊/∂x = (ω/c)B₊C_h     (4.4.1)
(∂B₊/∂x)C_h = (ω_pi²/(cω_ci C_h))E₊ − (4πi/c)neV₊     (4.4.2)
V₊(∂V₊/∂x) = eE₊/m_i + i(ω − C_hω_ci)V₊ + gL_n⁻¹     (4.4.3)
V₊(∂V₊/∂x) = (eB₀ₓ²/(8πm_e nC_hB₊)) ∂²(C_hB₊)/∂x² − (V₊/(B₊C_h)) ∂B₀ₓ/∂x     (4.4.4)
C_hB₊nVₓ/B₀ = j = constant     (4.4.5)
E₊(x) = C_h^(1/2)[(Eₓ + iE_y)B₊/B₀ₓ]e^(−iωt)     (4.4.6)

Boundary conditions:
V₊(−∞) = 0
E₊(+∞) = 0
B₊(−∞) = 0     (4.4.7)

γ = [Vₓ/(lωC_h²)](lω_pi/c)⁴ + gL_n⁻¹ is the growth rate for the VASIMR DANISHA© with the axial velocity taken into consideration.
dγ/dVₓ = [1/(lωC_h²)](lω_pi/c)⁴; the derivative of the growth rate with respect to the axial velocity is positive, showing that the axial velocity stabilizes the system.
γ = [ν/(lC_h²)](lω_pi/c)⁴ + gL_n⁻¹ is the growth rate for the VASIMR DANISHA© with the FLR taken into consideration.
dγ/dν = [1/(lC_h²)](lω_pi/c)⁴; similarly, the derivative of the growth rate with respect to the FLR is positive, giving the stabilizing effect for the system.
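The growth-rate expression above can be sketched numerically; every input value below is an illustrative placeholder, not a measured device value:

```python
def danisha_growth_rate(Vx, l, omega, Ch, omega_pi, c, g, Ln):
    """gamma = (Vx/(l*omega*Ch**2))*(l*omega_pi/c)**4 + g/Ln,
    the axial-velocity growth rate quoted above."""
    return (Vx / (l * omega * Ch ** 2)) * (l * omega_pi / c) ** 4 + g / Ln

gamma = danisha_growth_rate(Vx=1e5, l=1.0, omega=1e4, Ch=13.0,
                            omega_pi=1e6, c=3e8, g=9.8, Ln=4.34)
```

With these placeholders the (lω_pi/c)⁴ factor is tiny, so the g/L_n term dominates.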

P = pRC_hγ/σ     (4.4.8)
The RF power dissipation is given by
jₓEₓ ≈ (ω_pe²/ν_e)(I_ce²/c⁴)     (4.4.9)
The power lost via gas excitation and subsequent line radiation can be estimated as
P_red ≈ 8π(T_e/m_e)^(3/2) (m_e n₀ne⁴/(T_e E_exe)) exp[−E_exe/T_e]     (4.4.10)
I_ce ≈ c²(m_e/L)(4ΛLC_h n₀e⁴ω_ce/(3T_e E_exe ω))^(1/2) (c/(Lω)) exp[−E_exe/T_e]     (4.4.11)
where
exp[−E_exe/T_e] ≈ (8πn₀³L³m_iσᵢe⁶/(m_e E_exe⁴))^(1/2)     (4.4.12)
so that
I_ce ≈ 4n₀ce(2Λω_ce Lm_iσᵢC_h e¹⁰/(3ω³T_e³E_exe⁵))^(1/2), which is the square root of the Hazarika constant times the VASIMR value.
VASIMR DANISHA©

γ = 1/τ
This provides the confinement time for the VASIMR DANISHA© as
τ = 1/γ = [(Vₓ/(C_h²lω))(lω_pi/c)⁴ + gL_n⁻¹]⁻¹ in seconds     (4.4.13)
This is 7 times the VASIMR value, giving 7 × 8,000 hrs = 56,000 hrs = 2,333.33 days = 6.392 years.

For the power of DANISHA©:
P_DANISHA = m_e nC_h(3ω_c/ω_pi)³(c/(lω_pi))⁷     (4.4.14)

C_h = Hazarika constant for DANISHA VASIMR

The first bracketed term is a velocity component; the second bracketed term is a constant non-dimensional quantity. Taking the VASIMR baseline as 1,400 N/m, we get for DANISHA©

P(DANISHA©) = 18.525 times the VASIMR = 18.525 × 1,400 N/m = 25,935 N/m     (4.4.15)
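Both of the quoted scalings (the 7× lifetime and the 18.525× power figure) are simple arithmetic and can be checked directly:

```python
# Lifetime: claimed 7x the VASIMR's 8000-hour baseline.
hours = 7 * 8000
days = hours / 24.0
years = days / 365.0

# Power figure: claimed 18.525x the VASIMR baseline of 1400 N/m.
value = 18.525 * 1400.0
```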

[Plot: normalized growth rate vs. normalized wave number, comparing parabolic and planar systems for sheared axial velocity]
Fig.1. Series 1: sheared axial velocity with parabolic coordinates; Series 2: sheared axial velocity with planar coordinates.

[Plot: sheared axial velocity vs. growth rate in parabolic coordinates]
Fig.2. For V = 10⁵, 2×10⁵, 3×10⁵, 4×10⁵

[Plot: FLR in parabolic coordinates, growth rate vs. wave number]
Fig.3. Series 1: FLR = 1.0; Series 2: FLR = 2.0

We observe that the sheared axial velocity suppresses the instability more in parabolic coordinates than in planar coordinates, as shown in Fig.1. For parabolic coordinates with sheared axial velocity V = 10⁵, 2×10⁵, 3×10⁵, 4×10⁵, the result remains static above V = 2×10⁵, as shown in Fig.2. The FLR stabilizes the instability for the normalized value 2.0, whereas for FLR = 1.0 it shows some instability in the initial stage and then stabilizes at higher wave numbers, as exhibited in Fig.3. The results affirm those given by Qui et al. (2002).

For the derivation of Hazarika's constant

The basic equations governing DANISHA are as follows:
ηJ = E + (1/c)v × B,  ∇P = ∇p     (4.4.16)
E ≡ (0, E, 0),  B ≡ (B_θ, 0, B_φ)     (4.4.17)


v ≡ (v⊥, 0, v_c),  p ≡ (p(r), 0, 0)     (4.4.18)
Here η is the finite conductivity, T_e the electron temperature, E the electric field, v⊥ᵢ the perpendicular ion velocity, χ the magnetic diffusivity, µ the viscosity, p_e the electron pressure, B the magnetic field, pᵢ the ion pressure, and q the safety factor.

According to the geometry of the considered device, the magnetic field also changes. The magnetic field coils are arranged around the DANISHA hub in the toroidal way; the toroidal magnetic field is B_θ → 6B_θ and the poloidal magnetic field is B_φ → B_φ(1 + 4π + sin 3φ sin θ − 2 sin φ − 2 sin θ). The total magnetic field is given by
B = ê_θB_θ + ê_φB_φ     (4.4.19)
B = 6B_θ + B_φ(1 + 4π + sin 3φ sin θ − 2 sin φ − 2 sin θ)     (4.4.20)
B = B_θ[6 + s(1 + 4π + sin 3φ sin θ − 2 sin φ − 2 sin θ)]     (4.4.21)
where s is the magnetic ratio.
Therefore the beta parameter for the DANISHA hub is β = 8πnT/(B_θ²[6 + s(1 + 4π + sin 3φ sin θ − 2 sin φ − 2 sin θ)]²); the toroidal beta is
β_θ = 8πnT/(B_θ²[6 + s(1 + 4π + sin 3φ sin θ − 2 sin φ − 2 sin θ)]²)     (4.4.22)
and the poloidal beta, β_φ = 8πnTs²/(B_φ²[6 + s(1 + 4π + sin 3φ sin θ − 2 sin φ − 2 sin θ)]²), is Hazarika's factor for the DANISHA hub.
For equilibrium we have
∇p = J × B     (4.4.23)
∇p = −(1/(2R²C_h²))∇[16R²C_h²B_θ² + (1 + B_φ²C_h²/(16B_θ²))R²C_h²] + (U/(c²ηR²C_h²))∇[φ × B_φ]     (4.4.24)
Here
C_h = (1 + 4π + sin 3φ sin θ − 2 sin φ − 2 sin θ) is Hazarika's constant for DANISHA     (4.4.25)
This is Hazarika's DANISHA formula for equilibrium, where U is the feedback loop voltage, taken to be absent in the present study.
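The constant C_h is a pure function of the two angles, so it is easy to evaluate; a minimal sketch (the angle pair is an illustrative choice):

```python
import math

def hazarika_constant(theta, phi):
    """C_h = 1 + 4*pi + sin(3*phi)*sin(theta) - 2*sin(phi) - 2*sin(theta),
    eq. (4.4.25)."""
    return (1.0 + 4.0 * math.pi + math.sin(3.0 * phi) * math.sin(theta)
            - 2.0 * math.sin(phi) - 2.0 * math.sin(theta))

def total_field(B_theta, s, theta, phi):
    """B = B_theta * [6 + s*C_h], eq. (4.4.21) with C_h substituted."""
    return B_theta * (6.0 + s * hazarika_constant(theta, phi))

Ch = hazarika_constant(math.pi / 4, math.pi / 4)  # about 11.24 at these angles
```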

The resistivity η can be expressed through the electron-ion collision frequency as η = (m/(e²n))ν_ei; with this we get the Hazarika diffusion term as
D_H = ν_ei mTc²/(e²B²[6 + sC_h]²) = ν_ei r_L²[6 + sC_h]⁻²,
where r_L is the finite ion Larmor radius for the DANISHA hub. D_m = ηc² is the magnetic diffusion coefficient describing the skin effect. The magnetic diffusion for DANISHA is
D_mH = ν_ei mTc²/(e²B²[6 + sC_h]²) = ν_ei r_L²[6 + sC_h]⁻²,
where r_L is the finite Larmor radius.

BANANA REGIME

Even without collisions, all the particles in the DANISHA plasma could move freely around the quad (four) tori along the field lines. The magnetic field varies along the field lines over a length of order qR(1 + 4π + sin 3φ sin θ − 2 sin φ − 2 sin θ); a particle sees magnetic mirrors at a distance qR(1 + 4π + sin 3φ sin θ − 2 sin φ − 2 sin θ), and the mirror strength ratio (∆B/B) is given by the inverse aspect ratio:
(∆B/B) ≈ r/(R(1 + 4π + sin 3φ sin θ − 2 sin φ − 2 sin θ))     (4.4.26)
The bracketed term in the denominator is Hazarika's constant for DANISHA.
Particles are trapped between such mirrors; according to the law of energy conservation,
µB + (1/2)mv_c² = constant, or µ∆B + ∆[(1/2)mv_c²] = 0, which holds for
∆[(1/2)mv_c²] = [(1/2)mv_c²]_max. Here µ = (1/2)mv⊥², which gives us the magnetic moment.
∆B/B = −v_c²/v⊥² = r/(R(1 + 4π + sin 3φ sin θ − 2 sin φ − 2 sin θ)) << 1     (4.4.27)

The drift is in the vertical direction with velocity
v_drift = mv⊥²/(eB[6 + sC_h]RC_h) = v⊥²τ_PA/(RC_h[6 + sC_h]),
where τ_PA⁻¹ = eB_θ/m is the cyclotron frequency. The time required for particles to fly from one mirror to another is qRC_h/v_c. The particles move a distance given by the skin depth, δ, out of a magnetic surface in the vertical direction.

SKIN DEPTH

δ = v_drift(qRC_h/v_c) = mv⊥²q/(eBv_c[6 + sC_h]) = r_L(v⊥/v_c)(q/[6 + sC_h])
δ = r_L qR^(1/2)C_h^(1/2)/([6 + sC_h]r^(1/2)) is Hazarika's diffusion coefficient     (4.4.28)

where
r_L = mv⊥/(eB[6 + sC_h]) is the finite Larmor radius (FLR) for the DANISHA hub. Here we see that the skin depth is a factor R^(1/2)C_h^(1/2) greater than in the Tokamak. This thickness of the banana-like orbits we may call the crescent of a moon. If collisions are considered, then reversal of v_c occurs, with v_c << v⊥. This means that a banana thickness replaces the gyro radius of plane geometry; the trapped-particle collision frequency is then
ν_t = (v²/v_c²)ν ≈ (RC_h/r)ν,
and the number of trapped particles, proportional to the v_c interval given by the trapping condition, is
n_t = n(v_c/v) = n(r/RC_h)^(1/2)     (4.4.29)

HAZARIKA'S DIFFUSION COEFFICIENT

A stochastic process with δ as the step size then yields the diffusion coefficient
D_B = δ²ν_t(n_t/n) = r_L²ν_t q²[RC_h/r]^(3/2); this is the Bohm diffusion.
D_H = r_L²νq², Hazarika's diffusion coefficient;
D_H = D_ps/[6 + sC_h]², Hazarika's diffusion coefficient, and is equal to 12.63787, where D_PS is the Pfirsch-Schlüter diffusion coefficient. The Bohm diffusion now becomes
D_B = D_H[RC_h/r]^(3/2)     (4.4.30)
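The relation between the coefficients can be sketched as follows; the D_ps, s, C_h, R, and r values are illustrative placeholders:

```python
def hazarika_diffusion(D_ps, s, Ch):
    """D_H = D_ps / [6 + s*C_h]**2, Hazarika's diffusion coefficient."""
    return D_ps / (6.0 + s * Ch) ** 2

def bohm_diffusion(D_H, R, Ch, r):
    """D_B = D_H * (R*C_h/r)**(3/2), eq. (4.4.30)."""
    return D_H * (R * Ch / r) ** 1.5

D_H = hazarika_diffusion(D_ps=1.0, s=0.5, Ch=12.0)   # divisor [6 + 6]**2 = 144
D_B = bohm_diffusion(D_H, R=3.0, Ch=12.0, r=1.0)     # factor 36**1.5 = 216
```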
HAZARIKA'S REGIME

This condition stands valid for particle trapping inhibited by collisions, i.e.
ν_t qRC_h/v_c < 1     (4.4.31)
ν³q(v²R²C_h²/(v_c r²)) = A^(3/2)qRC_h/(rλ_D), where A = (v/v_c)²     (4.4.32)
or λ_D > A^(3/2)qRC_h, where λ_D is the mean free path; thus the regime is
qRC_h < λ_D < A^(3/2)qRC_h     (4.4.33)
D_B, D_H, D_PS ≈ 1/λ_D
One has D_B(λ_D = A^(3/2)qRC_h) = D_H[λ_D = qRC_h], where D_B is the Bohm diffusion, and D_PS(λ_D = qR).

The inner part is the plateau regime (flat region); then there is a smooth transition from the banana regime to the Pfirsch-Schlüter regime, and then to Hazarika's regime.

It culminates with two effects of importance


(I) Bootstrap current
(II) Ware effect

BOOTSTRAP CURRENT

The induction effect of the high diffusion velocity leads to a current density in the toroidal direction:
J_B = (4v_B/η)B_θ = (−c/(4B_θ))(dp/dr)[r/RC_h]^(1/2)(1/[4 + qC_h])     (4.4.34)
As the poloidal current is absent, we get terms with the toroidal field only:
(4/r)(d/dr)(rB_θ) = (−cπ/B_θ)(dp/dr)[r/RC_h]^(1/2)(1/[6 + sC_h])     (4.4.35)
The high diffusion velocity leading to a current density in the toroidal direction gives the toroidal beta as
β_θ = 8πp/(B_θ²[6 + sC_h]²)     (4.4.36)

The diffusion velocity should not exceed the magnetic diffusion velocity in a plasma with finite resistivity. For the banana regime β < 1/(A^{3/2} q²) and β = β_pol/(q² A²), which are in agreement with earlier results. Pfirsch-Schlüter diffusion is expressed by v_D ≈ q² v_cl; the classical diffusion velocity is given by v_cl = (1/2) β v_mag, with v_mag the magnetic diffusion velocity. Since v_D < v_mag, we get the plasma beta as

β < 2 v_cl / ([6 + s C_h] v_D)  =>  v_D ≈ q² v_cl / [6 + s C_h]

which is known as Hazarika's diffusion expression. From this we get β_θ < 1, and therefore β < A² / [6 + s C_h], which differs considerably from the earlier results obtained by other authors.

WARE EFFECT

Here the usual E/B drift is replaced by v_D = cE / (B_θ [6 + s C_h]) for the Ware effect in the DANISHA hub.

DANISHA HALL THRUSTER

v_d = cE / (B_θ [6 + s C_h])   (4.4.37)

Drift velocity

v_d = cφ / (B_θ [6 + s C_h] kT)   (4.4.38)

Thrust = F = m v_d = m c φ / (B_θ [6 + s C_h] kT)   (4.4.39)

F = φ / (ω_c [6 + s C_h] kT) in newton units   (4.4.40)
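Eq. (4.4.40) can be evaluated directly. A minimal sketch; all inputs are placeholders, and consistent units are assumed so that F comes out in newtons, as the text states:

```python
def danisha_thrust(phi, omega_c, s, C_h, kT):
    """Thrust per eq. (4.4.40): F = phi / (omega_c * (6 + s*C_h) * kT).
    Consistent units are assumed so that F is in newtons."""
    return phi / (omega_c * (6.0 + s * C_h) * kT)
```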

CONFINEMENT TIME

β_pol < A^{1/2} for impurity transport as long as the temperature profile is flatter than that given by T n², but it is modified by the Hazarika factor C_h. If we put C_h = 1 in

v²_thH τ_DH τ_MH > q² R² C_h²

we get v²_thH τ_DH τ_MH > q² R², as given by Samain and Werkoff (1977); τ_DH is the deflection time and τ_MH is the Maxwellian time for hydrogen ions.


τ_Ee = 0.97 × 10⁻¹⁶ n_e r³ R C_h² B_φ / (T_e^{1/2} I_p), also for experimental purposes.   (4.4.41)
The present study shows that the DANISHA hub performs better than the tokomak, as depicted in Fig.1 and Fig.2. Fig.1 shows that the DANISHA hub is broader than the tokomak case in particle trapping. Fig.2 shows that it takes less confinement time than the tokomak case and is epicentric, whereas the tokomak takes more time to reach the stabilized condition. The confinement will therefore persist for a longer period without any instability being generated.
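The scaling (4.4.41) is straightforward to compute; the unit system and the exact placement of the squared factor follow the reconstruction above and should be treated as assumptions, and the numbers below are purely illustrative:

```python
def tau_Ee(n_e, r, R, C_h, B_phi, T_e, I_p):
    """Energy confinement time per eq. (4.4.41):
    tau_Ee = 0.97e-16 * n_e * r**3 * R * C_h**2 * B_phi / (T_e**0.5 * I_p)."""
    return 0.97e-16 * n_e * r**3 * R * C_h**2 * B_phi / (T_e**0.5 * I_p)
```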

PARTICLE TRAPPING IN HAZARIKA’S (BANANA) REGIME


Here one observes that the trapped-particle region exhibited by Hazarika's (banana) regime is broader than in the Tokomak case (Fig.1).

[Fig.1: polar plot omitted in this copy]

Fig.1. The particles are trapped in the shaded region of Hazarika's (banana) regime, calculated from the skin-depth eqn. (11), with q = 2.5, R/r = 1.5, r_L = 3.5, θ = 0.1, φ = 0.2 in radians.

COMPARISON OF TOKOMAK AND DUO TRIAD TOKOMAK COLLIDER (DTTC)

[Fig.2: polar plot omitted in this copy; Series 1: Tokomak, Series 2: DANISHA (HUB)]

Fig.2. Comparison of Hazarika's (banana) regime for the DANISHA (HUB) and the Tokomak, shown for θ = 0.1, φ = 0.2 in radians, q = 2.5, R = 1.5 in eqn. (14). The graph shows that the confinement time required for the DANISHA (HUB) is much less than in the Tokomak case.


PLOT OF FLR VS SKIN DEPTH

[Fig.3: line plot omitted in this copy; y-axis: skin depth (0-0.7), x-axis: FLR (×0.1); Series 1: Tokomak, Series 2: DTTC]

Fig.3. Comparison of the skin depth of the tokomak and DANISHA for different FLR.

Fig.3 shows that the skin depth is smaller for the DANISHA case than for the tokomak at every FLR value.

Condition for particle trapping: the speed must satisfy v² ≤ 2rg; the motion of the particle is then oscillatory and the particle never loses contact with the circular path. If v² > 2rg, the particle leaves the circle and describes a parabolic path. If v² = 2rg, the motion of the particle becomes oscillatory and it continues until it attains the diametrical path by following the banana (Hazarika's) regime path.
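The three cases above can be written as a small classifier on v² versus 2rg; g is taken as standard gravity and all numbers are illustrative:

```python
def vertical_circle_motion(v, r, g=9.80665):
    """Compare v**2 with 2*r*g (the trapping condition stated above) and
    classify the resulting motion on the circular path."""
    v2, threshold = v * v, 2.0 * r * g
    if v2 < threshold:
        return "oscillatory (trapped)"
    if v2 > threshold:
        return "leaves the circle (parabolic path)"
    return "marginal (diametrical / banana-regime path)"
```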

The present study relates to the earlier work of Pfirsch (1978), Pfirsch and Schlüter (1962), and Samain and Werkoff (1977). If we substitute C_h = 1 in the major radius, only R remains, and we recover the results of Pfirsch (1978). The present study contains enhancements in the skin depth, banana regime, bootstrap current, Ware effect, and diffusion coefficient (Hazarika's diffusion coefficient and Hazarika's factor) for the DANISHA hub. The DANISHA Hall thruster provides 2.646 times more thrust than the SPT-100 thruster, and the power is 18.5 times that of the SPT-100. The results are in agreement with Ning et al. (2009).

APPLICATIONS:

In DANISHA one can observe several cases which govern the system as the polarity of the magnetic field changes: (I) current generation, (II) rockets and missiles, (III) hybrid technology, (IV) computers and television.

Case I: FOR ELECTRICITY GENERATION


As the polarity of the magnetic field changes, the flow of plasma also changes. If the magnetic field in all four tori is in the clockwise direction, there will be a collisional effect in the collider region of DANISHA, giving rise to more heat and friction and slowing the motion of the plasma in the collider region. Afterwards the plasma becomes consistent in every cycle of flow, which can be observed in this region as well as in DANISHA as a whole. Bhatia and Hazarika (1995) showed that in space, self-gravitating superposed plasmas flowing past each other stabilize the system. This can be used to obtain current density in enormous quantity, which is useful for the generation of electricity.

POWER LAW:
Here the definition of power is used to derive the power law.


Power = rate of change of work done: P = dW/dt   (4.4.42)

Work done = force × distance; force = pressure per unit area,

F = p/(σA), where p is the pressure and σA is the cross-sectional area of DANISHA,

and γ = d/dt is the growth rate. With W = (pR/σA) C_h, we get the power as

P = (γ p R / σA) C_h, in MW   (4.4.43)
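A sketch of the power law (4.4.43); γ, p, R, C_h and σA are placeholders, and the MW reading of the output depends on the text's own unit choices:

```python
def danisha_power(gamma, p, R, C_h, sigma_A):
    """Power per eq. (4.4.43): P = gamma * p * R * C_h / sigma_A, with gamma
    the growth rate (1/s), p the pressure, and sigma_A the cross-sectional
    area of DANISHA."""
    return gamma * p * R * C_h / sigma_A
```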

Case II: ROCKET AND MISSILES


When the polarity of the magnetic field changes, say running anti-clockwise in one torus and clockwise in the other, the flow of plasma is accelerated in the collider region of DANISHA and may or may not become turbulent; this is useful for propulsion systems in rockets, missiles, spacecraft, etc. The velocity drift in this case is C_h times that of the Tokomak case. Here the plasma acts as superposed flows, one over the other, enhancing the velocity of the resultant plasma, as observed by several researchers in the past (Bhatia and Hazarika, 1996).

Case III: HYBRID TECHNOLOGY


As in case II, the same type of system results in a different technology, prevalent in many places, known as hybrid technology. The accelerated neutrons that can be extracted from the DUO TRIAD TOKAMAK COLLIDER (DANISHA) HUB can be used in a fission chamber, where those neutrons are needed; for the fusion process the fast neutrons are waste products that heat the plasma chamber, so they can be collected through neutron-collecting blankets and channelled to uranium- or plutonium-based nuclear reactors.

Case IV: COMPUTERS AND TELEVISION


The growth rate is measured per second (Hz), which gives the speed of compiling or formation of plasma. Used in computer chips, it would give the processing speed of the microprocessor. We can thus enhance the speed of a normal microprocessor by 1.5 times: if the speed is 3.6 GHz at present, the microprocessor speed becomes 5.4 GHz, and its calculation speed becomes 5.4 gigaflops (5.4 giga floating-point operations per second). Used in a supercomputer with a calculation speed of 1.73 teraflops, the resultant will be about 2.6 teraflops (2.6 × 10¹² floating-point operations per second). We can also enhance the resolution of computer monitors and plasma TVs, whose confinement time can be reduced with better resolution. The resolution is 24.75% better than that of the best presently available computer monitor or plasma TV. One particular brand of plasma and LCD TVs projects a 1:1000000 resolution; in this particular case it would be 1:1500000. There are no blurred images: a crystal-clear screen can be viewed from a 172-degree wide angle without any diminishing of the image from a side viewing angle. All this can be done using nanotechnology and piezo-electronics.
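The enhancement factor claimed here can be sketched as a one-line scaling; the 1.5× factor is the text's claim, not a derived result:

```python
def enhanced_clock_ghz(base_ghz, factor=1.5):
    """Apply the text's claimed 1.5x enhancement to a base clock speed."""
    return base_ghz * factor

print(enhanced_clock_ghz(3.6))  # 3.6 GHz -> 5.4 GHz
```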

REFERENCES
52. Hazarika, A.B.R.: Submitted to Physics of Plasmas (2009a); 13th National Symposium on Plasma Science & Technology, Rajkot (1998); 16th National Symposium on Plasma Science & Technology, Guwahati (2001)
53. Hazarika, A.B.R.: Submitted to Physics of Plasmas (2009b); 18th National Symposium on Plasma Science & Technology, Ranchi (2003); 19th National Symposium on Plasma Science & Technology, Bhopal (2004)
54. Hazarika, A.B.R.: Submitted to Physics of Plasmas (2009c); Proceedings of the 20th National Symposium on Plasma Science & Technology, Cochin Univ. of Sci. & Technology, Cochin (2005)
55. Hazarika, A.B.R.: Submitted to Physics of Plasmas (2009d); Proceedings of the 3rd Technical Meeting of the International Atomic Energy Agency on Theory of Plasma Instabilities, Univ. of York, York, UK (2007), 31pp
56. Pfirsch, D.: Theoretical and Computational Plasma Physics (1978), IAEA-SMR-31/21, p. 59
57. Pfirsch, D., Schlüter, A.: Max-Planck-Institut für Physik und Astrophysik, Munich, Rep. MPI/PA/7/62 (1962)
58. Kerner, W.: Z. Naturforsch. 33a, 792 (1978)
59. Samain, A., Werkoff, F.: Nucl. Fusion 17, 53 (1977)
60. Bhatia, P.K., Hazarika, A.B.R.: Phys. Scr. 53, 57 (1996)
61. Hazarika, A.B.R.: Proceedings of the National Symposium on Plasma Science and Technology (2009), Hamirpur (HP)
62. Bhatia, P.K., Hazarika, A.B.R.: J. Ind. Acad. Maths. 29(1), 141 (2007)
63. Guangde, J., Lin, H., Xiao-Ming, Q.: Plasma Sci. & Tech. 7(3), 2805 (2005)
64. Xiao-Ming, Q., Lin, H., Guangde, J.: Plasma Sci. & Tech. 4(5), 1429 (2002)
65. Ning, Z., Yu, D., Li, H., Yan, G.: Plasma Sci. & Tech. 11(2), 194 (2009)
66. De Groot, J.S., Toor, A., Goldberg, S.M., et al.: Phys. Plasmas 4, 1519 (1997)
67. Haines, M.G.: IEEE Trans. Plasma Sci. 26, 1275 (1998)
68. Shumlak, U., Hartman, C.W.: Phys. Rev. Lett. 75, 3285 (1995)
69. Arber, T.D., Coppins, M., Scheffel, J.: Phys. Rev. Lett. 77, 1766 (1996)
70. Ganguly, G.: Phys. Plasmas 4, 2322 (1997)
71. Qui, X.M., Huang, L., Jian, G.D.: Chin. Phys. Lett. 19, 217 (2002)
72. Turchi, P.J., Baker, W.L.: J. Appl. Phys. 44, 4936 (1973)
73. Morozov, A.I.: Introduction to Plasma Kinetics, Fizmat, Moscow (2006)
74. Choueiri, E.Y.: Phys. Plasmas 8, 1411 (2001)

4.5. DIFFUSION ASSOCIATED NEOCLASSICAL INDIGENOUS SYSTEM


OF HALL ASSEMBLY (DANISHA)

The present study concerns a different geometry of Hall thrusters, in which I have tried to obtain better results than the simple Hall thrusters available at present, with the future next-generation device Duo Triad Tokomak Collider (DTTC) hub, by using a new type of code, Diffusion Associated Neoclassical Indigenous System of Hall Assembly (DANISHA). Title:
Hall effect plasma thruster

In a Hall effect thruster, especially one for use in maneuvering satellites, a stream or plume of ions used to produce the thrust is deflected, by appropriate adjustment of a magnetic field, so as to steer the satellite or other vehicle. The channel along which the ions are accelerated is preferably flared outwardly at its open end so as to avoid the erosion which would otherwise be caused by the deflection. The adjustment of the magnetic field is preferably achieved by dividing an outer magnetic


Hall thrusters were studied independently in the US and the USSR in the 1950s and '60s. However, the concept of the Hall thruster was developed into an efficient propulsion device only in the former Soviet Union, whereas in the US scientists focused instead on developing gridded ion thrusters. Two types of Hall thruster were developed in the Soviet Union:

(A) Thrusters with a wide acceleration zone, SPD (Russian) or SPT (Stationary Plasma Thruster), at Design Bureau Fakel.

(B) Thrusters with a narrow acceleration zone, DAS (in English: TAL, Thruster with Anode Layer), at the Central Research Institute for Machine Building (TsNIIMASH).

Soviet and Russian SPD thrusters


The common SPD design was largely the work of A. I. Morozov. SPD engines have been operated since 1972. They were mainly used for satellite stabilization in the north-south and east-west directions. Between then and the late 1990s, 118 SPD engines completed their missions and some 50 continued to be operated. The thrust of the first generation of SPD engines, the SPD-50 and SPD-60, was 20 and 30 mN respectively. In 1982 the SPD-70 and SPD-100 were introduced, their thrusts being 40 mN and 83 mN. In post-Soviet Russia, high-power (a few kilowatts) SPD-140, SPD-160, SPD-180 and T-160, and low-power (less than 500 W) SPD-35 thrusters were introduced. The SPT-100 has a lifetime of 8000 hrs and an efficiency of 52%. Soviet and Russian DAS-type engines include the D-38 and D-55.

Soviet-built thrusters were introduced to the West in 1992 after a team of electric propulsion specialists,
under the support of the Ballistic Missile Defense Organization, visited Soviet laboratories and
experimentally evaluated the SPD-100 (i.e., a 100 mm diameter SPT thruster). Over 200 Hall thrusters have
been flown on Soviet/Russian satellites in the past thirty years. They were used mainly for station keeping
and small orbital corrections. Currently Hall Thruster research, design, and theoretical modeling are led by
experts at NASA Glenn Research Center and the Jet Propulsion Laboratory. A considerable amount of
development is being conducted in industry, such as Aerojet and Busek Co.

This technology was used on the European lunar mission SMART-1 and is used on a number of
commercial geostationary satellites.

What's a Hall thruster? The Hall thruster is a type of plasma-based propulsion system for space vehicles. The amount of fuel that must be carried by a satellite depends on the speed with which the thruster can eject it. Chemical rockets have a very limited exhaust speed; plasmas can be ejected at much higher speeds, so less fuel need be carried on board. The Hall thruster was invented in the late 1950s. Until the mid-1990s it was developed primarily by the Russians, who during the past 30 years placed more than 100 Hall thrusters in orbit. However, the vast majority of satellites worldwide have relied on chemical thrusters and, to a lesser extent, arcjet thrusters and ion thrusters.


A conventional electrostatic ion thruster consists of two grids, an anode and a cathode, between which a
voltage drop occurs. Positively charged ions accelerate away from the anode toward the cathode grid and
through it. After the ions get past the cathode, electrons are added to the flow, neutralizing the output to
keep it moving. A thrust is exerted on the anode-cathode system, in a direction opposite to that of the flow.
Unfortunately, a positive charge builds up in the space between the grids, limiting the ion flow and,
therefore, the magnitude of the thrust that can be attained.

In a Hall thruster, electrons injected into a radial magnetic field neutralize the space charge. The
magnitude of the applied magnetic field is approximately 100-200 gauss, strong enough to trap the
electrons by causing them to spiral around the field lines in the coaxial channel. The magnetic field and a
trapped electron cloud together serve as a virtual cathode. The ions, too heavy to be affected by the field,
continue their journey through the virtual cathode. The movement of the positive and negative electrical
charges through the system results in a net force (thrust) on the thruster in a direction opposite that of the
ion flow.

Existing Hall thrusters can produce large jet velocities of 10-30 km/s with input power ranging from hundreds of watts to tens of kilowatts. For state-of-the-art thrusters operating at powers above a kilowatt, 50-60% of the input electric power goes into the kinetic power of the plasma jet. These thrusters are capable of producing thrust in the range 0.1-1 N. Since ion acceleration takes place in quasi-neutral plasma, Hall thrusters are not limited by space-charge build-up; hence higher current and thrust densities than in conventional ion thrusters can be achieved at discharge voltages from hundreds of volts to a few kilovolts. With such performance capabilities, Hall thrusters can be used to keep satellites in geosynchronous orbit (GEO), to compensate for atmospheric drag on satellites in low-earth orbit (LEO), to raise a satellite from LEO to GEO, and for interplanetary missions. Besides space applications, Hall thrusters can also be useful for industrial applications such as plasma processing of materials.
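The quoted figures are mutually consistent: if a fraction η of the input electric power goes into jet kinetic power, then P_jet = ½Fv gives F = 2ηP/v. A minimal sketch with illustrative numbers inside the quoted ranges (not values from the text):

```python
def thrust_from_jet_power(power_w, efficiency, v_exhaust):
    """If a fraction `efficiency` of the input electric power becomes jet
    kinetic power, P_jet = 0.5*F*v, so F = 2*efficiency*power/v (newtons)."""
    return 2.0 * efficiency * power_w / v_exhaust

# 1.35 kW in, 55% efficiency, 15 km/s exhaust speed -> roughly 0.1 N thrust.
print(thrust_from_jet_power(1350.0, 0.55, 15000.0))
```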


The Hall Thruster Concept

"Princeton Plasma Physics Laboratory: Fueling the Future" presents an overview of the Laboratory's research program, of PPPL's current major fusion experiment, the National Spherical Torus Experiment, and of fusion devices proposed for the future, including the National Compact Stellarator Experiment, being built at PPPL, and the international ITER project; information on the application of plasma physics to near-term problems is also presented. The essential working principle of the Hall thruster is that it uses an electrostatic potential to accelerate ions to high speeds. In a Hall thruster the attractive negative charge is provided by an electron plasma at the open end of the thruster instead of a grid. A radial magnetic field of a few milliteslas is used to hold the electrons in place, where the combination of the magnetic field and an attraction to the anode forces a fast circulating electron current around the axis of the thruster, with only a slow axial drift towards the anode.


Hall thrusters are largely axially symmetric; a schematic cross-section containing that axis is shown in the image to the right. An electric potential on the order of 300 volts is applied between the anode and cathode. The central spike forms one pole of an electromagnet and is surrounded by an annular space, around which is the other pole of the electromagnet, with a radial magnetic field in between. The propellant, such as xenon gas, is fed through the anode, which has numerous small holes in it to act as a gas distributor. Xenon propellant is used because of its high atomic weight and low ionization potential. As the neutral xenon atoms diffuse into the channel of the thruster, they are ionized by collisions with high-energy circulating electrons (10-20 eV, or 100,000 to 250,000 °C). Once ionized, the xenon ions typically have a charge of +1, though a small fraction (~10%) are +2. The xenon ions are then accelerated by the electric field between the anode and the cathode. The ions quickly reach speeds of around 15,000 m/s, for a specific impulse of 1,500 seconds (15 kN·s/kg). Upon exiting, however, the ions pull an equal number of electrons with them, creating a plume with no net charge.
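The quoted exhaust speed and specific impulse are related by Isp = v/g0, which a one-liner confirms:

```python
G0 = 9.80665  # standard gravity, m/s^2

def specific_impulse_s(v_exhaust):
    """Specific impulse in seconds from exhaust speed: Isp = v / g0."""
    return v_exhaust / G0

print(specific_impulse_s(15000.0))  # ~1530 s, matching the quoted ~1,500 s
```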

The radial magnetic field is designed to be strong enough to substantially deflect the low-mass electrons, but
not the high-mass ions which have a much larger gyro radius and are hardly impeded. The majority of
electrons are thus stuck orbiting in the region of high radial magnetic field near the thruster exit plane,
trapped in E×B (axial electric field and radial magnetic field). This orbital rotation of the electrons is a
circulating Hall current and it is from this that the Hall thruster gets its name. Collisions and instabilities
allow some of the electrons to be freed from the magnetic field and they drift towards the anode. About
30% of the discharge current is an electron current which doesn't produce thrust, which limits the energetic
efficiency of the thruster; the other 70% of the current is in the ions. Because the majority of electrons are
trapped in the Hall current, they have a long residence time inside the thruster and are able to ionize almost
all (~90%) of the xenon propellant. The ionization efficiency of the thruster is thus around 90%, while the
discharge current efficiency is around 70% for a combined thruster efficiency of around 63% (= 90% ×
70%).The magnetic field thus ensures that the discharge power predominately goes into accelerating the
xenon propellant and not the electrons, and the thruster turns out to be reasonably efficient. Compared to
chemical rockets the thrust is very small, on the order of 80 mN for a typical thruster. For comparison, the
weight of a coin like the US quarter or a 20-cent Euro coin is approximately 60 mN.
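The efficiency bookkeeping in this paragraph is just a product, which can be sketched as:

```python
def combined_thruster_efficiency(ionization=0.90, discharge_current=0.70):
    """Combined efficiency as the product quoted in the text:
    ~90% ionization x ~70% discharge-current efficiency ~ 63%."""
    return ionization * discharge_current

print(combined_thruster_efficiency())  # ~0.63
```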

However, Hall thrusters operate at the high specific impulse that is achieved with ion thrusters. One
particular advantage of Hall thrusters, as compared to an ion thruster, is that the generation and acceleration
of the ions takes place in a quasi-neutral plasma and so there is no Child-Langmuir charge (space charge)
saturated current limitation on the thrust density, and thus thrust is high for electrically accelerated
thrusters. Another advantage is that these thrusters can use a wider variety of propellants supplied to the
anode, even oxygen, although something easily ionized is needed at the cathode. One propellant that is
starting to be used is liquid bismuth due to its low cost, high mass and low partial pressure.In this study, we
developed three computational techniques for the ECE radiation analysis of the Hall thruster. The first one


is the single-particle approximation analysis, the simplest of the approaches. We modeled the plasma region of the Hall thruster with three parameters: the magnetic field, electron temperature, and electron density distributions. These parameters are constant within a cell. We calculated the radiation with the parameter distributions according to the observation angle; the frequency of a cell is determined by the magnetic field of the cell. This analysis is easy to implement and does not require high computing performance, but its results are not detailed: the radiated electric field is derived from the power, so there is no polarization information on the electric field. We therefore moved on to a more sophisticated analysis, the Particle-In-Cell (PIC) analysis. PIC is for the analysis of microscopic phenomena. Particle motion in the thruster channel region is simulated with the PIC method. We selected electron speeds from the Maxwell-Boltzmann distribution, adopting the Monte-Carlo method for this selection. We solved the Lorentz force equation to obtain the motion of the electrons and analyzed the radiated electric field from the particle motions. Then we took the Fourier transform of the electric field to consider the radiation in the frequency domain. This approach follows from the definition that radiation arises from charge acceleration, and it is a more realistic approach to the plasma. It uses the same parameter distributions, but the parameter in a cell is no longer constant, because of the Monte-Carlo method, and it also yields the polarization information of the radiation. However, we assume in this analysis that the radiation is in free space: the channel plasma is treated as current sources for radiation, and the material constants of the plasma are taken to be those of free space.

4.6. FUZZY DIFFERENTIAL INCLUSION METHOD

RAYLEIGH-TAYLOR INSTABILITY IN MAGNETIC CONFINEMENT TOKAMAK COLLIDER


HUB (MCTC): A CONCEPTUAL DEVICE, WITH THE HELP OF FUZZY DIFFERENTIAL
INCLUSIONS.

A low-β and high-aspect-ratio Magnetic Confinement Tokamak Collider (MCTC) hub is considered for a low-frequency stabilization process, with global Fourier-Bessel function coordinates playing the vital role, as the configuration is governed by transport phenomena which subside the effect of the unstable mode. The present study stabilizes such a system when the density gradient (∇n) acts against gravity in the upward direction, thereby causing the R-T instability. Here the conductivity causes the implosion in the system, which can be stabilized by the sheared flow, the density gradient (∇n), and the Hall current, whereas the finite resistivity, collision frequency and kinematic viscosity destabilize the system. The study is carried out theoretically to obtain the growth rate for the stabilizing process. The transport phenomenon decreases by (1 + sin 3θ sin φ)^{-1/4} over what one has in the classical Tokamak case.

The eigenmode equation is obtained by linearizing Maxwell's equations with the global Fourier-Bessel expansion function ξ(r, θ, φ) for the Rayleigh-Taylor instability (RTI) as


ξ(r, θ, φ, t) = X² X̃ Σ_{l,v} C_{l,v} ξ_{l,v}(ψ) exp[inφ − iγt + ilθ]   (4.6.1)

with

X = [R²(1 + sin 3θ sin φ)² + 2ψ cos θ]^{1/2}

X̃ = [X² − R²(1 + sin 3θ sin φ)² δ]^{1/2}

Near the magnetic axis ψ = 0 the flux surfaces are elliptic with a half-axis ratio.
The flux function is given by

ψ = [p′/(2(1 + α²))] { Z²α² − R²δ(1 + sin 3θ sin φ)² + α²[X² − R²δ(1 + sin 3θ sin φ)] + α²[X² − (R²/4)(1 + sin 3θ sin φ)²] }   (4.6.2)

R is the major radius of the magnetic axis, and X, φ, Z denote the usual cylindrical coordinates.

The pressure profile is given by p = p₀ − p′ψ and a poloidal current profile by

T ≡ XB_φ = R(1 + sin 3θ sin φ)[B₀² + 2ψp′ δ/(1 + α²)]^{1/2}   (4.6.3)

with p₀, p′, B₀, δ, α constants. At the axis, where the flux is ψ = 0,

α² = Z²R′² / [Z² + R′²(1 − δ) + (9/16)R′⁴]
16
At the boundary the flux function becomes the plasma boundary ψ = ψ_b. The inverse aspect ratio is given by

ε = ψ_b/R′² = [Lp⁻¹/(2(1 + α²))] { Z²(α²/R′² − 1) + α²(1 − δ − 2ε cos θ) + α²(3/4 − 2ε cos θ)² }   (4.6.4)

The plasma beta for large aspect ratio is

β_t = [(1 + α²)/2] ε² / [q²(0)(1 − δ)]

β_p = (1 + α²)/(1 + α² − δ)   (4.6.5)

where q(0) is the safety factor at the magnetic axis.

q(0) = 2B_φ / [μJ_φ R(1 + sin 3θ sin φ)]

For equilibrium we need ∇p = J × B:

∇p = ω_c { X̃αδR′²(Lp⁻¹)² / [X(1 + α²)²] + s Lp⁻¹[X²(1 + α²) − R′²δ′] / [X(1 + α²)] }   (4.6.6)

From Maxwell's equations:

(η + ν_c/ρ − (μ/ρ)∇²) J/(ne) = ∇φ + κ∇T + (εη/c) d²B/dt² − gδρ − ∇p − (1/(eN_e)) J × B

On normalizing, after applying eqn. (4.2.2), we get

γ²ω_c(1 + s)/c_s² = ( η + ν_c/ρ − (μ/ρ){ R′ cos θ [ ((1 − δ) + 2ε cos θ)^{-1/2} ((1 − δ) − 2ε sin θ) − (1/2)(1 + 2ε cos θ)(1 − δ + 2 cos θ)^{-3/2} + cos θ ((1 − δ) + 2ε cos θ) ] + l² + n_s² } ) J/(ne) − in_s − gLn⁻¹ − Lp⁻¹ − (κ/(ρ m k_B))(Lp⁻¹)² + (1/(eN_e)) ω_c(J_θ − J_φ s)   (4.6.7)

The first bracket gives the finite conductivity, collision frequency, and kinematic viscosity; the second term the inverse density; the third term the gravity and density-gradient scale length; the fourth term the pressure-gradient scale length; the fifth term the thermal conductivity; and the last term in eqn. (4.6.7) is given by

(1/(eN_e)) ω_c(J_θ − sJ_φ) = (ω_c/(eN_e)) { X̃αδR′²(Lp⁻¹)² / [X(1 + α²)²] + s Lp⁻¹[X²(1 + α²) − R′²δ′] / [X(1 + α²)] }   (4.6.8)

which is the Hall term, with

s = B_θ/B_φ,  Ln⁻¹ = −(1/n) dn/dR,  Lp⁻¹ = −(1/p) dp/dR,  R′ = R(1 + sin 3θ sin φ)

and ω_c the cyclotron frequency. For low frequency γ ≪ ω_c, for the RT mode gLn⁻¹ ≪ 1, and for low β: m_e/m_i ≪ 1.


Hence the growth rate is given by

γ/(gLn⁻¹) = [c_s²/(ω_c(1 + s))] { [η + ν_c/ρ + (μ/ρ)(∇² + l² + n_s²)] J/(negLn⁻¹) − 1 − Lp⁻¹/gLn⁻¹ − (κ/k_B)(Lp⁻¹)²/gLn⁻¹ + [ω_c/(gLn⁻¹eN_e)](J_θ − sJ_φ) }^{1/2}   (4.6.9)

where

J = { X̃αδR′²(Lp⁻¹)² + ω_c(1 + α²)Lp⁻¹[X²(1 + α²) − R′²δ′] } / [Xω_c(1 + α²)²]

∇² = ∂²/∂ψ² + (2/ψ)∂/∂ψ + (1/ψ²)∂²/∂θ² + (cot θ/ψ²)∂/∂θ + [1/(ψ² sin²θ)]∂²/∂φ²

and

∂/∂ψ = 2R′²[X̃ + (1/2)X²X̃⁻¹] cos θ

∂²/∂ψ² = R′² cos θ { X̃⁻¹[(1 − δ) − 2ε sin θ] − (1/2)X²X̃⁻³ + X̃ cos θ }

∂/∂θ = −2ε sin θ R′²[X̃ + (1/2)X²X̃⁻¹]

∂²/∂θ² = −2ε cos θ R′²[X̃ + (1/2)X²X̃⁻¹] + 2ε sin θ R′²(X̃² + X²X̃⁻²)

Thus,

∇²J = ∂²J/∂ψ² + (2/ψ)∂J/∂ψ + (1/ψ²)∂²J/∂θ² + (cot θ/ψ²)∂J/∂θ + [1/(ψ² sin²θ)]∂²J/∂φ²   (4.6.10)
Now, introducing fuzzy logic and the fuzzy differential equation,

∂B/∂t = [f(t, B(t))]_α = ∇ × E

where B is a function of t and α ∈ [0, 1], and

∂²B/∂t² = [f(t, f(t, B(t)))]_α,  ∀α ∈ [0, 1]

Applying this fuzzy modification to the operators, we get, for α = 0,

∇²J = 1 + 2/ψ + 1/ψ² + cot θ/ψ² + 1/(ψ² sin²θ)   (4.6.11)

For α = 1,

∇²J = f{ψ, f(ψ, J(ψ))} + (2/ψ)[f{ψ, J(ψ)}] + (1/ψ²)[f{θ, f(θ, J(θ))}] + (cot θ/ψ²)[f{θ, J(θ)}] + [1/(ψ² sin²θ)][f{φ, f(φ, J(φ))}]   (4.6.12)

Hence, for α = 0, the growth rate γ is given by

γ = gLn⁻¹ [c_s²/(ω_c(1 + s))] { (η + ν_c/ρ) J/(negLn⁻¹) + [μ/(ρnegLn⁻¹)][1 + 2/ψ + 1/ψ² + cot θ/ψ² + 1/(ψ² sin²θ)] + (μ/ρ)(l² + n_s²) J/(negLn⁻¹) − 1 − Lp⁻¹/gLn⁻¹ − (κ/k_B)(Lp⁻¹)²/gLn⁻¹ + [ω_c/(gLn⁻¹eN_e)](J_θ − sJ_φ) }^{1/2}   (4.6.13)

For α = 1, the growth rate γ is given by

γ = gLn⁻¹ [c_s²/(ω_c(1 + s))] { (η + ν_c/ρ) J/(negLn⁻¹) + [μ/(ρnegLn⁻¹)][f(ψ, f(ψ, J(ψ))) + (2/ψ) f(ψ, J(ψ)) + (1/ψ²) f(θ, f(θ, J(θ))) + (cot θ/ψ²) f(θ, J(θ)) + (1/(ψ² sin²θ)) f(φ, f(φ, J(φ)))] + (μ/ρ)(l² + n_s²) J/(negLn⁻¹) − 1 − Lp⁻¹/gLn⁻¹ − (κ/k_B)(Lp⁻¹)²/gLn⁻¹ + [ω_c/(gLn⁻¹eN_e)](J_θ − sJ_φ) }^{1/2}   (4.6.14)

Now, taking the average of expressions (4.6.13) and (4.6.14), we get the general growth rate as

γ = (1/2) [ γ_{α=0} + γ_{α=1} ]

i.e. half the sum of the right-hand sides of (4.6.13) and (4.6.14).
One can also obtain the magic term J/ψ, the bound state used in particle physics, once the values of the terms mentioned above are substituted.

As we are interested in the effect of finite conductivity along with the other parameters, we take the derivative of the growth rate with respect to finite conductivity: a positive derivative indicates destabilization, and a negative derivative indicates stabilization.
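This sign test can be sketched numerically with a centered difference. The model function below is a hypothetical stand-in for the full growth rate, chosen only so that growth falls off with resistivity; it is not the expression (4.6.13):

```python
# Sign test for stabilization vs destabilization by a parameter, via a
# centered difference.  gamma_model is a HYPOTHETICAL stand-in for the full
# growth rate; only the logic of the derivative test is illustrated.

def gamma_model(eta, k=2.0, Ln_inv=2.0):
    # toy dependence: growth falls off with resistivity eta and with Ln^-1
    return k / ((1.0 + eta) * Ln_inv ** 0.5)

def stability_trend(gamma, x, h=1e-6):
    # positive derivative -> destabilizing, negative -> stabilizing
    d = (gamma(x + h) - gamma(x - h)) / (2.0 * h)
    return "destabilizing" if d > 0 else "stabilizing"

print(stability_trend(gamma_model, 1.0))  # prints "stabilizing"
```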

STABILITY FOR PLASMA BETA AND LARGE ASPECT RATIO

The stability condition of the MAGNETIC CONFINEMENT TOKAMAK COLLIDER (MCTC) HUB using plasma beta and aspect ratio is studied, as follows:

γ ∝ (Ln⁻¹)^(−1/2)

This means the density gradient scale length stabilizes the system. The growth is studied analytically as well as numerically. For the analytical case, the derivative with respect to the density gradient scale length term gives a negative quantity, showing the stabilizing character. Numerically, we observe that the growth stabilizes for larger values of the density gradient scale length term; hence one may opt for larger values of this term, which is exhibited in Fig. 1.


STABILITY FOR DENSITY GRADIENT SCALE LENGTH

The growth is studied analytically as well as numerically. For the analytical case, the growth is plotted against the wave number (l² + n_s²) for the density gradient scale length, giving plots for the values Ln⁻¹ = (1, 2, 3, 4, 5). We see here that as the density gradient scale length increases, the growth rate with respect to the wave number decreases, thereby showing the stabilizing effect for the considered system, which is exhibited in Fig. 1.

[Figure: plot of growth rate vs wave number for density gradient scale length]

Fig. 1. Series 1: Ln⁻¹ = 1, Series 2: Ln⁻¹ = 2, Series 3: Ln⁻¹ = 3, Series 4: Ln⁻¹ = 4, Series 5: Ln⁻¹ = 5

STABILITY FOR FINITE RESISTIVITY

The growth is studied analytically as well as numerically. For the analytical case, the growth is plotted against the wave number (l² + n_s²) for finite resistivity, giving plots for the values η = (1, 2, 3, 4, 5). We see here that as the finite resistivity increases, the growth rate with respect to the wave number decreases, thereby showing the stabilizing effect for the considered system, which is exhibited in Fig. 2.

[Figure: plot of growth rate vs wave number for finite resistivity, with κ = 4, Ln⁻¹ = 2]

Fig. 2. Series 1: η = 5, Series 2: η = 4, Series 3: η = 3, Series 4: η = 2, Series 5: η = 1


STABILITY FOR PRESSURE GRADIENT SCALE LENGTH

The growth is studied analytically as well as numerically. For the analytical case, the growth is plotted against the wave number (l² + n_s²) for the pressure gradient scale length, giving plots for the values Lp⁻¹ = (1, 2, 3, 4, 5). We see here that as the pressure gradient scale length increases, the growth rate with respect to the wave number increases, thereby showing the destabilizing effect for the considered system, which is exhibited in Fig. 3.

[Figure: plot of growth rate vs wave number for pressure gradient scale length, with κ = 4, Ln⁻¹ = 2]

Fig. 3. Series 1: Lp⁻¹ = 5, Series 2: Lp⁻¹ = 4, Series 3: Lp⁻¹ = 3, Series 4: Lp⁻¹ = 2, Series 5: Lp⁻¹ = 1

[Figure: plot of growth rate vs wave number for aspect ratio]

Fig. 4. Series 1: ε⁻¹ = 2, Series 2: ε⁻¹ = 3, Series 3: ε⁻¹ = 4, Series 4: ε⁻¹ = 10

STABILITY FOR ASPECT RATIO

The growth is studied analytically as well as numerically. For the analytical case, the growth is plotted against the wave number (l² + n_s²) for the aspect ratio, giving plots for the values ε⁻¹ = (2, 3, 4, 10). We see here that as the aspect ratio increases, the growth rate with respect to the wave number decreases, thereby showing the stabilizing effect for the considered system, which is exhibited in Fig. 4.


[Figure: plot of growth rate vs skin depth]

Fig. 5.

The growth varies as γ ∝ δ^(3/4), i.e. growth is directly proportional to the skin depth δ: as δ increases, γ also increases, while lower β values stabilize the system. We observe that the plasma beta varies accordingly.

STABILITY FOR SKIN DEPTH

The growth is studied analytically as well as numerically. For the analytical case, the growth is plotted against the skin depth for different values; we see here that the growth rate increases with skin depth, thereby showing the destabilizing effect for the considered system, which is exhibited in Fig. 5.

[Figure: plot of growth rate vs wave number for collision frequency]

Fig. 6. Series 1: ν_C/ρ = 1, Series 2: ν_C/ρ = 2, Series 3: ν_C/ρ = 3, Series 4: ν_C/ρ = 4


STABILITY FOR COLLISION FREQUENCY
The growth is studied analytically as well as numerically. For the analytical case, the growth is plotted against the wave number (l² + n_s²) for the collision frequency, giving plots for the values ν_C/ρ = (1, 2, 3, 4). We see here that as the collision frequency increases, the growth rate with respect to the wave number increases, thereby showing the destabilizing effect for the considered system, which is exhibited in Fig. 6.


STABILITY FOR THERMAL CONDUCTIVITY

The growth is studied analytically as well as numerically. For the analytical case, the growth is plotted against the wave number (l² + n_s²) for the thermal conductivity, giving plots for the values κ = (5, 4, 3, 2, 1). We see here that as the thermal conductivity increases, the growth rate with respect to the wave number increases, thereby showing the destabilizing effect for the considered system, which is exhibited in Fig. 7.

[Figure: plot of growth rate vs wave number for thermal conductivity, with Ln⁻¹ = 2]

Fig. 7. Series 1: κ = 5, Series 2: κ = 4, Series 3: κ = 3, Series 4: κ = 2, Series 5: κ = 1
STABILITY FOR HALL CURRENT
The growth is studied analytically as well as numerically. For the analytical case, the growth is plotted against the wave number (l² + n_s²) for the Hall current, giving plots for the values H = (2, 5, 10). We see here that as the Hall current increases, the growth rate with respect to the wave number increases, thereby showing the destabilizing effect for the considered system, which is exhibited in Fig. 8.

[Figure: plot of growth rate vs wave number for Hall current]

Fig. 8. Series 1: H = 2, Series 2: H = 5, Series 3: H = 10


γ ∝ H^(−1/2): the growth rate of the (MCTC) HUB is thus stabilized.

STABILITY FOR KINEMATIC VISCOSITY

The growth is studied analytically as well as numerically. For the analytical case, the growth is plotted against the wave number (l² + n_s²) for the kinematic viscosity, giving plots for the values μ/ρ = (2, 5, 10). We see here that as the kinematic viscosity increases, the growth rate with respect to the wave number increases, thereby showing the destabilizing effect for the considered system, which is exhibited in Fig. 9.

[Figure: plot of growth rate vs wave number for kinematic viscosity]

Fig. 9. Series 1: μ/ρ = 2, Series 2: μ/ρ = 5, Series 3: μ/ρ = 10


STABILITY FOR CYCLOTRON FREQUENCY
The growth is studied analytically as well as numerically. For the analytical case, the growth is plotted against the wave number (l² + n_s²) for the cyclotron frequency, giving plots for the values ω_c = (2, 5, 10). We see here that as the cyclotron frequency increases, the growth rate with respect to the wave number decreases, thereby showing the stabilizing effect for the considered system, which is exhibited in Fig. 10.

[Figure: plot of growth rate vs wave number for cyclotron frequency]

Fig. 10. Series 1: ω_c = 2, Series 2: ω_c = 5, Series 3: ω_c = 10


[Figure: comparison of tokamak and MCTC growth rates vs wave number]

Fig. 11. Series 1: Tokamak, Series 2: MCTC, for R = 1, θ = 60°, φ = 30°

The growth varies as γ ∝ (1 + sin 3θ sin φ)^(−1/4) in Fig. 11, which compares the finite-conductivity-governed growth in the MAGNETIC CONFINEMENT TOKAMAK COLLIDER (MCTC) HUB and the tokamak. We see that the growth rate of the MCTC HUB is more stabilized than that of the tokamak for the finite conductivity parameter, as shown in Fig. 11.
The present study shows that the MCTC hub performs better than the tokamak case, as depicted in Fig. 1 and Fig. 2 below: Fig. 1 shows how the MCTC hub is broader than the tokamak case in particle trapping, and Fig. 2 shows that it requires less confinement time than the tokamak case.
PARTICLE TRAPPING IN HAZARIKA’S (BANANA) REGIME
Here we can observe that the trapped-particle region exhibited by Hazarika’s (banana) regime is broader than in the tokamak case, as shown in Fig. 1.

[Figure: polar plot of the trapped-particle region, Hazarika’s (banana) regime]

Fig. 1. The particles are trapped in the shaded region, Hazarika’s (banana) regime


[Figure: polar plot comparing Hazarika’s (banana) regime for MCTC (HUB) and tokamak]

Fig. 2. Comparison of Hazarika’s (banana) regime for MCTC (HUB) and tokamak, shown for θ = 45°, φ = 45°. Series 1: Tokamak, Series 2: MCTC (HUB)

It is observed from the above graph that the confinement time required for the MCTC (HUB) is much less than in the tokamak case.

Condition for particle trapping: the squared velocity should be less than or equal to 2rg, the value set by the centrifugal balance. For v² ≤ 2rg, the motion of the particle is oscillatory and the particle never loses contact with the circular path. For v² > 2rg, the particle leaves the circle and then describes a parabolic path. If v² = 2rg, the motion of the particle becomes oscillatory and it goes onto the diametrical path, performing the banana (Hazarika’s) regime path.
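The trapping condition above can be written as a small classifier. The r and v values in the example call are arbitrary, and g = 9.81 m/s² is assumed:

```python
import math

# Classify particle motion by the trapping condition quoted above:
#   v^2 < 2rg  -> oscillatory (never loses contact with the circular path)
#   v^2 > 2rg  -> leaves the circle on a parabolic path
#   v^2 = 2rg  -> boundary case: diametrical, banana (Hazarika's) regime path

def particle_motion(v, r, g=9.81):
    if math.isclose(v * v, 2.0 * r * g):
        return "diametrical"
    if v * v > 2.0 * r * g:
        return "parabolic"
    return "oscillatory"

print(particle_motion(1.0, 1.0))  # v^2 = 1 < 2rg = 19.62 -> "oscillatory"
```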

The present study is relevant to the earlier studies by Pfirsch (1978), Pfirsch and Schlüter (1962), and Samain and Werkoff (1977). If we substitute sin 3φ sin θ = 0 in the major radius so that only R remains, we recover the results of Pfirsch (1978). The present study contains enhancements in the skin depth, banana regime, bootstrap current, Ware effect, and diffusion coefficient, as Hazarika’s diffusion coefficient and Hazarika’s factor for the MCTC hub.

REFERENCES
66. Doyle, E.J., Groebner, R.J. et al: Phys. Fluids B 3, 230 (1991)
67. Itoh, S.I. and Itoh, K.: Phys. Rev. Lett. 60, 2276 (1983)
68. Hassam, A.B.: Comments on Plasma Phys. Contr. Fusion 14, 275 (1991) and Phys. Fluids B4, 485 (1992)
69. Sen, S. and Weiland, J.: Phys. Fluids B4, 485 (1992)
70. Bhatia, P.K. and Hazarika, A.B.R.: Physica Scripta 53, 57 (1995)
71. Hazarika, A.B.R.: National Symposium on Plasma Science & Technology, Rajkot (1998)
72. Hazarika, A.B.R.: National Symposium on Plasma Science & Technology, Guwahati (2001)
73. Hazarika, A.B.R.: National Symposium on Plasma Science & Technology, Ranchi (2003)
74. Hazarika, A.B.R.: National Symposium on Plasma Science & Technology, Bhopal (2004)
75. Hazarika, A.B.R.: Proceedings of the National Symposium on Plasma Science & Technology, Cochin Univ. of Sci. & Technology, Cochin (2005)
76. Hazarika, A.B.R.: Proceedings of the 3rd Technical Meeting of the International Atomic Energy Agency on Theory of Plasma Instabilities, Univ. of York, York, UK (2007a), 31pp


77. Diamond, P.H.: Plasma Physics and Controlled Nuclear Fusion Research (IAEA, Vienna, 1992) 2, 97 (1992)
78. Hazarika, A.B.R.: Submitted to Nuclear Fusion (2007b)
79. Hazarika, A.B.R.: Submitted to Plasma Physics and Controlled Fusion (2007c)
80. Hazarika, A.B.R.: Submitted to Plasma Sources Science and Technology (2007d)
81. Hazarika, A.B.R.: Submitted to Physica Scripta (2007e)
82. Pfirsch, D.: Theoretical and Computational Plasma Physics (1978), IAEA-SMR-31/21, pp. 59
83. Pfirsch, D., Schlüter, A.: Max-Planck-Institut für Physik und Astrophysik, Munich, Rep. MPI/PA/7/62 (1962)
84. Kerner, W.: Z. Naturforsch. 33a, 792 (1978)
85. Samain, A., Werkoff, F.: Nucl. Fus. 17, 53 (1977)


4.7. PARABOLIC COORDINATE STUDY FOR INTERNATIONAL THERMONUCLEAR EXPERIMENTAL REACTOR (ITER)

SUPPRESSION OF FLR & SHEARED AXIAL FLOW ON RTI IN PARABOLIC COORDINATES FOR Z-PINCH IMPLOSIONS

Suppression of the Rayleigh-Taylor instability by sheared axial flow and finite Larmor radius (FLR) in Z-pinch implosions is studied in parabolic coordinates for the derived magnetohydrodynamic formulation. The sheared axial flow and the FLR effect are introduced into MHD via ∂/∂t → −i(ω + ik⊥²ρᵢ²Ωᵢ). A sheared axial flow with a lower peak velocity suppresses the RT instability. It is observed that the FLR suppresses the RT instability more strongly than the sheared axial flow does. The results are the same as in the case of slab geometry.

GENERALIZED MHD EQUATIONS: The present plasma model is for a Z-pinch device in parabolic coordinates; it was earlier shown by Qiu (2006) in planar coordinates for the Rayleigh-Taylor instability, with the FLR effect along with a sheared axial velocity suppressing the instability created by implosion in the Z-pinch device.

$$\frac{\partial\rho_1}{\partial t}-v_0\cdot\nabla\rho_1+v_1\cdot\nabla\rho_0=0\qquad(4.7.1)$$

$$\rho_0\left(\frac{\partial v_1}{\partial t}-v_0\cdot\nabla v_1+v_1\cdot\nabla v_0\right)=-\nabla p_1+j_0\times B_1+j_1\times B_0+\rho_1 g\qquad(4.7.2)$$

$$B_0=B(\xi)\hat\eta\;\rightarrow\;j_0=\frac{2}{\mu_0}\frac{\partial B}{\partial\xi}\hat\varphi\qquad(4.7.3)$$

$$v_0=v(\xi)\hat\varphi;\quad\rho_0=\rho(\xi);\quad g=-g\hat\xi$$

Perturbations vary as exp i(k_cη + k⊥φ − ωt); for this case k_c = 0, k⊥ = k.

$$\frac{\partial}{\partial t}\rightarrow-i\left(\omega+ik_\perp^2\rho_i^2\Omega_i\right);\quad j_1=\frac{1}{\mu_0}\left(\nabla\times B_1\right);\quad\nabla\cdot v_1=0\qquad(4.7.4)$$

As we know, the parabolic coordinates are as follows:

ξ = r − z = r(1 − cos θ)
η = r + z = r(1 + cos θ)
φ = φ

$$\nabla=\frac{2}{\xi+\eta}\left[\xi\frac{\partial}{\partial\xi}+\eta\frac{\partial}{\partial\eta}\right]+\frac{1}{\sqrt{\xi\eta}}\frac{\partial}{\partial\varphi}$$

$$\nabla^2=\frac{4}{\xi+\eta}\left[\frac{\partial}{\partial\xi}\left(\xi\frac{\partial}{\partial\xi}\right)+\frac{\partial}{\partial\eta}\left(\eta\frac{\partial}{\partial\eta}\right)\right]+\frac{1}{\xi\eta}\frac{\partial^2}{\partial\varphi^2}$$
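A small sketch of the coordinate map above (with z = r cos θ assumed), checking the identities ξ + η = 2r and ξη = r² sin² θ that follow directly from the definitions:

```python
import math

# Parabolic coordinates used above: xi = r - z = r(1 - cos(theta)),
# eta = r + z = r(1 + cos(theta)), phi = phi.  The identities
# xi + eta = 2r and xi*eta = r^2 sin^2(theta) follow directly.

def to_parabolic(r, theta, phi):
    xi = r * (1.0 - math.cos(theta))
    eta = r * (1.0 + math.cos(theta))
    return xi, eta, phi

xi, eta, phi = to_parabolic(2.0, math.pi / 3.0, 0.0)
print(xi + eta)  # 2r = 4
print(xi * eta)  # r^2 sin^2(theta) = 4 * 3/4 = 3
```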
$$\frac{2}{\xi+\eta}\frac{\partial}{\partial\xi}\left[\frac{2\rho}{\xi+\eta}\left(\omega+ik^2\rho_i^2\Omega_i-kv\right)v_{1\xi}+\frac{2k\rho}{\xi+\eta}\frac{dv}{d\xi}v_{1\xi}\right]-\left(\omega+ik^2\rho_i^2\Omega_i-kv\right)k^2\rho\,v_{1\xi}-\frac{2}{\xi+\eta}\left(\frac{k^2 g}{\omega+ik^2\rho_i^2\Omega_i-kv}\right)\frac{d\rho}{d\xi}v_{1\xi}=0\qquad(4.7.5)$$


ρᵢ = Vᵢ/Ωᵢ, where Vᵢ is the ion thermal velocity. Applying the boundary conditions to equation (4.7.5) we get:

ξ < −d : ρ_a = ρ(ξ) ; v(ξ) = V

−d ≤ ξ ≤ d (velocity shear layer) : ρ_b = ρ(ξ) ; v(ξ) = (1 − ξ/d)(V/2)   (4.7.6)

ξ > d : ρ_c = ρ(ξ) ; v(ξ) = 0 ; at ξ = ±d   (4.7.7)

v₁ξ/(ω + ik²ρᵢ²Ωᵢ − kv) is continuous, and Δf = f|ξ=ξ⁺ − f|ξ=ξ⁻   (4.7.8)

$$\frac{4}{(\xi+\eta)^2}\frac{d^2 v_{1\xi}}{d\xi^2}-k^2 v_{1\xi}=0\qquad(4.7.9)$$

With l = ξ/η this becomes (4/(1+l)²) d²v₁ξ/dx² − k²v₁ξ = 0, and the differential equation takes the form

$$\frac{d^2 v_{1\xi}}{dx^2}-k_1^2 v_{1\xi}=0,\qquad\text{where }k_1=\frac{k(1+l)}{2}\qquad(4.7.10)$$

$$v_{1\xi}=A_j e^{-k_1\xi}+C_j e^{k_1\xi}\qquad(4.7.11)$$
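The general solution (4.7.11) can be verified numerically: a finite-difference second derivative of v = A e^(−k₁x) + C e^(k₁x) should reproduce k₁²v. The constants A, C, k₁ below are arbitrary values chosen only for the check:

```python
import math

# Check that v = A*exp(-k1*x) + C*exp(k1*x), the general solution of
# d^2 v/dx^2 - k1^2 v = 0 in Eq. (4.7.10)-(4.7.11), satisfies the ODE.
# A, C, k1 are arbitrary constants; the second derivative is approximated
# by a centered second difference.

A, C, k1 = 1.5, -0.7, 2.0

def v(x):
    return A * math.exp(-k1 * x) + C * math.exp(k1 * x)

def d2(f, x, h=1e-5):
    # centered second-difference approximation of f''(x)
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

for x in (0.0, 0.5, 1.0):
    print(abs(d2(v, x) - k1**2 * v(x)) < 1e-4)  # True at every sample point
```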

$$n=\frac{\omega}{(k_1 g)^{1/2}};\quad\kappa=2k_1 d;\quad X=J^{-1/2}=\frac{k_1 V^2}{g};\quad X_j=J_j^{-1/2}=-i\,\frac{k_1\left(k_1\rho_i\right)^2 V_i^2}{g}\qquad(4.7.12)$$

This gives the dispersion relation in its final form:

$$n^4+a_3 n^3+a_2 n^2+a_1 n+a_0=0\qquad(4.7.13)$$
where

$$a_3=-2\left(X+2X_i\right),\qquad(4.7.14)$$

$$a_2=\left(X+2X_i\right)^2-\frac{X^2}{k_1}\left(\frac{1}{k_1}+\frac{e^{-k_1}+1}{e^{-k_1}-1}\right)+4X_i\left(X+X_i\right)+X_i^2,\qquad(4.7.15)$$

$$a_1=X\left(\frac{X^2}{k_1}-2\right)\left(\frac{1}{k_1}+\frac{e^{-k_1}+1}{e^{-k_1}-1}\right)+2X_i\left[\frac{X^2}{k_1}\left(\frac{1}{k_1}+\frac{e^{-k_1}+1}{e^{-k_1}-1}\right)-\left(X+X_i\right)\left(X+2X_i\right)\right],\qquad(4.7.16)$$

$$a_0=1+\left(X+2X_i\right)^2\frac{e^{-k_1}+1}{e^{-k_1}-1}-\frac{X^2}{k_1}\left(\frac{1}{k_1}+\frac{e^{-k_1}+1}{e^{-k_1}-1}\right)X_i\left(X+X_i\right)+X_i^2\left[\left(X+X_i\right)^2-\frac{e^{-k_1}+1}{e^{-k_1}-1}\right]+\frac{X}{k_1}\left(X+2X_i\right),\qquad(4.7.17)$$
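Once the coefficients are evaluated, the quartic (4.7.13) can be handed to a polynomial root-finder. The coefficient values in the sketch below are arbitrary test numbers, not evaluations of (4.7.14)-(4.7.17); only the root-finding step is illustrated:

```python
import numpy as np

# Solve the quartic dispersion relation n^4 + a3*n^3 + a2*n^2 + a1*n + a0 = 0.
# The coefficients used below are ARBITRARY placeholders, not values computed
# from (4.7.14)-(4.7.17).

def growth_rate(a3, a2, a1, a0):
    roots = np.roots([1.0, a3, a2, a1, a0])
    # an unstable mode corresponds to a root with positive imaginary part
    return max(roots.imag)

# (n^2 + 4)(n^2 + 9) = n^4 + 13 n^2 + 36, roots ±2i and ±3i
gamma = growth_rate(a3=0.0, a2=13.0, a1=0.0, a0=36.0)
print(gamma)  # approximately 3
```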


[Figure: comparison of parabolic and planar systems for sheared axial velocity — normalized growth rate vs normalized wave number]

Fig. 1. Series 1: sheared axial velocity with parabolic coordinates; Series 2: sheared axial velocity with planar coordinates

[Figure: growth rate vs wave number for sheared axial velocity in parabolic coordinates]

Fig. 2. For V = 10⁵, 2×10⁵, 3×10⁵, 4×10⁵

[Figure: growth rate vs wave number for FLR in parabolic coordinates]

Fig. 3. Series 1: FLR = 1.0; Series 2: FLR = 2.0

We can observe that the sheared axial velocity suppresses the instability more in parabolic coordinates than in planar coordinates, which is shown in Fig. 1. For parabolic coordinates with sheared axial velocity V = 10⁵, 2×10⁵, 3×10⁵, 4×10⁵, the growth remains static for values above V = 2×10⁵, as shown in Fig. 2. FLR stabilizes the instability for the normalized value 2.0, whereas for FLR = 1.0 it shows some instability in the initial stage and then stabilizes at higher wave number, as exhibited in Fig. 3. The results are in agreement with those given by Qiu et al (2002).

References:
[1] Qiu, X.M., Huang, L., Jian, G.D.: Plasma Sci. & Tech. 5, 1429 (2002)
[2] De Groot, J.S., Toor, A., Goldberg, S.M. et al: Phys. Plasmas 4, 1519 (1997)
[3] Haines, M.G.: IEEE Transactions on Plasma Sci. 26, 1275 (1998)
[4] Shumlak, U. and Hartman, C.W.: Phys. Rev. Lett. 75, 3285 (1995)
[5] Arber, T.D., Coppins, M., Scheffel, J.: Phys. Rev. Lett. 77, 1766 (1996)
[6] Ganguly, G.: Phys. Plasmas 4, 2322 (1997)
[7] Qiu, X.M., Huang, L. and Jian, G.D.: Chin. Phys. Lett. 19, 217 (2002)
[8] Turchi, P.J. and Baker, W.L.: J. Appl. Phys. 44, 4936 (1973)

4.7.1. SUPPRESSION OF RAYLEIGH-TAYLOR INSTABILITY IN LOW FREQUENCY FLUCTUATION OF ITER

A low-β plasma is considered in ITER for low-frequency fluctuation: if the density gradient (∇n) acts against gravity, it causes the R-T instability, which is suppressed by the use of shear velocity. The present study is done theoretically in parabolic coordinates. It is based on ITER with a low-β plasma having low-frequency fluctuation, which is stabilized by sheared velocity, finite resistivity, and current diffusivity along with other parameters. The induced RTI is suppressed by the above-mentioned parameters, and as a whole the classical transport phenomenon is taken into consideration. The heat conductivity is calculated. These facts compel one to study the classical phenomenon of RTI and thereby its suppression by different parameters. The growth rate in the system gives more stabilized and steady plasma confinement. The finite resistivity, current diffusivity, density gradient scale length, magnetic shear aspect ratio and the feedback term are stabilizing parameters. Earlier, Beyer et al (2007) studied this with EMHD for the turbulence simulation of transport-barrier relaxation in tokamak edge plasma, which is the source region of collision or stability in ITER. Yagi et al (1997) considered the current diffusive mode, whereas the skin-size ballooning mode in tokamaks was considered with an ETG mode study by Hirose (2007). This becomes the aspect for our present study of the RT instability in ITER with low frequency, which may also be of interest to particle physicists, quantum theory researchers and so on.

We have tried to solve the model in parabolic coordinates with two different techniques:
I. By using the toroidal coordinates to get the differential equation, then solving it in parabolic coordinates, which also gives a second-order differential equation yielding the growth rate.
II. By applying the parabolic coordinates directly to the model to get the growth rate.

BASIC EQUATIONS
The governing basic equations are as follows:

$$E_c=\frac{R_c}{en_e}=\eta J-\lambda\Delta J\qquad(4.7.18)$$

$$v_{\perp i}=\frac{\vec B}{B^2}\times\left[-E-\frac{1}{en_e}\left(\nabla p_i+\nabla\cdot\Pi_i\right)\right]\qquad(4.7.19)$$

$$\frac{3n_e}{2}\frac{dT_e}{dt}=-\nabla_\perp\cdot q_{e\perp}+\delta n_e\qquad(4.7.20)$$

$$\Pi_i=-\mu\nabla_\perp v_{\perp i}\qquad(4.7.21)$$

$$q_{e\perp}=\chi\nabla_\perp p_e\qquad(4.7.22)$$

Here η is the finite resistivity, λ the current diffusivity, Πᵢ the stress tensor, T_e the electron temperature, q_{e⊥} the perpendicular heat flux of electrons, E_c the parallel electric field, R_c/(en_e) the friction (Hall) term, v⊥ᵢ the perpendicular ion velocity, χ the magnetic diffusivity, μ the viscosity, p_e the electron pressure, B the magnetic field, and pᵢ the ion pressure.
From Maxwell’s equations we get the generalized MHD model.

GENERALIZED MHD MODEL:

The generalized MHD equations, derived from the above basic equations, are:

$$\frac{n_0 m_i c}{B_0}\left(\frac{\partial}{\partial t}\left(-m_i n\right)+\frac{c}{B_0}\left[\phi,-m_i n\right]\right)=\frac{B_0}{c}\nabla_c j_c+\nabla p\times\frac{2\nabla r\cos\theta}{R}\cdot\hat z-\mu m_i n_0\frac{c}{B_0}\nabla_\perp^2 m_i n+m_i g\,\delta\phi\cdot\hat z+m_i g\left(\phi-1\right)\cdot\hat z\qquad(4.7.23)$$

$$\frac{\partial A}{c\,\partial t}=-\nabla_c\phi-\eta_c j_c+\lambda\nabla_\perp^2 j_c\qquad(4.7.24)$$

$$\frac{\partial p}{\partial t}+\frac{c}{B_0}\left[\phi,p\right]=\chi_\perp\nabla_\perp^2 p\qquad(4.7.25)$$

where j_c = −c∇⊥²Â is the term responsible for the feedback loop current, Â is the electromagnetic potential, φ the electrostatic potential, and [A, B] = ẑ·∇A×∇B the Poisson bracket.

Case I: The eigenmode equation is obtained by linearizing (4.7.23)-(4.7.25) in toroidal coordinates (r, θ, ζ) for the Rayleigh-Taylor instability (RTI) as

$$\phi\left(\Delta r,\theta,\zeta\right)=\sum_l\hat\phi\left(\theta+2\pi l\right)\exp\left[inq'\Delta r\left(\theta+2\pi l\right)t-inq_0 r_{mn}^{-1}\Delta r\rho\sin\theta+in\left(q_0\theta-\zeta\right)\right]$$

Solving (4.7.23)-(4.7.25) we get a second-order differential equation:

$$\frac{\partial^2\hat\phi}{\partial\vartheta^2}+\frac{\hat\eta}{\omega}\left(1+\frac{\hat\lambda s^2\theta^2}{\hat\eta}\right)\left(\rho-s^2\theta^2\omega^2\right)\hat\phi=0\qquad(4.7.26)$$

Now changing from the toroidal to the parabolic coordinate we get

$$\frac{\partial^2\hat\phi}{\partial\eta^2}+\frac{1+\eta}{2\sqrt\eta}\frac{\partial\hat\phi}{\partial\eta}-\frac{\left(1+\eta\right)^3\left(\eta-1\right)}{\sqrt\eta}\frac{\hat\eta}{\omega}\left(1+\frac{\hat\lambda s^2\theta^2}{\hat\eta}\right)\left(\rho-s^2\theta^2\omega^2\right)\hat\phi=0\qquad(4.7.27)$$

$$\hat\phi=\exp\left(\beta\eta\right),\quad\frac{\partial\hat\phi}{\partial\eta}=\beta\exp\left(\beta\eta\right),\quad\frac{\partial^2\hat\phi}{\partial\eta^2}=\beta^2\exp\left(\beta\eta\right)\qquad(4.7.28)$$


After substituting these values we get a quadratic equation whose roots give the growth rate:

$$\omega=\frac{-D\pm\sqrt{D^2-4CE}}{2C}\qquad(4.7.29)$$

$$C=\hat\eta s^2\theta^2\left(1+\frac{\hat\lambda}{\hat\eta}s^2\theta^2\right)\frac{\left(1+\eta\right)^3\left(\eta-1\right)}{\sqrt\eta}\qquad(4.7.30)$$

$$D=\beta^2+\frac{\beta\left(1+\eta\right)}{2\sqrt\eta}\qquad(4.7.31)$$

$$E=-\hat\eta\rho\left(1+\frac{\hat\lambda}{\hat\eta}s^2\theta^2\right)\frac{\left(1+\eta\right)^3\left(\eta-1\right)}{\sqrt\eta}\qquad(4.7.32)$$

The parabolic coordinates are given by

ξ = r − z = r(1 − cos θ)
η = r + z = r(1 + cos θ)
φ = φ
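Equation (4.7.29) is the quadratic root formula for Cω² + Dω + E = 0; a minimal sketch with arbitrary C, D, E values (not evaluations of the coefficient expressions above):

```python
import cmath

# Roots of C*w^2 + D*w + E = 0, as in Eq. (4.7.29):
# w = (-D ± sqrt(D^2 - 4CE)) / (2C).  C, D, E below are arbitrary test values.

def omega_roots(C, D, E):
    disc = cmath.sqrt(D * D - 4.0 * C * E)
    return (-D + disc) / (2.0 * C), (-D - disc) / (2.0 * C)

w1, w2 = omega_roots(1.0, 2.0, 5.0)  # w^2 + 2w + 5 = 0 -> w = -1 ± 2i
print(w1, w2)
```

A root with a nonzero imaginary part corresponds to an oscillatory component; the real root structure determines whether the perturbation grows.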
Case II: ALTERNATIVE METHOD
If we apply the parabolic axisymmetric coordinates directly to the generalized MHD model, we get a different growth rate. The eigenmode equation is obtained by linearizing (4.7.23)-(4.7.25) in the parabolic axisymmetric distribution coordinates (ξ, η, φ) for the Rayleigh-Taylor instability (RTI) as

$$\phi\left(\xi,\eta,\varphi\right)=\exp i\left[l\vartheta+m\varphi-\omega t\right]\left\{C_1 X_1\left(\xi\right)+C_2 X_2\left(\eta\right)\right\}\qquad(4.7.33)$$

with the normalizations

δ̂ = δτ_PA/(n₀mᵢr_mn²[gLn ln gLn]),  μ̂ = μτ_PA/(n₀mᵢr_mn²[gLn ln gLn])

β = p̂τ_PA/(n₀mᵢr_mn² gLn ln gLn)^(1/2),  τ_PA = (4πn₀mᵢr_mn²/B_θ²)^(1/2)

χ̂ = τ_PA χ⊥/r_mn²,  φ̂ = cτ_PA φ/(r_mn²B₀),  λ̂ = τ_PA λc²/(4πr_mn²),  p̂ = 8πp/B₀²

t̂ = t/τ_PA,  Â = A/(r_mn B_θ),  η̂ = ηn²q²,  ĝ = gc²s/R

ρ̂ = ρ[κ − ε(sθ − ρ sin θ) cos θ],  κ = −(1 − 1/q²)ε,  ρ = βL_p⁻¹

n = n₀[−r²/Ln],  s = B_φ/B_θ,  a₀ = (3π/4)^(4/5) λ^(3/5) s^(4/5) (2l+1)^(4/5) / [η(ρ − μ + δ)^(1/5)]

Ln⁻¹ = −d(ln n₀)/dr_mn,  Lp⁻¹ = −d(ln p₀)/dr_mn

ε = r/R, with R = 6.2 and r = 2.0.

For low frequency, ω_pi²τ_PA² ≪ 1, and for the RT mode, ρ < 2Ln⁻¹.

Alternatively, if we solve eqns. (4.7.23)-(4.7.25) we get the following:

$$\nabla_\parallel\phi=-\gamma\hat A+\eta\nabla_\perp^2\hat A-\lambda\nabla_\perp^4\hat A\qquad(4.7.34)$$

$$\nabla=\frac{4}{\xi+\eta}\left[\xi\frac{\partial}{\partial\xi}+\eta\frac{\partial}{\partial\eta}+\frac{1}{\xi\eta}\frac{\partial}{\partial\varphi}\right]$$

$$\nabla^2=\frac{4}{\xi+\eta}\left[\frac{\partial}{\partial\xi}\left(\xi\frac{\partial}{\partial\xi}\right)+\frac{\partial}{\partial\eta}\left(\eta\frac{\partial}{\partial\eta}\right)+\frac{1}{\xi\eta}\frac{\partial^2}{\partial\varphi^2}\right]$$

$$\nabla_\perp=\frac{4}{\xi+\eta}\left[\xi\frac{\partial}{\partial\xi}+\eta\frac{\partial}{\partial\eta}\right],\qquad\nabla_\parallel=\frac{4}{\left(\xi+\eta\right)\xi\eta}\frac{\partial}{\partial\varphi}$$

$$\vec B=B_\varphi\left[\hat e_\varphi+\frac{r}{Rq(r)}\hat e_\vartheta\right],\qquad\frac{1}{q}-\frac{1}{q_0}=-\frac{R_0}{L_s r_0}\left(r-r_0\right)$$

$$\zeta=\frac{L_s r_0}{R_0\xi},\qquad\nabla r=r-r_0$$

p ≡ (p(ξ), 0, 0),  n ≡ (n(ξ), 0, 0),  φ ≡ (0, φ(η), 0),  A ≡ (A(ξ), 0, 0)

Equation (4.7.23) becomes

$$\frac{n_0 m_i c}{B_0}\left(\frac{\partial}{\partial t}\left(-m_i n\right)+\frac{c}{B_0}\left[\left(\frac{\partial n}{\partial\xi}\right)\left(\frac{\partial\varphi}{\partial\eta}\right),-m_i n\right]\right)=\frac{B_0}{c}\nabla_c j_c+\nabla p\times\frac{2\nabla r\cos\theta\cdot\hat z}{R\left(1+\sin 3\theta\sin\phi\right)}-\mu m_i n_0\frac{c}{B_0}\nabla_\perp^2 m_i n+m_i g\,\delta\phi\cdot\hat z+m_i g\left(\phi-1\right)\cdot\hat z$$

which gives

$$-\omega+\frac{4\mu\xi}{\xi+\eta}Ln-Ln\frac{\omega_p^2}{\omega_c}\left(\frac{\partial\varphi}{\partial\eta}\right)=-\frac{\omega_c^2}{\omega_p^2}\nabla_\parallel\nabla_\perp^2\hat A+2\omega_c L_p\varepsilon\cos\vartheta-gLn\frac{\omega_c}{\omega_p}\qquad(4.7.35)$$

Here ω_p² = 4πn₀e²/mᵢ (plasma frequency) and ω_c = eB/(mᵢc) (cyclotron frequency) have their usual definitions.

$$\nabla_\parallel\nabla_\perp^2\hat A=\frac{16}{\xi\eta\left(\xi+\eta\right)^2}\frac{\partial}{\partial\varphi}\left[\xi\frac{\partial\hat A}{\partial\xi}+\eta\frac{\partial\hat A}{\partial\eta}\right]\qquad(4.7.36)$$

Equation (4.7.24) becomes

$$-\frac{4}{\xi+\eta}\frac{\partial\phi}{\partial\varphi}-\omega\hat A=\hat\eta\frac{4}{\xi+\eta}\left[\xi\frac{\partial}{\partial\xi}+\eta\frac{\partial}{\partial\eta}\right]\hat A+\hat\lambda\nabla_\perp^4\hat A\qquad(4.7.37)$$

Equation (4.7.25) becomes

$$-\omega-\frac{\omega_p^2}{\omega_c}L_p\left(\frac{\partial\phi}{\partial\eta}\right)=\frac{4\xi}{\xi+\eta}\chi_\perp L_p\qquad(4.7.38)$$

In this case η is the finite resistivity and λ the current diffusivity.

Solving eqns. (4.7.35)-(4.7.38) gives the growth rate

$$\omega=\left[\frac{4\omega_c^2}{\omega_p^2}\left(\frac{k_\parallel}{k_\perp}\right)\frac{1}{\eta+\lambda k_\perp^2}+\frac{4\zeta\omega_c k_\parallel}{k_\perp q\,\omega_p^2 L_p}\frac{1}{\eta+\lambda k_\perp^2}-\left(1+\frac{Ln}{L_p}\right)\right]^{-1}\left[2\omega_c L_p\varepsilon\cos\vartheta-gLn\frac{\omega_c}{\omega_p^2}-\frac{4k_\xi Ln}{k_\perp}\left(\mu+\chi_\perp\right)-\frac{4\omega_c^2 k_\parallel L_\varphi}{\omega_p^2 k_\perp\left(\eta+\lambda k_\perp^2\right)}-\frac{16\chi_\perp\omega_c k_\xi}{k_\perp^2\omega_p^2\left(\eta+k_\perp^2\lambda\right)}\right]\qquad(4.7.39)$$

The growth rate is given by the above equation, and the stability conditions are:

$$\frac{4\omega_c^2}{\omega_p^2}\left(\frac{k_\parallel}{k_\perp}\right)\left(\frac{1}{\eta+\lambda k_\perp^2}\right)<1+\frac{Ln}{L_p}$$

$$gLn\frac{\omega_c}{\omega_p^2}+\frac{4\omega_c^2}{\omega_p^2}\left(\frac{k_\parallel}{k_\perp}\right)\left(\frac{L_\phi}{\eta+\lambda k_\perp^2}\right)+\frac{4k_\xi Ln}{k_\perp}\left(\mu+\chi_\perp\right)<2\omega_c L_p\varepsilon\cos\vartheta\qquad(4.7.40)$$

For low frequency, γ ≪ ω_c; for the RT mode, gLn⁻¹ ≪ 1; and for low β, m_e/mᵢ ≪ 1.
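The second inequality in (4.7.40) can be checked numerically. The sketch below, with arbitrary illustrative parameter values (not ITER values), evaluates the drive terms against the stabilizing term 2ω_c L_p ε cos θ:

```python
import math

# Sketch of the second stability test in (4.7.40): the drive terms on the
# left-hand side must stay below 2*wc*Lp*eps*cos(theta).  All parameter
# values passed in the example call are arbitrary illustrative numbers.

def rt_stable(wc, wp2, g, Ln, Lp, eps, theta,
              k_par, k_perp, k_xi, eta, lam, mu, chi, Lphi):
    drive = (g * Ln * wc / wp2
             + (4.0 * wc**2 / wp2) * (k_par / k_perp) * Lphi / (eta + lam * k_perp**2)
             + (4.0 * k_xi * Ln / k_perp) * (mu + chi))
    return drive < 2.0 * wc * Lp * eps * math.cos(theta)

print(rt_stable(wc=1.0, wp2=100.0, g=9.81, Ln=0.1, Lp=5.0, eps=0.3, theta=0.0,
                k_par=0.1, k_perp=2.0, k_xi=0.1, eta=2.0, lam=3.0,
                mu=3.5, chi=2.2, Lphi=5.5))  # True: drive well below the threshold
```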
As we are interested in the effect of finite conductivity along with the other parameters, we take the derivative of the growth rate with respect to finite conductivity: a positive derivative indicates destabilization and a negative derivative indicates stabilization.

The growth rate is studied analytically as well as numerically. For the analytical case, the derivative with respect to the density gradient scale length term gives a negative quantity, showing the stabilizing character. Numerically, we observe that the growth stabilizes for larger values of the density gradient scale length term; hence one may opt for larger values of this term, which is exhibited in Fig. 1.


STABILITY FOR DENSITY GRADIENT SCALE LENGTH

The growth is studied analytically as well as numerically. For the analytical case, the growth is plotted against the number density gradient scale length for different values; we see here that as the density gradient scale length increases, the growth rate decreases, thereby showing the stabilizing effect for the considered system, which is exhibited in Fig. 1.

[Figure: plot of growth rate vs density gradient scale length (Ln)]

Fig. 1: Lp = 5, λ = 3, Lφ = 5.5, χ⊥ = 2.2, μ = 3.5, η = 2, ω_c/ω_p² = 4.5

[Figure: plot of growth rate vs finite resistivity]

Fig. 2: Ln = 4.34, Lp = 5, Lφ = 5.5, χ⊥ = 2.2, μ = 3.5, λ = 3, ω_c/ω_p² = 4.5
STABILITY FOR FINITE RESISTIVITY
The growth is studied analytically as well as numerically. For the analytical case, the growth is plotted against the finite resistivity for different values; we see here that as the finite resistivity increases, the growth rate decreases, thereby showing the stabilizing effect for the considered system, which is exhibited in Fig. 2.


[Figure: plot of growth rate vs low frequency fluctuation]

Fig. 3: Ln = 4.34, Lp = 5, Lφ = 5.5, χ⊥ = 2.2, μ = 3.5, η = 2, λ = 3


STABILITY FOR FLUCTUATIONS
The growth is studied analytically as well as numerically. For the analytical case, the growth is plotted against the fluctuations for different values; we see here that as the fluctuations increase, the growth rate decreases, showing the stabilizing effect for the considered system when the finite conductivity, current diffusivity and derivative of the fluctuations remain constant, which is exhibited in Fig. 3 and Fig. 4.

[Figure: plot of growth rate vs low frequency fluctuation (×10)]

Fig. 4: Ln = 4.34, Lp = 5, Lφ = 5.5, χ⊥ = 2.2, μ = 3.5, η = 2, λ = 3

[Figure: plot of growth rate vs pressure gradient scale length]

Fig. 5: Ln = 4.34, λ = 3, Lφ = 5.5, χ⊥ = 2.2, μ = 3.5, η = 2, ω_c/ω_p² = 4.5
STABILITY FOR PRESSURE GRADIENT SCALE LENGTH
The growth is studied analytically as well as numerically. For the analytical case, the growth is plotted against the pressure gradient scale length for different values; we see here that as the pressure gradient scale length increases, the growth rate increases, thereby showing the destabilizing effect for the considered system, which is exhibited in Fig. 5. This is the only destabilizing factor.

STABILITY FOR CURRENT DIFFUSIVITY
The growth is studied analytically as well as numerically. For the analytical case, the growth is plotted against the current diffusivity for different values; we see here that as the current diffusivity increases, the growth rate decreases, thereby showing the stabilizing effect for the considered system, which is exhibited in Fig. 6.

[Figure 6: plot of current diffusivity vs growth rate]
Fig.6: Ln = 4.34, Lp = 5, Lφ = 5.5, χ⊥ = 2.2, µ = 3.5, η = 2, ωc/ωp² = 4.5

[Figure 7: plot of magnetic shear aspect ratio length (×10) vs growth rate (×100)]
Fig.7: Ln = 4.34, Lp = 5, Lφ = 5.5, χ⊥ = 2.2, µ = 3.5, η = 2, ωc/ωp² = 4.5
STABILITY FOR MAGNETIC SHEAR ASPECT RATIO LENGTH

The growth rate is studied analytically as well as numerically. For the analytical case the growth rate is plotted against the magnetic shear aspect ratio length for different values of that parameter; as the magnetic shear aspect ratio length increases the growth rate decreases, showing the stabilizing effect for the considered system, which is exhibited in Fig.7. This term is responsible for the feedback loop current and voltage.

These results are consistent with those obtained by Yagi et al (1997), Hirose (2007) and Beyer et al (2007), so we conclude that the present results for the stabilization of RTI in ITER are apposite.

REFERENCES
86. Doyle, E.J., Groebner, R.J. et al: Phys. Fluids B 3, 230 (1991)
87. Shaing, K.C. and Crume, E.C.: Phys. Rev. Lett. 63, 2369 (1989)
88. Itoh, S.I. and Itoh, K.: Phys. Rev. Lett. 60, 2276 (1983)
89. Hassam, A.B.: Comments on Plasma Phys. Contr. Fusion 14, 275 (1991) and Phys. Fluids B 4, 485 (1992)
90. Sen, S. and Weiland, J.: Phys. Fluids B 4, 485 (1992)
91. Bhatia, P.K. and Hazarika, A.B.R.: Physica Scripta 52, 947 (1995)
92. Hazarika, A.B.R.: National Symposium on Plasma Science & Technology, Rajkot (1998)
93. Hazarika, A.B.R.: National Symposium on Plasma Science & Technology, Guwahati (2001)
94. Hazarika, A.B.R.: National Symposium on Plasma Science & Technology, Ranchi (2003)
95. Hazarika, A.B.R.: National Symposium on Plasma Science & Technology, Bhopal (2004)
96. Hazarika, A.B.R.: Proceedings of the National Symposium on Plasma Science & Technology, Cochin Univ. of Sci. & Technology, Cochin (2005)
97. Hazarika, A.B.R.: Proceedings of the 3rd Technical Meeting of the International Atomic Energy Agency on Theory of Plasma Instabilities, Univ. of York, York, UK (2007a)
98. Diamond, P.H.: Plasma Physics and Controlled Nuclear Fusion Research (IAEA, Vienna, 1992) 2, 97 (1992)
99. Hazarika, A.B.R.: Submitted to Nuclear Fusion (2007b)
100. Hazarika, A.B.R.: Submitted to Plasma Physics and Controlled Fusion (2007c)
101. Hazarika, A.B.R.: Submitted to Plasma Sources Science and Technology (2007d)
102. Hazarika, A.B.R.: Submitted to Physica Scripta (2007e)

4.7.2. SUPPRESSION OF RAYLEIGH-TAYLOR INSTABILITY IN LOW FREQUENCY

FLUCTUATION OF ITER BY USING D-SHAPE COORDINATES

A low-β plasma is considered in ITER for low-frequency fluctuation. If one considers that the density gradient (∇n) acts against gravity, causing the R-T instability, this instability can be suppressed by the use of shear velocity. The present study is done theoretically in parabolic D-shape coordinates.

BASIC EQUATIONS
The basic equations which govern the system are the same as (4.2.1)-(4.2.9).

The D-shaped coordinates are given by

ξ = r − z = ae cos θ − b
η = r + z = ae cos θ + b
ϕ = φ
Here a is the minor radius of the ellipse from which the D-shape is cut and b is the major radius of the ellipse (or D-shape), and e is the eccentricity of the ellipse, which for the present D-shape is given by

e = √(b² − a²)/a
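As an illustrative sketch (not part of the original derivation), the D-shape mapping and its eccentricity can be evaluated numerically; the function name dshape_coords and the sample radii are assumptions for illustration only:

```python
import math

def dshape_coords(a, b, theta):
    """D-shape coordinates from the text: xi = a*e*cos(theta) - b,
    eta = a*e*cos(theta) + b, with eccentricity e = sqrt(b**2 - a**2)/a."""
    e = math.sqrt(b**2 - a**2) / a
    xi = a * e * math.cos(theta) - b
    eta = a * e * math.cos(theta) + b
    return e, xi, eta

# Example with the ITER-like spans quoted later in the text (a = 2.0 m, b = 3.2 m)
e, xi, eta = dshape_coords(2.0, 3.2, 0.0)
```

Note that eta − xi = 2b for every theta, so the pair (xi, eta) carries the poloidal angle only through their sum.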


∇ = ∂/∂r + (1/r) ∂/∂θ + [1/(r sin θ)] ∂/∂φ

which in D-shape coordinates becomes

∇ = 2(∂/∂ξ + ∂/∂η)[1 − (a²e²(ξ + η)² − 4)/(ξ + η)²] + [2ae/(a²e²(ξ + η)² − 4)] ∂/∂ϕ

∇² = ∂²/∂r² + (1/r²) ∂/∂θ(r ∂/∂θ) + [1/(r² sin²θ)] ∂²/∂φ²

∇² = (∂²/∂ξ² + ∂²/∂η²)[2 + a²e² − 4/(ξ + η)²] + 2[a²e² − 4/(ξ + η)²] ∂²/∂ξ∂η + [4a²e²/((ξ + η)²(a²e²(ξ + η)² − 4))] ∂²/∂ϕ²

The eigenmode equation is obtained by linearizing (4.2.6)-(4.2.8) in the parabolic axisymmetric coordinates (ξ, η, ϕ) for the Rayleigh-Taylor instability (RTI) as

φ(ξ, η, ϕ) = exp i[lϑ + mϕ − ωt]{C₁X₁(ξ) + C₂X₂(η)}   (4.7.2.1)

The normalized quantities are

δ̂ = δτ_PA / (n₀mᵢr²ₘₙ[gLn ln gLn]),  µ̂ = µτ_PA / (n₀mᵢr²ₘₙ[gLn ln gLn])

β = p̂τ²_PA / (n₀mᵢr²ₘₙ gLn ln gLn)^(1/2),  τ_PA = (4πn₀mᵢr²ₘₙ)^(1/2) / Bθ

χ̂ = χ⊥τ_PA / r²ₘₙ,  φ̂ = cτ_PAφ / (r²ₘₙB₀),  λ̂ = λc²τ_PA / (4πr²ₘₙ),  p̂ = 8πp / B₀²

t̂ = t / τ_PA,  Â = A / (rₘₙBθ),  η̂ = η_n q²,  ĝ = gc²ₛ / R

ρ̂ = ρ[κ − ε(sθ − ρ sin θ) cos θ],  κ = −(1 − 1/q²)ε,  ρ = βL_p⁻¹

n = n₀[−r/Ln],  s = Bφ/Bθ,  a₀ = (3π/4)^(4/5) λ^(3/5) s^(4/5) (2l + 1)^(4/5) / [η(ρ − µ + δ)^(1/5)]

Ln⁻¹ = −d(ln n₀)/drₘₙ,  Lp⁻¹ = −d(ln p₀)/drₘₙ


ε = r/R = 3.1

For low frequency ω²ₚᵢτ²_PA ≪ 1 and for the RT mode ρ < 2Ln⁻¹. For ITER the major radius is 6.2 m and the cross-section of the torus is parabolic in nature, i.e. 'D'-shaped, with a horizontal span of 2.6 m and a vertical span of 3.2 m, the minor radius being 2.0 m. As the considered coordinate is parabolic, we take the horizontal extent ξ = 2.6 m and the vertical extent η = 3.2 m for the further calculation.

∇φ = −γÂ + η∇²⊥Â − λ∇⁴⊥Â   (4.7.2.2)

∇⊥ = 2(∂/∂ξ + ∂/∂η)[1 − (a²e²(ξ + η)² − 4)/(ξ + η)²]

∇∥ = [2ae/(a²e²(ξ + η)² − 4)] ∂/∂ϕ

B = Bϕ[êϕ + (r/(Rq(r)))êϑ],  1/q = 1/q₀ − (R₀/(Lₛr₀))(r − r₀)

ζ = (Lₛr₀/R₀)ξ,  ∇r = r − r₀

p ≡ (p(ξ), 0, 0),  n ≡ (n(ξ), 0, 0),  ϕ ≡ (0, ϕ(η), 0),  A ≡ (A(ξ), 0, 0)


Equation (4.2.6) becomes

−(n₀mᵢc/B₀)(∂/∂t)mᵢn + (c/B₀)[(∂n/∂ξ)(∂ϕ/∂η), −mᵢn] = (B/c)∇∥j∥ + ∇p × (2∇r cos θ · ẑ)/R − µmᵢn₀(c/B₀)∇²⊥mᵢn + mᵢgδφ · ẑ + mᵢg(φ − 1) · ẑ

−ω + [4µξ/(ξ + η)](ω²/ωc)Ln − (ω²ₚ/ωc)Ln(∂ϕ/∂η) = −(ωc/ω²ₚ)∇∥∇²⊥Â + 2ωcLpε cos ϑ − gLn(ωc/ω²ₚ)   (4.7.2.3)

Equation (4.2.7) becomes

−ωÂ = −[4/(ξ + η)] ∂φ/∂ϕ + η̂[4/(ξ + η)](∂/∂ξ + ∂/∂η)Â + λ̂∇⁴⊥Â   (4.7.2.4)

Equation (4.2.8) becomes

−ω − (ω²ₚ/ωc)Lp(∂φ/∂η) = [4ξ/(ξ + η)]χ⊥Lp   (4.7.2.5)
Solving (4.7.2.3) – (4.7.2.5) we get a differential equation of second order


∂²φ̂/∂ϑ² + (η̂/ω)(1 + (λ̂/η̂)s²θ²)(ρ − s²θ²ω²)φ̂ = 0   (4.7.2.6)

Now changing the toroidal coordinate to the D-shape coordinate we get

[4a²e²/(ξ + η)²]² (∂/∂ξ)(∂/∂η)φ̂ + (η̂/ω)(1 + (λ̂/η̂)s²θ²)(ρ − s²θ²ω²)φ̂ = 0   (4.7.2.7)

φ̂ = exp(βη),  ∂φ̂/∂η = β exp(βη),  ∂²φ̂/∂η² = β² exp(βη)   (4.7.2.8)

After substituting these values we get a quadratic equation which gives us the growth rate as

ω = [−D ± √(D² − 4CE)]/2C   (4.7.2.9)

C = η̂s²θ²(1 + (λ̂/η̂)s²θ²)   (4.7.2.10)

D = 4a²e²β³/(log φ̂)²   (4.7.2.11)

E = −η̂(1 + (λ̂/η̂)s²θ²)   (4.7.2.12)
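The quadratic (4.7.2.9) with coefficients (4.7.2.10)-(4.7.2.12) can be evaluated directly; the following sketch (parameter values in the example are illustrative, not taken from the text) returns both roots:

```python
import cmath
import math

def growth_roots(eta_h, lam_h, s, theta, a, e, beta, phi_hat):
    """Roots of C*w**2 + D*w + E = 0, Eqns (4.7.2.9)-(4.7.2.12)."""
    bracket = 1.0 + (lam_h / eta_h) * s**2 * theta**2
    C = eta_h * s**2 * theta**2 * bracket                    # Eq. (4.7.2.10)
    D = 4.0 * a**2 * e**2 * beta**3 / math.log(phi_hat)**2   # Eq. (4.7.2.11)
    E = -eta_h * bracket                                     # Eq. (4.7.2.12)
    disc = cmath.sqrt(D**2 - 4.0 * C * E)
    return (-D + disc) / (2.0 * C), (-D - disc) / (2.0 * C)
```

Since the product of the roots is E/C = −1/(s²θ²) < 0, the two roots always have opposite signs, so one of them is a growing mode.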

The growth rate is given by the above equation, and the stability condition is given below:

(4ω²c/ω²ₚ)(k∥/k⊥)[1/(η + λk²⊥)] < 1 + Ln/Lp

For low frequency γ ≪ ωc, for the RT mode gLn⁻¹ ≪ 1, and for low β: mₑ/mᵢ ≪ 1.
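A minimal check of the quoted stability criterion (all quantities normalized; the function name and sample values are assumptions for illustration):

```python
def rt_stable(wc, wp, k_par, k_perp, eta, lam, Ln, Lp):
    """Stability criterion quoted above:
    (4*wc**2/wp**2)*(k_par/k_perp)/(eta + lam*k_perp**2) < 1 + Ln/Lp."""
    lhs = (4.0 * wc**2 / wp**2) * (k_par / k_perp) / (eta + lam * k_perp**2)
    return lhs < 1.0 + Ln / Lp
```

Larger Ln/Lp, resistivity or current diffusivity enlarges the stable window, in line with the plots discussed above.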

As we are interested in knowing the effect of finite conductivity along with the other parameters, we take the derivative of the growth rate with respect to finite conductivity and observe whether the derivative is positive or negative, for destabilization or stabilization respectively.

The growth rate is studied analytically as well as numerically. For the analytical case the derivative with respect to the density gradient scale length term gives a negative quantity, hence showing the stabilizing character. Numerically we observe that the growth stabilizes for larger values of the density gradient scale length term, hence one may opt for larger values of this term, which is shown in Fig.4.7.2.1.

STABILITY FOR DENSITY GRADIENT SCALE LENGTH

The growth rate is studied analytically as well as numerically. For the analytical case the growth rate is plotted against the number density gradient scale length for different values of that parameter; we see that as the density gradient scale length increases the growth rate decreases, showing the stabilizing effect for the considered system, which is exhibited in Fig.4.7.2.1.

[Figure: plot of density gradient scale length vs growth rate]
Fig.4.7.2.1.

[Figure: plot of finite resistivity (×0.1) vs growth rate]
Fig.4.7.2.2

STABILITY FOR FINITE RESISTIVITY

The growth rate is studied analytically as well as numerically. For the analytical case the growth rate is plotted against finite resistivity for different values of that parameter; we see that as the finite resistivity increases the growth rate decreases, showing the stabilizing effect for the considered system, which is exhibited in Fig.4.7.2.2.

STABILITY FOR CURRENT DIFFUSIVITY

The growth rate is studied analytically as well as numerically. For the analytical case the growth rate is plotted against the current diffusivity for different values of that parameter; we see that as the current diffusivity increases the growth rate decreases, showing the stabilizing effect for the considered system, which is exhibited in Fig.4.7.2.3.

[Figure: plot of current diffusivity vs growth rate]
Fig.4.7.2.3.

STABILITY FOR WAVE NUMBER

The growth rate is studied analytically as well as numerically. For the analytical case the growth rate is plotted against the wave number for different values of that parameter; we see that as the wave number increases the growth rate first increases and then decreases, showing the stabilizing effect for the considered system when the finite conductivity, current diffusivity and derivative of the fluctuations remain constant, which is exhibited in Fig.4.7.2.4.

[Figure: plot of wave number (×0.1) vs growth rate]


Fig.4.7.2.4

These results are consistent with those obtained by Yagi et al (1997), Hirose (2007) and Beyer et al (2007), so we conclude that the present results for the stabilization of RTI in ITER are apposite.

REFERENCES
103. Doyle, E.J., Groebner, R.J. et al: Phys. Fluids B 3, 230 (1991)
104. Shaing, K.C. and Crume, E.C.: Phys. Rev. Lett. 63, 2369 (1989)
105. Itoh, S.I. and Itoh, K.: Phys. Rev. Lett. 60, 2276 (1983)
106. Hassam, A.B.: Comments on Plasma Phys. Contr. Fusion 14, 275 (1991) and Phys. Fluids B 4, 485 (1992)
107. Sen, S. and Weiland, J.: Phys. Fluids B 4, 485 (1992)
108. Bhatia, P.K. and Hazarika, A.B.R.: Physica Scripta 52, 947 (1995)
109. Hazarika, A.B.R.: National Symposium on Plasma Science & Technology, Rajkot (1998)
110. Hazarika, A.B.R.: National Symposium on Plasma Science & Technology, Guwahati (2001)
111. Hazarika, A.B.R.: National Symposium on Plasma Science & Technology, Ranchi (2003)
112. Hazarika, A.B.R.: National Symposium on Plasma Science & Technology, Bhopal (2004)
113. Hazarika, A.B.R.: Proceedings of the National Symposium on Plasma Science & Technology, Cochin Univ. of Sci. & Technology, Cochin (2005)
114. Hazarika, A.B.R.: Proceedings of the 3rd Technical Meeting of the International Atomic Energy Agency on Theory of Plasma Instabilities, Univ. of York, York, UK (2007a)
115. Diamond, P.H.: Plasma Physics and Controlled Nuclear Fusion Research (IAEA, Vienna, 1992) 2, 97 (1992)
116. Hazarika, A.B.R.: Submitted to Nuclear Fusion (2007b)
117. Hazarika, A.B.R.: Submitted to Plasma Physics and Controlled Fusion (2007c)
118. Hazarika, A.B.R.: Submitted to Plasma Sources Science and Technology (2007d)
119. Hazarika, A.B.R.: Submitted to Physica Scripta (2007e)
18. Beyer, P., Benkadda, S., Fuhr-Chaudier, G., Garbet, X., Ghendrih, Ph. and Sarazin, Y. (2007): Plasma Phys. Control. Fusion 49, 507
19. Yagi, M., Itoh, K., Itoh, S.I., Fukuyama, A. and Azumi, M. (1997): Phys. Fluids B 5(10), 3702
20. Hirose, A. (2007): Plasma Phys. Control. Fusion 49, 145


4.8. GREEN'S FUNCTION SOLUTION OF STABILITY OF PLASMAS HELD BY RADIATION

PRESSURE IN PARABOLIC COORDINATES

In parabolic coordinates a simplified model of the Brillouin scattering instability in arbitrarily inhomogeneous plasma is presented. It is solved using a Green's function and an integro-differential equation. It is shown that the light pressure, if treated self-consistently, is unstable. Most earlier studies of the stability of plasmas in the presence of electromagnetic waves are concerned with wave decay or parametric effects, such as Brillouin scattering, in homogeneous or slightly inhomogeneous plasmas (Lin et al 1974). In contrast, this short contribution is devoted to the stability investigation of arbitrarily inhomogeneous plasmas held by light pressure, the importance of which was pointed out years ago (Forslund et al 1975). The Brillouin-type instability is due to the effect that a density perturbation of twice the local light wavelength modifies the electromagnetic wave in such a way that its radiation pressure tends to increase the original perturbation (Hora et al 1967). Since the frequency shift of the scattered electromagnetic wave is small, it is a good approximation to use the same index of refraction for the incident and scattered waves, at least far from the critical density. The essential features of possible instabilities are thus preserved if a simplified response of the plasma to the light, as described in the following equations of Chen (1973), Mulser et al (1977), Laval et al (1965) and Tasso et al (1978), is assumed. In Tasso et al (1978) the study is done in planar geometry, whereas the present model is in parabolic coordinates, to observe the similarity or dissimilarity:
ρ(∂u/∂t + [2u/(ξ + η)] ∂u/∂ξ) = −[2c²ₛ/(ξ + η)] ∂ρ/∂ξ − [2µρ/(ξ + η)] ∂⟨EE*⟩/∂ξ   (4.8.1)

∂ρ/∂t − [2/(ξ + η)] ∂(ρu)/∂ξ = 0   (4.8.2)

[4/(ξ + η)²] ∂²E/∂ξ² + k²₀n²E = 0   (4.8.3)

where n² = 1 − ρ(ξ)/ρc.

The parabolic coordinates are as follows:

ξ = r − z = r(1 − cos ϑ)
η = r + z = r(1 + cos ϑ)   (4.8.4)
ϕ = φ

∇ = [2/(ξ + η)][ξ ∂/∂ξ + η ∂/∂η] + [1/√(ξη)] ∂/∂ϕ   (4.8.5)

∇² = [4/(ξ + η)²][∂/∂ξ(ξ ∂/∂ξ) + ∂/∂η(η ∂/∂η)] + [1/(ξη)] ∂²/∂ϕ²   (4.8.6)

The symbols have their usual meaning (u is the ξ-component of velocity, ρc is the critical density, E is the electric field, k₀ is the vacuum wave number, cₛ is the sound velocity). The equations are valid if the fluid is considered one-dimensional and isothermal, behaves hydrodynamically, and if dissipation can be neglected.
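The refractive index n² = 1 − ρ/ρc fixes the local light wavelength; this small sketch (the function names are assumptions) shows how the wavelength stretches toward the critical density:

```python
import math

def refractive_index(rho, rho_c):
    """n = sqrt(1 - rho/rho_c); evanescent (returns 0) above critical density."""
    n2 = 1.0 - rho / rho_c
    return math.sqrt(n2) if n2 > 0.0 else 0.0

def local_wavelength(k0, rho, rho_c):
    """Local light wavelength 2*pi/(k0*n); diverges as rho -> rho_c."""
    n = refractive_index(rho, rho_c)
    return 2.0 * math.pi / (k0 * n) if n > 0.0 else math.inf
```

This is the geometric-optics picture behind treating the incident and scattered waves with the same index of refraction far from the critical density.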
We are interested in the time evolution of u over periods much longer than the period of the light wave, so it is justified to replace the averages over the rapid time scale in Eqn (4.8.1) by the ξ-dependent amplitudes E(ξ). Furthermore, k²₀n² is real, which allows us to consider only real E(ξ), so that we replace ⟨EE*⟩ by E².

If we assume a static equilibrium for the zero order, Eqns (4.8.1)-(4.8.3) reduce to

ρ(∂u/∂t + [2u/(1 + l)] ∂u/∂x) = −[2c²ₛ/(1 + l)] ∂ρ/∂x − [2µρ/(1 + l)] ∂E²/∂x   (4.8.7)

∂ρ/∂t − [2/(1 + l)] ∂(ρu)/∂x = 0   (4.8.8)

[4/(1 + l)²] ∂²E/∂x² + e⁻²ˣ ∂E/∂x + k²₀n²E = 0   (4.8.9)

where l = η/ξ, x = log ξ, ∂/∂x = ξ ∂/∂ξ.

The perturbed quantities are given as

ρ = ρ₀ + ρ₁,  E = E₀ + E₁,  u = 0 + u   (4.8.10)

Together these equations, after solving, give us

[2c²ₛ/(1 + l)] ∂ρ₀/∂x + [2µρ₀/(1 + l)] ∂E²₀/∂x = 0   (4.8.11)

[4/(1 + l)²] ∂²E₀/∂x² + e⁻²ˣ ∂E₀/∂x + k²₀(1 − ρ₀(ξ)/ρc)E₀ = 0,  ρ₀ < ρc   (4.8.12)

The linearized equations are

ρ₀ ∂u/∂t = −[2c²ₛ/(1 + l)] ∂ρ₁/∂x − [2µρ₀/(1 + l)] ∂(E₀E₁)/∂x − µρ₁ ∂E²₀/∂x   (4.8.13)

∂ρ₁/∂t − [2/(1 + l)] ∂(ρ₀u)/∂x = 0   (4.8.14)

[4/(1 + l)²] ∂²E₁/∂x² + k²₀(1 − ρ₀/ρc)E₁ = k²₀E₀ρ₁/ρc   (4.8.15)

E₁ can be expressed in terms of the Green's function:

E₁ = k²₀ ∫ₐᵇ G(x, x′)E₀(x′)[ρ₁(x′)/ρc] dx′   (4.8.16)
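Rather than constructing G(x, x′) explicitly, one can discretize the operator on the LHS of Eqn (4.8.15) and solve the linear system, which is equivalent to applying the Green's function integral (4.8.16). This numerical sketch (the grid, profiles and Dirichlet boundaries are illustrative assumptions):

```python
import numpy as np

def solve_E1(x, rho0, rho1, E0, k0, rho_c):
    """Solve [d^2/dx^2 + k0^2*(1 - rho0/rho_c)] E1 = k0^2 * E0 * rho1/rho_c
    on a uniform grid with E1 = 0 at both ends (second-order finite differences)."""
    n = len(x)
    h = x[1] - x[0]
    L = np.zeros((n - 2, n - 2))
    for i in range(n - 2):
        L[i, i] = -2.0 / h**2 + k0**2 * (1.0 - rho0[i + 1] / rho_c)
        if i > 0:
            L[i, i - 1] = 1.0 / h**2
        if i < n - 3:
            L[i, i + 1] = 1.0 / h**2
    rhs = k0**2 * E0[1:-1] * rho1[1:-1] / rho_c
    E1 = np.zeros(n)
    E1[1:-1] = np.linalg.solve(L, rhs)
    return E1
```

With ρ₁ = 0 the response vanishes identically; near a null eigenvalue of the operator the solve amplifies the density perturbation strongly, which is the mechanism analyzed below.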
By taking the derivative with respect to time of Eqn (4.8.13), with the help of Eqn (4.8.16) the system (4.8.13)-(4.8.15) reduces to

ρ₀ ∂²u/∂t² = c²ₛ ∂²(ρ₀u)/∂x² + µ(∂E²₀/∂x) ∂(ρ₀u)/∂x + 2µk²₀(ρ₀/ρc)(∂/∂x)[E₀ ∫ₐᵇ G(x, x′)E₀(x′)(∂/∂x′)(ρ₀u) dx′]   (4.8.17)
or

ρ₀ü + Fu = 0   (4.8.18)

where F is the integro-differential operator on u on the RHS of Eqn (4.8.17). We can check that the differential part of F is symmetric for u vanishing at the boundaries. The integral operator is also symmetric, because the Green's function inverts the symmetric operator on the LHS of Eqn (4.8.15) and then has to be symmetric with respect to interchange of x and x′. This property of Eqn (4.8.18) allows a necessary and sufficient condition of stability to be derived in the form of an energy principle as known from Refs (5 & 6).
Let δW = ∫ₐᵇ uFu dx, i.e.

δW = c²ₛ ∫ₐᵇ [(ρ₀u)ₓ]² dx/ρ₀ + (2µk²₀/ρc) ∫ₐᵇ ∫ₐᵇ G(x, x′)E₀(x′)(ρ₀u)ₓ′ E₀(x)(ρ₀u)ₓ dx′ dx   (4.8.19)

If δW > 0 for all u vanishing at x = a and x = b, the system is stable. If for any test function u vanishing at a and b we have δW < 0, the system is unstable.

This means that without the self-consistent reaction of the plasma to the light, the equilibrium is stable. Let us investigate this self-consistent response by analyzing the properties of the Green's function of Eqn (4.8.15). The operator

L ≡ ∂²/∂x² + k²₀(1 − ρ₀/ρc)   (4.8.20)

has a null eigenvalue if

k₀(b − a)(1 − ρ₀/ρc)^(1/2) = α_crit ≈ 1   (4.8.21)

where the square-root term indicates an average in x. When this condition is barely satisfied, the Green's function becomes very large and can change sign. This is well understood from the fact that if L has the eigenvalue λ, L⁻¹ has the eigenvalue λ⁻¹. So G(x, x′) can be negative and large if

k₀(b − a) ≤ (1 − ρ₀/ρc)^(−1/2)   (4.8.22)

This means that the double integral in expression (4.8.19) can be made negative and large enough to yield δW < 0 for a properly chosen test function, so the system is unstable.
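The threshold (4.8.22) is a one-line check; this sketch (the function name and the use of an x-averaged density are assumptions) flags when G(x, x′) can become large and negative:

```python
def greens_function_can_flip(k0, a, b, rho0_avg, rho_c):
    """Condition (4.8.22): k0*(b - a) <= (1 - rho0/rho_c)**(-1/2),
    with rho0_avg the x-averaged density, signalling possible instability."""
    return k0 * (b - a) <= (1.0 - rho0_avg / rho_c) ** -0.5
```

Long wavelengths (small k₀) or densities near critical make the condition easy to satisfy, matching the qualitative discussion above.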

In conclusion, the plasma is unstable if the ponderomotive force is perturbed self-consistently, no matter how large the modulation of the light and the plasma inhomogeneity are. This is in agreement with Laval et al (1965) and Tasso et al (1978).


4.9. EQUILIBRIUM OF AN INCOMPRESSIBLE HEAVY FLUID IN PARABOLIC

COORDINATES

The condition of equilibrium for a fluid in horizontal strata with variable density is studied for the Rayleigh-Taylor instability by using parabolic coordinates. In the past many researchers have worked on the Rayleigh-Taylor instability. Rayleigh (1883) did innovative work in which an incompressible fluid flowing in the horizontal direction, with variable density arranged in horizontal strata, was studied in rectangular coordinates. The present study recasts that problem in parabolic coordinates, which gives us the same results as were given by Rayleigh.

GENERAL MHD MODEL:

In general the equation of continuity, the condition of incompressibility, and the equation of motion are given as follows:

dρ/dt + ∇ · (ρv) = 0   (4.9.1)

∇ · v = 0   (4.9.2)

ρ dv/dt = −∇p − gρ   (4.9.3)

The conditions given above are at equilibrium, with v the velocity, ρ the density, p the pressure and g the acceleration due to gravity; after perturbation the quantities change as

v ≡ (u, v, w),  ρ ≡ ρ₀ + ρ,  p ≡ p + δp,  g ≡ (0, 0, −g)

where ρ is a function of ξ, η, ϕ and of the time t, and is always small during the period contemplated.

As we know, the parabolic coordinates are as follows:

ξ = r − z = r(1 − cos ϑ)
η = r + z = r(1 + cos ϑ)
ϕ = φ

∇ = [2/(ξ + η)][ξ ∂/∂ξ + η ∂/∂η] + [1/√(ξη)] ∂/∂ϕ

∇² = [4/(ξ + η)²][∂/∂ξ(ξ ∂/∂ξ) + ∂/∂η(η ∂/∂η)] + [1/(ξη)] ∂²/∂ϕ²

Therefore equation (4.9.2) gives us

∇ · v = [2/(ξ + η)][ξ ∂u/∂ξ + η ∂v/∂η] + [1/√(ξη)] ∂w/∂ϕ = 0

The equilibrium pressure is a function of ϕ only; then equation (4.9.3) becomes

ρ₀ du/dt = −[2/(ξ + η)] dδp/dξ

ρ₀ dv/dt = −[2/(ξ + η)] dδp/dη   (4.9.4)

ρ₀ dw/dt = −[1/√(ξη)] dδp/dϕ − gρ

From the equation of continuity we get

dρ/dt + [w/√(ξη)] dρ/dϕ = 0   (4.9.5)

iκu + iκ′v + [1/√(ξη)] ∂w/∂ϕ = 0   (4.9.6)

κδp = −nρ₀u,  κ′δp = −nρ₀v,  [1/√(ξη)] d(δp)/dϕ = −gρ − nρ₀w   (4.9.7)

inρ + [w/√(ξη)] dρ/dϕ = 0   (4.9.8)

By Fourier's theorem and the general theory of perturbations about equilibrium, the complete solution has the variable quantities, considered as functions of ξ, η, ϕ, proportional to X ≡ exp(iκξ + iκ′η + int). The wavelengths of the disturbances parallel to ξ, η are λ = 2π/κ and λ′ = 2π/κ′.

Eliminating u and v between Eqn (4.9.6) and the first two of Eqns (4.9.7), we get

i(κ² + κ′²)δp − nρ₀[1/√(ξη)] dw/dϕ = 0   (4.9.9)

Next, eliminating δp between (4.9.9) and the last of Eqns (4.9.7), we find that

i(κ² + κ′²)(gρ + inρ₀w) + n[1/√(ξη)] d/dϕ(ρ₀[1/√(ξη)] dw/dϕ) = 0   (4.9.10)

Finally, eliminating ρ between (4.9.8) and (4.9.10), we get

[1/√(ξη)] d/dϕ(ρ₀[1/√(ξη)] dw/dϕ) − (κ² + κ′²)[(g/n²)[1/√(ξη)] dρ₀/dϕ + ρ₀]w = 0   (4.9.11)

Therefore we get

[1/(ξη)] d²w/dϕ² − (κ² + κ′²)w + (1/ρ₀)(dρ/dz){[1/√(ξη)] dw/dϕ − (κ² + κ′²)gw/n²} = 0   (4.9.12)
Let us consider two fluids of different densities ρ₁, ρ₂ separated by a horizontal boundary (z = 0); for simplicity we take κ′ = 0, then equation (4.9.12) reduces to

[1/(ξη)] d²w/dϕ² − κ²w = 0   (4.9.13)

of which the solution is

w = Ae^(κϕ√(ξη)) + Be^(−κϕ√(ξη))   (4.9.14)

For the upper fluid A = 0 and for the lower fluid B = 0; thus the upper and lower solutions are w = Be^(−κϕ√(ξη)) and w = Ae^(κϕ√(ξη)) respectively. The second boundary condition is obtained by integrating equation (4.9.11) across the surface of transition as


[ρ₀(1/√(ξη)) dw/dϕ]₂ − [ρ₀(1/√(ξη)) dw/dϕ]₁ − gκ²(ρ₀₂ − ρ₀₁)/n² = 0

Hence

n² = gκ(ρ₀₁ − ρ₀₂)/(ρ₀₂ + ρ₀₁)   (4.9.15)

If ρ₂ < ρ₁, then n² is positive, which indicates stability and harmonic oscillations whose frequency increases without limit with κ as the wavelength diminishes. Otherwise, the system is unstable if ρ₂ > ρ₁, n² being negative; the instability, measured by the rate at which a small disturbance or perturbation is multiplied in a given time, is greater for smaller wavelengths.
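Eqn (4.9.15) reproduces the classical interchange result; a small sketch (the function names and sample densities are illustrative assumptions):

```python
import math

def rt_n_squared(g, kappa, rho_lower, rho_upper):
    """Eq. (4.9.15): n**2 = g*kappa*(rho_01 - rho_02)/(rho_02 + rho_01),
    with rho_lower = rho_01 (fluid below) and rho_upper = rho_02 (fluid above)."""
    return g * kappa * (rho_lower - rho_upper) / (rho_lower + rho_upper)

def rt_growth_rate(g, kappa, rho_lower, rho_upper):
    """Exponential growth rate when n**2 < 0 (heavy fluid on top); zero otherwise."""
    n2 = rt_n_squared(g, kappa, rho_lower, rho_upper)
    return math.sqrt(-n2) if n2 < 0.0 else 0.0
```

The growth rate increases with κ, i.e. short wavelengths are multiplied fastest, exactly as stated above.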

4.9.1. RAYLEIGH-TAYLOR INSTABILITY STABILIZATION IN LOW FREQUENCY BY

USING PARABOLIC COORDINATES FOR PHASE TRANSITION WITH COULOMB
CONDENSATION

The present study examines the effects of various parameters such as density gradient scale length, magnetic shear scale length, magnetic diffusivity, Alfven velocity, wave number and finite resistivity on a system which undergoes instability because the gradient (∇n) plays against gravity in the upward direction, thereby causing the R-T instability in a low-beta plasma. Here we have seen that the conductivity causes the implosion in the system, which can be stabilized by the sheared flow; the density gradient scale length (Ln) and the Hall current suppress it at low frequency. The study is done theoretically by using parabolic coordinates as a local frame of reference; the results are obtained theoretically and solved numerically, and magnetic shear scale length and magnetic diffusivity stabilize the system.

It is based on the MAGNETIC CONFINEMENT TOKAMAK COLLIDER (MCTC) HUB with a low-β plasma having low-frequency fluctuation, which is stabilized by sheared velocity, finite conductivity and the other parameters. The induced RTI is suppressed by the above-mentioned parameters, and as a whole the classical transport phenomena are taken into consideration. The heat conductivity is calculated, and the Banana (Hazarika's) regime is calculated, an important result for the MAGNETIC CONFINEMENT TOKAMAK COLLIDER (MCTC) HUB being

D_H = D_ps [R(1 + sin 3θ sin φ)/r]⁴

i.e., the term in brackets improves on the Pfirsch-Schluter regime. After the Bohm diffusion, the Hazarika diffusion coefficient is calculated. Here we see that first comes the Bohm diffusion, then the classical plateau and the Pfirsch-Schluter regime, and then the Hazarika regime for the MAGNETIC CONFINEMENT TOKAMAK COLLIDER (MCTC) HUB; for the transport phenomena one new result is found:

v⊥ = R²(1 + sin 3θ sin φ)q²v_cl / r²

The above facts compel one to study the classical phenomena of RTI and thereby the suppression by different parameters; the mirror effect decreases drastically. The growth rate becomes (1 + sin 3θ sin φ)^(−6/5) times that of the Tokamak case, which gives us more stabilized and steady plasma confinement. The finite conductivity is a stabilizing parameter, as it varies as (1 + sin 3θ sin φ)^(−2/5), better off than the Tokamak case. Earlier, Bhatia and Hazarika (1995) studied the effect of self-gravitating superposed plasmas flowing past each other, which is of use in the MAGNETIC CONFINEMENT TOKAMAK COLLIDER (MCTC) Hub's collider region. The two torii meet together at the collider region, which is the source region of collision or stability in the MAGNETIC CONFINEMENT TOKAMAK COLLIDER (MCTC) HUB. This may be considered of interest to particle physicists, quantum theory researchers and so on.
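The two scalings claimed above (the diffusion coefficient D_H and the growth-rate reduction factor) can be tabulated numerically; this sketch takes the text's formulas at face value, with function names and sample angles as assumptions:

```python
import math

def hazarika_diffusion(D_ps, R, r, theta, phi):
    """Text's claim: D_H = D_ps * (R*(1 + sin(3*theta)*sin(phi))/r)**4."""
    factor = R * (1.0 + math.sin(3.0 * theta) * math.sin(phi)) / r
    return D_ps * factor**4

def mctc_growth(gamma_tok, theta, phi):
    """Growth rate reduced by (1 + sin(3*theta)*sin(phi))**(-6/5)
    relative to the plain Tokamak value, as stated in the text."""
    return gamma_tok * (1.0 + math.sin(3.0 * theta) * math.sin(phi)) ** (-1.2)
```

Wherever sin 3θ sin φ > 0 the bracketed factor exceeds one, so the diffusion coefficient is enhanced while the growth rate is reduced relative to the plain Tokamak case.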

Schematic diagram of Magnetic Confinement Tokamak Collider (MCTC)


GENERALIZED MHD MODEL:
The generalized MHD equations, derived from the above basic equations, are

(1 + αν_c/(1 + α))(n₀mᵢc/B₀)(∂/∂t − Λq²ᵢmᵢn + (c/B₀)[φ, −Λq²ᵢmᵢn]) = (B/c)∇_c j_c + ∇p × (2∇r cos θ · ẑ)/[R(1 + sin 3θ sin φ)] − µΛq²ᵢmᵢn₀(c/B₀)∇²⊥Λq²ᵢmᵢn + mᵢgδφ · ẑ + mᵢg(φ − 1) · ẑ   (4.9.16)

∂A/c∂t = −∇_cφ − η_c j_c + λ∇²⊥ j_c   (4.9.17)

∂p/∂t + (c/B₀)[φ, p] = χ⊥∇²⊥p + S(r)   (4.9.18)
∂t B 0


where j_c = −c∇²⊥Â is the term responsible for the feedback loop current, Λ is the Coulomb length, q²ᵢ is the ionic charge, Â is the electromagnetic potential, φ is the electrostatic potential, and [A, B] = ẑ · ∇A × ∇B is the Poisson bracket.

The eigenmode equation is obtained by linearizing (4.9.16)-(4.9.18) in toroidal coordinates (r, θ, ξ) for the Rayleigh-Taylor instability (RTI) as

φ(∆r, θ, ζ) = Σ φ̂(θ + 2πl) exp[inq′∆r(θ + 2πl)t − inq₀rₘₙ⁻¹∆rρ sin θ + in(q₀θ − ζ)]   (4.9.19)

The normalized quantities are

δ̂ = δτ_PA / (Λq²ᵢn₀mᵢr²ₘₙ[gLn ln gLn]),  µ̂ = µτ_PA / (Λq²ᵢn₀mᵢr²ₘₙ[gLn ln gLn])

β = p̂τ²_PA / (Λq²ᵢn₀mᵢr²ₘₙ gLn ln gLn)^(1/2),  τ_PA = (4πΛq²ᵢn₀mᵢr²ₘₙ)^(1/2) / Bθ

χ̂ = χ⊥τ_PA / r²ₘₙ,  φ̂ = cτ_PAφ / (r²ₘₙB₀),  λ̂ = λc²τ_PA / (4πr²ₘₙ),  p̂ = 8πp / B₀²

t̂ = t / τ_PA,  Â = A / (rₘₙBθ),  η̂ = η_n q²,  ĝ = gc²ₛ / [R(1 + sin 3θ sin φ)]

ρ̂ = ρ[κ − ε(sθ − ρ sin θ) cos θ],  κ = −(1 − 1/q²)ε,  ρ = βL_p⁻¹

n = n₀[−r/Ln],  s = Bφ/Bθ,  a₀ = (3π/4)^(4/5) λ^(3/5) s^(4/5) (2l + 1)^(4/5) / [η(ρ − µ + δ)^(1/5)]

Ln⁻¹ = −d(ln n₀)/drₘₙ,  Lp⁻¹ = −d(ln p₀)/drₘₙ

ε = r / [R(1 + sin 3θ sin φ)],  ν_n = 1 + αν_c/(1 + α)

For low frequency ω²ₚᵢτ²_PA ≪ 1 and for the RT mode ρ < 2Ln⁻¹.

∇φ = −γÂ + η∇²⊥Â − λ∇⁴⊥Â   (4.9.20)

The differential equation is given in terms of the feedback loop current:

{λC₁D⁶ + λC₂D⁵ + (λC₃ − ηC₁)D⁴ − ηC₂D³ + (γC₁ − ηC₃)D² + γC₂D + γC₃}Â = 0   (4.9.21)

where


D = d/dθ;  C₁ = γf² / [γ + f²(η + λf²)];  C₂ = γf′[2f(η − 1) + 2f²(2λ − η) − 2λf⁵] / {γ + f²(η + λf²)}²

C₃ = f²γ² − µf⁴γ + [γρ/(γ + χf²)][κ + cos θ + (sθ − ρ sin θ) sin θ]

Here we consider

f² = 1 + (sθ − ρ sin θ)²

Let the trivial solution be Â = A sin(ϖt + θ)   (4.9.22)

where A is the peak value of the loop current, ϖ is the frequency and θ is the phase difference.

DÂ = A cos(ϖt + θ) = −D³Â = D⁵Â
D²Â = −A sin(ϖt + θ) = −D⁴Â = D⁶Â   (4.9.23)

Initially θ = 0 at r = 0, t = 0:

D⁵Â = DÂ = A;  D³Â = −Â;  D⁶Â = D⁴Â = D²Â = 0   (4.9.24)

either A = 0 or γ = (λC₂ + ηC₁)/(−C₂)   (4.9.25)

At θ = π/2, r = rₘₙ, t = 0:

D⁶Â = D²Â = −A;  D⁴Â = Â = A   (4.9.26)

The growth rate is given by

γ = −(λ + η)   (4.9.27)

γ = −(F + ηf²)⁻¹[λF + ηf⁴(η + λf²)]   (4.9.28)

here

F = f′[2f(η − 1) + 2f²(2λ − η) − 2λf⁵]   (4.9.29)

For low frequency γ ≪ ωc, for the RT mode gLn⁻¹ ≪ 1, and for low β: mₑ/mᵢ ≪ 1.
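A quick consistency check of (4.9.27)-(4.9.29): at the parameter point used in the figures (f = f′ = 1, λ = 1) the general expression (4.9.28) collapses to γ = −(λ + η). The sketch below (function names assumed) evaluates both:

```python
def F_term(eta, lam, f, fp):
    """Eq. (4.9.29)."""
    return fp * (2.0 * f * (eta - 1.0) + 2.0 * f**2 * (2.0 * lam - eta) - 2.0 * lam * f**5)

def gamma_general(eta, lam, f, fp):
    """Eq. (4.9.28): gamma = -(F + eta*f**2)**(-1) * (lam*F + eta*f**4*(eta + lam*f**2))."""
    F = F_term(eta, lam, f, fp)
    return -(lam * F + eta * f**4 * (eta + lam * f**2)) / (F + eta * f**2)
```

With η = 3, λ = 1, f = f′ = 1 (the Fig.3 parameters) this gives γ = −4 = −(λ + η), matching Eq. (4.9.27).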

As we are interested in knowing the effect of finite conductivity along with other parameters so we shall
take the derivative of the growth rate with respect to finite conductivity and we can observe that the
derivative is positive or negative for destabilization or stabilization respectively.

STABILITY FOR PLASMA BETA AND LARGE ASPECT RATIO

The stability condition of the MAGNETIC CONFINEMENT TOKAMAK COLLIDER (MCTC) HUB using plasma beta and aspect ratio is studied, which is as follows:

−γ ∝ (Ln⁻¹)^(−1/2)

This means that the density gradient scale length stabilizes the system. The growth rate is studied analytically as well as numerically; for the analytical case the derivative with respect to the density gradient scale length term gives a negative quantity, hence showing the stabilizing character. Numerically we observe that the growth stabilizes for larger values of the density gradient scale length term, hence one may opt for larger values of this term, which is exhibited in Fig.1.


STABILITY FOR DENSITY GRADIENT SCALE LENGTH

The growth rate is studied analytically as well as numerically. For the analytical case the growth rate is plotted against the number density gradient scale length for different values of that parameter; we see that as the density gradient scale length increases the growth rate decreases, showing the stabilizing effect for the considered system, which is exhibited in Fig.1.

STABILITY FOR FINITE RESISTIVITY

The growth rate is studied analytically as well as numerically. For the analytical case the growth rate is plotted against finite resistivity for different values of that parameter; we see that as the finite resistivity increases the growth rate decreases, showing the stabilizing effect for the considered system, which is exhibited in Fig.2.

[Figure 1: plot of number density gradient scale length vs growth rate]
Fig.1
Fig.2. Series1: plot of finite resistivity vs growth rate; λ = 1, f = 1, f′ = 1


Fig.3. Series 1: η = 3, λ = 1, f′ = 1
STABILITY FOR FLUCTUATIONS
The growth is studied analytically as well as numerically. For the analytical case the growth is plotted against the fluctuations for different values of the fluctuations. As the fluctuations increase, the growth rate decreases, showing the stabilizing effect for the considered system when the finite conductivity, current diffusivity and derivative of fluctuations remain constant, as exhibited in Fig.3.


Fig.3. Plot of fluctuations vs growth rate (Series 1).

Fig.4. Series 1: η = 3, λ = 1; variation of the fluctuation (s − ρ cos θ) with angle (in radians).

Fig.5. Series 1: η = 3, f = 1, f′ = 1; plot of current diffusivity vs growth rate.
The growth rate of the given (MCTC) HUB is thus stabilized.

STABILITY FOR CURRENT DIFFUSIVITY


The growth is studied analytically as well as numerically. For the analytical case the growth is plotted against the current diffusivity for different values of current diffusivity. As the current diffusivity increases, the growth rate decreases, thereby showing the stabilizing effect for the considered system, as exhibited in Fig.5.

Fig.6. Series 1: Tokamak; Series 2: MCTC; plot of growth rate vs wave number, for R = 1, θ = 60°, φ = 30°.


The growth varies as γ ∝ (1 + sin 3θ sin φ)^(−1/4).

Fig.6 compares the finite-conductivity-governed growth in the MAGNETIC CONFINEMENT TOKAMAK COLLIDER (MCTC) HUB and the Tokamak: we see that the growth rate of the MCTC HUB is more stabilized than that of the Tokamak for the parameter finite conductivity.
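The angular factor governing this comparison, (1 + sin 3θ sin φ)^(−1/4), can be evaluated directly. A small sketch (the sample angles θ = φ = 45° are chosen for illustration only):

```python
import math

def mctc_growth_factor(theta_deg, phi_deg):
    """Ratio gamma_MCTC / gamma_Tokamak = (1 + sin(3*theta)*sin(phi))**(-1/4)."""
    theta, phi = math.radians(theta_deg), math.radians(phi_deg)
    return (1.0 + math.sin(3.0 * theta) * math.sin(phi)) ** -0.25

# Whenever sin(3*theta)*sin(phi) > 0 the factor drops below 1, i.e. the
# MCTC HUB growth rate is reduced relative to the Tokamak.
assert mctc_growth_factor(45.0, 45.0) < 1.0
```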

PARTICLE TRAPPING IN HAZARIKA’S (BANANA) REGIME


Here we can observe that the particle trapping exhibited by Hazarika’s (banana) regime is broader than in the Tokamak case, as shown in Fig.7.

Fig.7. The particles are trapped in the shaded region of Hazarika’s (banana) regime (Series 1).


Fig.8. Comparison of Hazarika’s (banana) regime for the MCTC (HUB) (Series 2) and the Tokamak (Series 1), shown for θ = 45°, φ = 45°.
It is observed from the above graph that the confinement time required for the MCTC (HUB) is much less than in the Tokamak case.
Condition for particle trapping: if v² < 2rg, the motion of the particle is oscillatory and the particle never loses contact with the circular path. If v² > 2rg, the particle leaves the circle and then describes a parabolic path. If v² = 2rg, the motion of the particle becomes oscillatory and it goes onto the diametrical path, performing the banana (Hazarika’s) regime path.
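The trapping condition above can be written as a small classifier. A sketch, assuming the strict-inequality reading of the three cases (v² < 2rg oscillatory, v² > 2rg parabolic, v² = 2rg banana):

```python
def orbit_type(v, r, g=9.8):
    """Classify the particle path by comparing v**2 with 2*r*g."""
    v2, threshold = v * v, 2.0 * r * g
    if v2 < threshold:
        return "oscillatory (never leaves the circular path)"
    if v2 > threshold:
        return "parabolic (leaves the circle)"
    return "banana (Hazarika's) regime along the diametrical path"

assert orbit_type(1.0, 1.0).startswith("oscillatory")
assert orbit_type(10.0, 1.0).startswith("parabolic")
```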

APPLICATIONS:

In the MAGNETIC CONFINEMENT TOKAMAK COLLIDER (MCTC) HUB one can observe several cases which govern the system as the polarity of the magnetic field changes: (I) electricity generation, (II) rockets and missiles, (III) hybrid technology, (IV) computers and television.

Case I: FOR ELECTRICITY GENERATION


As the polarity of the magnetic field changes, the flow of plasma also changes. If in both tori the magnetic field is in the clockwise direction, there will be a collisional effect in the collider region of the MAGNETIC CONFINEMENT TOKAMAK COLLIDER (MCTC) HUB, which gives rise to more heat and friction and results in a slowing down of the plasma motion in the collider region. Afterwards the plasma becomes consistent in every cycle of flow, which can be observed in this region as well as in the MCTC HUB as a whole. Bhatia and Hazarika (1995) showed that in space, self-gravitating superposed plasmas flowing past each other stabilize the system. This can be useful for obtaining current density in enormous quantity, which is useful for the generation of electricity.

POWER LAW:
Here the definition of power is used to derive the power law.
Power = rate of change of work done: P = dW/dt
Work done = force × distance
The force is obtained from the pressure as F = p/(σA), where p is the pressure and σA is the cross-sectional area of the MAGNETIC CONFINEMENT TOKAMAK COLLIDER (MCTC) HUB, and γ = d/dt is the growth rate.
The work done is W = (pR/σA)(1 + sin 3θ sin φ), hence we get the power as
P = (γpR/σA)(1 + sin 3θ sin φ) in MW
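The power law above can be evaluated numerically. A minimal sketch (all input values below are illustrative placeholders, not device parameters from the text):

```python
import math

def mctc_power(gamma, p, R, sigma_A, theta_deg, phi_deg):
    """P = (gamma * p * R / sigma_A) * (1 + sin(3*theta)*sin(phi)), in MW."""
    theta, phi = math.radians(theta_deg), math.radians(phi_deg)
    return (gamma * p * R / sigma_A) * (1.0 + math.sin(3.0 * theta) * math.sin(phi))

# With phi = 0 the angular factor is 1 and P reduces to gamma*p*R/sigma_A.
assert mctc_power(2.0, 3.0, 1.0, 6.0, 45.0, 0.0) == 1.0
```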

Case II: ROCKET AND MISSILES


When we have a change in the polarity of the magnetic field, say in one torus it runs anti-clockwise and in the other clockwise, we observe that the flow of plasma is accelerated in the collider region of the MAGNETIC CONFINEMENT TOKAMAK COLLIDER (MCTC) HUB and may or may not become turbulent, which is useful for propulsion systems in rockets, missiles, spacecraft, etc. It is observed that the velocity drift in such a case is (1 + sin 3θ sin φ)² times that of the Tokamak case. Here the plasmas act as superposed flows, one over the other, hence enhancing the velocity of the resultant plasma, as observed by several researchers in the past.

Case III: HYBRID TECHNOLOGY


As in Case II, here we use the same type of system, resulting in a different type of technology prevalent in many places, known as hybrid technology. The accelerated neutrons that can be extracted from the MAGNETIC CONFINEMENT TOKAMAK COLLIDER (MCTC) HUB can be used in a fission chamber, where those neutrons are needed; for the fusion purpose the fast neutrons are waste products leading to heating of the plasma chamber, so they can be collected through neutron-collecting blankets and channelized to uranium- or plutonium-based nuclear/atomic reactors.

Case IV: COMPUTERS AND TELEVISION


The growth rate is measured per second (Hz), which gives us the speed of compiling or formation of plasma. If used in computer chips it will give us the processing speed of the microprocessor. We can enhance the speed of a normally used microprocessor by 1.5 times: if the speed is 2.8 GHz in the present condition, the microprocessor speed becomes 4.2 GHz, and its calculation speed becomes 4.2 gigaflops (4.2 × 10⁹ floating-point operations per second). If used in a supercomputer with a calculation speed of 1.73 teraflops, the resultant will be about 2.6 teraflops (2.6 × 10¹² floating-point operations per second). We can enhance the resolution of the computer monitor screen as well as that of plasma TVs, and the confinement time can be reduced with better resolution. The resolution is 24.75% better than the present best available computer monitor or plasma TV. One particular brand of plasma and LCD TVs projects that it can give a 1:10,000 resolution; in this particular case it will be 1:15,000. No blurred images: a crystal-clear screen can be viewed from a 120-degree wide angle without any diminishing of the image from a side viewing angle.
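The 1.5× speed-up arithmetic above can be checked in a few lines (the factor 1.5 is taken from the text; the helper name is an illustrative choice):

```python
def enhanced_speed(base, factor=1.5):
    """Projected speed after applying the 1.5x plasma-derived enhancement."""
    return base * factor

# 2.8 GHz -> 4.2 GHz; 1.73 teraflops -> about 2.6 teraflops.
assert round(enhanced_speed(2.8), 6) == 4.2
assert round(enhanced_speed(1.73), 6) == 2.595
```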

REFERENCES
120. Doyle, E.J., Groebner, R.J. et al.: Phys. Fluids B 3, 230 (1991)
121. Shaing, K.C. and Crume, E.C.: Phys. Rev. Lett. 63, 2369 (1989)
122. Itoh, S.-I. and Itoh, K.: Phys. Rev. Lett. 60, 2276 (1988)
123. Hassam, A.B.: Comments on Plasma Phys. Contr. Fusion 14, 275 (1991) and Phys. Fluids B 4, 485 (1992)
124. Sen, S. and Weiland, J.: Phys. Fluids B 4, 485 (1992)
125. Bhatia, P.K. and Hazarika, A.B.R.: Physica Scripta 52, 947 (1995)
126. Hazarika, A.B.R.: National Symposium on Plasma Science & Technology, Rajkot (1998)
127. Hazarika, A.B.R.: National Symposium on Plasma Science & Technology, Guwahati (2001)
128. Hazarika, A.B.R.: National Symposium on Plasma Science & Technology, Ranchi (2003)
129. Hazarika, A.B.R.: National Symposium on Plasma Science & Technology, Bhopal (2004)
130. Hazarika, A.B.R.: Proceedings of the National Symposium on Plasma Science & Technology, Cochin Univ. of Sci. & Technology, Cochin (2005)


131. Hazarika, A.B.R.: Proceedings of the 3rd Technical Meeting of the International Atomic Energy Agency on Theory of Plasma Instabilities, Univ. of York, York, UK (2007a); Nucl. Fusion Dec. (2007), edited by H.R. Wilson and R. Vann in conference series
132. Diamond, P.H.: Plasma Physics and Controlled Nuclear Fusion Research (IAEA, Vienna, 1992) 2, 97 (1992)
133. Hazarika, A.B.R.: submitted to Nuclear Fusion (2007b)
134. Hazarika, A.B.R.: submitted to Plasma Physics and Controlled Fusion (2007c)
135. Hazarika, A.B.R.: submitted to Plasma Sources Science and Technology (2007d)
136. Hazarika, A.B.R.: submitted to Physica Scripta (2007e)


Chapter 5

5.1. Applications of Plasma Physics:


Introduction – As mentioned in the introductory chapter of this treatise, interest in plasma physics revived after the Second World War, mainly due to its possible applications in the generation of power by the process of fusion in thermonuclear reactions. We have discussed that the actual problem standing as a barrier to the realisation of this goal is the problem of plasma confinement. Though various suggestions have been made and extensive work has been done on this particular problem of plasma confinement, the fact remains that we are confronted with a problem which surpasses in difficulty all the technical problems to which the scientific advances of the twentieth century have yet given rise. Further, it is not yet clear how the losses suffered by confined plasma, due to the instabilities which generally set in, can be reduced. It is expected that with the further development of new experimental techniques for improving vacuum technology, conformation of magnetic configurations to precise, well-defined geometries, elimination of stray fields and other associated precision improvements in design, it will be possible to derive useful power by the thermonuclear fusion process.
Besides these possible and important applications there are other fields where the properties of plasma can be utilized for applied results. The important field on which much interest has now centred is the problem of generation of power by the magnetohydrodynamic generator process.

5.2. Magnetohydrodynamic generator –

It is well known that the physical principle utilized for the generation of electric power is based upon Faraday’s law of electromagnetic induction. When a conductor is moved in a magnetic field, the electromotive force that develops across its two ends is proportional to dN/dt, where N is the flux per unit area of the magnetic field. In this process the mechanical energy of the rotor is converted to electrical energy. Instead of moving the conductor in a translational direction, the conductor is caused to rotate within the pole pieces of the magnet. In the case of a hydroelectric generator the energy required to maintain the rotation is provided by the gravitational motion of river water, whereas in the case of turbines the rotation is produced by the high-speed flow of steam or the combustion of fossil fuel.

If instead of a solid conductor a conducting fluid (gas or liquid) is allowed to flow through the magnetic field, then the system is called a magnetohydrodynamic generator. The idea was suggested by Faraday in 1831. With the development of plasma physics a conducting fluid became available, and the problem of generation of power by utilizing the ionised gas as a conducting fluid induced research work in various laboratories. Since the output power of thermionic converters is of the order of a few hundred watts, they cannot profitably compete with commercial power plants, whose general output is of the order of 500 MW and whose general efficiency is of the order of 40-45%. However, the possibility is there that these thermionic converters can be used in conjunction with nuclear fuel reactors, whose general efficiency is 30%, to increase that value to above 40%. Between 1938-44 Karlovitz designed a magnetohydrodynamic generator utilizing the products of combustion of natural gas as a working fluid with electron-beam ionization, but the output was small due to the low ionization attained.
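The Faraday-law principle behind the MHD generator can be illustrated with the simplest open-circuit estimate, emf = B·v·L, for a conducting fluid of velocity v crossing a transverse field B between electrodes a distance L apart. The numbers below are illustrative assumptions, not data from the text:

```python
def mhd_emf(B, v, L):
    """Open-circuit emf E = B * v * L (volts) of an ideal MHD channel."""
    return B * v * L

# e.g. a 2 T field, 1000 m/s conducting-gas flow, 0.5 m electrode spacing:
assert mhd_emf(2.0, 1000.0, 0.5) == 1000.0  # 1000 V
```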

5.3. Generation of microwaves utilizing high-density plasma:

Plasma is a very promising medium for amplifying and generating microwaves. The oscillatory property of plasma discovered by Langmuir and Tonks serves as a starting point for such studies. The natural electromagnetic oscillations which occur in a plasma are, however, incoherent and can find no practical use. Froome (1959, 1960) published some results on the production of millimetre waves in which the nonlinear voltage-current characteristics of a mercury arc were utilized for the harmonic generation of millimetre waves, just as the rectifying properties of silicon crystals were used by Townes and his co-workers for generating millimetre waves for microwave spectroscopy. Cyclotron radiation of an electron placed in a magnetic field has also been utilized by Twiss and Roberts (1958) for the generation of high-frequency electromagnetic oscillations.

Two methods have been suggested for utilizing the properties of plasma to generate microwaves. In the first method use is made of the interaction of a fast electron beam with plasma. Akhiezer and Fainberg (1951), as well as Lambert, indicated that it is possible to amplify high-frequency oscillations when the electron component of plasma interacts with a fast electron stream. Akhiezer and Fainberg (1951) postulated that the state of a boundless plasma is unstable. This results in the appearance of growing space-charge waves in the electron beam, which are accompanied by the excitation of longitudinal electrostatic waves of increasing amplitude in the plasma.
Another method that has been suggested is the utilization of the inertial property of plasma for tuning a resonant cavity. Let us consider a resonator, for example a resonant cavity in which a particular resonant mode is excited, and let the volume of the resonator be decreased by some mechanical means, say by pushing a piston.

5.4. Plasma diode:


An interesting possibility which has been suggested for the direct conversion of heat energy to electricity is a device called the plasma diode. It consists of two electrodes, one an emitter and the other a collector; the emitter is heated to release electrons by the thermo-emission process and the collector is cooled so that electrons will condense on it. Let us denote the emitter by E and the collector by C, and let Φe and Φc denote the work functions of the respective electrodes. In practice the emitter is heated until the electrons leave it and condense on the collector. The electrons are then left with a potential Vc = Φe − Φc, which drives a current through an external load. The source of this current is the emitter, and if a small amount of current is drawn, which means that the emitter is moderately heated and the device works in vacuum, then the distribution of potential is almost linear because no space charge has accumulated. As is known for all vacuum diodes, the current is always limited by the space charge; the actual distribution of potential is as shown. The presence of this space charge limits the value of the current which can be drawn from the diode. One of the methods by which space charge can be reduced is to use closely spaced electrodes, and the usual distance between the two does not exceed 10⁻⁴ m. In such devices it has been found that the output current is smaller than the saturation current by as much as 10%, and such converters are useful for small output-power devices not exceeding 5 watts. Due to the high temperature difference between the emitter and the collector, placing the two electrodes close together becomes a difficult technical problem.
An alternative method to neutralise the space charge is to introduce positive ions within the diode, which is usually done by introducing caesium vapour within it. Each caesium atom has one valence electron and a spherically symmetrical electronic charge. The caesium atoms have a tendency to be adsorbed on the electrode material, which is usually tungsten. The valence electron of the caesium atom on the electrode surface is then shared by the metal inside the electrode. When the temperature is increased, the kinetic energy of the layer of caesium atoms is increased and the caesium atoms try to leave the cathode surface. The valence electrons are pulled by two forces: the force of attraction of the caesium atoms and that of the tungsten surface. As the ionization potential of the caesium atom is 3.87 eV, the electron detaches itself from the caesium atom but is kept attached to the tungsten surface, as the work function of tungsten is higher (4.52 eV).
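The output potential of the diode, Vc = Φe − Φc, is a one-line computation. A sketch (the collector work function of 1.81 eV for a caesiated surface is a hypothetical value chosen for illustration; only the tungsten figure of 4.52 eV appears in the text):

```python
def diode_output_voltage(phi_emitter, phi_collector):
    """Output potential Vc = phi_e - phi_c (in volts, work functions in eV)."""
    return phi_emitter - phi_collector

# Tungsten emitter (4.52 eV) with a hypothetical low-work-function collector:
assert diode_output_voltage(4.52, 1.81) > 0
```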

5.5. Nuclear Physics: The nuclear atom


We have frequently made use of the fact that every atom contains a massive, positively charged nucleus, much smaller than the overall dimensions of the atom but nevertheless containing most of the total mass of the atom. It is instructive to review the earliest experimental evidence for the existence of the nucleus, the Rutherford scattering experiments. The experiments were carried out in 1910-1911 by Sir Ernest Rutherford and two of his students, Hans Geiger and Ernest Marsden, at Manchester, England.
The electron had been discovered in 1897 by Sir J.J. Thomson, and by 1910 its mass and charge were quite accurately known. It had also been well established that, with the sole exception of hydrogen, all atoms contain more than one electron. Thomson had proposed a model of the atom consisting of a relatively large sphere of positive charge (about 2 or 3 × 10⁻⁸ cm in diameter) within which were embedded, like plums in a pudding, the electrons.
What Rutherford and his co-workers did was to project other particles at the atoms under investigation, and from observations of the way in which the projected particles were deflected or scattered, they drew conclusions about the distribution of charge within the target atoms.
At this time the high-energy particle accelerators now in common use for nuclear physics research had not yet been developed, and Rutherford had to use as projectiles the particles produced in natural radioactivity, to be discussed later in this chapter. Some radioactive disintegrations result in the emission of alpha particles; these particles are now known to be identical with the nuclei of helium atoms, each being a helium atom stripped of the two electrons normally present in a neutral helium atom. Alpha particles are ejected from unstable nuclei with speeds of the order of 10⁷ m/s, and they can travel several centimetres through air, or on the order of 0.1 mm through solid matter, before they are brought to rest by collisions.

A radioactive source at the left emits alpha particles. Thick lead screens stop all particles except those in a narrow beam defined by small holes. The beam then passes through a thin metal foil (gold, silver and copper were used) and strikes a plate coated with zinc sulphide. A momentary flash or scintillation can be observed visually on the screen whenever it is struck by an alpha particle, and the number of particles that have been deflected through any angle from their original direction can therefore be determined.

According to the Thomson model, the atoms of a solid are packed together like marbles in a box. The experimental fact that an alpha particle can pass right through a sheet of metal foil forces one to conclude, if this model is correct, that the alpha particle is capable of actually penetrating the spheres of positive charge. Granted that this is possible, we can compute the deflection it would undergo. The Thomson atom is electrically neutral, so outside the atom no force would be exerted on the alpha particle. Within the atom, the electrical force would be due in part to the electrons and in part to the sphere of positive charge. However, the mass of an alpha particle is about 7400 times that of an electron, and from momentum considerations it follows that the alpha particle can suffer only a negligible scattering as a consequence of forces between it and the much less massive electrons. It is only interactions with the positive charge, which makes up most of the atomic mass, that can deflect the alpha particle.
When electric charge is distributed uniformly inside a spherical volume, the electric field at points inside the sphere is proportional to the distance from the centre of the sphere; this can be proved using Gauss’s law. Thus the positively charged alpha particle inside the Thomson atom would be repelled from the centre of the sphere with a force proportional to its distance from the centre, and its trajectory can be computed for any initial direction. On the basis of such calculations, Rutherford predicted the number of alpha particles that should be scattered at any angle with respect to the original direction.

The experimental results did not agree with the calculation based on the Thomson atom. In particular, many more particles were scattered through large angles. Rutherford concluded that the positive charge, instead of being spread through a sphere of atomic dimensions (2 or 3 × 10⁻⁸ cm), was concentrated in a much smaller volume, which he called a nucleus. When an alpha particle approaches the nucleus, the entire nuclear charge exerts a repelling effect on it down to extremely small separations, with the consequence that much larger deviations can be produced.
Rutherford again computed the expected number of particles scattered through any angle, assuming an inverse-square law of force between the alpha particle and the nucleus of the scattering atom. Within the limits of experimental accuracy, the computed and observed results were in agreement down to distances of approach of about 10⁻¹² cm. These experiments thus indicate that the size of the nucleus is no larger than of the order of 10⁻¹² cm.
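The ~10⁻¹² cm figure can be checked with the classical distance of closest approach for a head-on collision, d = 2Z·ke²/E. A sketch (the 5 MeV alpha energy is an assumed typical value, not stated in the text):

```python
def closest_approach_cm(E_alpha_MeV, Z_target):
    """Head-on distance of closest approach d = 2 * Z * k*e^2 / E.

    The alpha carries charge 2e; k*e^2 = 1.44 MeV*fm (Coulomb constant
    times elementary charge squared); 1 fm = 1e-13 cm.
    """
    ke2_MeV_fm = 1.44
    d_fm = 2.0 * Z_target * ke2_MeV_fm / E_alpha_MeV
    return d_fm * 1e-13

# A 5 MeV alpha on gold (Z = 79) stops at roughly 4.6e-12 cm -- the
# order-of-magnitude 1e-12 cm bound quoted above.
assert 1e-12 < closest_approach_cm(5.0, 79) < 1e-11
```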

5.2. Applications in Microwave

The Double Tokomak Collider (DTC), Magnetic Confinement Tokomak Collider (MCTC) hub and Duo Triad Tokomak Collider (DTTC) hub can be used, with the help of nanotechnology such as the nano-torii, in microwave applications.

Materials and Methods

The Antenna Array:

Diagrams of the front radiating (patient) side and back (feedlines) side of the applicator can be seen in figures 9-10. The applicator is an array of 27 Dual Concentric Conductor (DCC) microstrip patch antennas printed on very thin (9 mil) and flexible printed circuit board material. A microstrip feedline network feeds the middle of all four sides of the powered patch, which is capacitively coupled to the radiating patch.

The array has overall dimensions of 20.8 by 43.2cm and can treat an area of up to 13 by 43cm.
Figure 9 shows the front or groundplane side of the applicator. The floating patch and large
groundplane can be clearly seen. Figure 10 shows the backside or feedlines side of the
applicator. On the feedline side of the applicator can be seen the powered patch, the microstrip
feedline matching network and the miniature on-board co-axial PMMX connector.

Figure 1 Complete diagram of radiating side of applicator showing scale.


Figure 2 Complete diagram of feedlines side of applicator showing scale.

The geometry and electric field distribution of the DCC can be seen in figure 11 below.


Figure 3 Side view of DCC antenna geometry showing near electric field lines.

The electric field from the radiating patch terminates on the groundplane through the gap
between patch and groundplane. It has been shown that this geometry produces a near field
which is dominated by components that are predominantly parallel to the plane of the radiating
patch above and near the gap and normal to the patch over its center. The antennas are
specified by the size of the rectangular hole in the groundplane and the width of the gap. In the
studied array the aperture size is 3cm with a gap size of 2.5mm.

Network Analyzer:

In the preceding section the methods used to match a load to a microstrip line were presented.
What was not discussed is how exactly the load impedance is determined. To determine the input
impedance the Swiss army knife of microwave measurement equipment, the vector network
analyzer with S parameter test set was used.

The HP 8753C vector network analyzer used for this project is shown below.


Figure 4 Hewlett-Packard 8753C vector network analyzer with 85047A S parameter test set.

The vector network analyzer processes the transmitted and reflected waves from a network to
give readings of input impedance, VSWR, return loss and many other network characteristics.
Because it uses a mathematical error correction/calibration technique and preserves both
magnitude and phase information from the signal this type of instrument can make very accurate
circuit measurements, even at microwave frequencies.

Network Analyzer Calibration Method

A short length of high quality coaxial cable is connected to the analyzer output. At the end of this cable is attached, in sequence, a high quality 50 ohm load, a short circuit plug and a calibrated open.

The analyzer has stored, in its memory, mathematical models of these standard loads. The
analyzer sends out a signal and reads the reflections from each load. It can then mathematically
subtract out all the discontinuities between the analyzer output and the end of the cable. If the
cable is of high enough quality, its properties do not change when it is flexed. It can then be
attached to the unknown circuit component and the properties of the circuit measured without
distortion by the cable.

Time Domain Reflectometry

The 8753C analyzer also has the ability to do a Fourier transform of the frequency data into the
time domain, to provide the time domain response of the network. This method is called Time
Domain Reflectometry (TDR). The analyzer sends out a broadband step function and performs a
fast Fourier transform on the reflections to recover information on the reflections as a function of
time after the impulse. Taking into account the speed of light on the transmission line, the circuit
information may be displayed as a function of distance down the line. The resolution in time is
directly related, through the FFT, to the bandwidth of the frequency range in the step function.
The broader the range, the higher the resolution. The highest bandwidth range possible with the
8753C analyzer is from 39 MHz to 5.99 GHz. With this range, the smallest distance between two
discontinuities that the analyzer can distinguish is 10 millimeters.
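The bandwidth-resolution relation described above can be estimated roughly as v/(2·bandwidth), where v is the propagation speed on the line. A sketch (the 0.66 velocity factor for solid-PTFE coax is an assumption, and the formula is a common rule of thumb, not the 8753C's exact algorithm):

```python
def tdr_resolution_m(f_low_hz, f_high_hz, velocity_factor=0.66):
    """Rough two-discontinuity resolution of a step TDR: v / (2 * bandwidth)."""
    c = 299_792_458.0  # speed of light in vacuum, m/s
    bandwidth = f_high_hz - f_low_hz
    return velocity_factor * c / (2.0 * bandwidth)

# The 39 MHz - 5.99 GHz span gives a resolution of a couple of centimetres,
# the same order as the ~10 mm figure quoted for the 8753C.
assert 0.005 < tdr_resolution_m(39e6, 5.99e9) < 0.05
```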


TDR mode is most useful when the analyzer is set to display the real component of the signal. In
frequency mode, when the display is set to real only, the real part of the impedance as a function
of frequency is plotted on a linear axis. In TDR mode

Figure 5 Standard TDR graph of microwave antenna showing how the impedance varies
from the ideal 50 ohms as a function of distance.

(see figure 13) the horizontal axis is distance. The vertical axis is a unitless quantity that
represents the strength of the reflection. The vertical scale runs from +1000 milli-units to -1000
milli-units. +1000 is said to be an open, -1000 is said to be a short and 0 on this scale is 50
Ohms. By looking at the plot, it can be seen where reflections are being generated, if the
discontinuities are inductive or capacitive in nature and what the impedance is at a point in
relation to a perfect open or short. This mode is very useful for checking solder joint connections
at the coaxial cable to microstrip transitions. Conditions where the center pin is shorted to the
ground plane or not sufficiently well soldered to the microstrip are easily spotted. It can also
characterize how clean a connection has been made by displaying the magnitude of the reflection
from that point.

Gating

Gating is another useful feature that is used often. The analyzer has the ability to set a gate around a region (in either frequency space or distance/time) and ignore all other information that is not contained within that measurement window.

Standard Measurement Procedure

The standard procedure for network-analyzer measurements was as follows. The unit was turned
on and allowed to warm up for at least five minutes. A short length of coaxial cable was
connected to the analyzer input. The analyzer was calibrated out to the end of the test cable. The
end of the cable was then attached to the RF connector jack on the PCB array edge. The
analyzer was put in TDR mode. The gate was set around the distance region of interest. The
analyzer was then switched into frequency mode and measurements were made. It is believed
that performing the measurements in this way improved measurement accuracy significantly by
reducing the noise and unwanted reflections to a minimum.


All antenna measurements were performed in exactly the same way. Because of the radiation
pattern characteristics of the DCC patch antenna, it is sensitive to the lossy muscle medium it is
looking into. If the loading changes, so will its edge impedance and therefore its measured
characteristics. The antennas tested all looked into the same load, consisting of a distilled water
bolus and muscle phantom. In this way, we tried to recreate the load the antenna would see in a
clinical situation.

Network Analyzer Measurements:

All microwave transmission line matching algorithms start with a known load. In our case the load
was not known originally and had to be determined either theoretically or experimentally before a
matching network could be designed. The load, in our case, is the edge impedance of the
powered patch. An extensive search of the literature for an analytical solution was conducted. I
found that there does not exist a theoretical formula for the unique geometry of the DCC antenna.
Standard formulas are available for the edge impedance of rectangular microstrip patch over an
infinite groundplane. A few commercial microwave analysis programs were also investigated but
they would not produce a stable-believable analysis, due to limitations on computer memory for
physically large and complex antenna geometry. For these reasons, it was decided to
experimentally determine the edge impedance of the radiating patch using the vector network
analyzer.

Finding the Edge Impedance

The network analyzer uses reflections from the discontinuity of interest to make its measurements. For this
reason every care must be taken to minimize all other reflections. The matching network (see figure 16)
has many discontinuities, including coax-to-microstrip transitions, bends, step changes in width and two T-
junctions. If we were to determine the patch edge impedance by looking into the beginning of the
network, the results would be contaminated by these spurious reflections, though this method is fine for
determining the overall properties of the feedline-patch network. To eliminate these reflections a test board
was improvised. An older, professionally made antenna array (see figure 15)


Figure 6 Older non-optimized antenna array used to determine the correct edge
impedance.

with a non-optimized microstrip feedline network was altered to give as accurate a reading of
antenna patch edge impedance as possible. To minimize reflections, networks with the fewest
bends between the 2nd T-junction and the on-board coaxial connector were used. Then, using a
Dremel tool, one arm of each T-junction was ground away, as cleanly as possible. This reduced
the network to one continuous length of microstrip with six bends, feeding the patch on one
side only (see figure 14). For this configuration, the input impedance signal was cleaner than
before but still contaminated by multiple unwanted reflections. To remove these reflections, the
analyzer was set to TDR mode and the gate was placed just over the area where the feedline
meets the patch. In this way the analyzer mathematically ignores all reflections except
those coming from the gated region. The analyzer was then returned to frequency mode and
accurate measurements of the patch edge input impedance were made.

Microwave Network Parameters

The analyzer was calibrated and connected to the antenna array as described above. The array
was then attached to the bolus and the bolus was firmly attached to the muscle phantom, making
sure no air gaps existed between the array, bolus and load. The TDR mode was used to set the
gating. The gate was set so that only reflections starting from the PMMX connector to the patch
edge were considered. The analyzer was then returned to frequency mode and the microwave
parameters were measured. The measured parameters are: input impedance, VSWR and return
loss. Each antenna was characterized separately and this information was entered into an Excel
spreadsheet. The spreadsheet calculated the overall averages and deviations of the above
parameters by row and column.
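
The row-by-column bookkeeping described above is simple enough to sketch in a few lines of Python; the 3x3 grid of VSWR values below is purely hypothetical placeholder data, not measured results.

```python
from statistics import mean, stdev

# Hypothetical 3x3 grid of measured VSWR values, one per antenna
# element; the real characterization used an Excel spreadsheet.
vswr = [
    [1.35, 1.42, 1.38],
    [1.50, 1.29, 1.44],
    [1.41, 1.37, 1.33],
]

row_stats = [(mean(r), stdev(r)) for r in vswr]        # average, deviation per row
col_stats = [(mean(c), stdev(c)) for c in zip(*vswr)]  # average, deviation per column
overall_mean = mean(v for row in vswr for v in row)    # overall array average
```

The same pattern extends directly to input impedance and return loss columns.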


Feedline Network Design:

With a known input impedance a suitable matching network could be designed. A Mathematica
notebook was written to help in the calculations (see appendix 1). An explanation of the general
matching techniques used is presented first, and then the slight variations investigated on different
columns of the array are described. The design for this test array can be seen in figure 9. For
the particular geometry of the test array it was found that the edge impedance of the microstrip
patch was approximately 46 Ohms. The standard matching network used can be seen in figure 15.

Figure 7 Diagram of standard matching network showing the impedances at different
points and the microwave compensation techniques used.

The calculations were made for a microstrip width that would result in a characteristic impedance
of 46 Ohms. The distances a and b were set to be at least three times the width, and the right-
angle bends are microwave mitered. The base width of the 1st T-junction was calculated to give
an even 3dB split in power and to minimize reflections. The bases of both 1st T-junctions continue
on without a change in width to form the arms of the 2nd T-junction. The radius of the curve c was
made as broad as possible within the space constraints. It was determined from previous
prototypes that when the curve c was made too sharply it was a source of unwanted radiation
and insertion loss. The distance d was constrained to be at least 3W of the broader 23-Ohm line


that forms the arms of the 2nd T-junction. In this standard case, there were no specific constraints
on the length of line between the patch and the 1st and 2nd T-junctions. The base width of the
2nd T-junction was calculated to give an impedance of 11.5 Ohms. The base of the 2nd T-junction
is very short. A quarter wave transformer is then used to match the 11.5 Ohm 2nd T-junction
input impedance to the 50 ohm microstrip line. The quarter wave transformer has a characteristic
impedance of 24 Ohms and a length of 5cm. After the quarter wave transformer the microstrip
line runs all the way to the PMMX connector with as few bends as possible and maintaining at
least a 3W distance to the nearest microstrip lines to minimize cross coupling between adjacent
lines. The length of feedline from the end of the quarter wave transformer to the PMMX connector
was constrained. The longest run was designed first then all following feedline runs were
constrained to be the same length. The length was fixed because, as was shown in the
microwave theory section, with a mismatched load the impedance varies with distance from the load.
The shorter runs were lengthened with short serpentine runs called meander lines. With all the
feedlines having the same length, then theoretically they will all have the same input impedance.

The preceding standard optimization was done on four of the nine columns. On the remaining five
columns additional matching techniques were investigated.

In column one the length between the patch and 2nd T-junction was constrained to be 1/8th of a
wavelength (see figure 16).

Figure 8 Diagram of standard matching network plus additional 1/8th wavelength matching
section.

This 1/8th section was used to force the feedline/patch interface to be an anti-node in the standing
wave pattern. If the interface could be forced to be a voltage anti-node, the voltage would be a
maximum and the maximum amount of power would be delivered to the patch.


In column four the distance between the 1st and 2nd T-junctions was constrained to be 1/4
wavelength (see figure 17).

Figure 9 Diagram of standard matching network plus additional 1/4 wavelength matching
section.

This additional quarter wave transformer was used to eliminate reflections between the 1st and 2nd
T-junctions.

The sixth, eighth and ninth columns used both the quarter wave transformer between the T-
junctions and the 1/8th section between the 1st T-junction and the patch (see figure 18).


Figure 10 Diagram of standard matching network plus additional 1/8th and 1/4 wavelength
matching sections.

These techniques were used concurrently in the hope that their effects would be additive and
produce an aperture with improved matching and superior radiation characteristics.

Electric Field/SAR Scans

The computer-controlled three-dimensional electric-field-probe scanning device used to
characterize the electric field radiating into homogeneous muscle-tissue-equivalent liquid
phantom media from the antenna array can be seen in figure 19.


Figure 11 Experimental setup for the mapping of the electric field at depth in muscle
equivalent liquid phantom.

The antenna arrays to be tested were first attached to a de-ionized, de-gassed water bolus of
thickness 0.5-1.5cm. The bolus/array was then inserted into a large bag, constructed of the same
polyurethane material as the water bolus, with Plexiglas backing board to hold the flexible array
flat during the electric field scans. The backing board is printed with an orthogonal grid that the
bolus/array is aligned with. This assembly is then inserted into the liquid muscle scan tank and
leveled to ensure the array is not skewed relative to the scanning apparatus.

The scanning apparatus consists of an electric field probe, a three-axis computer controlled
servomotor motion system, an input/output card and a computer running the data acquisition
software. The electric field probe used is a Narda model 8010 miniature 3-axis probe. This probe
consists of three orthogonal diode dipole sensors housed in the tip of a miniature wand. Three
low-level DC signals, proportional to the squares of the field components Ex², Ey² and Ez², are
transmitted to a low-noise differential summing amplifier via high-resistance leads and then to the
computer for digitization. The amplifier can be set to amplify all three components of the electric
field, one component, or any
combination of the three required. The squares of all three electric field components were
summed so that the total electric field squared could be recorded.

After the board/bolus is inserted, the electric field probe is positioned next. The probe is mounted
at a right angle on the end of a long Teflon rod. The probe is positioned over the center of an
antenna, making sure that it is orthogonal to the plane of the array. In this way it is certain that all
elements are orthogonal to each other and the probe will be correctly centered in the array.

The data acquisition program can now be started and the scanning parameters set. With this
system we can scan in planes parallel to the array surface, at any depth in muscle phantom


greater than the minimum 3.5mm distance, which represents the distance from the center of the 3
orthogonal dipole sensors to the probe tip. In practice, the antennas are scanned in parallel
planes to the antenna surface, 5mm and 10mm away from the surface on a 2.5mm grid. A
vertical cross section can also be scanned to record information on how the electric field varies
with depth. The scanning program creates data files that contain two-dimensional arrays of DC
voltages as a function of position. Another program is used to convert these values to SAR as a
function of position. The commercial data visualization software Surfer is used to plot contour and
surface maps of the experimental SAR data.
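
The conversion from probe voltages to a normalized SAR map and the 50%-contour bookkeeping can be sketched as below; the 3x3 grid and its values are hypothetical stand-ins for the real scan files, which are handled by the in-house conversion program and Surfer.

```python
# Hypothetical grid of probe DC voltages (proportional to |E|^2, and
# hence to SAR) sampled on the 2.5mm scan grid described above.
grid = [
    [0.1, 0.3, 0.2],
    [0.4, 1.0, 0.5],
    [0.2, 0.6, 0.3],
]

peak = max(v for row in grid for v in row)
norm = [[v / peak for v in row] for row in grid]   # SAR as fraction of maximum
in_50 = [[v >= 0.5 for v in row] for row in norm]  # inside the 50% SAR contour
coverage = sum(v for row in in_50 for v in row) / sum(len(r) for r in grid)
```

The `coverage` fraction is a crude stand-in for judging whether the 50% level extends over the aperture.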

The above measurements/characterizations were performed on the most recent optimized array
and on several older non-optimized arrays. The newer optimized arrays are professionally
constructed by the PCB manufacturer Labtech LTD. The older non-optimized arrays were
manufactured in-house with a PCB home hobbyist kit. The older arrays are non-optimized in the
sense that no consideration was given to microwave matching techniques. The older arrays (see
figure 14 for an example) have the same basic feedline shape: one line splits into four via two T-
junctions. The width of the feedlines are the same throughout, and this width was constrained by
manufacturing concerns, not matching concerns. With the home hobbyist technique used it was
found nearly impossible to consistently produce quality lines of width less than .4mm. With the
given PCB geometry, a microstrip line of .4mm would have a characteristic impedance of 12
Ohms, a poor match for the 50 Ohm coaxial cable and PMMX connector. At both T-junctions the
12 Ohm feedline sees an input impedance looking into the base of only 6 Ohms. At the
patch/feedline interface, the 12 Ohm line sees a load of ~46 Ohms. It was thought that all of
these mismatches must produce an antenna that is far from optimized. The older non-optimized
arrays were used as a control and their measured parameters were used to judge the success of
the different optimization techniques.

Microwave Theory

In this section the important microwave engineering concepts and nomenclature are summarized
with particular attention given to microwave transmission line matching techniques.

The range of the electromagnetic spectrum from 300 MHz to 300 GHz is commonly referred to as
the microwave range. For applications with wavelengths from 1 meter to 1 millimeter, low-
frequency circuit analysis techniques cannot be used; we must use transmission-line theory. In
transmission-line theory, the voltage and current along a transmission line can vary in magnitude
and phase as a function of position.

Many different types of microwave transmission lines have been developed over the years. In an
evolutionary sequence from rigid rectangular and circular waveguide, to flexible coaxial cable, to
planar stripline to microstrip line, microwave transmission lines have been reduced in size and
complexity. The microstrip transmission line is the technology employed in the current
hyperthermia applicator studied.

For fields having a sinusoidal time dependence and steady-state conditions, a field analysis of a
terminated lossless transmission line results in the following relations:


Figure 1 Diagram of lossless transmission line with load showing incident, reflected and
transmitted waves.

If an incident wave of the form V0+ e^(-jβz), where β is the phase constant or wave number given by
β = 2π/λ, is incident from the -z direction, then the total voltage on the line can be written as a
sum of incident and reflected waves:

V(z) = V0+ e^(-jβz) + V0- e^(+jβz)

The total current on the line is

I(z) = (V0+/Z0) e^(-jβz) - (V0-/Z0) e^(+jβz)

where Z0 is the characteristic impedance of the microstrip line, that is, the impedance the
transmission line would have if it were infinitely long or ideally terminated. The incident wave has
been written in phasor notation and the common time-dependence factor e^(jωt) has not been written.

The amplitude of the reflected voltage wave normalized to the amplitude of the incident voltage
wave is known as the voltage reflection coefficient, Γ:

Γ = V0-/V0+ = (ZL - Z0)/(ZL + Z0)

where ZL is the load impedance.

The total voltage and current waves on the line can then be written in terms of the reflection
coefficient as

V(z) = V0+ [e^(-jβz) + Γ e^(+jβz)]
I(z) = (V0+/Z0) [e^(-jβz) - Γ e^(+jβz)]


From the previous equations we see that the voltage and current on the line are a superposition
of an incident and reflected wave. If the system is static, i.e. if V0+ and Γ are not changing in time,
the superposition of waves will also be static. This static superposition of waves on the line is
called a standing wave.

Because of the complicated shape of this standing wave, the voltage will vary with position along
the line, from some minimum value Vmin to some maximum value Vmax. The ratio of Vmax to Vmin
is one way to quantify the mismatch of the line. This mismatch is called the standing wave ratio (SWR) or
voltage standing wave ratio (VSWR) and can be expressed as:

SWR = Vmax/Vmin = (1 + |Γ|)/(1 - |Γ|)

The SWR is a real number such that 1 ≤ SWR ≤ ∞, with SWR = 1 for a perfect match. By
definition, impedance, characteristic or otherwise, is the ratio of the voltage to the current at a
particular point on the line. The standing waves cause the impedance to fluctuate as a function of
distance from the load. The variation in impedance along the transmission line caused by the
line/load mismatch can be written

Zin(l) = Z0 (1 + Γ e^(-2jβl))/(1 - Γ e^(-2jβl))

where l is the distance from the load. If we substitute the expression for Γ in terms of the
impedances, the generalized input impedance of the load plus transmission line simplifies to:

Zin(l) = Z0 (ZL + jZ0 tan βl)/(Z0 + jZL tan βl)

With this equation the impedance anywhere along the line can be calculated if the load
impedance and characteristic impedance are known.

In the most basic sense, then, if the load impedance equals the line impedance, the reflection
coefficient is zero and the load is said to be matched to the line. All of the microwave impedance
matching techniques can be reduced to this simple idea: minimize the reflection of the incident
wave to as nearly zero as possible.

When the load is mismatched to the line and thus there is a reflection of the incident wave at the
load, the power delivered to the load is reduced. This loss is called return loss (RL) and is equal
(in dB) to

RL = -20 log10 |Γ|
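
These reflection relations are easy to exercise numerically. The sketch below (helper names are ours) uses the 50 Ohm line and the ~46 Ohm patch edge load discussed earlier to compute Γ, VSWR and return loss, plus the input impedance an electrical length βl from the load:

```python
import cmath
import math

Z0 = 50.0   # characteristic impedance of the feedline (Ohms)
ZL = 46.0   # patch edge load (Ohms), per the measurements above

gamma = (ZL - Z0) / (ZL + Z0)               # voltage reflection coefficient
swr = (1 + abs(gamma)) / (1 - abs(gamma))   # VSWR, 1 <= SWR <= infinity
rl_db = -20 * math.log10(abs(gamma))        # return loss in dB

def z_in(zl, z0, beta_l):
    """Input impedance looking toward the load from electrical length beta_l."""
    t = cmath.tan(beta_l)
    return z0 * (zl + 1j * z0 * t) / (z0 + 1j * zl * t)
```

At βl = 0 this returns the load itself, and at βl = π/2 (a quarter wavelength) it returns Z0²/ZL, consistent with the quarter-wave transformer discussed in the next section.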


This ends the summary of the relevant general microwave engineering concepts. Some relations
specific to microstrip will now be discussed before moving on to discuss the compensation of
microstrip discontinuities.

The geometry of a typical microstrip line can be seen in figure 4.

Figure 2 Side view of microstrip showing actual and effective geometry.

Starting with a two-layer PCB, the top layer is chemically etched away to leave copper traces of
width W, separated from the groundplane by a dielectric substrate of some thickness d and
relative permittivity εr.

Because of the inhomogeneous dielectric geometry, the microstrip line cannot support a true TEM
wave, for the following reason: a microstrip line has most of its electric field concentrated in the
region between the line and the groundplane, while a small fraction propagates in the air above.
Because the speed of light is different in air and dielectric, the boundary-value conditions at


the air-dielectric interface cannot be met with a pure TEM wave, and the exact fields constitute a
hybrid TM-TE wave. Because the dielectric substrate is electrically very thin (d ≪ λ) for this
application, the fields are quasi-TEM. Because the fields are quasi-TEM, good approximations for
the phase velocity, propagation constant, and characteristic impedance can be obtained from the
static solution.

The phase velocity in microstrip line is given by

vp = c/√εe

and the propagation constant is given by

β = k0 √εe

where εe is the effective dielectric constant and is given by

εe = (εr + 1)/2 + ((εr - 1)/2) · 1/√(1 + 12d/W)

The effective dielectric constant εe is the dielectric constant of an equivalent homogeneous
medium that replaces the air and dielectric layers.

The characteristic impedance of a microstrip line can be calculated, given the width W and
substrate thickness d, with the result

Z0 = (60/√εe) ln(8d/W + W/4d)                                    for W/d ≤ 1
Z0 = 120π / [√εe (W/d + 1.393 + 0.667 ln(W/d + 1.444))]          for W/d ≥ 1
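
As a numerical sketch of the standard closed-form approximations above (the function name and sample values are ours, not from the measurements):

```python
import math

def microstrip_z0(W, d, er):
    """Quasi-static effective dielectric constant and characteristic
    impedance of a microstrip line of width W on substrate thickness d."""
    e_eff = (er + 1) / 2 + (er - 1) / 2 / math.sqrt(1 + 12 * d / W)
    u = W / d
    if u <= 1:
        z0 = 60 / math.sqrt(e_eff) * math.log(8 / u + u / 4)
    else:
        z0 = 120 * math.pi / (math.sqrt(e_eff)
                              * (u + 1.393 + 0.667 * math.log(u + 1.444)))
    return z0, e_eff
```

Widening the trace lowers Z0, which is why the low-impedance sections of the feed network are the broad ones.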

If all microstrip based circuits consisted of a proper width straight feedline terminating in a load,
there would not be much need to worry about compensating for discontinuities. Even in this ideal
case, the transition from microwave source to microstrip line and from the microstrip to load can
be the source of large reflections. Typical microstrip discontinuities are junctions, bends, step
changes in width and the coaxial cable to microstrip junction. If these discontinuities are not
compensated, they introduce parasitic reactances that can lead to phase and amplitude errors,
input and output mismatch, and possibly spurious coupling. The strength of a particular
discontinuity is frequency dependent: the higher the frequency, the larger the discontinuity.
The following typical discontinuities and their compensation are discussed in descending order
of importance.

Impedance Mismatches

Quarter-Wave Transformer


A general mismatch in impedance between two points on a transmission line can be compensated with a
quarter-wave transformer. The quarter-wave transformer is a very useful matching technique that also
illustrates the properties of standing waves on a mismatched line. First, an impedance-based explanation of
how a quarter-wave transformer works will be described; then a more intuitive explanation that is
analogous to destructive interference in thin films will be discussed. A quarter wave transformer in
microstrip is shown in fig 5.

Figure 3 Diagram of quarter wave impedance transformer showing multiple reflections.

In a quarter-wave transformer, we want to match a load resistance RL to the characteristic
feedline impedance Z0 through a short length of transmission line of unknown length and
impedance Z1. The input impedance looking into the matching section of line is given by:

Zin = Z1 (RL + jZ1 tan βl)/(Z1 + jRL tan βl)

If we choose the length of the line l = λ/4, then βl = π/2; divide through by tan βl

and take the limit as βl → π/2 to achieve

Zin = Z1²/RL

For a perfect transition with no reflections at the interface between microstrip and load, Γ = 0, so
Zin = Z0, and this gives us a characteristic impedance of

Z1 = √(Z0 RL)

which is the geometric mean of the load and source impedances. With this geometry, there will

be no standing waves on the feedline although there will be standing waves on the matching

section. Why was the value of λ/4 chosen? In fact, any odd multiple (2n + 1)λ/4 will
also work.
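
A minimal numeric check of this result, using the 50 Ohm feed and 11.5 Ohm 2nd T-junction impedances from the design section, reproduces the roughly 24 Ohm transformer used there:

```python
import math

def quarter_wave_z(z0, rl):
    """Characteristic impedance of a quarter-wave matching section:
    the geometric mean of the feed and load impedances."""
    return math.sqrt(z0 * rl)

z1 = quarter_wave_z(50.0, 11.5)   # matching section impedance, ~24 Ohms

# Looking into the lambda/4 section, Zin = Z1^2 / RL, which should
# recover the 50 Ohm feed impedance.
z_seen = z1 ** 2 / 11.5
```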


The astute reader may recognize these conditions as similar to those found in destructive
interference in thin films. In thin films, if light is incident on mediums with progressively higher
index of refraction, it will undergo a 180 degree phase change at both interfaces. For there to be

destructive interference, the path length difference must be λ/2. The microstrip quarter-wave

transformer works in exactly the same way. When the line length is precisely λ/4, the reflected
wave from the load destructively interferes with the wave reflected from the feedline/matching-
section interface and they cancel each other out. It should be noted that this method can only
match a real load. If the load has an appreciable imaginary component, it must be matched
differently. It can be transformed into a purely real load, at a single frequency, by adding an
appropriate length of feedline.

Junctions

A junction between two dissimilar width sections also introduces a large discontinuity. A standard
T-junction power divider is shown in figure 6.

Figure 4 Diagram of T-junction power divider.

In this diagram, the input power Pin is delivered to the intersection on a microstrip of width W0 and
impedance Z0. The line then branches into two arms with power, width and impedance given by
P1, W1, Z1 and P2, W2, Z2 respectively. The design equations for this divider are

1/Z0 = 1/Z1 + 1/Z2,    P1 = (Z0/Z1) Pin,    P2 = (Z0/Z2) Pin

The simplest type of matched T-junction is the lossless 3dB power divider. It can be seen from
the equations above that if Z1 = Z2 = 2Z0 the power will split evenly into the arms of the T with
each arm having half the original power. It is interesting to note that the impedances of the two


arms act just like resistors wired in parallel. To match the impedances of the arms of the T to the
impedance of the base, the arms must have twice the impedance of the base.
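
These divider relations can be sketched in a few lines (the function is ours): the junction is matched when the arm admittances sum to the base admittance, and each arm's power fraction is Z0 divided by that arm's impedance.

```python
def t_junction(z0, z1, z2):
    """Lossless T-junction: return (matched?, power fraction into each arm)."""
    matched = abs(1 / z1 + 1 / z2 - 1 / z0) < 1e-12
    return matched, z0 / z1, z0 / z2

# The 3dB case: both arms at twice the base impedance split power evenly.
ok, p1, p2 = t_junction(50.0, 100.0, 100.0)
```

An unequal split is also matched as long as the parallel combination of the arms equals the base impedance, e.g. 75 and 150 Ohm arms on a 50 Ohm base.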

Another typical microstrip discontinuity results from a simple bend in the line. Figure 7 shows
some typical bend discontinuities and the required compensation techniques.

Figure 5 Different bend discontinuities in microstrip and their compensations.

The increased conductor area in the region of the bend produces a parasitic discontinuity
capacitance. This effect can be eliminated by making a smooth swept bend where there is no
change in the conductor area. The radius has to be r≥ 3W, which takes up a large amount of
space, space that is always at a premium. A more space-effective compensation method is to
miter the right angle bend.

Source-microstrip transition

To launch a wave on the microstrip transmission line, the microwave signal is brought from the
generator on a coaxial cable which connects to an on-board PCB-mounted jack that is soldered
directly to the groundplane and feedlines. To minimize reflections in this process the generator,
coaxial cable and jack all have characteristic impedances of 50 ohms. The actual transferring of
the wave from the jack to the microstrip is the main source of reflections in this process. To
minimize these reflections the microstrip line impedance must match the impedance of the jack.

The compensation methods for a step change in width and the parasitic reactance of a T-junction
are shown in figure 8.


Figure 6 Second order discontinuities and their compensation techniques.

These discontinuities are second order, only becoming significant at frequencies above 3 GHz.
For this reason, these methods of compensation were not employed in this research.

We have reviewed basic transmission line theory, explaining the terms used to describe
microstrip circuits and the techniques used to match different elements in a circuit. In the next
section, we discuss how to apply this theory.

History and Motivation

Facts about Cancer

Cancer is a group of diseases characterized by uncontrolled growth and spread of abnormally
transformed or mutated cells. If this spread is not controlled, death will eventually result. Cancer
is caused by both external (chemicals, radiation, and viruses) and internal (hormones, immune
response dysfunction, and inherited gene deficiencies) factors. Causal factors may act together
or in sequence to initiate or promote carcinogenesis.

About 2.6 million new cancer cases are expected to be diagnosed in 2000. This year about
552,200 Americans are expected to die of cancer—more than 1,500 people a day. Cancer is the
second leading cause of death in the US, exceeded only by heart disease. In the US, 1 of every 4
deaths is from cancer.


Breast cancer is a malignant tumor that has developed from cells of the breast. Breast cancer is
the most common cancer among women, excluding non-melanoma skin cancers. The American
Cancer Society estimates that in 2000, 182,800 new cases of invasive breast cancer (Stages I-IV)
will be diagnosed among women in the United States, resulting in 41,200 deaths. Breast cancer
is the second leading cause of cancer death in women, exceeded only by lung cancer.

There are many treatment options for women with breast cancer, including surgical removal of the
entire breast or lump, radiotherapy, and various chemotherapy and hormone treatments. If a
cancer comes back after treatment it is called a recurrence. Nearly one third of these breast
cancer recurrences are on the chest wall. This chestwall recurrence of breast carcinoma is quite
deadly, with only 25 to 30% of patients surviving out to five years. An interesting fact about
chestwall recurrence is the large range in survival: two years is the median survival following
recurrence, but it ranges from a few months to 30 years. Successful treatment of chestwall
recurrence thus has the potential to add years to a patient's life as well as significantly improve
the patient's quality of life.

Hyperthermia as Cancer Treatment

Hyperthermia is the use of elevated tissue temperature for the treatment of cancer. Hyperthermia
therapy consists of elevating tissue temperature to the range 41 to 45° C for an hour. When used
alone, protein denaturation is thought to be the main cause of hyperthermic cell death. Heat is also
thought to affect cells in the following ways: heat can alter the structure of plasma membranes
and impair many membrane-related functions, which can lead to cell death. Heat also damages
mitochondria and inhibits glycolysis and respiration. Heat can also inhibit the synthesis and repair
of DNA, proteins and RNA, and heat damages polysomes and microsomes.

While hyperthermia used alone is effective (when temperatures and thermal doses are sufficiently
high), heat is most commonly used as an adjuvant treatment. The two types of cancer treatments
most commonly used with hyperthermia are chemotherapy and radiotherapy. Chemotherapy is
the use of drugs to kill the cancer cells. These drugs, through various methods, disable the
reproductive abilities of cancerous cells. Radiation therapy is the use of x-rays, gamma-rays and
electrons as ionizing agents that interact with biologic material to produce highly reactive free
radicals, which result in biologic damage. The main effect of radiotherapy is to block the cell's
ability to reproduce. Radiation and heat interact in more than a simply additive way. This
synergistic interaction of heat and radiation is interpreted as a heat-induced sensitization of cells
to radiation, termed heat radiosensitization or thermal radiosensitization. This synergistic
interaction is attributed to the hyperthermic effect of preventing the repair of radiation-induced
DNA strand breaks and the excision of damaged bases. It is believed that these effects are
caused by (1) heat-induced inactivation of DNA repair enzymes and/or (2) alteration of the
chromatin structure due to protein denaturation and aggregation, which causes decreased
accessibility of the damaged sites to the repair machinery. It has also been shown that mild
hyperthermia, when given concurrently with low-dose-rate irradiation, can remove the low-dose-
rate sparing effect. There has also been no evidence that radiation results in an enhancement of
heat lesions, i.e. no radiation-induced heat sensitization takes place.

Hyperthermia with chemotherapy has not been studied as extensively as combinations with
radiation, but some strong rationales exist for its use. Hyperthermia enhances the cell-killing
effect of a number of chemotherapeutic agents, such as cyclophosphamide, melphalan, cisplatin
and doxorubicin. Perhaps the most obvious effect is that, if heat is localized to the tumor volume,
blood flow to that area increases as the body attempts to cool the area, thereby increasing the
concentration of therapeutic chemicals delivered to the tumor relative to the rest of the body at a
cooler temperature. Heat also causes blood vessel walls inside the tumor to become more
permeable (leaky), causing drugs to leak into the heated tumor at a higher rate. The increased


chemotherapeutic effect at elevated temperatures can be caused by altered pharmacokinetics or
pharmacodynamics, increased DNA damage, decreased DNA repair, reduced oxygen-radical
detoxification, and increased membrane damage. In addition, concentrations of agents that are
not normally toxic at normal body temperature can become cytotoxic above 39° C, and in some
cases hyperthermia may partially overcome some types of drug resistance.

Heating Mechanisms

There are three primary methods of heating tissue in hyperthermia: 1) frictional losses from
molecular oscillations caused by an ultrasound pressure wave; 2) simple thermal conduction from
areas of high temperature to areas of low temperature; and 3) resistive and dielectric losses from
an applied electromagnetic field. Of these three, the present effort relies primarily on an applied
EM field to induce heating of superficial tissue at depths up to 1cm, and on thermal conduction to
heat slightly deeper tissue and smooth the temperature distribution.

All living human tissue contains some amount of free charge, which can interact with an external
electromagnetic field. Tissues with high water content, and thus a large percentage of polar
molecules, interact especially well. Blood, skin, muscle, internal organs and tumors all contain
large percentages of water. At microwave frequencies above 100 MHz, human tissue can be
considered as a lossy dielectric. The electrical properties of lossy human tissue may be
characterized in terms of its dielectric constant and electrical conductivity. As an example,
muscle has a dielectric constant of 51 and an electrical conductivity σ of 1.21 S/m, while
germanium, commonly used as a semiconductor, has a conductivity of 2.17 S/m. At 915 MHz,
dielectric losses in tissue predominate and heating results primarily from friction caused by polar
water molecules that rotate and oscillate to maintain alignment with the time-varying electric field.

The amount of microwave energy absorbed by tissue is given by the absorbed power density, in
watts per meter cubed,

P = σ|E|²/2 = |J|²/(2σ),

where J = σE is the induced current density and E is the peak electric field. The absorbed power
density is also stated in terms of power absorbed per kilogram of tissue, or specific absorption
rate (SAR):

SAR = P/ρ = σ|E|²/(2ρ),

where ρ is the density of tissue in kilograms per meter cubed. The SAR pattern is the quantity
used most often to describe the heating properties of a particular hyperthermia applicator. In
general the 50% SAR level is considered to be the extent of effective heating. The qualities
inherent in an acceptable SAR pattern (see figure for an example) are that the 50% level extends
to at least the dimensions of the applicator, and that the SAR distribution inside the 50% contour
is relatively flat, with no sharp peaks or valleys, rising smoothly everywhere to the maximum
value.
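The standard expressions P = σ|E|²/2 and SAR = P/ρ can be checked with a quick numeric example. The 1050 kg/m³ muscle density and the 100 V/m peak field below are assumed illustrative values, not figures from the text; only the conductivity comes from the muscle example above.

```python
sigma = 1.21      # muscle conductivity, S/m (from the text)
rho = 1050.0      # assumed muscle density, kg/m^3
E_peak = 100.0    # illustrative peak electric field, V/m

P = 0.5 * sigma * E_peak ** 2    # absorbed power density, W/m^3
J = sigma * E_peak               # induced current density, A/m^2
SAR = P / rho                    # specific absorption rate, W/kg

print(f"P   = {P:.0f} W/m^3")
print(f"J   = {J:.0f} A/m^2")
print(f"SAR = {SAR:.2f} W/kg")
```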

Equipment and Techniques For Producing Hyperthermia in Superficial Tissues

Dr.A.B.Rajib Hazarika,PhD,FRAS,AES
Invention of Dr.A.B.Rajib Hazarika’s Devices 154

The past two decades have seen considerable growth and development in electromagnetic
techniques available for producing superficial hyperthermia. The microwave waveguide applicator
is probably the most basic method for providing superficial hyperthermia by electromagnetic
means; it consists of a rectangular waveguide excited by a monopole feed. The dimensions of the
waveguide are selected so that a strong TE10 mode propagates at the chosen frequency.
Because human tissues are in general layered, with a high-resistance fat layer between low-
resistance skin and muscle or tumor tissues, the TE10 mode is preferred because the electric field
is oriented tangential to the skin surface. This tangential electric field minimizes overheating of
the fat-muscle tissue interface because the high resistance fat appears in parallel to the low
resistance muscle or tumor layer. While this design did produce some useful heating for a few
limited clinical situations, the dimensions of the waveguide proved too large to conform
adequately to the usually contoured treatment sites. The dimensions of the waveguide were
reduced by loading the waveguide with high-dielectric material, reducing the wavelength in the
guide and therefore the aperture size. The field pattern of these applicators has a maximum in the
geometrical center and falls off to well below 50% of the maximum field at the waveguide edges.
To reduce this central hot spot and to increase the field strength at the edges, a coupling bolus is
used. A coupling bolus is a flexible bag, attached to the waveguide face, that circulates
temperature-controlled de-ionized, de-gassed water or, in some applications, silicone oil.
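The effect of dielectric loading on aperture size can be sketched with the standard TE10 cutoff condition f_c = c/(2a√εr) for a completely filled rectangular guide. This is a simplified model; the εr = 78 water-like filling is an assumed value for illustration, not a figure from the text.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def min_broad_wall_width(freq_hz, eps_r=1.0):
    """Minimum broad-wall width a (m) for TE10 propagation in a rectangular
    waveguide completely filled with dielectric eps_r, from the cutoff
    condition f > f_c = c / (2 a sqrt(eps_r))."""
    return C / (2.0 * freq_hz * math.sqrt(eps_r))

a_air = min_broad_wall_width(915e6)             # air-filled guide
a_loaded = min_broad_wall_width(915e6, 78.0)    # assumed high-dielectric loading
print(f"air-filled:  a > {a_air * 100:.1f} cm")
print(f"eps_r = 78:  a > {a_loaded * 100:.2f} cm")
```

The aperture shrinks by a factor of √εr, which is why loading the guide with high-dielectric material makes the applicator small enough to handle, as described above.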

Variable-absorption boluses have also been studied as a way to increase the homogeneity of the
field pattern from a waveguide applicator. With this technique the de-ionized water bolus is
compartmentalized, and the different compartments can be filled with a more highly absorbing
material such as saline solution. The compartments filled with saline reduce the energy
transmitted, and in this way the central maximum can be reduced while the heating at the edges is
not affected, resulting in a more uniform energy deposition in skin but at the cost of higher overall
power.

To address the problem of standard waveguide applicators' non-uniform field distributions, horn
waveguide applicators were studied. Horn waveguide applicators utilize a flared opening to
spread the radiated field and to obtain a better impedance match to the tissue. These applicators
produced a more uniform field pattern, with the 50%-of-maximum level being larger than that of
the standard waveguide but still not equal to the horn perimeter. While these horn applicators had
a more uniform field pattern, they still suffered from being too large to effectively cover large
regions of tissue over contoured treatment sites.

A common problem for both of these methods is the non-adjustability of the electromagnetic field
pattern under the face of the applicator to tailor the field pattern for irregular tumor shapes. Thus
the next logical step was to make an applicator consisting of several waveguides together in an
array of radiating apertures. One such commercially available hyperthermia system is the
Microtherm 1000 (Labthermics Technologies Inc, Champaign, IL), which has an array of 16
waveguides and an integral water bolus on a movable support arm (see fig. 1).


Figure 1: The Microtherm 1000 hyperthermia applicator (Labthermics Technologies Inc,
Champaign, IL)

Figure 2: Close-up of the extendable bolus of the Microtherm 1000

The Microtherm 1000 can treat an area of 13 by 13 cm, up to 1.5 cm deep, and is currently the
standard of care in electromagnetic superficial hyperthermia. This is the machine used at UCSF in
treatments of superficial skin disease such as chest-wall recurrence of breast carcinoma. The
advantages of this machine over single-waveguide methods are that it can cover a larger area
than a single-aperture waveguide of identical size and that, by adjusting the


power to the various elements, the field pattern can be shaped somewhat to adjust for irregular
tumor shapes. While this machine can cover more area, with improved heating uniformity, it still
suffers from one of the same failings as the single waveguides: it cannot conform around curved
anatomy. While its 8 cm thick water bolus helps it conform somewhat to small curvature, it still
cannot treat surface disease which spreads around the ribcage. It is useful primarily on flat
treatment sites.

Another heating approach makes use of an inductive-loop current sheet applicator, which is
smaller and lighter in weight than typical waveguide applicators and can be connected together in
hinged, flexible arrays for contoured surfaces. While more compact than waveguide and horn
applicators, these applicators require great care when used in arrays to avoid under- or over-
heating the area between adjacent apertures, especially when angled together over contoured
surfaces.

Recently there has been considerable interest in using printed-circuit-board (PCB) based
microwave radiators. Microwave patch, slot, and spiral radiators have been studied. It was found
that many PCB-based microwave radiators have a large electric field component oriented
normal to the fat-muscle interface. This strong normal field component falls off faster, as a
function of distance from the applicator face, than the tangential component, suggesting the use
of a thick water bolus to reduce the normal component in relation to the tangential component.

There is a commercially available Contact Flexible Microstrip Applicator (CFMA) which can treat
an area of roughly 12.5 by 24 cm. While the CFMA has the ability to conform to contoured
treatment sites, it is a single-channel device, so there is no ability to shape the SAR pattern. If
the field must be reduced in one area, to avoid overheating a nipple for example, the power and
heating effectiveness must be reduced for the entire treatment site. Thus, while this applicator can
treat large areas involving contoured anatomy, there is no provision to adjust the heating pattern
to accommodate patient-specific anatomy or heterogeneous electrical and thermal tissue
properties.

Arrays of microstrip spiral antennas have also been used. An array of 25 individually controlled
spiral antennas built on a flexible PCB was studied by one group. It was found that the spiral
antennas produced a sharply peaked Gaussian pencil beam under the center of each spiral. A
minimum 3 cm thick water bolus was necessary to smooth the combined beam profile enough to
achieve useful heating without cold areas between spiral elements. While this method was, in
general, useful, the thick water bolus limited its use near complex contoured anatomy and
increased setup complexity and the power required.

As a way of avoiding the problem of awkward and heavy water bolus structures, Ryan et al.
studied a dense array of overlapping spirals. This array produced a more spatially uniform field
with a thinner water bolus. The drawback of this technique was that the large overlap of spirals
needed for a uniform field severely restricted the size of the treatable area. It would seem that the
microstrip spiral, with its sharply peaked central pencil-beam radiation pattern, was not ideally
suited for hyperthermia treatments of large surface areas where homogeneity of the heating field
is needed.
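A toy one-dimensional model illustrates why widening each element's effective beam, as a thick water bolus does, smooths the combined profile of an array of Gaussian pencil beams. All dimensions here are arbitrary illustrative numbers, not values from any of the studies cited above.

```python
import math

def array_ripple(element_spacing, beam_sigma, n_elements=5, samples=400):
    """Peak-to-trough ripple of the summed 1-D Gaussian beam profiles,
    evaluated across the interior region of the array (toy model)."""
    centers = [i * element_spacing for i in range(n_elements)]
    lo, hi = centers[1], centers[-2]  # interior region only
    values = []
    for k in range(samples + 1):
        x = lo + (hi - lo) * k / samples
        values.append(sum(math.exp(-((x - c) ** 2) / (2 * beam_sigma ** 2))
                          for c in centers))
    return (max(values) - min(values)) / max(values)

narrow = array_ripple(element_spacing=2.0, beam_sigma=0.6)  # thin bolus
wide = array_ripple(element_spacing=2.0, beam_sigma=1.2)    # thick bolus
print(f"ripple with narrow beams:  {narrow:.1%}")
print(f"ripple with widened beams: {wide:.1%}")
```

Doubling the effective beam width collapses the deep cold spots between elements into a nearly flat profile, which is the smoothing role the 3 cm bolus played in the spiral-array study.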

In summary, the currently available technology for heating superficial tissues cannot cover a
large enough treatment area, cannot conform to curved treatment sites typically seen in the clinic,
and cannot provide sufficient adjustment of the heating pattern to cover irregularly shaped
treatment sites.

From the previous evaluation of applicators, the following specifications were determined for an
ideal large area superficial hyperthermia applicator:


1. Flexibility: the ability to curve around contoured anatomy such as a rib cage.
2. Multi-element array with individual power control to each element.
3. Invisible to high-energy (6-20 MeV) electrons, to allow simultaneous heating and
radiotherapy.
4. Lightweight, to increase patient comfort and mobility during treatment.
5. Minimum set-up complexity.
6. Low cost.

The Conformal Microwave Array (CMA) described in this thesis has the potential to fulfill all of the
above specifications for an ideal applicator for treating large-area superficial hyperthermia. The
CMA is an array of microstrip patch antennas etched into a very flexible two-sided PCB (see fig.
9, diagram of CMA). It is light in weight, extremely thin (9 mils), easy to use, and inexpensive to
manufacture compared to previous applicators.

Objectives

The main thrust of this thesis is to describe efforts to optimize the Conformal Microwave Array.
Optimization is desired in the sense that we want to produce the highest uniform output power
with the lowest possible input power. Specifically, the radiation efficiency (the ratio of power out to
power in) and the uniformity, or balance, of the output of the individual antennas were improved.
To achieve these goals, I have concentrated on applying microwave-engineering theory to the
microstrip line network, which extends from the coax-to-microstrip RF connector on the PCB edge
across the antenna array surface and splits to feed the four sides of each radiating microstrip
patch.
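The impedance bookkeeping in such a corporate feed network can be sketched with the standard quarter-wave transformer relation Z_t = sqrt(Z_feed * Z_load). The 50-ohm feed line and 100-ohm branch values below are assumed for illustration and are not taken from the thesis.

```python
import math

def quarter_wave_impedance(z_feed, z_load):
    """Characteristic impedance of a quarter-wave matching section
    that matches z_feed to z_load."""
    return math.sqrt(z_feed * z_load)

def split_input_impedance(z_branch, n_branches):
    """Input impedance of n identical branches fed in parallel."""
    return z_branch / n_branches

# Two 100-ohm branches in parallel present 50 ohms to a 50-ohm main line,
# so an equal two-way split needs no transformer; matching a single
# 100-ohm load to a 50-ohm line would need a ~70.7-ohm quarter-wave section.
z_in = split_input_impedance(100.0, 2)
z_t = quarter_wave_impedance(50.0, 100.0)
print(f"parallel input impedance: {z_in:.0f} ohm")
print(f"quarter-wave section, 50 -> 100 ohm: {z_t:.2f} ohm")
```

Keeping each junction in the network matched in this way is what drives the radiation efficiency (power out over power in) toward its maximum, since mismatches reflect power back toward the connector instead of delivering it to the patches.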

5.3. FM Radio

FM stands for frequency modulation, a technique invented by Edwin Armstrong of the United States.

Digital radio describes radio communications technologies which carry information as a digital signal, by
means of a digital modulation method. Digital radio is very commonly used in microwave radio
communications. This is widely used in point-to-point microwave systems on the surface of the Earth
(terrestrial), in satellite communications carrying all kinds of digital information, and in deep space
communication systems, such as communications to and from the two Voyager space probes. Terrestrial
digital microwave communication systems can carry any form or digital information at all, including
multiplexed digitized voice or music signals, Internet traffic, financial traffic, military communications, etc.

The key breakthrough, or key feature, of digital microwave systems is that they can carry any kind of
information whatsoever, as long as it has been expressed as a sequence of ones and zeroes. Earlier
radio communication systems had to be built expressly for a given form of communications: telephone,
telegraph, or television, for example. All kinds of digital communications can be multiplexed or encrypted
at will.
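The idea that any bit streams can share one digital channel can be sketched with a toy round-robin multiplexer. This is purely illustrative; real systems use framed multiplexing with synchronization words and forward error correction.

```python
def mux(stream_a, stream_b):
    """Interleave two equal-length bit streams into one sequence."""
    out = []
    for a_bit, b_bit in zip(stream_a, stream_b):
        out.extend([a_bit, b_bit])
    return out

def demux(stream):
    """Recover the two original streams from the interleaved sequence."""
    return stream[0::2], stream[1::2]

voice = [1, 0, 1, 1, 0, 0, 1, 0]  # e.g. digitized voice bits
data = [0, 0, 1, 0, 1, 1, 0, 1]   # e.g. Internet traffic bits
combined = mux(voice, data)
a, b = demux(combined)
assert a == voice and b == data   # both streams survive the shared channel
```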

Other common meanings of digital radio include digital audio broadcasting, digital television broadcasting,
short-range digital wireless communications, and radio broadcasting delivered via the Internet.

One-way digital radio


One-way standards are those used for broadcasting, as opposed to those used for two-way communication.
While digital broadcasting offers many potential benefits, its introduction has been hindered by a lack of
global agreement on standards. The Eureka 147 standard (DAB) for digital radio is the most commonly
used and is coordinated by the World DMB Forum, which represents more than 30 countries. This standard
of digital radio technology was defined in the late 1980s, and is now being introduced in many countries.
Commercial DAB receivers began to be sold in 1999 and, by 2006, 500 million people were in the
coverage area of DAB broadcasts, although by this time sales had taken off only in the UK and Denmark.
In 2006 there were approximately 1,000 DAB stations in operation.[1] There have been criticisms of the
Eureka 147 standard, and so a new 'DAB+' standard has been proposed.

To date the following standards have been defined for one-way digital radio:

Digital audio broadcasting systems


o Eureka 147 (branded as DAB)
o DAB+
o Digital Radio Oceane
o FM band in-band on-channel (FM IBOC):
   - HD Radio (OFDM modulation over FM and AM band IBOC sidebands)
   - FMeXtra (FM band IBOC subcarriers)
   - Digital Radio Mondiale extension (DRM+) (OFDM modulation over AM band IBOC sidebands)
o AM band in-band on-channel (AM IBOC):
   - HD Radio (AM IBOC sideband)
   - Digital Radio Mondiale (branded as DRM) for the short, medium and long wave-bands
o Satellite radio:
   - WorldSpace in Asia and Africa
   - Sirius in North America
   - XM Radio in North America
   - MobaHO! in Japan and the Republic of (South) Korea
o ISDB-TSB
o Systems also designed for digital TV:
   - DMB
   - DVB-H
• Internet radio
• Low-bandwidth digital data broadcasting over existing FM radio:
o Radio Data System (branded as RDS)
• Radio pagers:
o FLEX
o ReFLEX
o POCSAG
o NTT

Digital television broadcasting (DTV) systems


o Digital Video Broadcasting (DVB)
o Integrated Services Digital Broadcasting (ISDB)
o Digital Multimedia Broadcasting (DMB)
o Digital Terrestrial Television (DTTV or DTT), to fixed, mainly roof-top antennas:
   - DVB-T (based on OFDM modulation)


   - ISDB-T (based on OFDM modulation)
   - ATSC (based on 8VSB modulation)
   - T-DMB
o Mobile TV reception in handheld devices:
   - DVB-H (based on OFDM modulation)
   - MediaFLO (based on OFDM modulation)
   - DMB (based on OFDM modulation)
   - Multimedia Broadcast Multicast Service (MBMS) via the GSM EDGE and UMTS cellular networks
   - DVB-SH (based on OFDM modulation)
o Satellite TV:
   - DVB-S (for satellite TV)
   - ISDB-S
   - 4DTV
   - S-DMB
   - MobaHO!

See also software radio for a discussion of radios which use digital signal processing.

DAB adopters

Digital Audio Broadcasting (DAB), also known as Eureka 147, has been under development since the early
1980s and has been adopted by around 20 countries worldwide. It is based around the MPEG-1 Audio Layer
II audio codec, and its deployment has been coordinated by WorldDMB. DAB receivers are selling well in
some markets.

WorldDMB announced in a press release in November 2006 that DAB would be adopting the HE-AACv2
audio codec, which is also known as eAAC+. Also being adopted are the MPEG Surround format and
stronger error-correction coding called Reed-Solomon coding.[2] The update has been named DAB+.
Receivers that support the new DAB standard began being released during 2007, with firmware updates
available for some older receivers.

DAB and DAB+ cannot be used for mobile TV because they do not include any video codecs. The DAB-
related standards Digital Multimedia Broadcasting (DMB) and DAB-IP are suitable for both mobile radio
and TV because they have MPEG-4 AVC and WMV9, respectively, as video codecs. A DMB video sub-
channel can easily be added to any DAB transmission, as DMB was designed from the outset to be carried
on a DAB subchannel. DMB broadcasts in Korea carry conventional MPEG-1 Layer II DAB audio services
alongside their DMB video services.

The United States has opted for a proprietary system called HD Radio technology, a type of in-band on-
channel (IBOC) technology. Transmissions use orthogonal frequency-division multiplexing (OFDM), a
technique also used for European terrestrial digital TV broadcasting (DVB-T). HD Radio technology was
developed and is licensed by iBiquity Digital Corporation. It is widely believed that a major reason for HD
Radio technology is to offer some limited digital radio services while preserving the relative "stick values"
of the stations involved, and to ensure that new programming services will be controlled by existing
licensees.
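The OFDM idea can be sketched as a round trip over an ideal channel: data symbols are placed on orthogonal subcarriers with an inverse DFT, a cyclic prefix is prepended, and a forward DFT at the receiver recovers the symbols. This is a minimal sketch only; real OFDM systems such as HD Radio and DVB-T add pilot carriers, channel coding, and equalization.

```python
import cmath

N = 8  # number of subcarriers in this toy example

def idft(symbols):
    """Inverse DFT: map one symbol per subcarrier to N time samples."""
    return [sum(s * cmath.exp(2j * cmath.pi * k * n / N)
                for k, s in enumerate(symbols)) / N for n in range(N)]

def dft(samples):
    """Forward DFT: recover one symbol per subcarrier from N time samples."""
    return [sum(x * cmath.exp(-2j * cmath.pi * k * n / N)
                for n, x in enumerate(samples)) for k in range(N)]

# QPSK-like data symbols, one per subcarrier:
symbols = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j, 1 + 1j, -1 - 1j, 1 - 1j, -1 + 1j]
time_signal = idft(symbols)
tx = time_signal[-2:] + time_signal   # prepend a 2-sample cyclic prefix
rx = tx[2:]                           # ideal channel; receiver strips prefix
recovered = dft(rx)
assert all(abs(r - s) < 1e-9 for r, s in zip(recovered, symbols))
```

The cyclic prefix is what makes OFDM robust to multipath echoes: as long as an echo is shorter than the prefix, it only rotates each subcarrier's symbol instead of smearing symbols into each other.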

The FM digital schemes in the U.S. provide audio at rates from 96 to 128 kilobits per second (kbit/s), with
auxiliary "subcarrier" transmissions at up to 64 kbit/s. The AM digital schemes have data rates of about 48
kbit/s, with auxiliary services provided at a much lower data rate. Both the FM and AM schemes use lossy
compression techniques to make the best use of the limited bandwidth.

Lucent Digital Radio, USA Digital Radio (USADR), and Digital Radio Express commenced tests in 1999
of their various schemes for digital broadcast, with the expectation that they would report their results to


the National Radio Systems Committee (NRSC) in December 1999.[3] The results of these tests remain
unclear, which in general describes the status of the terrestrial digital radio broadcasting effort in North
America. Some terrestrial analog broadcast stations are apprehensive about the impact of digital satellite
radio on their business, while others plan to convert to digital broadcasting as soon as it is economically
and technically feasible.

While traditional terrestrial radio broadcasters are trying to "go digital", most major US automobile
manufacturers are promoting digital satellite radio. HD Radio technology has also made inroads in the
automotive sector with factory-installed options announced by BMW, Ford, Hyundai, Jaguar, Lincoln,
Mercedes, MINI, Mercury, Scion, and Volvo. Beyond the U.S., commercial implementation of HD Radio
technology is gaining momentum around the world.[4]

Satellite radio is distinguished by its freedom from FCC censorship in the United States, its relative lack of
advertising, and its ability to allow people on the road to listen to the same stations at any location in the
country. Listeners must currently pay an annual or monthly subscription fee in order to access the service,
and must install a separate security card in each radio or receiver they use.

Ford and Daimler AG are working with Sirius Satellite Radio, previously CD Radio, of New York City,
and General Motors and Honda are working with XM Satellite Radio of Washington, D.C. to build and
promote satellite DAB radio systems for North America, each offering "CD quality" audio and about a
hundred channels.[citation needed]

Sirius Satellite Radio launched a constellation of three Sirius satellites during the course of 2000. The
satellites were built by Space Systems/Loral and were launched by Russian Proton boosters. As with XM
Satellite Radio, Sirius implemented a series of terrestrial ground repeaters where satellite signal would
otherwise be blocked by large structures including natural structures and high-rise buildings.

XM Satellite Radio has a constellation of three satellites, two of which were launched in the spring of 2001,
with one following later in 2005. The satellites are Boeing (previously Hughes) 702 comsats, and were put
into orbit by Sea Launch boosters. Back-up ground transmitters (repeaters) will be built in cities where
satellite signals could be blocked by big buildings.

The FCC has auctioned bandwidth allocations for satellite broadcast in the S band range, around 2.3 GHz.

The perceived wisdom of the radio industry is that the terrestrial medium has two great strengths: it is free
and it is local.[citation needed] Satellite radio is neither of these things; however, in recent years, it has grown to
make a name for itself by providing uncensored content (most notably, the crossover of Howard Stern from
terrestrial radio to satellite radio) and commercial-free, all-digital music channels that offer similar genres
to local broadcast favorites.

• Note that "digital radio" has a limited listening distance from the tower site. FCC rules currently
limit the digital signal to a maximum of 10% of the power of the corresponding US analog signal.
"There are still some concerns that HD Radio on FM will increase interference between different
stations even though HD Radio at the 10% power level fits within the FCC spectral mask." "HD
Radio" carries only 2 channels in the USA, side by side with the analog station: HD channel 1 may
be on 93.2 FM, the analog station on 93.3, and HD channel 2 on 93.4 FM. Different stations
multicast on different frequencies, respectively.

• Also note that "HD Radio" is digital radio but is not "high definition," as much of the US
population assumes; "HD" stands for "Hybrid Digital."

In the United Kingdom, 32.1% of the population own a DAB digital radio set.[5] The UK currently has the
world's biggest digital radio network, with 103 transmitters, two nation-wide DAB ensembles and 48 local
and regional DAB ensembles, broadcasting over 250 commercial and 34 BBC radio stations; 51 of these


stations are broadcast in London. However, the audio quality on DAB is lower than on FM, and some areas
of the country are not covered by DAB. To overcome this, the government intends to migrate the AM and
FM analogue services to digital in 2015. Digital radio stations are also broadcast on digital television
platforms, as Digital Radio Mondiale on medium-wave and shortwave frequencies, and as internet radio;
41% of digital radio users listen to digital radio through a television platform.[6]

Australia commenced regular digital audio broadcasting using the DAB+ standard in May 2009, after many
years of trialing alternative systems. Normal radio services operate on the AM and FM bands, as well as
four stations (ABC and SBS) on digital TV channels. The services are currently operating in five state
capital cities (Adelaide, Brisbane, Melbourne, Perth and Sydney) and are under trial in other capitals and
regional centers.

Japan has started terrestrial sound broadcasting using ISDB-Tsb, as well as MobaHO! 2.6 GHz satellite
digital sound broadcasting.

On 1 December 2005 South Korea launched its T-DMB service which includes both television and radio
stations. T-DMB is a derivative of DAB with specifications published by ETSI. More than 110,000
receivers were sold in a single month in 2005.

Digital radio is now being provided to the developing world. A satellite communications company named
WorldSpace is setting up a network of three satellites, including "AfriStar", "AsiaStar", and "AmeriStar",
to provide digital audio information services to Africa, Asia, and Latin America. AfriStar and AsiaStar are
in orbit. AmeriStar cannot be launched from the United States, as WorldSpace transmits on the L-band and
would interfere with US military use, as mentioned above.[citation needed]

Each satellite provides three transmission beams that can support 50 channels each, carrying news, music,
entertainment, and education, and including a computer multimedia service. Local, regional, and
international broadcasters are working with WorldSpace to provide services.

A consortium of broadcasters and equipment manufacturers is also working to bring the benefits of digital
broadcasting to the radio spectrum currently used for terrestrial AM radio broadcasts, including
international shortwave transmissions. Over seventy broadcasters are now transmitting programs using the
new standard, known as Digital Radio Mondiale (DRM), and commercial DRM receivers are available.
DRM's system uses the MPEG-4 based standard aacPlus to code music, and CELP or HVXC for speech
programs. At present these receivers are priced too high to be affordable by many in the third world,
however.

Low-cost DAB radio receivers are now available from various Japanese manufacturers, and WorldSpace
has worked with Thomson Broadcast to introduce a village communications center, known as a Telekiosk,
to bring communications services to rural areas. The Telekiosks are self-contained and are available as
fixed or mobile units.

Two-way digital radio standards

• Digital cellular telephony:
o GSM
o UMTS (sometimes called W-CDMA)
o TETRA
o IS-95 (cdmaOne)
o IS-136 (D-AMPS, sometimes called TDMA)
o IS-2000 (CDMA2000)
o iDEN
• Digital Mobile Radio:
o Project 25 a.k.a. "P25" or "APCO-25"
o TETRA


o NXDN
• Wireless networking:
o Wi-Fi
o HIPERLAN
o Bluetooth
o DASH7
o ZigBee
• Military radio systems for Network-centric warfare
o JTRS (Joint Tactical Radio System- a flexible software-defined radio)
o SINCGARS (Single channel ground to air radio system)
• Amateur packet radio:
o AX.25
• Digital modems for HF:
o PACTOR
• Satellite radio:
o Satmodems
• Wireless local loop:
o Basic Exchange Telephone Radio Service
• Broadband wireless access:
o IEEE 802.16

References

1. ^ "Digital Broadcast - bringing the future to you".
2. ^ WorldDMB press release, November 2006.
http://www.worlddab.org/upload/uploaddocs/WorldDMBPress%20Release_November.pdf
3. ^ Behrens, Steve. "Field testing resumes for radio's digital best hope." Current, Aug. 16, 1999.
Available at http://www.current.org/tech/tech915r.html
4. ^ http://www.ibiquity.com/automotive
5. ^ Plunkett, John (2009-05-07). "Rajars: More than a third of UK is now listening to digital radio".
The Guardian. http://www.guardian.co.uk/media/2009/may/07/rajars-digital-radio. Retrieved
2009-05-07.
6. ^ Oatts, Joanne (2007-05-10). "Digital radio owners up 43%". Digital Spy.
http://www.digitalspy.co.uk/radio/a46373/digital-radio-owners-up-43-percent.html. Retrieved
2007-05-12.




5.6. HYBRID FUSION ENERGY GENERATION

Here we use the same type of system, resulting in a different type of technology, prevalent in many places,
known as hybrid technology. For fusion purposes the fast neutrons are waste products that heat the plasma
chamber; the accelerated neutrons extracted from the DUO TRIAD TOKAMAK COLLIDER (DTTC)
HUB can therefore be collected by a neutron-absorbing blanket and channelled to a fission chamber, where
those neutrons are needed, in uranium- or plutonium-based nuclear reactors.
Nuclear fusion-fission hybrid could contribute to carbon-free energy future
January 27th, 2009
This illustration shows how a compact fusion-fission hybrid would fit
into a nuclear fuel cycle. The fusion-fission hybrid can use fusion
reactions to burn nuclear waste as fuel (people are shown for scale). It
would produce energy and could be used to help destroy the most
toxic, long-lived waste from nuclear power. The hybrid would be
made possible by a crucial invention from physicists at the University
of Texas at Austin called the Super X Divertor. Credit: Angela Wong
Physicists at the University of Texas at Austin have designed a
new system that, when fully developed, would use fusion to
eliminate most of the transuranic waste produced by nuclear
power plants.
The invention could help combat global warming by making nuclear
power cleaner and thus a more viable replacement of carbon-heavy
energy sources, such as coal.
"We have created a way to use fusion to relatively inexpensively
destroy the waste from nuclear fission," says Mike Kotschenreuther,
senior research scientist with the Institute for Fusion Studies (IFS)
and Department of Physics. "Our waste destruction system, we
believe, will allow nuclear power-a low carbon source of energy-to
take its place in helping us combat global warming."
Toxic nuclear waste is stored at sites around the U.S. Debate
surrounds the construction of a large-scale geological storage site at Yucca Mountain in Nevada, which
many maintain is costly and dangerous. The storage capacity of Yucca Mountain, which is not expected to
open until 2020, is set at 77,000 tons. The amount of nuclear waste generated by the U.S. will exceed this
amount by 2010.
The physicists' new invention could drastically decrease the need for any additional or expanded geological
repositories.
"Most people cite nuclear waste as the main reason they oppose nuclear fission as a source of power," says
Swadesh Mahajan, senior research scientist.
The scientists propose destroying the waste using a fusion-fission hybrid reactor, the centerpiece of which
is a high power Compact Fusion Neutron Source (CFNS) made possible by a crucial invention.
The CFNS would provide abundant neutrons through fusion to a surrounding fission blanket that uses
transuranic waste as nuclear fuel. The fusion-produced neutrons augment the fission reaction, imparting
efficiency and stability to the waste incineration process.
Kotschenreuther, Mahajan and Prashant Valanju, of the IFS, and Erich Schneider of the Department of
Mechanical Engineering report their new system for nuclear waste destruction in the journal Fusion
Engineering and Design.
There are more than 100 fission reactors, called "light water reactors" (LWRs), producing power in the
United States. The nuclear waste from these reactors is stored and not reprocessed. (Some other countries,


such as France and Japan, do reprocess the waste.)

The scientists' waste destruction system would work in two major steps.
First, 75 percent of the original reactor waste is destroyed in standard, relatively inexpensive LWRs. This
step produces energy, but it does not destroy the highly radiotoxic, transuranic, long-lived waste, which the
scientists call "sludge."
In the second step, the sludge would be destroyed in a CFNS-based fusion-fission hybrid. The hybrid's
potential lies in its ability to burn this hazardous sludge, which cannot be stably burnt in conventional
systems.
"To burn this really hard to burn sludge, you really need to hit it with a sledgehammer, and that's what we
have invented here," says Kotschenreuther.
One hybrid would be needed to destroy the waste produced by 10 to 15 LWRs.
The process would ultimately reduce the transuranic waste from the original fission reactors by up to 99
percent. Burning that waste also produces energy.
The CFNS is designed to be no larger than a small room, and far fewer of the devices would be needed
compared with other schemes being investigated for similar processes. In combination with the
substantial decrease in the need for geological storage, the CFNS-enabled waste-destruction system would
be much cheaper and faster than other routes, say the scientists.
The CFNS is based on a tokomak, which is a machine with a "magnetic bottle" that is highly successful in
confining high temperature (more than 100 million degrees Celsius) fusion plasmas for sufficiently long
times.
The crucial invention that would pave the way for a CFNS is called the Super X Divertor. The Super X
Divertor is designed to handle the enormous heat and particle fluxes peculiar to compact devices; it would
enable the CFNS to safely produce large amounts of neutrons without destroying the system.
"The intense heat generated in a nuclear fusion device can literally destroy the walls of the machine," says
research scientist Valanju, "and that is the thing that has been holding back a highly compact source of
nuclear fusion."
Valanju says a fusion-fission hybrid reactor has been an idea in the physics community for a long time.
"It's always been known that fusion is good at producing neutrons and fission is good at making energy," he
says. "Now, we have shown that we can get fusion to produce a lot of neutrons in a small space."
Producing an abundant and clean source of "pure fusion energy" continues to be a goal for fusion
researchers. But the physicists say that harnessing the other product of fusion, neutrons, can be achieved in
the near term.
In moving their hybrid from concept into production, the scientists hope to make nuclear energy a more
viable alternative to coal and oil while waiting for renewables like solar and pure fusion to ramp up.
"The hybrid we designed should be viewed as a bridge technology," says Mahajan. "Through the hybrid,
we can bring fusion via neutrons to the service of the energy sector today. We can hopefully make a major
contribution to the carbon-free mix dictated by the 2050 time scale set by global warming scientists."
The scientists say their Super X Divertor invention has already gained acceptance in the fusion community.
Several groups are considering implementing the Super X Divertor on their machines, including the MAST
tokomak in the United Kingdom, and the DIII-D (General Atomics) and NSTX (Princeton University) machines in the
U.S. Next steps will include performing extended simulations, transforming the concept into an engineering
project, and seeking funding for building a prototype.
Source: University of Texas at Austin


5.10. APPLICATION IN LIQUID CRYSTAL DISPLAY (LCD) AND ORGANIC LIGHT EMITTING DIODE (OLED)

The application to liquid crystal displays (LCDs) and organic light emitting diodes (OLEDs) lies in the use of
nanotechnology, employing the Duo Triad Tokomak Collider (DTTC) hub as a nano torii cluster hub.

We can enhance the resolution of computer monitor screens as well as that of plasma TVs, while the
confinement time can be reduced. The resolution is 24.75% better than the best
computer monitor or plasma TV presently available. One particular brand of plasma and LCD TVs
projects a resolution of 1:1000000; in this case it would be 1:1500000. There would be no blurred
images, only a crystal clear picture viewable from a 172 degree wide angle without any diminishing of the
image from side viewing angles. All of this can be achieved using nanotechnology and
piezo-electronics.
Advanced Technology

• FFD (Feed Forward)
• Overdrive (Response Time Compensation)
• Double Overdrive
• Problems with Overdrive
• Response time measurements
• ClearMotiv
• MagicSpeed / Response Time Acceleration (RTA)
• Advanced Motion Accelerator (AMA)
• Over Driving Circuit (ODC)
• Fast Response LC + Special Driving
• Rapid Response / Rapid Motion
• Overdrive Panel Case Study (AUO 8ms)
• AU Optronics Simulated Pulse Driving Technology (ASPD)
• Sony X-Black
• Acer CrystalBrite
• BenQ Senseye
• NEC AmbiBright
• Samsung "Magic" Enhancements
• Acer eColor Management
• LG f-Engine
• ColorComp
• Dynamic Contrast
• LG.Philips Digital Fine Contrast (DFC)
• NEC Advanced DVM
• APE (AUO Picture Enhancer) Technology
• Acer Adaptive Contrast Management (ACM)
• Black Frame Insertion (BFI)


FFD (Feed Forward) – In 2001 NEC started developing new technologies for their TV panels.
The idea is based on the fact that the widest colour change is from white to black, and for this change,
the maximum voltage is applied to the transistor. NEC’s idea was to apply twice the voltage in half the
time: for example, instead of applying 1V over a time of 20ms, they applied 2V over a
time of 10ms. This meant that colour change times would theoretically be reduced significantly, although
according to NEC this technique was never applied in practice. The black > white transitions would remain
unaffected as they already had the maximum voltage applied to the transistors. This process is the
principle behind today’s ‘Overdrive’ technology:
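As a rough illustration of the arithmetic above, the sketch below (in Python, with hypothetical names; this is not NEC's actual drive circuitry) shows how doubling the drive voltage while halving the drive time leaves the voltage-time product unchanged:

```python
# Illustrative sketch of the feed-forward idea: halve the drive
# interval by doubling the drive voltage, so the voltage-time
# product delivered to the transistor stays constant.

def feed_forward_drive(base_voltage_v, base_time_ms, speedup=2.0):
    """Return (voltage, time) for a feed-forward drive pulse.

    speedup=2.0 reproduces NEC's example: 1 V over 20 ms becomes
    2 V over 10 ms. All names and parameters here are hypothetical.
    """
    return base_voltage_v * speedup, base_time_ms / speedup

v, t = feed_forward_drive(1.0, 20.0)
print(v, t)                   # 2.0 10.0
assert v * t == 1.0 * 20.0    # voltage-time product is unchanged
```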

Overdrive / Response Time Compensation (RTC) – this technology is based on applying an over-
voltage to the liquid crystals to drive them into their new orientation faster. The process forces them through a
full white (inactive) to black (active) transition first; the crystals can then drop back down to the
required grey level. This is helpful as the rise time of a crystal was always the slowest part (response time
= Tr + Tf). This technology does not do much to improve the ISO black > white transition, since that
already received the maximum voltage anyway, but transitions from grey > grey are significantly
faster. The improvements in grey transitions are valuable in producing a faster panel overall, as
these have always been the slowest colour changes in TFT panels, and it is important that the
response time is low across the whole range (0 – 255).
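In practice this principle is commonly implemented as a table of exaggerated drive values indexed by the current and target grey levels. The sketch below is a minimal illustration of that idea, not any manufacturer's actual table; the `boost` factor and the clamping are assumptions:

```python
# Minimal sketch of response-time compensation (overdrive).
# Real panels store a calibrated 2-D table of drive values indexed
# by (current grey, target grey); the boost value here is invented.

def overdrive_level(current, target, boost=0.5, levels=255):
    """Return the grey level actually driven for one frame.

    The requested step (target - current) is exaggerated by `boost`
    so the crystals start moving faster, then clamped to the panel
    range. Grey > grey steps get a real overshoot; black > white
    already uses the maximum drive, matching the text above.
    """
    step = target - current
    driven = target + boost * step      # overshoot past the target
    return max(0, min(levels, round(driven)))

print(overdrive_level(0, 255))    # full black > white: clamped at 255
print(overdrive_level(96, 160))   # grey > grey: driven past 160, to 192
```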

Double Overdrive - This was an advancement on the traditional overdrive method, and involves applying
overdrive not only to the rise time, but to the fall time as well. This is supposed to improve response time
and overall quality.

Problems with Overdrive

In doing this over-volting, the response time as a whole is reduced, but it can unfortunately leave some
colour trailing due to the intermediate state that the pixel is forced through. There is a certain risk of video
noise being visible on colour masses. Why? When the image is fixed, there is no problem - the pixels
don't change regardless of their values. That's the advantage of LCD. But imagine subtle colour shading.
When a tracking shot in a movie moves through those subtle colors, the pixels have to change from one
value to another, but the colors are really very close. Unfortunately, Overdrive temporarily causes a
much greater variation in the value of the pixel, and since all the pixels don't react in the same way -
certain ones being faster than others - the result is that the viewer sees accentuated video noise. There
may also be some problems with Overdrive being used on TN panels which use dithering. Dithering is
normally invisible to the naked eye if the viewer is far enough away, but Overdrive can amplify the
visual nuisance stemming from the strong brightness escaping from the panel during the Overdrive
period. In practice, accentuated noise and "overdrive trailing" can be a symptom of poorly controlled
overdrive methods and can vary from one model to another.


One other thing to note for Overdrive (RTC) enabled monitors is that running a TFT outside of its
recommended refresh rate (i.e. not at 60Hz) can lead to a deterioration in the performance of this
technology, and the panel responsiveness is adversely affected.

“Response Time”, How Do We Measure It Now?

Unfortunately, manufacturers now have panels which are, on the one hand, clearly faster across grey transitions
than previous technologies, but which, on the other, have not improved on the black > white
change that is the ISO norm for measuring “response time”. They have instead started to list their
panels with a response time quoted as G2G to show that they have made improvements. If a TFT is
listed with a G2G response time, then you can be pretty sure the panel is using some form of ‘Overdrive’.
Remember though, the response time, even if it is quoted as G2G, is still only the fastest recorded
response time for the panel, and some transitions will still be slower.
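The point about quoted figures can be made concrete with a toy transition matrix. The measurements below are invented; they only illustrate why the advertised G2G number (the fastest transition) differs from typical behaviour:

```python
# The quoted "G2G" figure is just the fastest cell in a matrix of
# measured transition times. Hypothetical measurements (ms) between
# a few grey levels show why the average can be much slower.

measured_ms = {
    (0, 255): 12.0, (255, 0): 10.0,   # ISO black <-> white
    (64, 192): 4.0, (192, 64): 6.0,   # fast grey <-> grey pair
    (64, 128): 9.0, (128, 64): 8.0,   # slower grey <-> grey pair
}

quoted_g2g = min(measured_ms.values())
average = sum(measured_ms.values()) / len(measured_ms)
print(quoted_g2g)   # 4.0 -> the number that ends up on the spec sheet
print(average)      # about 8.2 -> closer to what you see in practice
```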

Overdrive has allowed several panel manufacturers to improve the response times of their products
across grey transitions and there are now some panels available with an as low as 2ms quoted G2G
response time (e.g. Viewsonic VX922) and 1ms G2G (Hyundai S90D). More significantly the use of
overdrive has really improved practical responsiveness in the other panel technologies allowing P-MVA,
PVA and S-IPS equipped models to really offer performance to meet growing gaming needs. Typically
there have been several 'generations' of overdriven panels including (all G2G figures):

• TN Film - 4ms / 3ms, and now 2ms
• P-MVA - 8ms generation
• PVA / S-PVA - 8ms generation initially, but quickly changed to 6ms generation
• S-IPS - 8ms and 6ms, with 5ms now becoming more common.

Further Reading: An in depth look at Overdrive can be found here at X-bitlabs, including reviews of
many of Samsung and Viewsonic's first offering with this technology. An article at BeHardware about
Overdrive can be found here. There is also some information about the technology here at Tom’s
Hardware France.

ClearMotiv

Viewsonic call their overdrive based enhancement suite ‘ClearMotiv’. Bear in mind that they don’t
manufacture any panels of their own, but claim that the panels they have used have improved response
time thanks to several technological changes which they have made to the electronics and hardware of
the monitors. The various technologies listed below may be used individually or in combination; it can vary
from one screen to another. The technologies available include:

1. Lower viscosity of the liquid crystals
2. Reducing the gap between cells by 30%, reported to improve response time by 50%
3. Impulse Driving Method - applying too much voltage at the start, but then reducing it to the
correct level, to kick start the crystals
4. Advanced Overdrive - they claim this also improves black > white and not just grey changes, but
this is debatable.


5. Backlight shuttering - blinking the backlight off briefly during the liquid crystal cell transition.
Used only in LCD TV's at this stage. Designed to reduce perceived motion blur caused by the
human eye.
6. Black Frame Insertion - similar to backlight shuttering, but involves inserting a black frame to
hide the liquid crystal cell transition. Designed to reduce perceived motion blur caused by the
human eye.
7. Amplified Impulse Technology – This was originally listed in Viewsonic's documentation as a
feature in the electronics of the TFT which dynamically controls the amount of Overdrive being
used by the panel. Looking at their current whitepapers suggests it is more closely linked to their Impulse
Driving Method as listed above.
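Black Frame Insertion (item 6 above) is easy to sketch as a frame-sequence transform. This is an illustrative model only, with invented frame values; real implementations time the black period against the panel's scan:

```python
# Illustrative sketch of Black Frame Insertion: hiding the liquid
# crystal transition by blanking between frames. Frame values are
# hypothetical grey levels.

def insert_black_frames(frames, black=0):
    """Interleave a black frame after every real frame (BFI)."""
    out = []
    for f in frames:
        out.extend([f, black])
    return out

print(insert_black_frames([200, 120, 80]))
# [200, 0, 120, 0, 80, 0] -> the eye integrates less of each
# transition, reducing perceived motion blur at a cost in brightness
```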

MagicSpeed / Response Time Acceleration (RTA)

Samsung’s own version of RTC / Overdrive technology. They always like to have their very own version
of technologies, and to be fair, they are one of the main panel manufacturers in the TFT market. There is
very little information available about the technology apart from that it is designed to boost grey
transition response times. At the end of the day, this is very similar to Overdrive, and as far as I know,
works on the same principle. Some models feature an option available through the OSD to disable RTA,
and this can show some noticeable differences in practice between active and inactive states.

Advanced Motion Accelerator (AMA)

BenQ's name for overdriven panels. Where the models also feature Black Frame Insertion (see below),
they are referred to as AMA-Z.


Over Driving Circuit (ODC)

LG.Display calls their overdrive technology ODC and has used it to boost response times on both their
TN Film and S-IPS panel technologies. (Link: LG.Philips page)

Fast Response LC + Special Driving

This is the name Chi Mei Optoelectronics give to their overdrive technology. It is again designed to
"reduce residual image tail", and CMO state that it will reduce or even eliminate motion blur.


Rapid Response / Rapid Motion

NEC's own label for overdriven based displays offering improved grey to grey transitions.


Case Study – AU Optronics (M190EN03 V0) 8ms P-MVA panel


Dell 1905FP vs. Viewsonic VP191B-2

While fundamentally the Dell and the Viewsonic are based on the same AU Optronics panel, the
electronics applied by the two manufacturers to utilize the panel are different. Performance of the two
monitors will therefore be a little different, but don’t forget that there will be many similarities because
of the mutual use of the AUO panel. Viewsonic have implemented their ClearMotiv technology into the
VP191B which offers not only the Overdrive which AUO have applied to the panel, but adds most
importantly the AIT (Amplified Impulse Technology). This dynamically controls the amount of
Overdrive used and is said to help reduce blurring of the image even more.

This is apparent from user observations of the two monitors. The Dell, using no extra features, just the
overdriven panel from AUO can show some slight trailing of colors in fast paced gaming. This is because
of the intervening state which the liquid crystals are forced to enter as part of the overdrive technology
(see above). This isn’t major, but the AIT used by Viewsonic has helped to reduce this a little. So
although the panels are the same, the electronics and hardware behind the panel can vary.

AU Optronics Simulated Pulse Driving Technology (ASPD)

AUO's 'Simulated Pulsed Driving' (ASPD) technology is designed to solve the issue of motion blur in
liquid crystal displays. It simulates impulse-type displays by adjusting pixel driving and using a scanning
backlight, to reach CRT-like image quality in motion picture response time. The technology can greatly
reduce motion blur, and enables image performance to reach optimal levels at a 4ms equivalent grey to
grey (8ms MPRT). It is also known as one of the few such technologies ready for mass production, and
can be applied to both WXGA (1366x768) and Full HD (1920x1080) resolutions.

Sony X-Black Technology


Sony's X-Black / X-Brite technology was developed first of all for laptop panels, which has meant that
once they started to incorporate it into desktop displays, they could make the casing and bezels very
small and stylish. They've incorporated dual fluorescent lamps to light the displays and to help achieve
improved brightness over regular LCD panels. This has helped provide some impressive contrast ratios
too (including 1000:1), and the added brightness is being marketed as improving movie playback.

Sony have also researched a technique they've named "reflection reduction technology", in which several
layers of coating are applied instead of using traditional Anti Reflective coating (which gives you the
matte finish and can lead to some loss in colour, noticeably black, depth). The thickness of each of these
new layers Sony use is precisely calculated at one-quarter of the wavelength of light – so very thin! The
effect is to cancel out reflections before they get to the front of the display. They've improved the colour
reproduction (or so the marketing would certainly have you believe) by ditching the old AR style coating,
and the improved brightness and contrast have helped improve colour depth. The removal of the AR
coating from the panels has also helped them improve image sharpness according to their marketing.

Sony also claim to have improved the viewing angles of their displays by adding a special film coating
filter to the front of the panel, which helps reduce the restrictions on viewing angles caused by the
inability of the liquid crystals to respond uniformly. This is perhaps the biggest problem with TN film
panels today: while colour reproduction has improved significantly, as has the response rate of pixels,
viewing angles have deteriorated. Panels like the 8ms Samsung TN film panel (in the Hyundai L90D+ etc)
are a good example of this trend. With these new improvements by Sony to increase viewing angles, X-
Black certainly sounds promising on paper.

Sony also claim to have improved the graphics processor used by the panel, which addresses commands from
the graphics card and converts them into commands to the liquid crystals. They claim the hardware and
software improvements they have produced for the graphics processor have allowed resizing of images to
be improved, as well as colorimetric processing advances.

There are a lot of varying opinions on the X-Black technology and its reflective nature. Some people say
it is fine, but a fair few say that it is too reflective. I would certainly be wary of it, and definitely try and
see an X-Black screen or laptop first to see if you think you would be ok with it. This has really been the
main gripe with the X-Black technology panels, but be wary of the marketing side of their displays as
well. While there are many claimed improvements to models using this technology, the advancements
may not be as fantastic as they would have you believe.

Acer CrystalBrite


Acer's reflective glossy screen coating is referred to as CrystalBrite and appears on some of the desktop
monitors as well as their laptops. The technology offers an ultra-fine, highly polished coating which
reportedly allows superior filtering of light and quicker image building. It is marketed as reducing
reflection from internal and external light sources, and improves colors and image quality. This includes
more vibrant and brighter images via backlight diffusion reduction, as well as superior contrast with
minimal ambient light scattering.

Acer CrystalBrite Whitepaper

BenQ Senseye


The marketing for Senseye says: “A pure digital image enhancement technology that automatically and
dynamically improves image quality. And a simple promise of higher definition visuals that are deeper,
richer and clearer. Experience Senseye technology today – and come one step closer to the true power of
the human eye.”

The idea behind the technology is to make the colors richer, and more vivid; and the image quality
sharper and clearer. The original image signal is processed through three engines:

• Contrast Enhancement Engine (CEE) – supposedly improves the contrast ratio making the
bright areas brighter, and the darker areas darker
• Colour Management Engine (CME) – adjusts red, blue and green colour depths and
supposedly improves skin colour tones
• Sharpness Enhancement Engine (SEE) – sharpens outlines and helps avoid blurring of edges


In reality the Senseye products merely offer a series of presets which the user can select like “photo,
movies, user” etc as well as a sensor chip designed to automatically alter the presets when required. The
colors and brightness / contrast are set for each selection, with the “user” option allowing you to change
them all manually.

The official information about the technology can be found here: http://www.benqsenseye.com/

NEC AmbiBright

Similar to BenQ's Senseye technology, this feature automatically adjusts the backlight depending on the
brightness of ambient lighting conditions. For example, if the sensor detects the ambient lighting
becoming darker, it reduces the backlight appropriately, which helps provide optimal readability and
reduce eyestrain. Further, if desired, you can set the display to automatically enter a power-saving mode
when the ambient lighting falls below a predetermined value (i.e. when office lights are shut off at the
end of the day), which can significantly reduce energy expenses. When you consider the number of
monitors used on trading floors and other display-heavy environments, this brightness function can
contribute significantly to a lower total cost of ownership.
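As a hedged sketch of this behaviour, the function below maps ambient light to a backlight level with a power-saving cutoff. All thresholds and the 20% readability floor are invented for illustration; they are not NEC's actual values:

```python
# Sketch of the AmbiBright behaviour described above: the backlight
# tracks ambient light, and the display enters a power-saving mode
# below a preset threshold. All numbers are invented.

def backlight_for(ambient_lux, power_save_below=5, max_lux=500):
    """Return backlight percentage, or 0 when power-saving kicks in."""
    if ambient_lux < power_save_below:
        return 0                        # office lights off: sleep
    level = min(ambient_lux, max_lux) / max_lux
    return round(20 + 80 * level)       # 20% floor for readability

print(backlight_for(2))     # 0   -> power-saving mode
print(backlight_for(250))   # 60  -> dimmer room, dimmer backlight
print(backlight_for(800))   # 100 -> bright room, full backlight
```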

Samsung “Magic” Enhancements:


• MagicTune - Image quality can be perfected quickly, accurately and easily using this software.
Stored on the desktop it enables fine image adjustments, and colour calibration functionality not
available using traditional menu systems. Perfect for photographers, designers and motion
graphic artists, MagicTune provides user-friendly on-screen image control. This is effectively a
small resource friendly application to adjust user settings. Power Strip is also an equivalent
piece of software to achieve this. The MagicTune software and further information can be found
on Samsung’s site here.
• MagicColour - This intelligent colour enhancement system enhances selective colors, such as
skin tones, making it ideal for multimedia applications, surfing the web, watching DVDs or
manipulating images from a digital camera. It is said to enhance skin tone colour, and make
other colors more vivid. It is essentially part of the screen’s presets, which alters the input signal
depending on the use.
• MagicContrast - Ensures that the SyncMaster range of monitors deliver the very highest quality
image. As a result, the SyncMaster range boasts a market leading contrast ratio of 1000:1. This
is just a marketing term really, not a technology as such. The Samsung screens which offer high
contrast ratios are labeled with this term and should offer deep blacks and bright whites
• MagicBright – Provides a choice of five brightness settings designed to optimize different
content. The brightness of the monitor can now be simply adjusted to Game, Movie, Sports, and
Internet or Text modes. So, whether you're working, relaxing or surfing the web, the brightness
level will be adjusted accordingly to make it a much more enjoyable experience. This is a series
of monitor presets similar to BenQ Senseye
• MagicRotate – Software which will automatically switch the screens alignment when the
monitor is rotated between landscape and portrait modes. More info and downloads available
here
• MagicSpeed - see above
• MagicStand - uses a unique dual hinge to ensure the screen is perfectly positioned to provide
you with a comfortable viewing position. Now the screen can be moved vertically, swiveled and
tilted to suit your own preferences
• MagicNet – This software is the ultimate way to stream content to multiple screens across a
LAN, a single computer with MagicNet software can be used to control and deliver unique
content to multiple displays


Acer eColor Management

This is Acer's name for their selection of monitor preset modes for variations in brightness, contrast and
colors. These options are available on selected models via the 'Empowering Key' which gives the user
access to the Acer eColor Management OSD interface. According to the whitepaper, eColor
management enables control of the following parameters, depending on the preset chosen:
• Colour tracking technology - an advanced colour temperature adjustment, stabilizing screen
output
• YUV colour space conversion - from RGB, allowing luminance and chromaticity to be altered
independently
• Uniform-brightness - boosts the output of the display so that dark areas remain visible,
preventing colour wash-out even under bright ambient light or from a distance
• Fine contrast - allows intensity of bright or colored areas to be increased without causing wash-out of dark areas
• Adaptive gamma - allows effective brightness and contrast levels of the monitor to be adjusted
scene by scene, depending on the content. Similar to dynamic contrast control
• Optimized sharpness
• Independent hue
• Ultra-saturation
• Adaptive colour


Preset modes available in this suite include standard, text, graphics, movie and user. Ultimately, these
remain the standard preset modes you would see from a lot of modern screens, and may or may not be of
much practical use, depending on the individual.

Acer eColour Management Whitepaper

LG f-Engine

LG's f-Engine form part of their monitor range OSD and offers a series of preset modes for adapting
colour and brightness to meet varying needs of the user. This gives access to settings for brightness,


ACE (Adaptive Color and Contrast Enhancement) and RCM (Real Color Management). RCM provides
the following settings: 0 = RCM disabled, 1 = enhancement of green, 2 = enhancement of skin tones, 3 =
overall color enhancement. One can quickly see how each of these settings affects the image, since a
split screen is shown: the regular color picture appears on the right side, while the left side lets you
preview the effect of the f-Engine settings on the displayed picture.

ColorComp

This uniformity compensation and correction system aims to reduce any screen uniformity errors to
almost unnoticeable levels. ColorComp works by applying a digital correction to each pixel on the screen
to compensate for differences in colour and luminance. Each display is individually characterized during
production using a fully automated system which measures hundreds of points across the screen at
different grey levels. These measurements are used to build a three-dimensional correction matrix for the
display screen which is then stored inside the display. This data is used to compensate for the screen
uniformity, not only as a function of position on the display screen, but also as a function of grey level. If
desired, the ColorComp correction can be turned off in order to maximize the screen brightness.
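The described correction can be sketched as a lookup into a coarse per-zone, per-grey-level gain matrix. The matrix values and the nearest-band lookup below are invented for illustration; the actual ColorComp data is far denser and interpolated:

```python
# Sketch of a ColorComp-style uniformity correction: a coarse 3-D
# matrix of per-zone gain factors, indexed by screen position and
# grey level, measured at the factory. All values are invented.

# correction[grey_band][row][col] = multiplicative luminance gain
correction = {
    64:  [[1.04, 1.00], [1.02, 0.98]],   # dark grey band
    192: [[1.02, 1.00], [1.01, 0.99]],   # light grey band
}

def corrected(value, row, col, levels=255):
    """Apply the gain from the nearest stored grey band and zone."""
    band = min(correction, key=lambda g: abs(g - value))
    gain = correction[band][row][col]
    return max(0, min(levels, round(value * gain)))

print(corrected(64, 0, 0))    # dim corner zone boosted: 64 -> 67
print(corrected(192, 1, 1))   # bright zone pulled down: 192 -> 190
```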

Dynamic Contrast

Several manufacturers have introduced dynamic contrast controls to their monitors which are designed
to improve black and white levels and contrast of the display on the fly, in certain conditions. It is
supposed to help colors look more vivid and bright, text look sharper and enhance the extreme ends of
the colour scale, making blacks deeper and whites brighter. This is achieved by adjusting the brightness
of the backlighting rather than any adjustments at the matrix / panel level. The backlighting can be made
less intensive in dark scenes, to make them even darker, and more intensive, up to the maximum, in
bright scenes, to make them even brighter.

The official numbers for dynamic contrast are arrived at in the following manner: the level of white is
measured at the maximum of backlight brightness and the level of black is measured at its minimum. So
if the matrix has a specified contrast ratio of 1000:1 and the monitor’s electronics can automatically
change the intensity of backlight brightness by 300%, the resulting dynamic contrast is 3000:1. Of
course, the screen contrast – the ratio of white to black – is never higher than the monitor’s static
specified contrast ratio at any given moment, but the level of black is not important for the eye in bright
scenes and vice versa. That’s why the automatic brightness adjustment in movies is indeed helpful and
creates an impression of a monitor with a greatly enhanced dynamic range.
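That arithmetic can be written out directly; the sketch below simply restates the example from the text (a 1000:1 panel whose backlight brightness can be varied by 300%):

```python
# The dynamic-contrast arithmetic from the text: white is measured at
# maximum backlight brightness and black at minimum, so the advertised
# figure is the static panel ratio multiplied by the backlight swing.

def dynamic_contrast(static_ratio, backlight_swing):
    """Advertised ratio = static panel ratio x backlight swing factor."""
    return static_ratio * backlight_swing

print(dynamic_contrast(1000, 3))   # 3000 -> the number on the box
# At any single instant the on-screen contrast never exceeds 1000:1,
# because backlight changes scale black and white levels together.
```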

The downside is that the brightness of the whole screen is changed at once. In scenes that contain both
light and dark objects in equal measure, the monitor will just select some average brightness. Dynamic
contrast doesn’t work well on dark scenes with a few small, but very bright objects (like a night street
with lamp-posts) – the background is dark, and the monitor will lower brightness to a minimum,
dimming the bright objects as a consequence. Ideally this kind of enhancement shouldn't be used in
office work since it can prove distracting or problematic for colour work. However, movies and
sometimes gaming can offer some impressive improvements thanks to such technologies.

As ever, different manufacturers have their own versions of these technologies including those discussed
below.

Digital Fine Contrast (DFC)

On its initial release, LG.Philips DFC technology was marketed as being able to improve the contrast
ratio from a typical level of 700:1 to a massive 1600:1! It is supposed to help colors look more vivid and
bright, text look sharper and enhance the extreme ends of the colour scale, making blacks deeper and
whites brighter. This is a great benefit to gamers who have issues seeing enemies lurking in the shadows
and for photo / cinema users who want to improve colour quality. This technology is called the Digital
Fine Contrast engine (DFC) and consists of 3 elements:

o Auto Contents Recognition (ACR) - detects the type of content being viewed and decides how
to use the contrast adjustment engine to make the most of it. This is dependent on the mode
selection in the monitor's OSD, choosing between settings like 'Movie', 'Text', 'Games' etc. For
example, in 'Movie' mode, the DFC is enhanced for a maximum brightness and in 'Picture'
mode colors are deepened.
o Digital Contrast Enhancer (DCE) - This reduces black luminance.
o Digital Contrast Mapper (DCM) - Displays the image while ensuring that the enhanced
contrast is optimized.

The DFC is based on an automatic contrast booster controlled by a Look-Up Table (LUT), which is
reported to alter the gamma of the pixels, darkening dark areas and increasing the brightness of the brighter
areas. The CCFL backlight tubes have also been replaced by a new generation which is capable of a
wider gamut.
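A gamma-altering LUT of the kind described can be sketched as an S-curve table. The curve shape and `strength` parameter below are assumptions for illustration, not LG.Philips' actual DFC mapping:

```python
# Sketch of a gamma-style contrast LUT: an S-curve that darkens dark
# pixels and brightens bright ones. The curve is illustrative only.

def build_contrast_lut(strength=0.5, levels=256):
    """Build a 256-entry S-curve lookup table.

    strength=0 gives an identity LUT; larger values push values
    below mid-grey down and values above it up.
    """
    lut = []
    for v in range(levels):
        x = v / (levels - 1)                  # normalise to 0..1
        s = 3 * x * x - 2 * x * x * x         # smoothstep S-curve
        y = (1 - strength) * x + strength * s # blend with identity
        lut.append(round(y * (levels - 1)))
    return lut

lut = build_contrast_lut()
print(lut[0], lut[128], lut[255])    # extremes fixed, mid-grey near 128
print(lut[32] < 32, lut[224] > 224)  # darks pushed down, brights up
```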

Advanced DVM

NEC features their dynamic contrast on some of their models including the NEC LCD20WGX2.
Ultimately this technology runs under the same principle as DFC, but under a different name.

APE (AUO Picture Enhancer) Technology


AUO Picture Enhancer (APE) Technology integrates input image data management with a dynamic
backlight control solution. The intrinsic image processing circuit can dynamically adjust the
contrast, sharpness, hue, color temperature, and color saturation to accommodate the particular image.
Non-linear image processing can accommodate changes in the dynamics of human perception, and is ideally
used to overcome an existing problem with LCD TVs where motion pictures tend to lose their accuracy
during darker states. This technology provides a vivid and sharp image, retrieves natural colors, and
enhances color saturation, details in gray levels, and contrast ratio. With AUO’s Image Processing
Technology, customers can better enjoy details of dark and night scenes in movies.

Features:
• Sharpness Enhancement: Increases the high-frequency signal to highlight detail and provide a
sharp picture.
• Color Saturation: Enlarges the gamut of the input video to make full use of the panel and
achieve a stronger visual impact.
• Hue Refinement: By separating the color space into several independent areas, each color can
be modified separately without disturbing related colors.
• Dynamic Backlight Dimming: This approach uses backlight modulation to relieve light leakage,
providing a contrast ratio of up to 3000:1. The latest High Dynamic Contrast with LED uses a
locally adjustable LED backlight to raise the contrast ratio to as much as 10,000:1. Overall
image quality is improved while saving 50% of power consumption on average.
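The contrast figures quoted above follow from simple arithmetic: panel leakage in the black state scales with backlight power, so dimming the backlight for dark scenes divides the black level, and hence multiplies the scene-to-scene contrast, by the dimming factor. A rough sketch (the luminance numbers are illustrative assumptions, not AUO measurements):

```python
def dynamic_contrast(white_nits, native_ratio, dim_factor):
    """Backlight-modulated contrast: white is measured with the
    backlight at full power, black with it dimmed for dark scenes."""
    black_full = white_nits / native_ratio   # panel leakage at full backlight
    black_dimmed = black_full * dim_factor   # leakage scales with backlight power
    return white_nits / black_dimmed

# e.g. a 300 cd/m2 panel with a 1000:1 native ratio whose backlight
# can dim to 10% for dark scenes yields a 10,000:1 dynamic figure
print(dynamic_contrast(300.0, 1000.0, 0.1))
```

Note this is why "dynamic" contrast numbers dwarf native ones: white and black are never shown at those levels simultaneously within a single frame.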

Acer Adaptive Contrast Management (ACM)

Acer has its own name for the dynamic contrast control described above, marketing it as offering
improved detail in both dark and light scenes, as well as helping to reduce power consumption.

Acer ACM Whitepaper

Black Frame Insertion (BFI)

First unveiled at CeBIT 2006, this technique inserts a black frame between images; BenQ / AU
Optronics claim this helps "clean" the human eye of the perceived afterglow caused by retention of
images in the brain. They have named this technology BFI (Black Frame Insertion). BenQ have a close
affiliation with AU Optronics, so BFI will be used in some of their range. BenQ sometimes use slightly
different terminology which you need to be aware of: they refer to their overdriven panels as having
'Advanced Motion Acceleration' (AMA), but those also featuring Black Frame Insertion may be referred
to as AMA-Z. For example, the BenQ FP241W comes in two versions: the FP241W without BFI and the
FP241WZ with BFI.

BenQ comment that even a 0 ms TFT would result in perceived afterglow, because the human eye mixes
successive images and introduces blur. This perceived motion blur is in large part due to the human visual
system, and is something manufacturers are trying to overcome on their hold-type displays. This is the
reason for looking at technologies other than overdrive to help reduce blurring on these screens. Other
manufacturers such as Samsung are exploring technologies including backlight scanning, but AU
Optronics / BenQ are favoring BFI instead.

There are some misconceptions about the technology, and I think it is important to realize that this does
NOT mean the screen will be running at 120 Hz, or showing 120 fps. In reality, the screen will still
function at 60 Hz / 60 fps, but some of those frames will be replaced with black frames. The technology
will (at least initially) offer three settings for the timing of the black frame insertion, allowing the user to
find a level they find comfortable. There is also an "off" option if required.
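The 60 Hz point above can be made concrete with a toy frame scheduler. This is purely illustrative (the function and the "every Nth refresh" policy are my own assumptions; BenQ has not published the exact insertion pattern): the refresh rate stays at 60 Hz, and only the content of some refreshes becomes black.

```python
def bfi_sequence(frames, insert_every):
    """Replace every `insert_every`-th refresh of a 60 Hz stream with
    a black frame; the refresh rate itself is unchanged."""
    out = []
    for i, frame in enumerate(frames):
        if insert_every and (i + 1) % insert_every == 0:
            out.append("BLACK")      # this refresh shows black
        else:
            out.append(frame)        # this refresh shows the source frame
    return out

one_second = [f"img{i}" for i in range(60)]   # one second of 60 fps source
seq = bfi_sequence(one_second, 4)             # one of several possible settings
print(len(seq), seq.count("BLACK"))           # still 60 refreshes, some now black
```

Varying `insert_every` corresponds loosely to the user-selectable timing settings mentioned above: more frequent black refreshes mean less hold-type blur but a dimmer image.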

Organic light-emitting diode

Demonstration of a flexible OLED device

A green emitting OLED device

Sony XEL-1, the world's first OLED TV.[1]

An organic light emitting diode (OLED) is a light-emitting diode (LED) in which the emissive
electroluminescent layer is a film of organic compounds which emit light in response to an electric current.
This layer of organic semiconductor material is situated between two electrodes. Generally, at least one of
these electrodes is transparent.

OLEDs are used in television screens; computer monitors; small, portable system screens such as mobile
phones and PDAs; watches; advertising; and information and indicator displays. OLEDs are also used as
light sources for general space illumination and in large-area light-emitting elements. Due to their
comparatively early stage of development, they typically emit less light per unit area than inorganic
solid-state LED point-light sources.

An OLED display functions without a backlight. Thus, it can display deep black levels and can also be
thinner and lighter than established liquid crystal displays. Similarly, in low ambient light conditions such
as dark rooms, an OLED screen can achieve a higher contrast ratio than an LCD screen using either cold
cathode fluorescent lamps or the more recently developed LED backlight.

There are two main families of OLEDs: those based upon small molecules and those employing polymers.
Adding mobile ions to an OLED creates a Light-emitting Electrochemical Cell or LEC, which has a
slightly different mode of operation.

OLED displays can use either passive-matrix (PMOLED) or active-matrix addressing schemes. Active-
matrix OLEDs (AMOLED) require a thin-film transistor backplane to switch each individual pixel on or
off, and can make higher resolution and larger size displays possible.

History

The first observations of electroluminescence in organic materials were in the early 1950s by A. Bernanose
and co-workers at the Nancy-Université, France. They applied high-voltage alternating current (AC) fields
in air to materials such as acridine orange, either deposited on or dissolved in cellulose or cellophane thin
films. The proposed mechanism was either direct excitation of the dye molecules or excitation of
electrons.[2][3][4][5]

In 1960, Martin Pope and co-workers at New York University developed ohmic dark-injecting electrode
contacts to organic crystals.[6][7][8] They further described the necessary energetic requirements (work
functions) for hole and electron injecting electrode contacts. These contacts are the basis of charge injection
in all modern OLED devices. Pope's group also first observed direct current (DC) electroluminescence
under vacuum on a pure single crystal of anthracene and on anthracene crystals doped with tetracene in
1963[9] using a small area silver electrode at 400V. The proposed mechanism was field-accelerated electron
excitation of molecular fluorescence.

Pope's group reported in 1965[10] that in the absence of an external electric field, the electroluminescence in
anthracene crystals is caused by the recombination of a thermalized electron and hole, and that the
conducting level of anthracene is higher in energy than the exciton energy level. Also in 1965, W. Helfrich
and W. G. Schneider of the National Research Council in Canada produced double injection recombination
electroluminescence for the first time in an anthracene single crystal using hole and electron injecting
electrodes,[11] the forerunner of modern double injection devices. In the same year, Dow Chemical
researchers patented a method of preparing electroluminescent cells using high voltage (500–1500 V) AC-
driven (100–3000 Hz) electrically-insulated one millimetre thin layers of a melted phosphor consisting of
ground anthracene powder, tetracene, and graphite powder.[12] Their proposed mechanism involved
electronic excitation at the contacts between the graphite particles and the anthracene molecules.

Device performance was limited by the previously poor electrical conductivity of organic materials.
However this was overcome with the discovery and development of highly conductive polymers.[13] For
more on the history of such materials, see conductive polymers.

Electroluminescence from polymer films was first observed by Roger Partridge at the National Physical
Laboratory in the United Kingdom. The device consisted of a film of poly(n-vinylcarbazole) up to 2.2
micrometres thick located between two charge injecting electrodes. The results of the project were patented
in 1975[14] and published in 1983.[15][16][17][18]

The first diode device was reported at Eastman Kodak by Ching W. Tang and Steven Van Slyke in 1987.[19]
This device used a novel two-layer structure with separate hole transporting and electron transporting
layers such that recombination and light emission occurred in the middle of the organic layer. This resulted
in a reduction in operating voltage and improvements in efficiency and led to the current era of OLED
research and device production.

Research into polymer electroluminescence culminated in 1990 with J. H. Burroughes et al. at the
Cavendish Laboratory in Cambridge reporting a high efficiency green light-emitting polymer based device
using 100 nm thick films of poly(p-phenylene vinylene).[20]

Working principle

Schematic of a bilayer OLED: 1. Cathode (−), 2. Emissive Layer, 3. Emission of radiation, 4. Conductive
Layer, 5. Anode (+)

A typical OLED is composed of a layer of organic materials situated between two electrodes, the anode and
cathode, all deposited on a substrate. The organic molecules are electrically conductive as a result of
delocalization of pi electrons caused by conjugation over all or part of the molecule. These materials have
conductivity levels ranging from insulators to conductors, and therefore are considered organic
semiconductors. The highest occupied and lowest unoccupied molecular orbitals (HOMO and LUMO) of
organic semiconductors are analogous to the valence and conduction bands of inorganic semiconductors.

Originally, the most basic polymer OLEDs consisted of a single organic layer. One example was the first
light-emitting device synthesised by J. H. Burroughes et al., which involved a single layer of poly(p-
phenylene vinylene). However multilayer OLEDs can be fabricated with two or more layers in order to
improve device efficiency. As well as conductive properties, different materials may be chosen to aid
charge injection at electrodes by providing a more gradual electronic profile,[21] or block a charge from
reaching the opposite electrode and being wasted.[22] Many modern OLEDs incorporate a simple bilayer
structure, consisting of a conductive layer and an emissive layer.

During operation, a voltage is applied across the OLED such that the anode is positive with respect to the
cathode. A current of electrons flows through the device from cathode to anode, as electrons are injected
into the LUMO of the organic layer at the cathode and withdrawn from the HOMO at the anode. This latter
process may also be described as the injection of electron holes into the HOMO. Electrostatic forces bring
the electrons and the holes towards each other and they recombine forming an exciton, a bound state of the
electron and hole. This happens closer to the emissive layer, because in organic semiconductors holes are
generally more mobile than electrons. The decay of this excited state results in a relaxation of the energy
levels of the electron, accompanied by emission of radiation whose frequency is in the visible region. The
frequency of this radiation depends on the band gap of the material, in this case the difference in energy
between the HOMO and LUMO.
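The link between the HOMO-LUMO gap and the emitted colour is just E = hc/λ. A quick worked check (the constant and sample gap values are standard physics, not figures from this text):

```python
HC_EV_NM = 1239.84  # Planck constant x speed of light, in eV*nm

def emission_wavelength_nm(gap_ev):
    """Peak emission wavelength (nm) for a given HOMO-LUMO gap (eV),
    from E = hc / lambda."""
    return HC_EV_NM / gap_ev

# a ~2.3 eV gap emits green (~540 nm); a ~2.9 eV gap emits blue (~430 nm),
# which is why deep-blue emitters need wide-gap organic materials
print(round(emission_wavelength_nm(2.3)))
print(round(emission_wavelength_nm(2.9)))
```

The same relation, inverted (Eg = hc/λ), is used later in this section to explain why 430 nm blue diodes require a band gap of about 2.9 eV.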

As electrons and holes are fermions with half integer spin, an exciton may either be in a singlet state or a
triplet state depending on how the spins of the electron and hole have been combined. Statistically three
triplet excitons will be formed for each singlet exciton. Decay from triplet states (phosphorescence) is spin
forbidden, increasing the timescale of the transition and limiting the internal efficiency of fluorescent
devices. Phosphorescent organic light-emitting diodes make use of spin–orbit interactions to facilitate
intersystem crossing between singlet and triplet states, thus obtaining emission from both singlet and triplet
states and improving the internal efficiency.
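The spin statistics above set hard ceilings on internal quantum efficiency that can be written down directly: excitons form as one singlet per three triplets, so a purely fluorescent emitter can use at most a quarter of them, while a phosphorescent emitter can in principle harvest all four. A minimal sketch of that bookkeeping:

```python
def max_internal_efficiency(harvests_triplets):
    """Ceiling on internal quantum efficiency from spin statistics:
    excitons form as 1 singlet : 3 triplets per 4 excitons."""
    singlet_fraction = 1 / 4
    return 1.0 if harvests_triplets else singlet_fraction

print(max_internal_efficiency(False))  # fluorescent emitter: singlets only
print(max_internal_efficiency(True))   # phosphorescent emitter: both types
```

Real devices fall below these ceilings due to non-radiative losses and outcoupling, but the 4x gap between the two limits is the core argument for phosphorescent OLEDs.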

Indium tin oxide (ITO) is commonly used as the anode material. It is transparent to visible light and has a
high work function which promotes injection of holes into the HOMO level of the organic layer. A typical
conductive layer may consist of PEDOT:PSS[23] as the HOMO level of this material generally lies between
the workfunction of ITO and the HOMO of other commonly used polymers, reducing the energy barriers
for hole injection. Metals such as barium and calcium are often used for the cathode as they have low work
functions which promote injection of electrons into the LUMO of the organic layer.[24] Such metals are
reactive, so require a capping layer of aluminium to avoid degradation.

Single carrier devices are typically used to study the kinetics and charge transport mechanisms of an
organic material and can be useful when trying to study energy transfer processes. As current through the
device is composed of only one type of charge carrier, either electrons or holes, recombination does not
occur and no light is emitted. For example, electron only devices can be obtained by replacing ITO with a
lower work function metal which increases the energy barrier of hole injection. Similarly, hole only devices
can be made by using a cathode composed solely of aluminium, resulting in an energy barrier too large for
efficient electron injection.[25][26][27]

MATERIAL TECHNOLOGIES

Small molecules

Alq3,[19] commonly used in small molecule OLEDs.

Efficient OLEDs using small molecules were first developed by Dr. Ching W. Tang et al.[19] at Eastman
Kodak. The term OLED traditionally refers specifically to this type of device, though the term SM-OLED
is also in use.

Molecules commonly used in OLEDs include organometallic chelates (for example Alq3, used in the
organic light-emitting device reported by Tang et al.), fluorescent and phosphorescent dyes and conjugated
dendrimers. A number of materials are used for their charge transport properties, for example
triphenylamine and derivatives are commonly used as materials for hole transport layers.[28] Fluorescent
dyes can be chosen to obtain light emission at different wavelengths, and compounds such as perylene,
rubrene and quinacridone derivatives are often used.[29] Alq3 has been used as a green emitter, electron
transport material and as a host for yellow and red emitting dyes.

The production of small molecule devices and displays usually involves thermal evaporation in a vacuum.
This makes the production process more expensive, and of more limited use for large-area devices, than
other processing techniques. However, contrary to polymer-based devices, the vacuum deposition process
enables the formation of well controlled, homogeneous films, and the construction of very complex multi-
layer structures. This high flexibility in layer design, enabling distinct charge transport and charge blocking
layers to be formed, is the main reason for the high efficiencies of the small molecule OLEDs.

Coherent emission from a laser dye-doped tandem SM-OLED device, excited in the pulsed regime, has
been demonstrated.[30] The emission is nearly diffraction limited with a spectral width similar to that of
broadband dye lasers.[31]

Polymer light-emitting diodes

poly(p-phenylene vinylene), used in the first PLED.[20]

Polymer light-emitting diodes (PLED), also light-emitting polymers (LEP), involve an electroluminescent
conductive polymer that emits light when connected to an external voltage. They are used as a thin film for
full-spectrum colour displays. Polymer OLEDs are quite efficient and require a relatively small amount of
power for the amount of light produced.

Vacuum deposition is not a suitable method for forming thin films of polymers. However, polymers can be
processed in solution, and spin coating is a common method of depositing thin polymer films. This method
is more suited to forming large-area films than thermal evaporation. No vacuum is required, and the
emissive materials can also be applied on the substrate by a technique derived from commercial inkjet
printing.[32][33] However, as the application of subsequent layers tends to dissolve those already present,
formation of multilayer structures is difficult with these methods. The metal cathode may still need to be
deposited by thermal evaporation in vacuum.

Typical polymers used in PLED displays include derivatives of poly(p-phenylene vinylene) and
polyfluorene. Substitution of side chains onto the polymer backbone may determine the colour of emitted
light[34] or the stability and solubility of the polymer for performance and ease of processing.[35]

While unsubstituted poly(p-phenylene vinylene) (PPV) is typically insoluble, a number of PPVs and related
poly(naphthalene vinylene)s (PNVs) that are soluble in organic solvents or water have been prepared via
ring opening metathesis polymerization.[36][37][38]

Phosphorescent materials

Ir(mppy)3, a phosphorescent dopant which emits green light.[39]

Phosphorescent organic light-emitting diodes use the principle of electrophosphorescence to convert electrical
energy in an OLED into light in a highly efficient manner,[40][41] with the internal quantum efficiencies of
such devices approaching 100%.[42]

Typically, a polymer such as poly(n-vinylcarbazole) is used as a host material to which an organometallic
complex is added as a dopant. Iridium complexes[41] such as Ir(mppy)3[39] are currently the focus of
research, although complexes based on other heavy metals such as platinum[40] have also been used.

The heavy metal atom at the centre of these complexes exhibits strong spin-orbit coupling, facilitating
intersystem crossing between singlet and triplet states. By using these phosphorescent materials, both
singlet and triplet excitons will be able to decay radiatively, hence improving the internal quantum
efficiency of the device compared to a standard PLED where only the singlet states will contribute to
emission of light.

Applications of OLEDs in solid state lighting require the achievement of high brightness with good CIE
coordinates (for white emission). The use of macromolecular species like polyhedral oligomeric
silsesquioxanes (POSS) in conjunction with the use of phosphorescent species such as Ir for printed OLEDs
have exhibited brightnesses as high as 10,000 cd/m2.[43]

DEVICE ARCHITECTURES

Structure

• Bottom or top emission: Bottom emission devices use a transparent or semi-transparent bottom
electrode to get the light through a transparent substrate. Top emission devices[44][45] use a
transparent or semi-transparent top electrode emitting light directly. Top-emitting OLEDs are
better suited for active-matrix applications as they can be more easily integrated with a non-
transparent transistor backplane.

• Transparent OLEDs use transparent or semi-transparent contacts on both sides of the device to
create displays that can be made to be both top and bottom emitting (transparent). TOLEDs can
greatly improve contrast, making it much easier to view displays in bright sunlight.[46] This
technology can be used in Head-up displays, smart windows or augmented reality applications.
Novaled's[47] OLED panel presented at Finetech Japan 2010 boasts a transparency of 60-70%.

• Stacked OLEDs use a pixel architecture that stacks the red, green, and blue subpixels on top of
one another instead of next to one another, leading to substantial increase in gamut and color
depth, and greatly reducing pixel gap. Currently, other display technologies have the RGB (and
RGBW) pixels mapped next to each other decreasing potential resolution.

• Inverted OLED: In contrast to a conventional OLED, in which the anode is placed on the
substrate, an Inverted OLED uses a bottom cathode that can be connected to the drain end of an n-
channel TFT especially for the low cost amorphous silicon TFT backplane useful in the
manufacturing of AMOLED displays.[48]

Patterning technologies

Patternable organic light-emitting devices use a light or heat activated electroactive layer. A latent material
(PEDOT-TMA) is included in this layer that, upon activation, becomes highly efficient as a hole injection
layer. Using this process, light-emitting devices with arbitrary patterns can be prepared.[49]

Colour patterning can be accomplished by means of laser, such as radiation-induced sublimation transfer
(RIST).[50]

Organic vapour jet printing (OVJP) uses an inert carrier gas, such as argon or nitrogen, to transport
evaporated organic molecules (as in Organic Vapor Phase Deposition). The gas is expelled through a
micron sized nozzle or nozzle array close to the substrate as it is being translated. This allows printing
arbitrary multilayer patterns without the use of solvents.

Conventional OLED displays are formed by vapor thermal evaporation (VTE) and are patterned by
shadow-mask. A mechanical mask has openings allowing the vapor to pass only on the desired location.

Backplane technologies

For a high resolution display like a TV, a TFT backplane is necessary to drive the pixels correctly.
Currently, Low Temperature Polycrystalline silicon LTPS-TFT is used for commercial AMOLED displays.
LTPS-TFT has variation of the performance in a display, so various compensation circuits have been
reported.[44] Due to the size limitation of the excimer laser used for LTPS, the AMOLED size was limited.
To cope with the hurdle related to the panel size, amorphous-silicon/microcrystalline-silicon backplanes
have been reported with large display prototype demonstrations.[51]

Advantages

Demonstration of a 4.1" prototype flexible display from Sony

The different manufacturing process of OLEDs lends itself to several advantages over flat-panel displays
made with LCD technology.

• Future lower cost: Although the method is not currently commercially viable for mass
production, OLEDs can be printed onto any suitable substrate using an inkjet printer or even
screen printing technologies,[52] so they could theoretically cost less than LCDs or plasma
displays. However, it is the fabrication of the substrate that is the most complex and expensive
process in the production of a TFT LCD, so any savings offered by printing the pixels is easily
cancelled out by OLED's requirement to use a more costly LTPS substrate - a fact that is borne out
by the significantly higher initial price of AMOLED displays than their TFT LCD competitors. A
mitigating factor to this price differential going into the future is the cost of retooling existing lines
to produce AMOLED displays over LCDs to take advantage of the economies of scale afforded by
mass production.

• Light weight & flexible plastic substrates: OLED displays can be fabricated on flexible plastic
substrates leading to the possibility of Organic light-emitting diode roll-up display being
fabricated or other new applications such as roll-up displays embedded in fabrics or clothing. As
the substrate used can be a flexible plastic such as PET,[53] the displays may be produced inexpensively.

• Wider viewing angles & improved brightness: OLEDs can enable a greater artificial contrast
ratio (both dynamic range and static, measured in purely dark conditions) and viewing angle
compared to LCDs because OLED pixels directly emit light. OLED pixel colours appear correct
and unshifted, even as the viewing angle approaches 90 degrees from normal.

• Better power efficiency: LCDs filter the light emitted from a backlight, allowing a small fraction
of light through so they cannot show true black, while an inactive OLED element produces no
light and consumes no power.[54]

• Response time: OLEDs can also have a faster response time than standard LCD screens. Whereas
LCD displays are capable of a 1 ms response time or less[55] offering a frame rate of 1,000 Hz or
higher, an OLED can theoretically have less than 0.01 ms response time enabling 100,000 Hz
refresh rates.
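The response-time figures in the last bullet translate into refresh-rate ceilings by simple inversion: if every pixel must settle within one frame period, the maximum refresh rate is the reciprocal of the response time. A quick check of the numbers quoted:

```python
def max_refresh_hz(response_time_ms):
    """Upper bound on refresh rate if each pixel must settle within
    one frame period: f_max = 1 / t_response."""
    return 1000.0 / response_time_ms

print(max_refresh_hz(1.0))    # 1 ms LCD pixel: up to 1,000 Hz
print(max_refresh_hz(0.01))   # 0.01 ms OLED pixel: up to 100,000 Hz
```

In practice refresh rates are limited by the drive electronics and video interface long before these pixel-level ceilings are reached; the point is that OLED pixels are not the bottleneck.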

Disadvantages

LEP display showing partial failure

An old OLED display showing wear

• Lifespan: The biggest technical problem for OLEDs was the limited lifetime of the organic
materials.[56] In particular, blue OLEDs historically have had a lifetime of around 14,000 hours to
half original brightness (five years at 8 hours a day) when used for flat-panel displays. This is
lower than the typical lifetime of LCD, LED or PDP technology—each currently rated for about
60,000 hours to half brightness, depending on manufacturer and model. However, some
manufacturers' displays aim to increase the lifespan of OLED displays, pushing their expected life
past that of LCD displays by improving light outcoupling, thus achieving the same brightness at a
lower drive current.[57][58] In 2007, experimental OLEDs were created which can sustain 400 cd/m2
of luminance for over 198,000 hours for green OLEDs and 62,000 hours for blue OLEDs.[59]

• Color balance issues: Additionally, as the OLED material used to produce blue light degrades
significantly more rapidly than the materials that produce other colors, blue light output will
decrease relative to the other colors of light. This differential color output change will change the
color balance of the display and is much more noticeable than a decrease in overall luminance.[60]
This can be partially avoided by adjusting colour balance but this may require advanced control
circuits and interaction with the user, which is unacceptable for some users. In order to delay the
problem, manufacturers bias the colour balance towards blue so that the display initially has an
artificially blue tint, leading to complaints of artificial-looking, over-saturated colors. More
commonly, though, manufacturers optimize the size of the R, G and B subpixels to reduce the
current density through the subpixel in order to equalize lifetime at full luminance. For example, a
blue subpixel may be 100% larger than the green subpixel. The red subpixel may be 10% smaller
than the green.

• Efficiency of blue OLEDs: Improvements to the efficiency and lifetime of blue OLEDs are vital
to the success of OLEDs as replacements for LCD technology. Considerable research has been
invested in developing blue OLEDs with high external quantum efficiency as well as a deeper blue
color.[61][62] External quantum efficiency values of 20% and 19% have been reported for red
(625 nm) and green (530 nm) diodes, respectively.[63][64] However, blue diodes (430 nm) have only
been able to achieve maximum external quantum efficiencies in the range between 4% to 6%.[65]
This is primarily due to two factors. Firstly, the human eye is less sensitive to the blue wavelength
compared to the green or red, so lower efficiency is expected. Secondly, by calculating the band
gap (Eg = hc/λ), it is clear that the shorter wavelength of the blue OLED results in a larger band
gap at 2.9 eV. This leads to higher barriers, so less efficiency is also expected.

• Water damage: Water can damage the organic materials of the displays. Therefore, improved
sealing processes are important for practical manufacturing. Water damage may especially limit
the longevity of more flexible displays.[66]

• Outdoor performance: As an emissive display technology, OLEDs rely completely upon
converting electricity to light, unlike most LCDs which are to some extent reflective; e-ink leads
the way in efficiency with ~ 33% ambient light reflectivity, enabling the display to be used
without any internal light source. The metallic cathode in an OLED acts as a mirror, with
reflectance approaching 80%, leading to poor readability in bright ambient light such as outdoors.
However, with the proper application of a circular polarizer and anti-reflective coatings, the
diffuse reflectance can be reduced to less than 0.1%. With 10,000 fc incident illumination (typical
test condition for simulating outdoor illumination), that yields an approximate photopic contrast of
5:1.

• Power consumption: While an OLED will consume around 40% of the power of an LCD
displaying an image which is primarily black, for the majority of images it will consume 60–80%
of the power of an LCD - however it can use over three times as much power to display an image
with a white background[67] such as a document or website. This can lead to disappointing real-
world battery life in mobile devices.

• Screen burn-in: Unlike displays with a common light source, the brightness of each OLED pixel
fades depending on the content displayed. The varied lifespan of the organic dyes can cause a
discrepancy between red, green, and blue intensity. This leads to image persistence, also known as
burn-in.[68]
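The ~5:1 figure quoted under "Outdoor performance" above follows from a standard ambient-contrast model: a Lambertian surface under illuminance E with diffuse reflectance R adds a reflected luminance of E·R/π to both the black and white states. A sketch of that arithmetic (the 150 cd/m² white level is my own assumption; the text does not state one):

```python
import math

def ambient_contrast(white_nits, reflectance, illuminance_lux):
    """Photopic contrast of an emissive display under ambient light:
    reflected luminance (Lambertian model, E*R/pi) lifts the black level,
    since an off OLED pixel emits nothing of its own."""
    reflected = illuminance_lux * reflectance / math.pi
    return (white_nits + reflected) / reflected

# 10,000 fc is roughly 107,600 lux; with 0.1% diffuse reflectance and an
# assumed 150 cd/m2 white, this lands near the 5:1 figure quoted above
print(round(ambient_contrast(150.0, 0.001, 107600.0), 1))
```

The model also shows why the uncoated metallic cathode (reflectance approaching 80%) is unreadable outdoors: multiplying the reflectance by several hundred collapses the ratio toward 1:1.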

Manufacturers and Commercial Uses

Magnified image of the AMOLED screen on the Google Nexus One smartphone using the RGBG system
of the PenTile Matrix Family.

A 3.8 cm (1.5 in) OLED display from a Creative ZEN V media player

OLED technology is used in commercial applications such as displays for mobile phones and portable
digital media players, car radios and digital cameras among others. Such portable applications favor the
high light output of OLEDs for readability in sunlight and their low power drain. Portable displays are also
used intermittently, so the lower lifespan of organic displays is less of an issue. Prototypes have been made
of flexible and rollable displays which use OLEDs' unique characteristics. Applications in flexible signs
and lighting are also being developed.[69] Philips Lighting have made OLED lighting samples under the
brand name 'Lumiblade' available online.[70]

OLEDs have been used in most Motorola and Samsung colour cell phones, as well as some HTC, LG and
Sony Ericsson models.[71] Nokia has also recently introduced some OLED products including the N85 and
the N86 8MP, both of which feature an AMOLED display. OLED technology can also be found in digital
media players such as the Creative ZEN V, the iriver clix, the Zune HD and the Sony Walkman X Series.

The Google and HTC Nexus One smartphone includes an AMOLED screen, as does HTC's own Desire and
Legend phones. However due to supply shortages of the Samsung-produced displays, certain HTC models
will use Sony's Super LCD technology in the future.[72]

Other manufacturers of OLED panels include Anwell Technologies Limited,[73] Chi Mei Corporation,[74]
LG,[75] and others.[76]

DuPont stated in a press release in May 2010 that they can produce a 50-inch OLED TV in two minutes
with a new printing technology. If this can be scaled up in terms of manufacturing, then the total cost of
OLED TVs would be greatly reduced. DuPont also states that OLED TVs made with this less expensive
technology can last up to 15 years if left on for a normal eight hour day.[77][78]

Handheld computer manufacturer OQO introduced the smallest Windows netbook computer, including an
OLED display, in 2009.[79]

The use of OLEDs may be subject to patents held by Eastman Kodak, DuPont, General Electric, Royal
Philips Electronics, numerous universities and others.[80] There are by now literally thousands of patents
associated with OLEDs, both from larger corporations and smaller technology companies [1].

Samsung applications

By 2004 Samsung, South Korea's largest conglomerate, was the world's largest OLED manufacturer,
producing 40% of the OLED displays made in the world,[81] and as of 2010 has a 98% share of the global
AMOLED market.[82] The company is leading the world OLED industry, generating $100.2 million out of
the total $475 million revenues in the global OLED market in 2006.[83] As of 2006, it held more than 600
American patents and more than 2800 international patents, making it the largest owner of AMOLED
technology patents.[83]


Samsung SDI announced in 2005 the world's largest OLED TV at the time, at 21 inches (53 cm).[84] This
OLED featured the highest resolution at the time, of 6.22 million pixels. In addition, the company adopted
active matrix based technology for its low power consumption and high-resolution qualities. This was
exceeded in January 2008, when Samsung showcased the world's largest and thinnest OLED TV at the
time, at 31 inches and 4.3 mm.[85]

In May 2008, Samsung unveiled an ultra-thin 12.1 inch laptop OLED display concept, with a 1,280×768
resolution with infinite contrast ratio.[86] According to Woo Jong Lee, Vice President of the Mobile Display
Marketing Team at Samsung SDI, the company expected OLED displays to be used in notebook PCs as
soon as 2010.[87]

In October 2008, Samsung showcased the world's thinnest OLED display, also the first to be 'flappable' and
bendable.[88] It measures just 0.05 mm (thinner than paper), yet a Samsung staff member said that it is
"technically possible to make the panel thinner".[88] To achieve this thickness, Samsung etched an OLED
panel that uses a normal glass substrate. The drive circuit was formed by low-temperature polysilicon
TFTs. Also, low-molecular organic EL materials were employed. The pixel count of the display is 480 ×
272. The contrast ratio is 100,000:1, and the luminance is 200 cd/m². The colour reproduction range is
100% of the NTSC standard.

In the same month, Samsung unveiled what was then the world's largest OLED Television at 40-inch with a
Full HD resolution of 1920×1080 pixel.[89] In the FPD International, Samsung stated that its 40-inch OLED
Panel is the largest size currently possible. The panel has a contrast ratio of 1,000,000:1, a colour gamut of
107% NTSC, and a luminance of 200 cd/m² (peak luminance of 600 cd/m²).

At the Consumer Electronics Show (CES) in January 2010, Samsung demonstrated a laptop computer with
a large, transparent OLED display featuring up to 40% transparency[90] and an animated OLED display in a
photo ID card.[91]

Samsung's latest AMOLED smartphones use their Super AMOLED trademark, with the Samsung Wave
S8500 and Samsung i9000 Galaxy S being launched in June 2010.

Sony applications

Sony XEL-1, the world's first OLED TV.[1] (front)


Sony XEL-1 (side)

The Sony CLIÉ PEG-VZ90 was released in 2004, being the first PDA to feature an OLED screen.[92] Other
Sony products to feature OLED screens include the MZ-RH1 portable minidisc recorder, released in
2006[93] and the Walkman X Series.[94]

At the Las Vegas CES 2007, Sony showcased 11-inch (28 cm, resolution 960×540) and 27-inch (68.5 cm,
full HD resolution at 1920×1080) OLED TV models.[95] Both claimed 1,000,000:1 contrast ratios and total
thicknesses (including bezels) of 5 mm. In April 2007, Sony announced it would manufacture 1000 11-inch
OLED TVs per month for market testing purposes.[96] On October 1, 2007, Sony announced that the 11-
inch model, now called the XEL-1, would be released commercially;[1] the XEL-1 was first released in
Japan in December 2007.[97]

In May 2007, Sony publicly unveiled a video of a 2.5-inch flexible OLED screen which is only 0.3
millimeters thick.[98] At the Display 2008 exhibition, Sony demonstrated a 0.2 mm thick 3.5 inch display
with a resolution of 320×200 pixels and a 0.3 mm thick 11 inch display with 960×540 pixels resolution,
one-tenth the thickness of the XEL-1.[99][100]

In July 2008, a Japanese government body said it would fund a joint project of leading firms, which is to
develop a key technology to produce large, energy-saving organic displays. The project involves one
laboratory and 10 companies including Sony Corp. NEDO said the project was aimed at developing a core
technology to mass-produce 40 inch or larger OLED displays in the late 2010s.[101]

In October 2008, Sony published results of research it carried out with the Max Planck Institute over the
possibility of mass-market bending displays, which could replace rigid LCDs and plasma screens.
Eventually, bendable, transparent OLED screens could be stacked to produce 3D images with much greater
contrast ratios and viewing angles than existing products.[102]

Sony exhibited a 24.5" prototype OLED 3D television during the Consumer Electronics Show in January
2010.[103]

References

1. Sony XEL-1: The world's first OLED TV, OLED-Info.com, Nov. 17, 2008


2. A. Bernanose, M. Comte, P. Vouaux, J. Chim. Phys. 1953, 50, 64.
3. A. Bernanose, P. Vouaux, J. Chim. Phys. 1953, 50, 261.
4. A. Bernanose, J. Chim. Phys. 1955, 52, 396.
5. A. Bernanose, P. Vouaux, J. Chim. Phys. 1955, 52, 509.
6. Kallmann, H.; Pope, M. (1960). "Positive Hole Injection into Organic Crystals". The Journal of
Chemical Physics 32: 300. doi:10.1063/1.1700925.
7. Kallmann, H.; Pope, M. (1960). "Bulk Conductivity in Organic Crystals". Nature 186: 31.
doi:10.1038/186031a0.


8. Mark, Peter; Helfrich, Wolfgang (1962). "Space-Charge-Limited Currents in Organic Crystals".
Journal of Applied Physics 33: 205. doi:10.1063/1.1728487.
9. Pope, M.; Kallmann, H. P.; Magnante, P. (1963). "Electroluminescence in Organic Crystals". The
Journal of Chemical Physics 38: 2042. doi:10.1063/1.1733929.
10. Kim, Seul Ong; Lee, Kum Hee; Kim, Gu Young; Seo, Ji Hoon; Kim, Young Kwan; Yoon, Seung
Soo (2010). "A highly efficient deep blue fluorescent OLED based on
diphenylaminofluorenylstyrene-containing emitting materials". Synthetic Metals 160: 1259.
doi:10.1016/j.synthmet.2010.03.020.
11. Jabbour, G. E.; Kawabe, Y.; Shaheen, S. E.; Wang, J. F.; Morrell, M. M.; Kippelen, B.;
Peyghambarian, N. (1997). "Highly efficient and bright organic electroluminescent devices with
an aluminum cathode". Applied Physics Letters 71: 1762. doi:10.1063/1.119392.
12. Mikami, Akiyoshi; Koshiyama, Tatsuya; Tsubokawa, Tetsuro (2005). "High-Efficiency Color and
White Organic Light-Emitting Devices Prepared on Flexible Plastic Substrates". Japanese Journal
of Applied Physics 44: 608. doi:10.1143/JJAP.44.608.
13. Mikami, A.; Nishita, Y.; Iida, Y. “High-efficiency Phosphorescent Organic Light-Emitting
Devices Coupled with Lateral Color-Conversion Layer.” SID Symposium Digest of Technical
Papers 2006. 37-1. 1376-1379
14. P. Chamorro-Posada, J. Martín-Gil, P. Martín-Ramos, L.M. Navas-Gracia, Fundamentos de la
Tecnología OLED (Fundamentals of OLED Technology). University of Valladolid, Spain (2008).
ISBN 978-84-936644-0-4. Available online, with permission from the authors, at the webpage:
http://www.scribd.com/doc/13325893/Fundamentos-de-la-Tecnologia-OLED

• Shinar, Joseph (Ed.), Organic Light-Emitting Devices: A Survey. NY: Springer-Verlag (2004).
ISBN 0-387-95343-4.
• Hari Singh Nalwa (Ed.), Handbook of Luminescence, Display Materials and Devices, Volume 1-3.
American Scientific Publishers, Los Angeles (2003). ISBN 1-58883-010-1. Volume 1: Organic
Light-Emitting Diodes
• Hari Singh Nalwa (Ed.), Handbook of Organic Electronics and Photonics, Volume 1-3. American
Scientific Publishers, Los Angeles (2008). ISBN 1-58883-095-0.
• Müllen, Klaus (Ed.), Organic Light Emitting Devices: Synthesis, Properties and Applications.
Wiley-VCH (2006). ISBN 3-527-31218-8
• Yersin, Hartmut (Ed.), Highly Efficient OLEDs with Phosphorescent Materials. Wiley-VCH
(2007). ISBN 3-527-40594-1

5.19. NANOTECHNOLOGY
FUSION BY USING CARBON NANOTUBES (CNT) FILLED WITH HYDROGEN AND OXYGEN
IN A PLASMA DEVICE

A low-beta, high-aspect-ratio tokomak is used to obtain plasma confinement, using a methane (CH4) or acetylene plasma which is ionized to give hydrogen (H) and oxygen (O) filled into carbon nanotubes (CNT) as the medium. The study is done theoretically under high pressure. A graphite target is used in a high-temperature reactor while an inert gas is bled into the chamber. The nanotubes develop on the cooler surfaces of the reactor as the vaporized carbon condenses; a water-cooled surface may be included in the system to collect them. When the carbon nanotubes filled with H and O are afterwards put into the vacuum chamber, they combine, and an electrical wave/current is generated which can be extracted to a generator, which in turn gives an abundant source of energy. Applications include electricity generation, rockets using Hall thrusters, computer chips, and so on.

SOURCE IONISATION STABILIZATION FOR R-T INSTABILITY IN MAGNETIC CONFINEMENT
TOKOMAK COLLIDER (MCTC): A CONCEPTUAL DEVICE

The R-T instability, which occurs due to a density gradient acting against gravity in the MCTC, is studied theoretically and stabilized by using the source ionization term in the present model. A low-beta, high-aspect-ratio MCTC hub is considered. The results show source ionization to be stabilizing in character. Such a device, if made with carbon nanotubes (CNT), can be used for a computer chip1.


Reference:
1) Larry Laufenberg (2008): www.cict.nasa.gov/infusion

Nanotubes (CNTs, SWNTs, DWNTs, MWNTs, TWNTs)


Nano-tube Synonyms: CNTs, carbon nano-tube, boron nitride nano-tube, BNNTs, halloysite
nanotube, buckytubes, C-60, buckminster fullerene, nano-tori, nano-torus, nano-bud, nano-onions, single
walled nano-tube, SWNTs, double walled nano-tube, DWNTs, multi walled nano-tube, MWNTs, thin
walled nanotubes, TWNTs, short nanotubes, conductive nanotubes, purified nanotubes, industrial grade
nanotubes,

Nano-tube General Descriptions: a) Electrical conductivity -- probably the best conductor of
electricity on a nano-scale level that can ever be possible.

b) Thermal conductivity -- comparable to diamond along the tube axis.

c) Mechanical -- probably the stiffest, strongest, and toughest fiber that can ever
exist.

d) Chemistry of carbon -- can be reacted and manipulated with the richness and
flexibility of other carbon molecules. Carbon is the basis of most materials we use
every day.

e) Molecular perfection -- essentially free of defects.

f) Self-assembly -- strong van der Waals attraction leads to spontaneous roping of
many nanotubes. Important in certain applications.


Nanotube Chemical Properties Available:

a) Boron nitride nanotubes

b) Carbon nanotubes

c) Graphitized multi walled carbon nanotubes

d) OH functionalized carbon nanotubes

e) COOH functionalized carbon nanotubes

f) Industrial grade carbon nanotubes

g) Purified carbon nanotubes

h) Conductive nanotubes

i) Halloysite nanotubes

j) Inorganic nanotubes

k) Silicon nanotubes

Nanotube Physical Tube Structures Available:

a) SWNTs (Single walled nanotubes)

b) DWNTs (Double walled nanotubes)

c) MWNTs (Multi walled nanotubes)

d) TWNTs, (Thin walled carbon nanotubes)

e) Short Nanotubes

f) Industrial grade nanotubes

g) "Armchair" nanotubes

h) "Zigzag" nanotubes


i) Chiral armchair-zigzag nanotubes

Nanotube Potential Market Applications:

*Date: 15 Nov 2009: "New functionalised nanotube applications will come onto the market in the next few years that will greatly increase global revenues to $1.4 billion plus by 2015, driven mainly by the needs of the electronics and data storage, defence, energy, aerospace and automotive industries. As commercial-scale production ramps up, the significant decrease in cost for these high performance materials will also drive new applications. Up to now, most carbon nanotube production has been on a pilot-scale level; however, scale-up of production by large multi-nationals such as Arkema, Bayer Materials Science and Showa Denko and access to cheaper nanotubes from Russia and China will greatly increase commercialization opportunities."

*Flat panel displays, conductive plastics, super composite fibers, superconductors,
and field storage batteries

*Micro-electronics / semiconductors

*Conducting Composites

*Controlled Drug Delivery/release

*Artificial muscles

*Super capacitors

*Batteries

*Field emission flat panel displays

*Field Effect transistors and Single electron transistors


*Nano lithography

*Nano electronics

*Doping

*Nano balance

*Nano tweezers

*Data storage

*Magnetic nanotube

*Nano gear

*Nanotube actuator

*Molecular Quantum wires

*Hydrogen Storage

*Noble radioactive gas storage

*Solar storage

*Waste recycling

*Electromagnetic shielding

*Dialysis Filters

*Thermal protection

*Nanotube reinforced composites

*Reinforcement of armor and other materials

*Reinforcement of polymer

*Avionics


*Collision-protection materials

*Fly wheels

Nanotube Packaging:

To standard sa

Nanotube TSCA (SARA Title III) Status:

Listed. For further information please call the E.P.A. at 1.202.554.1404.

Nanotube CAS Numbers:

a) 7440-44-0 (activated carbon)

b) 10043-11-5 (boron nitride)

c) 7440-21-3 (silicon)

Nanotube Safety Notice:

a) Before using, user shall determine the suitability of the product for its intended
use, and user assumes all risk and liability whatsoever in connection therewith.

b) Nanotubes might be hazardous to your health.


5.22. DIFFUSION PROCESS IN DOUBLE TRIOS: A CONCEPTUAL NOVEL THEORY FOR
SAIPH STAR

In Double Trios the present study is carried out theoretically to obtain the collisional transport phenomena and to search for a new concept giving insight into the behaviour of Double Trios, which are symmetrical sets of six stars (gobs) of luminosity magnitude 50-52, found in concurrent doubles at the Aldebaran and Betelgeuse galaxies, where the Saiph stars illuminate with such magnitude. This theory is a novel approach providing certain new results, such as an increase in the skin depth, the diffusion constant, the diffusion coefficient, a new (banana) regime, and the calculated confinement time, as no such theory exists at present.

It is based on DOUBLE TRIOS with low-β plasma having low-frequency fluctuations, which are stabilized by sheared velocity, finite conductivity and other parameters. The induced RTI is suppressed by the above-mentioned parameters, and as a whole the classical transport phenomena are taken into consideration. The heat conductivity is calculated and the Hazarika's (banana) regime is obtained, in which an important result for DOUBLE TRIOS is $D_H = \dfrac{D_{PS}}{[6+sC_h]^2}$, i.e., the term in brackets improves on the Pfirsch–Schlüter regime. After the Bohm diffusion the Hazarika's diffusion coefficient is calculated, and the Bohm diffusion itself changes to $D_B = D_H\left[\dfrac{RC_h}{r}\right]^{3/2}$. Here we see that first comes the Bohm diffusion, then the classical plateau and the Pfirsch–Schlüter regime, and then the Hazarika's regime for DOUBLE TRIOS; for the transport phenomena one new result is found, namely $v_\perp = \dfrac{q^2 v_{cl}}{[6+sC_h]}$. The above facts compel one to study the classical phenomena along with the collisional transport phenomena; the mirror effect decreases drastically. The toroidal and poloidal beta are calculated. Earlier, Bhatia and Hazarika (1996) studied the effect of self-gravitating superposed plasmas flowing past each other, which is of use in the Double Trios' collider region. The two trios meet at the collider region, which is the source region of collision or stability in DOUBLE TRIOS. This may be of interest to particle physicists, quantum theory researchers and so on. The present work has been divided into 10 sections.

Schematic diagram of Double trios (DT)


DOUBLE TRIOS

BASIC EQUATIONS

The basic equations which govern the DOUBLE TRIOS are as follows:

$\eta J = E + \dfrac{1}{c}\, v \times B\,, \qquad \nabla P = \nabla p$   (5.22.1)

$\vec{E} \equiv (0, E, 0)\,, \qquad \vec{B} \equiv (B_\theta, 0, B_\phi)$   (5.22.2)

$\vec{v} \equiv (v_\perp, 0, v_c)\,, \qquad \vec{p} \equiv (p(r), 0, 0)$   (5.22.3)

Here $\eta$ is the resistivity, $T_e$ the (finite) electron temperature, $E$ the electric field, $v_{\perp i}$ the perpendicular ion velocity, $\chi$ the magnetic diffusivity, $\mu$ the viscosity, $p_e$ the electron pressure, $\vec{B}$ the magnetic field, $p_i$ the ion pressure, $q$ the safety factor, $r$ the intra-stellar radius of the stars, and $R$ the inter-stellar distance between the stars, as there are two trios of stars.

According to the geometry of the Double Trios the magnetic field also changes. The fields are arranged around the Double Trios (DT) in the intra-stellar way; the magnetic field of the individual stars is $\vec{B}_\theta = 6B_\theta$, and the inter-stellar magnetic field, generated between the two trios, is

$\vec{B}_\phi = B_\phi(1 + 4\pi + \sin 3\phi \sin\theta - 2\sin\phi - 2\sin\theta)$.

The total magnetic field is given by

$\vec{B} = \hat{e}_\theta B_\theta + \hat{e}_\phi B_\phi$ ,   (5.22.4)

$B = 6B_\theta + B_\phi(1 + 4\pi + \sin 3\phi \sin\theta - 2\sin\phi - 2\sin\theta)$   (5.22.5)

$B = B_\theta[6 + s(1 + 4\pi + \sin 3\phi \sin\theta - 2\sin\phi - 2\sin\theta)]$   (5.22.6)

where s is the magnetic ratio.
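As a quick numerical check of eqns (5.22.4)–(5.22.6), the field magnitude can be evaluated directly. The sketch below is illustrative only; the function names and the sample values of $B_\theta$ and s are assumptions, not taken from the text:

```python
import math

def hazarika_constant(theta: float, phi: float) -> float:
    """C_h = 1 + 4*pi + sin(3*phi)*sin(theta) - 2*sin(phi) - 2*sin(theta)."""
    return (1.0 + 4.0 * math.pi
            + math.sin(3.0 * phi) * math.sin(theta)
            - 2.0 * math.sin(phi) - 2.0 * math.sin(theta))

def total_field(b_theta: float, s: float, theta: float, phi: float) -> float:
    """Total field magnitude from eqn (5.22.6): B = B_theta * [6 + s*C_h]."""
    return b_theta * (6.0 + s * hazarika_constant(theta, phi))

# With theta = phi = 0 the angular terms vanish and C_h reduces to 1 + 4*pi.
print(hazarika_constant(0.0, 0.0))        # ≈ 13.5664
print(total_field(1.0, 1.0, 0.1, 0.2))
```

Note that for s = 0 the inter-stellar contribution disappears and the total field reduces to the intra-stellar value $6B_\theta$, as eqn (5.22.5) requires.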


Therefore the beta parameter for DT is

$\beta = \dfrac{8\pi nT}{B_\theta^2[6 + s(1 + 4\pi + \sin 3\phi \sin\theta - 2\sin\phi - 2\sin\theta)]^2}$ ;

the intra-stellar beta is

$\beta_\theta = \dfrac{8\pi nT}{B_\theta^2[6 + sC_h]^2}$ ,

and the inter-stellar beta, which is Hazarika's factor for DT, is

$\beta_\phi = \dfrac{8\pi nT s^2}{B_\phi^2[6 + sC_h]^2}$ .

For equilibrium we have the condition

$\nabla p = J \times \vec{B}$   (5.22.7)

$\nabla p = -\dfrac{1}{2R^2C_h^2}\,\nabla\!\left[\dfrac{1}{16}R^2C_h^2B_\theta^2\right] + \left(1 + \dfrac{B_\phi^2C_h^2}{16B_\theta^2}\right)\dfrac{1}{RC_h} + \dfrac{U}{c^2\eta R^2C_h^2}\,\nabla\phi \times \vec{B}_\phi$   (5.22.8)

where

$C_h = (1 + 4\pi + \sin 3\phi \sin\theta - 2\sin\phi - 2\sin\theta)$ is Hazarika's constant for DT.

This is Hazarika's DOUBLE TRIOS formula for equilibrium, where U is the feedback loop voltage, which is taken to be absent for the present study.

The resistivity $\eta$ can be expressed through the electron-ion collision frequency as $\eta = \dfrac{m}{e^2 n}\,\nu_{ei}$. With this we get Hazarika's diffusion term as

$D_H = \dfrac{\nu_{ei}\, mTc^2}{e^2B^2[6 + sC_h]^2} = \nu_{ei}\, r_L^2$ ,

where $r_L$ is the finite ion Larmor radius for DT. $D_m = \eta c^2$ is the magnetic diffusion coefficient describing the skin effect; the magnetic diffusion for DT is

$D_{mH} = \dfrac{\nu_{ei}\, mTc^2}{e^2B^2[6 + sC_h]^2} = \nu_{ei}\, r_L^2$ .

Distance–Luminosity relation

Here the magnitude of the Saiph star is 52, whereas the magnitude of the Sun is taken to be 1.0. Using these data we calculate the distance of the star from the Sun as

$D_L = 10^{\left(\frac{m-M}{5}+1\right)} = 10^{\left(\frac{52-1}{5}+1\right)} = 10^{10.2+1} = 10^{11.2}$ parsecs.

Having found the distance of the Saiph star from the Sun to be $10^{11.2}$ parsecs, a relationship is established between the ratio of the luminosities of star and Sun and the ratio of their distances:

$\dfrac{L_{star}}{L_{sun}} = \left(\dfrac{D_L}{D_{sun}}\right)^2 10^{0.4(m-M)}$   (5.22.9)

$\dfrac{L_{star}}{L_{sun}} = \left(10^{11.2}\right)^2 10^{0.4(1-52)} = 10^{22.4} \times 10^{-20.4} = 100$   (5.22.10)

The Saiph star is therefore 100 times more luminous than the Sun.


Absolute magnitude M = 52:

$m = M + 5\log_{10}\left(\dfrac{D_L}{10}\right)$   (5.22.11)

$m = 52 + 5\log_{10}\left(\dfrac{10^{11.2}}{10}\right)$   (5.22.12)

$m = 52 + 5\log_{10}\left(10^{10.2}\right)$   (5.22.13)

$m = 52 + 5 \times 10.2 = 52 + 51 = 103$   (5.22.14)

So we get the apparent magnitude of the Saiph star as m = 103, where m is the apparent magnitude and M is the absolute magnitude of the Saiph star.

Now using the Eddington formula

$\dfrac{L_{star}}{L_{sun}} = 3.3 \times 10^4 \left(\dfrac{M_{saiph}}{M_{sun}}\right)$   (5.22.15)

$100 = 3.3 \times 10^4 \left(\dfrac{M_{saiph}}{M_{sun}}\right)$   (5.22.16)

$\dfrac{M_{saiph}}{M_{sun}} = 3.03 \times 10^{-3}$   (5.22.17)

Period–Luminosity relation

$M_{saiph} = -2.81\log_{10}P - (1.43 \pm 0.1)$   (5.22.18)

$52 = -2.81\log_{10}P - (1.43 \pm 0.1)$   (5.22.19)

$52 + 1.44 = -2.81\log_{10}P$   (5.22.20)

$\log_{10}P = -\dfrac{53.44}{2.81}$   (5.22.21)

$\log_{10}P \approx -19.02$   (5.22.22)

$P \approx 10^{-19}\ \mathrm{sec}$   (5.22.23)

where P is the period of pulsation of the star.

The period of scintillation, or pulsation, of the Saiph star is thus about $10^{-19}$ seconds (roughly $10^{-1}$ attoseconds). As the mass of the star is much less than the mass of the Sun, the star is in its aberration period and is shrinking due to gravitational collapse, i.e., it shines with a halo of 52 magnitudes; such stars are at present in a supernova stage.
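The magnitude arithmetic above, eqns (5.22.9)–(5.22.17), is easy to re-run; this is a minimal sketch using the values quoted in the text (m = 52 for Saiph, M = 1 for the Sun, and $D_{sun}$ taken as 1 parsec, following eqn (5.22.10)), with variable names chosen here purely for illustration:

```python
import math

M_ABS, M_SUN = 52.0, 1.0   # magnitudes as quoted in the text

# Distance: D_L = 10**((m - M)/5 + 1) parsecs
exponent = (M_ABS - M_SUN) / 5.0 + 1.0        # (52 - 1)/5 + 1 = 11.2
D_L = 10.0 ** exponent

# Luminosity ratio, eqns (5.22.9)-(5.22.10), with D_sun = 1
L_ratio = D_L ** 2 * 10.0 ** (0.4 * (M_SUN - M_ABS))

# Apparent magnitude, eqns (5.22.11)-(5.22.14)
m_app = M_ABS + 5.0 * math.log10(D_L / 10.0)

# Eddington mass ratio, eqns (5.22.15)-(5.22.17)
mass_ratio = L_ratio / 3.3e4

print(exponent, L_ratio, m_app, mass_ratio)   # 11.2, ~100, ~103, ~3.03e-3
```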

BANANA REGIME

If we do not consider collisions, all the particles in the DOUBLE TRIOS (DT) plasma could still move freely round the sextant (six) gobs along the field lines. The magnetic field varies along the field lines over a length of the order $qR(1 + 4\pi + \sin 3\phi \sin\theta - 2\sin\phi - 2\sin\theta) = qRC_h$, so a particle sees magnetic mirrors at a distance of order $qRC_h$. The strength of the mirrors, the ratio $\left(\dfrac{\Delta B}{B}\right)$, is given by the inverse aspect ratio:

$\dfrac{\Delta B}{B} \approx \dfrac{r}{RC_h}$   (5.22.24)

Particles trapped between such mirrors obey the law of energy conservation, $\mu B + \frac{1}{2}mv_c^2 = \mathrm{constant}$, or $\mu\Delta B + \Delta\left(\frac{1}{2}mv_c^2\right) = 0$; this holds for $\Delta\left(\frac{1}{2}mv_c^2\right) = \left(\frac{1}{2}mv_c^2\right)_{max}$. Here $\mu = \dfrac{mv_\perp^2}{2B}$, which gives us the magnetic moment, so

$\dfrac{\Delta B}{B} = -\dfrac{v_c^2}{v_\perp^2} = \dfrac{r}{RC_h} \ll 1$   (5.22.25)

The drift is in the vertical direction with velocity

$v_{drift} = \dfrac{mv_\perp^2}{2eB[6+sC_h]RC_h} = \dfrac{v_\perp^2 \tau_{PA}}{2RC_h[6+sC_h]}$ ,

where $\tau_{PA}^{-1} = \dfrac{eB_\theta}{m}$ is the cyclotron frequency. The time required for the particles to fly from one mirror to another mirror is $\dfrac{qRC_h}{v_c}$. In that time a particle moves a distance, given by the skin depth $\delta$, out of a magnetic surface in the vertical direction.

SKIN DEPTH

$\delta = v_{drift}\,\dfrac{qRC_h}{v_c} = \dfrac{mv_\perp^2\, q}{eBv_c[6+sC_h]} = r_L\left(\dfrac{v_\perp}{v_c}\right)\dfrac{q}{[6+sC_h]}$

$\delta = r_L\,\dfrac{qR^{1/2}C_h^{1/2}}{[6+sC_h]\, r^{1/2}}$ is the skin depth for DT   (5.22.26)

where $r_L = \dfrac{mv_\perp}{eB[6+sC_h]}$ is the finite Larmor radius (FLR) for DT. Here we see that the skin depth is a factor $(RC_h/r)^{1/2}$ larger than for a single star or comet/planet. This thickness of the banana-like orbits we may call the crescent of a moon. If we consider collisions, then reversal of $v_c$ occurs, $v_c \ll v_\perp$. This means that a banana thickness therefore replaces the gyro radius of plane geometry; then the trapped-particle collision


frequency is given by $\nu_t = \dfrac{v^2}{v_c^2}\,\nu \approx \dfrac{RC_h}{r}\,\nu$, and the number of trapped particles, proportional to $v_c$, is given by the trapping condition, i.e.,

$n_t = n\,\dfrac{v_c}{v} = n\left(\dfrac{r}{RC_h}\right)^{1/2}$   (5.22.27)

HAZARIKA'S DIFFUSION COEFFICIENT

A stochastic process with $\delta$ as step size then yields the diffusion coefficient

$D_B = \dfrac{n_t}{n}\,\delta^2\nu_t = r_L^2\,\nu_t\, q^2\left[\dfrac{RC_h}{r}\right]^{3/2}$ — this is the Bohm diffusion;

$D_H = r_L^2\,\nu\, q^2$ , Hazarika's diffusion coefficient   (5.22.28)

$D_H = \dfrac{D_{PS}}{[6+sC_h]^2}$ , Hazarika's diffusion coefficient, equal to 12.63787, where $D_{PS}$ is the Pfirsch–Schlüter diffusion coefficient. The Bohm diffusion now becomes

$D_B = D_H\left[\dfrac{RC_h}{r}\right]^{3/2}$   (5.22.29)
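The skin depth (5.22.26) and the diffusion coefficients (5.22.28)–(5.22.29) can be evaluated together. This is a minimal sketch: s = 1 and ν = 1.0 are placeholder assumptions (the text does not fix them), while the other parameters follow those quoted with Fig.1:

```python
import math

def hazarika_constant(theta, phi):
    # C_h for the Double Trios geometry
    return (1.0 + 4.0 * math.pi + math.sin(3.0 * phi) * math.sin(theta)
            - 2.0 * math.sin(phi) - 2.0 * math.sin(theta))

def skin_depth(r_L, q, R_over_r, s, theta, phi):
    """Banana width, eqn (5.22.26): delta = r_L*q*sqrt(R*C_h/r)/(6 + s*C_h)."""
    C_h = hazarika_constant(theta, phi)
    return r_L * q * math.sqrt(R_over_r * C_h) / (6.0 + s * C_h)

def hazarika_D(r_L, nu, q):
    """Hazarika's diffusion coefficient, eqn (5.22.28): D_H = r_L**2 * nu * q**2."""
    return r_L ** 2 * nu * q ** 2

def bohm_D(D_H, R_over_r, C_h):
    """Bohm diffusion rescaled by the DT geometry, eqn (5.22.29)."""
    return D_H * (R_over_r * C_h) ** 1.5

# Fig.1 parameters: q = 2.5, R/r = 1.5, r_L = 3.5, theta = 0.1, phi = 0.2;
# s = 1 and nu = 1.0 are placeholder assumptions, not values from the text.
C_h = hazarika_constant(0.1, 0.2)
delta = skin_depth(3.5, 2.5, 1.5, 1.0, 0.1, 0.2)
D_H = hazarika_D(3.5, 1.0, 2.5)
D_B = bohm_D(D_H, 1.5, C_h)
print(delta, D_H, D_B)
```

Since $RC_h/r > 1$ here, the rescaled Bohm coefficient always exceeds $D_H$, consistent with the ordering of regimes stated in the text.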
HAZARIKA'S REGIME

This condition stands valid for trapping of the particles inhibited by collisions, i.e.

$\dfrac{\nu_t\, qRC_h}{v_c} < 1$   (5.22.30)

$\nu^3 q\,\dfrac{qRC_h}{v_c}\,\dfrac{v^2R^2C_h^2}{r^2} = \dfrac{A^{3/2}}{r\lambda_D}$ , where $A = \left(\dfrac{v}{v_c}\right)^2$   (5.22.31)

or $\lambda_D > A^{3/2}qRC_h$, where $\lambda_D$ is the mean free path; thus the left regime is

$qRC_h < \lambda_D < A^{3/2}qRC_h$   (5.22.32)

Since $D_B, D_H, D_{PS} \approx \dfrac{1}{\lambda_D}$, one has $D_B\left(\lambda_D = A^{3/2}qRC_h\right) = D_H\left[\lambda_D = qRC_h\right]$, where the Bohm diffusion is $D_B$, and $D_{PS}\left(\lambda_D = qR\right)$.


The inner part is the plateau regime (flat region); there is then a smooth transition from the banana to the Pfirsch–Schlüter regime and then to Hazarika's regime.

It culminates in two effects of importance:

(I) Bootstrap current
(II) Ware effect

BOOTSTRAP CURRENT

The induction effect of the high diffusion velocity leads to a current density in the intra-stellar direction:

$J_B = \dfrac{4}{\eta}\,\dfrac{v_c}{c}\,B_\theta B = -\dfrac{c}{4B_\theta}\,\dfrac{dp}{dr}\left[\dfrac{r}{RC_h}\right]^{1/2}\dfrac{1}{[4+qC_h]}$   (5.22.33)

As the inter-stellar current is absent, we get terms with the intra-stellar field only:

$\dfrac{4}{r}\dfrac{d}{dr}\left(rB_\theta\right) = -\dfrac{c\pi}{B_\theta}\,\dfrac{dp}{dr}\left[\dfrac{r}{RC_h}\right]^{1/2}\dfrac{1}{[6+sC_h]}$   (5.22.34)

The high diffusion velocity leading to a current density in the intra-stellar direction gives the intra-stellar beta as

$\beta_\theta = \dfrac{p}{B_\phi^2} = \dfrac{8\pi p}{B_\theta^2[6+sC_h]^2}$   (5.22.35)

since the diffusion velocity should not exceed the magnetic diffusion velocity in a plasma with finite resistivity. For the banana regime $\beta < \dfrac{1}{A^{3/2}q^2}$, $\beta = \dfrac{\beta_{pol}}{q^2A^2}$, which are in agreement with earlier results. Pfirsch–Schlüter diffusion is expressed by $v_D \approx q^2 v_{cl}$; the classical diffusion velocity is given by $v_{cl} = \dfrac{1}{2}\beta v_{mag}$, with $v_{mag}$ the magnetic diffusion velocity. As we know that $v_D < v_{mag}$, we get the plasma beta as


$\beta < \dfrac{2v_{cl}}{[6+sC_h]\,v_D} \;\Rightarrow\; v_D \approx \dfrac{q^2 v_{cl}}{[6+sC_h]}$ ,

which is known as Hazarika's diffusion expression. From this we get $\beta_\theta < 1$; therefore $\beta < \dfrac{A^2}{[6+sC_h]}$, which is considerably different from the earlier results obtained by other authors: Samain and Werkoff (1977), Pfirsch (1978), Kerner (1978), Moore (1982).

WARE EFFECT

Here the usual E/B drift is replaced by $v_D = \dfrac{cE}{B_\theta[6+sC_h]}$ for the Ware effect in DT.

CONFINEMENT TIME

$\beta_{pol} < A^{1/2}$ holds for impurity transport as long as the temperature profile is flatter than that given by $Tn^2$, but it is modified by the Hazarika factor $C_h$. If we put $C_h = 1$ in

$v_{thH}^2\,\tau_{DH}\,\tau_{MH} > q^2R^2C_h^2$

we get

$v_{thH}^2\,\tau_{DH}\,\tau_{MH} > q^2R^2$ ,

which is the result given by Samain and Werkoff (1977). Here $\tau_{DH}$ is the deflection time and $\tau_{MH}$ is the Maxwellian time for hydrogen ions.

$\tau_{Ee} = \dfrac{0.97 \times 10^{-16}\, n_e\, r^3 RC_h^2\, B_\phi}{T_e^{1/2}\, I_p}$ , for experimental purposes also.   (5.22.36)
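Eqn (5.22.36) is a simple scaling and can be sketched directly; units are as in the source (unstated there), and all input values in the example are illustrative assumptions:

```python
def confinement_time(n_e, r, R, C_h, B_phi, T_e, I_p):
    """Energy confinement time, eqn (5.22.36):
    tau_Ee = 0.97e-16 * n_e * r**3 * R * C_h**2 * B_phi / (sqrt(T_e) * I_p)."""
    return 0.97e-16 * n_e * r ** 3 * R * C_h ** 2 * B_phi / (T_e ** 0.5 * I_p)

# With every input set to 1 only the numerical prefactor remains.
print(confinement_time(1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0))   # 9.7e-17
```

The form makes the scalings explicit: confinement time grows with density, minor radius cubed and field, and falls with temperature (as $T_e^{-1/2}$) and plasma current.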

The present study shows that the Double Trios is better than the Trios, giving more luminosity and liberating more energy, as depicted in Fig.1 and Fig.2. Fig.1 shows how DT is broader than the Trios case in particle trapping. Fig.2 shows that it takes less confinement time than the Trios case and is epicentric, whereas the Trios case takes more time to reach the stabilized condition compared with DT. The confinement will therefore remain for a longer period without any instability being generated therein.

PARTICLE TRAPPING IN HAZARIKA'S (BANANA) REGIME

Here we can observe that the particle trapping, which exhibits the remnant glow for the Saiph star in the Andromeda galaxy, via the Hazarika's (banana) regime for Double Trios, is broader than in the Trios case, as shown in Fig.1.

[Fig.1: radar chart, "Hazarika's regime (banana)"]

Fig.1. The particles are trapped in the shaded region, the Hazarika's (banana) regime, calculated from the skin-depth eqn. (5.22.26) with q = 2.5, R/r = 1.5, r_L = 3.5, θ = 0.1, φ = 0.2 in radians.

COMPARISON OF TRIOS AND DOUBLE TRIOS (DT)

[Fig.2: radar chart; Series 1: Tokomak (Trios), Series 2: DTTC (HUB) (Double Trios)]

Fig.2. Comparison of the Hazarika's (banana) regime for Double Trios and Trios, shown for θ = 0.1, φ = 0.2 in radians, q = 2.5, R = 1.5 in eqn. (5.22.29). It is observed from the graph that the confinement time required for the Double Trios is much less than in the Trios case.

Fig. 3.
Concurrent doubles at Aldebaran: the tandem tidal trio at left, similar to the string below the light-spike centerline; the gobs, 6 gob tidals in total, are traveling around together radially straight outward from Aldebaran. Planets are shown in dark and comets are shown in white.


Fig. 4.
Two tidal trios of dominoes, torqued by magnetic inductions into uniform synchronized rotations in the polar plane, long-axis (spin-axis) eccentrics: chorus girls who cannot do otherwise but obey an invisible choreographer named Allah.

Fig.5.
At Saiph, two orbital planes (like two hat brims): the two magenta trios vector straight through the star center, and the two trios in grey also, since both trios are muffled in the star's overplus lens halo.

Tidal twins are common, but not that common, aligned all on a single radial out from their star: two lines of three each in parallel, all six again of identical kind, and another variant of tidal twin pairs.

The study is relevant to the earlier studies by Pfirsch (1978), Pfirsch and Schlüter (1962), and Samain and Werkoff (1977). If we substitute $C_h = 1$ in the inter-stellar distance, only R remains and we recover the results of Pfirsch (1978). The present study contains enhancements in the skin depth, banana regime, bootstrap current, Ware effect, and diffusion coefficient, as Hazarika's diffusion coefficient and Hazarika's factor for Double Trios; moreover the halo and luminosity profile, together with the period of the star, give the pattern of scintillation or pulsation of a star in double trios such as the Saiph star.

REFERENCES

137. Hazarika, A.B.R.: Submitted to Physics of Plasmas (2009a); 13th National Symposium on Plasma Science & Technology, Rajkot (1998); 16th National Symposium on Plasma Science & Technology, Guwahati (2001)

138. Hazarika, A.B.R.: Submitted to Physics of Plasmas (2009b); 18th National Symposium on Plasma Science & Technology, Ranchi (2003); 19th National Symposium on Plasma Science & Technology, Bhopal (2004)

139. Hazarika, A.B.R.: Submitted to Physics of Plasmas (2009c); Proceedings of the 20th National Symposium on Plasma Science & Technology, Cochin Univ. of Sci. & Technology, Cochin (2005)

140. Hazarika, A.B.R.: Submitted to Physics of Plasmas (2009d); Proceedings of the 3rd Technical Meeting of the International Atomic Energy Agency on Theory of Plasma Instabilities, Univ. of York, York, UK (2007), 31 pp.

141. Pfirsch, D.: Theoretical and Computational Plasma Physics (1978), IAEA-SMR-31/21, p. 59.

142. Pfirsch, D., Schlüter, A.: Max-Planck-Institut für Physik und Astrophysik, Munich, Rep. MPI/PA/7/62 (1962).

143. Kerner, W.: Z. Naturforsch. 33a, 792 (1978).

144. Samain, A., Werkoff, F.: Nucl. Fusion 17, 53 (1977).

145. Moore, Greydon (1982): Kaleidoscope zoo of fantastic objects orbits giant stars in trios.html

146. Bhatia, P.K. and Hazarika, A.B. Rajib: Phys. Scr. 53, 57 (1996).

APPLICATION IN AUTOMOBILES

Essentially every part of an automobile engine, and indeed of the entire vehicle, involves applications of physics, generally described as engineering principles. The two fields are closely related: engineering generally takes a more "practical" approach, working to get results that will be useful toward actual mechanisms, without necessarily understanding why some particular formula is as it is, whereas physics is less concerned with applications or usage of results and more concerned with understanding in detail why some process proceeds as it does. I have considered DANISHA Hall thrusters to thrust the automobile, as they provide upward and forward thrust to the vehicle, which will make the vehicle run faster; one can even imagine it flying like a Sea Harrier aircraft.

In my entire life, I have only ever seen very superficial presentations regarding the functioning of an automotive engine, usually just enough to tell the various alternatives apart! I have therefore felt it appropriate to present a physics perspective on the subject.

THE FLYING CAR IS ON ITS WAY!

London, Feb 6, 2011
Fed up with traffic jams? Here is some good news: 'the flying car' is on its way.
An American company, Terrafugia, based near Boston, is soon to start manufacturing 'the flying car', called the Transition Roadable Light Sport Aircraft, which can be transformed from a car into a plane in just 30 seconds.
The Transition can fly at 115 mph and reach 65 mph on the road; on the ground, with its wings tucked up and in, it can fill up with petrol at a normal filling station and fits in an average-sized garage, the 'Sunday Express' reported.
'The flying car' is set to go into production this year and is expected to cost between 125,000 pounds and 160,000 pounds, say its developers. Richard Gersh, of Terrafugia, the US company which makes the vehicle and hopes to sell 200 a year, said: "This is an airplane first and foremost. The idea is you can drive it to and from a regulation airport. Fully fuelled, you can fly it for a range of 400 to 450 miles.
'We have 100 orders so far. There are still some minor changes that need to be made because it has to meet both road and aviation standards. However, we expect to be delivering at the end of this year.'
According to the CEO of the company, they have successfully test-flown 'the flying car' as many as 28 times. -PTI
I am going to assume that you have at least a vague understanding of what goes on in an automotive
engine, and that words like piston, crankshaft, connecting rod, and cylinder are understood.


In this drawing, we are looking at the end of the crankshaft, and the crankshaft is going to rotate counter-clockwise. Therefore, at this moment, the crankshaft is pushing the piston upward in the cylinder, and it is currently halfway up. This is during the stage called Compression, where the gas-air mixture inside the volume above the piston is being compressed by the upward movement of the piston.
I am going to simplify some things to clarify some points, such as treating valves as being able to operate instantly, which they definitely do not do in real life. However, if the intake valve had closed when the piston was at its lowest point (90° of crankshaft rotation before this drawing), the total amount of gas-air mixture in the COMPLETE cylinder (initially at approximately atmospheric pressure of 15 PSIA) is now already squeezed into just the volume of the cylinder above the piston. If you think about it, the initial gas-air mixture has here already been squeezed into HALF its original volume, and so it is already at about TWICE the initial pressure (or now 30 PSIA).
We should clarify that pressures can be described in two different ways, Absolute and Gauge. In this case, we know that the air started out at the natural pressure of 15 PSI, which is an Absolute pressure, so it is sometimes written PSIA. If you measured that pressure with an air pressure gauge, it would read 0 PSI, because there is no difference in pressure from natural. This is called gauge pressure, and would be written 0 PSIG. They mean the same thing, and absolute pressure is always 15 higher than gauge pressure. Our drawing therefore shows a situation where the pressure inside the cylinder is now at 30 PSIA or 15 PSIG.
This discussion is going to be about the so-called spark-ignition or Otto cycle engine, the process that virtually all cars and trucks operate on. There are a couple of common alternatives: the compression-ignition or Diesel cycle, and the Brayton or Joule cycle. The majority of this discussion actually applies to all three, but there are some differences. In Physics-talk, an Otto cycle has an isentropic compression, followed by a constant-volume combustion explosion, followed by an isentropic expansion. In contrast, a Diesel cycle has an isentropic compression followed by a NON-explosive combustion at (relatively) constant pressure, followed by the isentropic expansion. Enough of that! They are much alike in many ways, and you can consult any college engineering textbook regarding the differences.
If we discuss a very popular engine, the so-called small-block Chevy V-8 engine, we can put some numbers in here. The bore (diameter of the cylinder) is 4", and the stroke (twice the crankshaft throw radius) is 3.5". The swept volume of one cylinder is then (PI) * R² * H, or 3.1416 * 2 * 2 * 3.5, or around 44 cubic inches. (Since that engine has eight cylinders that are each that volume, its total 'displacement' is 44 * 8 or around 350 cubic inches. This engine is generally called the Chevy 350 V-8.)
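The displacement arithmetic just described can be checked with a few lines of Python (the bore and stroke figures are the text's own; this is only a back-of-envelope sketch):

```python
import math

def cylinder_volume(bore_in, stroke_in):
    """Swept volume of one cylinder in cubic inches: pi * r^2 * stroke."""
    return math.pi * (bore_in / 2) ** 2 * stroke_in

# Small-block Chevy figures from the text: 4" bore, 3.5" stroke, 8 cylinders
one = cylinder_volume(4.0, 3.5)     # about 44 cubic inches per cylinder
displacement = one * 8              # about 350 cubic inches total
print(round(one, 1), round(displacement))
```

Running this prints 44.0 and 352, matching the "around 350 cubic inches" in the text.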
The area shown at the top of the drawing is an additional volume that remains even when the piston is at the very highest point, a location called TDC, for Top Dead Center, which will mean more in our second drawing. The space above the piston at TDC is carefully designed. In this specific case, it has a volume of around 6.3 cubic inches.
When the piston began its upward movement (at BDC, bottom dead center), there was then a volume of gas-air mixture above it of (44 + 6.3) or 50.3 cubic inches. When the piston has gotten to TDC, as in this drawing, all that gas-air mixture has been compressed into the remaining 6.3 cubic inches. The ratio of these numbers, 50.3 / 6.3, is called the Compression Ratio of the engine. In this case, it is about 8.0.
This drawing shows the moment when that gas-air mixture is most compressed. The 8.0 compression ratio means that the 15 PSIA beginning mixture is now at about 8.0 times that pressure, or around 120 PSIA. (Technically not precisely, because of some really


technical characteristics of what happens when gases are compressed isentropically.) The cylinder compression is measured and is essentially this number, except that the measuring device is a gauge, so the reading would be 105 PSIG.
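Putting the volume and pressure figures together (a minimal sketch using the text's simple pressure-ratio estimate, not the full isentropic calculation the text alludes to):

```python
# Figures from the text; pressure scaled simply by the compression ratio,
# ignoring the isentropic correction mentioned above.
v_swept = 44.0        # cubic inches, from the bore/stroke calculation
v_clearance = 6.3     # cubic inches, volume above the piston at TDC
compression_ratio = (v_swept + v_clearance) / v_clearance   # about 8.0
p_atm = 15.0                                                # PSIA
p_compression_abs = p_atm * compression_ratio               # about 120 PSIA
p_compression_gauge = p_compression_abs - p_atm             # about 105 PSIG
print(round(compression_ratio, 1), round(p_compression_abs), round(p_compression_gauge))
```

This prints 8.0, 120 and 105, the three numbers used in the discussion above.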
Most superficial descriptions of automotive engines then say that the gas-air mixture is ignited at that moment, and that the even higher pressure of the exploding gas drives the piston down, turning the crankshaft. Reference is usually even made to 'advancing the timing' of the ignition spark, so that it occurs maybe 10° or 20° BEFORE TDC, so the explosion has a moment to build up its full power by the time it gets to TDC. If you look at this drawing for a while, you should be able to see that that is impossible! If the explosion (and all its effects) occurred exactly at the moment shown in this drawing, at TDC, the crankshaft would not be given any rotation at all! Virtually the entire force of the explosion initially acts to try to drive the piston, connecting rod and crankshaft downward, out of the bottom of the engine, without giving it any rotation at all! (When this actually happens, VERY bad things tend to happen to the engine!)
All actual internal combustion engines rely on KEEPING that explosion pressure for as long as possible! In Calculus terms, the total effect regarding rotating the crankshaft is the integral of the net force actually applied to the crankshaft by that connecting rod, for as long as there is explosive pressure inside the cylinder. In an engine that is operating properly, contributions to this integral begin at the instant of ignition and end when the exhaust valve begins to open. The instantaneous force applied as torque in rotating the crankshaft continuously changes during this "power stroke". It actually begins with a slight negative contribution, since ignition is timed to occur before TDC, but not much pressure yet develops, since the flame is still spreading inside the cylinder. The contribution becomes exactly zero at TDC, and then quickly rises as the internal burning and pressure continue and the leverage angle at the crankshaft improves. Eventually, the piston going down reduces the pressure, and engine cooling also does, and good design times the exhaust valve to begin opening at about the point when productive torque is no longer available.
So, from a truly accurate (Physics) perspective, a VERY complicated graph of resultant torque would first need to be determined, and then that graph would be integrated to determine the actual engine torque generated, at that engine speed and under those conditions of spark advance and the rest. Such analysis is rarely actually done; nearly always, simple experimental measurements of real engines are used to learn these things.
You might note that the pressure must be maintained within the cylinder throughout the entire power stroke for decent performance. This explains why an engine loses much of its power once the piston rings are worn (and therefore leaking pressure) or the valve seats become worn or distorted (and therefore leaking pressure). If the engine actually just relied on the instantaneous effects of the explosion, worn rings or valves would be of minimal importance, but the fact that the basic design relies on HOLDING the pressure before actually using it makes those components extremely important.
It turns out to be sort of fortunate that the "speed" of the explosion of the gasoline-air mixture is relatively slow! Under the conditions that generally exist inside a cylinder (during highway cruising), the flame front velocity is usually around 90 feet per second, or 60 mph (Mark's Standard Handbook for Mechanical Engineers, Section 9, Internal Combustion Engines, Flame Speed). Depending on exactly where the spark plug is located, that flame front must travel two to four inches in order to ignite all the gases in the cylinder. At 90 ft/sec, this then requires around 0.002 to 0.004 second for the combustion to complete. This might not sound like much, but engines spin amazingly fast, and these brief durations of combustion always take many degrees of crankshaft rotation.
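To see how a "fast" burn still spans many crank degrees, the conversion can be sketched as below (the 2" flame travel and 90 ft/s figures are from the text; this simple linear estimate lands right next to the roughly 18° quoted later in the section):

```python
def combustion_crank_degrees(rpm, travel_in, flame_speed_fps=90.0):
    """Crankshaft degrees swept while the flame front crosses the chamber."""
    burn_time_s = (travel_in / 12.0) / flame_speed_fps   # inches -> feet
    deg_per_s = rpm * 360.0 / 60.0                       # crank degrees per second
    return burn_time_s * deg_per_s

# 2" of flame travel at 90 ft/s, engine at 1500 rpm: about 17 crank degrees
print(round(combustion_crank_degrees(1500, 2.0)))
# At 3000 rpm the same burn spans twice as many degrees
print(round(combustion_crank_degrees(3000, 2.0)))
```

This is why ignition must be advanced further as engine speed rises: the burn takes the same time, but the crankshaft sweeps through more degrees while it happens.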
So even though the ignition occurred BEFORE TDC, and the very start of the combustion actually acts to try to make the engine run backwards, the ignition timing is carefully scheduled so that MOST of the combustion (and therefore combustion pressure on the piston head) occurs AFTER TDC. By the time that a maximum amount of the gas-air mixture is burning, the crankshaft has rotated a slight distance past TDC. This


situation, and its consistency (due to the consistency of the quality and burning characteristics of the gasoline), enables a modern engine to avoid seriously trying to spin backwards! The mathematics below shows that, for an engine speed around 1500 rpm (a normal driving situation), this is commonly around 10° AFTER TDC, when the greatest explosion pressure is present in the combustion chamber. Let's look at some preliminary calculations.
It is very well established that the explosion, and therefore the heat created, causes the gases in the combustion chamber to obey standard rules of Chemistry, such as the Ideal Gas Law. Because of the sudden heat, the gases try to expand immediately, but they cannot, so the pressure in those hot gases greatly and rapidly increases. Very consistently, the explosion pressure in an internal combustion engine rises to between 3.5 and 5 times the compression pressure. Since our example engine had a compression pressure of 120 PSIA, this results in a momentary explosion pressure that peaks at around 500 PSIA.
Since the piston is 4" in diameter, its top surface area is just PI * (4/2)², or around 12.6 square inches. Each of those square inches experiences the 500 PSI(G) pressure, so the total force then instantaneously applied to the top of the piston is 12.6 * 500, or around 6300 pounds. (OK, it is ACTUALLY the 500 PSIA, but there is natural air pressure pressing against the UNDERSIDE of the piston as well, so the NET effect we are interested in is due to the GAUGE pressure. Not too different, but slightly!)
Because of the geometry of the situation when the crankshaft has progressed 10° after TDC, the force diagram indicates that this downward force must be multiplied by (approximately) the sine of 10° in order to determine the tangential force applied to the crankshaft. Approximately, because the connecting rod is no longer parallel with the axis of the cylinder bore, the actual angle being slightly higher; an exact angle is easy to calculate with a thorough analysis. For now, 10° will give an approximate result for our purposes.
Therefore, the tangential (rotative) force actually transferred to the crankshaft is around 6300 * sin(10°), or 6300 * 0.174, or around 1100 pounds. Since this force is applied to the throw of the crankshaft, at 1.75" radius from the centerline of the crankshaft, the torque transferred to the crankshaft is therefore 1100 * 1.75", or 1100 * 0.146 foot, or 160 foot-pounds of torque. This calculation is in ballpark agreement with the published maximum torque curves for such engines at 1500 rpm.
Notice that the radial force applied to the crankshaft (bearings) is around 6300 * cos(10°), or around 6200 pounds! At that moment, the vast majority of the power of the explosion is trying to drive the crankshaft down out of the engine, without rotating it! And it is seriously trying to abuse the bearings! Without engine oil, under pressure, in the bearings, they would not last long with 6200 pounds of force against them!
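The force and torque numbers above can be reproduced with a short sketch, simplified exactly as in the text: connecting-rod angularity is ignored, so only the sine component of the piston force turns the crank.

```python
import math

def piston_force_lb(pressure_psig, bore_in):
    """Net force on the piston crown: gauge pressure times bore area."""
    area = math.pi * (bore_in / 2) ** 2          # about 12.6 in^2 for a 4" bore
    return pressure_psig * area

def crank_torque_ftlb(force_lb, crank_angle_deg, throw_in):
    """Simplified torque: only the sine component rotates the crank."""
    return force_lb * math.sin(math.radians(crank_angle_deg)) * (throw_in / 12.0)

f = piston_force_lb(500, 4.0)               # about 6300 lb on the piston
t = crank_torque_ftlb(f, 10, 1.75)          # about 160 ft-lb at 10 deg ATDC
radial = f * math.cos(math.radians(10))     # about 6200 lb driven into the bearings
print(round(f), round(t), round(radial))
```

The same functions reproduce the later bullet-point figures too, e.g. `crank_torque_ftlb(f, 3, 1.75)` for the near-TDC idling case.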
In traditional automotive thinking, this sort of makes sense! As long as the piston rings do not leak too much and the valves do not leak too much, those expanded gases inside the combustion chamber cannot escape. That means that, until the exhaust valve starts to open, all the pressure will act to push the piston downward. In order to get the most total power, it makes sense to keep that pressure acting as long as possible. This means that having the maximum pressure developed as soon as possible after TDC gives the most possible available degrees of productive crankshaft rotation. The benefit of this is seriously affected by the fact that, as the piston moves downward, the volume inside the combustion chamber increases, so the pressure drops (Ideal Gas Law). From a beginning combustion pressure of 500 PSIG in our example, at the later instant when the crankshaft has rotated 45° the volume has increased such that the pressure drops to around 200 PSIG (without any leakage), and by the time the crankshaft has advanced 90° the pressure is down to around 125 PSIG. The AVERAGE pressure during this 90° of rotation is referred to as Mean Effective Pressure (MEP) and is commonly around 200 for common engines under power. (A four-cycle V-8 engine requires each piston to provide the engine power for 90° of crankshaft rotation.) (This description is for best conditions, fairly high power and revs.)
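As a rough illustration of this pressure drop, the sketch below models the cylinder volume with the standard slider-crank relation and scales pressure inversely with volume (pure Ideal Gas behavior, no combustion heat release, cooling or leakage). The 5.7" connecting-rod length is an assumption for illustration, not a figure from the text, so the numbers only land in the same ballpark as the ~200 and ~125 PSIG quoted above:

```python
import math

BORE, STROKE, ROD, V_CLEAR = 4.0, 3.5, 5.7, 6.3   # inches / cu in; rod length assumed

def cylinder_volume(theta_deg):
    """Volume above the piston at a crank angle measured from TDC (slider-crank)."""
    r = STROKE / 2.0
    th = math.radians(theta_deg)
    # piston drop below TDC for this crank angle
    x = r + ROD - (r * math.cos(th) + math.sqrt(ROD**2 - (r * math.sin(th))**2))
    area = math.pi * (BORE / 2) ** 2
    return V_CLEAR + area * x

def pressure_psig(theta_deg, p_peak=500.0):
    """Crude estimate: pressure falls as 1/volume from its peak near TDC."""
    return p_peak * cylinder_volume(0.0) / cylinder_volume(theta_deg)

for angle in (0, 45, 90):
    print(angle, round(pressure_psig(angle)))
```

Real cylinders keep burning, leak and lose heat during the stroke, which is why the measured figures differ from this toy model.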
There are several important points to be made here:
• At lower engine speeds, such as when idling at say 500 rpm, the very same explosive force is created, but since the crankshaft is rotating only 1/3 as rapidly, the maximum pressure tends to occur much closer to TDC, say around 3° or 4° after TDC. The sine of 3° is around 0.052, so only around 48 foot-pounds of torque are (momentarily) transferred to the crankshaft. The relatively long times involved allow moderate leakage past the rings and through valve leakage. The main effect, though, is that the gases inside the cylinder are cooled by the presence of the water-cooled metal (cylinder walls) of the engine block and head. This cooling is necessary, to permit lubricants to keep the


engine running reliably, but it takes an amazing amount of energy from the cylinders! If an engine is being operated at its rated 200 HP, the cooling system would then be removing around an ADDITIONAL 400 HP worth of energy from the cylinder walls and heads (and then simply throwing that energy away)! (This is part of why internal combustion engines have such terrible overall efficiency, rarely higher than the low 20% range, as discussed below.)
The engine cooling system must be able to remove all that heat when the engine is under full load and power, so cooling systems are really designed to be pretty efficient. At 5,000 rpm, there is only about 0.003 seconds available to remove most of the extreme heat from the cylinder walls and head, and the gases inside start out at around 4000°F. As that heat is removed from the cylinder walls and head, the gases inside cool down. At 5,000 rpm, and in 0.003 second, the amount of cooling is limited.
But now look at that same engine while idling at 500 rpm. That (cold) 200°F water flowing through the block and heads now has ten times as long to cool everything as the piston descends. In ten times as long, the gases inside the cylinder can get really cooled off. That is bad, because lower temperature means lower pressure (Ideal Gas Law), and so less pressure is left to push down on the piston.
Between the natural (Ideal Gas) pressure reduction due to the expansion as the piston goes down, and the forced cooling system cooling the gases and therefore also reducing the pressure, the momentary 500 PSIG that existed near TDC quickly dissipates, and there is a rather brief and somewhat weakened force/pressure pushing the piston down to create productive work. The MEP drops way off, and the full 90° of productive effect does not occur.
At even slower engine speeds, the engine is not able to reliably create the amount of torque necessary to overcome friction, to drive the water pump, alternator and other systems, and to provide enough momentum to the flywheel to do the work of exhausting, intaking and compressing the gas-air mixture for the next explosion. This is why an automotive engine cannot run reliably below an idling speed, often around 500 rpm, which is necessarily higher when the added load of an air conditioner is running (generally then at least 700 rpm), because that requires additional torque/horsepower.
• At higher engine speeds, the crankshaft is rotating more rapidly, so that it is farther past TDC at the moment of maximum dynamic pressure in the cylinder (even though the ignition advance has been increased even more). This again creates essentially the same explosive pressure in the cylinder. But now everything occurs in a shorter time, so less leakage can occur, and less gas cooling occurs to the cooled walls of the engine block and head. This allows the advanced crankshaft angles to be more able to transfer torque to the crankshaft. Our initial 6300 pounds of force on the top of the piston is reduced (by the increase in volume, since the piston has moved downward more than half an inch) to around 2500 pounds at the point where it is 45° past TDC. This indicates that the torque transferred to the crankshaft at that instant would be 2500 pounds * sin(45°) * 0.146 foot, or around 250 foot-pounds of torque.
• At extremely high engine speeds, this is all still true, except that the slowness of the burning always causes some of the gas-air mixture not to burn until well after TDC. This results in a delay and a time-spreading of the maximum dynamic pressure, which occurs even later after TDC. This causes the first few degrees of crankshaft rotation not yet to have the full pressure developed inside the combustion chamber. (This cannot be avoided, and is due to the slow speed of the spread of the flame inside the cylinder.) Therefore the mean


effective compression pressure starts to drop off at extremely high engine speeds. (There are fewer degrees of crankshaft rotation available before the exhaust valve starts to open.) Therefore the torque transferred to the crankshaft drops off at very high engine speeds, which is one of the main reasons for a "top end" of engine performance. (There are other reasons: at such high engine speeds, the intake and exhaust valves are not open very long, and so the removal of waste gases and the intake of new fuel-air mixture to replace them becomes less efficient. Air flow speeds through intake manifolds and exhaust systems become very high, so extra frictional resistance exists. And of course, there is the matter of making sure the engine doesn't fly apart!)
In our example engine, in the situation shown here, our effective compression pressure is only 30 PSIG (the piston now being halfway down, for a 2:1 compression ratio). Therefore the combustion pressure is only around 125 PSIG, and the total force on the piston is around 1600 pounds. So even though the geometry is the best possible, with sin(90°) = 1.0, the total torque transferred to the crankshaft is around 1600 * 1.0 * 0.146, or around 230 foot-pounds of torque. In real engines, it is usually actually less than this, because the cooling system has already removed some heat from the gases. The exhaust valve usually begins to open at about this point, since there is relatively little benefit in staying closed, due to the much lower pressure and force on the piston; opening then releases the remaining pressure in the combustion chamber.
• These two effects just described, the extensive cooling and leakage at low rpm and the reduced (delayed) effective compression pressure at high rpm, are the primary reasons that an engine has a "torque curve". The actual torque developed would be relatively constant at various engine speeds except for these two effects.
• Nearly everyone's impression about the "octane rating" of gasoline is exactly opposite of the truth! Low-octane gasoline burns VERY rapidly, so rapidly (and somewhat unpredictably) that the crankshaft might not always get to TDC before maximum pressure is developed. This both wastes some of the explosion power (fighting itself as regards rotating the crankshaft) and represents the possibility of extreme wear on engine pistons and bearings, as compared to slower-burning higher-octane gasoline. Because high-octane gasoline burns more slowly, it is less subject to "engine knock" (too much combustion before TDC) and generally is able to produce more total engine torque, because of the more reliable geometrical advantage of the later crankshaft angle.
• It was discovered long ago that there was an advantage in triggering the ignition spark several degrees of crankshaft rotation BEFORE TDC. Otherwise, the maximum pressure only develops so long after TDC that the piston has dropped too far, and less pressure and force are created, as described above. This spark-advance situation causes the interesting effect that an engine begins building up combustion pressure BEFORE TDC, which actually would have the effect of making the engine rotate backwards! Some early engines had a serious susceptibility to this, and many people who used early crank starters (before electric starters) were seriously injured when the engine kicked backwards.
Under the conditions of our engine running at 1500 rpm, the "flame-front speed" (essentially the rapidity of the burning) is around 90 feet/second inside the combustion chamber. A little geometry and algebra easily shows that the flame front has progressed across half of the combustion chamber (2") while the crankshaft has rotated around 18°. If the ignition spark was timed for around 15° BTDC, this would suggest that the maximum pressure in the combustion chamber would then occur around 18° later, or 3° after TDC, as is commonly intended.
The actual reality is quite a bit more complicated than this, and we have simplified some things in the interest of clarity. As the flame front progresses across a combustion chamber, the exploding gases act to additionally compress the gas-air mixture that has not yet ignited. The result is that the pressure created is not a symmetric smooth curve, but rather a curve that has generally greater pressures in the later portion of the actual combustion. Where we have considered the maximum combustion pressure to occur when the flame front has progressed halfway (2") across the chamber, it generally occurs a little later than that, due to these very complicated effects.
Actually, a reference in Mark's Standard Handbook for Mechanical Engineers, Section 9, states that the optimum spark advance is approximately 5/9 of the combustion time. This means that more time of combustion happens BEFORE TDC than after! This makes the point that the more powerful portion comes late in the combustion process, not only overcoming that 5/9 of combustion that acted


to try to make the engine turn backwards, but also emphasizing the very small crankshaft angles involved that are the main point of this description! If the combustion had proceeded linearly (as we have implied in this simplified presentation), the engine would not even run (with standard spark advance), since over half of the combustion time occurs before TDC!
As with most everything else here, the situation is actually a little more complicated than that! The force applied to the top of the piston is proportional to the pressure of the gases applied to it, and THAT is proportional to the temperature of the gases inside the combustion chamber (all other things being equal!). The early part of the combustion process IS burning fuel and building up pressure, but the TOTAL pressure is somewhat cumulative. In ritzy math terms, it would be called the Calculus integral of the pressure over time. So even though 5/9 of the TIME of burning may occur before TDC, the pressure is still not fully developed by then, and the cumulative pressure AFTER TDC is much higher, which is why the maximum torque is developed when the spark is advanced around 5/9 of the total combustion time. But note that ALL of the combustion needs to be DONE before the piston is able to move very far down the cylinder, again meaning that the maximum force (pressure) is developed fairly close to when the crankshaft throw is nearly straight up, the worst possible mechanical (dis)advantage.
Between the additional benefit of this later development of maximum combustion pressure, and the negative value of the very early stages of combustion that occur before TDC, we have for simplicity treated the two as effectively canceling each other out, so that the entire combustion process occurs as if it happened instantaneously at a single 3° to 10° ATDC crankshaft angle. In reality, they seem to be relatively comparable effects, but there is no actual reason for insisting that they exactly cancel out. Spark advance, fuel-air ratio, octane rating of the fuel, temperature and many other effects affect each of them differently.
You can probably imagine why engine designers measure the performance of a new engine at every possible crankshaft angle (actually, spark advance angle) and RPM, to determine the very best ignition advance for all situations. The math needed to predict that precisely enough from theoretical bases is really complex, and it is far easier to simply build some engines and experimentally TRY many different combinations of all those parameters. When they find some combination that produces the most output torque and power (at a particular engine speed), they note that, and then teach the computer to provide that advance and that fuel-air mixture!
They actually have the freedom to create an "advance curve" for most power or for best fuel economy. Usually, any vehicle you buy has an advance curve that is somewhere in between. But this issue explains why there are "performance chips" available for most computer-controlled engines. Such chips simply replace the tame ignition advance curve with one that is better for maximum performance. Not much else is changed by such chips. They tend to cause poorer fuel economy and poorer environmental performance.
I am somewhat surprised that no manufacturer has yet offered a "switchable chip" capability! If three chips were installed, then normally the middle one (like existing chips) would be in effect. When a cruise control was engaged, a maximum-economy chip would take over. When the driver hit a special GO button, a maximum-performance chip would take over for 30 seconds. The best of three worlds, I would think!
If an engine were intended to run at a constant speed with a constant load, it would be possible to fine-tune the exact best spark advance angle. However, vehicle engines must be able to go from idle to maximum performance rather quickly. There are many other engine conditions that also affect the amount of ideal spark advance, such as the relative richness of the gas in the mixture and whether the air intake path is restricted or open. All these things are important for the following reason: when the ignition spark occurs substantially before TDC, a significant combustion pressure starts to build up even before TDC. If the engine were not already spinning, this could act to make it rotate backwards! Only the momentum of the crankshaft and flywheel makes it overcome this backward torque to get past TDC, when good things start to happen.
Under some circumstances, too much of the mixture burns before TDC. Prior to no-lead gasolines, carbon deposits tended to develop inside the combustion chamber, and these deposits would sometimes


become very hot. When fresh gas-air mixture was introduced through the open intake valve, the hot carbon could spontaneously ignite it before the spark plug fired. This condition, while the intake valve was still open, would send a flame front backwards up through the intake manifold and carburetor, causing what is called a backfire, with flames actually coming up out of the carburetor. If the intake valve was nearly closed, then the explosion would try to rotate the engine backwards, which causes incredible stresses in almost everything, and some internal part might break. Usually the head of the piston is what would lose, and a part of the top of a piston would get blown out (down into the crankcase). The engine would then make interesting wheezing sounds, and it was essentially unusable due to massive shaking and rattling! Since unleaded gasolines have been used, carbon deposits are less common, and combined with computer-controlled spark advance, backfiring is now very unusual.
However, if too low an octane of gasoline is used in an engine, the flame front can sometimes travel too rapidly across the combustion chamber. It might seem odd, but LOW-octane gasoline has faster flame-front speeds than HIGH-octane gasoline does! This causes too high a combustion pressure to develop during the 5/9 of the combustion period that occurs before TDC. That cylinder then does not contribute to the intended productive power, but rather causes an effect that partially tries to make the engine run backwards. This causes tremendous stresses to occur in an engine, and the power of the explosion has no obvious method of release. The rotational momentum of the engine and flywheel permits the engine to continue past this event, but the instantaneous effect is usually a slight flexing of the top of the piston head, which makes a very distinctive metallic sound. This situation is called "engine knock" or "ping". It is quite undesirable. Regular engine knocking can cause a "blown piston", where a hole is blown through the weakest surface of the combustion chamber, the piston head.
• The numbers calculated above are instantaneous values of the torque produced. In a V-8 engine,
the eight cylinders fire during two revolutions of the engine, so a single cylinder has to provide the
power for 90° of crankshaft rotation. Essentially, in a real engine, this is from just after TDC to a
point when the piston is slightly more than halfway down the cylinder, when the exhaust valve
begins to open. The torque created right near TDC is minimal; it increases to a maximum value and
then drops off. When the torque of an engine is measured, it is the AVERAGE that actually gets
measured. Where we had calculated instantaneous torques as high as 300 ft-lb, the average will
always be less than that, in this case around 200 ft-lb. If the torque is calculated for every degree
of crankshaft rotation, and an average taken of those 90 values, the result should be very close to
the rated engine torque.
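The averaging procedure just described can be sketched numerically. This is not the printout referenced below; the exponential pressure-decay profile, the 1.74-inch crank throw, and the 6300-pound peak force are purely illustrative assumptions, and the simple radius-times-sine leverage ignores connecting-rod angularity.

```python
import math

CRANK_RADIUS_FT = 1.74 / 12   # 3.48" stroke -> 1.74" crank throw (assumed)
PEAK_FORCE_LB = 6300          # peak piston force figure used later in the text

def piston_force(theta_deg):
    """Assumed decaying piston force as the burned gases expand (illustrative only)."""
    return PEAK_FORCE_LB * math.exp(-theta_deg / 45.0)

def instantaneous_torque(theta_deg):
    # geometric leverage of the crank throw: radius * sin(crank angle)
    return piston_force(theta_deg) * CRANK_RADIUS_FT * math.sin(math.radians(theta_deg))

# one torque value for each of the 90 degrees this cylinder "owns"
torques = [instantaneous_torque(t) for t in range(1, 91)]
average = sum(torques) / len(torques)

print(f"peak instantaneous torque: {max(torques):.0f} ft-lb")
print(f"average over 90 degrees:   {average:.0f} ft-lb")
```

Whatever force profile is assumed, the average over the 90 values always comes out well below the instantaneous peak, which is the point of the paragraph above.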
Attached is a simple computer printout of such an analysis for the engine we have been discussing, at
Engine Analysis. This particular analysis is very simplified, without ANY adjustment for the cooling effects
of the cooling system, or even the fact that the exhaust valve is designed to begin to open well before BDC.
But it might provide a useful insight into how these various things are all related.

A little more about the cooling system, since it is so closely associated with the greatly reduced thermal
efficiency of the engine:
In many engines, the radiator hose is around 1 ½” in inside diameter, which gives around 2 square inches of
cross-sectional area, a situation that is true for most parts of a well-designed cooling system. The water
pump pushes that water at around 15 ft/sec (10 mph) through the passageways, when the automatic
thermostat is fully opened. This means that about (15 * 12 * 2) 360 cubic inches of water per second can be
circulated, which is about 12 pounds of water per second. It is common for the water to be heated by around
15°F in taking that wasted heat away from the cylinder walls and heads. It takes 1 Btu to raise one pound of
water by 1°F, so we’re talking about a MAXIMUM of (12 * 15) 180 Btu/second of heat being removed.
That might not sound like much, but it is! In an hour (3,600 seconds), this COULD BE about 650,000 Btu!
(More than ten times as much heat as most entire houses need in the dead of winter!) Down below, we will
mention that 2544 Btu/hr is equal to one horsepower, so this MAXIMUM wasted heat represents around
250 horsepower or more of wasted energy from the gasoline (during hard acceleration, where the (stock)
engine is creating its maximum productive horsepower).
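The chain of arithmetic above can be checked in a few lines. All the inputs are the text's own round numbers, not measurements.

```python
hose_area_sq_in = 2.0              # ~1-1/2" inside-diameter hose
water_speed_in_per_s = 15 * 12     # 15 ft/sec expressed in inches/sec
flow_cu_in_per_s = hose_area_sq_in * water_speed_in_per_s   # 360 cu in/sec

water_lb_per_s = 12                # the text's round figure for that flow of water
temp_rise_F = 15                   # water picks up ~15°F passing through the engine
btu_per_s = water_lb_per_s * temp_rise_F   # 1 Btu raises 1 lb of water 1°F -> 180 Btu/sec

btu_per_hr = btu_per_s * 3600      # 648,000 Btu/hr, the "about 650,000" above
horsepower = btu_per_hr / 2544     # 2544 Btu/hr per horsepower

print(flow_cu_in_per_s, btu_per_s, btu_per_hr, round(horsepower))
```

The final figure lands at roughly 255 hp, consistent with the "around 250 horsepower or more" quoted above.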
During normal driving, the amount of heat removed from the engine is less than this, for several reasons.
The modulating thermostat is generally only partially open, which only allows a partial flow of what was


described above. The combustion chambers do not contain as much burning gasoline as during a drag-strip
run, so less fuel, and therefore less energy, is present that needs to be dealt with.
We can therefore see that the cooling system is necessarily designed so that it CAN remove an enormous
fraction of all the energy/power that an internal combustion engine creates, which gives any conventional
automotive engine a low “overall thermal efficiency”, even separate from all the mechanical losses related
to the engine’s operation. The calculations are extremely complex, and include variations depending on
water flow rates and cooling system design, but they generally indicate that a conventional internal
combustion engine cannot have an overall efficiency of greater than around the low 30% range. As noted
below, there have been some experimental engines designed that have been measured at around 28%, but
the most efficient production engines are around 25%, and most vehicles on the highways now have
engines with around 21% overall efficiency.
We might as well add another analysis here! This is NOT an analysis that qualifies as scientifically
rigorous; it is meant simply to provide some overall insights regarding what happens to the heat created
inside a standard engine.
We will consider a normal driving situation: a constant 60 mph trip for exactly one hour (covering 60
miles) on an Interstate highway. We will assume that the engine is this small-block Chevy 350 we have
been using as an example. We will further assume that the fairly large vehicle will get exactly 20 MPG
during this trip.
We can see that we will use up exactly three gallons of gasoline for this trip. Since each gallon of gasoline
contains around 126,000 Btu of chemical energy in it, we will therefore use up 378,000 Btu of chemical
energy.
That particular vehicle has a rear axle gearing ratio that causes the engine to turn around 1800 rpm to
produce the power needed to maintain this constant 60 mph speed. Our engine therefore spins 1800 times
every minute for 60 minutes, or a total of 108,000 times during that hour trip. Four cylinders fire during each
engine revolution, so we have a total of 432,000 cylinder firings during this trip.
Therefore, EACH cylinder burns an amount of gasoline (assuming proper air-fuel mixture, ignition timing,
etc.) which converts about 378,000 / 432,000 or 0.87 Btu of chemical energy into heat. So if we consider the
firing of a single cylinder, we can say that about 0.87 Btu of heat is created during the combustion of the
gasoline inside the cylinder. We know that energy cannot be created or destroyed, so this 0.87 Btu of heat
energy must go somewhere!
We know that around 21% of that energy is able to be productively converted into moving the vehicle, so
this accounts for about 0.18 Btu.
We know that we brought in a fuel-air mixture which was around 44 cubic inches (at original ambient
temperature and pressure). Since we know that one pound of air takes up about 13 cubic feet at STP, the
amount of mixture we put into the cylinder is about 1/500 pound of mixture. The engine will heat that up
briefly to near the 4,000°F that gasoline burns at, but then, due to the 5:1 expansion of those gases by the
time the exhaust valve starts to open and due to the effect of the cooling system, the temperature of the
gases leaving the engine as exhaust (under the conditions of this constant-speed driving) can be around
700°F. (At drag strips, the exhaust gases can be far hotter than that, where they can cause the exhaust
headers to glow reddish during nighttime runs. Since iron and steel begin to glow a dark red at around
800°F, we are assuming here that the exhaust is around the 700°F estimated here, due to not causing any
(usual) obvious glowing of the exhaust manifolds.)
We also know that the thermal capacity of air is around 0.24 Btu/pound/°F. We can therefore calculate an
approximate number for the amount of heat that the exhaust will carry away from our engine during our
one-hour trip. Our one cylinder would therefore send away 1/500 pound * (700 - 70) temp rise * 0.24, or
around 0.30 Btu of heat.
By implication, we can say that the remaining 0.39 Btu (0.87 - 0.18 - 0.30) of the heat must get carried away
by the cooling system.
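The whole one-hour trip budget can be recomputed in one short script. Every input is the text's own estimate.

```python
# Energy in the fuel for the trip
gallons = 3
btu_per_gallon = 126_000
total_btu = gallons * btu_per_gallon        # 378,000 Btu for the trip

# How many cylinder firings that fuel is spread over
firings = 1800 * 60 * 4                     # 1800 rpm * 60 min * 4 firings/rev
btu_per_firing = total_btu / firings        # ~0.87 Btu per firing

# Where each firing's heat goes
useful = 0.21 * btu_per_firing              # ~21% actually moves the vehicle

# Exhaust: ~1/500 lb of gas, heated from ~70°F ambient to ~700°F,
# with air's thermal capacity of ~0.24 Btu/lb/°F
exhaust = (1 / 500) * (700 - 70) * 0.24     # ~0.30 Btu out the tailpipe

# The rest must leave through the cooling system (and radiation)
cooling = btu_per_firing - useful - exhaust

print(f"{btu_per_firing:.2f} {useful:.2f} {exhaust:.2f} {cooling:.2f}")
```

Dividing each share by the per-firing total reproduces the roughly 21% / 35% / 44% split stated in the next paragraph.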


For THAT SPECIFIC SITUATION, then, we can estimate that around 21% of the energy becomes useful
power to move the vehicle; 35% of the energy gets lost in the exhaust gases; and 44% gets lost due to the
cooling system and other radiation cooling effects.
An interesting side-note to this analysis is that around 1970, the productive efficiency of such engines was
only around 15%, and cooling system thermostats were designed to cause the cooling system to operate at
around 20°F cooler than in today’s engines, with the result being that the exhaust then carried away a much
larger fraction of the energy, around 45%, with the cooling system and other radiation then accounting for
around 40%.
During that constant-speed trip, in ONE HOUR the cylinders therefore created 378,000 Btu of heat energy
from the chemical energy that was in the gasoline. Of that energy, only about 79,000 Btu was
converted into moving the vehicle, while 132,000 Btu was lost as heat in the exhaust and 166,000 Btu was
lost through the cooling system and by radiation.
We might also note that this analysis was for an engine speed of 1800 rpm and at minimum throttle opening.
If that same engine was operating at 5400 rpm, it would clearly use around three times the amount of
gasoline, and since maximum power would then be desired, the fuel-air mixture would likely be richer.
So we might easily then be looking at a situation which would be dealing with far over a million Btu per
hour of chemical energy that was being converted into heat. Of the constant-speed energy, the cooling
system (and natural radiation) disposes of 166,000 Btu/hr and the exhaust gases dispose of an additional
132,000 Btu/hr of heat. All this to produce 79,000 Btu/hr of productive work in powering the vehicle!
Separate from pointing out here the incredible wastefulness of the operation of all internal combustion
engines, we mention these numbers because a COMPLETE medium-sized house in a cold climate generally
only requires around 50,000 Btu/hr on the coldest February night! Vehicle engines THROW AWAY heat at
several times the rate your house uses similar heat that you pay heating bills for!
The TEMPERATURE of the exhaust seems certain to have a direct relationship with the camshaft lobe
shape of the exhaust valve cam. I am not aware that anyone has ever carefully researched that issue. But in a
very tame engine, where the exhaust valve waited until later in the power stroke before starting to open, the
Ideal Gas Law expansion of the gases would have dropped them to a low temperature, while in a wilder engine
where, in order to improve breathing of the engine, the exhaust valve opened far earlier, the gases should
go into the exhaust header at both higher pressure and higher temperature. There may be a way to identify
some aspects of a camshaft by simply measuring the (max) temperature of the exhaust! Just a thought!

There is a linked presentation in this Domain which analyzes the performance of vehicles regarding the
usage of the productive power supplied by any power source, which discusses the aerodynamic drag (the front
of the vehicle having to push air out of the way and the rear of the vehicle causing turbulence drag) and tire
friction drag. It is possible to calculate the vehicle performance with an “ideal engine” in it. For a medium-
sized car, the best that vehicle could theoretically do is around 65 mpg of gasoline (using the energy in that
gallon with the minimal theoretical losses). A compact car has a theoretical maximum of around 80
mpg, and motorcycles can have theoretical maximum efficiencies of over 150 mpg. These figures
are for driving at a constant highway speed. If someone wanted to be intentionally deceptive, it would be
possible to change the gearing of a vehicle to move at, say, 3 mph, walking speed, where both aerodynamic
drag and tire friction are far less, to get an experimental mileage number that was far higher! Some
companies have done this in the past, and then never bothered to mention that the 150 mpg mileage figure
they bragged about in advertising was only for a walking speed! These inserted comments are to relate to
the seemingly endless line of products that are sold that claim to cause a standard car engine to achieve 125
mpg or 150 mpg or 180 mpg, where readers tend to believe them! The Laws of Physics limit the
accomplishments of internal combustion engines as discussed here, and such claims are made only to sell
more products, and NO ONE can actually accomplish such things in a conventional car at highway speeds!

If an ideal engine could be built, it would NOT have any cooling done to the cylinders DURING the power
stroke, to allow the pressure inside the cylinder to remain as high as possible. However, up above, we
mentioned that the cooling system, at 5,000 rpm engine speed, only has around 0.003 second to remove the
heat from the one cylinder that happens to be firing at that moment. This causes a necessity of ALWAYS
removing heat from the cylinders, and by a cooling system that is VERY effective! This has a negative


effect of chilling down the gases inside the cylinder that we want to be pushing the piston downward! So
the very existence of the cooling system necessarily reduces the power and torque that an engine can
create! (The reality is that heat is still removed from the heated metal for a longer time, continuously, but
then the cooling system is primarily busy removing heat from a DIFFERENT cylinder which has just fired.
For simplicity, we are considering individual cylinders.) OK: we know that a total of about 180 Btu/second is
being removed from the 5,000 rpm engine, so in 0.003 second, a little over 0.5 Btu gets removed.
At 5,000 rpm, this is good! Only around 0.5 Btu gets removed while the piston is still trying to do
productive work, and in that very short period of time, the gases inside the cylinder cannot be overly
chilled, and so the overall performance is good. (Normal automobile cooling systems are actually intended
to start to overheat at high revs like this, for the lower-speed efficiency concepts being considered here!) In
our constant-speed example above, our engine was running at 1800 rpm, around 1/3 as fast, so the engine
has around three times as long to get rid of our 0.39 Btu of heat. However, there is a new problem! The
cooling system is still just as efficient as it was during the 5,000 rpm operation! So it COULD still be
removing 0.5 Btu during each 0.003 second, or around a total of 1.5 Btu of heat removed from the cylinder.
This would cause FAR too much cooling of the gases in the cylinder, and the capability of producing
horsepower or torque is greatly reduced! The gases could get cooled so quickly that the torque production
curve could drop to near zero very rapidly!
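The scaling argument above fits in a few lines: a full-flow cooling system removes heat at a roughly fixed rate, but the time one cylinder spends on its productive rotation stretches as rpm falls. The 180 Btu/sec figure and the quarter-revolution window are the text's; treating the removal rate as constant across rpm is the text's simplifying assumption.

```python
FULL_FLOW_BTU_PER_S = 180   # the text's full-flow cooling estimate

def heat_removed_per_firing(rpm):
    """Heat removed during one cylinder's ~90 degrees of productive rotation."""
    seconds = (60 / rpm) / 4        # time for a quarter of a revolution
    return FULL_FLOW_BTU_PER_S * seconds

print(f"{heat_removed_per_firing(5000):.2f} Btu per firing at 5000 rpm")
print(f"{heat_removed_per_firing(1800):.2f} Btu per firing at 1800 rpm")
```

At 5,000 rpm the window is 0.003 second and a little over 0.5 Btu is removed; at 1800 rpm the same cooling rate could pull about 1.5 Btu, far more than the roughly 0.39 Btu actually available, which is why the modulating thermostat has to throttle the flow.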
I know that you are way ahead of me now! At a 500 rpm idling speed, the very effective cooling system has
already had all sorts of time to BE ABLE TO remove virtually all the heat from those hot gases before the
crankshaft has even rotated by 45°. It can never even get to having a beneficial mechanical leverage on the
crankshaft before it has already gone fizzle!
See the situation? The cooling system MUST have adequate performance to be able to remove enough heat
when the engine is wound out, but that results in it having too good a performance at all lower engine
speeds. Such really good cooling performance makes engines last longer, so they have THAT going for
them! But the basic performance of all internal combustion engines is tremendously reduced by how well
the cooling system has to work!
The cooling system therefore also ALWAYS includes a “modulating thermostat” which partially blocks off
the water flow when the water temperature is less than the maximum it was designed for. This minimizes
the chance of the engine cooling system ever removing too much heat and keeps the engine at a relatively
constant operating temperature. The effects of the thermostat (both its design temperature and its actual
operation) have significant effects on calculations regarding the efficiency of an engine, and can cause
some calculations to be off.
You might see why the cooling water pump is driven by the engine. At high speed, it runs very fast, to
pump a lot of water to accomplish the full cooling described above. At slower engine speeds, the water is
pushed more slowly, so that it captures less heat from the cylinder walls and heads. But these things
do not eliminate the problem. The slower water speeds reduce some of the numbers described above, but it
is still true that every running vehicle constantly discards more of the gasoline’s energy as wasted heat than
it uses to move the vehicle.
Older vehicles also had their very large radiators very exposed openly to the air at the front of the vehicle,
because at that time COOLING was considered the central factor (related to engine survival!). As gasoline
got more expensive and actual engine efficiencies improved (from about 15% to about 21%), modern
vehicles tend to have very small radiator openings in the front of a vehicle, along the theme of causing the
engine to operate at a higher temperature. If the engine runs hotter, it consumes more of the undesirable
NOx and other pollutants, enabling the vehicles to pass more rigid pollution testing. With that smaller
radiator opening, less air can pass through the radiator and also less air passes alongside the engine itself,
causing the now-desirable higher engine temperatures! Now you know why! By the way, long ago it was
easy to work on nearly any engine, because it was in such an open area of the vehicle. Modern vehicles
have many accessories right against the engine, so it is often hard to actually even see the engine when the
hood is opened! That great difference is actually an intended difference!
In case you are curious, about 60% of the cylinder cooling is usually done through the cylinder walls and
the remaining 40% through cooling the heads. This will probably NEVER come up in Trivial Pursuit!


Another related subject: remember that I mentioned above that the standard cooling system design
intentionally allows the engine to start to overheat when really revved up? (The expectation of the
designers is that no standard driving would ever involve extended driving at such high revs.) If a vehicle is
to be used for towing a heavy trailer, generally there is an extra-cost option of a “heavy duty” cooling
system. When towing such a trailer, the engine can spend longer times at higher engine speeds and loading,
where it would normally overheat. The extra-cost “heavy-duty” cooling system rarely involves stronger
water pumps or bigger hoses. Almost always, it only involves REDUCING the size of the water pump
pulley (so it spins faster) and a thicker radiator (so there is more heat exchange surface to cool the water).
From the above discussion, you probably realize that such a “heavy-duty” cooling system causes the engine
to have WORSE efficiency and performance at low engine speeds, due to excessive cooling of the engine
cylinders then! Less heat remains in the hot compressed gases in the cylinder pushing the piston downward,
because the excessive cooling lowered that pressure due to the Ideal Gas Law! (The modulating thermostat
mostly resolves this complication.)
Prior to around 1980, cars and trucks had large radiators and very free airflow through them, and engines
ran fairly cool. Even the standard thermostats were 180°F, again permitting cool engine operation, with the
intention of enabling long engine life. When fuel efficiency and air pollution came to be politically
important, the advantages described above, of intentionally reducing the effectiveness of the cooling system
to reduce the cylinder heat losses to (slightly) increase efficiency, started appearing. Now, nearly all
vehicles have rather small radiators and small grilles allowing air in to them! Modern radiators
are actually too small to avoid overheating, and so electric cooling fans are necessary to keep engines from
boiling over. Similarly, modern thermostats are generally 195°F, which raises all the engine temperatures by
15°F. Look in any engine compartment today and you see a clutter of things surrounding the engine. That
was not the case long ago, when free air flow around an engine was desired for engine durability. Now, the
highest possible engine operating temperature is used (the reduced cooling performance described above) to
improve engine efficiency and performance, which also reduces the amount of air pollution created in the
process. Engine durability is less than it used to be, but people rarely seem to keep vehicles as long as they
used to, so it is apparently not considered a problem. The technology of motor oils has greatly advanced, so
that oil is able to last much longer in today’s hotter engines than old oil would have lasted.
Finally on this tangent: consider dragsters (rails) in a ¼-mile drag race. They have no radiators or water
pumps, but they are filled with (cold) water just before a race. That seems certainly necessary to keep the
engine from blowing up. But an ideal situation would be that the water was ferociously boiling at the Finish
Line when the engine was shut down, because that would indicate the highest possible engine (cylinder)
temperatures during the race. I don’t know if any research has ever been done on this, but I suspect that if
two identical dragsters raced, the one that had had its engine running 30 seconds longer before the race
should always win! (Unless the engine is blown up!) The hotter engine cylinders should allow several
percent additional power to remain to drive the pistons downward, particularly at the important start of the
race. Engine durability would probably be severely reduced, but people who drag-race only think of
winning! (Notice how Physics shows up in unexpected places and in unexpected ways?) (And, of course,
that seriously overheated engine is more likely to dangerously blow itself apart, too!)

Hemi Head Engine

For nearly 40 years, Chrysler has been aggressively promoting their hemi head engine. For you gear heads:
do you know WHY a hemi is supposed to be better? In my experience, virtually no one seems to actually
know! I wonder if Dodge and Chrysler salespeople even know.
A hemi head is actually a (somewhat) hemispherical head. Virtually all the other styles of overhead-valve
engine heads have relatively flat pistons, and heads with a relatively shallow recess in them for
the combustion to occur. Remember the roughly 6 cubic inches that must remain at TDC? With a 4”
diameter cylinder, that equals roughly ½” in cylinder height: near the sides, near zero, and near the spark
plug maybe ¾ inch. Now, a cylinder has to have both an intake valve and an exhaust valve, both in the head
(in overhead-valve engines, the most efficient designs). The flat shape of the usual combustion chamber
limits the diameter of those valves, to well under HALF of the entire distance across the piston. An engine


with 4” diameter pistons can therefore not have intake or exhaust valves which are larger than about 1.5” in
diameter. By the way, the INTAKE valve is always larger in diameter than the exhaust valve. Do you know
why? It is because the EXHAUST is DRIVEN OUT by the upward motion of the piston, while the
INTAKE is SUCKED IN by the downward motion. It turns out that devices that suck air cause a lot more
turbulence, and so it is less easy to do. The larger intake valves are therefore needed to provide the SAME
necessary flow rates for the cylinder to be most efficient.
The hemi head uses a VERY deep combustion chamber, so that the distance across it is about half the
circumference of a circle (1.57 * diameter) rather than being only slightly more than the diameter. This
allows a lot more available space for the two valves. The valves therefore tend to be at odd angles to benefit
from this added size. The SINGLE actual advantage of a hemi head engine is that it has much larger
diameter valves! This allows the fuel-air mixture to get in easier and the exhaust to get out easier. Bigger
valves are a very good thing, and the hemi head design is the simplest way to provide the space for really
large valves.
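The geometric comparison above is quick to verify: the arc across a hemispherical chamber is half the bore circle's circumference, π/2 times the bore. The 4" bore is the text's running example; the exact valve sizes that fit in each span depend on details the text does not give.

```python
import math

bore = 4.0                          # 4" cylinder, as in the text's example
flat_span = bore                    # flat chamber: valves share roughly the bore diameter
hemi_span = (math.pi / 2) * bore    # hemi: roughly half a circumference of the bore circle

print(f"flat chamber span: {flat_span:.2f} in")
print(f"hemi chamber span: {hemi_span:.2f} in ({hemi_span / flat_span:.2f}x the room)")
```

The hemi's curved surface offers about 1.57 times the distance across which the two valves can be fitted, which is the whole advantage being described.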
Since the hemispherical chamber is so tall, a flat-top piston would allow too much remaining volume for a
good compression ratio, so all hemi head engines have to have dome-top pistons. So if you ever see a
relatively flat-top piston, it is from a non-hemi, and a significantly domed piston is always from a hemi. (An
engine can have flat-top pistons replaced with slightly domed pistons to increase compression ratio, but that
is a very different effect.) Also, if you happen to see an unusually large valve, it is likely to have come from
a hemi engine.
So, a hemi is not “magical” or anything, but merely is a design that permits bigger valves for better engine
breathing. There is no other significant advantage to it. And, actually, the domed piston somewhat interferes
with airflows and makes it less likely to get really uniform distribution of the gas-air mixture, and really
good removal of all exhaust products, so some of the benefits of being a hemi are given up in exchange.
You may be aware that there are some newer engines that have four (smaller) valves per cylinder. This
provides the improved breathing of the hemi while not having the disadvantages of domed pistons. But the
engine is much more complex, and expensive.

Camshafts and another interesting idea!

This discussion, and ALL such discussions, always considers an “ideal” engine: one where the valves open
and close instantly and completely, to allow each of the four Otto engine cycles to occur very distinctly and
separately. However, REAL engine valves are always operated by a camshaft which has lobes shaped to
push each valve open, against very strong spring pressure, and then those springs cause the valves to
close after the camshaft lobe has passed. The point being, each valve TAKES TIME to open and then to
close! The engine designer CHOSE a particular size and shape for the camshaft lobe for each specific
engine.
It turns out that since the valves take that much time to open and to close, the “valve timing”, meaning the
shape and timing of the camshaft lobe shape and position, OVERLAPS.
For example, the EXHAUST valve is ALWAYS designed to begin to open FAR BEFORE BDC (bottom
dead center), so it necessarily RELEASES the productive pressure inside the cylinder during the POWER
stroke. Even worse, the exhaust valve CONTINUES TO STAY OPEN beyond the end of the exhaust stroke,
and it is still open well into the INTAKE stroke! With BOTH valves then open, some of the fresh gas-air
mixture being sent INTO the cylinder goes completely through and OUT THE EXHAUST! In fact, much
of the rich sound of a high-performance engine is due to this, where raw gas-air mixture goes through the
engine and is then ignited by the extremely hot metal surfaces of the exhaust manifolds!
But this is clearly extremely wasteful of the precious gas-air mixture that drivers pay for at the gas pump! If
you think about it, if an engine were IDEAL, there would be virtually NO exhaust sound at all, even without
any muffler. The already completely burned-up end products of the cylinder combustion would simply be
squeezed out of the cylinder as the piston rose.
So an engine’s overall efficiency is also affected by the valve timing and duration. This subject is very
complex because both sides of the street are involved. IF an engine is to be designed to produce maximum


power, then it is important to get rid of as much of the old exhaust gases as possible in order to get more fresh
gas-air mixture into the cylinder to burn. This can be done by greatly INCREASING the length of time that the
valves are open. It is essentially conceded that a significant amount of fresh gas-air mixture goes
through the engine unused, in order to be able to create the absolute maximum amount of power. That
means that an engine that is set up for extra power is also WORSE on fuel efficiency. It might have seemed
that the opposite should be true, but the REASON that the engine creates more power is because a LOT
more fuel-air mixture goes through the cylinders, and the fact that a good deal of that is lost is ignored!
Manufacturers therefore design very conservative camshafts for their vehicles to be sold, but for their
racecars that look the same, they have very different camshafts in them! Gear heads know that there are
STOCK camshafts, STREET camshafts, and various levels of RACING camshafts. When the engine has a
STOCK camshaft, it idles smoothly and starts easily. With the most extreme racing camshafts (such as used
in Dragsters), as peculiar as it sounds, the valves virtually never close! Both valves are ONLY closed for a
very brief time during the early part of the POWER stroke, in order to use the generated power to drive the
piston downward. Beyond that, one or the other or both valves are at least partially open at all other times!
Anyone can instantly HEAR the effects of an engine with any exotic camshaft, because of that effect
mentioned above regarding the rich sounds of exhaust when a lot of fuel is burning IN THE EXHAUST
HEADERS! Such engines are also nearly impossible to cause to idle (with the valves rarely both being
closed!), and so such engines tend to need to spin at 2,000 rpm or more to keep running (rather than the 550
rpm common in conventional cars with stock camshafts). Finally, a normal starter motor is only able to spin
an engine a little faster than the needed 550 rpm for stable idling, so when you have an engine that cannot
idle below 2,000 rpm, starting it is a real problem. Around 40 years ago, some creative drag-racers
discovered that they could cause a standard starter to spin fast enough if it was powered by two or three or
even four batteries in series, instead of the standard one battery.
In any case, the central point is that all camshafts have shapes which were developed by experimental
results! Thousands of failed designs eventually narrowed it down to the cam lobe shapes that are now used.
Amazingly enough, there is virtually NO theoretical basis for almost anything about a camshaft! It was all
Trial and Error! And after a hundred years or so, they have found cam lobe shapes that seem to be as good
as they can be, whether for economy or for power or anywhere in between. A Physicist goes crazy when
some technology is advanced simply by an endless number of bad guesses! We prefer that there actually be
some REASON and LOGIC behind trying new variants!

The interesting idea!

For 130 years, the engine valves have been pushed open by camshafts and against very strong valve springs.
No one seems to have ever explored any other possibilities!
The thought that occurs to me is to get rid of the camshaft completely! Install VERY POWERFUL
electrical solenoids. It seems certain that a 100-watt or 1000-watt solenoid should be able to OPEN a valve
VIRTUALLY INSTANTLY! A second similar solenoid should be able to CLOSE that valve just as fast.
Rather than the existing situation where each valve GRADUALLY opens due to the leverage of the
camshaft lobe, this concept would allow IMMEDIATE AND FULL FLOW. A standard camshaft lobe
causes each valve to follow a (roughly) sinusoidal path regarding being opened. A mathematical integration
of that motion shows that the actual total airflow is only around HALF of what would theoretically be
possible. So, it seems to me that if extremely strong solenoids forced the valves to SNAP open and closed,
almost every aspect of engine performance should improve ENORMOUSLY!
• There would be NO wasted gas-air mixture passing through the cylinders, because those two
valves would NEVER both be open at the same time! Better fuel mileage.
• The exhaust valve would NEVER open until AFTER the POWER stroke was totally completed,
so an increase in the net power output of the engine should result.
• With the intake and exhaust valves being WIDE OPEN instantly, far easier and better flow of fuel
INTO the cylinder should occur, meaning greater engine power output, and far better purging of
exhaust gases should also occur, allowing more available volume in the cylinder for the next
incoming INTAKE stroke.
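The integration claim above can be checked numerically, under the simplifying assumption that flow is directly proportional to valve lift. A pure sinusoidal lift profile averages about 64% of full-open; the "around half" quoted above is in the same ballpark once real flow losses at small lifts are counted, which this sketch does not model.

```python
import math

# Sample a sinusoidal valve-lift curve over one open-close event and
# average it, as a fraction of the full-open (lift = 1) position.
N = 10_000
lift = [math.sin(math.pi * i / N) for i in range(N + 1)]
average_fraction = sum(lift) / len(lift)

print(f"average opening vs. full-open: {average_fraction:.2f}")
```

The analytic answer is 2/π, about 0.64, which is why an instantly snapping solenoid valve (always at lift = 1 while open) would pass substantially more total flow than a cam-driven valve with the same open duration.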


BETTER fuel mileage AND much greater power production! It seems to me that Detroit Engineers should
have thought of this 50 years ago! But the concept of an engine without a camshaft is probably too “outside
the box” for the traditional thinking of corporate engine designers! Oh, well! But that would certainly be
one of MY first areas of exploration if I had authority in Detroit! It sure sounds extremely obvious to me!
Granted, there might be technical problems that cause it to not be usable, but if Detroit is spending
countless billions on E-85 engine designs (dead meat!) and Hydrogen fuel-cells (20 to 50 years from now)
and battery power (a 1980s-90s concept which had dismally failed), spending a few bucks to try really
high-powered solenoids seems worth trying!
If you have actually followed all of this, you now pretty much know most of the design basics in case you
ever decide to invent your own engine for your car! Very few people seem to have even heard of much of
this, and very few auto mechanics seem to know about these things or understand them. I sort of wonder
how many of the engineers at the automakers really know the physics behind what they make blueprints
for! The mysterious way that large free-flowing radiators gave way to the smaller obstructed radiators of
today makes me wonder if they had really understood these things before 1980 or so! I would hope that
engine designers of 1910 knew most of these things, because it is all just simple physics! At least they
SHOULD HAVE KNOWN!
For discussion's sake, consider a hypothetical situation resembling the last drawing shown above. The
crankshaft throw is fully horizontal, for the greatest possible geometrical mechanical transfer of torque to
the crankshaft. Imagine that the full 6300 pound downward force on the piston could be applied under these
circumstances. The torque transferred to the crankshaft would be 6300 * 1.0 * 0.146 or 920 foot-pounds of
torque! This rather obvious result is many times higher than any actual automotive engine can develop! It
would also be relatively constant, and would not decrease at high or low engine speeds.
This geometrical mechanical advantage was a standard feature of the old steam locomotives, where
the entire available steam force was always applied at the best possible mechanical advantage. In
comparison, internal combustion engines are rather pitiful regarding mechanical efficiency! However, this
hypothetical arrangement is not possible in a normal automotive engine. It is easy to see from geometrical
analysis that by then the piston has necessarily dropped exactly halfway down the cylinder, with the loss of
almost all compression advantages, and there is no flexibility on this point.
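The geometry here is easy to check numerically. This little sketch (my own, using the crank throw of 0.146 ft from the example above) computes torque as piston force times throw radius times the sine of the crank angle, which is what makes the fully horizontal (90°) throw the best case:

```python
import math

def crank_torque(force_lb, throw_ft, crank_angle_deg):
    # Torque = piston force * throw radius * sin(crank angle).
    # At 90 degrees the throw is horizontal: maximum leverage.
    return force_lb * throw_ft * math.sin(math.radians(crank_angle_deg))

print(round(crank_torque(6300, 0.146, 90)))  # 920 ft-lb, the best case
print(round(crank_torque(6300, 0.146, 10)))  # ~160 ft-lb just past TDC
```

The second line shows why real engines do so poorly: near TDC, where cylinder pressure is highest, the leverage is tiny.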
It is not commonly known, and certainly seldom published, that the very best experimental automotive
internal combustion engines are only around 28% efficient, when comparing the energy in the gasoline
with that actually developed in the spinning crankshaft. Many of the common automobile engines today are
only around 21% efficient. (This is actually considered good, since common automotive engines of 1970
had BELOW 15% thermal efficiency! It has actually risen a little since then.)
"Ground transportation vehicles are powered, by and large, exclusively by internal-combustion engines. In
passenger vehicles in particular, the thermal efficiency of the [engine] cycle is of the order of 10 to 15
percent."
from Marks' Standard Handbook for Mechanical Engineers, Tenth Edition (1995), page 9-29.
(That particular passage had been composed for an earlier edition of Marks' in the late 1970s, and the
number had gotten somewhat outdated by the 1995 edition.)
In the discussion above, we have seen WHY the overall efficiency is so dreadfully low for ICEs. The
cooling system MUST get rid of around 40% of the fuel's energy, just to keep the engine from melting
down or warping and failing. And the exhaust gases MUST carry away around another 40% of the energy
from the fuel. That only leaves around 20% which can be converted into useful mechanical energy. Yes,
tweaking the exhaust system to reduce hot exhaust gas flow can help, but that also restricts the flow of
air/oxygen INTO the cylinders and creates more work for the pistons to do in pushing the gases out.
Likewise, adding a turbocharger (a supercharger, powered by the exiting hot exhaust gases, that increases
the amount of oxygen/air pushed into the cylinders) generally DOES have a positive benefit, but
the improvement due to having more fuel-air to burn has to overcome the significant power required to
force the exhaust out even harder in order to spin the turbine in the turbocharger. No free lunch!
By increasing the temperature of the thermostat, in other words by reducing the effectiveness of the cooling
system and making the engine run hotter, a SLIGHT improvement in fuel economy is achieved. However,
the hotter engine tends to heat the incoming air, which REDUCES the air density and therefore reduces the
power produced by the engine. Now you know WHY the engine seems to have more power if you replace
the modern 195°F thermostat with a 165°F one, but the engine creates more pollution due to poorer burning
and it also has worse gas mileage.
Engine manufacturers have come up with many dozens of different ideas to try to (incrementally) reduce
the heat carried away by the cooling system and/or the heat carried away in the exhaust. But as just noted,
all such changes tend to have negative effects as well as the desired positive ones. And so the fact that
most vehicles now on the road have around 21% overall thermal efficiency is NOT likely to significantly
change, IF ICEs are used in the future.
Comment: You have certainly noticed that car manufacturers have been trying to explore hybrid cars,
electric cars, fuel cell (hydrogen) cars, ethanol (E-85) (for a while) and many goofy ideas. Yes, they are
partly doing that because the public is wound up over all the energy issues in the news. But doesn't it seem
strange that they are spending billions of dollars on ideas which never seem to work out? There IS a reason,
which they never bother to tell us about! Around 2004, I discovered some PUBLISHED reports by the Oil
Institute and other related organizations, which presented the data on consumption, usage and supplies of
fossil fuels. It scared the daylights out of me! Those (published) reports were somewhat tricky in
how they presented the data, where it was difficult to compare the values on consumption, usage
and supplies (for each country and each year), but once the data is converted into the same units, the
SUPPLIES are VERY low. That data indicated that the US (then) only had enough known petroleum to
supply our current needs for just over FOUR YEARS (if no imports were made). The people who think
natural gas is the answer for the future would see that only EIGHT years of supply of that was in the
ground under America. That data (now somewhat out of date) is at Energy Supplies. SEE why the
automotive engine manufacturers are trying to find some way to power the products they hope to sell in the
future?
YOU can actually confirm the overall efficiency for yourself with your own car! I will use the example
of one of my Corvettes. At a constant 60 mph on a straight and level Interstate highway, I get around 25
mpg, which sounds GOOD for a Corvette! OK. According to GM information, the frontal area of the car is
around 19 square feet, the aerodynamic coefficient of drag (due to the shape of the car, and which is fairly
constant for different speeds) is 0.330, and the tire rolling resistance coefficient is around 0.015 (depending
on tire type, inflation pressure, temperature and speed). From this we can calculate that the aerodynamic drag
at 60 mph (88 ft/sec) is 19 * 0.330 * (88)² / (13*32) pounds of force (the divisor corresponding to an air
density of about 1/416 slug per cubic foot), which gives 116.7 pounds of aerodynamic drag at 60 mph (at 70
mph, it is easy to calculate that it rises to 158.9 pounds). Tire resistance drag is 0.015 * 3200 pounds (the
vehicle weight) or 48 pounds at 60 mph (and around 60 pounds at 70 mph). This makes the total drag 116.7
+ 48 or 164.7 pounds at 60 mph (and 218.9 pounds at 70 mph) (and 51.9 + 32 or 83.9 pounds at 40 mph).
Clarification Note: Many articles and web pages, and even many respected textbooks (including Marks'),
contain a serious error regarding the subject of the previous paragraph. They apparently see the V² in the
formula for aerodynamic drag, and they must believe that it is therefore referring to some relationship to
kinetic energy (which is ½ * M * V²), so they add in a 0.5 in their formulas! Nope! It only turns out that it
is a fluke that there are two Vs in there and they happen to be identical! The relationship is actually one
regarding the analysis of the momentum of the air colliding with the frontal area of the vehicle.
FIRST, we are HITTING the air with a velocity of 88 ft/second. SECOND, the AMOUNT of air that we are
hitting is given by the density of air times its cross-sectional area, times its "length" (per second). The
coefficient of drag is essentially telling how quickly the air gets out of the way of the vehicle! So the
correct formula is D = ρ * CD * S * V², indicating the usual designations for the air density rho, the
coefficient of drag, the frontal area of the vehicle and the air velocity. The formula might be more clearly
written as D = CD * V * (S * V * ρ), where the contents of the parentheses are simply the mass-flow rate of
the air, each second (in slugs per second). Multiply this by the velocity and you end up with a force!
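To make the arithmetic above reproducible, here is a small sketch in the momentum form the text uses (D = ρ * CD * S * V², with no ½ factor); the 1/(13*32) density figure is taken straight from the example:

```python
def aero_drag_lb(frontal_area_ft2, cd, v_ftps, rho_slug_ft3=1 / (13 * 32)):
    # Momentum form used in the text: D = rho * Cd * S * V^2 (no 1/2 factor).
    return rho_slug_ft3 * cd * frontal_area_ft2 * v_ftps ** 2

def tire_drag_lb(weight_lb, rolling_coeff=0.015):
    # Rolling resistance: coefficient times vehicle weight.
    return rolling_coeff * weight_lb

aero = aero_drag_lb(19, 0.330, 88)      # the Corvette at 60 mph (88 ft/s)
total = aero + tire_drag_lb(3200)
print(round(aero, 1), round(total, 1))  # 116.7 and 164.7 pounds
```

Plugging in 70 mph (102.7 ft/s) reproduces the higher figures quoted above in the same way.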
At 60 mph, the total required horsepower to overcome this and maintain a constant speed is 164.7 * 88 / 550
or 26.4 horsepower (at 70 mph it is 40.9 HP, a considerably higher drag load!). (The 550 converts foot-
pounds per second into horsepower.) A horsepower is equivalent to 2544 Btu/hr (from above), so this is
67,200 Btu/hr (26.4 * 2544) of needed (or usable) output. In one hour of driving at that constant speed, we
would therefore use up an amount of energy equal to 67,200 Btu (at 70 mph, 104,000 Btu).
A gallon of nearly any type of gasoline contains around 126,000 Btu of chemical energy. In the hour of
driving, I would cover 60 miles and get the 25 mpg, which means that I would use 60/25 or 2.4 gallons of
gasoline. That much gasoline has 126,000 * 2.4 or 302,000 Btu in it. Since the car used 67,200 Btu to
maintain that 60 mph constant speed, the overall thermal efficiency is 67,200/302,000 or 22.2%.
At 70 mph, I tend to get around 21 mpg, and therefore would use up 3.3 gallons in traveling those 70 miles,
for a gasoline energy content of 420,000 Btu. So we would have 104,000/420,000 or around 24.8% overall
thermal efficiency. Interestingly, the thermal efficiency is actually higher at the higher speed, but it is more
than overcome by the far greater total drag, which is why gasoline mileage goes down at high speeds.
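The whole mileage-to-efficiency chain can be folded into one small function (my own sketch, using the same constants as the text: 550 ft-lb/s per horsepower, 2544 Btu/hr per horsepower, and 126,000 Btu per gallon):

```python
def overall_efficiency(drag_lb, mph, mpg, btu_per_gal=126_000):
    # Useful output: drag force times speed, converted to Btu/hr.
    v_ftps = mph * 5280 / 3600
    hp = drag_lb * v_ftps / 550
    useful_btu_hr = hp * 2544
    # Fuel input: gallons burned per hour times energy per gallon.
    fuel_btu_hr = (mph / mpg) * btu_per_gal
    return useful_btu_hr / fuel_btu_hr

print(round(100 * overall_efficiency(164.7, 60, 25), 1))  # 22.2 percent
print(round(100 * overall_efficiency(218.9, 70, 21), 1))  # 24.8 percent
```

Feed in your own car's drag estimate, cruising speed and observed mileage and you can repeat the experiment described above.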
A primary reason for this disappointing efficiency is the unfortunate mechanical arrangement where the
majority of the force applied to the top of the pistons is NOT able to be transferred into torque in the
crankshaft, but instead attempts to drive the whole crankshaft down out of the engine. (Since pressure
remains in the cylinder, it eventually gets to a point of having a better mechanical advantage, but by then
the pressure in the cylinder has dropped quite a bit due to the piston lowering and the cooling system's
effectiveness.) A large amount of wasteful frictional and cooling-system heating is the result of this inherent
characteristic of automotive engines, and the engine bearings take a serious beating. The engine then needs
a variety of systems (lubrication system, cooling system, etc.) to discard all this heat energy that is
wasted.
We mentioned above that enormous amounts of heat must be removed (and discarded) from the cylinder
walls and heads, an amount generally equal to 100% to 150% of the rated output of the engine. This should
seem a shocking statement, that a 200 HP engine necessarily wastes 200 HP to 300 HP of energy through
its cooling system! A lot of this has to be wasted because, when the explosion first created the maximum
dynamic pressure in the cylinder, the piston had nowhere to go, being virtually at TDC. (This is essentially
the definition of the Otto cycle engine, that of constant-volume combustion.) So those 4000°F gases are
trapped above the piston, surrounded by a really efficient cooling system! Before the crankshaft has
advanced enough degrees to start being able to transfer useful torque, the cooling system
has necessarily already greatly cooled off the hot gases! Does this seem like a poor design, or what?
Enormous waste of energy is built into the design! ALL internal combustion engines face this situation!
There is another way to indicate this poor overall efficiency of automotive engines. Consider a small,
reasonably aerodynamic automobile, with an engine that is considered efficient, traveling at a constant 60
mph on a highway, with no significant wind. Because of the alleged efficiency, this vehicle gets 30 miles
per gallon at that constant speed.
The total vehicle drag (F) can be shown to be around 140 pounds, 110 of which are due to aerodynamic
drag and 30 of which are due to tire resistance frictional losses. The total actual power needed to overcome
this drag is given by F * V (velocity). Our numbers are then 140 pounds * 88 feet/second, or around 12,300
ft-lbs/sec. Dividing this by 550 converts it to horsepower, or around 22 actual horsepower. (Very
streamlined cars will have even lower aerodynamic drag, and so this required power could be even less.)
Since this vehicle has a 30 mpg gasoline consumption, it would use up exactly two gallons of gasoline to
travel the 60 miles covered in one hour. Each gallon of gasoline contains about 126,000 Btu of available
chemical energy. Therefore, two gallons contain 252,000 Btu, so the vehicle is using 252,000 Btu/hr. Since
2544 Btu/hr is equal to one horsepower, this amount of energy in the gasoline represents around
100 horsepower.
The vehicle / engine efficiency would then be 22 hp / 100 hp, or around 22%, which confirms the earlier
statement about the overall efficiency of this equipment.
Another tangent!
Long ago, it occurred to me that NO ONE actually NEEDS or USES the 451 horsepower of a recently
advertised car! Such great power is only ever used for less than 30 seconds at a time. Otherwise,
most cars only need around 40 horsepower or less to cruise at constant speed on an expressway. Detroit
never seemed to realize that, and they designed many vehicles with huge engines that were tremendous
gas-guzzlers.
Around twenty years ago, in the late 1980s, I had an Oldsmobile Cutlass Ciera, which was a front-wheel-
drive car. I also had the carcass of an extremely old Volkswagen van from the 1960s. Something then
occurred to me that has amazingly seemed to have never occurred to anyone in Detroit! If you have
followed all this stuff up to here, this should make incredibly good sense to you! The rear axle of the Ciera
didn't actually do much other than support the rear end of the car. So I dragged the Volkswagen "pancake"
engine and transaxle across my yard and saw that it probably would have fit under the rear of the Ciera (if I
removed the gas tank and put it somewhere else, as it was the ONLY apparent interference!).
So I was considering adding a second engine to the Ciera! A TINY engine! As near as I could tell, the
pancake engine was flat enough that no actual changes should have been necessary to the Ciera to allow it
to remain at the same height. In other words, from an appearance point of view, the Ciera would have
remained absolutely normal looking!
Have you caught on to why I thought this might be a good idea? At the time (the late 1980s) I was one of
very few people who seemed to really care about fuel efficiency. But I thought I had come up with a really
good and really obvious solution. It seems to me that it is still just as valid today!
I was aware that my Ciera generally got around 17 or 18 MPG on the highway, but it was fun to drive
because the 3.8 liter engine had a decent amount of power.
So I intended to rig up the gas pedal so that if it were pressed HALFWAY DOWN OR FARTHER, the
(front) Ciera engine would start up, but otherwise it would NEVER actually be running! So during normal
driving, the 1.2 liter Volkswagen engine would have powered the car. I was confident that it would have
gotten at least 30 MPG, and reasonably likely around 35 MPG on the highway. The little engine had enough
power to easily maintain the Ciera at 60 mph highway speed. So the vehicle would have gotten about the
best gas mileage of any vehicle of that era (the late 1980s)!
Now say that I wanted to do a hole-shot from a stoplight, or wanted to pass a car on a two-lane road. The
Ciera engine would start up, and I would actually have had TWO engines both accelerating the Ciera, a
rear-wheel drive AND a front-wheel drive! It likely would have had better acceleration than any other
Ciera, due to the two engines!
So I would have wound up with a car that LOOKED absolutely normal, had acceleration at least as good as
original and maybe better, and yet possibly TWICE the gas mileage! Cool?
It turned out that my life got extremely busy and I never got around to doing that interesting experiment!
And since shortly after that I started driving my two Corvettes, which are rear-wheel drive, no personal
motivation remained! Also, I am not sure that I would have wanted to maltreat a Corvette quite that badly!
I realize that adding an entire extra engine would add to the cost of manufacturing vehicles, when the
manufacturers hire people exclusively to find ways to eliminate a tenth of a penny from the cost of the
cigarette lighter! So maybe that is why they have never even thought of this concept. It seems pretty
obvious to me, though!
I must admit that I had earlier personally tried a truly stupid idea that was vaguely similar, and maybe those
bad memories caused me to think of it during the 1980s. When I was in college, I drove two 1956 Ford
convertibles. That year happened to have a very tall trunk. The motors in those cars were considered
decently powerful, being V-8 292 cid engines. But I was a young kid who liked to fool around with cars.
Well, I happened to have rebuilt a Mercury 383 engine, and I had toyed around with the idea of replacing
the 292 with the 383 for, as Tim Allen would say, "more power!" But I thought I came up with a better idea
yet! I measured and measured and found a way to fit the large 383 engine inside the rear trunk of the '56
Ford! I decided to install it backwards, with the idea of having a normal (but much shorter) driveshaft. I got
two spur gears (an especially dumb idea!!!), and put one on the snout of the 292's normal driveshaft, and
the other was rigidly mounted to the actual second driveshaft. The fact that the rear engine rotated backwards
was then a good thing, because the spur gears rotated in opposite directions in order to mesh together,
where either or both engines could then drive the car.
Well, it actually worked, for a few days! If I ONLY used the 292, and left the other transmission out of
gear, only the second driveshaft rotated, and everything worked pretty normally. And I drove it a little with
ONLY the 383 powering it. It was a little flaky but generally worked fine, although I never really pushed it
hard. The car's handling was VERY strange, as the big old engine's 500 extra pounds so far in the rear
made it somewhat spooky to drive.
I was still young, and my knowledge of physics and engineering was still limited. That's my story and I am
sticking to it! But I had not realized that the big-bore 383 had a torque curve that peaked at much lower
engine speeds than the smaller 292's. On the single day when I fired up both engines and thought I was
going to have really impressive acceleration from over 500 hp, that little detail very quickly sheared off all
the teeth of the gears! As I was sitting just a foot away, I was hit by several of them as they exited the
scene! I guess there was such a great torque difference between the very different engines that it happened
at very low speed, which might have kept me from being killed, as might have happened if they had
sheared off at high speed, when they were spinning very fast.
I always wondered after that what might have happened if I had been more conservative and put a second
292 in the trunk! But I suspect my knowledge of gears at the time was not sufficient even for that. So that
car pretty much just sat after that until I eventually took the rear engine back out.
But maybe that experience caused me to even think that a second engine might make sense, many years
later. By then, I had also seen in hot rod magazines where a few people had done what I had tried to do, but
far more successfully!
See below, where a different variant of this idea now seems extremely interesting! Instead of a rear
gasoline engine, two severely over-driven electric car starter motors! In that case, the actual engine (FWD)
would be smaller, barely enough to maintain highway speed, maybe 50 hp. The electric motors would be so
severely over-driven (by feeding them 36 volts or 48 volts instead of 12) that they might provide a
BRIEF BURST of around 400 hp, but only for a maximum of 10 to 15 seconds (or else they would
overheat and self-destruct). The premise would be an ECONOMICAL vehicle that LOOKED NORMAL
but had a SMALLER than normal engine for really great gas mileage, while having the capability of a few
seconds of spectacular acceleration, like gear heads dream about!
V-8 versus in-line engines
People sometimes notice that many of the higher-power automobiles have V-8 or V-6 engines, while nearly
all large trucks have in-line engines. WHY is that? Well, for a disappointing reason: stylists!
Until the 1940s, automobiles were TALL! The HOOD had to be tall because the vertical structure of inline
engines required a lot of vertical space under the hood. But then stylists started wanting to sell cars that
were sleeker and lower. A V-8 engine has its cylinders at a significant angle, which actually allows the
engine to be designed with several inches less overall height. The extra pistons were also popular in
providing more power, but there had been I-8 and even I-12 engines used earlier. An additional benefit for
the stylists was that the V-8 engine was SHORTER (front to back) than even an I-6, for additional
flexibility regarding styling.
There is no significant difference in overall efficiency between a V-8 and an I-6, and if they have the same
number of cylinders and same displacement, the performance is very, very similar. STYLING was
essentially the only real reason why V-8s took over for several decades!
In trucks, which are DESIGNED to be tall, there is no benefit in trying to save a couple vertical inches, so
straight-line engines are nearly universal. The only other consideration is that a V-8 engine has more
moving parts, which will eventually wear out and fail. Since trucks are intended to go as far as possible,
that is another reason for considering an inline engine!
Exhaust sound
Even I see this as a peculiar subject for a theoretical physicist to feel needs describing! But nearly any
good gear head can hear a small-block Chevy engine zoom by and KNOW which specific engine is in that
car. How is this possible? It is almost entirely due to just two factors: the camshaft in the engine and the
exhaust system. With a street cam, the valves are not open excessively long, which lets the engine start and
run reliably. But that means that minimal fuel-air mixture passes THROUGH the cylinder without getting
burned there. That minimal amount of fuel-air mixture gets into the very hot cast-iron exhaust header, where
it gets ignited, and there is MODERATE exhaust sound, which the muffler then pretty much muffles! Now,
with a more aggressive camshaft, the valves stay open longer, and that results in both the processing of
more gas-air mixture for greater power, but also more unburned gas-air mixture getting past the cylinder.
This all still ignites in the hot exhaust header, and so the exhaust is always louder.
But then the design of the exhaust headers comes into play. The cylinders fire in a different sequence in
different engines. This all results in four separate surges of such unburned fuel-air mixture entering (each)
exhaust manifold from four different cylinders. They do NOT fire equally spaced in time! FACTORY
exhaust manifolds rarely considered any issues of the surges interfering with each other, and different
manufacturers' different firing sequences resulted in exhausts that therefore sound different. The
exhaust manifolds (and engine firing sequences) on the small-block Chevy engines of the 1950s and 1960s
were better designed than the others, which both created a unique sound pattern and also greatly reduced
the power needed to force the exhaust out. You probably do NOT want to get a more technical explanation
of this now, which is VERY complex.
For the highest-performance engines, custom-designed exhaust headers are used. Not all of them are
designed really well, and some seem to have been designed to be decorative! But the really good ones were
engineered to permit each surge of exhaust gases to arrive at the joining point without any interference
from any surge from any other cylinder. An interesting engineering problem to solve, and many companies
that sell custom exhaust manifolds did not seem to do the necessary calculations! (Personal opinion!)
Finally, many dragstrip engines have SEPARATE exhaust headers for each cylinder, to completely
eliminate any possible pressure conflicts that might use up some horsepower.
By the way, exhaust header design is SPEED DEPENDENT. Few of the designers seem to know that! The
well-designed headers are designed so that at red-line engine speed, each pressure surge from a cylinder is
able to clear before the next pressure surge arrives from a different cylinder.
Flywheel
All internal combustion engines need to have a flywheel. The fact that the explosive forces inside the
cylinders are brief and irregular means that there is NOT a consistent torque acting to turn the crankshaft. A
heavy enough flywheel smooths out the irregularity. It also has another effect which will be noted
momentarily.
Early cars had VERY heavy flywheels. Whether hand-cranked or with electric starters, that aided the
starting of engines, as it permitted variations in how much gasoline had gotten into each cylinder, by
allowing ANY cylinder which fired to increase the spinning speed so that the other cylinders could start
behaving correctly.
Before around 1954, all cars had stick transmissions. That meant that when the clutch was pushed in, the
engine could run with no external load. A very heavy flywheel had the added benefit of keeping the engine
from blowing itself apart if the gas pedal was pushed all the way to the floor with the clutch released. The
flywheel's rotational inertia was designed to be enough that the engine had to be floored without load for
many seconds before it might rev up above its redline speed. At that time, engines were underpowered and
also built like tanks, so they really rarely could rev up fast enough to do themselves damage anyway.
Manufacturers LIKE it if their new vehicles do not self-destruct!
In the 1950s and 1960s, muscle cars started being manufactured. In general, the manufacturers chose to
install very heavy (thick) flywheels on their vehicles, so that the public would not be likely to over-rev
any of their vehicles and generate bad public relations. But they installed essentially identical but thinner
flywheels in vehicles that were considered high-performance. Why? No FUNCTIONAL reason, actually.
The thinner flywheels allowed the engines to run rougher, a disadvantage to the general public.
Finally getting to the point here! The thinner flywheel had less rotational inertia (I), which meant that the
TORQUE created by an engine with the clutch disengaged WOULD REV IT UP FASTER! If a
moderately noisy exhaust system/muffler like a glass-pack was used, the SOUND of the engine revving up
unexpectedly fast SOUNDS like the engine is really powerful! It's quite an interesting change, and the
sound effects are quite impressive!
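The physics of the trick is just α = τ / I: for the same engine torque, angular acceleration scales inversely with the flywheel's rotational inertia. The numbers below (a 40 lb stock versus a 25 lb thin flywheel, both 1.2 ft in diameter, and 300 ft-lb of torque) are purely illustrative assumptions of mine, not factory figures:

```python
def disc_inertia(weight_lb, radius_ft):
    # Solid disc: I = 1/2 * m * r^2, with mass in slugs (weight / 32.2).
    return 0.5 * (weight_lb / 32.2) * radius_ft ** 2

def rev_up_rate(torque_ftlb, inertia_slug_ft2):
    # Angular acceleration in rad/s^2: alpha = torque / inertia.
    return torque_ftlb / inertia_slug_ft2

thick = disc_inertia(40, 0.6)  # hypothetical stock flywheel
thin = disc_inertia(25, 0.6)   # hypothetical thinner "performance" flywheel
print(round(rev_up_rate(300, thin) / rev_up_rate(300, thick), 2))  # 1.6x faster
```

With the radius the same, the ratio is simply the weight ratio, which is why a modestly thinner flywheel makes an unloaded engine sound so much livelier.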
Conveniently, both Ford and General Motors (and I assume Chrysler) used essentially identical flywheels
in nearly all their vehicles for many years. Back then, when friends would bring their cars to me to improve
them, they rarely had enough money to buy the big carburetors and improved intake manifolds and exhaust
headers and camshafts to ACTUALLY make their cars hotter. I did not have to charge them too much to
replace the stock (thick) flywheel with an identical one from a performance car (i.e., thinner) and
also replace the stock muffler with a glasspack. When they would first sit in their car and rev it up, they
were always amazed that it was still their car! Their IMPRESSION was that it sounded far more powerful!
The glasspack muffler was so that when they were driving (in other words, when the clutch was engaged and
the engine was loaded), the fact that the engine was actually no more powerful would not be obvious, the
louder exhaust distracting their attention.
Now, there IS a down-side to using a thinner flywheel, which I discovered one day back then. A VERY cute
girl kept insisting on sitting in my (severely modified) car. She talked me into letting her start the engine,
with the car inside my garage. I had installed a VERY heavy-duty clutch, and I was pretty sure that she
could never have pushed it down to do any shifting, so the car was not going to go anywhere. But the crazy
girl pushed the gas pedal to the floor and kept it there! With the lighter flywheel and very powerful engine,
it revved up very fast to speeds which seemed likely to destroy it. Fortunately, I was sitting right there
and I grabbed the ignition key and turned it off, which was certainly the only reason I did not have lots of
expensive engine parts all over the garage! I never again allowed any girl to start the engine of any of my
hotter cars!
Piston Ring gap
If you have ever rebuilt an engine, you know that the instructions regarding ensuring a proper ring gap
always have exclamation points! Why? Here's why. When you rebuild the engine, all the metal is cold.
Nearly all kinds of metal EXPAND when heated, which includes engine rings. The engine block
never gets much chance of getting very hot because of the cooling system. So we have a situation where the
cylinder does NOT significantly increase in diameter, while the rings inside it are exposed to extremely hot
(well over 3,000°F) gases. So the rings expand, in all dimensions, a little bit. The important dimension is
LENGTH: when the engine is hot, the rings get LONGER, and the specified piston ring gap is based
on the thermal expansion coefficient of the metal of the rings and the expected maximum temperature
they will be exposed to under hard and fast use of the engine. Notice an interesting detail, which no one else
will ever tell you, and which has such minimal effect that it is never noticed: the engine actually
becomes slightly more efficient, that is, has less blowby leakage past the rings, after it has gotten up to
operating temperature than it does when the engine is still cold and the ring gaps are at their greatest.
So IF you did not provide the specified ring gap, then when the engine got hot, the expanding metal of the
rings would have nowhere to go! The softer metal of the piston generally loses this battle, and the engine
either seizes up or comes apart, both making for VERY bad days!
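A rough feel for the numbers, using assumed values (a cast-iron expansion coefficient of about 6.5e-6 per °F, and a guess that the rings run roughly 200°F hotter than the cylinder wall); real gap specs come from the ring manufacturer, not from this sketch:

```python
import math

def ring_gap_growth_in(bore_in, alpha_per_degf=6.5e-6, extra_temp_f=200):
    # Ring circumference ~= pi * bore; extra length when hot =
    # expansion coefficient * length * temperature rise over the bore.
    return alpha_per_degf * math.pi * bore_in * extra_temp_f

print(round(ring_gap_growth_in(4.0), 3))  # ~0.016 in for a 4-inch bore
```

That lands in the same ballpark as the commonly quoted rule of thumb of roughly 0.004 inch of gap per inch of bore, which is why the instructions shout about it.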
Stirling Engines?
I don't really intend to provide a full explanation of the Stirling process here. However, the word seems to
have become a buzzword: when people (try to) talk about alternative energy ideas, the Stirling always
seems to be presented in glowing terms.
The process IS quite interesting and it is definitely unique, which happens to appeal to me personally! But I
would note two facts for anyone who is ready to give somebody lots of money for something that allegedly
uses the Stirling process:
1. The Phillips engine (1936) and the later Stirling engine (1960) which was based on it are more
technically called hot-air engines, in great distinction with the Internal Combustion engines that
we have been discussing here. Both are probably more closely related to the EXTERNAL
COMBUSTION Stanley Steamer steam-powered car of around 1906, in having their heat source
external to the device itself. The Stirling showed a significantly greater overall thermal efficiency
than ICEs can have, but that is mostly due to the fact that extensive heat exchangers can capture
and recover a lot of exhaust heat to be used again. Even though the pictures of Stirlings are very
pretty, in order to operate efficiently they MUST operate at extremely high air pressures
(generally above 1,000 PSI) and at rather high temperatures (commonly above 1,200°F). These
requirements represent some big complications in making useful products. One of the Stirlings that
got a lot of press was around 450 pounds, and it produced 30 hp at 39% overall thermal efficiency,
and 40 hp at 33.3% efficiency. That was a monstrously big engine for producing a disappointing
amount of power!
2. The other fact is just a matter that a Physicist considers! Given that this concept has been around
for more than 70 years, and more than 50 years as Stirlings, I tend to credit the American
industrial corporations with finding ways to make huge profits whenever they see any opportunity.
The fact that NO product yet exists which is based on the Stirling process (and the practical
performance numbers cited above) seems compelling proof that we probably should not
expect to see spectacular applications of Stirlings in our lifetimes! I could be wrong about this, but
I would not invest any money in the prospect!

Flame Speed Propagation

Another way of describing that flame speed characteristic is to say that the pressure increases within the
combustion chamber at a certain rate, such as about 20 PSI per degree of crankshaft rotation (for the average
operating circumstances we have been considering). During the approximate 18° of crankshaft rotation we
have been considering (starting with advanced spark ignition), the pressure rises around 360 PSI, from the
original 120 PSIA compression pressure up to around the 500 PSIA (485 PSIG) we have been discussing.
All the other calculations are the same as above. Again, because of many complexities in the details of how
the flame-front progresses and affects the remaining gas-air mixture, a constant value of such a number is
not precisely accurate. Even the flame-front speed is not constant during the combustion process because, as
the local temperature and pressure increase due to the shock wave of the mixture that already burned, the
flame-front speed rises. Therefore, the very late stages of the combustion process occur more rapidly than
we have suggested here. However, it permits basic calculations and analysis. It also presents a way of seeing
how and why the pressure and force are greater during the later stages of the combustion process.
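The pressure-rise bookkeeping above can be written out explicitly. These are the text's own round figures; the constant rise rate is the text's simplification, and 120 + 360 = 480 PSIA, consistent with the "around 500 PSIA" figure:

```python
# The pressure-rise arithmetic from the paragraph above, using the text's
# round figures (the constant rise rate is itself a simplification).

rise_rate_psi_per_deg = 20.0    # average pressure rise per degree of crank rotation
burn_duration_deg = 18.0        # crank rotation from spark advance to peak
start_pressure_psia = 120.0     # compression pressure at ignition

rise = rise_rate_psi_per_deg * burn_duration_deg
peak_psia = start_pressure_psia + rise
print(f"pressure rise: {rise:.0f} PSI, peak about {peak_psia:.0f} PSIA")
```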
Many Engineers have spent their lifetimes in trying to ensure more smooth progress of the Flame Front
inside the cylinder. Honda (CVCC) and Ford both promoted methods of swirling the air inside the cylinder
(in rather different ways) to try to improve this characteristic, and many other approaches have been tried.
Back in the Stone Age, all engines were FLATHEADS. But the location of the valves in a Flathead greatly
limited the airflows and performance and pretty much everything else. Overhead valve engines created a
great improvement, which was adopted by all engine manufacturers. Other improvements have generally
been more subtle.
The actual thorough presentation of the mathematics follows the logic and the examples above. There are
some additional complications. (1) The actual angle between the connecting rod and the tangent to the
crankshaft throw is always slightly larger (better) than in the simplified geometry presented above. See
Section 3 in Mark's Standard Handbook for Mechanical Engineers for a good example of the geometrical
considerations and the force diagrams. (2) A lot of characteristics are constantly changing. A reasonably
accurate analysis should probably include calculations like those above for every degree of crankshaft
rotation, considering the instantaneous volume of the combustion chamber and the instantaneous pressure
due to the explosion, as well as the angle of the connecting rod and that of the crankshaft throw. The
instantaneous torque transferred to the crankshaft would then be known for every degree of rotation. A
numerical integration could then determine the average (practical) torque that is developed. (3) Exhaust
valves begin opening even while the power cycle is still proceeding, so that they will be adequately open
when the exhaust (upward) stroke begins. A tradeoff in engine design is that the old waste gases must be
removed, and then the entire combustion chamber filled with new fresh gas-air mixture from the intake
valves, all in very small fractions of a second. It is an imperfect arrangement. Some exhaust gases always
remain in the cylinder, keeping some fresh gas-air mixture from ever being able to enter. In both cases, the
valves are always slightly open during the early stages of compression (intake valves) and the late stages of
power (exhaust valves). All of these considerations act to reduce the actual amount of power that can be
developed in a real engine.
This practical (average) torque is also lower than the maximum numbers presented here. In a V-8 4-cycle
engine, each piston is responsible for developing torque over a 90° range of crankshaft rotation before the
next piston can take over. We have generally been discussing maximum instantaneous torque for specific
crankshaft positions. It should be clear that the measured torque of any engine will be less, because it
represents the average of torque developed during that entire 90° of crankshaft rotation, during which no
other cylinder is yet firing.
The crankshaft angle torque curves vary greatly in shape for different engine speeds, being very narrow at
low engine speeds and rather broad and fairly constant at high engine speeds. The very narrow angle range
of productive power for an engine at idle combines with the earlier mentioned geometrical disadvantage to
fully explain why automotive engines can stall at low idle speeds.

Hydrogen as a Potential Fuel in Internal Combustion Engines

On first thought, Hydrogen SEEMS to be an ideal fuel for vehicles. It burns with the only resulting product
being water vapor, so it comes across as infinitely Green! Billions of dollars in research is being done to try
to develop a so-called Hydrogen economy for the (distant) future. Sadly, it is nearly inconceivable that it
could ever actually happen, except in impressive test-car demos! Here, if I apply the DANISHA Hall
thruster, it shows better results.
When EXPERTS mention Hydrogen in the future, they do NOT refer to BURNING it! They are talking
about a technology that NASA developed in the 1960s for spacecraft, where hydrogen gas can DIRECTLY
create electricity in a rather exotic device called a Fuel Cell. There is NO flame at all! Theoretically, a really
good Fuel Cell might have nearly 100% efficiency. But even NASA with its unlimited budget never
remotely came close to that. Still, multi-million-dollar fuel cells have been in many satellites, and they
produced the few hundred watts of electricity needed by the electronics onboard the satellite. The dream of
a future hydrogen fuel source is based on enormous advances in the technology, which may be hundreds of
years away, of finding ways to make Fuel Cells that are very efficient, very high power, and very
inexpensive. Don't hold your breath!
The thousands of people who see ways to become rich by selling ANYTHING that refers to hydrogen
simply see ways to take advantage of a public that does not know enough! So people send in their hundred
dollars for some shiny device that has the word HYDROGEN on it, and they think that their vehicle will
run faster, better, you know the pitch! Take the money and flush it down the toilet instead. It will save you
some time and trouble!
Hydrogen has all sorts of DISADVANTAGES regarding being a motor fuel. Primarily, it DOES NOT
EXIST NATURALLY and must be produced, by any of several processes that are all extremely expensive
and high-tech to actually do on any decent scale. We have a presentation specifically about hydrogen, which
presents the facts. An amusing detail is that to provide the same amount of chemical energy in a standard
tankful of gasoline, you would need to tow TWO FILLED semi-trailers of hydrogen! But it happens to
have another disadvantage which relates to the subject of this presentation.
Hydrogen CANNOT simply be MIXED with gasoline, as a lot of people now seem to claim and think!
GASEOUS hydrogen would simply create BUBBLES in the gasoline, and even if that is taken care of, it
provides NO actual power boost benefits at all! And if you think you can afford the equipment to maintain
LIQUID hydrogen at around minus 400°F, good luck! The advertisements for such products never mention
these sorts of details!


Flame Front Speed

Even if all the other hurdles are overcome regarding using Hydrogen as a fuel, it seems to have yet another
disadvantage, one that it shares with most other gaseous fuels: the speed at which a flame front travels is
rather slow for the purposes of conventional engines. With an ideal Hydrogen-air mixture, a flame front can
travel at around 8 feet/second (Mark's Standard Handbook for Mechanical Engineers, Section 7, Gaseous
Fuels, graph). For comparison, a gasoline-air mixture (compressed) creates a flame front speed that ranges
from around 70 feet/second up to around 170 feet/second in normal engines (Mark's Standard Handbook for
Mechanical Engineers, Section 9, Internal Combustion Engines, Flame Speed).
(NOTE: There does not appear to be any data available regarding flame-front speed for Hydrogen gas when
compressed as in a car engine. Therefore, we add the following discussion, which also shows the sort of far
more comprehensive Physics research that is the basis for essentially all the statements made in this
presentation.)

First, everyone is taught in school that Hydrogen "simply" combines with Oxygen in the familiar 2H2
+ O2 ↔ 2H2O. That turns out to be an enormous simplification! There are actually 19 different
reactions that can and do happen! Each releases different amounts of energy (with two of them even
REQUIRING energy to occur!). In general, two or more of these reactions occur in rapid succession,
with the end result being the familiar reaction. Physicists and Chemists analyze ALL of those 19
unique reactions in order to better understand exactly what is going on and why. In fact, the overall
reaction of Hydrogen with Oxygen can occur in two VERY different ways! The DESIRED one is by
burning (conflagration), which has the flame-front speed indicated, around 8 feet/second in the
atmosphere. The UNDESIRED one is by explosion (detonation), which has a flame-front speed of
2,821 meters/second or 9,255 feet/second! That is around EIGHT TIMES the speed of sound and
many times faster than the fastest rifle bullet travels! It is incredibly dangerous when Hydrogen
decides to detonate, and science does not yet have a very complete understanding of why it sometimes
does! Our discussion will be about the DESIRED laminar flame-front process.

Next, the velocity of the (laminar) flame-front is known to be very dependent on many different
variables. Here is an equation that gives the flame-front velocity (speed):

[equation not reproduced in this copy]

(There are actually three different theories which exist to explain the motion of flame-front travel, and
this equation happens to be from the one that seems to be the best. Many of the equations involved are
far more complex than this one. They were generally developed during the 1980s.)

If a number of reasonable assumptions are made, this can be greatly simplified into:

[equation not reproduced in this copy]

The exponents are different for each type of fuel gas, and for Hydrogen they have been
experimentally determined (Milton and Keck 1984) to be 1.26 (temperature) and 0.26 (pressure).

Note that all of this is based on ideal conditions: the perfect proportion of fuel and oxygen, perfect
mixing, etc., and real conditions are often not ideal.
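The equation images did not survive in this copy. A plausible reconstruction of the simplified power-law form, an assumption based only on the two exponents quoted above and on the 8^0.26 pressure-factor calculation that follows, is:

```latex
S_u \;=\; S_{u,0}
      \left(\frac{T_u}{T_0}\right)^{\!1.26}
      \left(\frac{P}{P_0}\right)^{\!0.26}
```

where S_{u,0} (about 8 ft/s for hydrogen-air) is the laminar flame speed at the reference atmospheric temperature T_0 and pressure P_0, and T_u and P are the unburned-mixture temperature and pressure.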


If we assume that an engine has an (actual) compression ratio of 8:1, the pressure increase factor
therefore would be 8^0.26, which is 1.717. The natural flame-front speed of 8 feet/second would therefore
increase to 8 × 1.717, or 13.7 feet/second. We note that some 2004 research in Bergen, Norway shows a
maximum atmospheric flame-front speed for Hydrogen of 2.8 meters/second (9.2 ft/second), which is
slightly higher than the 8 ft/sec cited above.

This is still far slower than the measured flame-front speeds inside gasoline-fired internal combustion
engines (which is generally at least 90 feet/second during most driving). However, the dependence on
temperature causes some improvement in this situation. Hydrogen burns at 2,755°C or 4,991°F. The
heating of the gas occurs gradually during the process of the combustion, but if we assumed that the
hydrogen got up to that temperature, the temperature dependence factor in the equation above would
be around 18 to one. This implies that the COMBINATION of the higher pressure and the higher
temperature MIGHT cause a flame-front speed which is comparable to that known to be in gasoline-
fired internal combustion engines. But it does not appear that anyone has yet actually done such
experiments to validate that statement.
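The scaling argument above can be checked numerically. The 8 ft/s base speed, the 8:1 ratio, the Milton and Keck exponents, and the rough 18:1 temperature factor are all the text's figures; treating the compression ratio as the pressure ratio is the text's own simplification:

```python
# Sketch of the flame-speed scaling discussed above: speed scales as
# (T/T0)**1.26 * (P/P0)**0.26 for hydrogen (Milton & Keck exponents).
# Treating the 8:1 compression ratio as the pressure ratio is a simplification.

base_speed_ft_s = 8.0          # atmospheric laminar flame speed for H2-air
compression_ratio = 8.0        # assumed equal to the pressure ratio

pressure_factor = compression_ratio ** 0.26
print(f"pressure factor: {pressure_factor:.3f}")                     # about 1.717

speed_pressure_only = base_speed_ft_s * pressure_factor
print(f"speed with pressure alone: {speed_pressure_only:.1f} ft/s")  # about 13.7

# Temperature effect, IF the unburned gas reached the text's rough 18:1 factor:
temperature_factor = 18.0
combined = speed_pressure_only * temperature_factor
print(f"speed with both effects: about {combined:.0f} ft/s")
```

The combined figure lands in the 200-plus ft/s range, which is why the text says the result MIGHT be comparable to gasoline's 70 to 170 ft/s.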

If you have been following this reasoning, you now also know WHY engines NEED to have a
COMPRESSION RATIO! Simply burning gasoline at atmospheric pressure would have far too slow a
flame front speed to be of any use in an engine! You never knew WHY before, did you? Now you do! It
also indicates WHY compression ratios of 2:1 or 3:1 are never seen in engines.
Consider the inside of an engine cylinder in a normal car engine traveling down the highway. The engine
may be rotating at 2,000 rpm, or 33 revolutions per second. The piston must therefore move upward and
downward 33 times every second, and its (maximum) speed in the middle of its stroke is around 45
feet/second. If a fuel burning in the cylinder is to actually push down on the piston, in order to do actual
work in propelling the vehicle, the fuel-air mixture needs to burn at a speed FASTER than the piston is
moving! Otherwise, the slow-burning mixture would actually act to SLOW DOWN the piston! Not only
would it fail to do productive work, it would require work FROM the piston.
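The 33 rev/s and roughly 45 ft/s figures can be sanity-checked with simple crank kinematics. The stroke below is an assumed value (the text does not give one), and the connecting-rod correction is ignored, so this comes out nearer 35 ft/s; a longer stroke and the rod correction push it toward the text's figure. The point is only the order of magnitude:

```python
# Rough crank kinematics behind the piston-speed figure quoted above.
# The stroke is an assumption; the rod-angle correction is ignored.

import math

rpm = 2000.0
rev_per_s = rpm / 60.0                    # about 33 rev/s, as in the text
stroke_in = 4.0                           # assumed stroke, inches
stroke_ft = stroke_in / 12.0

# Peak piston speed is roughly crank-pin speed: radius times angular velocity.
crank_radius_ft = stroke_ft / 2.0
omega = 2.0 * math.pi * rev_per_s         # rad/s
peak_speed = crank_radius_ft * omega
print(f"{rev_per_s:.1f} rev/s, peak piston speed about {peak_speed:.0f} ft/s")
```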
The ACTUAL hydrogen flame-front speed inside an ICE might be sufficient for conventional burning as in
current ICE engines, but someone needs to do the experiments to confirm that! But it suggests that yet
another hurdle might lie in front of Hydrogen ever becoming a common motor fuel.
By the way, the INTENDED usage of Hydrogen in vehicles is quite different from this! The much-
publicized Fuel Cell is a device which converts the energy in a fuel like Hydrogen DIRECTLY INTO
ELECTRICITY. THERE IS NO BURNING INVOLVED! The premise for future vehicles is that they
might use Fuel Cells to provide electricity for electric motor drive systems, which means that Mortuary
Services may be appropriate for the Internal Combustion Engine! But it may be another ten or twenty years
before fuel-cell technology has developed to the point of that becoming realistic.
As an additional note here, when you see impressive demos on TV or in a video regarding Hydrogen being
used as a fuel for a vehicle, try to check to see the source of that Hydrogen! In general, such demos use
LIQUID Hydrogen (which is necessarily refrigerated to an incredibly cold temperature, within a few
degrees of Absolute Zero!). LIQUID Hydrogen does not have the problem of the huge volume of
Hydrogen as a gas (where one pound takes up around 200 cubic feet); one pound of liquid hydrogen takes
up less than ¼ cubic foot, almost 1,000 times smaller. Where we have discussed that one cubic foot of
Hydrogen gas only contains around 360 Btu of chemical energy, one cubic foot of Liquid Hydrogen
contains around 300,000 Btu of chemical energy in it, relatively comparable to the energy concentration of
gasoline (about one-third of it). So, for demonstration purposes, a fairly small amount of LIQUID Hydrogen
contains spectacular amounts of energy in it! Which then gives impressive performance by the demo
vehicle. However, IF they used LIQUID Hydrogen, that (small) amount for the demo quite possibly cost
them tens of thousands of dollars to buy!
But you might notice that even Liquid Hydrogen only actually contains around 1/3 of the chemical energy
that gasoline does! A cubic foot of gasoline contains around 7.5 gallons, each of which contains around
126,000 Btu of chemical energy, for a total of around 945,000 Btu. The cubic foot of Liquid Hydrogen
contains around 300,000 Btu. And as we noted, a cubic foot of gaseous Hydrogen only contains around 360
Btu. Another indication of WHY gasoline has been so popular: it is a very compact form of a lot of
chemical energy! And also an indication that ALL the outrageous claims that people now make regarding
Hydrogen (or variants of it) allegedly making enormous power are simply deceptions.
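The Btu-per-cubic-foot comparison above, collected in one place. All numbers are the text's round figures, not precise reference data:

```python
# Side-by-side of the chemical-energy densities quoted in the text.
# All values are the text's round figures.

GALLONS_PER_CUBIC_FOOT = 7.5          # approximate (7.48 exactly)
BTU_PER_GALLON_GASOLINE = 126_000

energy_per_ft3 = {
    "gasoline":         GALLONS_PER_CUBIC_FOOT * BTU_PER_GALLON_GASOLINE,
    "liquid hydrogen":  300_000,
    "gaseous hydrogen": 360,
}

for fuel, btu in energy_per_ft3.items():
    ratio = energy_per_ft3["gasoline"] / btu
    print(f"{fuel:17s} {btu:>9,.0f} Btu/ft^3  (gasoline is {ratio:,.1f}x)")
```

This makes the text's two points at a glance: liquid hydrogen holds about a third of gasoline's energy per cubic foot, while gaseous hydrogen holds thousands of times less.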
OK. Finally, there are all kinds of hucksters who are trying to sell all manner of products that they claim
will give you tremendous improvements in the gas mileage of your vehicle by somehow injecting
Hydrogen into the engine. It is really sad how deceptive their presentations are. Again, if you
would inject LIQUID hydrogen into any engine, you COULD add a large amount of additional
CHEMICAL ENERGY into the engine to be burned. However, what they try to sell are tiny devices which
they claim are hydrogen generators. You should realize from this presentation that even if you could
generate a cubic foot of hydrogen each minute (which is extremely difficult to do AND would require
many horsepower from the engine to generate the needed electricity to do it), that would only be adding
around 360 Btu of chemical energy in the hydrogen into the engine (remembering that a gallon of gasoline
contains 126,000 Btu of chemical energy in it). A demo where LIQUID hydrogen was injected COULD
show measurable improvement, but any device that tries to generate GASEOUS hydrogen to be injected is
simply an expensive joke!

An Interesting Situation!

Racing teams spend millions of dollars to try to gain a fraction of an mph in speed over the competition. I was
not intending to, but I have come across an absolutely effective method to gain VERY large increases in
vehicle speed for Indy, Formula, and Formula II and certain other racing vehicles. The vehicle would have
to be completely built from scratch, but an Indy car would CERTAINLY gain at least 13 mph in
average speed. IF I disclosed this concept to any Racing Team, within three minutes I could get them to
realize that I am right, and even WHY I am right! In that three minutes, they would fully see why they could
gain the 13 mph (and actually probably somewhat more!). So, here is an interesting situation! Given that all
racing teams spend millions of dollars in the attempt to gain 0.5 mph in average speed, what would they think
it might be worth to have at least a 13 mph speed advantage over all competition (at least until they also
learned the concept)?
By the way, even though there are countless restrictions and rules controlling racing vehicles, no such rule
is violated or even challenged by this concept.
They obviously would never offer me "millions of dollars" without knowing what it was for, but once they
heard those three minutes, they would likely see that they then would no longer need me! From THEIR
point-of-view, they would see it reasonable to say to me, "Hey, Polack, here's a hundred bucks for your
idea." Well, I may be Polish, but virtually no one has ever thought that I was stupid! At least THAT stupid!
I don't really see any obvious way to resolve this, except that maybe a few hundred thousand could be put
in Escrow (prior to hearing the brief description), with some "performance payment" which would also
then be paid to me (per mph increase, for example).
Sadly, it is a similar situation to one where I have been seriously taken advantage of in the past, and which I am
currently feeling it necessary to be cautious about regarding several current inventions in other subjects.
And I really see no logical way that I could feel safe regarding disclosing all the important information!
So it strikes me as simply an Interesting Situation!

Research

I have done extensive research and design work regarding the hypothetical engine concept mentioned
somewhat above. In October 2002, I actually discovered a way whereby I could accomplish essentially what
was discussed up there, including later building a small prototype engine. I cobbled that strange engine
together out of mostly standard lawnmower engine parts (with a few very peculiar parts!). I definitely got
carried away with testing it (in June 2004) as I saw the horsepower and torque output keep increasing.
Given that a standard Briggs and Stratton lawnmower engine is rated at 3.5 HP (at 3600 rpm), I became
quite excited when my strange engine was producing well over 12 HP at that engine speed. For reasons that
can only be attributed to enthusiasm, I wound it out higher! At approximately 6300 rpm, it was briefly
producing just over 43 HP, when the mechanical strength of the generic lawnmower engine parts showed
that they could not survive. There was a massive disintegration, and it was quite fortunate that I happened to
be standing in a place where I was not injured (or killed).
It represents a very unusual engine, which may not be very compatible with modern automotive
manufacturing technology.
However, I later (late 2004) came up with a rather different concept of the same basic invention, which
probably has massive application. It involves a retrofit modification of a conventional V-8 engine. Relatively
few different parts are needed, generally using most of the original engine parts, including the block, heads,
oil pump, water pump and all accessories. The heads need to have some machining done to them, and a
different (and very strange) crankshaft and camshaft are needed, along with different connecting rods. It
does NOT seem compatible with I-block or V-6 engines.
I am not interested in assisting giant corporations to make additional billions in profits; however, I would be
quite interested in advancing a retro-fit system, and am open to the possibility of a mutual business effort
regarding manufacturing and providing suitable kits.
The result (in a small block Chevy) is an engine that idles at around 60 rpm (instead of 600 rpm), so that it
uses only 1/10 the fuel at stop signs and in rush hour traffic. It has greater torque output, on the order of 500
lb-ft, compared to the common 200 lb-ft that many V-8 engines produce. Also, where conventional engines
produce that maximum torque only at around 1800 rpm, this engine has a relatively flat torque curve, even
generating close to that 500 lb-ft near the 60 rpm idle speed! (Which is partly why it is able to idle at such a
slow [actually slower than heartbeat-rate] speed.) The result of all these differences is that this engine has
better gas mileage (by over 50% improvement) while also having acceleration performance that massively
out-performs any conventional engine.
The specific levels of these improvements depend on some features of any specific engine design and
construction; these figures are based on what is called the small-block Chevy (327 or 350 cid) engine.
I am NOT sure that the concept is compatible with V-6 engines, and I doubt that it is compatible with any
four cylinder engines. The fact that V-8s have pretty much become dinosaurs may mean that this concept
would have no possible future.
I do not intend to be providing Engineering assistance to individual people who only want to win trophies
at a drag strip! Someone would have to convince me that there was a credible possibility that this
improvement might actually advance to the stage of becoming a retro-fit kit, with credible marketing
arrangements for millions of drivers to benefit from it.
This same general theme has resulted in yet another variant! In June 2009, I discovered a way to make an
engine which is extremely different from either of the above, but which has the capability of even better
performance and fuel economy, as well as several other surprising benefits. If this one works as calculated,
it might represent an enormous advance in automotive design. I am currently working toward getting a
prototype built.

An Entirely Different Approach to a Hybrid!

I am NOT a fan of so-called Hybrid cars, as I see the potential advantages to be minimal, far smaller than
the public is generally told. However, there is a concept that I gradually came up with that is technically a
Hybrid vehicle. The generic Oldsmobile that I had started modifying was to keep the good gas mileage of
its moderate-sized engine, and would have still LOOKED like the original car, but it was intended to be
able to accelerate at a rate that dragsters would be proud of, using around 840 horsepower for extremely
impressive hole-shots at traffic lights!
I would be willing to help Detroit or Toyota or someone else to build this practical and probably
economically priced vehicle, which has some vague similarities to some parts of the Tesla electric sports
car!


Long ago, I realized that NO driver ever actually USES the huge horsepower of the over-powered cars that
are sold, EXCEPT for a maximum of less than 30 seconds at a time. In all the time I have owned my
Corvettes, and an Austin-Healey 3000 and other sports cars, there has NEVER been any time where I had
my foot to the floor for more than 15 seconds, and that was during a quarter-mile drag where the vehicle
went from zero to around 120 mph in around 13 seconds. So it occurred to me that it really is foolish for
people to buy cars that have giant engines that are advertised as 470 horsepower or 505 horsepower! At all
times other than those few seconds, the driver has to be paying for gasoline that is being burned for the
CAPABILITY of that power and acceleration.
In an entire year of owning and driving a Corvette, I doubt that there are more than twenty times when I
really use massive power for more than maybe three seconds at a time. I realized that meant that I actually
USED all the power that Corvettes are known for, for maybe ONE MINUTE TOTAL per year!
I had started assembling an experimental vehicle, based on a 1985 Oldsmobile Cutlass Ciera 3.0 liter V6
front-wheel drive car I then had. (It was later vandalized beyond possible repair, so I have not yet again
pursued the project with any other car [yet].) The car was mid-sized, capable of holding five or six people, a
pretty standard vehicle. Its moderate-sized engine permitted tolerable acceleration but never anything really
interesting (to a Corvette owner!).
I noticed that the rear wheels (of the front-wheel-drive car) really did not do anything other than support the
rear of the car!
I also knew that even a STANDARD car battery can contain around 80 ampere-hours of electric energy in
it, which, at 12 volts, is about 1 kWh (80 * 12 Wh, as discussed above). That meant that the one standard
battery could provide about 1.5 horsepower for an hour, but that also meant that it contained enough energy
to provide 1.5 * 60 or 90 horsepower for one minute, or 180 horsepower for 30 seconds, or 360 horsepower
for 15 seconds! (A deep-discharge battery has even more energy capacity.)
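The battery arithmetic above, made explicit. Note that the text rounds 960/746 ≈ 1.29 hp up to about 1.5 hp before scaling, so its burst figures (90, 180, 360 hp) run somewhat higher than the raw arithmetic here; both versions ignore battery losses and discharge-rate limits:

```python
# The battery-energy arithmetic from the paragraph above (80 Ah at 12 V).
# Losses and discharge-rate limits are ignored, as the text itself does.

WATTS_PER_HP = 746

capacity_wh = 80 * 12                        # 960 Wh, "about 1 kWh"
hp_for_one_hour = capacity_wh / WATTS_PER_HP
print(f"continuous for 1 hour: {hp_for_one_hour:.2f} hp")

# The same total energy spent over shorter bursts:
for seconds in (60, 30, 15):
    hp = hp_for_one_hour * 3600 / seconds
    print(f"for {seconds:2d} s: about {hp:.0f} hp")
```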
So my experiment was/is to be a car like the generic Cutlass Ciera, with its standard 120 hp engine, but
where EACH of the rear wheels was replaced by an electric-motor-driven wheel, driven directly from
TWO(*) batteries in series. (Total: two motors, resembling car starter motors, and four standard car batteries
in the trunk, a rather minimal added expense beyond the modest cost of the standard Cutlass Ciera!)
Maybe it would represent adding $1,000 to the cost of NEARLY ANY front-wheel-drive car. And what
would be the result?
In the process of turning the engine to start a vehicle, a starter can briefly draw around 500 amperes of electricity
from a (single) battery. At around 10 volts, that is around 5,000 watts. Since each horsepower is equal to 746
watts, a normal starter has the capability of producing around 7 horsepower or so. (Ball park; each Make and
Model and engine size is different, and modern vehicles with tiny motors need less horsepower
during starting, so most MODERN starters have less capability.)
Just adding 7 (times two) horsepower would not be worth the trouble. But starter motors are designed to be
durable enough to reliably start the vehicle for many years. So, long ago, people learned that in order to start
engines that had really exotic camshafts, a standard starter and battery just didn't cut it; it didn't turn the
engine fast enough to start. So what was their solution? You guessed it! They used the SAME starter motor,
but ran it on 24 volts instead of 12! Two batteries in series! In Electrical Engineering, a standard formula is
that the POWER is proportional to the SQUARE of the voltage, if all other variables are kept the same.
Instead of the starter producing around 7 horsepower to start the exotic engine, it produces around 28
horsepower. So at drag strips, you often hear starter motors which sound like dentist's drills because they
are spinning so fast. BUT AN IMPORTANT FACT IS THAT THEY STILL LAST FOR A DECENT
TIME!
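The voltage-squared rule in the paragraph above, applied to the text's nominal 7 hp starter, reproduces the 28, 63, and 112 hp figures the text uses for two, three, and four batteries in series. (Real starters are limited by heating and brush life; the text's own short-burst caveat applies.)

```python
# "Power is proportional to the square of the voltage" applied to a nominal
# 7 hp starter on 12 V, as in the text. Thermal limits are ignored.

base_hp = 7.0        # nominal starter output at 12 V (text's figure)
base_volts = 12.0

for volts in (12, 24, 36, 48):
    hp = base_hp * (volts / base_volts) ** 2
    print(f"{volts} V: about {hp:.0f} hp per starter motor")
```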
My experiment was to use that (conservative) arrangement in the Ciera: four batteries. The experiment
would therefore be expected to add around 56 horsepower (28 * 2) extra to the 120 horsepower of the
conventional engine. Not spectacular, but the total of 176 horsepower would actually have greater benefit
than that, because the 120 horsepower RATING of the standard engine actually gets far less horsepower to
the wheels! So I figured that my rather economical experiment should provide GREATER THAN 50%
faster acceleration, likely close to double the acceleration. Given that using 24 volts to power race car
starters has long shown that the starter survives pretty well, I consider that a very conservative experiment!


Of course, the next step would be to try THREE batteries for each of the starter motors. I am not aware of
anyone who has done that before, so it is not clear how long the starter could operate before becoming toast.
However, the simple fact that it is NEVER intended to be powered for more than 3 to 10 seconds at a time
figures to allow the starter windings plenty of time to cool back down!
In any case, using three batteries for each starter motor should produce as much as 7 * 3² or 63 horsepower
at each rear wheel, or 126 additional horsepower. I am suspecting that an innocent-looking Cutlass Ciera
with a putt-putt engine should have impressive acceleration with 126 additional horsepower!
And of course, my sugar plum dreams would require at least TRYING four batteries for each! That would
be 7 * 4² or 112 horsepower at each rear wheel, or 224 additional horsepower. Now keep in mind that these
experiments would all use GENERIC STARTER MOTORS, and that the recent Tesla sports car uses a
very exotic (and very expensive) motor and battery pack that has proven that even greater power could be
had. Imagine if EACH of the rear wheels could provide 360 horsepower for 15 seconds; then that vehicle
should have acceleration that would be beyond belief!
I intended to put an activating switch under the gas pedal, so that when I floored it, the Ciera engine
might be producing its 120 hp PLUS the horsepower from EACH rear wheel, or a total of a lot of
horsepower (but for only 15 seconds max!)
Under all NORMAL driving, the Ciera would get the excellent gas mileage that its small engine could
provide, and that engine could probably be even smaller, a four-cylinder instead. But for those few seconds
when acceleration was desired, it could be spectacular!
Note that this vehicle was essentially ALREADY approved by the government safety testing and all the
rest, so it would immediately be street-legal. The tire grip might not permit it, but 0-60 in less than 3
seconds seems possible! FAR faster than ANY car on any road today!
And all from only maybe a $1,000 increase in the cost of the vehicle! Or the sky's the limit on cost for
creative variants!
The giant vehicle manufacturers all design and build either under-powered tiny vehicles that get great gas
mileage, or vehicles with hyper-performing high-horsepower engines that perform great but have lousy gas
mileage. The approach I have described above is better than both, in that it combines the best of both
general designs! And at a vehicle price that would not be much above their current under-powered
offerings!
I guess that what I have described here is a sort of Hybrid vehicle, since the gasoline engine would drive
several alternators that would recharge the batteries after a performance show. But it is entirely different
from what the vehicle manufacturers think is a Hybrid!
However, in my intent of modifying my Ciera, I was aware of two problems that seemed possibly hard to
overcome. I knew that standard car starter motors only generate around 7 horsepower, where I wanted much
more. The other problem is a result of that, in that a standard car battery is designed for the energy drain
rate of the standard starter.
I considered re-wiring a standard starter to have fewer windings of heavier wire, so that it drew a lot more
current and therefore generated more power. However, with my target of hundreds of horsepower, I was
not really sure whether my modification of a starter motor would cut it! So I was quite excited when the
Tesla came along, with a single electric motor which they rate at 180 horsepower! And equally, their
battery pack is clearly capable of supplying the electricity very rapidly for such horsepower. So the Tesla
apparently has the resolutions to BOTH of the issues that had concerned me! And where the Tesla needs to
be able to withstand that level of energy flow continuously, all I would need would be a max of about 15
seconds' worth. I suspect that would mean that less-expensive batteries might be sufficient, and the motor
could be designed to have an operating lifetime comparable to car starters, measured in minutes!
In any case, I believe my approach makes a lot more sense than what any of the giant vehicle
manufacturers are now selling or designing, primarily since it can allow "nearly stock" vehicles, for both
government safety approvals and for vehicle pricing that the public might be able to afford.
In a “don't do this at home” theme, there IS a possible safety issue. Say that one of the motors burned out or
didn't start, and the other one worked. Then ONE rear wheel would be producing a lot of torque and power,


which seems likely to cause the vehicle to instantly go out of control. A bad deal! A related issue could be
whiplash injuries for occupants when all that extra power suddenly kicked in. If you have been in
any high-performance vehicle during a serious hole-shot, you know how you are thrown back into the seat!
So this sort of concept would need a good deal of safety testing to make sure that unexpected things did not
suddenly occur.

Theoretical Mileage of a Sedan Car

Calculated at 60 mph constant highway speed


These are ball-park numbers, used simply to show you how this all works. You could probably obtain the
frontal area of your vehicle and its drag coefficient from the vehicle manufacturer.
We learned before that the Dynamic Pressure is related to the Momentum in the air and is simply the
product of the mass-flow of the air times the speed. In the examples here, for one square foot of cross-sectional
area, that is the air's density times the volume flow (1/415 slug/cu ft * 88 f/s) times the velocity in feet per second (88 f/s),
which is 18.6 pounds of Dynamic Pressure force.
A Large Sedan might have a frontal area of 22 square feet and a drag coefficient of around 0.43. Therefore,
we would have an Aerodynamic Drag of 18.6 * 22 * 0.43 or 176 pounds. The Tire Drag for that vehicle
weight would be about 45 pounds, so the total Drag is about 220 pounds.
This drag is multiplied by the velocity (88) to get 19,500 ft-lb/second used to move the vehicle. We can
convert this into horsepower (35.4) or watts ( 26,400 ) or Btu/hr ( 90,000 ). We know that a gallon of
gasoline contains around 126,000 Btu of chemical energy in it, but also that automotive engines and
equipment are not particularly efficient, at around 21%, meaning that we only get to use 26,500 Btu of
that energy to move the vehicle.
So if we start with one gallon (26,500 Btu of available energy), and we know that we would need 90,000
Btu to drive an entire hour, we can see that our vehicle would travel 26,500/90,000 of that hour before running out
of gasoline! This is just under 18 minutes, and since we are going 60 mph, we are going one mile per
minute, so we know that the car we just described would get around 18 mpg mileage. It ain't that
complicated!
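The whole calculation above collapses into a short Python sketch. The constants follow the text (air density of 1/415 slug/cu ft, 88 ft/s for 60 mph, 126,000 Btu per gallon, 21% overall efficiency); 778 ft-lb per Btu is the standard conversion factor, and the function and variable names are my own:

```python
def mpg_at_60(frontal_area_sqft, drag_coeff, tire_drag_lb):
    v = 88.0                                  # 60 mph in ft/s
    dyn_pressure = (1 / 415.0) * v * v        # ~18.6 lb per sq ft
    aero_drag = dyn_pressure * frontal_area_sqft * drag_coeff
    total_drag = aero_drag + tire_drag_lb     # pounds
    power_ftlb_s = total_drag * v
    btu_per_hr = power_ftlb_s * 3600 / 778.0  # 778 ft-lb per Btu
    usable_btu = 126_000 * 0.21               # usable energy per gallon
    hours_per_gallon = usable_btu / btu_per_hr
    return hours_per_gallon * 60.0            # 60 miles per hour

print(round(mpg_at_60(22, 0.43, 45)))  # large sedan: 18 mpg
print(round(mpg_at_60(17, 0.40, 30)))  # compact: 25 mpg
print(round(mpg_at_60(7, 0.40, 5)))    # motorcycle: 68 (the text's rounding gives ~70)
```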

Theoretical Mileage of a Compact Car

Calculated at 60 mph constant highway speed


These are ball-park numbers, used simply to show you how this all works. You could probably obtain the
frontal area of your vehicle and its drag coefficient from the vehicle manufacturer.
We learned before that the Dynamic Pressure is related to the Momentum in the air and is simply the
product of the mass-flow of the air times the speed. In the examples here, for one square foot of cross-sectional
area, that is the air's density times the volume flow (1/415 slug/cu ft * 88 f/s) times the velocity in feet per second (88 f/s),
which is 18.6 pounds of Dynamic Pressure force.
A Compact might have a frontal area of 17 square feet and a drag coefficient of around 0.40. Therefore, we
would have an Aerodynamic Drag of 18.6 * 17 * 0.4 or 125 pounds. The Tire Drag for that vehicle weight
would be about 30 pounds, so the total Drag is about 155 pounds.
This drag is multiplied by the velocity (88) to get 13,500 ft-lb/second used to move the vehicle. We can
convert this into horsepower (24.8) or watts ( 18,500 ) or Btu/hr ( 63,000 ). We know that a gallon of
gasoline contains around 126,000 Btu of chemical energy in it, but also that automotive engines and
equipment are not particularly efficient, at around 21%, meaning that we only get to use 26,500 Btu of
that energy to move the vehicle.


So if we start with one gallon (26,500 Btu of available energy), and we know that we would need 63,000
Btu to drive an entire hour, we can see that our vehicle would travel 26,500/63,000 of that hour before running out
of gasoline! This is just over 25 minutes, and since we are going 60 mph, we are going one mile per
minute, so we know that the car we just described would get around 25 mpg mileage.

Theoretical Mileage of a Motorcycle

Calculated at 60 mph constant highway speed


We learned before that the Dynamic Pressure is related to the Momentum in the air and is simply the
product of the mass-flow of the air times the speed. In the examples here, for one square foot of cross-sectional
area, that is the air's density times the volume flow (1/415 slug/cu ft * 88 f/s) times the velocity in feet per second (88 f/s),
which is 18.6 pounds of Dynamic Pressure force.
A medium-sized motorcycle might have a frontal area of 7 square feet and a drag coefficient of around 0.4.
Therefore, we would have an Aerodynamic Drag of 18.6 * 7 * 0.4 or 52 pounds. The Tire Drag for that
vehicle weight would be about 5 pounds, so the total Drag is about 57 pounds.
This drag is multiplied by the velocity (88) to get 5,000 ft-lb/second used to move the vehicle. We can
convert this into horsepower (9.2) or watts ( 6,800 ) or Btu/hr ( 23,000 ). We know that a gallon of gasoline
contains around 126,000 Btu of chemical energy in it, but also that automotive engines and equipment are
not particularly efficient, at around 21%, meaning that we only get to use 26,500 Btu of that energy to
move the vehicle.
So if we start with one gallon (26,500 Btu of available energy), and we know that we would need 23,000
Btu to drive an entire hour, we can see that our vehicle would travel 26,500/23,000 of that hour before running out
of gasoline! This is just under 70 minutes, and since we are going 60 mph, we are going one mile per
minute, so we know that the motorcycle we just described would get around 70 mpg mileage.

Tricky Theoretical Mileage of a Car

Calculated at THREE mph constant speed

We learned before that the Dynamic Pressure is related to the Momentum in the air and is simply the
product of the mass-flow of the air times the speed. In the examples here, for one square foot of cross-sectional
area, that is the air's density times the volume flow (1/415 slug/cu ft * 4.4 f/s) times the velocity in feet per second (4.4 f/s),
which is 0.0465 pound of Dynamic Pressure force.
Let's again consider that gas-guzzler big sedan. We just said that a Large Sedan might have a frontal area of
22 square feet and a drag coefficient of around 0.43. Therefore, we would have an Aerodynamic Drag of
0.0465 * 22 * 0.43 or 0.44 pound. The Tire Drag for that vehicle weight would normally be about 45 pounds,
but in the spirit of deception, we might fill the tires to 90 PSI, without bothering to tell anyone that we did
that! This, and the very low speed, would mean the tire sidewalls would hardly heat up at all from
flexing, and the Tire Drag might be as low as 5 pounds; the total Drag (under these strange conditions) is
therefore about 5.4 pounds.
This drag is multiplied by the velocity (4.4) to get 24 ft-lb/second used to move the vehicle. We can convert
this into horsepower (0.044) or watts ( 32 ) or Btu/hr ( 110 ). We know that a gallon of gasoline contains
around 126,000 Btu of chemical energy in it, but also that automotive engines and equipment are not
particularly efficient, at around 21%, meaning that we only get to use 26,500 Btu of that energy to
move the vehicle.
So if we start with one gallon (26,500 Btu of available energy), and we know that we would need 110 Btu to
drive an entire hour, we can see that our vehicle would travel 26,500/110 hours before running out of gasoline!
This is about 240 hours! Since we are going 3 mph, we know that the big sedan we just described
would EXPERIMENTALLY DEMONSTRATE around 720 mpg mileage! In fact, if some advertiser
thought it would cause some cars to be sold, they would certainly do such a ridiculous test, just to be able to


keep themselves from being sued for claiming 720 miles per gallon! There actually were a variety of
companies that did such things, massively twisting the conditions to make their products look
astoundingly good, but the government and the marketplace gradually caused them to fade.
It actually turns out that the Drag Coefficient is probably even lower than the tiny amount we calculated
above, because all the airflows would be laminar rather than turbulent. So the gasoline in this silly test
might last even LONGER than just ten constant days of driving at 3 mph! And if the tire pressures were
increased even more, 1,000 miles-per-gallon might be a claim that could be made without being sued!
Scary, huh?
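The same drag arithmetic, with speed as a parameter, reproduces this deceptive low-speed figure. A sketch under the text's constants (the function name is my own; the 5-pound tire drag is the over-inflated case discussed above):

```python
def mpg_at_speed(mph, frontal_area_sqft, drag_coeff, tire_drag_lb):
    v = mph * 88.0 / 60.0                 # speed in ft/s (88 ft/s = 60 mph)
    dyn_pressure = (1 / 415.0) * v * v    # lb per sq ft
    total_drag = dyn_pressure * frontal_area_sqft * drag_coeff + tire_drag_lb
    btu_per_hr = total_drag * v * 3600 / 778.0
    usable_btu = 126_000 * 0.21           # 21% overall efficiency
    return usable_btu / btu_per_hr * mph

print(round(mpg_at_speed(60, 22, 0.43, 45)))  # honest sedan at 60 mph: 18 mpg
print(round(mpg_at_speed(3, 22, 0.43, 5)))    # the 3 mph trick: ~717 mpg, the 720 ballpark
```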
But you may see the point in this silly discussion. Say that I was disreputable and I wanted you to buy
“magic roses” which must be placed on top of the engine in your car, and I wanted to be able to put ads on
TV saying that you would get 720 miles per gallon. A LOT of people would buy such things! Snake oil is
what it used to be called! But see that such a disreputable operation could actually DO an incredibly slow-speed
test run, which could be documented by Observers to have been done, and they could then never get
sued for those outrageous claims!

Automotive-related presentations in this Domain

Physics in an Automotive Engine


Physics in an Automotive Vehicle
Battery-Powered (and Hybrid and Hydrogen) Vehicles
Hydrogen as an Automotive Fuel-source
Physics of SUV Rollover Accidents (first presented on the Internet January 2002)
An Absolutely GREEN Transportation and Freight System Which Is 20 times More Efficient than
Cars and Trucks and Airplanes, Cheaper and Faster! (invented in 1989)
A Super-Inter-Cooler High-Efficiency Engine (first presented on the Internet in 2002)
An Inexpensive and Simple Dynamometer for Vehicles (invented around 1966)
Road Talker Ridge Patterns in Highways for Warning Messages (invented in 1995)
A Simple System to Eliminate Hi-Speed Police Chases (invented in 1997)
Automotive Diagnostic Device Based on Vibrations (invented in 1998)
TireChek Precise Tire Pressure Monitoring (invented in 1995)
Simple System to Provide Urban Drivers with Real-Time Traffic Conditions (first presented on the Internet in 2000)
Fuel Efficiency Effects of Driving with Headlights On
A Simple Oil Change Alert Monitor (invented in 1998)
The Physics of How Police Radar Works
A Different Tire Construction Concept, for softer ride (first presented on the Internet 1998)
An Urban Snowplow Truck that Minimizes Snowpiles (invented in 1975)

Energy-Related presentations in this Domain:

Global Warming Calculated by a Physicist


Strict Scientific Analysis of Consequences of Fossil-Fuel Burning
Global Warming and Climate Change - Possible Physics Solutions
Unlimited Hot Water FOR FREE, while Solving Global Warming! (biodecomposition)
Heat Your Whole House FOR FREE, while Solving Global Warming! (biodecomposition)
Two Systems: to (1) collect sunlight heat and (2) store it for a winter, for ANY building!
Published Current Energy Resources Remaining in the Earth (Scary!)
Making all (Black) Asphalt Roads, Rooftops and Parking Lots White can reduce Global Warming!
Global Warming Issues Regarding HEAT Sent into the Atmosphere
Global Warming Issues Regarding Carbon Dioxide and Sea Levels Rising
Hydrogen as a Fuel-source Replacement


A 100%-Solar Home Heating System For Virtually Any Climate


Solar Electricity from PV Photovoltaic Cells
An Absolutely GREEN Transportation and Freight System Which Is 20 times More Efficient than
Cars and Trucks and Airplanes, Cheaper and Faster (2000 mph)! (invented in 1989)
Batteries or Hybrids as a Fuel-source Replacement
Wind-Power for Making Electricity (Residential, some Watts)
Wind-Power for Making Electricity (Community, MegaWatts) (a million construction jobs and 12,000
MegaWatts of electricity)
The Earth’s Wobbling (Precession) as a Source for around 63,000 MegaWatts of Energy
The Earth’s Rotation as a Source for Energy
Waste Nuclear Power For Making Electricity And Heat?
The Physics of Efficiency In Electric Power Plants
Individual Ways of Reducing Your Energy Usage
Methods of Storing Energy for Later
How Much Energy Comes From the Sun? And Why is there Global Warming?
How does the Sun create so much energy?
Inventions Which Might Help Deal With Coming Energy Catastrophes
An Invention to Efficiently Make Electricity from Solar
Enormous Heating of the Atmosphere by the Alaska Pipeline
Air Conditioning without Huge Electric Bills and GREEN, without Freon
A Method of Storing Summer Heat to (Nearly) Entirely Heat a House all Winter
The Sophisticated Woodstove I Invented in 1973
The Physics of Wood as a Heating Fuel
Why is the North Pole Heating Faster than the rest of the Earth?
Scientific Explanation of Airplane Flight
A Possible way to greatly reduce Aerodynamic Drag of Airplanes


5.2. Applications in Microwave

The Double Tokomak Collider (DTC), the Magnetic Confinement Tokomak Collider (MCTC) hub and the
Duo Triad Tokomak Collider (DTTC) hub can be used, with the help of nanotechnology such as
the nano-torii, in microwave applications.

Materials and Methods

The Antenna Array:

Diagrams of the front radiating (patient) side and back (feedlines) side of the applicator can be
seen in figures 1 and 2. The applicator is an array of 27 Dual Concentric Conductor (DCC) microstrip
patch antennas printed on very thin (9 mil) and flexible printed circuit board material. A microstrip
feedline network feeds the middle of all four sides of the powered patch, which is capacitively
coupled to the radiating patch.

The array has overall dimensions of 20.8 by 43.2cm and can treat an area of up to 13 by 43cm.
Figure 1 shows the front or groundplane side of the applicator. The floating patch and large
groundplane can be clearly seen. Figure 2 shows the backside or feedlines side of the
applicator. On the feedline side of the applicator can be seen the powered patch, the microstrip
feedline matching network and the miniature on-board co-axial PMMX connector.


Figure 1 Complete diagram of radiating side of applicator showing scale.

Figure 2 Complete diagram of feedlines side of applicator showing scale.

The geometry and electric field distribution of the DCC can be seen in figure 3 below.


Figure 3 Side view of DCC antenna geometry showing near electric field lines.

The electric field from the radiating patch terminates on the groundplane through the gap
between patch and groundplane. It has been shown that this geometry produces a near field
which is dominated by components that are predominantly parallel to the plane of the radiating
patch above and near the gap and normal to the patch over its center. The antennas are
specified by the size of the rectangular hole in the groundplane and the width of the gap. In the
studied array the aperture size is 3cm with a gap size of 2.5mm.

Network Analyzer:

In the preceding section the methods used to match a load to a microstrip line were presented.
What was not discussed is how exactly the load impedance is determined. To determine the input
impedance the Swiss army knife of microwave measurement equipment, the vector network
analyzer with S parameter test set was used.

The HP 8753C vector network analyzer used for this project is shown below.


Figure 4 Hewlett-Packard 8753C vector network analyzer with 85047A S parameter test set.

The vector network analyzer processes the transmitted and reflected waves from a network to
give readings of input impedance, VSWR, return loss and many other network characteristics.
Because it uses a mathematical error correction/calibration technique and preserves both
magnitude and phase information from the signal this type of instrument can make very accurate
circuit measurements, even at microwave frequencies.

Network Analyzer Calibration Method

A short length of high quality coaxial cable is connected to the analyzer output. At the end of this
cable is attached, in sequence, a high quality 50 ohm load, a short circuit plug and a calibrated
open.

The analyzer has stored, in its memory, mathematical models of these standard loads. The
analyzer sends out a signal and reads the reflections from each load. It can then mathematically
subtract out all the discontinuities between the analyzer output and the end of the cable. If the
cable is of high enough quality, its properties do not change when it is flexed. It can then be
attached to the unknown circuit component and the properties of the circuit measured without
distortion by the cable.

Time Domain Reflectometry

The 8753C analyzer also has the ability to do a Fourier transform of the frequency data into the
time domain, to provide the time domain response of the network. This method is called Time
Domain Reflectometry (TDR). The analyzer sends out a broadband step function and performs a
fast Fourier transform on the reflections to recover information on the reflections as a function of
time after the impulse. Taking into account the speed of light on the transmission line, the circuit
information may be displayed as a function of distance down the line. The resolution in time is
directly related, through the FFT, to the bandwidth of the frequency range in the step function.
The broader the range, the higher the resolution. The highest bandwidth range possible with the
8753C analyzer is from 39 MHz to 5.99 GHz. With this range, the smallest distance between two
discontinuities that the analyzer can distinguish is 10 millimeters.
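As a rough cross-check of the figures above, the usual TDR rule of thumb is that range resolution is about v/(2B), where B is the sweep bandwidth and v the propagation speed on the line. A sketch, with the velocity factor as my own assumption (the text does not give one):

```python
C = 299_792_458.0                      # speed of light, m/s

def tdr_resolution_mm(bandwidth_hz, velocity_factor=1.0):
    """Approximate two-point range resolution of a swept-frequency TDR."""
    v = C * velocity_factor
    return v / (2 * bandwidth_hz) * 1000.0

bw = 5.99e9 - 39e6                         # the 8753C sweep range quoted above
print(round(tdr_resolution_mm(bw), 1))     # 25.2 mm in free space
print(round(tdr_resolution_mm(bw, 0.4), 1))  # ~10 mm at an assumed velocity factor of 0.4
```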


TDR mode is most useful when the analyzer is set to display the real component of the signal. In
frequency mode, when the display is set to real only, the real part of the impedance as a function
of frequency is plotted on a linear axis. In TDR mode

Figure 5 Standard TDR graph of microwave antenna showing how the impedance varies
from the ideal 50 ohms as a function of distance.

(see figure 5) the horizontal axis is distance. The vertical axis is a unitless quantity that
represents the strength of the reflection. The vertical scale runs from +1000 milli-units to -1000
milli-units. +1000 is said to be an open, -1000 is said to be a short and 0 on this scale is 50
Ohms. By looking at the plot, it can be seen where reflections are being generated, if the
discontinuities are inductive or capacitive in nature and what the impedance is at a point in
relation to a perfect open or short. This mode is very useful for checking solder joint connections
at the coaxial cable to microstrip transitions. Conditions where the center pin is shorted to the
ground plane or not sufficiently well soldered to the microstrip are easily spotted. It can also
characterize how clean a connection has been made by displaying the magnitude of the reflection
from that point.

Gating

Gating is another useful feature that is used often. The analyzer has the ability to set a gate around a region
(in either frequency space or distance/time) and ignore all other information that is not contained within
that measurement window.

Standard Measurement Procedure

The standard procedure for network-analyzer measurements was as follows. The unit was turned
on and allowed to warm up for at least five minutes. A short length of coaxial cable was
connected to the analyzer input. The analyzer was calibrated out to the end of the test cable. The
end of the cable was then attached to the RF connector jack on the PCB array edge. The
analyzer was put in TDR mode. The gate was set around the distance region of interest. The
analyzer was then switched into frequency mode and measurements were made. It is believed
that performing the measurements in this way improved measurement accuracy significantly by
reducing the noise and unwanted reflections to a minimum.


All antenna measurements were performed in exactly the same way. Because of the radiation
pattern characteristics of the DCC patch antenna, it is sensitive to the lossy muscle medium it is
looking into. If the loading changes, so will its edge impedance and therefore its measured
characteristics. The antennas tested all looked into the same load, consisting of a distilled water
bolus and muscle phantom. In this way, we tried to recreate the load the antenna would see in a
clinical situation.

Network Analyzer Measurements:

All microwave transmission-line matching algorithms start with a known load. In our case the load
was not known originally and had to be determined, either theoretically or experimentally, before a
matching network could be designed. The load, in our case, is the edge impedance of the
powered patch. An extensive search of the literature for an analytical solution was conducted; I
found that there does not exist a theoretical formula for the unique geometry of the DCC antenna.
Standard formulas are available for the edge impedance of a rectangular microstrip patch over an
infinite groundplane. A few commercial microwave analysis programs were also investigated, but
they would not produce a stable, believable analysis, due to limitations on computer memory for the
physically large and complex antenna geometry. For these reasons, it was decided to
experimentally determine the edge impedance of the radiating patch using the vector network
analyzer.

Finding the Edge Impedance

The network analyzer uses reflections from the discontinuity of interest to make its measurements. For this
reason, every care must be taken to minimize all other reflections. The matching network (see figure 7)
has many discontinuities, including coax-to-microstrip transitions, bends, step changes in width and two T-junctions.
If we were to try to determine the patch edge impedance by looking into the beginning of the
network, the results would be contaminated by the spurious reflections. (That method is fine, though, for
determining the overall properties of the feedline-patch network.) To eliminate these reflections, a test board
was improvised. An older, professionally made antenna array (see figure 6)


Figure 6 Older non-optimized antenna array used to determine the correct edge
impedance.

with a non-optimized microstrip feedline network was altered to give as accurate a reading of the
antenna patch edge impedance as possible. To minimize reflections, networks with the fewest
bends between the 2nd T-junction and the on-board coaxial connector were used. Then, using a
Dremel tool, one arm of each T-junction was ground away, as cleanly as possible. This reduced
the network to one continuous length of microstrip with six bends, feeding the patch on one
side only (see figure 14). For this configuration, the input impedance signal was cleaner than
before but still contaminated by multiple unwanted reflections. To remove these reflections, the
analyzer was set to TDR mode and the gate was placed just over the area where the feedline
meets the patch. In this way the analyzer mathematically ignores all reflections except
those coming from the gated region. The analyzer was then returned to frequency mode, and
accurate measurements of the input impedance of the patch edge were made.

Microwave Network Parameters

The analyzer was calibrated and connected to the antenna array as described above. The array
was then attached to the bolus, and the bolus was firmly attached to the muscle phantom, making
sure no air gaps existed between the array, bolus and load. The TDR mode was used to set the
gating. The gate was set so that only reflections starting from the PMMX connector to the patch
edge were considered. The analyzer was then returned to frequency mode and the microwave
parameters were measured. The measured parameters are: input impedance, VSWR and return
loss. Each antenna was characterized separately and this information was entered into an Excel
spreadsheet. The spreadsheet calculated the overall averages and deviations of the above
parameters by row and column.
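The measured quantities above are all standard functions of the complex reflection coefficient; a minimal sketch of those conversions (not the analyzer's internal algorithm; the 46-ohm example value is borrowed from the feedline design in this section):

```python
import math

def reflection_coefficient(z_load, z0=50.0):
    """Voltage reflection coefficient of a load on a z0-ohm line."""
    return (z_load - z0) / (z_load + z0)

def vswr(gamma_mag):
    return (1 + gamma_mag) / (1 - gamma_mag)

def return_loss_db(gamma_mag):
    return -20.0 * math.log10(gamma_mag)

g = abs(reflection_coefficient(complex(46, 0)))  # e.g. a purely resistive 46-ohm load
print(round(vswr(g), 3))            # 1.087
print(round(return_loss_db(g), 1))  # 27.6 dB
```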


Feedline Network Design:

With a known input impedance, a suitable matching network could be designed. A Mathematica
notebook was written to help in the calculations (see appendix 1). An explanation of the general
matching techniques used is presented, and then the slight variations investigated on different
columns in the array will be described. The design for this test array can be seen in figures 1-2. For
the particular geometry of the test array, it was found that the edge impedance of the microstrip
patch was . The standard matching network used can be seen in figure 7.

Figure 7 Diagram of standard matching network showing the impedances at different
points and the microwave compensation techniques used.

The calculations were made for a microstrip width that would result in a characteristic impedance
of 46 Ohms. The distances a and b were set to be at least three times the width, and the right-angle
bends are microwave mitered. The base width of the 1st T-junction was calculated to give
an even 3dB split in power and to minimize reflections. The bases of both 1st T-junctions continue
on without a change in width to form the arms of the 2nd T-junction. The radius of the curve c was
made as broad as possible within the space constraints. It was determined from previous
prototypes that when the curve c was made too sharply, it was a source of unwanted radiation
and insertion loss. The distance d was constrained to be at least 3W of the broader 23-Ohm line


that forms the arms of the 2nd T-junction. In this standard case, there were no specific constraints
on the length of line between the patch and the 1st and 2nd T-junctions. The base width of the 2nd
T-junction was calculated to give an impedance of 11.5 Ohms. The base of the 2nd T-junction is
very short. A quarter wave transformer is then used to match the 11.5 Ohm 2nd T-junction
input impedance to the 50 ohm microstrip line. The quarter wave transformer has a characteristic
impedance of 24 Ohms and a length of 5cm. After the quarter wave transformer, the microstrip
line runs all the way to the PMMX connector with as few bends as possible, maintaining at
least a 3W distance to the nearest microstrip lines to minimize cross coupling between adjacent
lines. The length of feedline from the end of the quarter wave transformer to the PMMX connector
was constrained. The longest run was designed first; then all following feedline runs were
constrained to be the same length. The length was fixed because, as was shown in the
microwave theory section, with a mismatched load, impedance varies with distance from the load.
The shorter runs were lengthened with short serpentine runs called meander lines. With all the
feedlines having the same length, theoretically they will all have the same input impedance.
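Assuming ideal, lossless lines, two of the impedance values in this design can be sanity-checked from standard transmission-line relations (a sketch; the 46, 23, 11.5 and 24 Ohm figures are the ones stated in this section):

```python
import math

# Two 46-ohm arms meeting at a T-junction look like 46 || 46 = 23 ohms:
z_arm = 46.0
z_tee = (z_arm * z_arm) / (z_arm + z_arm)
print(z_tee)  # 23.0

# Two 23-ohm arms at the 2nd T-junction give 11.5 ohms, which a quarter-wave
# transformer of Zt = sqrt(Z0 * ZL) matches to the 50-ohm feed:
z_in = (23.0 * 23.0) / (23.0 + 23.0)
z_quarter = math.sqrt(50.0 * z_in)
print(round(z_quarter, 1))  # 24.0, the transformer impedance used in the design
```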

The preceding standard optimization was done on four of the nine columns. On the remaining five
columns, additional matching techniques were investigated.

In column one, the length between the patch and the 2nd T-junction was constrained to be 1/8th of a
wavelength (see figure 8).

Figure 8 Diagram of standard matching network plus additional 1/8th wavelength matching
section.

This 1/8th-wavelength section was used to force the feedline/patch interface to be a voltage anti-node in the
standing wave pattern. At a voltage anti-node the voltage is a maximum, so the maximum
amount of power would be delivered to the patch.

Dr.A.B.Rajib Hazarika

In column four the distance between the 1st and 2nd T-junctions was constrained to be 1/4
wavelength (see figure 9).

Figure 9 Diagram of standard matching network plus additional 1/4 wavelength matching
section.

This additional quarter wave transformer was used to eliminate reflections between the 1st and 2nd
T-junctions.

The sixth, eighth and ninth columns used both the quarter wave transformer between the T-junctions
and the 1/8th section between the 1st T-junction and the patch (see figure 10).

Figure 10 Diagram of standard matching network plus additional 1/8th and 1/4 wavelength
matching section.

These techniques were used concurrently in the hope that their effects would be additive and
produce an aperture with improved matching and superior radiation characteristics.

Electric Field/SAR Scans

The computer-controlled three-dimensional electric-field-probe scanning device used to
characterize the electric field radiating into homogeneous muscle-tissue-equivalent liquid
phantom media from the antenna array can be seen in figure 11.

Figure 11 Experimental setup for the mapping of the electric field at depth in muscle
equivalent liquid phantom.

The antenna arrays to be tested were first attached to a de-ionized de-gassed water bolus of
thickness 0.5-1.5cm. The bolus/array was then inserted into a large bag, constructed of the same
polyurethane material as the water bolus, with Plexiglas backing board to hold the flexible array
flat during the electric field scans. The backing board is printed with an orthogonal grid that the
bolus/array is aligned with. This assembly is then inserted into the liquid muscle scan tank and
leveled to ensure the array is not skewed relative to the scanning apparatus.

The scanning apparatus consists of an electric field probe, a three-axis computer controlled
servomotor motion system, an input/output card and a computer running the data acquisition
software. The electric field probe used is a Narda model 8010 miniature 3-axis probe. This probe
consists of three orthogonal diode dipole sensors housed in the tip of a miniature wand. Three
low level DC signals, proportional to the squares of the three orthogonal field components, are transmitted to a low-noise differential
summing amplifier via high resistance leads and then to the computer for digitization. The
amplifier can be set to amplify all three components of the electric field or one component or any
combination of the three required. The squares of all three electric field components were
summed so that the total electric field squared could be recorded.

After the board/bolus is inserted, the electric field probe is positioned next. The probe is mounted
at a right angle on the end of a long Teflon rod. The probe is positioned over the center of an
antenna, making sure that it is orthogonal to the plane of the array. In this way it is certain that all
elements are orthogonal to each other and the probe will be correctly centered in the array.

The data acquisition program can now be started and the scanning parameters set. With this
system we can scan in planes parallel to the array surface, at any depth in muscle phantom
greater than the minimum 3.5mm distance, which represents the distance from the center of the 3
orthogonal dipole sensors to the probe tip. In practice, the antennas are scanned in planes
parallel to the antenna surface, 5mm and 10mm away from the surface on a 2.5mm grid. A
vertical cross section can also be scanned to record information on how the electric field varies
with depth. The scanning program creates data files that contain two dimensional arrays of DC
voltages as function of position. Another program is used to convert these values to SAR as a
function of position. The commercial data visualization software Surfer is used to plot contour and
surface maps of the experimental SAR data.

The above measurements/characterizations were performed on the most recent optimized array
and on several older non-optimized arrays. The newer optimized arrays are professionally
constructed by the PCB manufacturer Labtech LTD. The older non-optimized arrays were
manufactured in-house with a PCB home hobbyist kit. The older arrays are non-optimized in the
sense that no consideration was given to microwave matching techniques. The older arrays (see
figure 14 for an example) have the same basic feedline shape: one line splits into four via two T-
junctions. The width of the feedlines is the same throughout, and this width was constrained by
manufacturing concerns, not matching concerns. With the home hobbyist technique used it was
found nearly impossible to consistently produce quality lines of width less than 0.4mm. With the
given PCB geometry, a microstrip line of 0.4mm would have a characteristic impedance of 12
Ohms, a poor match for the 50 Ohm coaxial cable and PMMX connector. At both T-junctions the
12 Ohm feedline sees an input impedance looking into the base of only 6 Ohms. At the
patch/feedline interface, the 12 Ohm line sees a load of ~46 Ohms. It was thought that all of
these mismatches must produce an antenna that is far from optimized. The older non-optimized
arrays were used as a control and their measured parameters were used to judge the success of
the different optimization techniques.

Microwave Theory

In this section the important microwave engineering concepts and nomenclature are summarized
with particular attention given to microwave transmission line matching techniques.

The range of the electromagnetic spectrum from 300 MHz to 300 GHz is commonly referred to as
the microwave range. For applications with wavelengths from 1 meter to 1 millimeter, low
frequency circuit analysis techniques cannot be used; we must use transmission-line theory. In
transmission-line theory, the voltage and current along a transmission line can vary in magnitude
and phase as a function of position.

Many different types of microwave transmission lines have been developed over the years. In an
evolutionary sequence from rigid rectangular and circular waveguide, to flexible coaxial cable, to
planar stripline to microstrip line, microwave transmission lines have been reduced in size and
complexity. The microstrip transmission line is the technology employed in the current
hyperthermia applicator studied.

For fields having a sinusoidal time dependence and steady-state conditions, a field analysis of a
terminated lossless transmission line results in the following relations:

Figure 1 Diagram of lossless transmission line with load showing incident, reflected and
transmitted waves.

If an incident wave of the form $V_o^+ e^{-j\beta z}$, where $\beta$ is the phase constant or wave number
given by $\beta = 2\pi/\lambda$, is incident from the -z direction then the total voltage on the line can be
written as a sum of incident and reflected waves:

$$V(z) = V_o^+ e^{-j\beta z} + V_o^- e^{+j\beta z}$$

The total current on the line is

$$I(z) = \frac{V_o^+}{Z_0} e^{-j\beta z} - \frac{V_o^-}{Z_0} e^{+j\beta z}$$

where $Z_0$ is the characteristic impedance of the microstrip line, that is, the impedance the
transmission line would have if it were infinitely long or ideally terminated. The incident wave has
been written in phasor notation and the common time dependence factor $e^{j\omega t}$ has not been written.

The amplitude of the reflected voltage wave normalized to the amplitude of the incident voltage
wave is known as the voltage reflection coefficient, Γ:

$$\Gamma = \frac{V_o^-}{V_o^+} = \frac{Z_L - Z_0}{Z_L + Z_0}$$

where $Z_L$ is the load impedance.

The total voltage and current waves on the line can then be written in terms of the reflection
coefficient as

$$V(z) = V_o^+ \left[ e^{-j\beta z} + \Gamma e^{+j\beta z} \right], \qquad
I(z) = \frac{V_o^+}{Z_0} \left[ e^{-j\beta z} - \Gamma e^{+j\beta z} \right]$$

From the previous equations we see that the voltage and current on the line are a superposition
of an incident and a reflected wave. If the system is static, i.e. if $V_o^+$ and $\Gamma$ are not changing in time,
the superposition of waves will also be static. This static superposition of waves on the line is
called a standing wave.

Because of the complicated shape of this standing wave, the voltage will vary with position along
the line, from some minimum value $V_{min}$ to some maximum value $V_{max}$. The ratio of $V_{max}$ to $V_{min}$
is one way to quantify the mismatch of the line. This mismatch is called the standing wave ratio (SWR) or
voltage standing wave ratio (VSWR) and can be expressed as:

$$\mathrm{SWR} = \frac{V_{max}}{V_{min}} = \frac{1 + |\Gamma|}{1 - |\Gamma|}$$

The SWR is a real number such that 1 ≤ SWR ≤ ∞, and with a perfect match SWR = 1. By
definition, impedance, characteristic or otherwise, is the ratio of the voltage to the current at a
particular point on the line. The standing waves cause the impedance to fluctuate as a function of
distance from the load. The variation in impedance along the transmission line caused by the
line/load mismatch can be written

$$Z_{in}(l) = Z_0 \, \frac{1 + \Gamma e^{-2j\beta l}}{1 - \Gamma e^{-2j\beta l}}$$

where $l$ is the distance from the load. If we substitute the expression for Γ in terms of the
impedances, the generalized input impedance of the load plus transmission line simplifies to:

$$Z_{in}(l) = Z_0 \, \frac{Z_L + j Z_0 \tan \beta l}{Z_0 + j Z_L \tan \beta l}$$

With this equation the impedance anywhere along the line can be calculated if the load
impedance and characteristic impedance are known.

In the most basic sense, then, if the load impedance equals the line impedance, the reflection
coefficient is zero and the load is said to be matched to the line. All of the microwave impedance
matching techniques can be reduced to this simple idea: minimize the reflection of the incident
wave to as nearly zero as possible.

When the load is mismatched to the line and thus there is a reflection of the incident wave at the
load, the power delivered to the load is reduced. This loss is called return loss (RL) and is equal
(in dB) to

$$\mathrm{RL} = -20 \log_{10} |\Gamma| \ \mathrm{dB}$$
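The standing-wave relations above are straightforward to evaluate numerically. The following is a minimal sketch (plain Python; the 12 Ohm and ~46 Ohm values are taken from the older-array discussion elsewhere in this section, used here purely as an illustration):

```python
import math

def gamma(ZL, Z0):
    """Voltage reflection coefficient at the load: (ZL - Z0) / (ZL + Z0)."""
    return (ZL - Z0) / (ZL + Z0)

def swr(ZL, Z0):
    """Standing wave ratio (1 + |G|) / (1 - |G|); equals 1 for a perfect match."""
    g = abs(gamma(ZL, Z0))
    return (1 + g) / (1 - g)

def return_loss_db(ZL, Z0):
    """Return loss in dB: -20 log10 |G|."""
    return -20 * math.log10(abs(gamma(ZL, Z0)))

def z_in(ZL, Z0, l_over_lambda):
    """Input impedance a distance l (in wavelengths) from the load on a lossless line."""
    bl = 2 * math.pi * l_over_lambda  # beta * l
    t = math.tan(bl)
    return Z0 * (ZL + 1j * Z0 * t) / (Z0 + 1j * ZL * t)

# The 12 Ohm hobbyist feedline driving the ~46 Ohm patch is badly mismatched:
print(round(swr(46, 12), 2), round(return_loss_db(46, 12), 2))
```

A matched load (ZL = Z0) gives Γ = 0 and SWR = 1; evaluating `z_in` at a quarter and a half wavelength reproduces the impedance-inverting and impedance-repeating behavior used throughout this section.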

This ends the summary of the relevant general microwave engineering concepts. Some relations
specific to microstrip will now be discussed before moving on to discuss the compensation of
microstrip discontinuities.

The geometry of a typical microstrip line can be seen in figure 2.

Figure 2 Side view of microstrip showing actual and effective geometry.

Starting with a two-layer PCB, the top layer is chemically etched away to leave copper traces of
width W, separated from the groundplane by a dielectric substrate of some thickness d and
relative permittivity $\epsilon_r$.

Because of the anisotropic dielectric geometry, the microstrip line cannot support a true TEM
wave for the following reasons: a microstrip line has most of its electric field concentrated in the
region between the line and the groundplane; a small fraction propagates in the air above.
Because the speed of light is different in air and dielectric, the boundary-value conditions at
the air-dielectric interface cannot be met with a pure TEM wave, and the exact fields constitute a
hybrid TM-TE wave. Because the dielectric substrate is electrically very thin ($d \ll \lambda$) for this
application, the fields are quasi-TEM. Because the fields are quasi-TEM, good approximations for
the phase velocity, propagation constant, and characteristic impedance can be obtained from the
static solution.

The phase velocity in microstrip line is given by

$$v_p = \frac{c}{\sqrt{\epsilon_e}}$$

and the propagation constant is given by

$$\beta = k_0 \sqrt{\epsilon_e}$$

where $\epsilon_e$ is the effective dielectric constant and is given by

$$\epsilon_e = \frac{\epsilon_r + 1}{2} + \frac{\epsilon_r - 1}{2} \, \frac{1}{\sqrt{1 + 12 d / W}}$$

The effective dielectric constant is the dielectric constant of an equivalent homogeneous
medium that replaces the air and dielectric layers.

The characteristic impedance of a microstrip line can be calculated, given the width W and
substrate thickness d, with the result

$$Z_0 = \begin{cases}
\dfrac{60}{\sqrt{\epsilon_e}} \ln\!\left(\dfrac{8d}{W} + \dfrac{W}{4d}\right), & W/d \le 1 \\[2ex]
\dfrac{120\pi}{\sqrt{\epsilon_e}\left[W/d + 1.393 + 0.667 \ln\!\left(W/d + 1.444\right)\right]}, & W/d \ge 1
\end{cases}$$
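These quasi-static design equations are easy to evaluate in code. The sketch below assumes the standard closed-form microstrip approximations; the substrate permittivity and dimensions are illustrative values, not the actual board parameters used in this work:

```python
import math

def eps_eff(eps_r, W, d):
    """Effective dielectric constant of the equivalent homogeneous medium."""
    return (eps_r + 1) / 2 + (eps_r - 1) / 2 / math.sqrt(1 + 12 * d / W)

def z0_microstrip(eps_r, W, d):
    """Characteristic impedance of a microstrip line of width W on substrate thickness d."""
    ee = eps_eff(eps_r, W, d)
    u = W / d
    if u <= 1:
        return 60 / math.sqrt(ee) * math.log(8 / u + u / 4)
    return 120 * math.pi / (math.sqrt(ee) * (u + 1.393 + 0.667 * math.log(u + 1.444)))

# Wider lines give lower impedance; narrow lines sit well above 50 Ohms:
for W in (0.5, 1.0, 3.0):  # widths in mm on a hypothetical 1 mm substrate
    print(W, round(z0_microstrip(4.4, W, 1.0), 1))
```

This monotone width/impedance trade-off is exactly why the hobbyist constraint of a minimum 0.4mm line width forced the older arrays into a low, poorly matched characteristic impedance.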

If all microstrip based circuits consisted of a proper width straight feedline terminating in a load,
there would not be much need to worry about compensating for discontinuities. Even in this ideal
case, the transition from microwave source to microstrip line and from the microstrip to load can
be the source of large reflections. Typical microstrip discontinuities are junctions, bends, step
changes in width and the coaxial cable to microstrip junction. If these discontinuities are not
compensated, they introduce parasitic reactances that can lead to phase and amplitude errors,
input and output mismatch, and possibly spurious coupling. The strength of a particular
discontinuity is frequency dependent: the higher the frequency, the larger the discontinuity. The
following typical discontinuities and their compensation are discussed in descending order of
importance.

Impedance Mismatches

Quarter-Wave Transformer

A general mismatch in impedance between two points on a transmission line can be compensated with a
quarter-wave transformer. The quarter-wave transformer is a very useful matching technique that also
illustrates the properties of standing waves on a mismatched line. First, an impedance-based explanation of
how a quarter-wave transformer works will be described; then a more intuitive explanation that is
analogous to destructive interference in thin films will be discussed. A quarter wave transformer in
microstrip is shown in figure 3.

Figure 3 Diagram of quarter wave impedance transformer showing multiple reflections.

In a quarter-wave transformer, we want to match a load resistance $R_L$ to the characteristic
feedline impedance $Z_0$ through a short length of transmission line of unknown length and
impedance $Z_1$. The input impedance looking into the matching section of line is given by

$$Z_{in} = Z_1 \, \frac{R_L + j Z_1 \tan \beta l}{Z_1 + j R_L \tan \beta l}$$

If we choose the length of the line $l = \lambda/4$ then $\beta l = \pi/2$; divide through by $\tan \beta l$
and take the limit as $\beta l \to \pi/2$ to achieve

$$Z_{in} = \frac{Z_1^2}{R_L}$$

For a perfect transition with no reflections at the interface between microstrip and load, Γ = 0 so
$Z_{in} = Z_0$, and this gives us a characteristic impedance of

$$Z_1 = \sqrt{Z_0 R_L}$$

which is the geometric mean of the load and source impedances. With this geometry, there will
be no standing waves on the feedline although there will be standing waves on the matching
section. Why was the value of $\lambda/4$ chosen? In fact, any odd multiple $(2n + 1)\lambda/4$ will
also work.

The astute reader may recognize these conditions as similar to those found in destructive
interference in thin films. In thin films, if light is incident on mediums with progressively higher
index of refraction, it will undergo a 180 degree phase change at both interfaces. For there to be
destructive interference, the path length difference must be $\lambda/2$. The microstrip quarter-wave
transformer works in exactly the same way. When the line length is precisely $\lambda/4$, the reflected
wave from the load destructively interferes with the wave reflected from the feedline/transformer
interface and they cancel each other out. It should be noted that this method can only match a real load. If the
load has an appreciable imaginary component, it must be matched differently. It can be
transformed into a purely real load, at a single frequency, by adding an appropriate length of
feedline.
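As a quick numerical check of the geometric-mean result, a two-line sketch reproduces the 24 Ohm transformer impedance quoted earlier for matching the 11.5 Ohm T-junction base into the 50 Ohm feed system:

```python
import math

def quarter_wave_z(Z0, RL):
    """Impedance of a quarter-wave section matching a real load RL to a feed of impedance Z0."""
    return math.sqrt(Z0 * RL)

print(round(quarter_wave_z(50, 11.5), 1))  # ~24.0 Ohms, the value used in the layout
```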

Junctions

A junction between two dissimilar width sections also introduces a large discontinuity. A standard
T-junction power divider is shown in figure 4.

Figure 4 Diagram of T-junction power divider.

In this diagram, the input power $P_{in}$ is delivered to the intersection on a microstrip of width $W_0$ and
impedance $Z_0$. The line then branches into two arms with power, width and impedance given by
$P_1, W_1, Z_1$ and $P_2, W_2, Z_2$ respectively. The design equations for this divider are

$$\frac{1}{Z_1} + \frac{1}{Z_2} = \frac{1}{Z_0}, \qquad P_1 = \frac{Z_0}{Z_1} P_{in}, \qquad P_2 = \frac{Z_0}{Z_2} P_{in}$$

The simplest type of matched T-junction is the lossless 3dB power divider. It can be seen from
the equations above that if $Z_1 = Z_2 = 2 Z_0$ the power will split evenly into the arms of the T with
each arm having half the original power. It is interesting to note that the impedances of the two
arms act just like resistors wired in parallel. To match the impedances of the arms of the T to the
impedance of the base, the arms must have twice the impedance of the base.
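The parallel-resistor behaviour can be made concrete with a small sketch (illustrative impedance values; each arm's admittance sets its share of the input power):

```python
def t_junction_split(Z0, Z1, Z2):
    """Fraction of the input power delivered to each arm of a lossless matched T-junction.

    Matching requires 1/Z1 + 1/Z2 = 1/Z0, and each arm takes power in
    proportion to its admittance 1/Z, like resistors wired in parallel.
    """
    Y1, Y2 = 1 / Z1, 1 / Z2
    assert abs(Y1 + Y2 - 1 / Z0) < 1e-9, "arms do not match the base impedance"
    return Y1 / (Y1 + Y2), Y2 / (Y1 + Y2)

# Equal arms of twice the base impedance give the even 3 dB split:
print(t_junction_split(50, 100, 100))  # (0.5, 0.5)
```

Unequal arm impedances (e.g. a hypothetical 75/150 Ohm pair on a 50 Ohm base) still match, but steer two thirds of the power into the lower-impedance arm.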

Another typical microstrip discontinuity results from a simple bend in the line. Figure 5 shows
some typical bend discontinuities and the required compensation techniques.

Figure 5 Different bend discontinuities in microstrip and their compensations.

The increased conductor area in the region of the bend produces a parasitic discontinuity
capacitance. This effect can be eliminated by making a smooth swept bend where there is no
change in the conductor area. The radius has to be r≥ 3W, which takes up a large amount of
space, space that is always at a premium. A more space-effective compensation method is to
miter the right angle bend.

Source-microstrip transition

To launch a wave on the microstrip transmission line, the microwave signal is brought from the
generator on a coaxial cable which connects to an on-board PCB-mounted jack which is soldered
directly to the groundplane and feedlines. To minimize reflections in this process the generator,
coaxial cable and jack all have characteristic impedances of 50 ohms. The actual transfer of
the wave from the jack to the microstrip is the main source of reflections in this process. To
minimize these reflections the microstrip line impedance must match the impedance of the jack.

The compensation methods for a step change in width and the parasitic reactance of a T-junction
are shown in figure 6.

Figure 6 Second order discontinuities and their compensation techniques.

These discontinuities are second order, only becoming significant at frequencies above 3 GHz.
For this reason, these methods of compensation were not employed in this research.

We have reviewed basic transmission line theory, explaining the terms used to describe
microstrip circuits and the techniques used to match different elements in a circuit. In the next
section, we discuss how to apply this theory.


History and Motivation

Facts about Cancer

Cancer is a group of diseases characterized by uncontrolled growth and spread of abnormally
transformed or mutated cells. If this spread is not controlled, death will eventually result. Cancer
is caused by both external (chemicals, radiation, and viruses) and internal (hormones, immune
response dysfunction, and inherited gene deficiencies) factors. Causal factors may act together
or in sequence to initiate or promote carcinogenesis.

About 2.6 million new cancer cases are expected to be diagnosed in 2000. This year about
552,200 Americans are expected to die of cancer—more than 1,500 people a day. Cancer is the
second leading cause of death in the US, exceeded only by heart disease. In the US, 1 of every 4
deaths is from cancer.

Breast cancer is a malignant tumor that has developed from cells of the breast. Breast cancer is
the most common cancer among women, excluding non-melanoma skin cancers. The American
Cancer Society estimates that in 2000, 182,800 new cases of invasive breast cancer (Stages I-IV)
will be diagnosed among women in the United States, resulting in 41,200 deaths. Breast
cancer is the second leading cause of cancer death in women, exceeded only by lung cancer.

There are many treatment options for women with breast cancer including surgical removal of the
entire breast or lump, radiotherapy and various chemotherapy and hormone treatments. If a
cancer comes back after treatment it is called a recurrence. Nearly one third of these breast
cancer recurrences are on the chest wall. This chestwall recurrence of breast carcinoma is quite
deadly, with only 25 to 30% of patients surviving out to five years. An interesting fact about
chestwall recurrence is the large range in survival. Two years is the median survival following
recurrence, but it ranges from a few months to 30 years. Successful treatment of chestwall
recurrence thus has the potential to add years to a patient's life as well as significantly improve
the patient's quality of life.

Hyperthermia as Cancer Treatment

Hyperthermia is the use of elevated tissue temperature for the treatment of cancer. Hyperthermia
therapy consists of elevating tissue temperature to the range 41 to 45°C for an hour. When used
alone, it is thought that protein denaturation is the main cause of hyperthermic cell death. Heat is also
thought to affect cells in the following ways: heat can alter the structure of plasma membranes
and impair many membrane-related functions, which can lead to cell death.
Heat also damages mitochondria and inhibits glycolysis and respiration. Heat can also inhibit
the synthesis and repair of damage to DNA, proteins, and RNA, and heat damages polysomes and
microsomes.

While hyperthermia used alone is effective (when temperatures and thermal doses are sufficiently
high), heat is most commonly used as an adjuvant treatment. The two types of cancer treatments
most commonly used with hyperthermia are chemotherapy and radiotherapy. Chemotherapy is
the use of drugs to kill the cancer cells. These drugs, through various methods, disable the
reproductive abilities of cancerous cells. Radiation therapy is the use of x-rays, gamma-rays and
electrons as ionizing agents that interact with biologic material to produce highly reactive free
radicals, which result in biologic damage. The main effect of radiotherapy is to block the cell's
ability to reproduce. Radiation and heat interact in more than a simply additive way. This
synergistic interaction of heat and radiation is interpreted as a heat-induced sensitization of cells
to radiation, termed heat radiosensitization or thermal radiosensitization. This synergistic
interaction is attributed to the hyperthermic effect of preventing the repair of radiation-induced
DNA strand breaks and the excision of damaged bases. It is believed that these effects are
caused by (1) heat-induced inactivation of DNA repair enzymes and/or (2) alteration of the
chromatin structure due to protein denaturation and aggregation, which causes decreased
accessibility of the damaged sites to the repair machinery. It has also been shown that mild
hyperthermia, when given concurrently with low-dose-rate irradiation, can remove the low-dose-
rate sparing effect. There has also been no evidence that radiation results in an enhancement of
heat lesions, i.e. no radiation-induced heat sensitization takes place.

Hyperthermia with chemotherapy has not been studied as extensively as combinations with
radiation, but some strong rationales exist for its use. Hyperthermia enhances the cell-killing
effect of a number of chemotherapeutic agents, such as cyclophosphamide, melphalan, cisplatin
and doxorubicin. Perhaps the most obvious effect is that, if heat is localized to the tumor volume,
the flow of blood to that area is increased as the body attempts to cool the area, thereby
increasing the concentration of therapeutic chemicals delivered to the tumor relative to the rest
of the body at a cooler temperature. Heat also causes blood vessel walls inside the tumor to
become more permeable (leaky), causing drugs to leak into the heated tumor at a higher rate. The increased
chemotherapeutic effect at elevated temperatures can be caused by altered pharmacokinetics or
pharmacodynamics, increased DNA damage, decreased DNA repair, reduced oxygen radical
detoxification, and increased membrane damage. In addition, concentrations of agents that are
not normally toxic at normal body temperature can become cytotoxic above 39°C and, in some
cases, hyperthermia may partially overcome some types of drug resistance.

Heating Mechanisms

There are three primary methods of heating tissue in hyperthermia: 1) frictional losses from
molecular oscillations caused by an ultrasound pressure wave; 2) simple thermal conduction from
areas of high temperature to areas of low temperature; and 3) resistive and dielectric losses from
an applied electromagnetic field. Of these three, the present effort relies primarily on an applied
EM field to induce heating of superficial tissue at a depth up to 1cm, and on thermal
conduction to heat slightly deeper and smooth the temperature distribution.

All living human tissue contains some amount of free charge, which can interact
with an external electromagnetic field. Tissues with high water content, and thus a large
percentage of polar molecules, interact especially well. Blood, skin, muscle, internal organs and
tumors all contain large percentages of water. At microwave frequencies above 100 MHz, human
tissue can be considered a lossy dielectric. The electrical properties of lossy human tissue
may be characterized in terms of its dielectric constant and electrical conductivity. As an example,
muscle has a dielectric constant of 51 and an electrical conductivity σ of 1.21 S/m, while
germanium, commonly used as a semiconductor, has a conductivity of 2.17 S/m. At 915 MHz,
dielectric losses in tissue predominate and heating results primarily from friction caused by polar
water molecules that rotate and oscillate to maintain alignment with the time-varying electric field.

The amount of microwave energy absorbed by tissue is given by the absorbed power density, in
watts per meter cubed,

$$P = \frac{1}{2} \sigma |E|^2$$

where $J = \sigma E$ is the induced current density. The absorbed power density is also stated in terms
of power absorbed per kilogram of tissue, or specific absorption rate (SAR):

$$\mathrm{SAR} = \frac{\sigma |E|^2}{2 \rho}$$

where ρ is the density of tissue in kilograms per meter cubed. The SAR pattern is the quantity used
most often to describe the heating properties of a particular hyperthermia applicator. In general,
the 50% SAR level is considered to be the extent of effective heating. The qualities inherent in an
acceptable SAR pattern (see figure for an example) are: the 50% level extends to at least the
dimensions of the applicator, and the SAR distribution inside the 50% contour is relatively flat, with no
sharp peaks or valleys, rising smoothly everywhere to the maximum value.
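The SAR relation translates directly into code. A minimal sketch, using the muscle conductivity quoted above and an illustrative peak field amplitude and tissue density:

```python
def sar(sigma, E_peak, rho):
    """Specific absorption rate in W/kg: sigma * |E|^2 / (2 * rho), for a peak-amplitude field."""
    return sigma * E_peak ** 2 / (2 * rho)

# Muscle (sigma = 1.21 S/m, density taken as ~1000 kg/m^3) in a 100 V/m peak field:
print(sar(1.21, 100.0, 1000.0))  # about 6.05 W/kg
```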

Equipment and Techniques For Producing Hyperthermia in Superficial Tissues

The past two decades have seen considerable growth and development in electromagnetic
techniques available for producing superficial hyperthermia. The microwave waveguide applicator
is probably the most basic method for providing superficial hyperthermia by electromagnetic
means; it consists of a rectangular waveguide excited by a monopole feed. The dimensions of the
waveguide are selected so that a strong TE10 mode propagates at the chosen frequency.
Because human tissues are in general layered, with a high-resistance fat layer between low-
resistance skin and muscle or tumor tissues, the TE10 mode is preferred because the electric field
is oriented tangential to the skin surface. This tangential electric field minimizes overheating of
the fat-muscle tissue interface because the high resistance fat appears in parallel to the low
resistance muscle or tumor layer. While this design did produce some useful heating for a few
limited clinical situations, the dimensions of the waveguide proved too large to conform
adequately to the usually contoured treatment sites. The dimensions of the waveguide were
reduced by loading the waveguide with high-dielectric material, reducing the wavelength in the
guide and therefore the aperture size. The field pattern of these applicators has a maximum in the
geometrical center and falls off to well below 50% of the maximum field at the waveguide edges.
To reduce this central hot spot and to increase the field strength at the edges, a coupling bolus is
used. A coupling bolus is a flexible bag attached to the waveguide face that circulates
temperature-controlled de-ionized de-gassed water, or in some applications, silicone oil.

Variable-absorption boluses have also been studied as a way to increase the homogeneity of the
field pattern from a waveguide applicator. With this technique the de-ionized water bolus is
compartmentalized and the different compartments can be filled with a more highly absorbing
material such as saline solution. The compartments filled with saline will reduce the energy
transmitted, and in this way the central maximum can be reduced while the heating at the edges is
not affected, resulting in a more uniform energy deposition in skin but at the cost of higher overall
power.

To address the problem of standard waveguide applicator’s non-uniform field distributions, horn
waveguide applicators were studied. Horn waveguide applicators utilize a flared opening to
spread the radiated field and to obtain a better impedance match to the tissue. These applicators
produced a more uniform field pattern, with the 50% of maximum level being larger than that of the
standard waveguide but still not equal to the horn perimeter. While these horn applicators had a
more uniform field pattern, they still suffered from being too large to effectively cover large
regions of tissue over contoured treatment sites.

A common problem for both of these methods is the non-adjustability of the electromagnetic field
pattern under the face of the applicator to tailor the field pattern for irregular tumor shapes. Thus
the next logical step was to make an applicator consisting of several waveguides together in an
array of radiating apertures. One such commercially available hyperthermia system is the
Microtherm 1000 (Labthermics Technologies Inc, Champagne IL) which has an array of 16
waveguides and integral water bolus on a movable support arm (see fig. 1).

Figure 1 The Microtherm 1000 hyperthermia applicator (Labthermics Technologies Inc,
Champagne IL)

Figure 2 Close up of the extendable bolus of the Microtherm 1000

The Microtherm 1000 can treat an area of 13 by 13 cm wide by 1.5 cm deep. The Microtherm
1000 is currently the standard of care in electromagnetic superficial hyperthermia. This is the
machine used at UCSF in treatments of superficial skin disease like chestwall recurrence of
breast carcinoma. The advantages of this machine over single-waveguide methods are that it can
cover a larger area than a single-aperture waveguide of identical size and that by adjusting the
power to the various elements, the field pattern can be shaped somewhat to adjust for irregular
tumor shapes. While this machine can cover more area, with improved heating uniformity, it still
suffers from one of the same failings of the single waveguides: it cannot conform around curved
anatomy. While its 8cm thick water bolus helps it to conform somewhat to small curvature, it still
cannot treat surface disease which spreads around the ribcage. It is useful primarily on flat
treatment sites.

Another heating approach makes use of an inductive-loop current sheet applicator, which is
smaller and lighter in weight than typical waveguide applicators and can be connected together in
hinged flexible arrays for contoured surfaces. While more compact than waveguide and horn
applicators, these applicators require great care when used in arrays to avoid under- or over-
heating the area between the adjacent apertures, especially when angled together over a
contoured surface.

Recently there has been considerable interest in using printed-circuit-board (PCB) based
microwave radiators. Microwave patch, slot, and spiral radiators have been studied. It was found
that many PCB-based microwave radiators have a large electric field component oriented
normal to the fat-muscle interface. This strong normal field component falls off faster as a function
of distance from the applicator face, compared to the tangential component, suggesting the use
of a thick water bolus to reduce the normal component in relation to the tangential component.

There is a commercially available Contact Flexible Microstrip Applicator (CFMA) which can treat
an area of roughly 12.5 by 24 cm. While the CFMA has the ability to conform to contoured
treatment sites, it is a single-channel device, so there is no ability to shape the SAR pattern. If
the field must be reduced in one area, for example to avoid overheating a nipple, the power and
heating effectiveness must be reduced for the entire treatment site. Thus, while this applicator can
treat large areas involving contoured anatomy, there is no provision to adjust the heating pattern
to accommodate patient-specific anatomy or heterogeneous electrical and thermal tissue
properties.

Arrays of microstrip spiral antennas have also been used. An array of 25 individually controlled
spiral antennas built on a flexible PCB was studied by one group. It was found that the spiral
antennas produced a sharply peaked Gaussian pencil beam under the center of the spiral. A
minimum 3 cm thick water bolus was necessary to smooth the combined beam profile enough to
achieve useful heating without cold areas between spiral elements. While this method was
generally useful, the thick water bolus limited its use near complex contoured anatomy and
increased setup complexity and the power required.

As a way of avoiding the problem of awkward and heavy water bolus structures, Ryan et al.
studied a dense array of overlapping spirals. This array produced a more spatially uniform field
with a thinner water bolus. The drawback to this technique was that the large overlap of spirals
needed for a uniform field severely restricted the size of the treatable area. It would seem that the
microstrip spiral, with its sharply peaked central pencil-beam radiation pattern, was not ideally
suited for hyperthermia treatments of large surface areas where homogeneity of the heating field
is needed.

In summary, the currently available technology for heating superficial tissues cannot cover a
large enough treatment area, cannot conform to the curved treatment sites typically seen in the
clinic, and cannot provide sufficient adjustment of the heating pattern to cover irregularly shaped
treatment sites.

From the previous evaluation of applicators, the following specifications were determined for an
ideal large-area superficial hyperthermia applicator:

1. Flexibility: the ability to curve around contoured anatomy such as a rib cage.
2. Multi-element array with individual power control to each element.
3. Invisibility to high-energy (6-20 MeV) electrons, to allow simultaneous heating and
radiotherapy.
4. Light weight, to increase patient comfort and mobility during treatment.
5. Minimal setup complexity.
6. Low cost.

The Conformal Microwave Array (CMA) described in this thesis has the potential to fulfill all the
requirements of the above ideal applicator specifications for treating large-area superficial
hyperthermia. The CMA is an array of microstrip patch antennas etched into a very flexible two-sided
PCB (see fig 9, diagram of CMA). It is light in weight, extremely thin (9 mils), easy to use,
and inexpensive to manufacture compared to previous applicators.

Objectives

The main thrust of this thesis is to describe efforts to optimize the Conformal Microwave Array.
Optimization is desired in the sense that we want to produce the highest uniform output power
with the lowest possible input power. Specifically, the radiation efficiency (the ratio of power out to
power in) and the uniformity, or balance, of the output of individual antennas were improved. To
achieve these goals, I have concentrated on applying microwave-engineering theory to the
microstrip line network, which extends from the coax-to-microstrip RF connector on the PCB edge,
runs across the antenna array surface, and splits to feed the four sides of the radiating
microstrip patch.
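The two optimization targets named above, radiation efficiency and element balance, can be sketched numerically. This is an illustrative calculation only; the per-patch powers and feed power below are hypothetical numbers, not measurements from the CMA.

```python
def radiation_efficiency(power_out_w, power_in_w):
    """Radiation efficiency: ratio of total radiated power to input power."""
    return power_out_w / power_in_w

def balance(element_powers_w):
    """Uniformity of the array output: min/max ratio (1.0 = perfectly balanced)."""
    return min(element_powers_w) / max(element_powers_w)

# Hypothetical per-patch radiated powers (watts) and feed power:
elements = [8.1, 7.9, 8.4, 7.6]
feed_power = 40.0

print(radiation_efficiency(sum(elements), feed_power))  # ~0.8
print(balance(elements))                                # ~0.905
```

Raising the first number while pushing the second toward 1.0 is exactly the trade-off the microstrip feed network design has to manage.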

5.3. FM Radio

FM stands for frequency modulation, invented by Edward Armstrong of the United States.
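As a minimal sketch of the principle (not of any particular transmitter), frequency modulation varies the carrier's instantaneous frequency in proportion to the message signal; the sample rate, carrier frequency, and deviation below are arbitrary illustrative values.

```python
import math

def fm_modulate(message, fs, fc, kf):
    """FM: s[n] = cos(2*pi*fc*n/fs + phase[n]), where the phase accumulates
    2*pi*kf*message[n]/fs each sample, so the instantaneous frequency
    is fc + kf*message[n]."""
    phase = 0.0
    samples = []
    for n, m in enumerate(message):
        phase += 2.0 * math.pi * kf * m / fs  # running integral of the message
        samples.append(math.cos(2.0 * math.pi * fc * n / fs + phase))
    return samples

# A constant message of +1.0 simply shifts the carrier from fc to fc + kf:
tone = fm_modulate([1.0] * 8, fs=48_000.0, fc=1_000.0, kf=75.0)
```

Because the information rides in the frequency rather than the amplitude, FM reception is far less sensitive to amplitude noise, which is what made Armstrong's scheme superior to AM.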

Digital radio describes radio communications technologies which carry information as a digital signal, by
means of a digital modulation method. Digital radio is very commonly used in microwave radio
communications. It is widely used in point-to-point microwave systems on the surface of the Earth
(terrestrial), in satellite communications carrying all kinds of digital information, and in deep-space
communication systems, such as communications to and from the two Voyager space probes. Terrestrial
digital microwave communication systems can carry any form of digital information at all, including
multiplexed digitized voice or music signals, Internet traffic, financial traffic, military communications, etc.


The key breakthrough of digital microwave systems is that they can carry any kind of
information whatsoever, just as long as it has been expressed as a sequence of ones and zeroes. Earlier
radio communication systems had to be made expressly for a given form of communications: telephone,
telegraph, or television, for example. All kinds of digital communications can be multiplexed or encrypted
at will.
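The multiplexing point can be sketched with a toy round-robin time-division multiplexer; the byte streams are placeholders, and the sketch assumes equal-length streams for simplicity.

```python
def tdm_multiplex(*streams):
    """Interleave equal-length byte streams byte-by-byte (round-robin TDM)."""
    return bytes(b for frame in zip(*streams) for b in frame)

def tdm_demultiplex(muxed, n_streams):
    """Recover the original streams by taking every n-th byte."""
    return [muxed[i::n_streams] for i in range(n_streams)]

voice, data = b"VOICE!", b"DATA!!"
channel = tdm_multiplex(voice, data)   # one combined "digital channel"
assert tdm_demultiplex(channel, 2) == [voice, data]
```

Once voice, telegraphy, and video are all just bit streams, one channel can carry them all; this is the generality the paragraph above describes.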

Other common meanings of digital radio include digital audio broadcasting, digital television broadcasting,
short-range digital wireless communications, and radio broadcasting delivered via the Internet.

One-way digital radio

One-way standards are those used for broadcasting, as opposed to those used for two-way communication.
While digital broadcasting offers many potential benefits, its introduction has been hindered by a lack of
global agreement on standards. The Eureka 147 standard (DAB) for digital radio is the most commonly
used and is coordinated by the World DMB Forum, which represents more than 30 countries. This standard
of digital radio technology was defined in the late 1980s, and is now being introduced in many countries.
Commercial DAB receivers began to be sold in 1999 and, by 2006, 500 million people were in the
coverage area of DAB broadcasts, although by this time sales had only taken off in the UK and Denmark.
In 2006 there were approximately 1,000 DAB stations in operation.[1] There have been criticisms of the
Eureka 147 standard, and so a new 'DAB+' standard has been proposed.

To date the following standards have been defined for one-way digital radio:

Digital audio broadcasting systems


o Eureka 147 (branded as DAB)
o DAB+
o Digital Radio Oceane
o FM band in-band on-channel (FM IBOC):
   - HD Radio (OFDM modulation over FM and AM band IBOC sidebands)
   - FMeXtra (FM band IBOC subcarriers)
   - Digital Radio Mondiale extension (DRM+) (OFDM modulation over AM band IBOC sidebands)
o AM band in-band on-channel (AM IBOC):
   - HD Radio (AM IBOC sideband)
   - Digital Radio Mondiale (branded as DRM) for the short, medium and long wave-bands
o Satellite radio:
   - WorldSpace in Asia and Africa
   - Sirius in North America
   - XM Radio in North America
   - MobaHo! in Japan and the Republic of (South) Korea
o ISDB-TSB
o Systems also designed for digital TV:
   - DMB
   - DVB-H
• Internet radio
• Low-bandwidth digital data broadcasting over existing FM radio:
o Radio Data System (branded as RDS)


• Radio pagers:
o FLEX
o ReFLEX
o POCSAG
o NTT

Digital television broadcasting (DTV) systems


o Digital Video Broadcasting (DVB)
o Integrated Services Digital Broadcasting (ISDB)
o Digital Multimedia Broadcasting (DMB)
o Digital Terrestrial Television (DTTV or DTT) to fixed, mainly roof-top antennas:
   - DVB-T (based on OFDM modulation)
   - ISDB-T (based on OFDM modulation)
   - ATSC (based on 8VSB modulation)
   - T-DMB
o Mobile TV reception in handheld devices:
   - DVB-H (based on OFDM modulation)
   - MediaFLO (based on OFDM modulation)
   - DMB (based on OFDM modulation)
   - Multimedia Broadcast Multicast Service (MBMS) via the GSM EDGE and UMTS cellular networks
   - DVB-SH (based on OFDM modulation)
o Satellite TV:
   - DVB-S (for satellite TV)
   - ISDB-S
   - 4DTV
   - S-DMB
   - MobaHo!

See also software radio for a discussion of radios which use digital signal processing.

DAB adopters

Digital Audio Broadcasting (DAB), also known as Eureka 147, has been under development since the early
eighties and has been adopted by around 20 countries worldwide. It is based around the MPEG-1 Audio Layer
II audio codec, and its rollout has been co-ordinated by the WorldDMB. DAB receivers are selling well in some
markets.

WorldDMB announced in a press release in November 2006 that DAB would be adopting the HE-AACv2
audio codec, which is also known as eAAC+. Also being adopted are the MPEG Surround format and
stronger error-correction coding called Reed-Solomon coding.[2] The update has been named DAB+.
Receivers that support the new DAB standard began being released during 2007, with firmware updates
available for some older receivers.

DAB and DAB+ cannot be used for mobile TV because they do not include any video codecs. The DAB-related
standards Digital Multimedia Broadcasting (DMB) and DAB-IP are suitable for both mobile radio and TV
because they have MPEG-4 AVC and WMV9 respectively as video codecs. However, a DMB video sub-channel
can easily be added to any DAB transmission, as DMB was designed from the outset to be carried
on a DAB subchannel. DMB broadcasts in Korea carry conventional MPEG-1 Layer II DAB audio services
alongside their DMB video services.


The United States has opted for a proprietary system called HD Radio(TM) technology, a type of in-band on-channel
(IBOC) technology. Transmissions use orthogonal frequency-division multiplexing, a technique
which is also used for European terrestrial digital TV broadcast (DVB-T). HD Radio technology was
developed and is licensed by iBiquity Digital Corporation. It is widely believed that a major reason for HD
Radio technology is to offer some limited digital radio services while preserving the relative "stick values"
of the stations involved and to ensure that new programming services will be controlled by existing
licensees.

The FM digital schemes in the U.S. provide audio at rates from 96 to 128 kilobits per second (kbit/s), with
auxiliary "subcarrier" transmissions at up to 64 kbit/s. The AM digital schemes have data rates of about 48
kbit/s, with auxiliary services provided at a much lower data rate. Both the FM and AM schemes use lossy
compression techniques to make the best use of the limited bandwidth.
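To put those rates in perspective, a back-of-the-envelope conversion (an illustrative calculation, not a figure from the source) shows the data volumes such channels carry per hour, and hence why lossy compression is essential within the limited bandwidth:

```python
def kbps_to_mbytes_per_hour(kbps):
    """Convert a bit rate in kbit/s to megabytes per hour of programming."""
    return kbps * 1000 * 3600 / 8 / 1e6

print(kbps_to_mbytes_per_hour(128))  # 57.6 MB/hour at the top FM digital rate
print(kbps_to_mbytes_per_hour(48))   # 21.6 MB/hour for the AM digital scheme
```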

Lucent Digital Radio, USA Digital Radio (USADR), and Digital Radio Express commenced tests in 1999
of their various schemes for digital broadcast, with the expectation that they would report their results to
the National Radio Systems Committee (NRSC) in December 1999.[3] Results of these tests remain unclear,
which in general describes the status of the terrestrial digital radio broadcasting effort in North America.
Some terrestrial analog broadcast stations are apprehensive about the impact of digital satellite radio on
their business, while others plan to convert to digital broadcasting as soon as it is economically and
technically feasible.

While traditional terrestrial radio broadcasters are trying to "go digital", most major US automobile
manufacturers are promoting digital satellite radio. HD Radio technology has also made inroads in the
automotive sector with factory-installed options announced by BMW, Ford, Hyundai, Jaguar, Lincoln,
Mercedes, MINI, Mercury, Scion, and Volvo. Beyond the U.S., commercial implementation of HD Radio
technology is gaining momentum around the world.[4]

Satellite radio is distinguished by its freedom from FCC censorship in the United States, its relative lack of
advertising, and its ability to allow people on the road to listen to the same stations at any location in the
country. Listeners must currently pay an annual or monthly subscription fee in order to access the service,
and must install a separate security card in each radio or receiver they use.

Ford and Daimler AG are working with Sirius Satellite Radio, previously CD Radio, of New York City,
and General Motors and Honda are working with XM Satellite Radio of Washington, D.C. to build and
promote satellite DAB radio systems for North America, each offering "CD quality" audio and about a
hundred channels.

Sirius Satellite Radio launched a constellation of three Sirius satellites during the course of 2000. The
satellites were built by Space Systems/Loral and were launched by Russian Proton boosters. As with XM
Satellite Radio, Sirius implemented a series of terrestrial ground repeaters where satellite signal would
otherwise be blocked by large structures including natural structures and high-rise buildings.

XM Satellite Radio has a constellation of three satellites, two of which were launched in the spring of 2001,
with one following later in 2005. The satellites are Boeing (previously Hughes) 702 comsats, and were put
into orbit by Sea Launch boosters. Back-up ground transmitters (repeaters) will be built in cities where
satellite signals could be blocked by big buildings.

The FCC has auctioned bandwidth allocations for satellite broadcast in the S band range, around 2.3 GHz.

The perceived wisdom of the radio industry is that the terrestrial medium has two great strengths: it is free
and it is local. Satellite radio is neither of these things; however, in recent years, it has grown to
make a name for itself by providing uncensored content (most notably, the crossover of Howard Stern from
terrestrial radio to satellite radio) and commercial-free, all-digital music channels that offer similar genres
to local broadcast favorites.


• It must be noted that digital radio has a limited listening distance from the tower site. FCC rules
currently allow the digital signal a maximum of 10% of the power of the corresponding US analog
signal. "There are still some concerns that HD Radio on FM will increase interference between
different stations even though HD Radio at the 10% power level fits within the FCC spectral mask."
"HD Radio" is only 2 channels in the USA, side by side with analog stations: HD channel 1 may be on
93.2 FM, the analog station on 93.3, and HD channel 2 on 93.4 FM. Differing stations are
multicasting on different frequencies, respectively.

• Also note that "HD Radio" is digital radio, but is not "high definition" as most of the US
population thinks. "HD" stands for "Hybrid Digital."

In the United Kingdom, 32.1% of the population own a DAB digital radio set.[5] The UK currently has the
world's biggest digital radio network, with 103 transmitters, two nation-wide DAB ensembles and 48 local
and regional DAB ensembles, broadcasting over 250 commercial and 34 BBC radio stations; 51 of these
stations are broadcast in London. However, the audio quality on DAB is lower than on FM, and some areas
of the country are not covered by DAB. To overcome this, the government intends to migrate the AM and
FM analogue services to digital in 2015. Digital radio stations are also broadcast on digital television
platforms, Digital Radio Mondiale on medium wave and shortwave frequencies as well as internet radio;
41% of digital radio users listen to digital radio through a television platform.[6]

Australia commenced regular digital audio broadcasting using the DAB+ standard in May 2009, after many
years of trialing alternative systems. Normal radio services operate on the AM and FM bands, as well as
four stations (ABC and SBS) on digital TV channels. The services are currently operating in five state
capital cities (Adelaide, Brisbane, Melbourne, Perth and Sydney) and are under trial in other capitals and
regional centers.

Japan has started terrestrial sound broadcasting using ISDB-TSB, and MobaHo! 2.6 GHz satellite sound
digital broadcasting.

On 1 December 2005 South Korea launched its T-DMB service, which includes both television and radio
stations. T-DMB is a derivative of DAB with specifications published by ETSI. More than 110,000
receivers were sold in the first month alone in 2005.

Digital radio is now being provided to the developing world. A satellite communications company named
WorldSpace is setting up a network of three satellites, including "AfriStar", "AsiaStar", and "AmeriStar",
to provide digital audio information services to Africa, Asia, and Latin America. AfriStar and AsiaStar are
in orbit. AmeriStar cannot be launched from the United States, as WorldSpace transmits on the L-band and
would interfere with US military use of that band.

Each satellite provides three transmission beams that can support 50 channels each, carrying news, music,
entertainment, and education, and including a computer multimedia service. Local, regional, and
international broadcasters are working with WorldSpace to provide services.

A consortium of broadcasters and equipment manufacturers is also working to bring the benefits of digital
broadcasting to the radio spectrum currently used for terrestrial AM radio broadcasts, including
international shortwave transmissions. Over seventy broadcasters are now transmitting programs using the
new standard, known as Digital Radio Mondiale (DRM), and commercial DRM receivers are available.
DRM's system uses the MPEG-4 based standard aacPlus to code the music and CELP or HVXC for speech
programs. At present these are priced too high to be affordable by many in the third world, however.

Low-cost DAB radio receivers are now available from various Japanese manufacturers, and WorldSpace
has worked with Thomson Broadcast to introduce a village communications center known as a Telekiosk to
bring communications services to rural areas. The Telekiosks are self-contained and are available as fixed
or mobile units.


Two-way digital radio standards

• Digital cellular telephony:


o GSM
o UMTS (sometimes called W-CDMA)
o TETRA
o IS-95 (cdmaOne)
o IS-136 (D-AMPS, sometimes called TDMA)
o IS-2000 (CDMA2000)
o iDEN
• Digital Mobile Radio:
o Project 25 a.k.a. "P25" or "APCO-25"
o TETRA
o NXDN
• Wireless networking:
o Wi-Fi
o HIPERLAN
o Bluetooth
o DASH7
o ZigBee
• Military radio systems for Network-centric warfare
o JTRS (Joint Tactical Radio System- a flexible software-defined radio)
o SINCGARS (Single channel ground to air radio system)
• Amateur packet radio:
o AX.25
• Digital modems for HF:
o PACTOR
• Satellite radio:
o Satmodems
• Wireless local loop:
o Basic Exchange Telephone Radio Service
• Broadband wireless access:
o IEEE 802.16

References

1. ^ Digital Broadcast - bringing the future to you
2. ^ http://www.worlddab.org/upload/uploaddocs/WorldDMBPress%20Release_November.pdf
3. ^ Behrens, Steve. "Field testing resumes for radio's digital best hope." Current, Aug. 16, 1999.
Available at http://www.current.org/tech/tech915r.html
4. ^ http://www.ibiquity.com/automotive
5. ^ Plunkett, John (2009-05-07). "Rajars: More than a third of UK is now listening to digital radio".
The Guardian. http://www.guardian.co.uk/media/2009/may/07/rajars-digital-radio. Retrieved
2009-05-07.
6. ^ Oatts, Joanne (2007-05-10). "Digital radio owners up 43%". Digital Spy.
http://www.digitalspy.co.uk/radio/a46373/digital-radio-owners-up-43-percent.html. Retrieved
2007-05-12.



Nanotubes (CNTs, SWNTs, DWNTs, MWNTs, TWNTs)


Nano-tube Synonyms: CNTs, carbon nano-tube, boron nitride nano-tube, BNNTs, halloysite
nanotube, buckytubes, C-60, buckminster fullerene, nano-tori, nano-torus, nano-bud, nano-onions,
single walled nano-tube, SWNTs, double walled nano-tube, DWNTs, multi walled nano-tube,
MWNTs, thin walled nanotubes, TWNTs, short nanotubes, conductive nanotubes, purified
nanotubes, industrial grade nanotubes,

Nano-tube General Descriptions:

a) Electrical conductivity -- probably the best conductor of electricity on a nano-scale level that can ever be possible.

b) Thermal conductivity -- comparable to diamond along the tube axis.

c) Mechanical -- probably the stiffest, strongest, and toughest fiber that can ever exist.

d) Chemistry of carbon -- can be reacted and manipulated with the richness and flexibility of other carbon molecules. Carbon is the basis of most materials we use every day.

e) Molecular perfection -- essentially free of defects.

f) Self-assembly -- strong van der Waals attraction leads to spontaneous roping of many nanotubes. Important in certain applications.


Nanotube Chemical Properties Available:

a) Boron nitride nanotubes

b) Carbon nanotubes

c) Graphitized multi walled carbon nanotubes

d) OH functionalized carbon nanotubes

e) COOH functionalized carbon nanotubes

f) Industrial grade carbon nanotubes

g) Purified carbon nanotubes

h) Conductive nanotubes

i) Halloysite nanotubes

j) Inorganic nanotubes

k) Silicon nanotubes

Nanotube Physical Tube Structures Available:

a) SWNTs (Single walled nanotubes)

b) DWNTs (Double walled nanotubes)

c) MWNTs (Multi walled nanotubes)

d) TWNTs, (Thin walled carbon nanotubes)

e) Short Nanotubes

f) Industrial grade nanotubes

g) "Armchair" nanotubes

h) "Zigzag" nanotubes


i) Chiral armchair-zigzag nanotubes

Nanotube Potential Market Applications:

*Date: 15 NOV 2009: "New functionalised nanotube applications will come onto the market in the
next few years that will greatly increase global revenues to $1.4 billion plus by 2015, driven
mainly by the needs of the electronics and data storage, defence, energy, aerospace and
automotive industries. As commercial-scale production ramps up, the significant decrease in cost
for these high-performance materials will also drive new applications. Up to now, most carbon
nanotube production has been on a pilot-scale level; however, scale-up of production by large
multinationals such as Arkema, Bayer Materials Science and Showa Denko, and access to cheaper
nanotubes from Russia and China, will greatly increase commercialization opportunities."

*Flat panel displays, conductive plastics, super composite fibers, superconductors, and field storage batteries

*Micro-electronics / semiconductors

*Conducting Composites

*Controlled Drug Delivery/release

*Artificial muscles

*Super capacitors

*Batteries

*Field emission flat panel displays


*Field Effect transistors and Single electron transistors

*Nano lithography

*Nano electronics

*Doping

*Nano balance

*Nano tweezers

*Data storage

*Magnetic nanotube

*Nano gear

*Nanotube actuator

*Molecular Quantum wires

*Hydrogen Storage

*Noble radioactive gas storage

*Solar storage

*Waste recycling

*Electromagnetic shielding

*Dialysis Filters

*Thermal protection

*Nanotube reinforced composites

*Reinforcement of armor and other materials

*Reinforcement of polymer


*Avionics

*Collision-protection materials

*Fly wheels

Nanotube Packaging:

To standard sa

Nanotube TSCA (SARA Title III) Status:

Listed. For further information please call the E.P.A. at 1.202.554.1404.

Nanotube CAS Numbers:

a) 7440-44-0 (activated carbon)

b) 10043-11-5 (boron nitride)

c) 7440-21-3 (silicon)

Nanotube Safety Notice:

a) Before using, user shall determine the suitability of the product for its
intended use, and user assumes all risk and liability whatsoever in
connection therewith.

b) Nanotubes might be hazardous to your health.

c) Please visit this CDC / NIOSH "Safe Nanotechnology" Informational Exchange


5.6. HYBRID FUSION ENERGY GENERATION

Here we use the same type of system, resulting in a different type of technology, prevalent in many places,
known as hybrid technology. The fast neutrons that can be extracted from the DUO TRIAD TOKAMAK
COLLIDER (DTTC) HUB can be used in a fission chamber, where those neutrons are needed; for the fusion
process itself, fast neutrons are waste products that lead to heating of the plasma chamber. They can therefore
be collected through a neutron-absorbing blanket and channeled to uranium- or plutonium-based
nuclear/atomic reactors.
Nuclear fusion-fission hybrid could contribute to carbon-free energy future
January 27th, 2009
This illustration shows how a compact fusion-fission hybrid would fit
into a nuclear fuel cycle. The fusion-fission hybrid can use fusion
reactions to burn nuclear waste as fuel (people are shown for scale). It
would produce energy and could be used to help destroy the most
toxic, long-lived waste from nuclear power. The hybrid would be
made possible by a crucial invention from physicists at the University
of Texas at Austin called the Super X Divertor. Credit: Angela Wong
Physicists at the University of Texas at Austin have designed a
new system that, when fully developed, would use fusion to
eliminate most of the transuranic waste produced by nuclear
power plants.
The invention could help combat global warming by making nuclear
power cleaner and thus a more viable replacement of carbon-heavy
energy sources, such as coal.
"We have created a way to use fusion to relatively inexpensively
destroy the waste from nuclear fission," says Mike Kotschenreuther,
senior research scientist with the Institute for Fusion Studies (IFS)
and Department of Physics. "Our waste destruction system, we
believe, will allow nuclear power, a low-carbon source of energy, to
take its place in helping us combat global warming."
Toxic nuclear waste is stored at sites around the U.S. Debate
surrounds the construction of a large-scale geological storage site at Yucca Mountain in Nevada, which
many maintain is costly and dangerous. The storage capacity of Yucca Mountain, which is not expected to
open until 2020, is set at 77,000 tons. The amount of nuclear waste generated by the U.S. will exceed this
amount by 2010.
The physicists' new invention could drastically decrease the need for any additional or expanded geological
repositories.
"Most people cite nuclear waste as the main reason they oppose nuclear fission as a source of power," says
Swadesh Mahajan, senior research scientist.
The scientists propose destroying the waste using a fusion-fission hybrid reactor, the centerpiece of which
is a high power Compact Fusion Neutron Source (CFNS) made possible by a crucial invention.
The CFNS would provide abundant neutrons through fusion to a surrounding fission blanket that uses
transuranic waste as nuclear fuel. The fusion-produced neutrons augment the fission reaction, imparting
efficiency and stability to the waste incineration process.
Kotschenreuther, Mahajan and Prashant Valanju, of the IFS, and Erich Schneider of the Department of
Mechanical Engineering report their new system for nuclear waste destruction in the journal Fusion
Engineering and Design.
There are more than 100 fission reactors, called "light water reactors" (LWRs), producing power in the
United States. The nuclear waste from these reactors is stored and not reprocessed. (Some other countries,
such as France and Japan, do reprocess the waste.)

The scientists' waste destruction system would work in two major steps.


First, 75 percent of the original reactor waste is destroyed in standard, relatively inexpensive LWRs. This
step produces energy, but it does not destroy highly radiotoxic, transuranic, long-lived waste, what the
scientists call "sludge."
In the second step, the sludge would be destroyed in a CFNS-based fusion-fission hybrid. The hybrid's
potential lies in its ability to burn this hazardous sludge, which cannot be stably burnt in conventional
systems.
"To burn this really hard to burn sludge, you really need to hit it with a sledgehammer, and that's what we
have invented here," says Kotschenreuther.
One hybrid would be needed to destroy the waste produced by 10 to 15 LWRs.
The process would ultimately reduce the transuranic waste from the original fission reactors by up to 99
percent. Burning that waste also produces energy.
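The two-step arithmetic above can be sketched as follows. The 75 percent LWR step and the "up to 99 percent" overall figure come from the article; the 96 percent hybrid burn-up fraction is an assumed value chosen to reproduce that overall number, and the tonnage is hypothetical.

```python
def remaining_sludge(initial_tons, lwr_fraction=0.75, hybrid_fraction=0.96):
    """Step 1: LWRs destroy 75% of the original waste (producing energy).
    Step 2: the fusion-fission hybrid burns the remaining 'sludge'.
    Returns the tons of transuranic waste left after both steps.
    Note: hybrid_fraction=0.96 is an assumed illustrative value."""
    sludge = initial_tons * (1.0 - lwr_fraction)   # hard-to-burn remainder
    return sludge * (1.0 - hybrid_fraction)        # after hybrid incineration

left = remaining_sludge(1000.0)                    # 10 tons remain of 1000
overall_reduction = 1.0 - left / 1000.0            # 0.99, the "up to 99 percent"
```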
The CFNS is designed to be no larger than a small room, and far fewer of the devices would be needed
compared to other schemes being investigated for similar processes. In combination with the
substantial decrease in the need for geological storage, the CFNS-enabled waste-destruction system would
be much cheaper and faster than other routes, say the scientists.
The CFNS is based on a tokomak, which is a machine with a "magnetic bottle" that is highly successful in
confining high temperature (more than 100 million degrees Celsius) fusion plasmas for sufficiently long
times.
The crucial invention that would pave the way for a CFNS is called the Super X Divertor. The Super X
Divertor is designed to handle the enormous heat and particle fluxes peculiar to compact devices; it would
enable the CFNS to safely produce large amounts of neutrons without destroying the system.
"The intense heat generated in a nuclear fusion device can literally destroy the walls of the machine," says
research scientist Valanju, "and that is the thing that has been holding back a highly compact source of
nuclear fusion."
Valanju says a fusion-fission hybrid reactor has been an idea in the physics community for a long time.
"It's always been known that fusion is good at producing neutrons and fission is good at making energy," he
says. "Now, we have shown that we can get fusion to produce a lot of neutrons in a small space."
Producing an abundant and clean source of "pure fusion energy" continues to be a goal for fusion
researchers. But the physicists say that harnessing the other product of fusion, neutrons, can be achieved in
the near term.
In moving their hybrid from concept into production, the scientists hope to make nuclear energy a more
viable alternative to coal and oil while waiting for renewables like solar and pure fusion to ramp up.
"The hybrid we designed should be viewed as a bridge technology," says Mahajan. "Through the hybrid,
we can bring fusion via neutrons to the service of the energy sector today. We can hopefully make a major
contribution to the carbon-free mix dictated by the 2050 time scale set by global warming scientists."
The scientists say their Super X Divertor invention has already gained acceptance in the fusion community.
Several groups are considering implementing the Super X Divertor on their machines, including the MAST
tokomak in the United Kingdom, and the DIIID (General Atomics) and NSTX (Princeton University) machines in the
U.S. Next steps will include performing extended simulations, transforming the concept into an engineering
project, and seeking funding for building a prototype.
Source: University of Texas at Austin

5.10. APPLICATION IN LIQUID CRYSTAL DISPLAY (LCD) AND ORGANIC LIGHT EMITTING DIODE (OLED)

The application to liquid crystal displays and organic light emitting diodes lies in the use of
nanotechnology, with the Duo Triad Tokomak Collider (DTTC) hub acting as a nano torii cluster hub.

We can enhance the resolution of computer monitor screens as well as that of plasma TVs, and the
confinement time can be reduced while obtaining better resolution. The resolution is 24.75% better than
that of the best computer monitors or plasma TVs presently available. One particular brand of plasma and
LCD TVs claims a resolution of 1:1000000; in this particular case it will be 1:1500000. There are no
blurred images, only a crystal-clear picture that can be viewed from a 172-degree wide angle without any
diminishing of the image at side viewing angles. This entire thing can be done by using nanotechnology and
piezo-electronics.
Advanced Technology

FFD (Feed Forward)


Overdrive (Response Time Compensation)
Double Overdrive
Problems with Overdrive
Response time measurements
ClearMotiv
MagicSpeed / Response Time Acceleration (RTA)
Advanced Motion Accelerator (AMA)
Over Driving Circuit (ODC)
Fast Response LC + Special Driving
Rapid Response / Rapid Motion
Overdrive Panel Case Study (AUO 8ms)

AU Optronics Simulated Pulse Driving Technology (ASPD)

Sony X-Black
Acer CrystalBrite

BenQ Senseye
NEC AmbiBright
Samsung "Magic" Enhancements
Acer eColor Management
LG f-Engine
ColorComp

Dynamic Contrast
LG. Philips Digital Fine Contrast (DFC)
NEC Advanced DVM
APE (AUO Picture Enhancer) Technology
Acer Adaptive Contrast Management (ACM)

Black Frame Insertion (BFI)

FFD (Feed Forward) – In 2001 NEC started developing new technologies for their TV panels.
The idea is based on the fact that the widest colour change is from white to black, and for this change
the maximum voltage is applied to the transistor. NEC’s idea was to apply twice the voltage in half the
time: for example, instead of applying 1V over a time of 20ms, they changed it to applying 2V over a
time of 10ms. This meant that colour change times would theoretically be reduced significantly,
although according to NEC this technique was never applied in practice. The black > white transitions would remain
unaffected as they already had the maximum voltage applied to the transistors. This process is the
principle behind today’s ‘Overdrive’ technology.


Overdrive / Response Time Compensation (RTC) – this technology is based on applying an over-
voltage to the liquid crystals to motivate them into their orientation faster. This process forces them to a
full white (inactive) to black (active) transition first. The crystals can then drop back down to the
required grey level. This is helpful as the rise time of a crystal was always the slowest part (response time
= Tr + Tf). This technology does not help improve the ISO black > white transition much since that
already received the maximum voltage anyway, but transitions from grey > grey are significantly
reduced. The improvements in grey transitions however are helpful in producing a faster panel overall as
these changes have always been slower colour changes in TFT panels and it is important that the
response time is low across the whole range (0 – 255).
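The grey-to-grey overshoot described above can be sketched in code. This is an illustrative model only, not any manufacturer's actual algorithm; the `overdrive_level` function and its `boost` gain are assumptions made for the example.

```python
# Illustrative sketch of overdrive (RTC) on an 8-bit panel (levels 0-255):
# the drive level overshoots the target grey to force the crystals to
# reorient faster, clamped to the panel's physical range.

def overdrive_level(current, target, boost=0.5):
    """Return the level actually driven for a current -> target transition.

    `boost` is a hypothetical gain: the drive level overshoots the target
    by boost * (target - current). Transitions to full black (0) or full
    white (255) already receive the extreme voltage, so clamping means
    they cannot be overdriven any further.
    """
    overshoot = target + boost * (target - current)
    return max(0, min(255, round(overshoot)))

# A mid-grey rise 64 -> 128 is driven harder than the target...
print(overdrive_level(64, 128))   # 160
# ...while a transition to full white cannot overshoot past 255.
print(overdrive_level(0, 255))    # 255
```

This also illustrates why G2G figures improve while the ISO black > white figure does not: only the intermediate transitions have headroom for overshoot.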

Double Overdrive - This was advancement on the traditional overdrive method, and involves applying
overdrive to not only the rise time, but to the fall time as well. This is supposed to improve response time
and overall quality.

Problems with Overdrive

In doing this over-volting, the response time as a whole is reduced, but it can unfortunately leave some
colour trailing due to the intervening state that the pixel is forced to make. There is a certain risk of video
noise being visible on colour masses. Why? When the image is fixed, there is no problem: the pixels
don't change regardless of their values. That's the advantage of LCD. But imagine subtle colour shading.
When a tracking shot in a movie moves through those subtle colors, the pixels have to change from one
value to another, but the colors are really very close. Unfortunately, Overdrive temporarily causes a
much greater variation in the value of the pixel, and since all the pixels don't react in the same way
(certain ones being faster than others), the result is that the viewer sees accentuated video noise. There
may also be some problems with Overdrive being used on TN panels which use dithering. Dithering is
normally invisible to the naked eye if the viewer is far enough away, but Overdrive could amplify the
visual nuisance stemming from the strong brightness escaping from the panel during the Overdrive
period. In real practice, accentuated noise and "overdrive trailing" can be a symptom of poorly controlled
overdrive methods and can vary from one model to another.

One other thing to note for Overdrive (RTC) enabled monitors is that running a TFT outside of its
recommended refresh rate (i.e. not at 60Hz) can lead to a deterioration in the performance of this
technology, and the panel's responsiveness is adversely affected.

“Response Time”, How Do We Measure It Now?

Unfortunately, manufacturers have panels which are on one hand, clearly faster across grey transitions
than previous technologies, but on the other, have panels which have not improved on the black > white
change which is the ISO norm for measuring “response time”. They have instead now started to list their
panels with a response time quoted as being G2G to show that they have made improvements. If a TFT is
listed as a G2G response time, then you can be pretty sure the panel is using some form of ‘Overdrive’.
Remember though, the response time, even if it is quoted as G2G, is still only the fastest recorded
response time for the panel, and some transitions will still be slower.

Overdrive has allowed several panel manufacturers to improve the response times of their products
across grey transitions and there are now some panels available with an as low as 2ms quoted G2G
response time (e.g. Viewsonic VX922) and 1ms G2G (Hyundai S90D). More significantly the use of
overdrive has really improved practical responsiveness in the other panel technologies allowing P-MVA,
PVA and S-IPS equipped models to really offer performance to meet growing gaming needs. Typically
there have been several 'generations' of overdriven panels including (all G2G figures):

• TN Film - 4ms / 3ms, and now 2ms


• P-MVA - 8ms generation
• PVA / S-PVA - 8ms generation initially, but quickly changed to 6ms generation
• S-IPS - 8ms and 6ms, with 5ms now becoming more common.

Further Reading: An in depth look at Overdrive can be found here at X-bitlabs, including reviews of
many of Samsung and Viewsonic's first offering with this technology. An article at BeHardware about
Overdrive can be found here. There is also some information about the technology here at Tom’s
Hardware France.

ClearMotiv

Viewsonic call their overdrive-based enhancement suite ‘ClearMotiv’. Bear in mind that they don’t
manufacture any panels of their own, but claim that the panels they have used have improved response
times thanks to several technological changes which they have made to the electronics and hardware of
the monitors. The various technologies listed below may be used individually or in combination; it can vary
from one screen to another. The technologies available include:

1. Lower viscosity of the liquid crystals
2. Reducing the gap between cells by 30%, reported to improve response time by 50%
3. Impulse Driving Method - applying too much voltage at the start, but then reducing it to the
correct level, to kick-start the crystals
4. Advanced Overdrive - they claim this also improves black > white and not just grey changes,
but this is debatable.
5. Backlight shuttering - blinking the backlight off briefly during the liquid crystal cell transition.
Used only in LCD TVs at this stage. Designed to reduce perceived motion blur caused by the
human eye.
6. Black Frame Insertion - similar to backlight shuttering, but involves inserting a black frame to
hide the liquid crystal cell transition. Designed to reduce perceived motion blur caused by the
human eye.
7. Amplified Impulse Technology – This was originally listed in Viewsonic's documentation as a
feature in the electronics of the TFT which dynamically controls the amount of Overdrive being
used by the panel. Their current whitepapers suggest it is more closely linked to their Impulse
Driving Method as listed above.

Have a look here for Viewsonic’s documentation about ClearMotiv:

http://www.fastresponsetime.com/en/guide.htm


MagicSpeed / Response Time Acceleration (RTA)

Samsung’s own version of RTC / Overdrive technology. They always like to have their very own version
of technologies, and to be fair, they are one of the main panel manufacturers in the TFT market. There is
very little information available about the technology apart from the fact that it is designed to boost grey
transition response times. At the end of the day, this is very similar to Overdrive and, as far as I know,
works on the same principle. Some models feature an option available through the OSD to disable RTA,
and this can show some noticeable differences in practice between active and inactive states.

Advanced Motion Accelerator (AMA)

BenQ's name for overdriven panels. Where the models also feature Black Frame Insertion (see below),
they are referred to as AMA-Z.


Over Driving Circuit (ODC)

LG.Display calls their overdrive technology ODC and has used it to boost response times on both their
TN Film and S-IPS panel technologies. (Link: LG.Philips page)

Fast Response LC + Special Driving

This is the name Chi Mei Optoelectronics give to their overdrive technology; it is again designed to
"reduce residual image tail", and CMO state this will reduce or even eliminate motion blur.


Rapid Response / Rapid Motion

NEC's own label for overdriven based displays offering improved grey to grey transitions.


Case Study – AU Optronics (M190EN03 V0) 8ms P-MVA panel


Dell 1905FP vs. Viewsonic VP191B-2

While fundamentally the Dell and the Viewsonic are based on the same AU Optronics panel, the
electronics applied by the two manufacturers to utilize the panel are different. Performance of the two
monitors will therefore be a little different, but don’t forget that there will be many similarities because
of the mutual use of the AUO panel. Viewsonic have implemented their ClearMotiv technology into the
VP191B which offers not only the Overdrive which AUO have applied to the panel, but adds most
importantly the AIT (Amplified Impulse Technology). This dynamically controls the amount of
Overdrive used and is said to help reduce blurring of the image even more.

This is apparent from user observations of the two monitors. The Dell, using no extra features, just the
overdriven panel from AUO can show some slight trailing of colors in fast paced gaming. This is because
of the intervening state which the liquid crystals are forced to enter as part of the overdrive technology
(see above). This isn’t major, but the AIT used by Viewsonic has helped to reduce this a little. So
although the panels are the same, the electronics and hardware behind the panel can vary.

AU Optronics Simulated Pulse Driving Technology (ASPD)

AUO's 'Simulated Pulsed Driving' (ASPD) technology is designed to solve the issue of motion blur in
liquid crystal displays. It simulates impulse-type displays by adjusting pixel driving and a scanning
backlight to reach CRT-like image quality in motion picture response time (MPRT). The technology can
greatly reduce motion blur, and enables image performance to reach optimal levels at an equivalent 4ms
grey to grey (8ms MPRT). It is also known as one of the few such technologies ready for mass
production, and can be applied to both WXGA (1366x768) and Full HD (1920x1080) resolutions.

Sony X-Black Technology


Sony's X-Black / X-Brite technology was developed first of all for laptop panels, which meant that
once they started to incorporate it into desktop displays, they could make the casing and bezels very
small and stylish. They've incorporated dual fluorescent lamps to light the displays and to help achieve
improved brightness over regular LCD panels. This has helped provide some impressive contrast ratios
too (including 1000:1), and the added brightness is being marketed as improving movie playback.

Sony have also researched a technique they've named "reflection reduction technology", in which several
layers of coating are applied instead of using traditional Anti Reflective coating (which gives you the
matte finish and can lead to some loss in colour, noticeably black, depth). The thickness of each of these
new layers Sony use is precisely calculated at one-quarter of the wavelength of light – so very thin! The
effect is to cancel out reflections before they get to the front of the display. They've improved the colour
reproduction (or so the marketing would certainly have you believe) by ditching the old AR style coating,
and the improved brightness and contrast have helped improve colour depth. The removal of the AR
coating from the panels has also helped them improve image sharpness according to their marketing.

Sony also claim to have improved the viewing angles of their displays by adding a special film coating
filter to the front of the panel, which helps reduce the restrictions on viewing angles caused by the
inability of the liquid crystals to respond uniformly. This is perhaps the biggest problem with TN film
panels today, as while colour reproduction has improved significantly as has response rate of pixels,
viewing angles have deteriorated. Panels like the 8ms Samsung TN film panel (in Hyundai L90D+ etc)
are a good example of this trend. With these new improvements by Sony to increase viewing angles, X-
Black certainly sounds promising on paper.

Sony also claim to have improved the graphics processor used by the panel, which addresses commands
from the graphics card and converts them into commands to the liquid crystals. They claim the hardware
and software improvements they have produced for the graphics processor have allowed resizing of
images to be improved, as well as colorimetric processing advances.

There are a lot of varying opinions on the X-Black technology and its reflective nature. Some people say
it is fine, but a fair few say that it is too reflective. I would certainly be wary of it, and definitely try and
see an X-Black screen or laptop first to see if you think you would be ok with it. This has really been the
main gripe with the X-Black technology panels, but be wary of the marketing side of their displays as
well. While there are many claimed improvements to models using this technology, the advancements
may not be as fantastic as they would have you believe.

Acer CrystalBrite


Acer's reflective glossy screen coating is referred to as CrystalBrite and appears on some of their desktop
monitors as well as their laptops. The technology offers an ultra-fine, highly polished coating which
reportedly allows superior filtering of light and quicker image building. It is marketed as reducing
reflection from internal and external light sources, and as improving colors and image quality. This includes
more vibrant and brighter images via backlight diffusion reduction, as well as superior contrast with
minimal ambient light scattering.

Acer CrystalBrite Whitepaper

BenQ Senseye


The marketing for Senseye says: “A pure digital image enhancement technology that automatically and
dynamically improves image quality. And a simple promise of higher definition visuals that are deeper,
richer and clearer. Experience Senseye technology today – and come one step closer to the true power of
the human eye.”

The idea behind the technology is to make the colors richer, and more vivid; and the image quality
sharper and clearer. The original image signal is processed through three engines:

• Contrast Enhancement Engine (CEE) – supposedly improves the contrast ratio making the
bright areas brighter, and the darker areas darker
• Colour Management Engine (CME) – adjusts red, blue and green colour depths and
supposedly improves skin colour tones
• Sharpness Enhancement Engine (SEE) – sharpens outlines and helps avoid blurring of edges


In reality, the Senseye products merely offer a series of presets which the user can select, such as
'photo', 'movies', 'user' etc., as well as a sensor chip designed to automatically alter the presets when
required. The colors and brightness / contrast are set for each selection, with the 'user' option allowing
you to change them all manually.

The official information about the technology can be found here: http://www.benqsenseye.com/

NEC AmbiBright

Similar to BenQ's Senseye technology, this feature automatically adjusts the backlight depending on the
brightness of ambient lighting conditions. For example, if the sensor detects the ambient lighting
becoming darker, it reduces the backlight appropriately, which helps provide optimal readability and
reduce eyestrain. Further, if desired, you can set the display to automatically enter a power-saving mode
when the ambient lighting falls below a predetermined value (i.e. when office lights are shut off at the
end of the day), which can significantly reduce energy expenses. When you consider the number of
monitors used on trading floors and other display-heavy environments, this brightness function can
contribute significantly to a lower total cost of ownership.
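The behaviour described above can be sketched roughly as follows. The lux range (5–500), the 20–100% backlight span, the power-save threshold and the `backlight_for_ambient` function are all invented for illustration; NEC's actual control loop is not described here.

```python
# Hypothetical sketch of AmbiBright-style behaviour: the backlight tracks
# ambient light, and the display enters a power-saving mode when the
# room falls dark (e.g. office lights shut off at the end of the day).

def backlight_for_ambient(ambient_lux, power_save_below=5):
    """Return a backlight level (percent) for the measured ambient light,
    or "power-save" when the ambient level drops below the threshold."""
    if ambient_lux < power_save_below:
        return "power-save"
    # Scale linearly from 20% at 5 lux up to 100% at 500 lux and above.
    fraction = min(max((ambient_lux - 5) / 495, 0.0), 1.0)
    return round(20 + 80 * fraction)

print(backlight_for_ambient(500))  # 100
print(backlight_for_ambient(5))    # 20
print(backlight_for_ambient(2))    # power-save
```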

Samsung “Magic” Enhancements:


• MagicTune - Image quality can be perfected quickly, accurately and easily using this software.
Stored on the desktop it enables fine image adjustments, and colour calibration functionality not
available using traditional menu systems. Perfect for photographers, designers and motion
graphic artists, MagicTune provides user-friendly on-screen image control. This is effectively a
small resource friendly application to adjust user settings. Power Strip is also an equivalent
piece of software to achieve this. The MagicTune software and further information can be found
on Samsung’s site here.
• MagicColour - This intelligent colour enhancement system enhances selective colors, such as
skin tones, making it ideal for multimedia applications, surfing the web, watching DVDs or
manipulating images from a digital camera. It is said to enhance skin tone colour, and make
other colors more vivid. It is essentially part of the screen’s presets, which alters the input signal
depending on the use
• MagicContrast - Ensures that the SyncMaster range of monitors deliver the very highest quality
image. As a result, the SyncMaster range boasts a market leading contrast ratio of 1000:1. This
is just a marketing term really, not a technology as such. The Samsung screens which offer high
contrast ratios are labeled with this term and should offer deep blacks and bright whites
• MagicBright – Provides a choice of five brightness settings designed to optimize different
content. The brightness of the monitor can now be simply adjusted to Game, Movie, Sports, and
Internet or Text modes. So, whether you're working, relaxing or surfing the web, the brightness
level will be adjusted accordingly to make it a much more enjoyable experience. This is a series
of monitor presets similar to BenQ Senseye
• MagicRotate – Software which will automatically switch the screens alignment when the
monitor is rotated between landscape and portrait modes. More info and downloads available
here
• MagicSpeed - see above
• MagicStand - uses a unique dual hinge to ensure the screen is perfectly positioned to provide
you with a comfortable viewing position. Now the screen can be moved vertically, swiveled and
tilted to suit your own preferences
• MagicNet – This software is the ultimate way to stream content to multiple screens across a
LAN, a single computer with MagicNet software can be used to control and deliver unique
content to multiple displays


Acer eColor Management

This is Acer's name for their selection of monitor preset modes for variations in brightness, contrast and
colors. These options are available on selected models via the 'Empowering Key' which gives the user
access to the Acer eColor Management OSD interface. According to the whitepaper, eColor
management enables control of the following parameters, depending on the preset chosen:
• Colour tracking technology - an advanced colour temperature adjustment, stabilizing screen
output
• YUV colour space conversion - from RGB, allowing luminance and chromaticity to be altered
independently
• Uniform-brightness - boosts the output of the display so that dark areas remain visible,
preventing colour wash-out even under bright ambient light or from a distance
• Fine contrast - allows intensity of bright or colored areas to be increased without causing wash-
out of dark areas
• Adaptive gamma - allows effective brightness and contrast levels of the monitor to be adjusted
scene by scene, depending on the content. Similar to dynamic contrast control
• Optimized sharpness
• Independent hue
• Ultra-saturation
• Adaptive colour


Preset modes available in this suite include standard, text, graphics, movie and user. Ultimately, these
remain the standard preset modes you would see from a lot of modern screens, and may or may not be of
much practical use, depending on the individual.

Acer eColour Management Whitepaper

LG f-Engine

LG's f-Engine forms part of their monitor range's OSD and offers a series of preset modes for adapting
colour and brightness to meet the varying needs of the user. This gives access to settings for brightness,
ACE (Adaptive Color and Contrast Enhancement) and RCM (Real Color Management). RCM provides
the following settings: 0 = RCM disabled, 1 = enhancement of green, 2 = enhancement of skin tones, 3 =
overall color enhancement. One can quickly see how each of these changes will affect the image,
since a split screen is shown: the regular color picture is displayed on the right side, while the left side
lets you preview the effect of the f-Engine settings on the displayed picture.

ColorComp

This uniformity compensation and correction system aims to reduce any screen uniformity errors to
almost unnoticeable levels. ColorComp works by applying a digital correction to each pixel on the screen
to compensate for differences in colour and luminance. Each display is individually characterized during
production using a fully automated system which measures hundreds of points across the screen at
different grey levels. These measurements are used to build a three-dimensional correction matrix for the
display screen which is then stored inside the display. This data is used to compensate for the screen
uniformity, not only as a function of position on the display screen, but also as a function of grey level. If
desired, the ColorComp correction can be turned off in order to maximize the screen brightness.
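A much-simplified sketch of this kind of per-pixel correction is shown below. ColorComp's real correction matrix is three-dimensional (it also varies with grey level); this example collapses it to a single 2-D gain map, and the function name and all values are made up for illustration.

```python
# Simplified sketch of a uniformity correction: a coarse per-region gain
# matrix (measured during production) is upsampled to the frame size and
# applied multiplicatively to every pixel.

import numpy as np

def apply_uniformity_correction(frame, gain_matrix):
    """frame: 2-D array of grey levels (0-255).
    gain_matrix: coarse (rows x cols) multiplicative corrections,
    expanded to the frame size by nearest-neighbour for simplicity."""
    h, w = frame.shape
    gh, gw = gain_matrix.shape
    # Map every pixel to its nearest measured correction point.
    rows = np.arange(h) * gh // h
    cols = np.arange(w) * gw // w
    gains = gain_matrix[np.ix_(rows, cols)]
    return np.clip(frame * gains, 0, 255)

# A 4x4 frame whose top-left region measures too dim gets boosted there.
frame = np.full((4, 4), 200.0)
gains = np.array([[1.25, 1.00], [1.00, 1.00]])
corrected = apply_uniformity_correction(frame, gains)
print(corrected[0, 0], corrected[3, 3])  # 250.0 200.0
```

The clip step mirrors the trade-off mentioned in the text: correction eats into headroom, which is why turning ColorComp off maximizes raw brightness.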

Dynamic Contrast

Several manufacturers have introduced dynamic contrast controls to their monitors, which are designed
to improve black and white levels and the contrast of the display on the fly, in certain conditions. It is
supposed to help colors look more vivid and bright, make text look sharper and enhance the extreme ends of
the colour scale, making blacks deeper and whites brighter. This is achieved by adjusting the brightness
of the backlighting rather than any adjustments at the matrix / panel level. The backlighting can be made
less intensive in dark scenes, to make them even darker, and more intensive, up to the maximum, in
bright scenes, to make them even brighter.

The official numbers for dynamic contrast are arrived at in the following manner: the level of white is
measured at the maximum of backlight brightness and the level of black is measured at its minimum. So
if the matrix has a specified contrast ratio of 1000:1 and the monitor’s electronics can automatically
change the intensity of backlight brightness by 300%, the resulting dynamic contrast is 3000:1. Of
course, the screen contrast – the ratio of white to black – is never higher than the monitor’s static
specified contrast ratio at any given moment, but the level of black is not important for the eye in bright
scenes and vice versa. That’s why the automatic brightness adjustment in movies is indeed helpful and
creates an impression of a monitor with a greatly enhanced dynamic range.
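The arithmetic above is simple enough to check directly; this tiny sketch just reproduces the worked example from the text (the function name is ours).

```python
# Dynamic contrast as specified: white measured at maximum backlight
# brightness, black at minimum, so the quoted ratio is the static panel
# contrast multiplied by the backlight's modulation range.

def dynamic_contrast(static_ratio, backlight_range):
    """static_ratio: panel's static contrast (e.g. 1000 for 1000:1).
    backlight_range: factor by which the electronics can vary the
    backlight brightness (e.g. 3.0 for a 300% swing)."""
    return static_ratio * backlight_range

# The example from the text: a 1000:1 matrix with a 300% backlight swing.
print(f"{dynamic_contrast(1000, 3.0):.0f}:1")  # 3000:1
```

Note that at any single instant the on-screen contrast never exceeds the static ratio; the dynamic figure only describes white and black measured at different backlight settings.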

The downside is that the brightness of the whole screen is changed at once. In scenes that contain both
light and dark objects in equal measure, the monitor will just select some average brightness. Dynamic
contrast doesn’t work well on dark scenes with a few small, but very bright objects (like a night street
with lamp-posts) – the background is dark, and the monitor will lower brightness to a minimum,
dimming the bright objects as a consequence. Ideally this kind of enhancement shouldn't be used in
office work since it can prove distracting or problematic for colour work. However, movies and
sometimes gaming can offer some impressive improvements thanks to such technologies.

As ever, different manufacturers have their own versions of these technologies including those discussed
below.

Digital Fine Contrast (DFC)

On its initial release, LG.Philips' DFC technology was marketed as being able to improve the contrast
ratio from a typical level of 700:1 to a massive 1600:1! It is supposed to help colors look more vivid and
bright, make text look sharper and enhance the extreme ends of the colour scale, making blacks deeper and
whites brighter. This is a great benefit to gamers who have issues seeing enemies lurking in the shadows,
and for photo / cinema users who want to improve colour quality. This technology is called the Digital
Fine Contrast engine (DFC) and consists of 3 elements:

o Auto Contents Recognition (ACR) - detects the type of content being viewed and decides how
to use the contrast adjustment engine to make the most of it. This is dependent on the mode
selection in the monitor's OSD, choosing between settings like 'Movie', 'Text', 'Games' etc. For
example, in 'Movie' mode, the DFC is enhanced for a maximum brightness and in 'Picture'
mode colors are deepened.
o Digital Contrast Enhancer (DCE) - This reduces black luminance.
o Digital Contrast Mapper (DCM) - Displays the image while ensuring that the enhanced
contrast is optimized.

The DFC is based on an automatic contrast booster controlled via a Look-Up Table (LUT), which is
reported to alter the gamma of the pixels, darkening dark areas and increasing the brightness of the brighter
areas. The CCFL backlight tubes have also been replaced by a new generation which is capable of a
wider gamut.
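A LUT-based contrast boost of the kind described can be sketched as below. The S-curve shape, the `strength` parameter and the `contrast_boost_lut` function are assumptions for the example, not LG.Philips' actual gamma mapping.

```python
# Illustrative LUT-based contrast boost: remap the 0-255 grey range with
# an S-curve so dark levels get darker and bright levels brighter, while
# the black and white endpoints stay fixed.

def contrast_boost_lut(strength=1.3):
    """Build a 256-entry look-up table: levels below mid-grey are pushed
    down, levels above it are pushed up, endpoints are unchanged."""
    lut = []
    for level in range(256):
        x = level / 255.0
        if x < 0.5:
            y = 0.5 * (2 * x) ** strength            # darken lower half
        else:
            y = 1 - 0.5 * (2 * (1 - x)) ** strength  # brighten upper half
        lut.append(round(y * 255))
    return lut

lut = contrast_boost_lut()
print(lut[0], lut[255])              # endpoints unchanged: 0 255
print(lut[64] < 64, lut[192] > 192)  # True True
```

In hardware, each incoming pixel value would simply be replaced by its LUT entry, which is why such a boost costs essentially nothing at display time.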

Advanced DVM

NEC features their dynamic contrast on some of their models including the NEC LCD20WGX2.
Ultimately this technology runs under the same principle as DFC, but under a different name.

APE (AUO Picture Enhancer) Technology


AUO Picture Enhancer (APE) Technology integrates input image data management and a dynamic
backlight control solution. The intrinsic image processing circuit can dynamically adjust the
contrast, sharpness, hue, color temperature, and color saturation to accommodate the particular image.
Non-linear image processing can accommodate changes in the dynamics of human perception, and is
ideally used to overcome an existing problem with LCD TVs, where motion pictures tend to lose their
accuracy during darker states. This technology provides a vivid and sharp image, retrieves natural
colors, and enhances color saturation, details in gray levels, and contrast ratio. With AUO’s Image
Processing Technology, customers can better enjoy the details of dark and night scenes in movies.

Features:
• Sharpness Enhancement: Increase the hi-frequency signal to highlight detail information and
provide a sharp picture
• Color Saturation: Enlarge the gamut of input video slice to maximize the utilization of panels
and achieve superior visual stimulus
• Hue Refinement: By separating color space into several independent areas, all colors can be
modified separately without disturbing relative colors.
• Dynamic Backlight Dimming: This unique approach provides backlight modulation to relieve
light leakage, and hence achieve a high contrast ratio of up to 3000:1. Moreover, the latest High
Dynamic Contrast with LED utilizes a locally adjustable LED backlight to enhance the contrast
ratio up to 10,000:1. The overall image quality is improved while saving 50% of power
consumption on average.
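Dynamic backlight dimming of the kind described above can be sketched in a few lines. This is an illustrative model only, not AUO's actual algorithm: the backlight is scaled to the frame's peak gray level and the pixel data is boosted to compensate, so dark frames get deeper blacks and lower power draw (the function name and the 5% backlight floor are assumptions).

```python
def dim_frame(pixels, min_backlight=0.05):
    # Scale the backlight to the frame's peak 8-bit gray level, then boost
    # the pixel data to compensate, so the perceived image is unchanged
    # while dark frames leak less light (higher dynamic contrast).
    peak = max(pixels)
    backlight = max(peak / 255.0, min_backlight)  # never switch fully off
    compensated = [min(round(p / backlight), 255) for p in pixels]
    return backlight, compensated

# A mostly dark frame: the backlight drops to a third of full power and
# the pixel values are scaled up so the image looks the same.
level, frame = dim_frame([0, 10, 30, 85])
```

On an all-bright frame the backlight stays at 100% and the pixels pass through unchanged, so the scheme only trades anything away on dark content.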

Acer Adaptive Contrast Management (ACM)

Acer has its own name for dynamic contrast control, as above. It is marketed as offering improved
detail in both dark and light scenes, as well as helping to reduce power consumption.

Acer ACM Whitepaper

Black Frame Insertion (BFI)

This was first unveiled at CeBIT 2006. By inserting a black frame between images, BenQ / AU
Optronics claim the technique helps "clean" the human eye of the perceived afterglow caused by retention
of images in the brain. They have named this technology BFI (Black Frame Insertion). BenQ have a close
affiliation with AU Optronics, and so BFI will be used in some of their range. Be aware that BenQ
sometimes use slightly different terminology: they refer to their overdriven panels as having 'Advanced
Motion Acceleration' (AMA), but those featuring Black Frame Insertion technology may be referred to as
AMA-Z. For example, the BenQ FP241W comes in two versions, the FP241W without BFI and the
FP241WZ with BFI.

BenQ comment that even a 0ms TFT would result in perceived afterglow due to the human eye mixing
images and introducing blur. This perceived motion blur effect is in large part due to the human visual
system and is something manufacturers are trying to overcome on their hold-type displays. This is the
reason behind looking at new technologies other than overdrive to help reduce blurring on these screens.
Other manufacturers such as Samsung are exploring technologies including backlight scanning but AU
Optronics / BenQ are favoring BFI instead.

There are some misconceptions about the technology and I think it is important to realize that this does
NOT mean the screen will be running at 120Hz, or showing 120 fps. In reality, the screen will still
function at 60Hz / 60 fps, but some of those frames will be replaced with black frames. The technology will (at
least initially) offer three settings for timing of the black frame insertion allowing the user to find a level
they find comfortable. There is also an "off" option if required.
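The frame-replacement scheme described above can be sketched as follows. This is an illustrative model, not BenQ's implementation; the hypothetical `period` parameter stands in for the user-selectable timing settings mentioned above.

```python
def insert_black_frames(frames, period=2):
    # Replace every `period`-th refresh with a black frame: the panel still
    # runs at its native 60 Hz, but each image frame is held on screen for
    # a shorter time, reducing perceived motion blur on hold-type displays.
    BLACK = 0
    return [BLACK if (i % period) == period - 1 else f
            for i, f in enumerate(frames)]

# Six refreshes of a 60 Hz stream; every second one becomes black,
# halving the hold time of each displayed image.
out = insert_black_frames([10, 20, 30, 40, 50, 60])
```

A longer `period` trades less brightness loss for less blur reduction, which is the kind of compromise the three user-selectable settings would expose.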

Organic light-emitting diode

Demonstration of a flexible OLED device

A green emitting OLED device

Sony XEL-1, the world's first OLED TV.[1]

An organic light emitting diode (OLED) is a light-emitting diode (LED) in which the emissive
electroluminescent layer is a film of organic compounds which emit light in response to an electric current.
This layer of organic semiconductor material is situated between two electrodes. Generally, at least one of
these electrodes is transparent.

OLEDs are used in television screens, computer monitors, small, portable system screens such as mobile
phones and PDAs, watches, advertising, information and indication. OLEDs are also used in light sources
for general space illumination and in large-area light-emitting elements. Due to their comparatively early
stage of development, they typically emit less light per unit area than inorganic solid-state based LED
point-light sources.

An OLED display functions without a backlight. Thus, it can display deep black levels and can also be
thinner and lighter than established liquid crystal displays. Similarly, in low ambient light conditions such
as dark rooms, an OLED screen can achieve a higher contrast ratio than an LCD screen using either cold
cathode fluorescent lamps or the more recently developed LED backlight.

There are two main families of OLEDs: those based upon small molecules and those employing polymers.
Adding mobile ions to an OLED creates a Light-emitting Electrochemical Cell or LEC, which has a
slightly different mode of operation.

OLED displays can use either passive-matrix (PMOLED) or active-matrix addressing schemes. Active-
matrix OLEDs (AMOLED) require a thin-film transistor backplane to switch each individual pixel on or
off, and can make higher resolution and larger size displays possible.

History

The first observations of electroluminescence in organic materials were in the early 1950s by A. Bernanose
and co-workers at the Nancy-Université, France. They applied high-voltage alternating current (AC) fields
in air to materials such as acridine orange, either deposited on or dissolved in cellulose or cellophane thin
films. The proposed mechanism was either direct excitation of the dye molecules or excitation of
electrons.[2][3][4][5]

In 1960, Martin Pope and co-workers at New York University developed ohmic dark-injecting electrode
contacts to organic crystals.[6][7][8] They further described the necessary energetic requirements (work
functions) for hole and electron injecting electrode contacts. These contacts are the basis of charge injection
in all modern OLED devices. Pope's group also first observed direct current (DC) electroluminescence
under vacuum on a pure single crystal of anthracene and on anthracene crystals doped with tetracene in
1963[9] using a small area silver electrode at 400V. The proposed mechanism was field-accelerated electron
excitation of molecular fluorescence.

Pope's group reported in 1965[10] that in the absence of an external electric field, the electroluminescence in
anthracene crystals is caused by the recombination of a thermalized electron and hole, and that the
conducting level of anthracene is higher in energy than the exciton energy level. Also in 1965, W. Helfrich
and W. G. Schneider of the National Research Council in Canada produced double injection recombination
electroluminescence for the first time in an anthracene single crystal using hole and electron injecting
electrodes,[11] the forerunner of modern double injection devices. In the same year, Dow Chemical
researchers patented a method of preparing electroluminescent cells using high voltage (500–1500 V) AC-
driven (100–3000 Hz) electrically-insulated one millimetre thin layers of a melted phosphor consisting of
ground anthracene powder, tetracene, and graphite powder.[12] Their proposed mechanism involved
electronic excitation at the contacts between the graphite particles and the anthracene molecules.

Device performance was limited by the previously poor electrical conductivity of organic materials.
However this was overcome with the discovery and development of highly conductive polymers.[13] For
more on the history of such materials, see conductive polymers.

Electroluminescence from polymer films was first observed by Roger Partridge at the National Physical
Laboratory in the United Kingdom. The device consisted of a film of poly(n-vinylcarbazole) up to 2.2
micrometres thick located between two charge injecting electrodes. The results of the project were patented
in 1975[14] and published in 1983.[15][16][17][18]

The first diode device was reported at Eastman Kodak by Ching W. Tang and Steven Van Slyke in 1987.[19]
This device used a novel two-layer structure with separate hole transporting and electron transporting
layers such that recombination and light emission occurred in the middle of the organic layer. This resulted
in a reduction in operating voltage and improvements in efficiency and led to the current era of OLED
research and device production.

Research into polymer electroluminescence culminated in 1990 with J. H. Burroughes et al. at the
Cavendish Laboratory in Cambridge reporting a high efficiency green light-emitting polymer based device
using 100 nm thick films of poly(p-phenylene vinylene).[20]

Working principle

Schematic of a bilayer OLED: 1. Cathode (−), 2. Emissive Layer, 3. Emission of radiation, 4. Conductive
Layer, 5. Anode (+)

A typical OLED is composed of a layer of organic materials situated between two electrodes, the anode and
cathode, all deposited on a substrate. The organic molecules are electrically conductive as a result of
delocalization of pi electrons caused by conjugation over all or part of the molecule. These materials have
conductivity levels ranging from insulators to conductors, and therefore are considered organic
semiconductors. The highest occupied and lowest unoccupied molecular orbitals (HOMO and LUMO) of
organic semiconductors are analogous to the valence and conduction bands of inorganic semiconductors.

Originally, the most basic polymer OLEDs consisted of a single organic layer. One example was the first
light-emitting device synthesised by J. H. Burroughes et al., which involved a single layer of poly(p-
phenylene vinylene). However multilayer OLEDs can be fabricated with two or more layers in order to
improve device efficiency. As well as conductive properties, different materials may be chosen to aid
charge injection at electrodes by providing a more gradual electronic profile,[21] or block a charge from
reaching the opposite electrode and being wasted.[22] Many modern OLEDs incorporate a simple bilayer
structure, consisting of a conductive layer and an emissive layer.

During operation, a voltage is applied across the OLED such that the anode is positive with respect to the
cathode. A current of electrons flows through the device from cathode to anode, as electrons are injected
into the LUMO of the organic layer at the cathode and withdrawn from the HOMO at the anode. This latter
process may also be described as the injection of electron holes into the HOMO. Electrostatic forces bring
the electrons and the holes towards each other and they recombine forming an exciton, a bound state of the
electron and hole. This happens closer to the emissive layer, because in organic semiconductors holes are
generally more mobile than electrons. The decay of this excited state results in a relaxation of the energy
levels of the electron, accompanied by emission of radiation whose frequency is in the visible region. The
frequency of this radiation depends on the band gap of the material, in this case the difference in energy
between the HOMO and LUMO.

As electrons and holes are fermions with half integer spin, an exciton may either be in a singlet state or a
triplet state depending on how the spins of the electron and hole have been combined. Statistically three
triplet excitons will be formed for each singlet exciton. Decay from triplet states (phosphorescence) is spin
forbidden, increasing the timescale of the transition and limiting the internal efficiency of fluorescent
devices. Phosphorescent organic light-emitting diodes make use of spin–orbit interactions to facilitate
intersystem crossing between singlet and triplet states, thus obtaining emission from both singlet and triplet
states and improving the internal efficiency.
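The 3:1 triplet-to-singlet ratio translates directly into the efficiency ceiling described above. A minimal sketch (the 25% cap is the commonly quoted theoretical limit for fluorescent emitters, not a figure from this text):

```python
# Spin statistics: of every four electron-hole recombinations, one forms
# a singlet exciton and three form triplets.
singlet_fraction = 1 / 4
triplet_fraction = 3 / 4

# Fluorescent emitter: only singlet decay is radiative, so internal
# quantum efficiency is capped at the singlet fraction (25%).
iqe_fluorescent = singlet_fraction

# Phosphorescent emitter: spin-orbit coupling enables intersystem
# crossing, so triplets can emit as well, approaching 100%.
iqe_phosphorescent = singlet_fraction + triplet_fraction
```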

Indium tin oxide (ITO) is commonly used as the anode material. It is transparent to visible light and has a
high work function which promotes injection of holes into the HOMO level of the organic layer. A typical
conductive layer may consist of PEDOT:PSS[23] as the HOMO level of this material generally lies between
the workfunction of ITO and the HOMO of other commonly used polymers, reducing the energy barriers
for hole injection. Metals such as barium and calcium are often used for the cathode as they have low work
functions which promote injection of electrons into the LUMO of the organic layer.[24] Such metals are
reactive, so require a capping layer of aluminium to avoid degradation.

Single carrier devices are typically used to study the kinetics and charge transport mechanisms of an
organic material and can be useful when trying to study energy transfer processes. As current through the
device is composed of only one type of charge carrier, either electrons or holes, recombination does not
occur and no light is emitted. For example, electron only devices can be obtained by replacing ITO with a
lower work function metal which increases the energy barrier of hole injection. Similarly, hole only devices
can be made by using a cathode composed solely of aluminium, resulting in an energy barrier too large for
efficient electron injection.[25][26][27]

MATERIAL TECHNOLOGIES

Small molecules

Alq3,[19] commonly used in small molecule OLEDs.

Efficient OLEDs using small molecules were first developed by Dr. Ching W. Tang et al.[19] at Eastman
Kodak. The term OLED traditionally refers specifically to this type of device, though the term SM-OLED
is also in use.

Molecules commonly used in OLEDs include organometallic chelates (for example Alq3, used in the
organic light-emitting device reported by Tang et al.), fluorescent and phosphorescent dyes and conjugated
dendrimers. A number of materials are used for their charge transport properties, for example
triphenylamine and derivatives are commonly used as materials for hole transport layers.[28] Fluorescent
dyes can be chosen to obtain light emission at different wavelengths, and compounds such as perylene,
rubrene and quinacridone derivatives are often used.[29] Alq3 has been used as a green emitter, electron
transport material and as a host for yellow and red emitting dyes.

The production of small molecule devices and displays usually involves thermal evaporation in a vacuum.
This makes the production process more expensive, and less suited to large-area devices, than other
processing techniques. However, contrary to polymer-based devices, the vacuum deposition process
enables the formation of well controlled, homogeneous films, and the construction of very complex multi-
layer structures. This high flexibility in layer design, enabling distinct charge transport and charge blocking
layers to be formed, is the main reason for the high efficiencies of the small molecule OLEDs.

Coherent emission from a laser dye-doped tandem SM-OLED device, excited in the pulsed regime, has
been demonstrated.[30] The emission is nearly diffraction limited with a spectral width similar to that of
broadband dye lasers.[31]

Polymer light-emitting diodes

poly(p-phenylene vinylene), used in the first PLED.[20]

Polymer light-emitting diodes (PLED), also light-emitting polymers (LEP), involve an electroluminescent
conductive polymer that emits light when connected to an external voltage. They are used as a thin film for
full-spectrum colour displays. Polymer OLEDs are quite efficient and require a relatively small amount of
power for the amount of light produced.

Vacuum deposition is not a suitable method for forming thin films of polymers. However, polymers can be
processed in solution, and spin coating is a common method of depositing thin polymer films. This method
is more suited to forming large-area films than thermal evaporation. No vacuum is required, and the
emissive materials can also be applied on the substrate by a technique derived from commercial inkjet
printing.[32][33] However, as the application of subsequent layers tends to dissolve those already present,
formation of multilayer structures is difficult with these methods. The metal cathode may still need to be
deposited by thermal evaporation in vacuum.

Typical polymers used in PLED displays include derivatives of poly(p-phenylene vinylene) and
polyfluorene. Substitution of side chains onto the polymer backbone may determine the colour of emitted
light[34] or the stability and solubility of the polymer for performance and ease of processing.[35]

While unsubstituted poly(p-phenylene vinylene) (PPV) is typically insoluble, a number of PPVs and related
poly(naphthalene vinylene)s (PNVs) that are soluble in organic solvents or water have been prepared via
ring opening metathesis polymerization.[36][37][38]

Phosphorescent materials

Ir(mppy)3, a phosphorescent dopant which emits green light.[39]

Phosphorescent organic light-emitting diodes use the principle of electrophosphorescence to convert electrical
energy in an OLED into light in a highly efficient manner,[40][41] with the internal quantum efficiencies of
such devices approaching 100%.[42]

Typically, a polymer such as poly(n-vinylcarbazole) is used as a host material to which an organometallic
complex is added as a dopant. Iridium complexes[41] such as Ir(mppy)3[39] are currently the focus of
research, although complexes based on other heavy metals such as platinum[40] have also been used.

The heavy metal atom at the centre of these complexes exhibits strong spin-orbit coupling, facilitating
intersystem crossing between singlet and triplet states. By using these phosphorescent materials, both
singlet and triplet excitons will be able to decay radiatively, hence improving the internal quantum
efficiency of the device compared to a standard PLED where only the singlet states will contribute to
emission of light.

Applications of OLEDs in solid state lighting require the achievement of high brightness with good CIE
coordinates (for white emission). The use of macromolecular species like polyhedral oligomeric
silsesquioxanes (POSS) in conjunction with phosphorescent species such as Ir for printed OLEDs
has exhibited brightnesses as high as 10,000 cd/m2.[43]

DEVICE ARCHITECTURES

Structure

• Bottom or top emission: Bottom emission devices use a transparent or semi-transparent bottom
electrode to get the light through a transparent substrate. Top emission devices[44][45] use a
transparent or semi-transparent top electrode emitting light directly. Top-emitting OLEDs are
better suited for active-matrix applications as they can be more easily integrated with a non-
transparent transistor backplane.

• Transparent OLEDs use transparent or semi-transparent contacts on both sides of the device to
create displays that can be made to be both top and bottom emitting (transparent). TOLEDs can
greatly improve contrast, making it much easier to view displays in bright sunlight.[46] This
technology can be used in Head-up displays, smart windows or augmented reality applications.
Novaled's[47] OLED panel, presented at Finetech Japan 2010, boasts a transparency of 60-70%.

• Stacked OLEDs use a pixel architecture that stacks the red, green, and blue subpixels on top of
one another instead of next to one another, leading to substantial increase in gamut and color
depth, and greatly reducing pixel gap. Currently, other display technologies have the RGB (and
RGBW) pixels mapped next to each other decreasing potential resolution.

• Inverted OLED: In contrast to a conventional OLED, in which the anode is placed on the
substrate, an inverted OLED uses a bottom cathode that can be connected to the drain end of an n-
channel TFT. This is especially useful for the low-cost amorphous silicon TFT backplanes used in
the manufacturing of AMOLED displays.[48]

Patterning technologies

Patternable organic light-emitting devices use a light or heat activated electroactive layer. A latent material
(PEDOT-TMA) is included in this layer that, upon activation, becomes highly efficient as a hole injection
layer. Using this process, light-emitting devices with arbitrary patterns can be prepared.[49]

Colour patterning can be accomplished by means of laser, such as radiation-induced sublimation transfer
(RIST).[50]

Organic vapour jet printing (OVJP) uses an inert carrier gas, such as argon or nitrogen, to transport
evaporated organic molecules (as in Organic Vapor Phase Deposition). The gas is expelled through a
micron sized nozzle or nozzle array close to the substrate as it is being translated. This allows printing
arbitrary multilayer patterns without the use of solvents.

Conventional OLED displays are formed by vapor thermal evaporation (VTE) and are patterned by
shadow-mask. A mechanical mask has openings allowing the vapor to pass only on the desired location.

Backplane technologies

For a high resolution display like a TV, a TFT backplane is necessary to drive the pixels correctly.
Currently, Low Temperature Polycrystalline silicon LTPS-TFT is used for commercial AMOLED displays.
LTPS-TFT performance varies across a display, so various compensation circuits have been
reported.[44] Due to the size limitation of the excimer laser used for LTPS, the AMOLED size was limited.
To cope with the hurdle related to the panel size, amorphous-silicon/microcrystalline-silicon backplanes
have been reported with large display prototype demonstrations.[51]

Advantages

Demonstration of a 4.1" prototype flexible display from Sony

The different manufacturing process of OLEDs lends itself to several advantages over flat-panel displays
made with LCD technology.

• Future lower cost: Although the method is not currently commercially viable for mass
production, OLEDs can be printed onto any suitable substrate using an inkjet printer or even
screen printing technologies,[52] so they could theoretically have a lower cost than LCDs or plasma
displays. However, it is the fabrication of the substrate that is the most complex and expensive
process in the production of a TFT LCD, so any savings offered by printing the pixels is easily
cancelled out by OLED's requirement to use a more costly LTPS substrate - a fact that is borne out
by the significantly higher initial price of AMOLED displays than their TFT LCD competitors. A
mitigating factor to this price differential going into the future is the cost of retooling existing lines
to produce AMOLED displays over LCDs to take advantage of the economies of scale afforded by
mass production.

• Light weight & flexible plastic substrates: OLED displays can be fabricated on flexible plastic
substrates leading to the possibility of Organic light-emitting diode roll-up display being
fabricated or other new applications such as roll-up displays embedded in fabrics or clothing. As
the substrate used can be flexible, such as PET,[53] the displays may be produced inexpensively.

• Wider viewing angles & improved brightness: OLEDs can enable a greater artificial contrast
ratio (both dynamic range and static, measured in purely dark conditions) and viewing angle
compared to LCDs because OLED pixels directly emit light. OLED pixel colours appear correct
and unshifted, even as the viewing angle approaches 90 degrees from normal.

• Better power efficiency: LCDs filter the light emitted from a backlight, allowing a small fraction
of light through so they cannot show true black, while an inactive OLED element produces no
light and consumes no power.[54]

• Response time: OLEDs can also have a faster response time than standard LCD screens. Whereas
LCD displays are capable of a 1 ms response time or less[55] offering a frame rate of 1,000 Hz or
higher, an OLED can theoretically have less than 0.01 ms response time enabling 100,000 Hz
refresh rates.
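The refresh rates quoted in the response-time bullet follow from a simple reciprocal relation, sketched here under the assumption of one complete pixel transition per frame:

```python
def max_refresh_hz(response_time_ms):
    # If each pixel transition must fully settle within one frame, the
    # response time bounds the usable refresh rate: f_max = 1 / t_response.
    return 1000.0 / response_time_ms

lcd_hz = max_refresh_hz(1.0)    # 1 ms response  -> 1,000 Hz
oled_hz = max_refresh_hz(0.01)  # 0.01 ms response -> 100,000 Hz
```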

Disadvantages

LEP display showing partial failure

An old OLED display showing wear

• Lifespan: The biggest technical problem for OLEDs was the limited lifetime of the organic
materials.[56] In particular, blue OLEDs historically have had a lifetime of around 14,000 hours to
half original brightness (five years at 8 hours a day) when used for flat-panel displays. This is
lower than the typical lifetime of LCD, LED or PDP technology—each currently rated for about
60,000 hours to half brightness, depending on manufacturer and model. However, some
manufacturers' displays aim to increase the lifespan of OLED displays, pushing their expected life
past that of LCD displays by improving light outcoupling, thus achieving the same brightness at a
lower drive current.[57][58] In 2007, experimental OLEDs were created which can sustain 400 cd/m2
of luminance for over 198,000 hours for green OLEDs and 62,000 hours for blue OLEDs.[59]

• Color balance issues: Additionally, as the OLED material used to produce blue light degrades
significantly more rapidly than the materials that produce other colors, blue light output will
decrease relative to the other colors of light. This differential color output change will change the
color balance of the display and is much more noticeable than a decrease in overall luminance.[60]
This can be partially avoided by adjusting colour balance but this may require advanced control
circuits and interaction with the user, which is unacceptable for some users. In order to delay the
problem, manufacturers bias the colour balance towards blue so that the display initially has an
artificially blue tint, leading to complaints of artificial-looking, over-saturated colors. More
commonly, though, manufacturers optimize the size of the R, G and B subpixels to reduce the
current density through the subpixel in order to equalize lifetime at full luminance. For example, a
blue subpixel may be 100% larger than the green subpixel. The red subpixel may be 10% smaller
than the green.

• Efficiency of blue OLEDs: Improvements to the efficiency and lifetime of blue OLEDs are vital
to the success of OLEDs as replacements for LCD technology. Considerable research has been
invested in developing blue OLEDs with high external quantum efficiency as well as a deeper blue
color.[61][62] External quantum efficiency values of 20% and 19% have been reported for red
(625 nm) and green (530 nm) diodes, respectively.[63][64] However, blue diodes (430 nm) have only
been able to achieve maximum external quantum efficiencies in the range of 4% to 6%.[65]
This is primarily due to two factors. Firstly, the human eye is less sensitive to the blue wavelength
compared to the green or red, so lower efficiency is expected. Secondly, by calculating the band
gap (Eg = hc/λ), it is clear that the shorter wavelength of the blue OLED results in a larger band
gap at 2.9 eV. This leads to higher barriers, so less efficiency is also expected.

• Water damage: Water can damage the organic materials of the displays. Therefore, improved
sealing processes are important for practical manufacturing. Water damage may especially limit
the longevity of more flexible displays.[66]

• Outdoor performance: As an emissive display technology, OLEDs rely completely upon
converting electricity to light, unlike most LCDs, which are to some extent reflective; e-ink leads
the way in efficiency with ~ 33% ambient light reflectivity, enabling the display to be used
without any internal light source. The metallic cathode in an OLED acts as a mirror, with
reflectance approaching 80%, leading to poor readability in bright ambient light such as outdoors.
However, with the proper application of a circular polarizer and anti-reflective coatings, the
diffuse reflectance can be reduced to less than 0.1%. With 10,000 fc incident illumination (typical
test condition for simulating outdoor illumination), that yields an approximate photopic contrast of
5:1.

• Power consumption: While an OLED will consume around 40% of the power of an LCD
displaying an image which is primarily black, for the majority of images it will consume 60–80%
of the power of an LCD - however it can use over three times as much power to display an image
with a white background[67] such as a document or website. This can lead to disappointing real-
world battery life in mobile devices.

• Screen burn-in: Unlike displays with a common light source, the brightness of each OLED pixel
fades depending on the content displayed. The varied lifespan of the organic dyes can cause a
discrepancy between red, green, and blue intensity. This leads to image persistence, also known as
burn-in.[68]
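The band-gap arithmetic cited in the blue-efficiency bullet (Eg = hc/λ) can be checked directly; hc ≈ 1239.84 eV·nm in these units:

```python
HC_EV_NM = 1239.84  # Planck constant x speed of light, in eV*nm

def band_gap_ev(wavelength_nm):
    # Eg = h*c / lambda: shorter wavelengths require larger band gaps.
    return HC_EV_NM / wavelength_nm

blue = band_gap_ev(430)   # ~2.88 eV, the ~2.9 eV quoted above
green = band_gap_ev(530)  # ~2.34 eV
red = band_gap_ev(625)    # ~1.98 eV
```

The roughly 1 eV jump from red to blue is the "higher barrier" the text refers to.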

Manufacturers and Commercial Uses

Magnified image of the AMOLED screen on the Google Nexus One smartphone using the RGBG system
of the PenTile Matrix Family.

A 3.8 cm (1.5 in) OLED display from a Creative ZEN V media player

OLED technology is used in commercial applications such as displays for mobile phones and portable
digital media players, car radios and digital cameras among others. Such portable applications favor the
high light output of OLEDs for readability in sunlight and their low power drain. Portable displays are also
used intermittently, so the lower lifespan of organic displays is less of an issue. Prototypes have been made
of flexible and rollable displays which use OLEDs' unique characteristics. Applications in flexible signs
and lighting are also being developed.[69] Philips Lighting have made OLED lighting samples under the
brand name 'Lumiblade' available online.[70]

OLEDs have been used in most Motorola and Samsung colour cell phones, as well as some HTC, LG and
Sony Ericsson models.[71] Nokia has also recently introduced some OLED products including the N85 and
the N86 8MP, both of which feature an AMOLED display. OLED technology can also be found in digital
media players such as the Creative ZEN V, the iriver clix, the Zune HD and the Sony Walkman X Series.

The Google and HTC Nexus One smartphone includes an AMOLED screen, as does HTC's own Desire and
Legend phones. However due to supply shortages of the Samsung-produced displays, certain HTC models
will use Sony's Super LCD technology in the future.[72]

Other manufacturers of OLED panels include Anwell Technologies Limited,[73] Chi Mei Corporation,[74]
LG,[75] and others.[76]

DuPont stated in a press release in May 2010 that they can produce a 50-inch OLED TV in two minutes
with a new printing technology. If this can be scaled up in terms of manufacturing, then the total cost of
OLED TVs would be greatly reduced. DuPont also states that OLED TVs made with this less expensive
technology can last up to 15 years if left on for a normal eight hour day.[77][78]

Handheld computer manufacturer OQO introduced the smallest Windows netbook computer, including an
OLED display, in 2009.[79]

The use of OLEDs may be subject to patents held by Eastman Kodak, DuPont, General Electric, Royal
Philips Electronics, numerous universities and others.[80] There are by now literally thousands of patents
associated with OLEDs, both from larger corporations and smaller technology companies [1].

Samsung applications

By 2004 Samsung, South Korea's largest conglomerate, was the world's largest OLED manufacturer,
producing 40% of the OLED displays made in the world,[81] and as of 2010 has a 98% share of the global
AMOLED market.[82] The company is leading the world OLED industry, generating $100.2 million out of
the total $475 million revenues in the global OLED market in 2006.[83] As of 2006, it held more than 600
American patents and more than 2800 international patents, making it the largest owner of AMOLED
technology patents.[83]

Samsung SDI announced in 2005 the world's largest OLED TV at the time, at 21 inches (53 cm).[84] This
OLED featured the highest resolution at the time, of 6.22 million pixels. In addition, the company adopted
active matrix based technology for its low power consumption and high-resolution qualities. This was
exceeded in January 2008, when Samsung showcased the world's largest and thinnest OLED TV at the
time, at 31 inches and 4.3 mm.[85]

In May 2008, Samsung unveiled an ultra-thin 12.1 inch laptop OLED display concept, with a 1,280×768
resolution with infinite contrast ratio.[86] According to Woo Jong Lee, Vice President of the Mobile Display
Marketing Team at Samsung SDI, the company expected OLED displays to be used in notebook PCs as
soon as 2010.[87]

In October 2008, Samsung showcased the world's thinnest OLED display, also the first to be 'flappable' and
bendable.[88] It measures just 0.05 mm (thinner than paper), yet a Samsung staff member said that it is
"technically possible to make the panel thinner".[88] To achieve this thickness, Samsung etched an OLED
panel that uses a normal glass substrate. The drive circuit was formed by low-temperature polysilicon
TFTs. Also, low-molecular organic EL materials were employed. The pixel count of the display is 480 ×
272. The contrast ratio is 100,000:1, and the luminance is 200 cd/m². The colour reproduction range is
100% of the NTSC standard.

In the same month, Samsung unveiled what was then the world's largest OLED Television at 40-inch with a
Full HD resolution of 1920×1080 pixel.[89] In the FPD International, Samsung stated that its 40-inch OLED
Panel is the largest size currently possible. The panel has a contrast ratio of 1,000,000:1, a colour gamut of
107% NTSC, and a luminance of 200 cd/m² (peak luminance of 600 cd/m²).

At the Consumer Electronics Show (CES) in January 2010, Samsung demonstrated a laptop computer with
a large, transparent OLED display featuring up to 40% transparency[90] and an animated OLED display in a
photo ID card.[91]

Samsung's latest AMOLED smartphones use their Super AMOLED trademark, with the Samsung Wave
S8500 and Samsung i9000 Galaxy S being launched in June 2010.

Sony applications

Sony XEL-1, the world's first OLED TV.[1] (front)


Sony XEL-1 (side)

The Sony CLIÉ PEG-VZ90 was released in 2004, being the first PDA to feature an OLED screen.[92] Other
Sony products to feature OLED screens include the MZ-RH1 portable minidisc recorder, released in
2006[93] and the Walkman X Series.[94]

At the Las Vegas CES 2007, Sony showcased 11-inch (28 cm, resolution 960×540) and 27-inch (68.5 cm,
full HD resolution at 1920×1080) OLED TV models.[95] Both claimed 1,000,000:1 contrast ratios and total
thicknesses (including bezels) of 5 mm. In April 2007, Sony announced it would manufacture 1000 11-inch
OLED TVs per month for market testing purposes.[96] On October 1, 2007, Sony announced that the 11-
inch model, now called the XEL-1, would be released commercially;[1] the XEL-1 was first released in
Japan in December 2007.[97]

In May 2007, Sony publicly unveiled a video of a 2.5-inch flexible OLED screen which is only 0.3
millimeters thick.[98] At the Display 2008 exhibition, Sony demonstrated a 0.2 mm thick 3.5 inch display
with a resolution of 320×200 pixels and a 0.3 mm thick 11 inch display with 960×540 pixels resolution,
one-tenth the thickness of the XEL-1.[99][100]

In July 2008, NEDO, a Japanese government body, said it would fund a joint project of leading firms to develop a key technology for producing large, energy-saving organic displays. The project involves one laboratory and 10 companies, including Sony Corp. NEDO said the project was aimed at developing a core technology to mass-produce 40-inch or larger OLED displays in the late 2010s.[101]

In October 2008, Sony published results of research it carried out with the Max Planck Institute over the
possibility of mass-market bending displays, which could replace rigid LCDs and plasma screens.
Eventually, bendable, transparent OLED screens could be stacked to produce 3D images with much greater
contrast ratios and viewing angles than existing products.[102]

Sony exhibited a 24.5" prototype OLED 3D television during the Consumer Electronics Show in January
2010.[103]

References

15. Sony XEL-1: The world's first OLED TV, OLED-Info.com, Nov. 17, 2008
16. A. Bernanose, M. Comte, P. Vouaux, J. Chim. Phys. 1953, 50, 64.
17. A. Bernanose, P. Vouaux, J. Chim. Phys. 1953, 50, 261.
18. A. Bernanose, J. Chim. Phys. 1955, 52, 396.
19. A. Bernanose, P. Vouaux, J. Chim. Phys. 1955, 52, 509.
20. Kallmann, H.; Pope, M. (1960). "Positive Hole Injection into Organic Crystals". The Journal of
Chemical Physics 32: 300. doi:10.1063/1.1700925.


21. Kallmann, H.; Pope, M. (1960). "Bulk Conductivity in Organic Crystals". Nature 186: 31.
doi:10.1038/186031a0.
22. Mark, Peter; Helfrich, Wolfgang (1962). "Space-Charge-Limited Currents in Organic Crystals".
Journal of Applied Physics 33: 205. doi:10.1063/1.1728487.
23. Pope, M.; Kallmann, H. P.; Magnante, P. (1963). "Electroluminescence in Organic Crystals". The
Journal of Chemical Physics 38: 2042. doi:10.1063/1.1733929.
24. Kim, Seul Ong; Lee, Kum Hee; Kim, Gu Young; Seo, Ji Hoon; Kim, Young Kwan; Yoon, Seung
Soo (2010). "A highly efficient deep blue fluorescent OLED based on
diphenylaminofluorenylstyrene-containing emitting materials". Synthetic Metals 160: 1259.
doi:10.1016/j.synthmet.2010.03.020.
25. Jabbour, G. E.; Kawabe, Y.; Shaheen, S. E.; Wang, J. F.; Morrell, M. M.; Kippelen, B.;
Peyghambarian, N. (1997). "Highly efficient and bright organic electroluminescent devices with
an aluminum cathode". Applied Physics Letters 71: 1762. doi:10.1063/1.119392.
26. Mikami, Akiyoshi; Koshiyama, Tatsuya; Tsubokawa, Tetsuro (2005). "High-Efficiency Color and
White Organic Light-Emitting Devices Prepared on Flexible Plastic Substrates". Japanese Journal
of Applied Physics 44: 608. doi:10.1143/JJAP.44.608.
27. Mikami, A.; Nishita, Y.; Iida, Y. “High-efficiency Phosphorescent Organic Light-Emitting
Devices Coupled with Lateral Color-Conversion Layer.” SID Symposium Digest of Technical
Papers 2006. 37-1. 1376-1379
28. P. Chamorro-Posada, J. Martín-Gil, P. Martín-Ramos, L.M. Navas-Gracia, Fundamentos de la
Tecnología OLED (Fundamentals of OLED Technology). University of Valladolid, Spain (2008).
ISBN 978-84-936644-0-4. Available online, with permission from the authors, at the webpage:
http://www.scribd.com/doc/13325893/Fundamentos-de-la-Tecnologia-OLED

• Shinar, Joseph (Ed.), Organic Light-Emitting Devices: A Survey. NY: Springer-Verlag (2004).
ISBN 0-387-95343-4.
• Hari Singh Nalwa (Ed.), Handbook of Luminescence, Display Materials and Devices, Volume 1-3.
American Scientific Publishers, Los Angeles (2003). ISBN 1-58883-010-1. Volume 1: Organic
Light-Emitting Diodes
• Hari Singh Nalwa (Ed.), Handbook of Organic Electronics and Photonics, Volume 1-3. American
Scientific Publishers, Los Angeles (2008). ISBN 1-58883-095-0.
• Müllen, Klaus (Ed.), Organic Light Emitting Devices: Synthesis, Properties and Applications.
Wiley-VCH (2006). ISBN 3-527-31218-8
• Yersin, Hartmut (Ed.), Highly Efficient OLEDs with Phosphorescent Materials. Wiley-VCH
(2007). ISBN 3-527-40594-1


5.9. TOUCH SCREEN TECHNOLOGY

The recent advances in electronic devices make things challenging for screen technology. Touch screens are basically thin films made susceptible to touch, with diodes or light-emitting diodes (LEDs) used for the technology. Among the latest screen technologies one finds organic light-emitting diodes (OLED) and liquid crystal displays (LCD), where the liquid forms are used between two substrates of thin film and the upper film is touch sensitive. Indium tin oxide (ITO) is commonly used as the anode material: it is transparent to visible light and has a high work function. Metals such as barium and calcium are often used for the cathode because of their low work function; as such metals are reactive, a layer of aluminum is used on top to avoid degradation. This is what makes the screen a touch screen.

As technology advances and the need for portable electronic devices grows, the demand for flexible batteries to power them also becomes a challenge, one taken up by Japanese scientists in the past. Dr. Hiroyuki Nishide, Dr. Hiroaki Konishi and Dr. Takeo Suga of Waseda University designed a battery that consists of a redox-active organic polymer film around 200 nanometers thick. Nitroxide radical groups attached to the polymer act as charge carriers, and the high radical density gives it a high charge/discharge capability.
The power-rate performance is very high: it takes only one minute to fully charge the battery, and it has a long life of 1000 cycles or more. The team made the thin polymer film by a solution-processable method: a soluble polymer with the radical groups attached is "spin coated" onto a surface. After ultraviolet irradiation, the polymer becomes cross-linked with the help of a bisazide cross-linking agent.

A drawback of some organic radical polymers is that they are soluble in the electrolyte solution, which results in self-discharge of the battery; yet the polymer must be soluble to be spin-coated. The cross-linking method makes the polymer mechanically tough. Professor Peter Skabara, an expert in electroactive materials at the University of Strathclyde, praised the high stability and fabrication strategy of the polymer-based battery. The plastic battery plays a part in ensuring that organic device technologies can function in thin-film and flexible form as a complete package.


With the use of Double Tokomak Collider (DTC), Magnetic Confinement Tokomak Collider (MCTC) and Duo Triad Tokomak Collider (DTTC) based technology, with the help of nano torii, the performance of the battery is enhanced by 2.4 times for DTC, 3.6 times for MCTC and 6.46 times for DTTC.

Such an organic radical battery could be used within three years in pocket-sized integrated circuit cards, used for memory storage and microprocessing. Mobile phones could then be made in credit-card size and shape.

Lightweight and flexible plastic substrates: OLED displays can be fabricated on flexible plastic substrates, leading to the possibility of roll-up OLED displays or other new applications such as displays embedded in fabrics or clothing. As the substrate used can be flexible, the displays may be produced inexpensively.

A rocket-propelled grenade (RPG) uses a piezoelectric fuze; with the use of DTC, MCTC and DTTC its sensitivity is enhanced.

A piezoelectric disk used for a guitar pickup is likewise enhanced with the use of DTC, MCTC and DTTC, as is its use in buzzers.


5.15. HI-FI SOUND SYSTEM

The principle of operation of a piezoelectric sensor is that a physical dimension, transformed into a force, acts on two opposing faces of the sensing element. Depending on the design of a sensor, different "modes" of loading the piezoelectric element can be used: longitudinal, transversal and shear.

Detection of pressure variations in the form of sound is the most common sensor
application, e.g. piezoelectric microphones (sound waves bend the piezoelectric material,
creating a changing voltage) and piezoelectric pickups for Acoustic-electric guitars. A
piezo sensor attached to the body of an instrument is known as a contact microphone.

Piezoelectric sensors especially are used with high frequency sound in ultrasonic
transducers for medical imaging and also industrial nondestructive testing (NDT).

For many sensing techniques, the sensor can act as both a sensor and an actuator – often
the term transducer is preferred when the device acts in this dual capacity, but most piezo
devices have this property of reversibility whether it is used or not. Ultrasonic
transducers, for example, can inject ultrasound waves into the body, receive the returned
wave, and convert it to an electrical signal (a voltage). Most medical ultrasound
transducers are piezoelectric.

In addition to those mentioned above, various sensor applications include:


• Piezoelectric elements are also used in the detection and generation of sonar
waves.
• Power monitoring in high power applications (e.g. medical treatment,
sonochemistry and industrial processing).
• Piezoelectric microbalances are used as very sensitive chemical and biological
sensors.
• Piezos are sometimes used in strain gauges.
• Piezoelectric transducers are used in electronic drum pads to detect the impact of
the drummer's sticks.
• Automotive engine management systems use piezoelectric transducers to detect
detonation by sampling the vibrations of the engine block and also to detect the
precise moment of fuel injection (needle lift sensors).
• Ultrasonic piezo sensors are used in the detection of acoustic emissions in
acoustic emission testing.
• Crystal earpieces are sometimes used in old or low power radios

Amplified piezoelectric actuator with multilayer ceramic

As very high electric fields correspond to only tiny changes in the width of the crystal,
this width can be changed with better-than-micrometer precision, making piezo crystals
the most important tool for positioning objects with extreme accuracy — thus their use in
actuators. Multilayer ceramics, using layers thinner than 100 micrometres, allow reaching
high electric fields with voltage lower than 150 V. These ceramics are used within two
kinds of actuators: direct piezo actuators and Amplified Piezoelectric Actuators. While
direct actuator's stroke is generally lower than 100 micrometres, amplified piezo
actuators can reach millimeter strokes.
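As a rough feel for these numbers, the free stroke of a stack scales as the number of layers times the d33 coefficient times the drive voltage. A minimal sketch, where the d33 value, layer count and amplification ratio are illustrative assumptions (only the 150 V figure comes from the text):

```python
# Free stroke of a multilayer piezo stack: each layer elongates by d33 * V.
d33 = 500e-12        # m/V, a typical soft-PZT charge coefficient (assumed)
voltage = 150.0      # V, the drive voltage quoted in the text
n_layers = 200       # e.g. a ~20 mm stack of 100-micrometre layers (assumed)

free_stroke = n_layers * d33 * voltage
print(f"direct stroke  ~ {free_stroke * 1e6:.0f} micrometres")

# A lever-amplified actuator multiplies the stroke by its ratio:
amplification = 20   # assumed amplification ratio
print(f"amplified stroke ~ {free_stroke * amplification * 1e3:.1f} mm")
```

With these assumed values the direct stroke comes out in the tens of micrometres and the amplified stroke in the sub-millimetre range, consistent with the figures above.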

• Loudspeakers: Voltage is converted to mechanical movement of a piezoelectric polymer film.
• Piezoelectric motors: piezoelectric elements apply a directional force to an axle,
causing it to rotate. Due to the extremely small distances involved, the piezo
motor is viewed as a high-precision replacement for the stepper motor.
• Piezoelectric elements can be used in laser mirror alignment, where their ability to
move a large mass (the mirror mount) over microscopic distances is exploited to
electronically align some laser mirrors. By precisely controlling the distance
between mirrors, the laser electronics can accurately maintain optical conditions
inside the laser cavity to optimize the beam output.
• A related application is the acousto-optic modulator, a device that scatters light
off of sound waves in a crystal, generated by piezoelectric elements. This is useful
for fine-tuning a laser's frequency.
• Atomic force microscopes and scanning tunneling microscopes employ converse
piezoelectricity to keep the sensing needle close to the probe[19].
• Inkjet printers: On many inkjet printers, piezoelectric crystals are used to drive the
ejection of ink from the inkjet print head towards the paper.


• Diesel engines: high-performance common rail diesel engines use piezoelectric fuel injectors, first developed by Robert Bosch GmbH, instead of the more common solenoid valve devices.
• Active control of vibration using amplified actuators.
• X-ray shutters.
• XY stages for micro scanning used in infrared cameras.
• Moving the patient precisely inside active CT and MRI scanners where the strong
radiation or magnetism precludes electric motors.[20]

Frequency standard

The piezoelectrical properties of quartz are useful as standard of frequency.

• Quartz clocks employ a tuning fork made from quartz that uses a combination of
both direct and converse piezoelectricity to generate a regularly timed series of
electrical pulses that is used to mark time. The quartz crystal (like any elastic
material) has a precisely defined natural frequency (caused by its shape and size)
at which it prefers to oscillate, and this is used to stabilize the frequency of a
periodic voltage applied to the crystal.
• The same principle is critical in all radio transmitters and receivers, and in
computers where it creates a clock pulse. Both of these usually use a frequency
multiplier to reach the megahertz and gigahertz ranges.
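As a concrete illustration of this principle (standard watch-crystal practice, not something stated in the text): a common watch crystal is cut to oscillate at 32 768 Hz precisely because that is a power of two, so a chain of divide-by-two flip-flop stages reduces it to a one-second tick.

```python
# Divide a 32 768 Hz watch-crystal signal down to 1 Hz with binary stages.
f_crystal = 32768     # Hz = 2**15, the usual tuning-fork crystal frequency
f, stages = f_crystal, 0
while f > 1:
    f //= 2           # each flip-flop stage halves the frequency
    stages += 1
print(stages, "stages ->", f, "Hz")   # 15 stages -> 1 Hz
```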

Piezoelectric motors

A slip-stick actuator.


SPA motor using CEDRAT APA

Types of piezoelectric motor include:

• The travelling-wave motor used for auto-focus in reflex cameras
• Inchworm motors for linear motion
• Rectangular four-quadrant motors with high power density (2.5 watt/cm3) and
speed ranging from 10 nm/s to 800 mm/s.
• Stepping piezo motor, using stick-slip effect.

All these motors, except the stepping stick-slip motor, work on the same principle. Driven by dual orthogonal vibration modes with a phase difference of 90°, the contact point between the two surfaces vibrates in an elliptical path, producing a frictional force between the surfaces. Usually, one surface is fixed, causing the other to move. In most piezoelectric motors the piezoelectric crystal is excited by a sine-wave signal at the resonant frequency of the motor. Using the resonance effect, a much lower voltage can be used to produce a high vibration amplitude.
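The elliptical contact path described above follows directly from two orthogonal sinusoids with a 90° phase difference. A small sketch, with assumed modal amplitudes and drive frequency:

```python
import math

# Two orthogonal vibration modes, 90 degrees out of phase, trace an ellipse.
A, B = 1.0e-6, 0.5e-6        # modal amplitudes in metres (assumed)
omega = 2 * math.pi * 40e3   # 40 kHz drive, an assumed ultrasonic frequency

def contact_point(t):
    x = A * math.sin(omega * t)                # tangential mode
    y = B * math.sin(omega * t + math.pi / 2)  # normal mode, +90 degrees
    return x, y

# Every sampled point satisfies the ellipse equation x^2/A^2 + y^2/B^2 = 1.
for k in range(8):
    x, y = contact_point(k * 3e-6)
    assert abs((x / A) ** 2 + (y / B) ** 2 - 1.0) < 1e-9
```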

A stick-slip motor works using the inertia of a mass and the friction of a clamp. Such motors can be very small; some are used for camera sensor displacement, enabling an anti-shake function.

Reduction of vibrations and noise

Different teams of researchers have been investigating ways to reduce vibrations in materials by attaching piezo elements to the material. When the material is bent by a
vibration in one direction, the vibration-reduction system responds to the bend and sends
electric power to the piezo element to bend in the other direction. Future applications of
this technology are expected in cars and houses to reduce noise.

In a demonstration at the Material Vision Fair in Frankfurt in November 2005, a team from TU Darmstadt in Germany showed several panels that were hit with a rubber mallet,
and the panel with the piezo element immediately stopped swinging.

Piezoelectric ceramic fiber technology is being used as an electronic damping system on some HEAD tennis rackets.[21]


5.13. PYROELECTRIC APPLIANCES

Pyroelectric fusion is the technique of using pyroelectric crystals to generate high-strength electrostatic fields to accelerate deuterium ions (tritium may be used later on) into a metal hydride target, also containing deuterium (or tritium), with sufficient kinetic energy to cause these ions to undergo nuclear fusion. Accelerating light ions with electrostatic fields to produce fusion in solid deuterated targets was first demonstrated by Cockcroft and Walton in 1932. The process is in use today, in the form of thousands of miniaturized versions of their original accelerator, as small sealed-tube neutron generators in the petroleum exploration industry.

The first use of a pyroelectric field to accelerate deuterons was in 1997, in an experiment conducted by Dr. V.D. Dougar Jabon, Dr. G.V. Fedorovich and Dr. N.V. Samsonenko. They were the first to utilize a lithium tantalate (LiTaO3) pyroelectric crystal in fusion experiments.

The novel idea of the pyroelectric approach to fusion is its application of the pyroelectric effect to generate the accelerating electric fields. This is done by heating the crystal from −30°C to +45°C over a period of a few minutes.

In April 2005 a UCLA team headed by James K. Gimzewski, Distinguished Professor of Chemistry and Fellow of the Royal Society, and Seth Putterman, Professor of Physics, utilized a tungsten probe attached to a pyroelectric crystal in order to increase the electric field strength. Brian Naranjo, a graduate student working on his Ph.D. degree under Dr. Putterman, conducted the experiment demonstrating the use of a pyroelectric power source for producing fusion in a laboratory benchtop device. The device used a lithium tantalate (LiTaO3) pyroelectric crystal to ionize deuterium atoms and to accelerate the deuterons towards a stationary erbium dideuteride (ErD2) target. Around 1000 fusion reactions per second took place, each resulting in the production of an 820 keV helium-3 nucleus and a 2.45 MeV neutron.
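The 820 keV / 2.45 MeV split quoted above is fixed by momentum conservation in this D-D branch: the two products share the reaction Q-value in inverse proportion to their masses. A short sketch (the few-keV beam energy is neglected here):

```python
# Energy split of D + D -> He-3 + n from momentum conservation.
Q = 3.27            # MeV released in this branch
m_n, m_he3 = 1, 3   # mass numbers serve as the mass ratio here

E_n = Q * m_he3 / (m_n + m_he3)    # lighter product takes the larger share
E_he3 = Q * m_n / (m_n + m_he3)
print(f"neutron ~ {E_n:.2f} MeV, He-3 ~ {E_he3 * 1000:.0f} keV")
# close to the quoted 2.45 MeV neutron and 820 keV helium-3 nucleus
```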

The team anticipates applications of the device as a neutron generator. An application to micro-thrusters for space propulsion was made in 2010 by Dr. A.B. Rajib Hazarika, Assistant Professor at Diphu Government College, Assam, India and Fellow of the Royal Astronomical Society, for the Diffusion Associated Neoclassical Indigenous System of Hall Assembly (DANISHA), a conceptual Hall thruster which gives the thrust to travel for 56,000 hours in space.

A team at Rensselaer Polytechnic Institute, led by Dr. Yaron Danon and his graduate student Jeffrey Geuther, improved upon the UCLA experiments using a device with two pyroelectric crystals and capable of operating at non-cryogenic temperatures.

Nuclear D-D fusion driven by pyroelectric crystals was proposed by Naranjo and Putterman in 2002. It was also discussed by Brownridge and Shafroth in 2004. The possibility of using pyroelectric crystals in a neutron production device (by D-D fusion) was proposed in a conference paper by Geuther and Danon in 2004, and later in a publication discussing electron and ion acceleration by pyroelectric crystals. None of these later authors had prior knowledge of the earlier 1997 experimental work conducted by Dougar Jabon, Fedorovich and Samsonenko. The key ingredient of using a tungsten needle to produce sufficient ion beam current for use with a pyroelectric crystal power supply was first demonstrated in the 2005 Nature paper, although in a broader context tungsten emitter tips have been used as ion sources in other applications for many years.

In 2009 Dr. A.B. Rajib Hazarika proposed, in a conference paper, the use of pyroelectric crystal in powder form for the Duo Triad Tokomak Collider (DTTC) by using nano torii, which enhances the acceleration by 6.46 times. In 2010 it was found that tungsten emitter tips are not necessary to increase the acceleration potential of pyroelectric crystals; the acceleration potential can allow positive ions to reach kinetic energies between 300 and 310 keV.


Pyroelectric fusion has been hyped in the news media, which has overlooked the earlier experimental work of Dougar Jabon, Fedorovich and Samsonenko. Pyroelectric fusion is not related to the earlier claims of fusion reactions observed during sonoluminescence (bubble fusion) experiments conducted under the direction of Dr. Rusi P. Taleyarkhan of Purdue University. In fact, Brian Naranjo of the UCLA team has been one of the main critics of those earlier prospective fusion claims.

5.23. DRABRH CYCLONE PATTERN STUDY

A 3-D fuzzy differential inclusion (FDI) model is studied to obtain the stability of the cyclone-type pattern formed therein:

x′(t) ∈ [F(x, y, z)]^α        (5.23.1)

y′(t) ∈ [G(x, y, z)]^α        (5.23.2)

z′(t) ∈ [H(x, y, z)]^α        (5.23.3)

where F, G, H : R³ → E′, which can be written as

dy/dx = [G(x, y, z)/F(x, y, z)]^α        (5.23.4)

dz/dx = [H(x, y, z)/F(x, y, z)]^α        (5.23.5)

The system will be stable at all points (x, y, z) ∈ R³ such that 0 ∉ [F(x, y, z)]^α, 0 ∉ [G(x, y, z)]^α and 0 ∉ [H(x, y, z)]^α, for any α > 0.

If P₀ ≡ (x₀, y₀, z₀) is a crisp point of the phase space (i.e., the 3-D space) at which 0 belongs to all the functions, then it is a critical point of the system described by the crisp differential equations

x′(t) = α_x [tanh(κ_x x + σ_x y + ξ_x z) − x] cosh(κ_x x + σ_x y + ξ_x z)        (5.23.6)

y′(t) = α_y [tanh(κ_y x + σ_y y + ξ_y z) − y] cosh(κ_y x + σ_y y + ξ_y z)        (5.23.7)

z′(t) = α_z [tanh(κ_z x + σ_z y + ξ_z z) − z] cosh(κ_z x + σ_z y + ξ_z z)        (5.23.8)

The state variables −1 ≤ x(t), y(t) ≤ 1, 1 ≤ z(t) characterize the stages of the gradient.


By using FDI this changes to

x′(t) ∈ {α_x [tanh(κ_x x + σ_x y + ξ_x z) − x] cosh(κ_x x + σ_x y + ξ_x z)}^α        (5.23.9)

y′(t) ∈ {α_y [tanh(κ_y x + σ_y y + ξ_y z) − y] cosh(κ_y x + σ_y y + ξ_y z)}^α        (5.23.10)

z′(t) ∈ {α_z [tanh(κ_z x + σ_z y + ξ_z z) − z] cosh(κ_z x + σ_z y + ξ_z z)}^α        (5.23.11)
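As an illustration, the crisp system (5.23.6)-(5.23.8) can be integrated numerically. This is a minimal Euler sketch, with all gain values chosen as illustrative assumptions (the text does not fix α, κ, σ or ξ); the trajectory settles onto a critical point where each state variable equals tanh of its own linear combination:

```python
import math

# Minimal Euler integration of the crisp system (5.23.6)-(5.23.8).
alpha = dict(x=1.0, y=1.0, z=1.0)
kappa = dict(x=0.3, y=0.1, z=0.1)   # coefficients of x in each equation (assumed)
sigma = dict(x=0.1, y=0.3, z=0.1)   # coefficients of y (assumed)
xi    = dict(x=0.1, y=0.1, z=0.3)   # coefficients of z (assumed)

def rhs(x, y, z):
    """Right-hand sides x'(t), y'(t), z'(t)."""
    out = []
    for axis, state in zip("xyz", (x, y, z)):
        u = kappa[axis] * x + sigma[axis] * y + xi[axis] * z
        out.append(alpha[axis] * (math.tanh(u) - state) * math.cosh(u))
    return out

x, y, z = 0.1, -0.2, 0.3   # arbitrary starting state
dt = 0.01
for _ in range(5000):      # integrate to t = 50
    dx, dy, dz = rhs(x, y, z)
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz

# At a critical point each state variable equals tanh of its own argument:
u = kappa["x"] * x + sigma["x"] * y + xi["x"] * z
assert abs(x - math.tanh(u)) < 1e-6
```

With these small assumed gains the only critical point is the origin, and the trajectory converges to it.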

This leads to the study of the genesis of cyclone-type development of a tropical storm pattern. The prevalence of favorable geographic and climatic conditions over a large part of the globe during storm seasons is a relatively rare phenomenon. Even when tropical storms do develop, nearly half of them cannot reach hurricane strength (intensity T3.5 on the Dvorak (1984) scale). This rareness is due to the fact that a disturbing vortex is essential to give rise to an intense tropical storm (Emanuel, 1988); a strong tropical storm is called a hurricane in the USA, a typhoon in China, a cyclone in India, etc. The present model idea is very simple. Each of the wind disturbances is due to linear wind jets flowing parallel to the ground; flows past each other give rotational motion, as given by Bhatia and Hazarika (1996). One of the jets is very strong (speed more than 240 km/hr) and the vortex moves with a speed of 25 km/hr acting in the radial direction of the vortex. This flow tends to converge to a fuzzy point of the phase space known as the eye of the cyclone. For stability, the vortex collapses due to second-order shear; such a condition was obtained by Bhatia and Hazarika (1996).
References:
1) Bhatia, P.K. and Hazarika, A.B.R. (1996): Physica Scripta 53, 57.
2) Dvorak, V.F. (1984): NOAA Technical Report NESDIS 11, Satellite Applications Laboratory, Washington D.C.
3) Emanuel, K.A. (1988): American Scientist 76, 370.


5.24. ALGEBRA OF DRABRH GRAY CODE PATTERN

Here algebra is used as a tool to describe the pattern of luminosity (gray code), with the use of set theory, in the Double Tokomak Collider (DTC).

Bel(A1 ∪ A2 ∪ A3) = m(A1) + m(A2) + m(A3) + m(A1 ∪ A2) + m(A2 ∪ A3) + m(A3 ∪ A1) + m(A1 ∪ A2 ∪ A3)        (1)

which is due to the Dempster–Shafer theory of evidence (Shafer, 1976), taking values in the closed interval [0,1]. Let the frame of discernment θ be the set of all elementary propositions. The mass function m : 2^θ → [0,1] satisfies ∑ m(A) = 1 over all A contained in θ, with m(∅) = 0.

For the DTC the frame of discernment is θ = (C1, C2), with power set 2^θ = (∅, C1, C2, C1 ∪ C2). Denote the mass functions of the two images by m1 and m2 respectively: Image 1 (Im 1) has mass function m1(C1), m1(C2), m1(C1 ∪ C2) and Image 2 (Im 2) has mass function m2(C1), m2(C2), m2(C1 ∪ C2), such that

mi(C1) + mi(C2) + mi(C1 ∪ C2) = 1, where i = 1, 2        (2)

bel(C1) = mi(C1) and bel(C2) = mi(C2)
bel(C1 ∪ C2) = mi(C1) + mi(C2) + mi(C1 ∪ C2) = 1        (3)

Computer-based pixel luminosity has 256 levels, i.e. 0–255, over a specified gray level. Image 1 spreads over 20–190. Here we divide the range into three parts: (i) the 0–130 range, (ii) the 130–190 range, (iii) the 190–255 range. The specified gray region is distributed over the region (N = N × N):

a) For pixels < 130, 1 = j ≠ i: m1(C1) = 0, m1(C2) = 1, m1(C1 ∪ C2) = 0; and
b) For pixels > 190, i = j = 1: m1(C1) = 1, m1(C2) = 0, m1(C1 ∪ C2) = 0;
c) For pixels between 130 and 190: m1(C1) = 0.49, m1(C2) = 0.3, m1(C1 ∪ C2) = 0.193.

Similarly, Image 2 is distributed over pixels:

a) For pixels < 90, j ≠ i = 1: m2(C1) = 1, m2(C2) = 0, m2(C1 ∪ C2) = 0; and
b) For pixels between 90 and 135: m2(C2) = 0, m2(C1) = 0.552, m2(C1 ∪ C2) = 0.498;
c) For pixels between 135 and 140: m2(C1) = M′/N′, m2(C2) = M1′/N1′, m2(C1 ∪ C2) = 1 − M′/N′ − M1′/N1′; thus m2(C1) = 0.01, m2(C2) = 0, m2(C1 ∪ C2) = 0.99;
d) For pixels between 140 and 197: m2(C1) = 0.03, m2(C2) = 0.07, m2(C1 ∪ C2) = 0.9; and
e) For pixels between 197 and 255: m2(C1) = 0, m2(C2) = 0.9, m2(C1 ∪ C2) = 0.7.
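The bookkeeping in (2)-(3), and the fusion of the two images' masses, can be sketched for this two-class frame. The combination step shown is Dempster's rule from the Dempster–Shafer framework cited above (an assumption that this is the intended fusion step); the example triples are pixel-range assignments from the text that satisfy the normalization (2):

```python
# Mass triples over the frame {C1, C2} are written (m(C1), m(C2), m(C1 u C2)).

def bel(m):
    """bel(C1), bel(C2) and bel(C1 u C2) as in (3)."""
    c1, c2, both = m
    return c1, c2, c1 + c2 + both

def combine(a, b):
    """Dempster's rule of combination on the two-singleton frame."""
    conflict = a[0] * b[1] + a[1] * b[0]          # C1 vs C2 disagreement
    k = 1.0 - conflict                            # normalization factor
    c1 = (a[0] * b[0] + a[0] * b[2] + a[2] * b[0]) / k
    c2 = (a[1] * b[1] + a[1] * b[2] + a[2] * b[1]) / k
    return c1, c2, (a[2] * b[2]) / k

m_img1 = (0.0, 1.0, 0.0)      # Image 1, pixels below 130
m_img2 = (0.03, 0.07, 0.90)   # Image 2, pixels between 140 and 197
fused = combine(m_img1, m_img2)
assert abs(sum(fused) - 1.0) < 1e-12      # fused masses still satisfy (2)
assert abs(bel(fused)[2] - 1.0) < 1e-12   # bel(C1 u C2) = 1, as in (3)
```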

Now using the dynamics


dM/dt = q + sM[1 − (M/K)] − r f(M)        (4)

where M = density, q = constant, sM[1 − (M/K)] → Fisher's growth term, f(M) = function of M, r = constant.

Using the rescaled time frame t = (s − q)t, this becomes

dm/dt = v + m(1 − um) − r[m/(1 + m)]        (5)

with r_t = r + σH_t, where r_t → fluctuation of r, H_t → statistical perturbation and σ → standard deviation.

The probability function is given by

P(m) = exp{(2/σ²)[−v/m + m(v + 2 − u − r) − (m²/2)(1 − 2u) − um³/3 + (2v − 1 − r − σ²) log_e m + σ² log_e(1 + m)]}        (6)

For the population density, σ lies between 0 and 3, i.e. 0 ≤ σ ≤ 3. Phase transition occurs as follows, due to Glansdorff–Prigogine (1971):

(4.3) 0.5 → 0.854 → 2.83 → 0.46
Macro process → Micro process, with increase of σ.

Stability is obtained at the threshold value σ ≤ 2.83.
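The bounded behaviour below this threshold can be illustrated by an Euler–Maruyama simulation of (5) with the fluctuating rate r_t = r + σH_t. This is a minimal sketch; all parameter values are assumptions chosen to sit well below the threshold σ ≤ 2.83:

```python
import math
import random

# Euler-Maruyama simulation of (5) with the fluctuating rate r_t = r + sigma*H_t.
random.seed(0)
v, u, r, sigma = 0.1, 1.0, 0.5, 0.5   # assumed parameter values
m, dt = 0.5, 1e-3

for _ in range(20000):                 # integrate to t = 20
    dW = random.gauss(0.0, 1.0) * math.sqrt(dt)      # increment of H_t
    drift = v + m * (1.0 - u * m) - r * m / (1.0 + m)
    m += drift * dt - sigma * (m / (1.0 + m)) * dW   # noise enters through r_t
    m = max(m, 0.0)                    # density cannot go negative

assert 0.0 < m < 2.0                   # trajectory stays bounded for small sigma
```

The density fluctuates around the deterministic equilibrium of (5) instead of diverging, which is the qualitative content of the stability statement above.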

References:
1. Shafer, G. (1976): A Mathematical Theory of Evidence, Princeton University Press.
2. Glansdorff, P. and Prigogine, I. (1971): Thermodynamic Theory of Structure, Stability and Fluctuations, Wiley, London.

