
Surveillance or Research?

Surveillance is not the only process that can provide information to inform public health action. The results
of research can also inform public health action. Sometimes the distinction between surveillance,
particularly surveillance of (or for) emerging infections, and research can be difficult to define. This can
have important implications, particularly where different ethical, legal (usually data protection) and funding
rules apply to research compared to surveillance. As a general rule, the key distinction is that surveillance
should always be justified by, and seen as an integral component of, ongoing established prevention and
control programmes.
There are a number of other dimensions on which research and surveillance can be compared, as
outlined in Table 1. Although the distinction between research and surveillance is not always clear-cut for
some of the criteria proposed in the table, and sometimes research or surveillance will not fit all of
the criteria suggested, the table provides a framework for assessing whether a problem should be addressed
through surveillance or research.
Table 1. Comparing research and surveillance

Research: Creates new knowledge (about what works and what doesn't).
Surveillance: May contribute to knowledge (understanding) but is primarily about measuring what is already known.

Research: Is based on a hypothesis.
Surveillance: Not usually based on a specific hypothesis (other than that the frequency or distribution of that which is the subject of surveillance may change).

Research: May involve experiments, new interventions or purposive allocation of subjects to different interventions.
Surveillance: Never involves experiments or purposive allocation of subjects to different interventions.

Research: Is based on a scientifically valid sample size (although this may not apply to pilot studies).
Surveillance: Rarely based on a scientifically defined sample size.

Research: Extensive statistical analysis of data is routine.
Surveillance: Statistical analysis usually limited to simple measures of trend or distribution.

Research: Results are generalisable and hence publishable.
Surveillance: Results may be generalisable.

Research: Responsibility to act on findings is not necessarily clear.
Surveillance: Responsibility to act should always be clear.

Research: Findings may result in the application of new clinical or public health practice.
Surveillance: Findings may result in application of established clinical or public health practice.

1. Box and whisker plot



In descriptive statistics, a box plot or boxplot is a convenient way of graphically depicting
groups of numerical data through their quartiles. Box plots may also have lines extending
vertically from the boxes (whiskers) indicating variability outside the upper and lower quartiles,
hence the terms box-and-whisker plot and box-and-whisker diagram. Outliers may be plotted
as individual points.
Box plots are non-parametric: they display variation in samples of a statistical population
without making any assumptions about the underlying statistical distribution. The spacings between
the different parts of the box indicate the degree of dispersion (spread) and skewness in the data,
and show outliers. In addition to the points themselves, they allow one to visually estimate
various L-estimators, notably the interquartile range, midhinge, range, mid-range, and trimean.
Boxplots can be drawn either horizontally or vertically.

Statistics assumes that your data points (the numbers in your list) are clustered around some central
value. The "box" in the box-and-whisker plot contains, and thereby highlights, the middle half of these
data points.
To create a box-and-whisker plot, you start by ordering your data (putting the values in numerical order), if
they aren't ordered already. Then you find the median of your data. The median divides the data into two
halves. To divide the data into quarters, you then find the medians of these two halves. Note: If you have
an even number of values, so the first median was the average of the two middle values, then you include
the middle values in your sub-median computations. If you have an odd number of values, so the first
median was an actual data point, then you do not include that value in your sub-median computations.
That is, to find the sub-medians, you're only looking at the values that haven't yet been used.
You have three points: the first middle point (the median), and the middle points of the two halves (what I
call the "sub-medians"). These three points divide the entire data set into quarters, called "quartiles". The
top point of each quartile has a name, being a "Q" followed by the number of the quarter. So the top point
of the first quarter of the data points is "Q1", and so forth. Note that Q1 is also the middle number for the
first half of the list, Q2 is also the middle number for the whole list, Q3 is the middle number for the second
half of the list, and Q4 is the largest value in the list.

Once you have these three points, Q1, Q2, and Q3, you have all you need in order to draw a simple box-and-whisker plot. Here's an example of how it works.

Draw a box-and-whisker plot for the following data set:


4.3, 5.1, 3.9, 4.5, 4.4, 4.9, 5.0, 4.7, 4.1, 4.6, 4.4, 4.3, 4.8, 4.4, 4.2, 4.5, 4.4

My first step is to order the set. This gives me:

3.9, 4.1, 4.2, 4.3, 4.3, 4.4, 4.4, 4.4, 4.4, 4.5, 4.5, 4.6, 4.7, 4.8, 4.9, 5.0, 5.1
The first number I need is the median of the entire set. Since there are seventeen values in this
list, I need the ninth value:

3.9, 4.1, 4.2, 4.3, 4.3, 4.4, 4.4, 4.4, 4.4, 4.5, 4.5, 4.6, 4.7, 4.8, 4.9, 5.0, 5.1
The median is

Q2 = 4.4.

The next two numbers I need are the medians of the two halves. Since I used the "4.4" in the
middle of the list, I can't re-use it, so my two remaining data sets are:

3.9, 4.1, 4.2, 4.3, 4.3, 4.4, 4.4, 4.4 and 4.5, 4.5, 4.6, 4.7, 4.8, 4.9, 5.0, 5.1
The first half has eight values, so the median is the average of the middle two:

Q1 = (4.3 + 4.3)/2 = 4.3


The median of the second half is:


Q3 = (4.7 + 4.8)/2 = 4.75


Since my list values have one decimal place and range from 3.9 to 5.1, I won't use a scale of, say,
zero to ten, marked off by ones. Instead, I'll draw a number line from 3.5 to 5.5, and mark off by tenths.

Now I'll mark off the minimum and maximum values, and Q1, Q2, and Q3:

The "box" part of the plot goes from Q1 to Q3:

And then the "whiskers" are drawn to the endpoints:

By the way, box-and-whisker plots don't have to be drawn horizontally as I did above; they can be vertical,
too.
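As a quick check on the worked example, here is a minimal Python sketch that reproduces the hand computation of Q1, Q2 and Q3 and then lets matplotlib draw the plot. The data are the 17 values from the example; note that matplotlib's default quartile rule can differ slightly from the hand method shown above, and the median-splitting below handles the odd-count case used in this example.

```python
# Minimal sketch: median and "sub-medians" for the worked example, then a boxplot.
import matplotlib.pyplot as plt

data = sorted([4.3, 5.1, 3.9, 4.5, 4.4, 4.9, 5.0, 4.7, 4.1,
               4.6, 4.4, 4.3, 4.8, 4.4, 4.2, 4.5, 4.4])

def median(values):
    """Middle value, or the average of the two middle values."""
    n = len(values)
    mid = n // 2
    return values[mid] if n % 2 else (values[mid - 1] + values[mid]) / 2

n = len(data)                       # 17 values, so the median is an actual data point
q2 = median(data)                   # 4.4
lower_half = data[:n // 2]          # first 8 values (median itself excluded)
upper_half = data[n // 2 + 1:]      # last 8 values
q1 = median(lower_half)             # 4.3
q3 = median(upper_half)             # 4.75
print(q1, q2, q3)

plt.boxplot(data, vert=False)       # matplotlib draws box, whiskers and outliers
plt.title("Box-and-whisker plot of the example data")
plt.show()
```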

2. Validity and reliability


In logic, an argument is valid if and only if its conclusion is logically entailed by its
premises. A formula is valid if and only if it is true under every interpretation, and
an argument form (or schema) is valid if and only if every argument of that logical
form is valid.

EXPLORING RELIABILITY IN ACADEMIC ASSESSMENT


Written by Colin Phelan and Julie Wren, Graduate Assistants, UNI Office of
Academic Assessment (2005-06)

Reliability is the degree to which an assessment tool produces stable and


consistent results.
Types of Reliability

1. Test-retest reliability is a measure of reliability obtained by administering the


same test twice over a period of time to a group of individuals. The scores from
Time 1 and Time 2 can then be correlated in order to evaluate the test for stability
over time.
Example: A test designed to assess student learning in psychology could be given
to a group of students twice, with the second administration perhaps coming a week
after the first. The obtained correlation coefficient would indicate the stability of the
scores.

2. Parallel forms reliability is a measure of reliability obtained by administering


different versions of an assessment tool (both versions must contain items that
probe the same construct, skill, knowledge base, etc.) to the same group of
individuals. The scores from the two versions can then be correlated in order to
evaluate the consistency of results across alternate versions.
Example: If you wanted to evaluate the reliability of a critical thinking assessment,
you might create a large set of items that all pertain to critical thinking and then
randomly split the questions up into two sets, which would represent the parallel
forms.
3. Inter-rater reliability is a measure of reliability used to assess the degree to which
different judges or raters agree in their assessment decisions. Inter-rater reliability is

useful because human observers will not necessarily interpret answers the same way;
raters may disagree as to how well certain responses or material demonstrate
knowledge of the construct or skill being assessed.
Example: Inter-rater reliability might be employed when different judges are
evaluating the degree to which art portfolios meet certain standards. Inter-rater
reliability is especially useful when judgments can be considered relatively
subjective. Thus, the use of this type of reliability would probably be more likely
when evaluating artwork as opposed to math problems.
4. Internal consistency reliability is a measure of reliability used to evaluate the degree
to which different test items that probe the same construct produce similar results.

A. Average inter-item correlation is a subtype of internal consistency


reliability. It is obtained by taking all of the items on a test that probe the
same construct (e.g., reading comprehension), determining the correlation
coefficient for each pair of items, and finally taking the average of all of
these correlation coefficients. This final step yields the average inter-item
correlation.

B. Split-half reliability is another subtype of internal consistency reliability.


The process of obtaining split-half reliability is begun by splitting in half
all items of a test that are intended to probe the same area of knowledge
(e.g., World War II) in order to form two sets of items. The entire test is
administered to a group of individuals, the total score for each set is
computed, and finally the split-half reliability is obtained by determining the
correlation between the two total set scores.
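The two correlation-based procedures described above (test-retest and split-half reliability) can be sketched in a few lines of Python. All scores below are invented purely for illustration; the Spearman-Brown step-up used for the split-half estimate is the standard correction for halving a test, not something specific to this source.

```python
# Minimal sketch of test-retest and split-half reliability with invented scores.
import numpy as np

# Test-retest: the same 8 students tested twice, one week apart.
time1 = np.array([12, 15, 9, 20, 17, 11, 14, 18])
time2 = np.array([13, 14, 10, 19, 18, 10, 15, 17])
test_retest_r = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest reliability: {test_retest_r:.2f}")

# Split-half: item-level scores (rows = students, columns = items on one construct).
items = np.array([
    [1, 0, 1, 1, 0, 1],
    [1, 1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0, 1],
    [1, 1, 0, 1, 1, 1],
    [0, 1, 1, 0, 1, 0],
])
half_a = items[:, ::2].sum(axis=1)    # totals on items 1, 3, 5
half_b = items[:, 1::2].sum(axis=1)   # totals on items 2, 4, 6
r_half = np.corrcoef(half_a, half_b)[0, 1]
split_half = 2 * r_half / (1 + r_half)   # Spearman-Brown step-up to full test length
print(f"split-half reliability: {split_half:.2f}")
```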

Validity refers to how well a test measures what it is purported to measure.


Why is it necessary?
While reliability is necessary, it alone is not sufficient: a measure must also be valid. For
example, if your scale is off by 5 lbs, it reads your weight every day with an excess of 5 lbs.
The scale is reliable because it consistently reports the same weight every day, but it is not
valid because it adds 5 lbs to your true weight. It is not a valid measure of your weight.
Types of Validity
1. Face Validity ascertains that the measure appears to be assessing the intended
construct under study. The stakeholders can easily assess face validity. Although this is

not a very scientific type of validity, it may be an essential component in enlisting


motivation of stakeholders. If the stakeholders do not believe the measure is an
accurate assessment of the ability, they may become disengaged with the task.
Example: If a measure of art appreciation is created, all of the items should be related to
the different components and types of art. If the questions are about historical time
periods, with no reference to any artistic movement, stakeholders may not be motivated
to give their best effort or invest in this measure because they do not believe it is a true
assessment of art appreciation.
2. Construct Validity is used to ensure that the measure is actually measuring what it is
intended to measure (i.e., the construct), and not other variables. Using a panel of
experts familiar with the construct is a way in which this type of validity can be
assessed. The experts can examine the items and decide what that specific item is
intended to measure. Students can be involved in this process to obtain their feedback.
Example: A women's studies program may design a cumulative assessment of learning
throughout the major. If the questions are written with complicated wording and
phrasing, the test can inadvertently become a test of reading comprehension rather than
a test of women's studies. It is important that the measure is actually assessing the
intended construct, rather than an extraneous factor.
3. Criterion-Related Validity is used to predict future or current performance - it
correlates test results with another criterion of interest.

Example: Suppose a physics program designed a measure to assess cumulative student
learning throughout the major. The new measure could be correlated with a
standardized measure of ability in this discipline, such as an ETS field test or the GRE
subject test. The higher the correlation between the established measure and the new
measure, the more faith stakeholders can have in the new assessment tool.
4. Formative Validity, when applied to outcomes assessment, is used to assess how
well a measure is able to provide information to help improve the program under study.
Example: When designing a rubric for history, one could assess students' knowledge
across the discipline. If the measure can provide information that students are lacking
knowledge in a certain area, for instance the Civil Rights Movement, then that
assessment tool is providing meaningful information that can be used to improve the
course or program requirements.

5. Sampling Validity (similar to content validity) ensures that the measure covers the
broad range of areas within the concept under study. Not everything can be covered,
so items need to be sampled from all of the domains. This may need to be completed
using a panel of experts to ensure that the content area is adequately sampled.
Additionally, a panel can help limit expert bias (i.e. a test reflecting what an individual
personally feels are the most important or relevant areas).
Example: When designing an assessment of learning in the theatre department, it
would not be sufficient to only cover issues related to acting. Other areas of theatre,
such as lighting, sound, and the functions of stage managers, should all be included. The
assessment should reflect the content area in its entirety.
What are some ways to improve validity?
1. Make sure your goals and objectives are clearly defined and operationalized.
Expectations of students should be written down.
2. Match your assessment measure to your goals and objectives. Additionally, have
the test reviewed by faculty at other schools to obtain feedback from an outside
party who is less invested in the instrument.
3. Get students involved; have the students look over the assessment for
troublesome wording, or other difficulties.
4. If possible, compare your measure with other measures, or data that may be
available.

3. Skewed curves: where the mean, median and mode lie


In probability theory and statistics, skewness is a measure of the asymmetry of the
probability distribution of a real-valued random variable about its mean. The skewness
value can be positive or negative, or even undefined.
The qualitative interpretation of the skew is complicated. For a unimodal distribution,
negative skew indicates that the tail on the left side of the probability density function is
longer or fatter than the right side; the skewness value does not distinguish these shapes. Conversely,
positive skew indicates that the tail on the right side is longer or fatter than the left side.
In cases where one tail is long but the other tail is fat, skewness does not obey a simple
rule. For example, a zero value indicates that the tails on both sides of the mean balance
out, which is the case both for a symmetric distribution, and for asymmetric distributions
where the asymmetries even out, such as one tail being long but thin, and the other being
short but fat. Further, in multimodal distributions and discrete distributions, skewness is
also difficult to interpret. Importantly, the skewness does not determine the relationship
of mean and median.
Skewed curves are asymmetrical curves; their skewness is caused by "outliers." (An
outlier is a number that's much smaller or much larger than all other numbers in a data
set.) One or just a few outliers in a data set can cause these curves to have a "tail." Data is
not normally distributed in skewed curves.
USMLE high-yield concepts include knowing, for example, whether the mean is less than or
more than the mode when a curve is skewed positively, or what happens to the mean,
median and mode if the largest number in a data set is removed (i.e., if an outlier is
removed).
If you can count 1, 2, 3, then the USMLE biostatistics workbook offers the easiest
3 steps imaginable so you will NEVER miss a question about skewed curves.

Core concepts of skewed curves


Skewed curves are asymmetrical curves: they "skew" negatively (the tail
points left) or positively (the tail points right). Skewed curves NEVER have
the mean, median and mode in the same location. This is distinctly different
from the bell curve, which is symmetrical.

Also, a negatively skewed curve can consist entirely of positive numbers, and a positively skewed
curve can consist entirely of negative numbers. "Positive" and "negative" give you the direction
of the curve's tail and the direction that numbers are moving on the x-axis.

Negative skew: the tail points in the negative direction; the numbers on the x-axis under the
tail are less than the numbers under the hump. Negatively skewed curves do NOT necessarily
contain negative numbers.

Positive skew: the tail points in the positive direction; the numbers on the x-axis under the
tail are greater than the numbers under the hump. Positively skewed curves do NOT necessarily
contain positive numbers (as in the sketch below).
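A minimal sketch, using an invented data set, of the point made above: a single outlier in the right tail pulls the mean above the median while the mode stays put, and removing the outlier mainly moves the mean.

```python
# Minimal sketch: one large outlier skews the data positively (right tail).
from statistics import mean, median, mode

data = [2, 3, 3, 3, 4, 4, 5, 20]                      # 20 is the outlier
print(mean(data), median(data), mode(data))           # 5.5, 3.5, 3 -> mean > median > mode here

trimmed = data[:-1]                                   # remove the outlier
print(mean(trimmed), median(trimmed), mode(trimmed))  # ~3.43, 3, 3 -> mean drops toward the median
```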

13. Correlation, types of correlation and its interpretation

What is a Correlation?

Thus far we've covered the key descriptive statistics (the mean, median, mode, and standard
deviation) and we've learned how to test the difference between means. But often
we want to know how two things (usually called "variables"
because they vary from high to low) are related to each other.
For example, we might want to know whether reading scores
are related to math scores, i.e., whether students who have
high reading scores also have high math scores, and vice
versa. The statistical technique for determining the degree to
which two variables are related (i.e., the degree to which they
co-vary) is, not surprisingly, called correlation.
There are several different types of correlation, and we'll talk
about them later, but in this lesson we're going to spend most
of the time on the most commonly used type of correlation: the
Pearson Product Moment Correlation. This correlation,
signified by the symbol r, ranges from -1.00 to +1.00. A
correlation of 1.00, whether it's positive or negative, is a
perfect correlation. It means that as scores on one of the two
variables increase or decrease, the scores on the other
variable increase or decrease by the same magnitude;
that is something you'll probably never see in the real world. A
correlation of 0 means there's no relationship between the two
variables, i.e., when scores on one of the variables go up,
scores on the other variable may go up, down, or whatever.
You'll see a lot of those.
Thus, a correlation of .8 or .9 is regarded as a high correlation,
i.e., there is a very close relationship between scores on one of
the variables and the scores on the other. And correlations of .2
or .3 are regarded as low correlations, i.e., there is some
relationship between the two variables, but it's a weak one.
Knowing people's score on one variable wouldn't allow you to
predict their score on the other.
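A minimal sketch of the Pearson product-moment correlation described above, using invented reading and math scores; r is computed both with NumPy and directly from the definition (covariance divided by the product of the standard deviations).

```python
# Minimal sketch: Pearson r for two invented score lists.
import numpy as np

reading = np.array([55, 62, 70, 74, 80, 85, 91])
math    = np.array([58, 60, 68, 77, 79, 88, 90])

r = np.corrcoef(reading, math)[0, 1]
print(f"r = {r:.2f}")    # close to +1: high reading scores go with high math scores

# Equivalent computation from the definition:
x, y = reading - reading.mean(), math - math.mean()
r_manual = (x * y).sum() / np.sqrt((x ** 2).sum() * (y ** 2).sum())
print(f"r (by hand) = {r_manual:.2f}")
```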

14. Population pyramid, baby boom: what advantages to baby boomers are currently available? Echo boom?
A population pyramid, also called an age pyramid or age picture diagram, is a graphical
illustration that shows the distribution of various age groups in a population (typically that of
a country or region of the world), which forms the shape of a pyramid when the population is
growing.[1] It is also used in ecology to determine the overall age distribution of a population;
an indication of the reproductive capabilities and likelihood of the continuation of a species.
It typically consists of two back-to-back bar graphs, with the population plotted on the X-axis
and age on the Y-axis, one showing the number of males and one showing females in a
particular population in five-year age groups (also called cohorts). Males are conventionally
shown on the left and females on the right, and they may be measured by raw number or as a
percentage of the total population.
Population pyramids are often viewed as the most effective way to graphically depict the age
and sex distribution of a population, partly because of the very clear image these pyramids
present.[2]
A great deal of information about the population broken down by age and sex can be read
from a population pyramid, and this can shed light on the extent of development and other
aspects of the population. A population pyramid also tells how many people of each age
range live in the area. There tends to be more females than males in the older age groups, due
to females' longer life expectancy.
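A minimal plotting sketch of the construction just described (two back-to-back horizontal bar charts, males on the left and females on the right, age groups on the y-axis); the cohort counts below are invented.

```python
# Minimal sketch: a population pyramid as two back-to-back horizontal bar charts.
import matplotlib.pyplot as plt

age_groups = ["0-4", "5-9", "10-14", "15-19", "20-24", "25-29"]
males   = [520, 500, 470, 450, 430, 400]   # invented counts, thousands
females = [500, 485, 460, 445, 440, 415]

fig, ax = plt.subplots()
ax.barh(age_groups, [-m for m in males], label="Males")   # negative x = left side
ax.barh(age_groups, females, label="Females")             # positive x = right side
ax.set_xlabel("Population (thousands)")
ax.set_ylabel("Age group")
ax.legend()
plt.show()
```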

A baby boom is any period marked by a greatly increased birth rate. This
demographic phenomenon is usually ascribed within certain geographical bounds.
People born during such a period are often called baby boomers; however, some
experts distinguish between those born during such demographic baby booms and
those who identify with the overlapping cultural generations. Conventional wisdom
states that baby booms signify good times and periods of general economic growth
and stability; however, in circumstances where baby booms lead to a very
large number of children per family unit, as is the case in lower-income regions
of the world, the outcome may be different. One well-known baby boom occurred right after
World War II, during the early Cold War.

15. Population momentum: what is it, and why is it difficult to control?


Population Momentum Across the Demographic Transition
A typical consequence of the demographic transition (a population's shift from high mortality
and high fertility to low mortality and low fertility) is a period of robust population growth. This
growth occurs once survival has improved but before fertility has fallen to or below replacement
level, so that the birth rate substantially exceeds the death rate. During the second half of the
twentieth century, the world experienced unprecedented population growth as developing
countries underwent a demographic transition. It was during this period that Nathan Keyfitz
demonstrated how an immediate drop to replacement fertility in high-fertility populations could
still result in decades of population growth. Building on work by Paul Vincent (1945), he called
this outcome population momentum. Keyfitz wrote, "The phenomenon occurs because a
history of high fertility has resulted in a high proportion of women in the reproductive ages, and
these ensure high crude birth rates long after the age-specific rates have dropped" (Keyfitz 1971:
71).
For societies today that have not yet completed their demographic transitions, population
momentum is still expected to contribute significantly to future growth, as relatively large
cohorts of children enter their reproductive years and bear children. John Bongaarts (1994, 1999)
calculated that population momentum will account for about half of the developing world's
projected twenty-first-century population growth. However, even though momentum is a useful
concept precisely because of the non-stationary age structures that exist in populations in the
midst of demographic transition, no research has examined trends in momentum or documented
the highly regular pattern of population momentum across the demographic transition. This
article sets out to do so.
We describe the arc of population momentum over time in 16 populations: five in the now-developed
world and 11 in the developing world. Because population momentum identifies the
cumulative future contribution of today's age distribution to a population's growth and size,
adding momentum to our understanding of demographic transition means that we do not treat
changes in age distribution merely as a consequence of demographic transition, as is usually the
case (Lee 2003). Instead, we also illustrate the impact that these age-distribution changes have
themselves had in producing key features of the demographic transition. Age composition exerts
an independent influence on crude birth and crude death rates, so that for given vital rate
schedules, population growth rates are typically highest in those populations with a middle-heavy
age distribution. During demographic transition (or even during a demographic crisis),
any change in a population's age distribution will have repercussions for future population
growth potential and future population size.
We also trace the course of two recently defined measures of population momentum.
Espenshade, Olgiati, and Levin (2011) decompose total momentum into two constituent and
multiplicative parts: stable momentum measures deviations between the stable age distribution
implied by the population's mortality and fertility and the stationary age distribution implied by
the population's death rates; and nonstable momentum measures deviations between the
observed population age distribution and the implied stable age distribution.
To understand the usefulness of stable and nonstable momentum, consider the case of a
population with unchanging vital rates. Over time, stable momentum remains constant, as both
the stable age distribution and the stationary age distribution are unchanging. In this sense we
may consider stable momentum to be the permanent component of population momentum; it
persists as long as mortality and fertility do not change. In contrast, nonstable momentum in this
population gradually becomes weaker and eventually vanishes as the population's age
distribution conforms to the stable age distribution. In this sense we may consider nonstable
momentum to be the temporary or transitory component of population momentum. Of course,
most populations exhibit some year-to-year fluctuation in fertility and mortality, so in empirical
analyses we commonly observe concurrent changes in both the permanent and the temporary
components of momentum. Nevertheless, how overall momentum is composed and what part is
contributed by stable versus nonstable momentum have implications for future population
growth or decline.1
In showing patterns over time in total population momentum, stable momentum, and nonstable
momentum, we pursue three distinct ends. First and most simply, we trace how momentum
dynamics have historically unfolded, not only across demographic transitions but also in the
midst of fertility swings and other demographic cycles. This is a straightforward task that has not
yet been undertaken. Second, we demonstrate some previously ignored empirical regularities of
the demographic transition, as it has occurred around the globe and at various times over the last
three centuries. Third, although population momentum is by definition a static measure, our
results suggest that momentum can also be considered a dynamic process. Across the
demographic transition, momentum typically increases and then decreases as survival first
improves and fertility rates later fall. This dynamic view of momentum is further supported by
trends in stable and nonstable momentum. A change in stable momentum induced by a change in
fertility will initiate a demographic chain reaction that affects nonstable momentum both
immediately and subsequently.
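A toy numeric illustration of population momentum, not the authors' method: a three-age-class Leslie matrix is set to exact replacement fertility (net reproduction rate = 1), yet a young-heavy starting population still grows before levelling off; the ratio of the eventual to the initial total is the momentum. All rates and the starting age structure are invented.

```python
# Toy sketch of Keyfitz-style population momentum with a 3-age-class Leslie matrix.
import numpy as np

s1, s2 = 0.95, 0.90                 # survival probabilities between age classes
f2 = 0.80                           # fertility of age class 2
f3 = (1 - f2 * s1) / (s1 * s2)      # chosen so NRR = f2*s1 + f3*s1*s2 = 1 (replacement)

leslie = np.array([[0.0, f2,  f3 ],
                   [s1,  0.0, 0.0],
                   [0.0, s2,  0.0]])

pop = np.array([60.0, 25.0, 15.0])  # young-heavy age distribution, total = 100
initial_total = pop.sum()
for _ in range(200):                # project far enough to reach the stationary structure
    pop = leslie @ pop

print(f"momentum ~ {pop.sum() / initial_total:.2f}")  # > 1: growth is 'built in' by the young structure
```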

The demographic transition


Historical roots
Demographic transition first occurred in Europe: in parts of the continent, death rates began a
steady decline at some point during the seventeenth or eighteenth century. Because the
transitions occurred before the age of reliable vital statistics, the causes of these earliest mortality
declines are unclear. By the early nineteenth century, as industrialization took hold and paved the
way for even greater advances in health, mortality crises became less common in England,
France, and other parts of northern and western Europe (Vallin 1991; Livi-Bacci 2007). Child
survival was improving, and life expectancy at birth was inching upward.
As a result of these early mortality declines, the population of Europe began a long period of
robust growth, also beginning sometime in the seventeenth or eighteenth century. Although death

rates were declining, birth rates remained more or less stable, or at least they declined much
more slowly, so that year after year, for decades if not centuries, the number of births exceeded
the number of deaths by a substantial margin. In 1700 the population of Europe was an estimated
30 million. By 1900 it had more than quadrupled to 127 million (Livi-Bacci 2007). Europeans
also migrated to North America and Australia by the millions. The population continued to grow
despite this out-migration, since most of Europe did not experience substantial declines in the
number of children per woman until sometime in the late nineteenth or early twentieth century.
Fertility reached replacement in many parts of Europe around the mid-twentieth century, and
since then has fallen well below replacement in much of the continent.
Demographic transition has occurred much faster in the developing world than it did in Europe.
In 1950-55, for example, life expectancy at birth in India was about 38 years for both sexes
combined; 15 years later, life expectancy was nearly 47 (United Nations 2009b). Over the same
period in Kenya, life expectancy rose from 42 to 51 years, while in Mexico it rose from 51 to 60
(United Nations 2009b). This rapid mortality decline, brought about in part by technology
adopted from the West and accompanied initially by little or no decrease in fertility, led not to the
long period of steady population expansion that Europe experienced starting more than a century
earlier, but rather to rapid population growth, especially in the third quarter of the twentieth
century. Following World War II, developing countries grew at an average annual rate of more
than 2 percent, with some countries posting yearly population gains of more than 3 or even 4
percent, as in Ivory Coast, Jordan, and Libya (United Nations 2009b).
Unlike in Europe, rapid fertility decline often followed within just a few decades. Although much
of sub-Saharan Africa still has fertility well above replacement, most of the rest of the world
appears to have completed the demographic transition. Today every country in East Asia has
sub-replacement fertility, and even in countries like Bangladesh and Indonesia, once the cause of
much hand-wringing among population-control advocates (Connelly 2008: 11, 305), fertility is
now barely above replacement (United Nations 2009b). The concept of a demographic transition
therefore describes developing-world experience about as well as it seems to have portrayed
earlier developed-world experience. The major differences between these two situations are the
speed of mortality decline, the speed of fertility decline, and, as has received most attention both
then and now, the rate of population growth. Today it is very unusual to see the kind of
population doubling times (in some cases less than 20 years) that were so alarming to
policymakers and scholars throughout the 1960s and 1970s (Ehrlich 1968).

16. Study designs: which study designs are best for which studies?

17. ROC curves: why are they used, how is the AUC determined, and what does 1 - specificity mean?


In statistics, a receiver operating characteristic (ROC), or ROC curve, is a graphical plot
that illustrates the performance of a binary classifier system as its discrimination threshold is
varied. The curve is created by plotting the true positive rate against the false positive rate at
various threshold settings. (The true-positive rate is also known as sensitivity in biomedical
informatics, or recall in machine learning. The false-positive rate is also known as the fall-out
and can be calculated as 1 - specificity.) The ROC curve is thus the sensitivity as a function
of fall-out. In general, if the probability distributions for both detection and false alarm are
known, the ROC curve can be generated by plotting the cumulative distribution function
(area under the probability distribution from minus infinity to the discrimination threshold)
of the detection probability on the y-axis versus the cumulative distribution function of the
false-alarm probability on the x-axis.

ROC analysis provides tools to select possibly optimal models and to discard suboptimal
ones independently from (and prior to specifying) the cost context or the class
distribution. ROC analysis is related in a direct and natural way to cost/benefit analysis of
diagnostic decision making.
The ROC curve was first developed by electrical engineers and radar engineers during
World War II for detecting enemy objects in battlefields and was soon introduced to
psychology to account for perceptual detection of stimuli. ROC analysis since then has
been used in medicine, radiology, biometrics, and other areas for many decades and is
increasingly used in machine learning and data mining research.
The ROC is also known as a relative operating characteristic curve, because it is a
comparison of two operating characteristics (TPR and FPR) as the criterion changes.[1]
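A minimal sketch of how an ROC curve and its AUC can be computed by sweeping the decision threshold; the test scores and disease labels below are invented, and the AUC is obtained with the trapezoidal rule over the (false positive rate, true positive rate) points.

```python
# Minimal sketch: ROC points at every threshold, then AUC by the trapezoidal rule.
import numpy as np

scores = np.array([0.95, 0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2, 0.1])
labels = np.array([1,    1,   1,   0,   1,   0,    0,   1,   0,   0  ])  # 1 = diseased

thresholds = np.unique(scores)[::-1]          # sweep from high to low
tpr = [((scores >= t) & (labels == 1)).sum() / (labels == 1).sum() for t in thresholds]
fpr = [((scores >= t) & (labels == 0)).sum() / (labels == 0).sum() for t in thresholds]

# Anchor the curve at (0, 0); the lowest threshold already reaches (1, 1).
fpr = np.concatenate(([0.0], fpr))
tpr = np.concatenate(([0.0], tpr))

auc = np.sum((fpr[1:] - fpr[:-1]) * (tpr[1:] + tpr[:-1]) / 2)   # trapezoidal rule
print(f"AUC = {auc:.2f}")   # 0.84 here; 1.0 = perfect test, 0.5 = no better than chance
```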

18. Type I and II errors: how to minimize them?

A Type I error is often referred to as a 'false positive': it is the incorrect rejection of the null
hypothesis in favor of the alternative. Consider an HIV test: the null hypothesis refers to the
natural state of things, stating that the patient is not HIV positive, while the alternative
hypothesis states that the patient does carry the virus. A Type I error would indicate that the
patient has the virus when they do not, a false rejection of the null.

Type II Error
A Type II error is the opposite of a Type I error and is the false acceptance of the null
hypothesis. A Type II error, also known as a false negative, would imply that the patient is
free of HIV when they are not, a dangerous diagnosis.
In most fields of science, Type II errors are not seen to be as problematic as a Type I error.
With the Type II error, a chance to reject the null hypothesis was lost, and no conclusion is
inferred from a non-rejected null. The Type I error is more serious, because you have
wrongly rejected the null hypothesis.
Medicine, however, is one exception; telling a patient that they are free of disease, when
they are not, is potentially dangerous.
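A small simulation sketch (invented effect size and sample sizes) of the practical point behind this question: the Type I error rate is fixed by the chosen significance level alpha, while the Type II error rate (beta) is reduced mainly by increasing the sample size (or studying a larger effect).

```python
# Minimal simulation: Type I error rate ~ alpha; Type II error (beta) falls as n grows.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, effect, n_sims = 0.05, 0.5, 2000

def rejection_rate(true_diff, n):
    """Fraction of simulated two-sample t-tests that reject the null at level alpha."""
    rejections = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(true_diff, 1.0, n)
        if stats.ttest_ind(a, b).pvalue < alpha:
            rejections += 1
    return rejections / n_sims

print("Type I error rate (no real difference):", rejection_rate(0.0, 30))   # close to alpha
for n in (10, 30, 100):
    power = rejection_rate(effect, n)
    print(f"n = {n:3d}: power = {power:.2f}, Type II error (beta) = {1 - power:.2f}")
```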

19. Types of validity: how will you increase the internal and external validity?

20. What is HDI? How will you measure it, and what are its ranges?

Health index
Life expectancy at birth expressed as an index using a minimum value of 20 years
and a maximum value of 85 years.
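A minimal sketch of the life-expectancy (health) dimension index with the goalposts quoted above (20 and 85 years); the example life expectancy is invented. The full HDI additionally combines education and income dimension indices.

```python
# Minimal sketch: rescale life expectancy at birth onto a 0-1 health index.
def health_index(life_expectancy, lower=20.0, upper=85.0):
    """(observed - minimum) / (maximum - minimum), using the stated goalposts."""
    return (life_expectancy - lower) / (upper - lower)

print(round(health_index(67.0), 3))   # 0.723 for an invented life expectancy of 67 years
```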

Health Index
is a network of physicians and researchers whose goal is to help promote world
health by providing extensive information on prevention, wellness, and therapy
to the world community by:

- Providing access to multiple biomedical databases and information representing all health models and paradigms worldwide
- Fostering better understanding and exchange between health care providers of different disciplines
- Conveying valuable free information related to health, while keeping to a minimum the subscription charge for royalty-based biomedical indexes

a) Legionnaires' disease: causes, how to prevent and treat
b) Growth chart (other name, interpretation, how to prepare and present)
c) Global warming: what policies and strategies to reduce it?
d) Hidden hunger: what types of micronutrient deficiency can occur in children?
e) Types of accidents and what strategies to control them?
f) Polio, GPEI, traveller restrictions for polio: what policies and strategies for polio?
g) Plague: types, D/D of bubonic plague, control?

1. Within one year, what measures can be taken to reduce MMR?
2. How will you evaluate the cost of an intervention? What types of cost analysis techniques are there?
3. What is surveillance, what are the types, and what are the criteria for conducting surveillance?
4. What is HMIS, and what are its weaknesses?
5. What are the criteria for good governance?
6. What are the criteria for evaluation?
7. What are policy measures to increase the utilization of services?
8. What is the difference between NIDs & HIV, malaria, TB in Pakistan?
9. How many goals of MDGs? How many indicators & targets? Post-2015, what is going to be done?
10. What percentage of the population is living below the poverty line, and how will you reduce poverty?
11. What do you mean by poverty line?

Pearl Index
The Pearl Index, also called the Pearl rate, is the most common technique used in clinical trials for
reporting the effectiveness of a birth control method.

Methods of contraception are compared by the Pearl index. A high Pearl index stands for a high
chance of unintentionally getting pregnant; a low value for a low chance. The Pearl index is
determined as the number of unintentional pregnancies per 100 woman-years of use. For example,
suppose 100 women each use the method under examination for one year. If three
pregnancies occur during this period in this group, the Pearl index is 3.0.
To convert this abstract value into a concrete one, it is possible to multiply the Pearl index of a
method by 0.4. The result is the number of pregnancies you would expect over a lifetime if you used
this particular contraception method during the whole of your fertile period (from 12 to 52 years of
age).
Some examples of different birth control methods' Pearl indices:

Knaus-Ogino method: 15-35
Cervical cap: 4-20**
Standard Days Method: 4.8-12
Condom: 3-12**
PERIMON: 1.5-12*
Persona computer: (value not given)
NuvaRing: 0.65-1.86
Sympto-thermal method: 0.5-2
Intrauterine device: 0.1-1.5
Plaster (contraceptive patch): 0.5-1
Birth control pill: 0.1-1
Sterilization: 0.1-0.4

To make sure not to use a less safe method for too long, it is advisable not to use a method for longer
than (80 divided by its Pearl index) years. For example, if the Pearl index of a method is 3, you
shouldn't use this method for contraception for longer than 80/3 = 26 years. (Please note that this is
only a statistical calculation and absolutely cannot guarantee prevention of pregnancy.)
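A minimal sketch of the arithmetic described above: the Pearl index as pregnancies per 100 woman-years, plus the two rough rules of thumb quoted in the text (lifetime pregnancies of about 0.4 x PI, and a maximum advisable duration of use of about 80 / PI years).

```python
# Minimal sketch: Pearl index and the two rules of thumb from the text.
def pearl_index(pregnancies, women, years_each):
    woman_years = women * years_each
    return pregnancies / woman_years * 100

pi = pearl_index(pregnancies=3, women=100, years_each=1)   # the example from the text
print(pi)        # 3.0 pregnancies per 100 woman-years
print(0.4 * pi)  # ~1.2 expected pregnancies over a full fertile lifetime
print(80 / pi)   # ~26.7 years: rough upper limit of use for this method
```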

Life table
In actuarial science and demography, a life table (also called a mortality
table or actuarial table) is a table which shows, for each age, what the probability is
that a person of that age will die before his or her next birthday ("probability of death").
From this starting point, a number of inferences can be derived.

the probability of surviving any particular year of age

remaining life expectancy for people at different ages

Life tables are also used extensively in biology and epidemiology. The concept is also of
importance in product life cycle management.

There are two types of life tables:

Period or static life tables show the current probability of death (for people of
different ages, in the current year)

Cohort life tables show the probability of death of people from a given cohort
(especially birth year) over the course of their lifetime.

Static life tables sample individuals assuming a stationary population with overlapping
generations. "Static life tables" and "cohort life tables" will be identical if the population is in
equilibrium and the environment does not change. "Life table" primarily refers to period life
tables, as cohort life tables can only be constructed using data up to the current point plus
distant projections for future mortality.
Life tables can be constructed using projections of future mortality rates, but more often
they are a snapshot of age-specific mortality rates in the recent past, and do not
necessarily purport to be projections. For these reasons, the older ages represented in
a life table may have a greater chance of not being representative of what people living to these
ages may experience in the future, as the table is predicated on current advances in
medicine, public health, and safety standards that did not exist in the early years of this
cohort.
Life tables are usually constructed separately for men and for women because of their
substantially different mortality rates. Other characteristics can also be used to
distinguish different risks, such as smoking status, occupation, and socioeconomic class.
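A minimal sketch of how a period life table can be built from age-specific probabilities of death q(x); the q(x) values are invented, each row is treated as a one-year age interval, and deaths are assumed to occur mid-interval.

```python
# Minimal sketch: survivors l(x), person-years L(x), and life expectancy e(x)
# from invented one-year probabilities of death q(x).
qx = [0.01, 0.002, 0.003, 0.01, 0.03, 0.08, 0.20, 0.45, 1.0]   # toy values; last row closes the table

lx = [100000.0]                                  # radix: survivors entering each age
for q in qx[:-1]:
    lx.append(lx[-1] * (1 - q))

deaths = [l * q for l, q in zip(lx, qx)]
Lx = [l - 0.5 * d for l, d in zip(lx, deaths)]   # deaths assumed at mid-interval
Tx = [sum(Lx[i:]) for i in range(len(Lx))]       # person-years remaining above each age
ex = [t / l for t, l in zip(Tx, lx)]             # remaining life expectancy at each age

print([round(e, 2) for e in ex])
```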

The disability-adjusted life year (DALY) is a measure of overall disease burden,
expressed as the number of years lost due to ill-health, disability or early death.
Originally developed by Harvard University for the World Bank in 1990, the World Health
Organization subsequently adopted the method in 1996 as part of the Ad hoc Committee
on Health Research "Investing in Health Research & Development" report. The DALY is
becoming increasingly common in the field of public health and health impact
assessment (HIA). It "extends the concept of potential years of life lost due to
premature death... to include equivalent years of 'healthy' life lost by virtue of being in
states of poor health or disability."[2] In so doing, mortality and morbidity are combined
into a single, common metric.
Traditionally, health liabilities were expressed using one measure: (expected or average
number of) 'Years of Life Lost' (YLL). This measure does not take the impact of disability
into account, which can be expressed by: 'Years Lived with Disability' (YLD). DALYs are
calculated by taking the sum of these two components. In a formula:
DALY = YLL + YLD.[3]
The DALY relies on an acceptance that the most appropriate measure of the effects
of chronic illness is time, both time lost due to premature death and time spent
disabled by disease. One DALY, therefore, is equal to one year of healthy life
lost. Japanese life expectancy statistics are used as the standard for measuring
premature death, as the Japanese have the longest life expectancies. [4]
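A minimal sketch of the formula above, DALY = YLL + YLD, using its standard components (YLL = deaths x remaining standard life expectancy; YLD = cases x disability weight x average duration); every number below is invented.

```python
# Minimal sketch: DALY = YLL + YLD for a hypothetical condition.
def yll(deaths, standard_life_expectancy_remaining):
    """Years of Life Lost to premature death."""
    return deaths * standard_life_expectancy_remaining

def yld(incident_cases, disability_weight, avg_duration_years):
    """Years Lived with Disability."""
    return incident_cases * disability_weight * avg_duration_years

daly = yll(deaths=200, standard_life_expectancy_remaining=30) \
     + yld(incident_cases=5000, disability_weight=0.2, avg_duration_years=2)
print(daly)   # 6000 + 2000 = 8000 DALYs
```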

Disability
Disability is the consequence of an impairment that may be physical, cognitive, mental,
sensory, emotional, developmental, or some combination of these. A disability may be
present from birth, or occur during a person's lifetime.
Disability is an umbrella term, covering impairments, activity limitations, and
participation restrictions. An impairment is a problem in body function or structure;
an activity limitation is a difficulty encountered by an individual in executing a task or
action; while a participation restriction is a problem experienced by an individual in
involvement in life situations. Thus, disability is a complex phenomenon, reflecting an
interaction between features of a person's body and features of the society in which he
or she lives.[1]
An individual may also qualify as disabled if they have had an impairment in the past or
are seen as disabled based on a personal or group standard or norm. Such impairments
may include physical, sensory, and cognitive or developmental disabilities. Mental
disorders (also known as psychiatric or psychosocial disability) and various types
of chronic disease may also qualify as disabilities.
Some advocates object to describing certain conditions (notably deafness and autism)
as "disabilities", arguing that it is more appropriate to consider them developmental
differences that have been unfairly stigmatized by society.[2][3] Furthermore, other
advocates argue that disability is a result of exclusion from mainstream society and not
any inherent impairment.[4][5]

The term "disability" broadly describes an impairment in a person's ability to function,


caused by changes in various subsystems of the body, or to mental health. The degree
of disability may range from mild to moderate, severe, or profound. [6] A person may also
have multiple disabilities.
Conditions causing disability are classified by the medical community as: [7]

inherited (genetically transmitted);

congenital, meaning caused by a mother's infection or other disease during pregnancy,


embryonic or fetal developmental irregularities, or by injury during or soon after birth;

acquired, such as conditions caused by illness or injury;

of unknown origin.

Types of disability may also be categorized in the following way:

Physical disability
Any impairment which limits the physical function of limbs, fine bones, or gross motor
ability is a physical impairment, not necessarily a physical disability. The Social Model of
Disability defines physical disability as manifest when an impairment meets a non-universal
design or program; e.g., a person who cannot climb stairs may have a physical
impairment of the knees when putting stress on them from an elevated position such as
with climbing or descending stairs. If an elevator were provided, or a building had
services on the first floor, this impairment would not become a disability. Other physical
disabilities include impairments which limit other facets of daily living, such as
severe sleep apnea.

Sensory disability
Sensory disability is impairment of one of the senses. The term is used primarily to refer
to vision and hearing impairment, but other senses can be impaired.

Vision impairment
Vision impairment (or "visual impairment") is vision loss (of a person) to such a degree
as to qualify as an additional support need through a significant limitation
of visual capability resulting from either disease, trauma, or congenital or degenerative
conditions that cannot be corrected by conventional means, such as refractive
correction, medication, or surgery.[8][9][10] This functional loss of vision is typically defined
to manifest with
1. best corrected visual acuity of less than 20/60, or significant central field defect;
2. significant peripheral field defect, including homonymous or heteronymous bilateral visual field defect or generalized contraction or constriction of field; or
3. reduced peak contrast sensitivity with either of the above conditions.[8][11]

Hearing impairment
Hearing impairment or hard of hearing or deafness refers to conditions in which
individuals are fully or partially unable to detect or perceive at least some frequencies of
sound which can typically be heard by most people. Mild hearing loss may sometimes
not be considered a disability.

Olfactory and gustatory impairment


Impairment of the sense of smell and taste are commonly associated with aging but can
also occur in younger people due to a wide variety of causes.
There are various olfactory disorders:

- Anosmia: inability to smell
- Dysosmia: things do not smell as they "should"
- Hyperosmia: an abnormally acute sense of smell
- Hyposmia: decreased ability to smell
- Olfactory reference syndrome: psychological disorder which causes patients to imagine they have strong body odor
- Parosmia: things smell worse than they should
- Phantosmia: "hallucinated smell", often unpleasant in nature

Complete loss of the sense of taste is known as ageusia, while dysgeusia is a persistent
abnormal sense of taste.

Somatosensory impairment
Insensitivity to stimuli such as touch, heat, cold, and pain is often an adjunct to a more
general physical impairment involving neural pathways and is very commonly
associated with paralysis (in which the motor neural circuits are also affected).

Balance disorder
A balance disorder is a disturbance that causes an individual to feel unsteady, for
example when standing or walking. It may be accompanied by symptoms of feeling
giddy or woozy, or having a sensation of movement, spinning, or floating. Balance is the
result of several body systems working together. The eyes (visual system), ears
(vestibular system) and the body's sense of where it is in space (proprioception) need to
be intact. The brain, which compiles this information, needs to be functioning effectively.

Intellectual disability
Intellectual disability is a broad concept that ranges from mental retardation to cognitive
deficits too mild or too specific (as in specific learning disability) to qualify as mental
retardation. Intellectual disabilities may appear at any age. Mental retardation is a
subtype of intellectual disability, and the term intellectual disability is now preferred by
many advocates in most English-speaking countries.

Mental health and emotional disabilities


A mental disorder or mental illness is a psychological or behavioral pattern generally
associated with subjective distress or disability that occurs in an individual, and
perceived by the majority of society as being outside of normal development or cultural
expectations. The recognition and understanding of mental health conditions has
changed over time and across cultures, and there are still variations in the definition,
assessment, and classification of mental disorders, although standard guideline criteria
are widely accepted.

Pervasive developmental disorders


The diagnostic category of pervasive developmental disorders refers to a group of
five developmental disabilities characterized by differences in the development of
multiple basic functions including socialization and communication. The DSM-IV-TR
listed the pervasive developmental disorders as autistic disorder, Asperger
syndrome, Rett syndrome, childhood disintegrative disorder, and pervasive
developmental disorder not otherwise specified (PDD-NOS).[12][13][14] The DSM-5 does not
describe individual diagnoses of any of the pervasive developmental disorders, replacing
all of them with a unified diagnosis of autism spectrum disorder.[15] The ICD-10 also
includes the diagnosis of overactive disorder associated with mental retardation and
stereotyped movements.[16]

Developmental disability
Developmental disability is any disability that results in problems with growth and
development. Although the term is often used as a synonym or euphemism for
intellectual disability, the term also encompasses many congenital medical
conditions that have no mental or intellectual components, for example spina bifida.

Nonvisible disabilities
Several chronic disorders, such as diabetes, asthma, inflammatory bowel
disease, epilepsy, narcolepsy, fibromyalgia, or some sleep disorders may be counted as
nonvisible disabilities, as opposed to disabilities which are clearly visible, such as those
requiring the use of a wheelchair.
The Healthy Life Years indicator (HLY) is a European structural indicator computed
by Eurostat. It is one of the summary measures of population health, known as health
expectancies,[1] composite measures of health that combine mortality and morbidity data to
represent overall population health in a single indicator.[2] HLY measures the number of
remaining years that a person of a certain age is expected to live without disability. It is, in
effect, a disability-free life expectancy.

The quality-adjusted life year or quality-adjusted life-year (QALY) is a measure
of disease burden, including both the quality and the quantity of life lived.[1][2] It is used in
assessing the value for money of a medical intervention. According to Pliskin et al., the
QALY model requires utility-independent, risk-neutral, and constant proportional trade-off
behaviour.[3]
The QALY is based on the number of years of life that would be added by the
intervention. Each year in perfect health is assigned the value of 1.0, down to a value of
0.0 for being dead. If the extra years would not be lived in full health, for example if the
patient would lose a limb, be blind, or have to use a wheelchair, then the extra life-years
are given a value between 0 and 1 to account for this. Under certain
methods, such as the EQ-5D, a QALY can be a negative number.
Uses
Uses

The QALY is often used in cost-utility analysis to calculate the ratio of cost to QALYs saved for a
particular health care intervention. This is then used to allocate healthcare resources, with an
intervention with a lower cost per QALY saved (incremental cost-effectiveness ratio, or "ICER")
being preferred over an intervention with a higher ratio.

Calculation

The QALY is a measure of the value of health outcomes. Since health is a function of length of
life and quality of life, the QALY was developed as an attempt to combine the value of these
attributes into a single index number. The basic idea underlying the QALY is simple: it assumes
that a year of life lived in perfect health is worth 1 QALY (1 year of life × 1 utility value = 1
QALY) and that a year of life lived in a state of less than this perfect health is worth less than 1.
In order to determine the exact QALY value, it is sufficient to multiply the utility value associated
with a given state of health by the years lived in that state. QALYs are therefore expressed in
terms of "years lived in perfect health": half a year lived in perfect health is equivalent to 0.5
QALYs (0.5 years × 1 utility), the same as 1 year of life lived in a situation with utility 0.5 (e.g.
bedridden) (1 year × 0.5 utility). QALYs can then be incorporated with medical costs to arrive at
a final common denominator of cost/QALY. This parameter can be used to develop a
cost-effectiveness analysis of any treatment.
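A minimal sketch of the QALY arithmetic just described, extended to the cost-per-QALY (ICER) comparison used in cost-utility analysis; the utilities, durations, and costs are invented.

```python
# Minimal sketch: QALYs gained and the incremental cost-effectiveness ratio (ICER).
def qalys(years, utility):
    """Years lived multiplied by the utility of the health state (1.0 = perfect health)."""
    return years * utility

gain_new = qalys(4, 0.75) - qalys(4, 0.5)   # 3.0 - 2.0 = 1.0 extra QALY from the new treatment
extra_cost = 30000 - 10000                  # extra cost of the new treatment
icer = extra_cost / gain_new                # cost per QALY gained
print(gain_new, icer)                       # 1.0 extra QALY at 20000 per QALY
```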

Decrement tables, also called life table methods, are used to calculate the probability of
certain events.
Birth control

Life table methods are often used to study birth control effectiveness. In this role, they
are an alternative to the Pearl Index.

As used in birth control studies, a decrement table calculates a separate effectiveness


rate for each month of the study, as well as for a standard period of time (usually 12
months). Use of life table methods eliminates time-related biases (i.e. the most fertile
couples getting pregnant and dropping out of the study early, and couples becoming
more skilled at using the method as time goes on), and in this way is superior to the
Pearl Index.
Two kinds of decrement tables are used to evaluate birth control methods. Multiple-decrement
(or competing) tables report net effectiveness rates. These are useful for
comparing competing reasons for couples dropping out of a study. Single-decrement (or
noncompeting) tables report gross effectiveness rates, which can be used to accurately
compare one study to another.

Survival analysis
Survival analysis is a branch of statistics which deals with the analysis of the duration of time
until one or more events happen, such as death in biological organisms and failure in
mechanical systems. This topic is called reliability theory or reliability
analysis in engineering, duration analysis or duration modeling in economics,
and event history analysis in sociology. Survival analysis
attempts to answer questions such as: what is the proportion of a population which will
survive past a certain time? Of those that survive, at what rate will they die or fail? Can
multiple causes of death or failure be taken into account? How do particular
circumstances or characteristics increase or decrease the probability of survival?
To answer such questions, it is necessary to define "lifetime". In the case of biological
survival, death is unambiguous, but for mechanical reliability, failure may not be well-defined,
for there may well be mechanical systems in which failure is partial, a matter of
degree, or not otherwise localized in time. Even in biological problems, some events (for
example, heart attack or other organ failure) may have the same ambiguity.
The theory outlined below assumes well-defined events at specific times; other cases
may be better treated by models which explicitly account for ambiguous events.
More generally, survival analysis involves the modeling of time-to-event data; in this
context, death or failure is considered an "event" in the survival analysis literature.
Traditionally, only a single event occurs for each subject, after which the organism or
mechanism is dead or broken. Recurring event or repeated event models relax that
assumption. The study of recurring events is relevant in systems reliability and in many
areas of social sciences and medical research.
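A minimal sketch of one standard way to answer "what proportion survives past time t?": a Kaplan-Meier estimate of the survivor function, coded directly without any external survival library; the follow-up times and censoring indicators below are invented (event = 1 means death/failure observed, event = 0 means the subject was censored).

```python
# Minimal sketch: Kaplan-Meier survivor function from invented follow-up data.
times  = [2, 3, 3, 5, 6, 8, 8, 9, 12, 15]
events = [1, 1, 0, 1, 1, 1, 0, 1, 0, 1]   # 1 = event observed, 0 = censored

survival = 1.0
at_risk = len(times)
curve = []
for t in sorted(set(times)):
    d = sum(1 for ti, e in zip(times, events) if ti == t and e == 1)   # events at time t
    if d:
        survival *= (1 - d / at_risk)            # Kaplan-Meier product-limit term
    curve.append((t, round(survival, 3)))
    at_risk -= sum(1 for ti in times if ti == t)  # drop events and censorings from the risk set
print(curve)   # (time, estimated proportion surviving beyond that time)
```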
A norm is a group-held belief about how members should behave in a given context.
1. Informal guideline about what is considered normal (what is correct or
incorrect) social behavior in a particular group or social unit. Norms form the
basis of collective expectations that members of a community have from
each other, and play a key part in social control and social order by exerting
pressure on the individual to conform. In short, "the way we do things
around here."
2. Formal rule or standard laid down by legal, religious, or social authority,
against which the appropriateness (what is right or wrong) of an
individual's behavior is judged.
Shaking hands after a sports match is an example of a social norm.

procurement

The act of obtaining or buying goods and services.


The process includes preparation and processing of a demand as well as the
end receipt and approval of payment. It often involves
(1) purchase planning,
(2) standards determination,
(3) specifications development,
(4) supplier research and selection,
(5) value analysis,
(6) financing,
(7) price negotiation,
(8) making the purchase,
(9) supply contract administration,
(10) inventory control and stores, and
(11) disposals and other related functions.
The process of procurement is often part of a company's strategy because the ability to purchase certain materials will determine whether operations can continue. A business will not be able to survive if its cost of procurement is greater than the profit it makes on selling the actual product.
planning
1.A basic management function involving formulation of one or
more detailed plans to achieve optimum balance of needs or demands with
the available resources. The planning process (1) identifies
the goals or objectives to be achieved, (2) formulates strategies to achieve
them, (3) arranges or creates the means required, and (4) implements,
directs, and monitors all steps in their proper sequence.
2.The control of development by a local authority, through regulation and
licensing for land use changes and building.

objective
1. A specific result that a person or system aims to achieve within a time
frame and with available resources.
In general, objectives are more specific and easier to measure than goals.
Objectives are basic tools that underlie all planning and strategic activities.
They serve as the basis for creating policy and evaluating performance.
Some examples of business objectives include minimizing expenses,
expanding internationally, or making a profit.
2. Neutral (bias-free): relating to, or based on, verifiable evidence or facts instead of on attitude, belief, or opinion. Opposite of subjective.
management
1. The organization and coordination of the activities of a business in order to achieve defined objectives.
Management is often included as a factor of production along
with machines, materials, and money. According to the management
guru Peter Drucker (1909-2005), the basic task of management includes
both marketing and innovation. Practice of modern management originates
from the 16th century study of low-efficiency and failures of
certain enterprises, conducted by the English statesman Sir Thomas
More (1478-1535). Management consists of the interlocking functions of
creating corporate policy and organizing, planning, controlling,
and directing an organization's resources in order to achieve the objectives
of that policy.
2. The directors and managers who have the power and responsibility to make decisions and oversee an enterprise.
The size of management can range from one person in a small organization
to hundreds or thousands of managers in multinational companies. In large
organizations, the board of directors defines the policy which is then carried
out by the chief executive officer, or CEO. Some people agree that in order
to evaluate a company's current and future worth, the most important
factors are the quality and experience of the managers
Goals vs. Objectives - What's the Difference?
It's often hard to know the difference between goals and objectives; in fact, we often use the two terms interchangeably. But knowing the difference can help us to use both in a constructive way, to get us from where we are to where we want to go.
Both are a Way of Moving Forward
The major similarity between goals and objectives is that they both involve forward motion, but they accomplish it in very different ways. We can think of goals as the Big Picture: where we hope that our efforts will ultimately bring us. Objectives are about a specific plan of attack, usually a series of them, each being relatively short-term in nature.

Goals: Changing Mindset and Direction
Goals tend to be long on direction, and short on specific tactics. For example, you can set a goal of losing 30 pounds without having a specific plan as to how to do it. You've defined the destination you want to arrive at, and tactics can be developed as you move forward.
We can think of a goal as doing the following:
- Defines the destination
- Changes the direction to move toward the destination
- Changes the mindset to adjust to and support the new direction
- Creates the necessity to develop specific tactics
Goals tend to change your mindset by changing your focus. And as your focus changes, it takes your thinking with it. This is why goals are often accompanied by affirmations, which involve projecting yourself into the desired (but as yet unattained) destination.
People set goals all the time, without ever being very specific. Organizations do it too. A company can set a goal of returning to profitability in two years, or becoming the leader in its industry in five years, all without ever determining how that will be accomplished. Once again, the details are worked out later, after the big-picture changes of direction and destination (the goals) have been defined.
Objectives: Establishing a Series of Concrete Steps
If goals are about the big picture, then objectives are all about tactics. Mechanically, tactics are action plans to get from where you are to where you want to be. A goal defines the direction and destination, but the road to get there is made up of a series of objectives.
A good example of this is a person who owes $50,000 in credit card debt on ten different cards and wants to become debt-free. Getting out of debt is the goal. But it is achieved by paying off each of the ten credit cards, one at a time. The payoff of each credit card is an objective: one of the series of smaller targets that need to be hit in order to achieve the big-picture goal of becoming debt-free.
The methodology for paying off each credit card will be very specific; i.e., you'll need to pay X amount of extra money to Credit Card #1 for Y number of months in order to meet the objective of paying it off. Then you need to repeat the action for the remaining nine credit cards. The tactics, which are the objectives, are very specific.
How Objectives Can Help You Reach Your Goals
For nearly any goal you want to reach, you can use the credit card example to help you get there. First, you define the goal, whatever it may be. Unless the goal is a small one and easily attained, it's usually best to break big goals down into a series of specific action steps; it's a way of using the divide-and-conquer strategy to accomplish a goal that's far too large to do in the near term.
The action steps have specific targets, as well as methods to reach them. Each target is an objective. Once it's accomplished you move on to the next one, gradually moving toward your goal as each target is completed.
Though goals generally control objectives, objectives can also control goals as they unfold. For example, since a goal is general in nature, it may be refined and altered as objectives are completed. The completion of an objective, or a series of them, could cause you to either raise or lower the ultimate goal.

Goals vs. Objectives - a comparison:

Definition
  Goals: Something which you try to achieve.
  Objectives: A specific result that a person or system aims to achieve within a time frame and with available resources.

Time Frame
  Goals: Usually long-term.
  Objectives: A series of smaller steps, often along the way to achieving a long-term goal.

Magnitude
  Goals: Typically involve life-changing outcomes, like retiring, buying a home or making a major career change.
  Objectives: Usually a near-term target of a larger expected outcome, such as passing a course as part of completing a degree program.

Outcome of immediate action
  Goals: Actions tend to advance progress in a very general sense; there is often awareness that there are several ways to reach a goal, so specific outcomes aren't necessary.
  Objectives: Very specific and measurable; a target is established and victory is declared only when the target is hit.

Purpose of action
  Goals: A goal is often characterized as a change of direction that will ultimately lead to a desired outcome.
  Objectives: Objectives tend to be actions aimed at accomplishing a certain task.

Example
  Goals: "I want to retire by age 50."
  Objectives: "In order to reach my goal of retiring at age 50, I need to save $20,000 by the end of this year."

Hierarchy
  Goals: Goals tend to control objectives; a change in a goal could eliminate one or more objectives, or add new ones.
  Objectives: An objective can modify a goal, but will seldom change it in a fundamental way, even if the objective isn't reached.

accounting system
Organized set of manual and computerized accounting methods, procedures, and controls established to
gather, record, classify, analyze, summarize, interpret, and present accurate and
timely financial data for management decisions.

Taylorism
Production efficiency methodology that breaks every action, job, or task into small
and simple segments which can be easily analyzed and taught. Introduced in the
early 20th century, Taylorism (1) aims to achieve maximum job fragmentation to
minimize skill requirements and job learning time, (2)
separates execution of work from work-planning, (3) separates direct labor from indirect labor, (4) replaces rule-of-thumb productivity estimates with precise measurements, (5) introduces time and motion study for optimum job performance, cost accounting, tool and work station design, and (6) makes possible the payment-by-result method of wage determination. Named after the US industrial engineer Frederick Winslow Taylor (1856-1915) who in his 1911 book 'Principles Of Scientific Management' laid down the fundamental principles of large-scale manufacturing through assembly-line factories. He emphasized gaining maximum efficiency from
both machine and worker, and maximization of profit for the benefit of
both workers and management. Although rightly criticized for alienating workers
by (indirectly but substantially) treating them as mindless, emotionless, and easily
replicable factors of production, Taylorism was a critical factor in the
unprecedented scale of US factory output that led to Allied victory in Second
World War, and the subsequent US dominance of the industrial world

motivation
Internal and external factors that stimulate desire and energy in people to be
continually interested and committed to a job, role or subject, or to make an effort
to attain a goal.
Motivation results from the interaction of both conscious and
unconscious factors such as the (1) intensity of desire or need,
(2) incentive or reward value of the goal, and (3) expectations of
the individual and of his or her peers. These factors are the reasons one has for
behaving a certain way. An example is a student that spends extra time studying
for a test because he or she wants a better grade in the class.

management by objectives (MBO)


A management system in which the objectives of an organization are agreed
upon so that management and employees understand a common way forward.
Management by objectives aims to serve as a basis for (A)
greater efficiency through systematic procedures, (B) greater employee
motivation and commitment through participation in the planning process, and
(C) planning for results instead of planning just for work. In management by objectives practice, specific objectives are determined jointly by managers and their subordinates, progress toward agreed-upon objectives is periodically reviewed, end results are evaluated, and rewards are allocated on the basis of the progress. The objectives must meet five criteria: they must be (1) arranged in order of their importance, (2) expressed quantitatively wherever possible, (3) realistic, (4) consistent with the organization's policies, and (5) compatible with
one another. Suggested by the management guru Peter Drucker (1909-2005) in
early 1950s, management by objectives enjoyed huge popularity for some time
but soon fell out of favor due to its rigidity and administrative burden. Its
emphasis on setting clear goals, however, has been vindicated and remains valid.

Logical Framework (LogFrame) analysis
Management by objectives (MBO) applied to program or project design, monitoring, and evaluation. This
approach consists of four steps: (1) establishing objectives, (2) establishing
cause-and-effect relationships (causal linkages) among activities, inputs, outputs,
and objectives, (3) identifying assumptions underlying the causal linkages, and
(4) identifying objectively-verifiable measures for
evaluating progress and success. It gets its name from the 4 x
4 matrix (frame) employed in its mapping: the columns (which represent the
levels of program or project objectives) are called vertical logic, and rows (which
represent measures for assessing progress) are called horizontal logic. Also called logical framework method or project framework.

human resource management (HRM)
The process of hiring and developing employees so that they become more
valuable to the organization.
Human Resource Management includes
conducting job analyses, planning personnel needs, recruiting the right people for
the job, orienting
and training, managing wages and salaries, providing benefits and incentives,
evaluating performance, resolving disputes, and communicating with all
employees at all levels. Examples of core qualities of HR management are
extensive knowledge of the industry, leadership, and effective negotiation skills.
Formerly called personnel management.

interpersonal conflict
Human resource management: A situation in which
an individual or group frustrates, or tries to frustrate, the goal attainment
efforts of the other.

performance
The accomplishment of a given task measured against preset known standards
of accuracy, completeness, cost, and speed. In a contract, performance
is deemed to be the fulfillment of an obligation, in a manner that releases the
performer from all liabilities under the contract.

quality of performance
A numerical measurement of the performance of an organization, division, or
process. Quality of performance can be assessed through measurements of
physical products, statistical sampling of the output of processes, or
through surveys of purchasers of goods or services. Also referred to as quality of
service.

agreement

A negotiated and usually legally enforceable understanding between two or more


legally competent parties.
Although a binding contract can (and often does) result from an agreement, an agreement typically documents the give-and-take of a negotiated settlement, whereas a contract specifies the minimum acceptable standard of performance.

total quality management (TQM)


A holistic approach to long-term success that views continuous improvement in
all aspects of an organization as a process and not as a short-term goal.
It aims to radically transform the organization through progressive changes in
the attitudes, practices, structures, and systems.
Total quality management transcends the product quality approach, involves
everyone in the organization, and encompasses its
every function: administration, communications, distribution, manufacturing, marketing, planning, training, etc. Coined by the US Naval Air Systems Command in
early 1980s, this term has now taken on several meanings and includes
(1) commitment and direct involvement of highest-level executives in setting
quality goals and policies, allocation of resources, and monitoring of results;
(2) realization that transforming an organization means fundamental changes in
basic beliefs and practices and that this transformation is everyone's job;
(3) building quality into products and practices right from the beginning; (4) understanding of the changing needs of the internal and external customers and stakeholders, and satisfying them in a cost-effective manner; (5)
instituting leadership in place of mere supervision so that
every individual performs in the best possible manner to improve quality
and productivity, thereby continually reducing total cost; (6) eliminating barriers
between people and departments so that
they work as teams to achieve common objectives; and (7) instituting
flexible programs for training and education, and providing meaningful measures of performance that guide the self-improvement efforts of everyone involved.

quality management
Management activities and functions involved in determination of quality
policy and its implementation through means such as quality
planning and quality assurance (including quality control). See
also total quality management (TQM).

key principles of quality management
Derived from the ISO 9001:2000 standard, these eight principles are
(1) Customer focus: Management should understand (and anticipate)
the customers' needs and requirements, and strive to exceed customer
expectations in meeting them; (2) Leadership: Management should establish unity of purpose and direction, and create and maintain an environment in which everyone can participate in
achieving the organization's objectives; (3) Involvement of people: Management
should involve all people at all levels so that they
willingly contribute their abilities in achieving the organization's goals; (4) Process
approach: Management should recognize that an objective is achieved more
efficiently when activities and associated resources are managed together as a
process; (5) Systems approach: Management should recognize that identifying
and understanding interrelated processes, and managing them as a system, is
more efficient and effective in achieving the organization's objectives;
(6) Continual improvement: Management should aim at steady,
incremental improvement in the organization's overall performance as a
permanent objective; (7) Factual approach to decision making: Management

should base its decisions solely on the analysis of data and information; (8)
Mutually beneficial supplier relationships: Management should enhance the interdependent relationship with its suppliers for mutual benefit and in the creation of value.

total quality control (TQC)


Application of quality management principles to all areas
of business from design to delivery instead of confining them only to production activities.
Popularized by the US quality pioneer Armand Val Feigenbaum (1920-) in his 1951 book 'Total
Quality Control.' See also total quality management.

certificate of compliance
A document certified by a competent authority that the supplied good
or service meets the required specifications. Also called certificate of
conformance, certificate of conformity.

certificate of conformance
A document certified by a competent authority that the supplied good
or service meets the required specifications. Also called certificate of
compliance, certificate of conformity.

certificate of conformity
A document certified by a competent authority that the supplied good
or service meets the required specifications. Also called certificate of
conformance or certificate of compliance.

quality
In manufacturing, a measure of excellence or a state of
being free from defects, deficiencies and significant variations. It is brought about
by strict and consistent commitment to certain standards
that achieve uniformity of a product in order to satisfy
specific customer or user requirements. The ISO 8402-1986 standard defines quality as "the totality of features and characteristics of a product or service that bear on its ability to satisfy stated or implied needs." If an automobile company finds a defect in one of its cars and makes a product recall, customer confidence, and therefore sales, will decrease because trust in the car's quality will be lost.

fixed cost
A periodic cost that remains more or less unchanged irrespective of the output
level or sales revenue, such as depreciation, insurance, interest, rent, salaries,
and wages.

marginal cost
The increase or decrease in the total cost of a production run for making one
additional unit of an item. It is computed in situations where the breakeven
point has been reached: the fixed costs have already been absorbed by the
already produced items and only the direct (variable) costs have to be accounted
for.

Marginal costs are variable costs consisting of labor and material costs, plus an estimated portion of fixed costs (such as administration overheads and selling expenses).

quality assurance (QA)


Often used interchangeably with quality control (QC), it is a
wider concept that covers all policies and systematic activities implemented
within a quality system. QA frameworks include (1) determination
of adequate technical requirement of inputs and outputs,
(2) certification and rating of suppliers, (3) testing of procured material for
its conformance to established quality, performance, safety,
and reliability standards, (4) proper receipt, storage, and issue of material,
(5) audit of the process quality, (6) evaluation of the process to
establish required corrective response, and (7) audit of the final output for
conformance to (a) technical (b) reliability, (c) maintainability, and (d)
performance requirements.

process
Sequence of interdependent and linked procedures which, at every stage,
consume one or more resources (employee time, energy, machines, money) to
convert inputs (data, material, parts, etc.) into outputs. These outputs then serve
as inputs for the next stage until a known goal or end result is reached.

activity based management (ABM)


Approach to management that aims to maximize the value adding activities while
minimizing or eliminating non-value adding activities. The overall objective of
ABM is to improve efficiencies and effectiveness of an organization in securing
its markets. It draws on activity-based costing (ABC) as its major source of information and focuses on (1) reducing costs, (2) creating performance measures, (3) improving cash flow and quality, and (4) producing enhanced-value products.

American Society for Testing and Materials (ASTM)
World's largest source of standards (arrived at by voluntary consensus)
for materials, goods, services and systems. ASTM also publishes information on
(1) sampling and testing methods for health, safety and performance aspects of materials, (2) effects of physical and biological agents and chemicals, and (3) safety guidelines.

production
The processes and methods used to transform tangible inputs (raw
materials, semi-finished goods, subassemblies) and intangible inputs
(ideas, information, knowledge) into goods or services. Resources are used in
this process to create an output that is suitable for use or has exchange value.

asset
1.Something valuable that an entity owns, benefits from, or has use of, in
generating income.
2. Accounting: Something that an entity has acquired or purchased, and that has money value (its cost, book value, market value, or residual value). An asset can be (1) something physical, such as cash, machinery, inventory, land and buildings, (2) an enforceable claim against others, such as accounts receivable, (3) a right, such as a copyright, patent, or trademark, or (4) an assumption, such as goodwill.

revenue
The income generated from sale of goods or services, or any other use
of capital or assets, associated with the main operations of
an organization before any costs or expenses are deducted. Revenue is shown
usually as the top item in an income (profit and loss) statement from which
all charges, costs, and expenses are subtracted to arrive at net income.
Also called sales, or (in the UK) turnover.

financial operating plan (FOP)

A business or financial road map that identifies revenues and expenses. This
type of plan tracks where money comes from and where it goes in a business
operation. It defines specific goals such
as budgeting, costs associated with operations, and sales projections. A
financial operating plan uses historic and recent performance to predict expected
outcomes in the near future. The plan must be updated periodically to adjust for
changing circumstances.

depreciation
1. Accounting: The gradual conversion of the cost of a tangible capital
asset or fixed asset into an operational expense (called depreciation
expense) over the asset's estimated useful life.
The objectives of computing depreciation are to (1) reflect reduction in
the book value of the asset due to obsolescence or wear and tear,
(2) spread a large expenditure (purchase price of the asset) proportionately
over a fixed period to match revenue received from it, and (3) reduce
the taxable income by charging the amount of depreciation against
the company's total income. In effect, charging of
depreciation means the recovery of invested capital, by gradual sale of the
asset over the years during which output or services are received from it.
Depreciation is computed at the end of an accounting period (usually a year), using a method best suited to the particular asset.
When applied to intangible assets, the preferred term is amortization.
2.Commerce: The decline in the market value of an asset.

3. Economics: The decrease in the economic potential of an asset over its productive or useful life.
4.Foreign exchange: The reduction in the exchange value of a currency, either
by a government or due to weakening of the underlying economy in a floating
exchange rate system.

personnel management
Administrative discipline of hiring and developing employees so that they become
more valuable to the organization. It includes (1) conducting job analyses,
(2) planning personnel needs, and recruitment, (3) selecting the right people for
the job, (4) orienting and training, (5) determining
and managing wages and salaries, (6) providing benefits and incentives,
(7) appraising performance, (8) resolving disputes, (9) communicating with all
employees at all levels.

organization
A social unit of people that is structured and managed to meet a need or to
pursue collective goals. All organizations have a management structure that
determines relationships between the different activities and the members, and
subdivides and assigns roles, responsibilities, and authority to carry out
different tasks. Organizations are open systems: they affect and are affected by their environment.

chain of command
The order in which authority and power in an organization is wielded and
delegated from top management to every employee at every level of the
organization. Instructions flow downward along the chain of command
and accountability flows upward.
According to its proponent Henri Fayol (1841-1925), the more clear cut the chain
of command, the more effective the decision making process and greater
the efficiency. Military forces are an example of straight chain of command that
extends in unbroken line from the top brass to ranks. Also called line of
command.

scalar principle
Classical-management rule that subordinates at every level should follow
the chain of command, and communicate with their seniors only through the
immediate or intermediate senior. According to its proponent, the
French management pioneer Henri Fayol (1841-1925), a clear understanding of
this principle is necessary for the proper management of any organization.

stakeholder
A person, group or organization that has interest or concern in an organization.
Stakeholders can affect or be affected by
the organization's actions, objectives and policies. Some examples of key stakeholders are creditors, directors, employees, government (and its agencies), owners (shareholders), suppliers, unions, and the community from which the business draws its resources.
Not all stakeholders are equal. A company's customers are entitled to
fair trading practices but they are not entitled to the same consideration as the
company's employees.
An example of a negative impact on stakeholders is when a company needs to
cut costs and plans a round of layoffs. This negatively affects the community
of workers in the area and therefore the local economy. Someone
owning shares in a business such as Microsoft is positively affected, for example, when the company releases a new device and sees its profit, and therefore its stock price, rise.

Maslow's hierarchy of needs
Motivation theory which suggests five interdependent levels of basic human needs (motivators) that must be satisfied in a strict sequence starting with
the lowest level. Physiological needs for survival (to stay alive and reproduce)
and security (to feel safe) are the most fundamental and most pressing needs.
They are followed by social needs (for love and belonging) and self-esteem
needs (to feel worthy, respected, and have status). The final and highest level
needs are self-actualization needs (self-fulfillment and achievement). Its
underlying theme is that human beings are 'wanting' beings: as they satisfy one
need the next emerges on its own and demands satisfaction ... and so on until
the need for self-actualization that, by its very nature, cannot be fully satisfied
and thus does not generate more needs. This theory states that once a need is
satisfied, it stops being a motivator of human beings. In personnel management,
it is used in design of incentive schemes. In marketing, it is used in design
of promotional campaigns based on the perceived needs of a market segment a product satisfies. Named after its originator, the US psychologist Abraham Harold Maslow (1908-70), who proposed it in 1954.

A health system, also sometimes referred to as a health care system or healthcare system, is the organization of people, institutions, and resources that deliver health care services to meet the health needs of target populations.

A good health system delivers quality services to all people, when and where they need them. The exact configuration of services varies from country to country, but in all cases requires a robust financing mechanism; a well-trained and adequately paid workforce; reliable information on which to base decisions and policies; and well-maintained facilities and logistics to deliver quality medicines and technologies.

Plotting and Interpreting an ROC Curve


This section continues the hypothyroidism example started in the previous section. We showed that the frequency table from that section can be summarized by the following operating characteristics:

Cutpoint   Sensitivity   Specificity
5          0.56          0.99
7          0.78          0.81
9          0.91          0.42
The operating characteristics above can be reformulated slightly and then presented graphically:

Cutpoint   True Positives   False Positives
5          0.56             0.01
7          0.78             0.19
9          0.91             0.58
This type of graph is called a Receiver Operating Characteristic curve (or ROC curve). It is a plot of the true positive rate against the false positive rate for the different possible cutpoints of a diagnostic test.

An ROC curve demonstrates several things:

1. It shows the tradeoff between sensitivity and specificity (any increase in sensitivity will be accompanied by a decrease in specificity).

2. The closer the curve follows the left-hand border and then the top border of the ROC space, the more accurate the test.

3. The closer the curve comes to the 45-degree diagonal of the ROC space, the less accurate the test.

4. The slope of the tangent line at a cutpoint gives the likelihood ratio (LR) for that value of the test. Recall that the LR for T4 < 5 is 52; this corresponds to the far left, steep portion of the curve. The LR for T4 > 9 is 0.2; this corresponds to the far right, nearly horizontal portion of the curve.

5. The area under the curve is a measure of test accuracy.
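As a small sketch (not part of the original tutorial), the cutpoint figures above can be turned into ROC points and a rough area under the curve using the trapezoidal rule, with the endpoints (0,0) and (1,1) added by construction:

# ROC points taken from the cutpoint table above, plus the fixed endpoints.
points = [(0.0, 0.0), (0.01, 0.56), (0.19, 0.78), (0.58, 0.91), (1.0, 1.0)]  # (FPF, TPF)

# Trapezoidal approximation of the area under the curve (AUC).
auc = 0.0
for (x0, y0), (x1, y1) in zip(points, points[1:]):
    auc += (x1 - x0) * (y0 + y1) / 2.0

print("approximate AUC:", round(auc, 3))   # roughly 0.85 for these points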

SIMPLE ROC CURVE ANALYSIS


The programming on this page provides a streamlined approach to ROC curve analysis that I think will be fairly
accessible to the non-statistician. For the more heavy-duty version of this procedure, applicable software can be
downloaded from the Department of Radiology, Kurt Rossmann Laboratories, University of Chicago.

To illustrate, consider the following set of data (source). Of a total of 125 subjects, 32 are known to be hypothyroid and 93 are known to have normal thyroid function. All subjects are assessed with respect to T4 (thyroxine) levels, and then sorted among the four
ordinal categories: T4<5.1, T4=5.1 to 7.0, T4=7.1 to 9.0, and T4>9.0. Of the 19
subjects with T4 levels lower than 5.1, 18 were in fact hypothyroid while only 1 was
euthyroid. Thus, if a T4 of 5 or less were taken as an indication of hypothyroidism,
this measure would yield 18 true positives and 1 false positive, with a true-positive
rate (sensitivity) of 18/32=.5625 and a false-positive rate (1-specificity) of
1/93=.0108.
Observed Frequencies and Cumulative Rates

T4 Value    Euthyroid   Hypothyroid   False Positive (cum.)   True Positive (cum.)
<5.1             1           18              .0108                  .5625
5.1-7.0         17            7              .1935                  .7813
7.1-9.0         36            4              .5806                  .9063
>9.0            39            3             1.0                    1.0
Totals:         93           32

Similarly, 7 of the hypothyroid subjects and 17 of the euthyroid had T4 levels between 5.1 and 7.0. Thus, if any value of T4 less than 7.1 were taken as an indication of hypothyroidism, this measure would yield 18+7=25 true positives and 1+17=18 false positives, with a true-positive rate of 25/32=.7813 and a false-positive rate of 18/93=.1935. And so on for the other diagnostic levels, T4=7.1 to 9.0 and T4>9.0.
Given k exhaustive ordinal diagnostic levels, the programming on this page fits a simple logarithmic curve to the first k-1 pairs of cumulative rates, with
x = cumulative false-positive rate
y = cumulative true-positive rate
For the present example k=4, so the curve is fitted to the first three of the bivariate pairs, as shown in Graph A. Graph B shows the same pairs fitted by a conventional binormal ROC curve. In most practical cases, as in the present example, the difference between the two curve-fitting procedures will be fairly small.
Graph A: example data fitted by a simple logarithmic curve. Graph B: example data fitted by a binormal ROC curve.
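As a hedged sketch of the procedure described (not the page's own program; the exact logarithmic form it fits is not specified here, so y = a + b*ln(x) is assumed, and the last two hypothyroid counts are inferred from the cumulative rates in the table):

import numpy as np

# Observed frequencies per diagnostic level (euthyroid, hypothyroid), from the table above.
euthyroid   = [1, 17, 36, 39]    # total nn = 93
hypothyroid = [18, 7, 4, 3]      # total na = 32

fpf = np.cumsum(euthyroid) / sum(euthyroid)      # cumulative false-positive rates
tpf = np.cumsum(hypothyroid) / sum(hypothyroid)  # cumulative true-positive rates

# Fit y = a + b*ln(x) to the first k-1 pairs (the last pair is always (1, 1)).
x, y = fpf[:-1], tpf[:-1]
b, a = np.polyfit(np.log(x), y, 1)
print("fitted curve: y = %.3f + %.3f * ln(x)" % (a, b))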

To proceed, enter into the cells of the following table either the observed frequencies or the cumulative rates for
each of the k diagnostic levels, up to a maximum of k=10. (Note that this procedure makes no sense with k<4.) If
you are entering observed frequencies, cumulative rates will be calculated automatically. If you are entering
cumulative rates, the final entry in each of the two columns on the right must always be equal to 1.0. Cumulative
rates can be entered as either decimal fractions (.5625) or common fractions (18/32). When all values have been
entered, click the Calculate button, and the results of the analysis will appear in the scrolling text box that follows the data-entry table.

Introduction - A statistical prelude


ROC curves were developed in the 1950s as a by-product of research into making sense of radio signals
contaminated by noise. More recently it's become clear that they are remarkably useful in medical decision-making.
That doesn't mean that they are always used appropriately! We'll highlight their use (and misuse) in our tutorial.
We'll first try to move rapidly through basic stats, and then address ROC curves. We'll take a practical, medical
approach to ROC curves, and give a few examples.
If you know all about the terms 'sensitivity', 'specificity', FPF, FNF, TPF and TNF, as well as
understanding the terms 'SIRS' and 'sepsis', you can click here to skip past the basics, but
we wouldn't advise it! Once we've introduced ROCs, we'll play a bit, and then look at two
examples - procalcitonin and sepsis, and also tuberculosis and pleural fluid adenosine
deaminase. Finally, in a footnote, we examine accuracy, and positive and negative
predictive values - such discussion will become important when we find out about costing,
and how to set a test threshold.

Consider patients in intensive care (ICU). One of the major causes of death in such patients is "sepsis". Wouldn't it
be nice if we had a quick, easy test that defined early on whether our patients were "septic" or not? Ignoring for the
moment what sepsis is, let's consider such a test. We imagine that we take a population of ICU patients, and do two
things:
1. Perform our magical TEST and record the results;
2. Use some "gold standard" to decide who REALLY has "sepsis", and record this result
(in a blinded fashion).
Please note (note this well) that we have represented our results as fractions, and that:

FNF + TPF = 1
In other words, given FNF, the False Negative Fraction, you can work out TPF, the True Positive Fraction, and vice
versa. Similarly, the False Positive Fraction and True Negative Fraction must also add up to one - those patients who
really have NO sepsis (in our example) must either be true negatives, or misclassified by the test as positives despite
the absence of sepsis.
In our table, TPF represents the number of patients who have sepsis, and have this corroborated by having a "high"
TEST (above whatever cutoff level was chosen). FPF represents false positives - the test has lied to us, and told us
that non-septic patients are really septic. Similarly, true negatives are represented by TNF, and false negatives by
FNF.
In elementary statistical texts, you'll encounter other terms. Here they are:

The sensitivity is how good the test is at picking out patients with sepsis. It is simply
the True Positive Fraction. In other words, sensitivity gives us the proportion of cases
picked out by the test, relative to all cases who actually have the disease.
Specificity is the ability of the test to pick out patients who do NOT have the
disease. It won't surprise you to see that this is synonymous with the True Negative
Fraction.
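A tiny worked sketch (invented counts) showing how the four fractions are computed and how the complementary pairs sum to one:

# Invented 2x2 counts: rows = test result, columns = true state ("gold standard").
tp, fp = 40, 10      # test positive: truly septic / not septic
fn, tn = 20, 130     # test negative: truly septic / not septic

tpf = tp / (tp + fn)      # sensitivity  = True Positive Fraction
tnf = tn / (tn + fp)      # specificity  = True Negative Fraction
fpf = fp / (fp + tn)      # 1 - specificity
fnf = fn / (tp + fn)      # 1 - sensitivity

print(f"TPF={tpf:.2f}  FNF={fnf:.2f}  (sum={tpf + fnf:.0f})")
print(f"TNF={tnf:.2f}  FPF={fpf:.2f}  (sum={tnf + fpf:.0f})")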

Probability and StatSpeak


Not content with the above terms and abbreviations, statisticians have further confused things using the following
sort of terminology:
P( T+ | D- )
Frightening, isn't it? Well, not when one realises that the above simply reads "the probability of the test being
positive, given that the disease is not present". T+ is simply an abbreviation for "a positive test", and "D-" is
similarly a shorthand for "the disease isn't present". P(something) is a well-accepted abbreviation for "the probability
of the event something", and the vertical bar means "given that". Not too difficult! So here are the translations:
Statement       Translation
P(T+ | D+)      sensitivity = true positive fraction = TPF
P(T- | D-)      specificity = true negative fraction = TNF
P(T+ | D-)      FPF
P(T- | D+)      FNF

Using similar notation, one can also talk about the prevalence of a disease in a population as "P(D+)". Remember
(we stress this again!) that the false negative fraction is the same as one minus the true positive fraction, and
similarly, FPF = 1 - TNF.
KISS
We'll keep it simple. From now on, we will usually talk about TPF, TNF, FPF and FNF. If you like terms like
sensitivity, specificity, bully for you. Substitute them where required!

Truth
Consider our table again:

Actuality v the TEST         SEPSIS    NO sepsis
"high" TEST* (positive)       TPF        FPF
"low" TEST* (negative)        FNF        TNF

See how we've assumed that we have absolute knowledge of who has the disease (here, sepsis), and who doesn't. A
good intensivist will probably give you a hefty swipe around the ears if you go to her and say that you have an
infallible test for "sepsis". Until fairly recently, there weren't even any good definitions of sepsis! Fortunately, Roger
Bone (and his committee) came up with a fairly reasonable definition. The ACCP/SCCM consensus criteria [Crit Care Med 1992 20 864-74] first define something called the Systemic Inflammatory Response Syndrome, characterised by at least two of:
1. Temperature under 36°C or over 38°C;
2. Heart rate over 90/min;
3. Respiratory rate over 20/min or PaCO2 under 32 mmHg;
4. White cell count under 4000/mm3 or over 12000/mm3, or over 10% immature forms.
The above process is often abbreviated to "SIRS". The consensus criteria then go on to define sepsis:
When the systemic inflammatory response syndrome is the result of a confirmed infectious process, it is termed
'sepsis'.
Later, they define 'severe sepsis' (which is sepsis associated with organ dysfunction, hypoperfusion, or hypotension.
"Hypoperfusion and perfusion abnormalities may include, but are not limited to lactic acidosis, oliguria, or an acute
alteration in mental status"). Finally, 'septic shock' is defined as sepsis with hypotension, despite adequate fluid
resuscitation, along with the presence of perfusion abnormalities. Hypotension is a systolic blood pressure under 90
mmHg or a reduction of 40(+) mmHg from baseline.
The above definitions have been widely accepted. Now, there are many reasons why such definitions can be
criticised. We will not explore such criticism in detail but merely note that:
1. The definition of SIRS appears to be over-inclusive (Almost all patients in ICU will
conform to the definition at some time during their stay);
2. Various modifications of the third criterion (respiratory rate) have been used to
accommodate patients on mechanical ventilation;
3. The use of high or low values for temperature and white cell count appears to
exclude patients who might be 'in transition' from low to high, or high to low values!
4. Proof that SIRS "is the result of an infectious process" may be difficult or impossible
to achieve. 'Proof' of anything in ICU (as opposed to 'showing an association') is
particularly difficult because of the multiple problems experienced by patients. (Quite
apart from the philosophical problems posed by 'proof')!

5. It may be difficult to establish whether infecting organisms are present. Even if


adequate quantities of culture material have been collected at the right time, and
before antibiotics have been started, and your microbiology laboratory maintains
superb standards of quality control, infecting organisms may still be missed. Some
have even claimed that organisms (or their toxic products) enter the portal vein and
cause sepsis, but don't get into the systemic circulation!
6. Evidence of the presence of bacteria in an organ or tissue (say lung, or blood) is not
evidence that the bacteria are causing the patient's systemic illness. Ventilated
patients are often colonised by bacteria, without being infected; intravascular lines
may likewise be colonised without the bacteria necessarily causing SIRS.
Despite the above limitations, one needs some starting point in defining sepsis, and we will use the ACCP/SCCM
criteria. Our problem then becomes one of differentiating between patients with SIRS without evidence of bacterial
infection, and patients who "truly" have sepsis. (We will not here examine whether certain patients have severe
systemic infection without features of SIRS).
The magnificent ROC!
Remember that, way back above, we said that our TEST is "positive" if the value was above some arbitrary cutoff, and "negative" if below? Central to the idea of ROC curves (receiver operating characteristic, otherwise called 'relative operating characteristic' curves) is this idea of a cutoff level. Let's imagine that we have two populations: septic and non-septic patients with SIRS, for example. We have a TEST that we apply to each patient in each population in turn, and we get numeric results for each patient. We then plot histograms of these results, for each population, thus:
Play around with the above simple applet - move the (green) demarcating line from low to high (left to right), and
see how, as you move the test threshold from left to right, the proportion of false positives decreases. Unfortunately,
there is a problem - as we decrease the false positives, so the true positives also decrease! As an aside, note how we
have drawn the curve such that where the curves overlap, we've shaded the overlap region. This is ugly, so in future,
we'll leave the overlap to your imagination, thus:
Now we introduce the magnificent ROC! All an ROC curve is, is an exploration of what happens to TPF and FPF as
we vary the position of our arbitrary TEST threshold. (AUC refers to the Area under the curve and will be discussed
later).
Watch how, as you move the test threshold from right to left using the 'slider' bar at the bottom, so the corresponding
point on the ROC curve moves across from left to right! Why is this? Simple. If our threshold is very high, then
there will be almost no false positives .. but we won't really identify many true positives either. Both TPF and FPF
will be close to zero, so we're at a point low down and to the left of the ROC curve.
As we move our test threshold towards a more reasonable, lower value, so the number of true positives will increase
(rather dramatically at first, so the ROC curve moves steeply up). Finally, we reach a region where there is a
remarkable increase in false positives - so the ROC curve slopes off as we move our test threshold down to
ridiculously low values.
And that's really that! (We will of course explore a little further).
Playing with ROCs
In this section we will fool around with ROCs. We will:
1. Create ROC curves;

2. Find out why the area under the ROC curve is non-parametric, and why this is
important;
3. Learn to calculate required sample sizes;
4. Compare the areas under two ROC curves;
5. Examine the effects of noise, a bad 'gold standard', and other sources of error.
Let's play some more. In the following example, see how closely the two curves are superimposed, and how flat the
corresponding ROC curve is! This demonstrates an important property of ROC curves - the greater the overlap of
the two curves, the smaller the area under the ROC curve.
Vary the curve separation using the upper "slider" control, and see how the ROC curve changes. When the curves
overlap almost totally the ROC curve turns into a diagonal line from the bottom left corner to the upper right corner.
What does this mean?
Once you've understood what's happening here, then the true power of ROCs will be revealed. Let's think about this
carefully..
Let's make an ROC curve
Consider two populations, one of "normal" individuals and another of those with a disease. We have a test for the
disease, and apply it to a mixed group of people, some with the disease, and others without. The test values range
from (say) zero to a very large number - we rank the results in order. (We have rather arbitrarily decided that patients
with bigger test values are more likely to be 'diseased' but remember that this is not necessarily the case. Of the
thousand possibilities, consider patients with low serum calcium concentrations and hypoparathyroidism - here the
low values are the abnormal ones). Now, here's how we construct our curve..
1. Start at the bottom left hand corner of the ROC curve - here we know that both FPF
and TPF must be zero (This corresponds to having the green 'test threshold' line in
our applet way over on the right);
2. Now examine the largest result. In order to start constructing our ROC curve, we set
our test threshold at just below this large result - we move the green marker slightly
left. Now, if this, the first result, belongs to a patient with the disease, then the case
is a true positive, the TPF must now be bigger, and we plot our first ROC curve point
by moving UP on the screen and plotting a point. Conversely, if the disease is absent,
we have a false positive, the FPF is now greater than zero, and we move RIGHT on
the screen and plot our point.
3. Set the test threshold lower, to just below the second largest result, and repeat the
process described in (2).
4. .. and so on until we've moved the threshold down to below the lowest test value. We
will now be in the upper right hand corner of the ROC curve - because our green
threshold marker is below the lowest value, all results will be classified as positive, so
the TPF and FPF will both be 1.0 !
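The recipe above translates almost directly into code; here is a minimal sketch with invented test values and disease labels:

# Construct ROC points by sweeping the threshold down through the ranked results.
# Data are invented: (test value, has_disease).
cases = [(9.1, True), (8.4, True), (7.9, False), (7.2, True), (6.5, False),
         (5.9, True), (5.1, False), (4.4, False), (3.0, False)]

cases.sort(key=lambda c: c[0], reverse=True)      # largest result first
n_pos = sum(1 for _, d in cases if d)
n_neg = len(cases) - n_pos

tp = fp = 0
roc = [(0.0, 0.0)]                                # start at the bottom-left corner
for value, diseased in cases:                     # threshold just below each result
    if diseased:
        tp += 1                                   # a true positive: move UP
    else:
        fp += 1                                   # a false positive: move RIGHT
    roc.append((fp / n_neg, tp / n_pos))

print(roc)                                        # ends at (1.0, 1.0)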
Consider two tests. The first test is good at discriminating between patients with and without the disease. We'll call it
test A. The second test is lousy - let's call it test Z. Let's examine each:

Test Z. Because this is a lousy test, as we move our green marker left, picking off either false or true positives, our likelihood of encountering either is much the same. For every true positive (which moves us UP) we are likely to encounter a false positive that moves us to the RIGHT, as we plot the graph. You can see what will happen: we'll get a more-or-less diagonal line from the bottom left corner of the ROC curve up to the top right corner.
Test A. This is a good test, so we're initially more likely to encounter true positives as
we move our green marker left. This means that initially our curve will move steeply
UP. Only later, as we start to encounter fewer and fewer true positives, and more and
more false positives, will the curve ease off and become more horizontal!

From the above, you can get a good intuitive feel that the closer the ROC curve is to a diagonal, the less useful the
test is at discriminating between the two populations. The more steeply the curve moves up and then (only later)
across, the better the test. A more precise way of characterising this "closeness to the diagonal" is simply to look at
the AREA under the ROC curve. The closer the area is to 0.5, the more lousy the test, and the closer it is to 1.0, the
better the test!
The Area under the ROC curve is non-parametric!
The real beauty of using the area under this curve is its simplicity. Consider the above process we used to construct
the curve - we simply ranked the values, decided whether each represented a true or false positive, and then
constructed our curve. It didn't matter whether result number 23 was a zillion times greater than result number 24, or
0.00001% greater. We certainly didn't worry about the 'shapes of the curves', or any sort of curve parameter. From
this you can deduce that the area under the ROC curve is not significantly affected by the shapes of the underlying
populations. This is most useful, for we don't have to worry about "non-normality" or other curve shape worries, and
can derive a single parameter of great meaning - the area under the ROC curve!
We're about to get rather technical, so you might wish to skip the following,
and move on to the nitty gritty!

In an authoritative paper, Hanley and McNeil [Radiology 1982 143 29-36] explore the concept of the area under the
ROC curve. They show that there is a clear similarity between this quantity and well-known (at least, to statisticians)
Wilcoxon (or Mann-Whitney) statistics. Considering the specific case of randomly paired normal and abnormal
radiological images, the authors show that the area under the ROC curve is a measure of the probability that the
perceived abnormality of the two images will allow correct identification. (This can be generalised to other uses of
the AUC). Note that ROC curves can be used even when test results don't necessarily give an accurate number! As
long as one can rank results, one can create an ROC curve. For example, we might rate x-ray images according to
degree of abnormality (say 1=normal, 2=probably normal, and so on to 5=definitely abnormal), check how this
ranking correlates with our 'gold standard', and then proceed to create an ROC curve.
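A short sketch of that rank-based reading of the AUC - the proportion of abnormal/normal pairs in which the abnormal case gets the higher score, with ties counted as one half (the scores are invented):

# AUC as the probability that a randomly chosen abnormal scores higher than
# a randomly chosen normal (ties count 1/2).  Scores are invented.
abnormal = [8.2, 7.5, 6.9, 6.1, 5.8]
normal   = [6.5, 5.5, 5.2, 4.9, 4.0, 3.8]

wins = 0.0
for a in abnormal:
    for n in normal:
        if a > n:
            wins += 1.0
        elif a == n:
            wins += 0.5

auc = wins / (len(abnormal) * len(normal))
print("AUC estimate:", round(auc, 3))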
Hanley and McNeil explore further, providing methods of working out standard errors for ROC curves. Note that
their estimates for standard error (SE) depend to a degree on the shapes of the distributions, but are conservative so
even if the distributions are not normal, estimates of SE will tend to be a bit too large, rather than too small. (If
you're unfamiliar with the concept of standard error, consult a basic text on statistics).
In short, they calculate standard error as

SE(A) = sqrt( [ A(1 - A) + (na - 1)(Q1 - A^2) + (nn - 1)(Q2 - A^2) ] / (na * nn) )

where A is the area under the curve, na and nn are the number of abnormals and normals respectively, and Q1 and Q2 are estimated by:

Q1 = A / (2 - A)
Q2 = 2A^2 / (1 + A)
Note that it is extremely silly to rely on Gaussian-based formulae to calculate standard error when the number of
abnormal and normal cases in a sample are not the same. One should use the above formulae.
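Written out as code (a direct transcription of the formula as reconstructed above; the example reuses the thyroid sample sizes, and the AUC value is invented):

from math import sqrt

def hanley_mcneil_se(auc, n_abnormal, n_normal):
    """Standard error of an ROC area, per Hanley & McNeil (1982), as given above."""
    q1 = auc / (2 - auc)
    q2 = 2 * auc**2 / (1 + auc)
    num = (auc * (1 - auc)
           + (n_abnormal - 1) * (q1 - auc**2)
           + (n_normal   - 1) * (q2 - auc**2))
    return sqrt(num / (n_abnormal * n_normal))

print(hanley_mcneil_se(0.85, 32, 93))   # about 0.045 for these numbers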

Sample Size
Now that we can calculate the standard error for a particular sample size, (given a certain AUC), we can plan sample
size for a study! Simply vary sample size until you achieve an appropriately small standard error. Note that, to do
this, you do need an idea of the area under the ROC curve that is anticipated. Hanley and McNeil even provide a
convenient diagram (Figure 3 in their article) that plots number against standard error for various areas under the
curve. As usual, standard errors vary with the square root of the number of samples, and (as you might expect)
numbers required will be smaller with greater AUCs.
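A small sketch of that planning step, repeating the SE function for self-containment and assuming, for simplicity, equal numbers of normals and abnormals (a simplifying assumption of this sketch, not a requirement of Hanley and McNeil):

from math import sqrt

def hanley_mcneil_se(auc, n_abnormal, n_normal):
    q1 = auc / (2 - auc)
    q2 = 2 * auc**2 / (1 + auc)
    num = auc*(1 - auc) + (n_abnormal - 1)*(q1 - auc**2) + (n_normal - 1)*(q2 - auc**2)
    return sqrt(num / (n_abnormal * n_normal))

# Find the smallest equal-sized groups giving an acceptably small standard error
# for an anticipated AUC.
target_se, anticipated_auc = 0.03, 0.80
n = 10
while hanley_mcneil_se(anticipated_auc, n, n) > target_se:
    n += 1
print("roughly", n, "abnormals and", n, "normals needed")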
Planning sample size when comparing two tests
ROC curves should be particularly valuable if we can use them to compare the performance of two tests. Such
comparison is also discussed by Hanley and McNeil in the above mentioned paper, and a subsequent one [Hanley JA
& McNeil BJ, Radiology 1983 148 839-43] entitled A method of comparing the areas under Receiver Operating
Characteristic curves derived from the same cases.
Commonly in statistics, we set up a null hypothesis (that there is no statistically significant difference between two
populations). If we reject such a hypothesis when it should be accepted, then we've made a Type I error. It is a
tradition that we allow a one in twenty chance that we have made a type I error, in other words, we set our criterion
for a "significant difference" between two populations at the 5% level. We call this cutoff of 0.05 "alpha".
Less commonly discussed is "beta" (β), the probability associated with committing a Type II error. We commit a type II error if we accept our null hypothesis when, in fact, the two populations do differ, and the hypothesis should have been rejected. Clearly, the smaller our sample size, the more likely is a type II error. It is common to be more tolerant with beta - to accept, say, a one in ten chance that we have missed a significant difference between the two populations. Often, statisticians refer to the power of a test. The power is simply (1 - β), so if β is 10%, then the power is 90%.
In their 1982 paper, Hanley & McNeil provide a convenient table (Table III) that gives the numbers of normal and
abnormal subjects required to provide a probability of 80%, 90% or 95% of detecting differences between various
ROC areas under the curve (with a one sided alpha of 0.05). For example, if we have one AUC of 0.775 and a
second of 0.900, and we want a power of 90%, then we need 104 cases in each group (normals and abnormals).
Note that generally, the greater the areas under both curves, the smaller the difference between the areas needs to be,
to achieve significance. The tables are however not applicable where two tests are applied to the same set of cases.
The approach to two different tests being applied to the same cases is the subject of Hanley & McNeil's second
(1983) paper. This approach is discussed next.
Actually comparing two curves
This can be non-trivial. Just because the areas are similar doesn't necessarily mean that the curves are not different
(they might cross one another)! If we have two curves of similar area and still wish to decide whether the two curves
differ, we unfortunately have to use complex statistical tests - bivariate statistical analysis.
In the much more common case where we have different areas derived from two tests applied to different sets of
cases, then it is appropriate to calculate the standard error of the difference between the two areas, thus:

SE(A1 - A2) = sqrt( SE^2(A1) + SE^2(A2) )

Such an approach is NOT appropriate where two tests are applied to the same set of patients. In their 1983 paper,
Hanley and McNeil show that in these circumstances, the correct formula is:

SE(A1 - A2) = sqrt( SE^2(A1) + SE^2(A2) - 2r * SE(A1) * SE(A2) )

where r is a quantity that represents the correlation induced between the two areas by the study of the same set of
cases. (The difference may be non-trivial - if r is big, then we will need far fewer cases to demonstrate a difference
between tests on the same subjects)!
Once we have the standard error of the difference in areas, we can then calculate the statistic:
z = (A1 - A2) / SE(A1-A2)
If z is above a critical level, then we accept that the two areas are different. It is common to set this critical level at
1.96, as we then have our conventional one in twenty chance of making a type I error in rejecting the hypothesis that
the two curves are similar. (Simplistically, the value of 1.96 indicates that the areas of the two curves are two
standard deviations apart, so there is only an ~5% chance that this occurred randomly and that the curves are in fact
the same).
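As a concrete illustration, here is a minimal Python sketch of this z-test for two areas obtained from different groups of cases, using the standard-error formula above. The areas and standard errors in the example call are hypothetical numbers, not results from any study discussed here.

```python
from math import sqrt
from scipy.stats import norm

def compare_independent_aucs(a1, se1, a2, se2):
    """Compare two ROC areas derived from DIFFERENT sets of cases.

    SE(A1 - A2) = sqrt(SE^2(A1) + SE^2(A2)), then z = (A1 - A2) / SE(A1 - A2).
    Returns the z statistic and the two-sided p-value.
    """
    se_diff = sqrt(se1 ** 2 + se2 ** 2)
    z = (a1 - a2) / se_diff
    p = 2 * (1 - norm.cdf(abs(z)))
    return z, p

# Hypothetical example: AUCs of 0.775 and 0.900 with made-up standard errors.
z, p = compare_independent_aucs(0.775, 0.040, 0.900, 0.030)
print(f"z = {z:.2f}, p = {p:.3f}")   # |z| > 1.96 -> the areas differ at the 5% level
```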
In the circumstance where the same cases were studied, we still haven't told you how to calculate the magic number
r. This isn't that simple. Assume we have two tests, T1 and T2, that classify our cases into either normals (n) or
abnormals (a), and that we have already calculated the ROC AUCs for each test (let's call these areas A1 and A2). The
procedure is as follows:
1. Look at (n), the non-diseased patients. Find how the two tests correlate for these
patients, and obtain a value rn for this correlation. (We'll soon reveal how to obtain
this value);
2. Look at (a), the abnormals, and similarly derive ra, the correlation between the two
tests for these patients;
3. Average out rn and ra;
4. Average out the areas A1 and A2, in other words, calculate (A1+A2)/2;
5. Use Hanley and McNeil's Table I to look up a value of r, given the average areas, and
average of rn and ra.
You now have r and can plug it into the standard error equation. But wait a bit, how do we
calculate rn and ra? This depends on your method of scoring your data - if you are measuring
things on an interval scale (for example, blood pressure in millimetres of mercury), then
something called the Pearson product-moment correlation method is appropriate. For ordinal
information (e.g. saying that 'this image is definitely abnormal and that one is probably
abnormal'), we use something called the Kendall tau. Either can be derived from most
statistical packages.
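A minimal sketch of steps 1 to 4, assuming the two tests' raw results are held in paired arrays for the normal and abnormal groups (the numbers below are invented purely for illustration; the final step, looking up r in Hanley and McNeil's Table I, still has to be done by hand):

```python
import numpy as np
from scipy.stats import pearsonr, kendalltau

# Invented paired measurements: tests T1 and T2 applied to the SAME cases.
t1_normals   = np.array([1.2, 0.8, 1.5, 1.1, 0.9, 1.3])
t2_normals   = np.array([1.0, 0.7, 1.6, 1.2, 1.1, 1.4])
t1_abnormals = np.array([2.1, 2.8, 1.9, 3.0, 2.5])
t2_abnormals = np.array([2.0, 2.9, 2.2, 2.7, 2.6])

# Interval-scaled data: Pearson product-moment correlation.
# (For ordinal ratings, substitute kendalltau for pearsonr.)
r_n, _ = pearsonr(t1_normals, t2_normals)       # step 1: correlation in the normals
r_a, _ = pearsonr(t1_abnormals, t2_abnormals)   # step 2: correlation in the abnormals
r_avg = (r_n + r_a) / 2                         # step 3: average the two correlations

A1, A2 = 0.85, 0.90                             # hypothetical AUCs for the two tests
a_avg = (A1 + A2) / 2                           # step 4: average the two areas
print(f"average r = {r_avg:.2f}, average area = {a_avg:.2f}")
# Step 5: look these two averages up in Hanley & McNeil's Table I to obtain r.
```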
Sources of Error
The effect of noise
Let's consider how "random noise" might affect our curve. Still assuming that we have a 'gold standard' which
confirms the presence or absence of disease, what happens as 'noise' confuses our test, in other words, when the test

results we are getting are affected by random variations over which we have no control. If we start off by assuming
our test correlates perfectly with the gold standard, then the area under the ROC curve (AUC) will be 1.0. As we
introduce noise, so some test results will be mis-classified - false positives and false negatives will creep in. The
AUC will diminish.
What if the test is already pretty crummy at differentiating 'normals' from 'abnormals'? Here things become more
complex, because some false positives or false negatives might accidentally be classified as true values. You can see
however, that on average (provided sample numbers are sufficient and the test has some discriminatory power),
noise will in general degrade test performance. It's unlikely that random noise will lead you to believe that the test is
performing better than it really is - a most desirable characteristic!
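If you want to see this for yourself, here is a small simulation (a sketch under our own assumptions, not anything taken from the text): test values for 'normals' and 'abnormals' are drawn from two overlapping distributions, increasing amounts of random noise are added, and the AUC is recomputed each time using the rank definition of the area.

```python
import numpy as np

rng = np.random.default_rng(42)

def auc_by_ranks(neg, pos):
    """AUC = probability that a random 'abnormal' scores higher than a random 'normal'."""
    neg, pos = np.asarray(neg), np.asarray(pos)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

n = 500
normals   = rng.normal(0.0, 1.0, n)   # 'gold standard' negative cases
abnormals = rng.normal(2.0, 1.0, n)   # 'gold standard' positive cases

for noise_sd in (0.0, 0.5, 1.0, 2.0, 4.0):
    noisy_normals   = normals + rng.normal(0.0, noise_sd, n)
    noisy_abnormals = abnormals + rng.normal(0.0, noise_sd, n)
    print(f"noise SD {noise_sd:3.1f}: AUC = {auc_by_ranks(noisy_normals, noisy_abnormals):.3f}")
# The AUC drifts down towards 0.5 as the noise swamps the genuine separation.
```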
Independence from the gold standard
The one big catch with ROC curves is where the test and gold standard are not independent. This interdependence
will give you spuriously high area under the ROC curve. Consider the extreme case where the gold standard is
compared to itself (!) - the AUC will be 1.0, regardless. This becomes extremely worrying where the "gold standard"
is itself a bit suspect - if the test being compared to the standard now also varies as does the standard, but both have
a poor relationship to the disease you want to detect, then you might believe you're doing well and making
appropriate diagnoses, but be far from the truth! Conversely, if the gold standard is a bit shoddy, but independent
from the test, then the effect will be that of 'noise' - the test characteristics will be underestimated (often called
"nondifferential misclassification" by those who wish to confuse you)!
Other sources of error
It should also be clear that any bias inherent in a test is not transferred to bias the ROC curve. If one is biased in
favour of making a diagnosis of abnormality, this merely reflects a position on the ROC curve, and has no impact on
the overall shape of the curve.
Other errors may still creep in. A fine article that examines sources of error (and why, after initial enthusiasm, so
many tests fall into disfavour) is that of Ransohoff and Feinstein [New Engl J Med 1978 299(17) 926-30]. With
every examination of a test one needs to look at:
1. Whether the full spectrum of a disease process is being examined. If only severe
cases are reported on, then the test may be useless in milder cases (both pathologic
and clinical components of the disease should represent its full spectrum). A good
example is with malignant tumours - large, advanced tumours will be easily picked
up, and a screening test might also perform well in this setting, but miss early
disease!
2. Comparative ('control') patients. These should be similar - for example, "the search
for a comparative pathological spectrum should include a different process in the
same anatomical location .. and the same process in a different anatomical location"
(citing the case of a test for say, cancer of the colon);
3. Co-morbid disease. This may affect whether the test turns out positive or negative.
4. Verification bias. If the clinician is not blinded to the result of the test, a positive may
make him scrutinise the patient very carefully and find the disease (which he missed
in the other patient who had a negative test). Another name for verification bias is
work-up bias. Verification bias is common and counter-intuitive. People tend to get
rather angry when you say it might exist, for they will reply along the lines of "We
confirmed all cases at autopsy, dammit!" (The positive test may have influenced the
clinicians to send the patients to autopsy). A good test will be more likely to influence
selection for 'verification', and thus introduce a stronger bias! (Begg & McNeil
describe this bias well, and show how it can be corrected for).

5. Diagnostic review bias. If the test is first performed, and then the definitive diagnosis
is made, knowledge of the test result may affect the final 'definitive' diagnosis.
Similar is "test-review bias", where knowledge of the 'gold standard' diagnosis might
influence interpretation of the test. Studies in radiology have shown that provision of
clinical information may move observers along an ROC curve, or even to a new curve
entirely! ('Co-variate analysis' may help in controlling for this form of bias).
6. "Incorporation bias". This has already been mentioned above under "independence
from the gold standard". Here, the test is incorporated into the evidence used to
diagnose the disease!
7. Uninterpretable test results. These are infrequently reported in studies! Such results
should be considered 'equivocal' if the test is not repeatable. However, if the test is
repeatable, then correction (and estimation of sensitivity and specificity) may be
possible, provided the variation is random. Uninterpretable tests may have a positive
association with the disease state (or even with 'normality').
8. Interobserver variation. In studies where observer abilities are important, different
observers may perform on different ROC curves, or move along the same ROC curve.
An Example: Procalcitonin and Sepsis
Let's see how ROC curves have been applied to a particular TEST, widely promoted as an easy and quick method of
diagnosing sepsis. As with all clinical medicine, we must first state our problem. We will simply repeat our
SIRS/sepsis problem from above:
The Problem
Some patients with SIRS have underlying bacterial infection, whereas others do not. It is generally highly
inappropriate to empirically treat everyone with SIRS as if they had bacterial infection, so we need a reliable
diagnostic test that tells us early on whether bacterial infection is present.
Waiting for culture results takes days, and such delays will compromise infected patients. Although positive
identification of bacterial infection is our gold standard, the delay involved (1 to 2 days) is too great for us to wait
for cultures. We need something quicker. The test we examine will be serum procalcitonin.
Clearly what we now need is to perform a study on patients with SIRS, in whom bacterial infection is suspected.
These patients should then have serum PCT determination, and adequate bacteriological investigation. Knowledge
of the presence or absence of infection can then be used to create a receiver operating characteristic curve for the
PCT assay. We can then examine the utility of the ROC curve for distinguishing between plain old SIRS, and sepsis.
(We might even compare such a curve with a similar curve constructed for other indicators of infection, such as C-reactive protein).
(Note that there are other requirements for our PCT assay, for example, that the test is reproducible. In addition, we
must have reasonable evidence that the 'gold standard' test - here interpretation of microbiological data - is
reproducibly and correctly performed).
PCT - a look at the literature
Fortunately for us, there's a 'state of the art' supplement to Intensive Care Medicine (2000 26 S 145-216) where most
of the big names in procalcitonin research seem to have had their say. Let's look at those articles that seem to have
specific applicability to intensive care. Interestingly enough, most of these articles make use of ROC analysis! Here
they are:

1. Brunkhorst FM, et al (pp 148-152) Procalcitonin for the early diagnosis and
differentiation of SIRS, sepsis, severe sepsis and septic shock
2. Cheval C. et al (pp 153-158) Procalcitonin is useful in predicting the bacterial origin
of an acute circulatory failure in critically ill patients
3. Rau B. et al (pp 158-164) The Clinical Value of Procalcitonin in the prediction of
infected necro[s]is in acute pancreatitis
4. Reith HB. et al (pp 165-169) Procalcitonin in patients with abdominal sepsis
5. Oberhoffer M. et al (pp170-174) Discriminative power of inflammatory markers for
prediction of tumour necrosis factor-alpha and interleukin-6 in ICU patients with
systemic inflammatory response syndrome or sepsis at arbitrary time points
Quite an impressive list! Let's look at each in turn:
1. Brunkhorst FM, et al (pp 148-152)
Procalcitonin for the early diagnosis and differentiation of SIRS, sepsis, severe sepsis
and septic shock
The authors recruited 185 consecutive patients. Unfortunately, only seventeen patients in the study had
uncomplicated 'SIRS' - the rest had sepsis (n=61), 'severe sepsis' (n=68) or septic shock (n=39). The authors
then indulge in intricate statistical manipulation to differentiate between sepsis, severe sepsis, and septic
shock - they even construct ROC curves (although we are not told, when they construct an ROC curve for
'prediction of severe sepsis' what those with severe sepsis are being differentiated from - presumably the
rest of the population)! The authors do not address why, in their ICU, so many patients had sepsis, and so
few had SIRS without sepsis. The bottom line is that the results of this study, with an apparently highly
selected group of just seventeen 'non-septic' SIRS patients, seem useless for addressing our problem of
differentiating SIRS and sepsis! Their ROC curves seem irrelevant to our problem. (Parenthetically one
might observe that if you walk into their ICU and find a patient with SIRS, there would appear to be an
over 90% chance that the patient has sepsis - who needs procalcitonin in such a setting)?
2. Cheval C. et al (pp 153-158)
Procalcitonin is useful in predicting the bacterial origin of an acute circulatory failure
in critically ill patients
This study looked at four groups:
1. septic shock (n=16);
2. shock without infection(n=18);
3. SIRS related to proved infection(n=16);
4. ICU patients without shock or infection(n=10).
The choice of groups is somewhat unfortunate! Where are the patients we really want
to know about - those with SIRS but no infection? Reading on, we find that only four
of the patients in the fourth group met the criteria for SIRS! This study too does not
appear to help us in our quest! (The authors use ROC curves to analyse their patients
in shock, comparing those with and without sepsis. The numbers look impressive - an
AUC of 0.902 for procalcitonin's ability to differentiate between septic shock and
'other' causes of shock. But hang on - let's look at the 'other' causes of shock. We find
that in these cases, shock was due to haemorrhage(n=8), heart failure(n=7),
anaphylaxis(n=2), and 'hypovolaemia' (n=1). One doesn't need a PCT level to decide
whether a patient is in heart failure, bleeding to death, etc. A study whose title
promises more than is delivered)!
3. Rau B. et al (pp 158-164)
The Clinical Value of Procalcitonin in the prediction of infected necro[s]is in acute
pancreatitis
Sixty one patients were entered into this study. Twenty two had oedematous pancreatitis, 18 had sterile
necrosis, and 21 had infected necrosis. Serial PCT levels were determined over a period of fourteen days.
The 'gold standard' used to determine whether infected necrosis was present was fine needle aspiration of
the pancreas, combined with results of intra-operative bacteriology. We learn that
"PCT concentrations were significantly higher from day 3-13 after onset of symptoms in patients with
[infected necrosis, compared with sterile necrosis]". {The emphasis is ours}.
The authors then inform us that
"ROC analysis for PCT and CRP has been calcul[a]ted on the basis of at least two maximum values
reached during the total observation period. By comparison of the areas under the ROC curve (AUC), PCT
was found to have the closest correlation to the presence and severity of bacterial/fungal infection of
necrosis and was clearly superior to CRP in this respect (AUC for PCT: 0.955, AUC for CRP: 0.861;
p<0.02)."
Again, the numbers look impressive. Hold it! Does this mean that we have to do daily PCT levels on all of
our patients, and then take the two maximum values, and average them in order to decide who has infected
necrosis?? Even more tellingly, we are not provided with information about how PCT might have been
used in prospectively differentiating between those who developed sepsis and those who didn't, before
bacterial cultures became available. In other words, was PCT useful in identifying infected necrosis early
on? If I have a sick patient with pancreatitis, can I base my management decision on a PCT level? This
vital question is left unanswered, but the lack of utility of PCT in the first two days is of concern!
4. Reith HB. et al (pp 165-169)
Procalcitonin in patients with abdominal sepsis
A large study compared 246 patients with "infective or septic episodes confirmed at laparotomy" with 66
controls. And this is where the wheels fall off, for the sixty six controls were undergoing elective operation!
Clearly, any results from such a study are irrelevant to the problem ICU case where you are agonizing over
whether to send the patient for a laparotomy - "is there sepsis or not"?
5. Oberhoffer M. et al (pp170-174)
Discriminative power of inflammatory markers for prediction of tumour necrosis
factor-alpha and interleukin-6 in ICU patients with systemic inflammatory response
syndrome or sepsis at arbitrary time points
The authors reason that TNF and IL-6 levels predict mortality from sepsis. Strangely enough, they do not
appear to have looked at actual mortality in the 243 patients in the study! This is all very well if you're
interested in deciding whether the TNF and IL-6 levels in your patients are over their cutoff levels of
40pg/ml and 500pg/ml respectively, but perhaps of somewhat less utility unless such levels themselves
absolutely predict fatal outcome (they don't). From a clinical point of view, this study suffers from use of a
'gold standard' that may not be of great overall relevance. A hard end point (like death) would have been far
better. (In addition, the authors are surprisingly coy with their AUCs. If you're really keen, you might try
and work these out from their Table 4).

A Summary
Four of the five papers above used ROC analysis. In our opinion, this use provides us with little or no clinical
direction. If the above articles reflect the 'state of the art' as regards use of procalcitonin in distinguishing between
the systemic inflammatory response syndrome and sepsis, we can at present find no justification in using the test on
our critically ill patients! (This does not mean that the test is of no value, simply that we have no substantial
evidence that it is of use).
What would be most desirable is a study that conformed to the requirements we gave above - a study that examines a substantial number of patients with either:
- SIRS not complicated by sepsis; OR
- sepsis;
and demonstrates unequivocally that serum procalcitonin is useful in differentiating between the two early on, before blood cultures become positive. Clearly a substantial area under an appropriately constructed ROC curve would be powerful evidence in support of using the test.
A second example - Tuberculosis, ADA, and pleural fluid
For our second example, we'll use some data on Adenosine Deaminase (ADA) levels determined on pleural
effusions. It is well known that ADA levels in empyemas may be high, (we might explore this later), so at first we
will concentrate on data for pleural fluid obtained from patients with either neoplasms, or those with documented
tuberculosis (TB). The data and ROC curve can be downloaded as a self-extracting Microsoft Excel spreadsheet. To
derive full benefit from this example, some knowledge of spreadsheets (specifically, Excel) is desirable but probably
not vital. The data are the property of Dr Mark Hopley of Chris-Hani Baragwanath Hospital (CHB, the largest
hospital in the world).
The spreadsheet contains three important columns of data:
1. The leftmost column contains ADA levels;
2. The next column contains a '1' if the patient had documented tuberculosis, and
otherwise a zero;
3. The third column contains a '1' only if the patient had documented carcinoma. There
were six patients who had both carcinoma and tuberculosis - these have been
excluded from analysis.
There were eight hundred and twelve tuberculosis patients, and one hundred and two patients with malignant pleural
effusion. How do we go about creating an ROC curve? The steps, as demonstrated in the worksheet, are:
1. Sort the data according to the ADA level - largest values first;
2. Create a column where each row gives the total number of TB patients with ADA
levels greater than or equal to the ADA value for that row;
3. Create a similar column for patients with cancer;
4. Create two new columns, containing the TPF and FPF for each row. In other words, we
position our 'green marker' (remember our ROC applet!) just below the current ADA
level for that row, and then work out a TPF and an FPF at that cutoff level. We work
out the TPF by taking the number of TB cases identified at or above the ADA level for
the current row, and dividing by the total number of TB cases. We determine the FPF
by taking the number of "false alarms" (cancer patients) at or above that level, and
dividing by the total number of such non-TB patients.
We now have sufficient data to plot our ROC curve. Here it is:
[Figure: ROC curve for ADA - tuberculosis versus malignant pleural effusion]
We still need to determine the Area Under the Curve (AUC). We do this by noting that every time we move RIGHT
along the x-axis, we can calculate the increase in area by finding:
(how much we moved right) * (the current y value)
We can then add up all these tiny areas to get a final AUC. As shown in the spreadsheet, this works out at 85.4%,
which indicates that, in distinguishing between tuberculosis and neoplasia as a cause of pleural effusion, ADA seems
to be a fairly decent test!
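The same steps translate directly into a few lines of Python. This is only a sketch on made-up ADA-like values (the real data live in the spreadsheet mentioned above), but the logic - sort descending, accumulate TB and non-TB counts, convert these to TPF and FPF, then sum rectangles - is exactly the procedure just described:

```python
import numpy as np

# Invented illustration data: ADA levels, and 1 if the patient had documented TB, else 0.
ada   = np.array([85, 60, 55, 48, 44, 40, 35, 30, 25, 22, 18, 15, 12, 9, 6])
is_tb = np.array([ 1,  1,  1,  1,  0,  1,  1,  0,  1,  0,  0,  1,  0, 0, 0])

order = np.argsort(-ada)              # step 1: sort, largest ADA values first
is_tb = is_tb[order]

cum_tb    = np.cumsum(is_tb)          # step 2: TB cases at or above each cutoff
cum_nontb = np.cumsum(1 - is_tb)      # step 3: non-TB cases at or above each cutoff

tpf = cum_tb / is_tb.sum()            # step 4: true positive fraction at each cutoff
fpf = cum_nontb / (1 - is_tb).sum()   #         false positive fraction at each cutoff

# AUC: each time we move RIGHT along the x-axis, add (step right) * (current TPF).
fpf_with_origin = np.concatenate(([0.0], fpf))
auc = np.sum(np.diff(fpf_with_origin) * tpf)
print(f"AUC = {auc:.3f}")
```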
Here are the corresponding ROC curves for tuberculosis compared with inflammatory disorders. As expected, the
AUC is less for chronic inflammatory disorders, about 77.9%, and pretty poor at 63.9% for 'acute inflammation'
which mainly represents empyemas.
[Figure: ROC curves for ADA - tuberculosis versus chronic and acute inflammatory effusions]
Note that there were only 67 cases of "chronic inflammatory disorders", and thirty five with "acute inflammation".
Finally, let's look at TB versus "all other" effusion data - there were 393 "non-tuberculous" cases. The data include
the above 'cancer' and 'inflammatory' cases. The AUC is still a respectable 78.6%.
[Figure: ROC curve for ADA - tuberculosis versus all other effusions]
Is the above credible?


Through our analysis of ADA in pleural fluid, we've learnt how to create an ROC curve. But we still must ask ourselves questions about error and bias! Here are a few questions you have to ask - they will profoundly influence your interpretation and use of the above ROC curves:
- Are the data selected, or were all samples of pleural fluid subject to analysis?
- Does the hospital concerned have a peculiar case spectrum, or will your case profile be similar?
- How severe were the cases of tuberculosis - is the full spectrum of pleural effusions being examined?
- Should the cases who had two diseases (that is, carcinoma and tuberculosis) have been excluded from analysis?
- What co-morbid diseases were present (for example, Human Immunodeficiency Virus infection)?
- Was there verification bias introduced by, for example, a high ADA value being found, and the diagnosis of tuberculosis therefore being aggressively pursued?
- Were any test results uninterpretable?
- In how many cases was the diagnosis known before the test was performed? How many of the cases were considered by the attending physician to be "really problematical diagnoses"? One could even ask "How good were the physicians at clinically diagnosing the various conditions - did the ADA add to diagnostic sensitivity and specificity?"
{ Just as an aside, it's perhaps worth mentioning that the above ADA results are not normally distributed, for either
the 'tuberculosis' or the 'neoplasia' samples. Even taking the logarithms of the values (although it decreases the
skewness of the curves dramatically) doesn't quite result in normal distributions, so any ROC calculations that
assume normality are likely to give spurious results. Fortunately our calculations above make no such assumption.}
Working out Standard Errors
You can calculate Standard Errors for the Areas Under the Curves we've presented, using the following JavaScript
calculator. It's based on the formulae from above.
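The JavaScript calculator itself is not reproduced here, but the calculation is easy to redo in a few lines of code. The sketch below assumes the formula in question is the Hanley and McNeil (1982) standard error of an ROC area, with Q1 = A/(2 - A) and Q2 = 2A²/(1 + A); the example call simply plugs in the tuberculosis-versus-neoplasia figures quoted above (AUC 0.854, 812 TB cases, 102 malignant effusions).

```python
from math import sqrt

def se_auc_hanley_mcneil(auc, n_abnormal, n_normal):
    """Standard error of an ROC area (Hanley & McNeil, Radiology 1982).

    SE = sqrt( [A(1-A) + (na-1)(Q1 - A^2) + (nn-1)(Q2 - A^2)] / (na * nn) )
    with Q1 = A / (2 - A) and Q2 = 2 A^2 / (1 + A).
    """
    a = auc
    q1 = a / (2 - a)
    q2 = 2 * a * a / (1 + a)
    numerator = (a * (1 - a)
                 + (n_abnormal - 1) * (q1 - a * a)
                 + (n_normal - 1) * (q2 - a * a))
    return sqrt(numerator / (n_abnormal * n_normal))

print(f"SE = {se_auc_hanley_mcneil(0.854, n_abnormal=812, n_normal=102):.4f}")
```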

1. Exploring Accuracy
Accuracy, PPV and NPV
It would be great if we could lump things together in some way, and come up with a single number that could tell us
how well a test performs. One such number is represented by the area under the ROC. Another more traditional (and
far more limited) number is accuracy, commonly given as:
accuracy = number of correct diagnoses / number in total population
While we're about it, let's also consider a few other traditional terms:
- Positive predictive value (PPV) is of some interest to clinicians. It answers the question "How likely is the patient to have the disease, given that the test is positive?". You can work out that this is given by:
  true positives / all positive tests
  You'll find that positive (and negative) predictive values depend on the frequency of the disease in the population, which is one reason why you cannot just blindly apply tests, without considering whom you are applying them to!
- In a completely analogous fashion, we calculate the negative predictive value, which tells us how likely it is that the disease is NOT present, given that the test is negative. We calculate:
  true negatives / all negative tests
(Yet another name for the PPV is accuracy for positive prediction, and the negative predictive value, accuracy for negative prediction).
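For the record, here is a minimal sketch of these definitions applied to the four cells of a 2 x 2 table (the counts in the example call are simply those of test T1 from the worked example that follows):

```python
def test_summary(tp, fp, fn, tn):
    """Accuracy, PPV and NPV from the four cells of a 2x2 table."""
    total = tp + fp + fn + tn
    return {
        "accuracy": (tp + tn) / total,   # correct diagnoses / total population
        "PPV": tp / (tp + fp),           # true positives / all positive tests
        "NPV": tn / (tn + fn),           # true negatives / all negative tests
    }

print(test_summary(tp=60, fp=5, fn=40, tn=895))
```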

KISS(2)
We will refer to positive predictive value as PPV, and negative predictive value as NPV. Accuracy we'll refer to as
'accuracy' (heh).
An examination of 'accuracy'
Let's consider two tests with the same accuracy. Let's say we have a population of 1000 patients, of whom 100 have
a particular disease (D+). We apply our tests (call them T1 and T2) to the population, and get the following results.
Test performance: T1 (n=1000)
         D+     D-
  T+     60      5      PPV = 92.3%
  T-     40    895      NPV = 95.7%

Test performance: T2 (n=1000)
         D+     D-
  T+     95     40      PPV = 70.3%
  T-      5    860      NPV = 99.4%
See how the two tests have the same accuracy (a + d)/1000 = 95.5%, but they do remarkably different things. The
first test, T1, misses the diagnosis 40% of the time, but makes up for this by providing us with few false positives - the TNF is 99.4%. The second test is quite different - impressive at picking up the disease (a sensitivity of 95%) but
relatively lousy performance with false positives (a TNF of 95.5%). At first glance, if we accept the common
medical obsession with "making the diagnosis", we would be tempted to use T2 in preference to T1, (the TPF is
after all, 95% for T2 and only 60% for T1), but surely this depends on the disease? If the consequences of missing
the disease are relatively minor, and the costs of work-up of the false positives are going to be enormous, we might
just conceivably favour T1.
Now, let's drop the prevalence of the disease to just ten in a thousand, that is P(D+) = 1%. Note that the TPF and
TNF ( or sensitivity and specificity, if you prefer) are of course the same, but the positive predictive and negative
predictive values have altered substantially.
Test performance: T1 (n=1000, prevalence 1%)
         D+       D-
  T+      6      5.5      PPV = 52.2%
  T-      4    984.5      NPV = 99.6%

Test performance: T2 (n=1000, prevalence 1%)
         D+       D-
  T+    9.5       44      PPV = 17.8%
  T-    0.5      946      NPV = 99.9%
(Okay, you might wish to round off the "fractional people")! See how the PPV and NPV have changed for both tests.
Now, almost five out of every six patients reported "positive" according to test T2, will in fact be false positives.
Makes you think, doesn't it?
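The dependence on prevalence is easy to explore directly. A small sketch, using the sensitivities and specificities of T1 and T2 from the tables above, that recomputes the predictive values as the prevalence falls from 10% to 1%:

```python
def predictive_values(sensitivity, specificity, prevalence):
    """PPV and NPV of a test applied to a population with the given prevalence."""
    tp = sensitivity * prevalence
    fn = (1 - sensitivity) * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    tn = specificity * (1 - prevalence)
    return tp / (tp + fp), tn / (tn + fn)

for name, sens, spec in (("T1", 0.60, 895 / 900), ("T2", 0.95, 860 / 900)):
    for prev in (0.10, 0.01):
        ppv, npv = predictive_values(sens, spec, prev)
        print(f"{name} at prevalence {prev:.0%}: PPV = {ppv:.1%}, NPV = {npv:.1%}")
```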
Another example
Now let's consider a test which is 99% sensitive and 99% specific for the diagnosis of, say, Human
Immunodeficiency Virus infection. Let's look at how such a test would perform in two populations, one where the
prevalence of HIV infection is 0.1%, another where the prevalence is 30%. Let's sample 10 000 cases:
Test performance: Population A (n=10 000, prevalence 1/1000)
         D+       D-
  T+     10      100      PPV = 9.1%
  T-      0     9890      NPV = almost 100%

Test performance: Population B (n=10 000, prevalence 300/1000)
         D+       D-
  T+   2970       70      PPV = 97.7%
  T-     30     6930      NPV = 99.5%

If the disease is rare, use of even a very specific test will be associated with many false positives (and all that this
entails, especially for a problem like HIV infection); conversely, if the disease is common, a positive test is likely to
be a true positive. (This should really be common sense, shouldn't it?)
You can see from the above that it's rather silly to have a fixed test threshold. We've already played around with our
applet where we varied the test threshold, and watched how the TPF/FPF coordinates moved along the ROC curve.
The (quite literally) million dollar question is "Where do we set the threshold"?
2. Deciding on a test threshold
The reason why we choose to plot FPF against TPF when we make our ROC is that all the information is
contained in the relationship between just these two values, and it's awfully convenient to think of, in the words of
Swets, "hits" and "false alarms" (in other words, TPF and FPF). We can limit the false alarms, but at the expense of
fewer "hits". What dictates where we should put our cutoff point for diagnosing a disease? The answer is not simple,
because we have many possible criteria on which to base a decision. These include:
- Financial costs, both direct and indirect, of treating a disease (present or not), and of failing to treat a disease;
- Costs of further investigation (where deemed appropriate);
- Discomfort to the patient caused by disease treatment, or failure to treat;
- Mortality associated with treatment or non-treatment.
Soon we will explore the mildly complex maths involved, but first let's use a little common sense. It would seem
logical that if the cost of missing a diagnosis is great, and treatment (even inappropriate treatment of a normal
person) is safe, then one should move to a point on the right of the ROC, where we have a high TPF (most of the
true positives will be treated) at the cost of many false positives. Conversely, if the risks of therapy are grave, and
therapy doesn't help much anyway, we should position our point far to the left, where we'll miss a substantial number
of positives (low TPF) but not harm many unaffected people (low FPF)!
More formally, we can express the average cost resulting from the use of a diagnostic test as:
Cavg = Co + CTP*P(TP) + CTN*P(TN) + CFP*P(FP) + CFN*P(FN)
where Cavg is the average cost, CTP is the cost associated with management of true positives, and so on. Co is the
"overhead cost" of actually doing the test. Now, we can work out that the probability of a true positive P(TP) is
given by:
P(TP) = P(D+) * P(T+|D+)
= P(D+) * TPF

In other words, P(TP) is given by the product of the prevalence of the disease in the population, P(D+), multiplied
by the true positive fraction, for the test. We can similarly substitute for the three other probabilities in the equation,
to get:
Cavg = Co + CTP*P(D+)*P(T+|D+) + CTN*P(D-)*P(T-|D-) + CFP*P(D-)*P(T+|D-) + CFN*P(D+)*P(T-|D+)

Another way of writing this is:
Cavg = Co + CTP*P(D+)*TPF + CTN*P(D-)*TNF + CFP*P(D-)*FPF + CFN*P(D+)*FNF

Remembering that TNF = 1 - FPF, and FNF = 1 - TPF, we can write:
Cavg = Co + CTP*P(D+)*TPF + CTN*P(D-)*(1-FPF) + CFP*P(D-)*FPF + CFN*P(D+)*(1-TPF)

and, rearranging:
Cavg = TPF * P(D+) * { CTP - CFN } + FPF * P(D-) * { CFP - CTN } + Co + CTN*P(D-) + CFN*P(D+)

As Metz has pointed out, even if a diagnostic test improves decision-making, it may still increase overall costs if Co
is great. Of even more interest is the dependence of Cavg on TPF and FPF - the coordinates on an ROC curve! Thus
average cost depends on the test threshold defined on an ROC curve, and varying this threshold will vary costs. The
best cost performance is achieved when Cavg is minimised. We know from elementary calculus that this cost will be
minimal when the derivative of the cost equation is zero. Now because we can express TPF as a function of FPF
using the curve of the ROC, thus:
Cavg = ROC(FPF) * P(D+) * { CTP - CFN } + FPF * P(D-) * { CFP - CTN } + Co + CTN*P(D-) + CFN*P(D+)

we can differentiate this equation with respect to FPF, and obtain:
dC/dFPF = dROC/dFPF * P(D+) * { CTP - CFN } + P(D-) * { CFP - CTN }
Setting dC/dFPF to zero, we get:
dROC/dFPF * P(D+) * { CTP - CFN } = - P(D-) * { CFP - CTN }
or, rearranging:
dROC/dFPF = [ P(D-) * { CFP - CTN } ] / [ P(D+) * { CFN - CTP } ]
In other words, we have found a differential equation that gives us the slope of the ROC curve at the point where
costs are optimal. Now let's look at a few circumstances:

- Where the disease is rare, P(D-)/P(D+) will be enormous, and so we should shift our test threshold down to the lower left part of the ROC curve, where dROC/dFPF, the slope of the curve, is large. This fits in with our previous simple analysis, where with uncommon diseases, we found that false positives are a very bad thing. We must minimise our false positives, even at the expense of missing true positives! Conversely, with a common disease, we move our threshold to a lower, more lenient level (and our position on the ROC curve necessarily moves right). Otherwise, most of our negatives are false negatives!
- Also notice that the curve slope is great if the cost difference CFP - CTN is far greater than CFN - CTP. Let's consider a practical scenario - assume for a particular disease (say a brain tumour) that if you get a positive test, you have to open up the patient's skull and cut into the brain to find the presumed cancer. If you have a negative, you do nothing. Let's also assume that the operation doesn't help those who have the cancer - many die, regardless. Then the cost of a false positive (operating on the brains of normal individuals!) is indeed far greater than the cost of a true negative (doing nothing), and the cost of a false negative (not doing an operation that doesn't help a lot) is similar to the cost of a true positive (doing the rather unhelpful operation). The curve slope is steep, so we move our test threshold down on the left of the ROC curve.
- The opposite is where the consequences of a false positive are minimal, and there is great benefit if you treat sufferers from the disease. Here, you must move up and to the right on the ROC curve.
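To make this concrete, here is a sketch using entirely invented numbers: it evaluates Cavg (as defined above) at every point of an empirical ROC curve and picks the operating point with the lowest average cost. Changing the prevalence (or the relative costs) shifts the chosen point along the curve, just as the slope argument predicts.

```python
import numpy as np

def best_operating_point(fpf, tpf, p_disease, c_tp, c_tn, c_fp, c_fn, c_overhead=0.0):
    """Index of the ROC point minimising
    Cavg = Co + CTP*P(D+)*TPF + CTN*P(D-)*(1-FPF) + CFP*P(D-)*FPF + CFN*P(D+)*(1-TPF)."""
    fpf, tpf = np.asarray(fpf), np.asarray(tpf)
    p_no = 1.0 - p_disease
    c_avg = (c_overhead
             + c_tp * p_disease * tpf
             + c_tn * p_no * (1.0 - fpf)
             + c_fp * p_no * fpf
             + c_fn * p_disease * (1.0 - tpf))
    return int(np.argmin(c_avg))

# A made-up empirical ROC curve (FPF and TPF at successive thresholds).
fpf = np.array([0.0, 0.05, 0.10, 0.20, 0.35, 0.55, 0.80, 1.0])
tpf = np.array([0.0, 0.40, 0.60, 0.75, 0.85, 0.92, 0.97, 1.0])
costs = dict(c_tp=10, c_tn=0, c_fp=30, c_fn=100)   # arbitrary cost units

# Rare disease: the cheapest point sits at the far lower left (a very strict threshold).
i = best_operating_point(fpf, tpf, p_disease=0.01, **costs)
print("prevalence 1%:  FPF =", fpf[i], "TPF =", tpf[i])

# Common disease: the optimum moves up and to the right (a more lenient threshold).
i = best_operating_point(fpf, tpf, p_disease=0.30, **costs)
print("prevalence 30%: FPF =", fpf[i], "TPF =", tpf[i])
```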

Fine Print - Old fashioned assumptions of Normality


Earlier literature on ROC curves often seems to have made the unfortunate assumption that the underlying
distributions are normal curves. (The only reason we used normal curves in our applet is their convenience - perhaps
the same reason that others have 'assumed normality'). Under this assumption, one trick that has been used is to
create special 'graph paper' where axes are transformed according to the normal distribution. ('double normal
probability co-ordinate scales'). Using such coordinates, ROC curves become linear (!), and one can read off slope
and axis, which correspond to the two parameters that contain the mean and standard deviation. Curve fitting can be
done (using special techniques, NOT least squares) to work out the line that best fits the plotted coordinates. Such
methods appear to have been applied mainly in studies of experimental psychology.
Note that if one uses double normal probability plots, the slope of the straight line obtained by plotting TPF against
FPF will give us the ratio of standard deviations of the two distributions (assuming normality). In other words, if the
standard deviations of the populations D+ and D- are sD+ and sD-, the line slope is sD- / sD+. In the particular case
where this value is 1, we can measure the distance between the plotted line and the 'chance line' (connecting the
bottom left and top right corners of the graph). This distance is a normalised measure of the distance between the
means of the two distributions where m refers to mean, and s, standard deviation:
d' = (mD+ - mD-) / s

RECEIVER OPERATING CHARACTERISTIC (ROC), OR ROC CURVE


In statistics, a receiver operating characteristic (ROC), or ROC curve, is a graphical plot that illustrates the
performance of a binary classifier system as its discrimination threshold is varied. The curve is created by plotting
the true positive rate against the false positive rate at various threshold settings. (The true-positive rate is also known
as sensitivity in biomedicine, or recall in machine learning. The false-positive rate is also known as the fall-out and
can be calculated as 1 - specificity). The ROC curve is thus the sensitivity as a function of fall-out. In general, if the
probability distributions for both detection and false alarm are known, the ROC curve can be generated by plotting the cumulative distribution function (the area under the probability distribution from -∞ up to the discrimination threshold) of the detection probability on the y-axis versus the cumulative distribution function of the false-alarm probability on the x-axis.
ROC analysis provides tools to select possibly optimal models and to discard suboptimal ones independently from
(and prior to specifying) the cost context or the class distribution. ROC analysis is related in a direct and natural way
to cost/benefit analysis of diagnostic decision making.
The ROC curve was first developed by electrical engineers and radar engineers during World War II for detecting
enemy objects in battlefields and was soon introduced to psychology to account for perceptual detection of stimuli.
ROC analysis since then has been used in medicine, radiology, biometrics, and other areas for many decades and is
increasingly used in machine learning and data mining research.
The ROC is also known as a relative operating characteristic curve, because it is a comparison of two operating
characteristics (TPR and FPR) as the criterion changes.[1]

Terminology and derivations from a confusion matrix

- true positive (TP): eqv. with hit
- true negative (TN): eqv. with correct rejection
- false positive (FP): eqv. with false alarm, Type I error
- false negative (FN): eqv. with miss, Type II error
- sensitivity or true positive rate (TPR): eqv. with hit rate, recall; TPR = TP / (TP + FN)
- specificity (SPC) or true negative rate (TNR): SPC = TN / (TN + FP)
- precision or positive predictive value (PPV): PPV = TP / (TP + FP)
- negative predictive value (NPV): NPV = TN / (TN + FN)
- fall-out or false positive rate (FPR): FPR = FP / (FP + TN) = 1 - SPC
- false discovery rate (FDR): FDR = FP / (FP + TP) = 1 - PPV
- miss rate or false negative rate (FNR): FNR = FN / (FN + TP) = 1 - TPR
- accuracy (ACC): ACC = (TP + TN) / (TP + TN + FP + FN)
- F1 score: the harmonic mean of precision and sensitivity, F1 = 2TP / (2TP + FP + FN)
- Matthews correlation coefficient (MCC)
- Informedness = Sensitivity + Specificity - 1
- Markedness = Precision + NPV - 1
A classification model (classifier or diagnosis) is a mapping of instances between certain classes/groups. The
classifier or diagnosis result can be a real value (continuous output), in which case the classifier boundary between
classes must be determined by a threshold value (for instance, to determine whether a person has hypertension based
on a blood pressure measure). Or it can be a discrete class label, indicating one of the classes.
Let us consider a two-class prediction problem (binary classification), in which the outcomes are labeled either as
positive (p) or negative (n). There are four possible outcomes from a binary classifier. If the outcome from a
prediction is p and the actual value is also p, then it is called a true positive (TP); however if the actual value is n
then it is said to be a false positive (FP). Conversely, a true negative (TN) has occurred when both the prediction
outcome and the actual value are n, and false negative (FN) is when the prediction outcome is n while the actual
value is p.
To get an appropriate example in a real-world problem, consider a diagnostic test that seeks to determine whether a
person has a certain disease. A false positive in this case occurs when the person tests positive, but actually does not
have the disease. A false negative, on the other hand, occurs when the person tests negative, suggesting they are
healthy, when they actually do have the disease.
Let us define an experiment from P positive instances and N negative instances for some condition. The four outcomes can be formulated in a 2×2 contingency table or confusion matrix, as follows:

                           Condition positive                Condition negative
Test outcome positive      True positive                     False positive (Type I error)
Test outcome negative      False negative (Type II error)    True negative

From this table (with the condition determined by the "gold standard") one derives:
- Prevalence = Condition positive / Total population
- Positive predictive value (PPV, Precision) = True positive / Test outcome positive
- False discovery rate (FDR) = False positive / Test outcome positive
- False omission rate (FOR) = False negative / Test outcome negative
- Negative predictive value (NPV) = True negative / Test outcome negative
- True positive rate (TPR, Sensitivity, Recall) = True positive / Condition positive
- False positive rate (FPR, Fall-out) = False positive / Condition negative
- False negative rate (FNR) = False negative / Condition positive
- True negative rate (TNR, Specificity, SPC) = True negative / Condition negative
- Accuracy (ACC) = (True positive + True negative) / Total population
- Positive likelihood ratio (LR+) = TPR / FPR
- Negative likelihood ratio (LR-) = FNR / TNR
- Diagnostic odds ratio (DOR) = LR+ / LR-

ROC space: The ROC space and plots of the four prediction examples.

The contingency table can derive several evaluation "metrics" (see infobox). To draw an ROC curve, only the true
positive rate (TPR) and false positive rate (FPR) are needed (as functions of some classifier parameter). The TPR
defines how many correct positive results occur among all positive samples available during the test. FPR, on the
other hand, defines how many incorrect positive results occur among all negative samples available during the test.

A ROC space is defined by FPR and TPR as x and y axes respectively, which depicts relative trade-offs between true
positive (benefits) and false positive (costs). Since TPR is equivalent to sensitivity and FPR is equal to 1 - specificity, the ROC graph is sometimes called the sensitivity vs (1 - specificity) plot. Each prediction result or
instance of a confusion matrix represents one point in the ROC space.
The best possible prediction method would yield a point in the upper left corner or coordinate (0,1) of the ROC
space, representing 100% sensitivity (no false negatives) and 100% specificity (no false positives). The (0,1) point is
also called a perfect classification. A completely random guess would give a point along a diagonal line (the so-called line of no-discrimination) from the left bottom to the top right corners (regardless of the positive and negative
base rates). An intuitive example of random guessing is a decision by flipping coins (heads or tails). As the size of
the sample increases, a random classifier's ROC point migrates towards (0.5,0.5).
The diagonal divides the ROC space. Points above the diagonal represent good classification results (better than
random), points below the line poor results (worse than random). Note that the output of a consistently poor
predictor could simply be inverted to obtain a good predictor.
Let us look into four prediction results from 100 positive and 100 negative instances:
A:  TP=63  FP=28        B:  TP=77  FP=77        C:  TP=24  FP=88        C': TP=76  FP=12
    FN=37  TN=72            FN=23  TN=23            FN=76  TN=12            FN=24  TN=88
    (100 positive and 100 negative instances in each case)

A:  TPR = 0.63, FPR = 0.28, PPV = 0.69, F1 = 0.66, ACC = 0.68
B:  TPR = 0.77, FPR = 0.77, PPV = 0.50, F1 = 0.61, ACC = 0.50
C:  TPR = 0.24, FPR = 0.88, PPV = 0.21, F1 = 0.22, ACC = 0.18
C': TPR = 0.76, FPR = 0.12, PPV = 0.86, F1 = 0.81, ACC = 0.82

Plots of the four results above in the ROC space are given in the figure. The result of method A clearly shows the best predictive power among A, B, and C. The result of B lies on the random guess line (the diagonal line), and it can be seen in the table that the accuracy of B is 50%. However, when C is mirrored across the center point (0.5,0.5), the resulting method C' is even better than A. This mirrored method simply reverses the predictions of whatever method or test produced the C contingency table. Although the original C method has negative predictive power, simply reversing its decisions leads to a new predictive method C' which has positive predictive power. When the C method predicts p or n, the C' method would predict n or p, respectively. In this manner, the C' test would perform the best. The closer a result from a contingency table is to the upper left corner, the better it predicts, but the distance from the random guess line in either direction is the best indicator of how much predictive power a method has. If the result is below the line (i.e. the method is worse than a random guess), all of the method's predictions must be reversed in order to utilize its power, thereby moving the result above the random guess line.

Curves in ROC space

Classifications are often based on a continuous random variable X. Write the probability density of X when the instance truly belongs to the class as f1(x), and the density when it does not as f0(x). For a decision/threshold parameter T (the instance is called positive when X > T), the false positive rate is FPR(T) = ∫ f0(x) dx taken from T to ∞, and the true positive rate is TPR(T) = ∫ f1(x) dx taken from T to ∞. The ROC curve plots parametrically TPR(T) versus FPR(T) with T as the varying parameter.
For example, imagine that the blood protein levels in diseased people and healthy people are normally distributed
with means of 2 g/dL and 1 g/dL respectively. A medical test might measure the level of a certain protein in a blood
sample and classify any number above a certain threshold as indicating disease. The experimenter can adjust the
threshold (black vertical line in the figure), which will in turn change the false positive rate. Increasing the threshold
would result in fewer false positives (and more false negatives), corresponding to a leftward movement on the curve.
The actual shape of the curve is determined by how much overlap the two distributions have.
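As a sketch of this construction (the means mirror the example; the common standard deviation of 0.5 g/dL is our own assumption), the curve can be traced by sweeping the threshold T and evaluating both rates from the two normal survival functions:

```python
import numpy as np
from scipy.stats import norm

mu_healthy, mu_diseased, sd = 1.0, 2.0, 0.5     # g/dL

thresholds = np.linspace(-1.0, 4.0, 201)
fpr = norm.sf(thresholds, loc=mu_healthy, scale=sd)    # P(X > T | healthy)
tpr = norm.sf(thresholds, loc=mu_diseased, scale=sd)   # P(X > T | diseased)

# Area under this smooth curve (trapezoidal rule over increasing FPR).
auc = np.trapz(tpr[::-1], fpr[::-1])
print(f"AUC = {auc:.3f}")

for t in (1.0, 1.5, 2.0, 2.5):   # raising T moves us towards the lower left of the curve
    print(f"T = {t:.1f}: FPR = {norm.sf(t, mu_healthy, sd):.3f}, "
          f"TPR = {norm.sf(t, mu_diseased, sd):.3f}")
```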
Further interpretations
Sometimes, the ROC is used to generate a summary statistic. Common versions are:
- the intercept of the ROC curve with the line at 90 degrees to the no-discrimination line (also called Youden's J statistic)
- the area between the ROC curve and the no-discrimination line
- the area under the ROC curve, or "AUC" ("Area Under Curve"), or A' (pronounced "a-prime"),[3] or "c-statistic"[4]
- d' (pronounced "d-prime"), the distance between the mean of the distribution of activity in the system under noise-alone conditions and its distribution under signal-alone conditions, divided by their standard deviation, under the assumption that both these distributions are normal with the same standard deviation. Under these assumptions, it can be proved that the shape of the ROC depends only on d'.
However, any attempt to summarize the ROC curve into a single number loses information about the pattern of
tradeoffs of the particular discriminator algorithm.
Area under the curve
When using normalized units, the area under the curve (often referred to as simply the AUC, or AUROC) is equal to
the probability that a classifier will rank a randomly chosen positive instance higher than a randomly chosen
negative one (assuming 'positive' ranks higher than 'negative').[5] This can be seen as follows: the area under the curve is given by (the integral boundaries are reversed as large T has a lower value on the x-axis)
A = ∫ TPR(T) FPR'(T) dT = P(X1 > X0),
where X1 is the score of a randomly chosen positive instance and X0 that of a randomly chosen negative instance (the averaging here is taken over the distribution of negative samples).
It can further be shown that the AUC is closely related to the Mann-Whitney U,[6][7] which tests whether positives are ranked higher than negatives. It is also equivalent to the Wilcoxon test of ranks.[7] The AUC is related to the Gini coefficient (G1) by the formula G1 = 2·AUC - 1, where
G1 = 1 - Σk (Xk - Xk-1)(Yk + Yk-1).[8]
In this way, it is possible to calculate the AUC by using an average of a number of trapezoidal approximations.
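A small sketch checking both statements on simulated scores: the rank-based (Mann-Whitney) estimate of the AUC and the trapezoidal estimate from the empirical curve come out essentially identical. (In recent versions of scipy, mannwhitneyu returns the U statistic for its first argument.)

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
neg = rng.normal(0.0, 1.0, 300)   # scores for negative instances
pos = rng.normal(1.0, 1.0, 200)   # scores for positive instances

# AUC as P(positive score > negative score), via the Mann-Whitney U statistic.
u, _ = mannwhitneyu(pos, neg, alternative="two-sided")
auc_rank = u / (len(pos) * len(neg))

# AUC from the empirical ROC curve by trapezoidal integration.
thresholds = np.sort(np.concatenate((neg, pos)))[::-1]
tpr = [(pos >= t).mean() for t in thresholds]
fpr = [(neg >= t).mean() for t in thresholds]
auc_trapz = np.trapz(tpr, fpr)

print(f"rank-based AUC = {auc_rank:.3f}, trapezoidal AUC = {auc_trapz:.3f}")
```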
It is also common to calculate the Area Under the ROC Convex Hull (ROC AUCH = ROCH AUC) as any point on
the line segment between two prediction results can be achieved by randomly using one or other system with
probabilities proportional to the relative length of the opposite component of the segment. [9] Interestingly, it is also
possible to invert concavities - just as in the figure, the worse solution can be reflected to become a better solution;
concavities can be reflected in any line segment, but this more extreme form of fusion is much more likely to overfit
the data.[10]
The machine learning community most often uses the ROC AUC statistic for model comparison. [11] However, this
practice has recently been questioned based upon new machine learning research that shows that the AUC is quite
noisy as a classification measure[12] and has some other significant problems in model comparison.[13][14] A reliable
and valid AUC estimate can be interpreted as the probability that the classifier will assign a higher score to a
randomly chosen positive example than to a randomly chosen negative example. However, the critical research [12][13]
suggests frequent failures in obtaining reliable and valid AUC estimates. Thus, the practical value of the AUC
measure has been called into question,[14] raising the possibility that the AUC may actually introduce more
uncertainty into machine learning classification accuracy comparisons than resolution. Nonetheless, the coherence
of AUC as a measure of aggregated classification performance has been vindicated, in terms of a uniform rate
distribution,[15] and AUC has been linked to a number of other performance metrics such as the Brier score.[16]
One recent explanation of the problem with ROC AUC is that reducing the ROC Curve to a single number ignores
the fact that it is about the tradeoffs between the different systems or performance points plotted and not the
performance of an individual system, as well as ignoring the possibility of concavity repair, so that related
alternative measures such as Informedness [17] or DeltaP are recommended.[18] These measures are essentially
equivalent to the Gini for a single prediction point with DeltaP' = Informedness = 2AUC-1, whilst DeltaP =
Markedness represents the dual (viz. predicting the prediction from the real class) and their geometric mean is the
Matthews correlation coefficient.[17]

Other measures
In engineering, the area between the ROC curve and the no-discrimination line is sometimes preferred (equivalent to
subtracting 0.5 from the AUC), and referred to as the discrimination.[citation needed] In psychophysics, the Sensitivity
Index d' (d-prime), P' or DeltaP' is the most commonly used measure[19] and is equivalent to twice the
discrimination, being equal also to Informedness, deskewed WRAcc and Gini Coefficient in the single point case
(single parameterization or single system). [17] These measures all have the advantage that 0 represents chance
performance whilst 1 represents perfect performance, and -1 represents the "perverse" case of full informedness
used to always give the wrong response.[20]
These varying choices of scale are fairly arbitrary since chance performance always has a fixed value: for AUC it is
0.5, but these alternative scales bring chance performance to 0 and allow them to be interpreted as Kappa statistics.
Informedness has been shown to have desirable characteristics for Machine Learning versus other common
definitions of Kappa such as Cohen Kappa and Fleiss Kappa.[17][21]
Sometimes it can be more useful to look at a specific region of the ROC Curve rather than at the whole curve. It is
possible to compute partial AUC.[22] For example, one could focus on the region of the curve with low false positive
rate, which is often of prime interest for population screening tests.[23] Another common approach for classification
problems in which P ≪ N (common in bioinformatics applications) is to use a logarithmic scale for the x-axis.[24]

Detection error tradeoff graph

Example DET graph


An alternative to the ROC curve is the detection error tradeoff (DET) graph, which plots the false negative rate
(missed detections) vs. the false positive rate (false alarms) on non-linearly transformed x- and y-axes. The
transformation function is the quantile function of the normal distribution, i.e., the inverse of the cumulative normal
distribution. It is, in fact, the same transformation as zROC, below, except that the complement of the hit rate, the
miss rate or false negative rate, is used. This alternative spends more graph area on the region of interest. Most of
the ROC area is of little interest; one primarily cares about the region tight against the y-axis and the top left corner
which, because of using miss rate instead of its complement, the hit rate, is the lower left corner in a DET plot.
The DET plot is used extensively in the automatic speaker recognition community, where the name DET was first
used. The analysis of the ROC performance in graphs with this warping of the axes was used by psychologists in
perception studies halfway through the 20th century, where this was dubbed "double probability paper".
Z-transformation
If a z-transformation is applied to the ROC curve, the curve will be transformed into a straight line.[25] This z-transformation is based on a normal distribution with a mean of zero and a standard deviation of one. In memory
strength theory, one must assume that the zROC is not only linear, but has a slope of 1.0. The normal distributions of
targets (studied objects that the subjects need to recall) and lures (non studied objects that the subjects attempt to
recall) is the factor causing the zROC to be linear.
The linearity of the zROC curve depends on the standard deviations of the target and lure strength distributions. If
the standard deviations are equal, the slope will be 1.0. If the standard deviation of the target strength distribution is
larger than the standard deviation of the lure strength distribution, then the slope will be smaller than 1.0. In most
studies, it has been found that the zROC curve slopes constantly fall below 1, usually between 0.5 and 0.9.[26] Many
experiments yielded a zROC slope of 0.8. A slope of 0.8 implies that the variability of the target strength distribution
is 25% larger than the variability of the lure strength distribution.[27]
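This relationship is easy to check numerically. A sketch on simulated 'target' and 'lure' strengths (our own invented parameters, not data from any memory experiment): z-transform the hit and false-alarm rates over a range of response criteria and fit a straight line; the fitted slope comes out close to the ratio of the lure SD to the target SD.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
lures   = rng.normal(0.0, 1.00, 5000)   # non-studied items
targets = rng.normal(1.5, 1.25, 5000)   # studied items: SD 25% larger -> slope ~0.8

criteria = np.linspace(-1.5, 2.5, 15)                  # a range of response criteria
hit_rate = np.array([(targets > c).mean() for c in criteria])
fa_rate  = np.array([(lures > c).mean() for c in criteria])

z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)    # the z-transformation
slope, intercept = np.polyfit(z_fa, z_hit, 1)
print(f"fitted zROC slope = {slope:.2f}")              # roughly 1.00 / 1.25 = 0.80
```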
Another variable used is d' (d prime) (discussed above in "Other measures"), which can easily be expressed in terms
of z-values. Although d' is a commonly used parameter, it must be recognized that it is only relevant when strictly
adhering to the very strong assumptions of strength theory made above. [28]
The z-transformation of a ROC curve is always linear, as assumed, except in special situations. The Yonelinas
familiarity-recollection model is a two-dimensional account of recognition memory. Instead of the subject simply
answering yes or no to a specific input, the subject gives the input a feeling of familiarity, which operates like the
original ROC curve. What changes, though, is a parameter for Recollection (R). Recollection is assumed to be all-or-none, and it trumps familiarity. If there were no recollection component, zROC would have a predicted slope of 1.
However, when adding the recollection component, the zROC curve will be concave up, with a decreased slope.

This difference in shape and slope results from an added element of variability due to some items being recollected.
Patients with anterograde amnesia are unable to recollect, so their Yonelinas zROC curve would have a slope close
to 1.0.[29]
History
The ROC curve was first used during World War II for the analysis of radar signals before it was employed in signal
detection theory.[30] Following the attack on Pearl Harbor in 1941, the United States army began new research to
increase the prediction of correctly detected Japanese aircraft from their radar signals.
In the 1950s, ROC curves were employed in psychophysics to assess human (and occasionally non-human animal)
detection of weak signals.[30] In medicine, ROC analysis has been extensively used in the evaluation of diagnostic
tests.[31][32] ROC curves are also used extensively in epidemiology and medical research and are frequently
mentioned in conjunction with evidence-based medicine. In radiology, ROC analysis is a common technique to
evaluate new radiology techniques.[33] In the social sciences, ROC analysis is often called the ROC Accuracy Ratio,
a common technique for judging the accuracy of default probability models.
ROC curves also proved useful for the evaluation of machine learning techniques. The first application of ROC in
machine learning was by Spackman who demonstrated the value of ROC curves in comparing and evaluating
different classification algorithms.[34]
ROC curves beyond binary classification
The extension of ROC curves for classification problems with more than two classes has always been cumbersome,
as the degrees of freedom increase quadratically with the number of classes, and the ROC space has c(c - 1) dimensions, where c is the number of classes.[35] Some approaches have been made for the particular case with three
classes (three-way ROC).[36] The calculation of the volume under the ROC surface (VUS) has been analyzed and
studied as a performance metric for multi-class problems.[37] However, because of the complexity of approximating
the true VUS, some other approaches [38] based on an extension of AUC are more popular as an evaluation metric.
Given the success of ROC curves for the assessment of classification models, the extension of ROC curves for other
supervised tasks has also been investigated. Notable proposals for regression problems are the so-called regression
error characteristic (REC) Curves [39] and the Regression ROC (RROC) curves.[40] In the latter, RROC curves
become extremely similar to ROC curves for classification, with the notions of asymmetry, dominance and convex
hull. Also, the area under RROC curves is proportional to the error variance of the regression model.
ROC curve is related to the lift and uplift curves,[41][42] which are used in uplift modelling. The ROC curve itself has
also been used as the optimization metric in uplift modeling. [43][44]
False positive paradox
The false positive paradox is a statistical result where false positive tests are more probable than true positive tests,
occurring when the overall population has a low incidence of a condition and the incidence rate is lower than the
false positive rate. The probability of a positive test result is determined not only by the accuracy of the test but by
the characteristics of the sampled population.[1] When the incidence, the proportion of those who have a given
condition, is lower than the test's false positive rate, even tests that have a very low chance of giving a false positive
in an individual case will give more false than true positives overall.[2] So, in a society with very few infected people (proportionately fewer than the test's false positive rate), there will actually be more people who test positive for a disease incorrectly and don't have it than people who test positive accurately and do. The paradox has surprised many.[3]

It is especially counter-intuitive when interpreting a positive result in a test on a low-incidence population after having dealt with positive results drawn from a high-incidence population.[2] If the false positive rate of the test is higher than the proportion of the new population with the condition, then a test administrator whose experience has been drawn from testing in a high-incidence population may conclude from experience that a positive test result usually indicates a positive subject, when in fact a false positive is far more likely to have occurred.
Failing to adjust to the scarcity of the condition in the new population, and concluding that a positive test result probably indicates a positive subject even though the population incidence is below the false positive rate, is a "base rate fallacy".
Type I error: "rejecting the null hypothesis when it is true".
Type II error: "accepting the null hypothesis when it is false".
Type III error: "correctly rejecting the null hypothesis for the wrong reason".
Type III error
In statistical hypothesis testing, there are various notions of so-called type III errors (or errors of the third kind),
and sometimes type IV errors or higher, by analogy with the type I and type II errors of Jerzy Neyman and Egon
Pearson. Fundamentally, Type III errors occur when researchers provide the right answer to the wrong question.
Since the paired notions of type I errors (or "false positives") and type II errors (or "false negatives") introduced by Neyman and Pearson are now widely used, their choice of terminology ("errors of the first kind" and "errors of the second kind") has led others to suppose that certain sorts of mistake they have identified might be an "error of the third kind", "fourth kind", etc.

The Area Under an ROC Curve


The graph at right shows three ROC curves representing
excellent, good, and worthless tests plotted on the same
graph. The accuracy of the test depends on how well the test
separates the group being tested into those with and without
the disease in question. Accuracy is measured by the area
under the ROC curve. An area of 1 represents a perfect test;
an area of .5 represents a worthless test. A rough guide for
classifying the accuracy of a diagnostic test is the traditional
academic point system:

.90-1 = excellent (A)
.80-.90 = good (B)
.70-.80 = fair (C)
.60-.70 = poor (D)
.50-.60 = fail (F)

Recall the T4 data from the previous section. The area under the T4 ROC curve is .86. The T4 would be considered
to be "good" at separating hypothyroid from euthyroid patients.

ROC curves can also be constructed from clinical prediction rules. The graphs at right come from a study of how clinical findings predict strep throat (Wigton RS, Connor JL, Centor RM. Transportability of a decision rule for the diagnosis of streptococcal pharyngitis. Arch Intern Med. 1986;146:81-83.) In that study, the presence of tonsillar exudate, fever,
adenopathy and the absence of cough all predicted strep.
The curves were constructed by computing the sensitivity
and specificity of increasing numbers of clinical findings
(from 0 to 4) in predicting strep. The study compared
patients in Virginia and Nebraska and found that the rule
performed more accurately in Virginia (area under the curve
= .78) compared to Nebraska (area under the curve = .73).
These differences turn out not to be statistically significant, however.
At this point, you may be wondering what this area number
really means and how it is computed. The area measures
discrimination, that is, the ability of the test to correctly
classify those with and without the disease. Consider the
situation in which patients are already correctly classified into two groups. You randomly pick one from the disease
group and one from the no-disease group and do the test on both. The patient with the more abnormal test result
should be the one from the disease group. The area under the curve is the percentage of randomly drawn pairs for
which this is true (that is, the test correctly classifies the two patients in the random pair).
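The "random pair" interpretation can be checked directly by brute force. The test values below are invented (higher is taken to mean more abnormal), and counting ties as half a win is a common convention rather than anything specified in the text.

diseased     = [9.1, 7.4, 8.0, 6.5, 10.2]   # test values in the disease group
non_diseased = [5.0, 6.1, 4.8, 7.0, 5.5]    # test values in the no-disease group

correct = 0.0
pairs = 0
for d in diseased:
    for n in non_diseased:
        pairs += 1
        if d > n:
            correct += 1.0     # the diseased patient has the more abnormal value
        elif d == n:
            correct += 0.5     # ties count as half, by convention
print(correct / pairs)          # this fraction of correctly ordered pairs is the AUC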
Computing the area is more difficult to explain and beyond the scope of this introductory material. Two methods are
commonly used: a non-parametric method based on constructing trapezoids under the curve as an approximation of
area and a parametric method using a maximum likelihood estimator to fit a smooth curve to the data points. Both
methods are available as computer programs and give an estimate of area and standard error that can be used to
compare different tests or the same test in different patient populations. For more on quantitative ROC analysis, see
Metz CE. Basic principles of ROC analysis. Sem Nuc Med. 1978;8:283-298.
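For the non-parametric approach, a minimal sketch of the trapezoidal estimate is shown below. The (1 - specificity, sensitivity) points are invented; in practice they come from sweeping the decision threshold over the observed test values.

# ROC points ordered by increasing false positive rate, including (0, 0) and (1, 1).
fpr = [0.0, 0.1, 0.25, 0.5, 1.0]   # 1 - specificity
tpr = [0.0, 0.6, 0.80, 0.9, 1.0]   # sensitivity

auc = 0.0
for i in range(1, len(fpr)):
    width = fpr[i] - fpr[i - 1]
    mean_height = (tpr[i] + tpr[i - 1]) / 2.0
    auc += width * mean_height      # area of one trapezoid
print(auc)                          # 0.8225 for these toy points

The parametric (maximum likelihood, binormal) method fits a smooth curve to the points instead and is usually left to dedicated software.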
A final note of historical interest
You may be wondering where the name "Receiver Operating Characteristic" came from. ROC analysis is part of a field called "Signal Detection Theory" developed during World War II for the analysis of radar images. Radar
operators had to decide whether a blip on the screen represented an enemy target, a friendly ship, or just noise.
Signal detection theory measures the ability of radar receiver operators to make these important distinctions. Their
ability to do so was called the Receiver Operating Characteristics. It was not until the 1970s that signal detection
theory was recognized as useful for interpreting medical test results.
Measures of Effect Size of an Intervention
A key question needed to interpret the results of a clinical trial is whether the measured effect size is clinically
important. Three commonly used measures of effect size are relative risk reduction (RRR), absolute risk
reduction (ARR), and the number needed to treat (NNT) to prevent one bad outcome. These terms are defined
below. The material in this section is adapted from Evidence-based medicine: How to practice and teach EBM by
DL Sackett, WS Richardson, W Rosenberg and RB Haynes. 1997, New York: Churchill Livingstone.
Consider the data from the Diabetes Control and Complications Trial (DCCT-Ann Intern Med 1995;122:561-8.).
Neuropathy occurred in 9.6% of the usual care group and in 2.8% of the intensively treated group. These rates are
sometimes referred to as risks by epidemiologists. For our purposes, risk can be thought of as the rate of some
outcome.
Relative risk reduction

Relative risk reduction measures how much the risk is reduced in the experimental group compared to a control group. For
example, if 60% of the control group died and 30% of the treated group died, the treatment would have a relative
risk reduction of 0.5 or 50% (the rate of death in the treated group is half of that in the control group).
The formula for computing relative risk reduction is: (CER - EER)/CER. CER is the control group event rate and
EER is the experimental group event rate. Using the DCCT data, this would work out to (0.096 - 0.028)/0.096 =
0.71 or 71%. This means that neuropathy was reduced by 71% in the intensive treatment group compared with the
usual care group.
One problem with the relative risk measure is that without knowing the level of risk in the control group, one cannot
assess the effect size in the treatment group. Treatments with very large relative risk reductions may have a small
effect in conditions where the control group has a very low bad outcome rate. On the other hand, modest relative
risk reductions can assume major clinical importance if the baseline (control) rate of bad outcomes is large.
Absolute risk reduction
Absolute risk reduction is just the absolute difference in outcome rates between the control and treatment groups:
CER - EER. Unlike the relative risk reduction, the absolute risk reduction is not expressed relative to the baseline (control) rate and thus does not confound the effect size with the baseline risk. However, it is a less intuitive measure to interpret.
For the DCCT data, the absolute risk reduction for neuropathy would be (0.096 - 0.028) = 0.068 or 6.8%. This
means that for every 100 patients enrolled in the intensive treatment group, about seven bad outcomes would be
averted.
Number needed to treat
The number needed to treat is basically another way to express the absolute risk reduction. It is just 1/ARR and can
be thought of as the number of patients that would need to be treated to prevent one additional bad outcome. For the
DCCT data, NNT = 1/.068 = 14.7. Thus, for every 15 patients treated with intensive therapy, one case of neuropathy
would be prevented.
The NNT concept has been gaining in popularity because it is simple to compute and easy to interpret. NNT data are especially useful for comparing the results of multiple clinical trials, in which the relative effectiveness of the treatments is readily apparent. For example, the NNT to prevent stroke by treating patients with very high blood pressures (DBP 115-129) is only 3 but rises to 128 for patients with less severe hypertension (DBP 90-109).
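The three effect-size measures can be computed directly from the DCCT neuropathy rates quoted above; the short sketch below just restates the formulas in Python.

import math

cer = 0.096   # control (usual care) event rate
eer = 0.028   # experimental (intensive treatment) event rate

rrr = (cer - eer) / cer   # relative risk reduction
arr = cer - eer           # absolute risk reduction
nnt = 1 / arr             # number needed to treat

print(f"RRR = {rrr:.0%}")                                   # about 71%
print(f"ARR = {arr:.1%}")                                   # about 6.8%
print(f"NNT = {nnt:.1f} (round up to {math.ceil(nnt)})")    # about 15 patients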

DOTS (Directly Observed Treatment, Short-Course)


DOTS (directly observed treatment, short-course) is the name given to the tuberculosis control strategy recommended by the World Health Organization.[1] According to WHO, "The most cost-effective way to stop the spread of TB in communities with a high incidence is by curing it. The best curative method for TB is known as DOTS."[2] DOTS has five main components:

Government commitment (including political will at all levels, and establishment of a centralized and prioritized system of TB monitoring, recording and training).
Case detection by sputum smear microscopy.
A standardized treatment regimen of six to eight months, directly observed by a healthcare worker or community health worker for at least the first two months.
A regular, uninterrupted drug supply.
A standardized recording and reporting system that allows assessment of treatment results.

History
The technical strategy for DOTS was developed by Dr. Karel Styblo of the International Union
Against TB & Lung Disease in the 1970s and 80s, primarily in Tanzania, but also in Malawi,
Nicaragua and Mozambique. Styblo refined a treatment system of checks and balances that
provided high cure rates at a cost affordable for most developing countries. This increased the
proportion of people cured of TB from 40% to nearly 80%, costing up to $10 per life saved and
$3 per new infection avoided.[3]
In 1989, WHO and the World Bank began investigating the potential expansion of this strategy.
In July 1990, the World Bank, under Richard Bumgarner's direction, invited Dr. Styblo and
WHO to design a TB control project for China. By the end of 1991, this pilot project was
achieving phenomenal results, more than doubling cure rates among TB patients. China soon
extended this project to cover half the country.[4]
During the early 1990s, WHO determined that of the nearly 700 different tasks involved in Dr.
Styblo's meticulous system, only 100 of them were essential to run an effective TB control
program. From this, WHO's relatively small TB Unit at that time, led by Dr. Arata Kochi,
developed an even more concise "Framework for TB Control" focusing on five main elements
and nine key operations. The initial emphasis was on "DOT", or directly observed therapy, using a specific combination of TB medicines known as short-course chemotherapy, as one of the five essential elements for controlling TB.[5] In 1993, the World Bank's World Development Report claimed that the TB control strategies used in DOTS were among the most cost-effective public health investments.[6]

In the Fall of 1994, Kraig Klaudt, WHO's TB Advocacy Officer, developed the name and
concept for a marketing strategy to brand this complex public health intervention. To help market
"DOTS" to global and national decision makers, turning the word "dots" upside down to spell
"stop," proved a memorable shorthand that promoted "Stop TB. Use Dots!"[7][8]
According to POZ Magazine, "You know the worldwide epidemic of TB is entering a critical stage when the cash-strapped World Health Organization spends a fortune on glossy paper, morbid photos and an interactive, spinning (!) cover for its 1995 TB report."[9] India's Joint Effort to Eradicate TB NGO observed that "DOTS became a clarion call for TB control programmes around the world. Because of its novelty, this health intervention quickly captured the attention of even those outside of the international health community."[7]
The DOTS report was released to the public on March 20, 1995, at New York City's Health Department. At the news conference, Dr. Thomas Frieden, head of the city's Bureau of TB Control, captured the essence of DOTS: "TB control is basically a management problem." Frieden had been credited with using the strategy to turn around New York City's TB outbreak a few years earlier.[10][11]
On March 19, 1997, at the Robert Koch Institute in Berlin, Germany, WHO announced that DOTS was "the biggest health breakthrough of the decade." According to WHO Director-General Dr. Hiroshi Nakajima, "We anticipate that at least 10 million deaths from TB will be prevented in the next ten years with the introduction and extensive use of the DOTS strategy."[12][13]
Upon Nakajima's death in 2013, WHO recognized the promotion of DOTS as one of WHO's most successful programs developed during his ten-year administration.[14]

Impact
There has been a steady global uptake of DOTS TB control services over the subsequent
decades. Whereas less than 2% of infectious TB patients were being detected and cured with
DOTS treatment services in 1990, approximately 60% are now benefiting from this care. Since
1995, 41 million people have been successfully treated and up to 6 million lives saved through
DOTS and the Stop TB Strategy. 5.8 million TB cases were notified through DOTS programs in
2009.[15]
A systematic review of randomized clinical trials found no difference in cure rates or treatment completion rates between DOTS and self-administered drug therapy.[16] A 2013 meta-analysis of both clinical trials and observational studies likewise found no difference between DOTS and self-administered therapy.[17] However, WHO and other TB programs continue to use DOTS as an important strategy for TB treatment delivery because of concerns about drug resistance.
DOTS-Plus is for multi-drug-resistant tuberculosis (MDR-TB).

The Stop TB Strategy


WHO has developed a new six-point Stop TB Strategy which builds on the successes of DOTS while also explicitly addressing the key challenges facing TB. Its goal is to dramatically reduce the global burden of tuberculosis by 2015 by ensuring that all TB patients, including, for example,
those co-infected with HIV and those with drug-resistant TB, benefit from universal access to
high-quality diagnosis and patient-centered treatment. The strategy also supports the
development of new and effective tools to prevent, detect and treat TB. The Stop TB Strategy
underpins the Stop TB Partnership's Global Plan to Stop TB 2006-2015.

Topics covered under the six components of the Stop TB Strategy:

1. Pursue high-quality DOTS expansion and enhancement
DOTS expansion and enhancement
Drug Resistance Surveillance (DRS)
Effective drug supply and management system
Electronic recording and reporting systems
Global Drug Facility (Stop TB Partnership)
Global Fund grant guidance and tools
Global TB Control Report
Green Light Committee
Laboratories
Legislation / planning / human resources / management / training
Revised TB recording and reporting forms (2006)
Treatment and programme management guidelines
TB epidemiology and surveillance online workshop
TB technical assistance mechanism (TBTEAM)

2. Address TB-HIV, MDR-TB, and the needs of poor and vulnerable populations
Air travel and TB
Children and TB
Extensively drug-resistant TB (XDR-TB)
Gender and TB
HIV and TB
Multidrug-resistant TB (MDR-TB)
Poverty and TB
Prisons and TB
Refugees and TB
Tobacco and TB

3. Contribute to health system strengthening based on primary health care
Health system strengthening
PAL (Practical Approach to Lung Health)

4. Engage all care providers
Public-Private Mix (PPM)
International Standards for TB Care (ISTC)

5. Empower people with TB, and communities through partnership
Community engagement in TB care and prevention
Patient's Charter for Tuberculosis Care

6. Enable and promote research
TB research
The TB Research Movement

Public Private Mix (PPM) Models for the Sustainability of Successful TB Control Initiatives
In May 2014, a three-day working meeting was co-convened by the United States Agency for
International Development (USAID) and the World Bank, in collaboration with the Stop TB
Partnership's subgroup on PPM.
The meeting brought together TB, health financing and public-private partnership experts to
identify the essential elements for the sustainability, growth and future relevance of PPM efforts.
The goal was to improve the sustainability of private sector engagement in TB control by
bringing together innovations in service delivery models and financing.

Ebola virus disease

Ebola virus disease (formerly known as Ebola haemorrhagic fever) is a severe, often fatal illness,
with a case fatality rate of up to 90%. It is one of the world's most virulent diseases. The
infection is transmitted by direct contact with the blood, body fluids and tissues of infected
animals or people. Severely ill patients require intensive supportive care. During an outbreak,
those at higher risk of infection are health workers, family members and others in close contact
with sick people and deceased patients.

Preparedness is key to our fight against Ebola


By Dr Poonam Khetrapal Singh, Regional Director, WHO South-East Asia
WHO has declared the current outbreak of Ebola Virus Disease in some countries in West Africa
a public health emergency of international concern. The main aim of this declaration is to contain
the existing outbreaks and prevent further spread of Ebola through an internationally coordinated
response. The declaration also serves as an international alert so that countries can prepare for
any possible cases. It will help mobilize foreign aid and action to fight Ebola in affected
countries. As of today, there are no cases of Ebola in the 11 countries of WHO's South-East Asia
Region. This is the time to step up preparedness. A successful public health response will need
strong health systems with sensitive surveillance, infection control and community mobilization.
Since 1976, when the Ebola virus was first detected in Africa, it has been responsible for several
outbreaks within a few African countries. The virus moves from its natural reservoir to humans
through animals. Ebola is associated with high mortality and no vaccine or cure is available at
present.
The current outbreak of Ebola virus disease in the four West African countries (Guinea, Liberia, Nigeria and Sierra Leone) has been ongoing for months. It has already caused more than 1800
cases with almost 1000 deaths. This is the highest number of cases and deaths and the widest
geographical spread ever known for an Ebola outbreak. This complex outbreak involves multiple
countries with a lot of cross-border movement among the communities. The large number of

cases in peri-urban and rural settings makes this one of the most challenging Ebola outbreaks
ever.
Though risk of spread of this disease to countries outside Africa is currently assessed to be low,
there is an urgent need to strengthen national capacity for its early detection, prompt
management and rapid containment. WHO believes that countries with strong health systems can
quickly contain any imported cases using strict infection control measures.
While global focus is on Ebola, we must not forget that several pathogens have been and shall
continue to threaten the world. Since the discovery of Ebola virus in 1976, more than 30 new
pathogens have been detected. SARS and Influenza are two such pathogens which caused
pandemics in this millennium. Fortunately, both could be contained in a short period.
The International Health Regulations, IHR (2005), call upon countries to be transparent in
sharing information on diseases that may have the potential to move across countries to facilitate
an international response. The IHR also specify, among other capacities, surveillance,
response, laboratories, human-resource, risk communication and preparedness for early detection
and prompt treatment.
The 2009 pandemic of influenza clearly demonstrated the importance of IHR (2005) as countries
shared information on disease spread in real time to enable the global community to mount a
coordinated response. Since the inception of IHR (2005), countries of WHOs South-East Asia
Region have been striving to strengthen their national capacities. Substantial progress has been
made. More work is yet to be done. Many countries have developed plans to achieve the desired
level of competence before June 2016. To supplement the national efforts and address the gaps,
WHO has established several networks of institutions of excellence and collaborating centres.
In the ongoing Ebola outbreak more than 100 WHO staff are deployed in the affected countries
to support national health authorities. Hundreds of global experts have also been mobilized. An
accelerated response is being implemented through a comprehensive plan in West Africa. WHO
has sought international financial aid of USD 101 million to effectively implement this plan.
No infectious disease can be controlled unless communities are informed and empowered to
protect themselves. Countries must provide accurate and relevant information to the public
including measures to reduce the risk of exposure.
Ebola virus spreads through contact with body fluids of the patient. Avoiding this contact
prevents transmission of infection. In communities and health care facilities, knowledge of
simple preventive measures including hand hygiene and standard infection control precautions
would be crucial to the national public health response.
WHO does not recommend imposing travel bans to or from the affected countries. A ban on
travel could have serious economic and social effects on these countries. A core principle of IHR
is the need to balance public health concerns without harming international travel and trade. The
risk of infection for travellers is very low since person-to-person transmission results only from
direct contact with the body fluids or secretions of an infected patient. People are infectious only

once they show symptoms. Sick people are advised not to travel and to seek medical advice
immediately if Ebola is suspected. All countries should be alert and have the capacity to manage
travellers from Ebola-infected areas who have unexplained febrile illness.
Preparedness, vigilance and community awareness will be crucial to success in our fight against
a complex public health emergency like Ebola. It will take effective national efforts to support an internationally coordinated response.

Regional Consultation on Polio End-Game Strategy in SEAR


Dr Samlee Plianbangchang: Regional Director, WHO
South-East Asia
Opening remarks at Regional Consultation on Polio End-Game Strategy in SEAR
Bangkok, Thailand; 14 December 2012
Distinguished participants, ladies and gentlemen,
I welcome you all to the Regional Consultation on Polio End-game Strategy in SEAR. I thank
you very much for sparing your valuable time to stay on and attend this consultation. It is being
held back-to-back with our meeting on introduction of new vaccines.
Distinguished participants, eradication of poliomyelitis is one of our ultimate goals in
communicable disease control. India was the last country in the Region to be removed from the WHO list of polio-endemic countries. Practically speaking, all countries in SEA are now polio-free. However, as long as there is still circulation of wild poliovirus anywhere in the world, the countries in SEAR remain susceptible to importation of the virus.
According to the globally agreed process, WHO regions are certified "polio-free" by their respective Regional Certification Commissions (RCC). The polio-free certification is granted on the basis of convincing evidence presented by National Certification Committees (NCC). The RCC will consider regional certification only after three years have passed since the last case of indigenous wild poliovirus was reported in the Region. The certification will be decided with one important condition: that there is firm evidence of high-quality AFP surveillance. The RCC certifies the Region to be polio-free; the certification is not for individual countries.
The completion in all countries of Phase 1 laboratory containment is mandatory for regional
certification. Phase 1 laboratory containment means that each country in the Region has
conducted a survey and has submitted a list of institutions or laboratories that may harbour
materials that are potentially infectious with wild polio virus.
At the third meeting of the RCC held in Delhi last August, it was decided that at its fourth
meeting to be held in December 2012, the NCC of India will present subnational documentation
required for regional certification, and that the India Laboratory Task Force will present the
Phase 1 laboratory containment plan. The RCC also scheduled a series of its activities that will
finally lead to the certification of SEAR as polio-free in February 2014.
The polio end-game strategy refers particularly to the management of post-eradication risks
that include issues relating to the use of oral polio vaccine (OPV). As we are aware, OPV is the
only vaccine recommended globally to be used to achieve eradication of wild poliovirus. In rare
instances, OPV can also cause paralytic cases; therefore, the continued use of OPV after the interruption of wild poliovirus transmission is considered inconsistent with the idea of eradication.
There are two main reasons for stopping the use of OPV for routine immunization after
eradication. The OPV may cause polio cases due to vaccine-associated paralytic poliomyelitis (VAPP), and it may also lead to outbreaks due to circulating vaccine-derived poliovirus (cVDPV). The polio end-game strategy has focused on sequential risk management:

from eradication;
through certification/containment;
and VDPV elimination;
until post-OPV surveillance.

The recent developments in polio eradication have led to some serious thinking, especially
regarding the choice of vaccines to be used in the polio end-game strategy to ensure effective
risk management in maintaining and sustaining the eradication. In moving forward with the
Polio end-game strategy, a number of issues need to be systematically addressed, such as
policy to support the implementation of the strategy, research and development required for
ensuring rational planning, assurance of continuous vaccine supply, ensuring operational
management efficiency, efficient surveillance and validation systems, and among other things,
for the post-polio eradication, an attempt needs to be made to integrate polio eradication into the
national immunization programme.
A high coverage of routine immunization is critically needed to ensure sustainability of polio
eradication in the long term, and for the national immunization services to be integrated into
general health services to ensure sustained, long-term immunization services in the most cost-efficient manner. While focusing on such integration, an attempt should be made to ensure continued effectiveness of AFP surveillance. All in all, attention should also be paid to
improvement in hygiene and sanitation in the community.
Ladies and gentlemen, a lot of efforts and resources have been put into the polio eradication
programme. Experiences in the development and management of this programme should be used
to further strengthen the national immunization programme. We should utilize funds received
from GAVI-HSS in a big way so as to strengthen the health system infrastructure that supports
the national immunization services. Also, such strengthening will help ensure the sustainability
of polio eradication in the long term.

The World Health Organisation on Monday recommended strict travel restrictions on Pakistan due to the rising number of polio cases in the country
The WHO said the spread of polio is an international public health emergency that threatens to
infect other countries with the crippling disease.
The public health arm of the United Nations issued its new guidelines to fight the disease, recommending that Pakistanis traveling abroad should present a polio vaccination certificate.
The WHO also recommended similar restrictions on Syria and Cameroon, two other countries where the disease was previously said to have been eradicated but which have recently been known to have been exporting the disease.
Pakistan is one of only three countries where the crippling virus is endemic. The other two
countries are Nigeria and Afghanistan.
In an announcement today, the agency described the ongoing polio outbreaks in Asia, Africa and the Middle East as an "extraordinary" situation requiring a coordinated international response.

States currently exporting wild polio virus


In a statement, the WHO said Pakistan, Cameroon, and the Syrian Arab Republic "pose the greatest risk of further wild poliovirus exportations in 2014." The WHO recommended:
These States should:
1. officially declare, if not already done, at the level of head of state or government, that the
interruption of poliovirus transmission is a national public health emergency;
2. ensure that all residents and long-term visitors (i.e. > 4 weeks) receive a dose of OPV or
inactivated poliovirus vaccine (IPV) between 4 weeks and 12 months prior to
international travel;
3. ensure that those undertaking urgent travel (i.e. within 4 weeks), who have not received a
dose of OPV or IPV in the previous 4 weeks to 12 months, receive a dose of polio
vaccine at least by the time of departure as this will still provide benefit, particularly for
frequent travelers;
4. ensure that such travelers are provided with an International Certificate of Vaccination or
Prophylaxis in the form specified in Annex 6 of the International Health Regulations
(2005) to record their polio vaccination and serve as proof of vaccination;

5. maintain these measures until the following criteria have been met: (i) at least 6 months
have passed without new exportations and (ii) there is documentation of full application
of high quality eradication activities in all infected and high risk areas; in the absence of
such documentation these measures should be maintained until at least 12 months have
passed without new exportations.
"Once a State has met the criteria to be assessed as no longer exporting wild poliovirus, it should continue to be considered as an infected State until such time as it has met the criteria to be removed from that category," added the WHO statement.


Polio usually strikes children under five and is usually spread via infected water. There is no
specific treatment or cure, but several vaccines exist.
Experts are particularly concerned the virus continues to pop up in countries previously free of
the disease, such as Syria, Somalia and Iraq where civil war or unrest complicates efforts to
contain the virus.
Some critics say the rapid spread of polio could unravel the nearly three-decade effort to
eradicate it.

WHO puts shackles on Pakistan over polio


ISLAMABAD: The inevitable has finally happened. To prevent the possible spread of
the polio virus from Pakistan to other countries, the World Health Organisation
(WHO) decided on Monday to impose strict travel restrictions on the country.

WHO recommends travel restrictions on Pakistan to prevent spread of polio
The World Health Organisation (WHO) on Monday recommended that
travel restrictions be placed on Pakistan, Cameroon and Syria for being
the only three countries that are currently exporting wild poliovirus,
Express News reported.


Leprosy
Leprosy is an infectious disease that causes severe, disfiguring skin sores and nerve damage in
the arms and legs. The disease has been around since ancient times, often surrounded by
terrifying, negative stigmas and tales of leprosy patients being shunned as outcasts. Outbreaks of
leprosy have affected, and panicked, people on every continent. The oldest civilizations of China,
Egypt, and India feared leprosy was an incurable, mutilating, and contagious disease.
Leprosy, also known as Hansen's disease (HD), is a chronic infection caused by the bacteria
Mycobacterium leprae[1] and Mycobacterium lepromatosis.[2] Initially infections are without
symptoms and typically remain this way for 5 to as long as 20 years. [1] Symptoms that develop
include granulomas of the nerves, respiratory tract, skin, and eyes.[1] This may result in a lack of
ability to feel pain and thus loss of parts of extremities due to repeated injuries.[3] Weakness and
poor eyesight may also be present.[3]
There are two main types of disease based on the number of bacteria present: paucibacillary and
multibacillary.[3] The two types are differentiated by the number of poorly pigmented numb skin
patches present, with paucibacillary having five or fewer and multibacillary having more than
five.[3] The diagnosis is confirmed by finding acid-fast bacilli in a biopsy of the skin or via
detecting the DNA by polymerase chain reaction.[3] It occurs more commonly among those living
in poverty and is believed to be transmitted by respiratory droplets.[3] It is not very contagious.[3]
Leprosy is curable with treatment.[1] Treatment for paucibacillary leprosy is with the medications
dapsone and rifampicin for 6 months.[3] Treatment for multibacillary leprosy consists of
rifampicin, dapsone, and clofazimine for 12 months.[3] These treatments are provided for free by
the World Health Organization.[1] A number of other antibiotics may also be used.[3] Globally in
2012 the number of chronic cases of leprosy was 189,000 and the number of new cases was
230,000.[1] The number of chronic cases has decreased from some 5.2 million in the 1980s.[1][4][5]
Most new cases occur in 16 countries, with India accounting for more than half.[1][3] In the past 20
years, 16 million people worldwide have been cured of leprosy.[1]
Leprosy has affected humanity for thousands of years.[3] The disease takes its name from the
Latin word lepra, which means "scaly", while the term "Hansen's disease" is named after the
physician Gerhard Armauer Hansen.[3] Separating people in leper colonies still occurs in
countries like India, where there are more than a thousand;[6] China, where there are hundreds;[7]
and in the continent of Africa.[8] However, most colonies have closed.[8] Leprosy has been
associated with social stigma for much of history,[1] which remains a barrier to self-reporting and
early treatment. World Leprosy Day was started in 1954 to draw awareness to those affected by
leprosy.

Prevention
Medications can decrease the risk of acquiring the disease for those living with people with leprosy, and likely also for those with whom people with leprosy come into contact outside the home.[54] There are, however, concerns about resistance, cost, and disclosure of a person's infection status when following up contacts; the WHO therefore recommends that people who live in the same household be examined for leprosy and only be treated if symptoms are present.[54]
The Bacillus Calmette-Guérin (BCG) vaccine offers a variable amount of protection against leprosy in addition to tuberculosis.[55] It appears to be 26 to 41% effective (based on controlled trials) and about 60% effective based on observational studies, with two doses possibly working better than one.[56][57] Development of a more effective vaccine is ongoing as of 2011.[54]

Treatment

MDT anti-leprosy drugs: standard regimens

A number of leprostatic agents are available for treatment. For paucibacillary (PB or tuberculoid) cases, treatment with daily dapsone and monthly rifampicin for six months is recommended.[3] For multibacillary (MB or lepromatous) cases, treatment with daily dapsone and clofazimine along with monthly rifampicin for twelve months is recommended.[3]
Multi-drug therapy (MDT) remains highly effective, and people are no longer infectious after the
first monthly dose.[23] It is safe and easy to use under field conditions due to its presentation in
calendar blister packs.[23] Relapse rates remain low, and there is no known resistance to the
combined drugs.[23]

Survival rate
Overall survival
Patients with a certain disease (for example, colorectal cancer) can die directly from that disease
or from an unrelated cause (for example, a car accident). When the precise cause of death is not
specified, this is called the overall survival rate or observed survival rate. Doctors often use mean overall survival rates to estimate the patient's prognosis. This is often expressed over standard time periods, like one, five, and ten years. For example, prostate cancer has a much higher one-year overall survival rate than pancreatic cancer, and hence a better prognosis.
Net survival rate
When someone is interested in how survival is affected by the disease, there is also the net
survival rate, which filters out the effect of mortality from other causes than the disease. The
two main ways to calculate net survival are relative survival and cause-specific survival or
disease-specific survival.
Relative survival has the advantage that it does not depend on accuracy of the reported cause of
death; cause-specific survival has the advantage that it does not depend on the ability to find a
similar population of people without the disease.
Relative survival
Relative survival is calculated by dividing the overall survival after diagnosis of a disease by the
survival as observed in a similar population that was not diagnosed with that disease. A similar
population is composed of individuals with at least age and gender similar to those diagnosed
with the disease.
Cause-specific survival and disease-specific survival
Cause-specific survival is calculated by treating deaths from other causes than the disease as
withdrawals from the population that don't lower survival, comparable to patients who are not
observed any longer, e.g. due to reaching the end of the study period.
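The sketch below illustrates overall, cause-specific and relative survival on an invented five-patient cohort followed for a fixed period. It is only a toy: real calculations use survival-analysis methods (e.g. life tables or Kaplan-Meier estimates) and population life tables, and the "expected survival" figure here is made up.

cohort = [
    {"died": True,  "from_disease": True},
    {"died": True,  "from_disease": False},   # e.g. death in a car accident
    {"died": False, "from_disease": False},
    {"died": False, "from_disease": False},
    {"died": True,  "from_disease": True},
]
n = len(cohort)

# Overall (observed) survival: any death counts as an event.
overall_survival = sum(not p["died"] for p in cohort) / n

# Cause-specific survival: deaths from other causes are treated as withdrawals,
# i.e. removed from the denominator rather than counted as events.
deaths_from_disease = sum(p["died"] and p["from_disease"] for p in cohort)
at_risk = n - sum(p["died"] and not p["from_disease"] for p in cohort)
cause_specific_survival = (at_risk - deaths_from_disease) / at_risk

# Relative survival: observed survival divided by the survival expected in a
# similar (age- and sex-matched) population without the disease (value invented).
expected_survival_similar_population = 0.90
relative_survival = overall_survival / expected_survival_similar_population

print(overall_survival, cause_specific_survival, relative_survival)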
Median survival
Median survival is also commonly used in regard to survival rates, meaning the length of time at which 50% of the patients have died and 50% have survived.
Five-year survival
Five-year survival rate measures survival at 5 years after diagnosis.

The five-year survival rate is a type of survival rate for estimating the prognosis of a particular
disease, normally calculated from the point of diagnosis. Lead time bias due to earlier diagnosis
can affect interpretation of the five-year survival rate.
There are absolute and relative survival rates; the latter are more useful and commonly used.
Uses
Five-year survival rates can be used to compare the effectiveness of treatments. Use of 5-year
survival statistics is more useful in aggressive diseases that have a shorter life expectancy
following diagnosis (such as lung cancer) and less useful in cases with a long life expectancy
such as prostate cancer.
Improvements in rates are sometimes attributed to improvements in diagnosis, rather than
improvements in prognosis.
To compare treatments (independent of diagnostics) it may be better to consider survival from
reaching a certain stage of the disease or its treatment.
Analysis performed against the Surveillance, Epidemiology, and End Results database (SEER)
facilitates calculation of five-year survival rates.

Disability

Disability is conceptualised as the interaction between barriers and impairments. Impairments may be physical, cognitive, mental, sensory, emotional, developmental, or some combination of these. Barriers may be physical, like stairs, or attitudinal, like bias, and may differ depending on context.

The Convention on the Rights of Persons with Disabilities (CRPD) has been ratified by more
than 140 countries. It reinforces the rights of people with disabilities.
Impairments may be present from birth, but the majority are acquired during a person's lifetime.
Different countries may apply several definitions when it comes to disability.
Disability is an umbrella term, covering impairments, activity limitations, and participation restrictions. An impairment is a problem in body function or structure; an activity limitation is a difficulty encountered by an individual in executing a task or action; while a participation restriction is a problem experienced by an individual in involvement in life situations. Thus, disability is a complex phenomenon, reflecting an interaction between features of a person's body and features of the society in which he or she lives.[1]
An individual may also qualify as disabled if they have had an impairment in the past or are seen
as disabled based on a personal or group standard or norm. Such impairments may include
physical, sensory, and cognitive or developmental disabilities. Mental disorders (also known as
psychiatric or psychosocial disability) and various types of chronic disease may also qualify as
disabilities.
Some advocates object to describing certain conditions (notably deafness and autism) as
"disabilities", arguing that it is more appropriate to consider them developmental differences that
have been unfairly stigmatized by society.[2][3] However, other advocates argue that disability is a
result of exclusion from mainstream society and not any inherent impairment.[4][5]

Types of disability
The term "disability" broadly describes an impairment in a person's ability to function, caused by
changes in various subsystems of the body, or to mental health. The degree of disability may
range from mild to moderate, severe, or profound.[6] A person may also have multiple disabilities.
Conditions causing disability are classified by the medical community as:[7]

inherited (genetically transmitted);
congenital, meaning caused by a mother's infection or other disease during pregnancy, embryonic or fetal developmental irregularities, or by injury during or soon after birth;
acquired, such as conditions caused by illness or injury;
of unknown origin.

Types of disability may also be categorized in the following way:

Physical disability
Any impairment which limits the physical function of limbs, or fine or gross motor ability, is
a physical impairment, not necessarily a physical disability. The Social Model of Disability
defines physical disability as manifest when an impairment meets a non-universal design or
program, e.g. a person who cannot climb stairs may have a physical impairment of the knees
when putting stress on them from an elevated position such as with climbing or descending
stairs. If an elevator was provided, or a building had services on the first floor, this impairment
would not become a disability. Other physical disabilities include impairments which limit other
facets of daily living, such as severe sleep apnea.

Sensory disability
Sensory disability is impairment of one of the senses. The term is used primarily to refer to
vision and hearing impairment, but other senses can be impaired.

Vision impairment
Vision impairment (or "visual impairment") is vision loss (of a person) to such a degree as to
qualify as an additional support need through a significant limitation of visual capability
resulting from either disease, trauma, or congenital or degenerative conditions that cannot be
corrected by conventional means, such as refractive correction, medication, or surgery.[8][9][10]
This functional loss of vision is typically defined to manifest with:
1. best corrected visual acuity of less than 20/60, or significant central field defect,
2. significant peripheral field defect including homonymous or heteronymous bilateral visual field defect or generalized contraction or constriction of field, or
3. reduced peak contrast sensitivity with either of the above conditions.[8][11]

Hearing impairment
Hearing impairment or hard of hearing or deafness refers to conditions in which individuals are
fully or partially unable to detect or perceive at least some frequencies of sound which can
typically be heard by most people. Mild hearing loss may sometimes not be considered a
disability.

Olfactory and gustatory impairment


Impairment of the senses of smell and taste is commonly associated with aging but can also
occur in younger people due to a wide variety of causes.
There are various olfactory disorders:

Anosmia: inability to smell
Dysosmia: things do not smell as they "should"
Hyperosmia: an abnormally acute sense of smell
Hyposmia: decreased ability to smell
Olfactory reference syndrome: a psychological disorder which causes patients to imagine they have strong body odor
Parosmia: things smell worse than they should
Phantosmia: "hallucinated smell", often unpleasant in nature

Complete loss of the sense of taste is known as ageusia, while dysgeusia is a persistent abnormal sense of taste.

Somatosensory impairment
Insensitivity to stimuli such as touch, heat, cold, and pain is often an adjunct to a more general physical impairment involving neural pathways and is very commonly associated with paralysis (in which the motor neural circuits are also affected).

Balance disorder
A balance disorder is a disturbance that causes an individual to feel unsteady, for example when
standing or walking. It may be accompanied by symptoms of being giddy or woozy, or of having a sensation of movement, spinning, or floating. Balance is the result of several body systems
working together. The eyes (visual system), ears (vestibular system) and the body's sense of
where it is in space (proprioception) need to be intact. The brain, which compiles this
information, needs to be functioning effectively.

Intellectual disability
Intellectual disability is a broad concept that ranges from mental retardation to cognitive deficits
too mild or too specific (as in specific learning disability) to qualify as mental retardation.
Intellectual disabilities may appear at any age. Mental retardation is a subtype of intellectual
disability, and the term intellectual disability is now preferred by many advocates in most
English-speaking countries.

Mental health and emotional disabilities


A mental disorder or mental illness is a psychological or behavioral pattern generally associated
with subjective distress or disability that occurs in an individual, and perceived by the majority
of society as being outside of normal development or cultural expectations. The recognition and
understanding of mental health conditions has changed over time and across cultures, and there
are still variations in the definition, assessment, and classification of mental disorders, although
standard guideline criteria are widely accepted.

Disability-adjusted life year

Disability-adjusted life years out of 100,000 lost due to any cause in 2004.
The disability-adjusted life year (DALY) is a measure of overall disease burden, expressed as
the number of years lost due to ill-health, disability or early death.
The measure was originally developed by Harvard University for the World Bank in 1990; the World Health Organization subsequently adopted the method in 1996 as part of the Ad hoc Committee on Health Research's "Investing in Health Research & Development" report. The DALY is becoming
increasingly common in the field of public health and health impact assessment (HIA). It
"extends the concept of potential years of life lost due to premature death...to include equivalent
years of 'healthy' life lost by virtue of being in states of poor health or disability." In so doing,
mortality and morbidity are combined into a single, common metric.
Traditionally, health liabilities were expressed using one measure: (expected or average number
of) 'Years of Life Lost' (YLL). This measure does not take the impact of disability into account,
which can be expressed by: 'Years Lived with Disability' (YLD). DALYs are calculated by taking
the sum of these two components. In a formula:
DALY = YLL + YLD.
The DALY relies on an acceptance that the most appropriate measure of the effects of chronic
illness is time, both time lost due to premature death and time spent disabled by disease. One
DALY, therefore, is equal to one year of healthy life lost. Japanese life expectancy statistics are
used as the standard for measuring premature death, as the Japanese have the longest life
expectancies.
Looking at the burden of disease via DALYs can reveal surprising things about a population's
health. For example, the 1990 WHO report indicated that 5 of the 10 leading causes of
disability were psychiatric conditions. Psychiatric and neurologic conditions account for 28% of

all years lived with disability, but only 1.4% of all deaths and 1.1% of years of life lost. Thus,
psychiatric disorders, while traditionally not regarded as a major epidemiological problem, are
shown by consideration of disability years to have a huge impact on populations.
Social weighting
The disability-adjusted life year is a type of health-adjusted life year (HALY) that attempts to quantify the burden of disease or disability in populations. DALYs are similar to quality-adjusted life year (QALY) measures, but rather than attaching health-related quality of life (HRQL) estimates to health states, DALYs assign HRQLs to specific diseases and disabilities. The methodology was originally developed by the World Bank, but has since been greatly modified and is not an economic measure. However, unique among disease measures, HALYs, including
DALYs and QALYs, are especially useful in guiding the allocation of health resources as they
provide a common denominator, allowing for the expression of utility in terms of DALYs/dollar,
or QALY/dollar.[5] For example, in Gambia, provision of the pneumococcal conjugate
vaccination costs $670 per DALY saved.[6]

Some studies use DALYs calculated to place greater value on a year lived as a young adult. This
formula produces average values around age 10 and age 55, a peak around age 25, and lowest
values among very young children and very old people.[7]
A crucial distinction among DALY studies is the use of "social weighting", in which the value of each year of life depends on age. There are two components to this differential accounting of time: age weighting and time discounting. Age weighting is based on the theory of human capital. Commonly, years lived as a young adult are valued more highly than years spent as a young child or older adult, as these are years of peak productivity. Age weighting receives considerable criticism from those who fault it for valuing young adults at the expense of children and the old. Some criticize, while others rationalize, this as reflecting society's interest in productivity and receiving a return on its investment in raising children. This age weighting system means that somebody disabled at 30 years of age, for ten years, would be measured as having a higher loss of DALYs (a greater burden of disease) than somebody disabled by the same disease or injury at the age of sixty for 15 years. This age-weighting function is by no means a universal methodology in HALY studies, but is common when using DALYs. Cost-effectiveness studies using QALYs, for example, do not discount time at different ages differently.[5] It is important to note that this age-weighting function applies to the calculation of DALYs lost due to disability. Years lost to premature death are determined by the age at death and life expectancy.
The global burden of disease (GBD) 2001-2002 study counted disability-adjusted life years equally for all ages, but the GBD 1990 and GBD 2004 studies used the age-weighting formula[8][9]

W = 0.1658 x Y x e^(-0.04Y)

where Y is the age at which the year is lived and W is the value assigned to it relative to an average value of 1. This age-weighting function is not the same as the disability weight (DW), which is determined by disease or disability and does not vary with age. Tables have been created of thousands of diseases and disabilities, ranging from Alzheimer's disease to loss of a finger, with the disability weight meant to indicate the level of disability that results from the specific condition.
At the population level, the burden of disease as measured by DALYs is calculated by DALY = YLL + YLD, where YLL is years of life lost and YLD is years lived with disability. In turn, population YLD is determined by the number of years disabled, weighted by the level of disability caused by a disability or disease, using the formula YLD = I x DW x L. In this formula I = number of incident cases in the population, DW = disability weight of the specific condition, and L = average duration of the case until remission or death (years). There is also a prevalence (as opposed to incidence) based calculation for YLD. Premature death is calculated by YLL = N x L, where N = number of deaths due to the condition and L = standard life expectancy at age of death (expectancy - age at death).[10]
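A minimal sketch of these population-level formulas, without age weighting or time discounting, is shown below; all of the disease figures are invented for illustration.

# Years lived with disability: YLD = I x DW x L
incident_cases    = 1_000    # I: new cases in the population
disability_weight = 0.30     # DW: severity of the condition (0 = full health, 1 = death)
avg_duration      = 4.0      # L: average years until remission or death
yld = incident_cases * disability_weight * avg_duration

# Years of life lost: YLL = N x L
deaths = 200                           # N: deaths due to the condition
std_life_expectancy_at_death = 30.0    # L: standard life expectancy at age of death
yll = deaths * std_life_expectancy_at_death

daly = yll + yld
print(f"YLD = {yld:.0f}, YLL = {yll:.0f}, DALY = {daly:.0f}")   # 1200, 6000, 7200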
In these studies future years were also discounted at a 3% rate to account for future health care
losses. Time discounting, which is distinct from the age weight function, describes preferences in
time as used in economic models.[11]
The effects of the interplay between life expectancy and years lost, discounting, and social
weighting are complex, depending on the severity and duration of illness. For example, the
parameters used in the GBD 1990 study generally give greater weight to deaths at any year prior
to age 39 than afterward, with the death of a newborn weighted at 33 DALYs and the death of
someone aged 5-20 weighted at approximately 36 DALYs.
The Human Development Index (HDI) is a composite statistic of life expectancy, education,
and income indices used to rank countries into four tiers of human development. It was created
by Pakistani economist Mahbub ul Haq and Indian economist Amartya Sen in 1990, and was
published by the United Nations Development Programme.
The 2010 Human Development Report introduced an Inequality-adjusted Human Development
Index (IHDI). While the simple HDI remains useful, it stated that "the IHDI is the actual level of
human development (accounting for inequality)" and "the HDI can be viewed as an index of
'potential' human development (or the maximum IHDI that could be achieved if there were no
inequality)".

Years of potential life lost (YPLL) or potential years of life lost (PYLL), is an estimate of the
average years a person would have lived if he or she had not died prematurely. It is, therefore, a
measure of premature mortality. As a method, it is an alternative to death rates that gives more
weight to deaths that occur among younger people. Another alternative is to consider the effects
of both disability and premature death using disability adjusted life years.

Calculation
To calculate the years of potential life lost, the analyst has to set an upper reference age. The
reference age should correspond roughly to the life expectancy of the population under study. In
the developed world, this is commonly set at age 75, but it is essentially arbitrary. Thus, PYLL
should be written with respect to the reference age used in the calculation: e.g., PYLL [75].
PYLL can be calculated using individual level data or using age grouped data.
Briefly, for the individual method, each person's PYLL is calculated by subtracting the person's
age at death from the reference age. If a person is older than the reference age when he or she
dies, that person's PYLL is set to zero (i.e., there are no "negative" PYLLs). In effect, only those
who die before the reference age are included in the calculation. Some examples:
1. Reference age = 75; Age at death = 60; PYLL[75] = 75 - 60 = 15
2. Reference age = 75; Age at death = 6 months; PYLL[75] = 75 - 0.5 = 74.5
3. Reference age = 75; Age at death = 80; PYLL[75] = 0 (age at death greater than reference
age)
To calculate the PYLL for a particular population in a particular year, the analyst sums the
individual PYLLs for all individuals in that population who died in that year. This can be done
for all-cause mortality or for cause-specific mortality.

Significance
In the developed world, mortality counts and rates tend to emphasize the most common causes of
death in older people, because the risk of death increases with age. Because PYLL gives more
weight to deaths among younger individuals, it is the favoured metric among those who wish to
draw attention to those causes of death that are more common in younger people. Some
researchers say that this measurement should be considered by governments when they decide
how best to divide up scarce resources for research.

For example, in most of the developed world, heart disease and cancer are the leading causes of
death, as measured by the number (or rate) of deaths. For this reason, heart disease and cancer
tend to get a lot of attention (and research funding). However, one might argue that everyone has
to die of something eventually, and so public health efforts should be more explicitly directed at
preventing premature death. When PYLL is used as an explicit measure of premature death,
injuries and infectious diseases become more important. Although the most common cause of
death among young people aged 5 to 40 in the developed world is injury and poisoning,
relatively few young people die, so the principal causes of lost years remain cardiovascular
disease and cancer.
Person-years of potential life lost in the United States in 2006
Cause of premature death        Person-years lost
Cancer                          8,628,000
Heart disease and strokes       8,760,000
Accidents and other injuries    5,873,000
All other causes                13,649,000

Epidemiological transition

Diagram showing sharp birth rate and death rate decreases between Time 1 and
Time 4, the congruent increase in population caused by delayed birth rate
decreases, and the subsequent re-leveling of population growth by Time 5.

In demography and medical geography, epidemiological transition is a phase of development
witnessed by a sudden and stark increase in population growth rates brought about by medical
innovation in disease or sickness therapy and treatment, followed by a re-leveling of population
growth from subsequent declines in fertility rates. "Epidemiological transition" accounts for the
replacement of infectious diseases by chronic diseases over time due to expanded public health
and sanitation.[1][2] This theory was originally posited by Abdel Omran in 1971.[3]

Theory
Omran divided the epidemiological transition of mortality into three phases, in the last of which
chronic diseases replace infection as the primary cause of death.[4] These phases are:
1. The Age of Pestilence and Famine: Where mortality is high and fluctuating,
precluding sustained population growth, with low and variable life
expectancy, vacillating between 20 and 40 years.
2. The Age of Receding Pandemics: Where mortality progressively declines, with
the rate of decline accelerating as epidemic peaks decrease in frequency.
Average life expectancy increases steadily from about 30 to 50 years.
Population growth is sustained and begins to be exponential.
3. The Age of Degenerative and Man-Made Diseases: Mortality continues to
decline and eventually approaches stability at a relatively low level.

The epidemiological transition occurs as a country undergoes the process of modernization from
developing nation to developed nation status. The development of modern healthcare, and of
medicines such as antibiotics, drastically reduces infant mortality rates and extends average life
expectancy which, coupled with subsequent declines in fertility rates, reflects a transition to
chronic and degenerative diseases as more important causes of death.

History
In general human history, Omran's first phase occurs when human population sustains cyclic,
low-growth, and mostly linear, up-and-down patterns associated with wars, famine, epidemic
outbreaks, as well as small golden ages, and localized periods of "prosperity". In early pre-agricultural history, infant mortality rates were high and average life expectancy low. Today, life
expectancy in third world countries remains relatively low, as in many Sub-Saharan African
nations where it typically doesn't exceed 60 years of age.[5]
The second phase involves advancements in medicine and the development of a healthcare system.
One treatment breakthrough of note was the discovery of penicillin in the mid 20th century
which led to widespread and dramatic declines in death rates from previously serious diseases
such as syphilis. Population growth rates surged in the 1950s, 1960s and 1970s, to 1.8% per year
and higher, with the world gaining 2 billion people between 1950 and the 1980s alone.
Omran's third phase occurs when human birth rates drastically decline from highly positive
replacement numbers to stable replacement rates. In several European nations replacement rates
have even become negative.[6] As this transition generally represents the net effect of individual
choices on family size (and the ability to implement those choices), it is more complicated.
Omran gives three possible factors tending to encourage reduced fertility rates:[3]
1. Biophysiologic factors, associated with reduced infant mortality and the
expectation of longer life in parents,
2. Socioeconomic factors, associated with childhood survival and the economic
perceptions of large family size, and
3. Psychologic or emotional factors, where society as a whole changes its
rationale and opinion on family size and parental energies are redirected to
qualitative aspects of child-raising.

This transition may also be associated with the sociological adaptations associated with
demographic movements to urban areas, and a shift from agriculture and labor based production
output to technological and service-sector-based economies.
Regardless, chronic and degenerative diseases, along with accidents and injuries, became more
important causes of death. This shift in demographic and disease profiles is currently under way
in most developing nations; however, every country is unique in its transition speed, which
depends on a myriad of geographical and socio-political factors.

Controversy
Many question whether an epidemiological transition really took place during the twentieth
century. The transition during this time describes the replacement of infectious diseases by
chronic diseases, a replacement attributed to multiple factors such as antibiotics and improved
overall public sanitation. Even though these factors undeniably affected society, for example by
increasing lifespan, many believe that the apparent shift from infectious to chronic disease may
be an illusion. It is debated whether there was an actual increase in chronic diseases; instead, it is
argued that new techniques for diagnosing and managing diseases that had previously gone
undiagnosed and untreated gave the appearance of an emergence of new chronic illnesses.
Multiple factors, such as the increased use of hospitals as treatment centers and improved
statistical evaluation, made chronic diseases more visible to health care professionals. This led to
the question, "Was an epidemiological transition really taking place in the twentieth century?"

Dual burden of disease


Deemed a developmental challenge of epidemic proportions,[30] the double burden of disease
(DBD) is an emerging global health challenge that exists predominantly in low-to-middle
income countries. More specifically, the DBD refers to the dual burden of communicable and
non-communicable diseases (NCDs).[31] Today, over 90 per cent of the world's disease burden
occurs in developing regions, and most of it is attributed to communicable diseases.
Communicable diseases are infectious diseases that can be passed between people through
proximity, social contact or intimate contact.[32] Common diseases in this category include
whooping cough, tuberculosis, HIV/AIDS, malaria, influenza (the flu), and mumps.[33] As
low-to-middle income countries continue to develop, the types of diseases affecting populations
within these countries shift primarily from infectious diseases, such as diarrhea and pneumonia,
to non-communicable diseases, such as cardiovascular disease, cancer and obesity. This shift is
increasingly being referred to as the risk transition.[34][35] As globalization and the
proliferation of pre-packaged foods continue, traditional diets and lifestyles are changing in
many developing countries. As such, it is becoming increasingly common to see low-to-middle
income countries battle with centuries-old issues such as food insecurity and undernutrition, in
addition to emerging health epidemics such as chronic heart disease, hypertension, stroke, and
diabetes. Diseases once characteristic of industrialized nations are increasingly becoming health
challenges of epidemic proportions in many low-to-middle income countries.

Nutrition transition
Nutrition transition is the shift in dietary consumption and energy expenditure that coincides
with economic, demographic, and epidemiological changes. Specifically the term is used for the
recent transition of developing countries from traditional diets high in cereal and fiber to more
Western pattern diets high in sugars, fat, and animal-source food.

Demographic transition (DT) refers to the transition from high birth and death rates to low
birth and death rates as a country develops from a pre-industrial to an industrialized economic
system. This is typically demonstrated through a demographic transition model (DTM). The
theory is based on an interpretation of demographic history developed in 1929 by the American
demographer Warren Thompson (1887-1973).[1] Thompson observed changes, or transitions, in
birth and death rates in industrialized societies over the previous 200 years. Most developed
countries are in stage 3 or 4 of the model; the majority of developing countries have reached
stage 2 or stage 3. The major (relative) exceptions are some poor countries, mainly in sub-Saharan Africa and some Middle Eastern countries, which are poor or affected by government
policy or civil strife, notably Pakistan, Palestinian Territories, Yemen and Afghanistan.[2]
Although this model predicts ever decreasing fertility rates, recent data show that beyond a
certain level of development fertility rates increase again.[3]
A correlation matching the demographic transition has been established; however, it is not
certain whether industrialization and higher incomes lead to lower population or if lower
populations lead to industrialization and higher incomes.[4] In countries that are now developed
this demographic transition began in the 18th century and continues today. In less developed
countries, this demographic transition started later and is still at an earlier stage.[5]

Summary of the theory

Demographic change in Sweden from 1735 to 2000.


Red line: crude death rate (CDR), blue line: (crude) birth rate (CBR)
The transition involves four stages, or possibly five.

In stage one, pre-industrial society, death rates and birth rates are high and roughly in
balance. All human populations are believed to have had this balance until the late 18th
century, when this balance ended in Western Europe.[6] In fact, growth rates were less
than 0.05% at least since the Agricultural Revolution over 10,000 years ago.[6] Birth and
death rates both tend to be very high in this stage.[6] Because both rates are approximately
in balance, population growth is typically very slow in stage one.[6]
In stage two, that of a developing country, the death rates drop rapidly due to
improvements in food supply and sanitation, which increase life spans and reduce
disease. The improvements specific to food supply typically include selective breeding
and crop rotation and farming techniques.[6] Other improvements generally include access
to technology, basic healthcare, and education. For example, numerous improvements in
public health reduce mortality, especially childhood mortality.[6] Prior to the mid-20th
century, these improvements in public health were primarily in the areas of food
handling, water supply, sewage, and personal hygiene.[6] One of the variables often cited
is the increase in female literacy combined with public health education programs which
emerged in the late 19th and early 20th centuries.[6] In Europe, the death rate decline
started in the late 18th century in northwestern Europe and spread to the south and east
over approximately the next 100 years.[6] Without a corresponding fall in birth rates this
produces an imbalance, and the countries in this stage experience a large increase in
population.
In stage three, birth rates fall due to access to contraception, increases in wages,
urbanization, a reduction in subsistence agriculture, an increase in the status and
education of women, a reduction in the value of children's work, an increase in parental
investment in the education of children and other social changes. Population growth
begins to level off. The birth rate decline in developed countries started in the late 19th
century in northern Europe.[6] While improvements in contraception do play a role in birth
rate decline, contraceptives were not generally available nor widely used in the 19th century and
as a result likely did not play a significant role in the decline then.[6] Birth rate decline is also
driven by a transition in values, not just by the availability of contraceptives.[6]

During stage four there are both low birth rates and low death rates. Birth rates may drop
to well below replacement level as has happened in countries like Germany, Italy, and
Japan, leading to a shrinking population, a threat to many industries that rely on
population growth. As the large group born during stage two ages, it creates an economic
burden on the shrinking working population. Death rates may remain consistently low or
increase slightly due to increases in lifestyle diseases due to low exercise levels and high
obesity and an aging population in developed countries. By the late 20th century, birth
rates and death rates in developed countries leveled off at lower rates.[5]

As with all models, this is an idealized picture of population change in these countries. The
model is a generalization that applies to these countries as a group and may not accurately
describe all individual cases. The extent to which it applies to less-developed societies today
remains to be seen. Many countries such as China, Brazil and Thailand have passed through the
Demographic Transition Model (DTM) very quickly due to fast social and economic change.
Some countries, particularly African countries, appear to be stalled in the second stage due to
stagnant development and the effect of AIDS.

Stage One
In pre-industrial society, death rates and birth rates were both high and fluctuated rapidly
according to natural events, such as drought and disease, to produce a relatively constant and
young population. Family planning and contraception were virtually nonexistent; therefore, birth
rates were essentially only limited by the ability of women to bear children. Emigration
depressed death rates in some special cases (for example, Europe and particularly the Eastern
United States during the 19th century), but, overall, death rates tended to match birth rates, often
exceeding 40 per 1000 per year. Children contributed to the economy of the household from an
early age by carrying water, firewood, and messages, caring for younger siblings, sweeping,
washing dishes, preparing food, and working in the fields.[7] Raising a child cost little more than
feeding him or her; there were no education or entertainment expenses. Thus, the total cost of
raising children barely exceeded their contribution to the household. In addition, as they became
adults they became a major input to the family business, mainly farming, and were the primary
form of insurance for adults in old age. In India, an adult son was all that prevented a widow
from falling into destitution. While death rates remained high there was no question as to the
need for children, even if the means to prevent them had existed.[8]
During this stage, society evolves in accordance with the Malthusian paradigm, with population
essentially determined by the food supply. Any fluctuations in food supply (either positive, for
example, due to technology improvements, or negative, due to droughts and pest invasions) tend
to translate directly into population fluctuations. Famines resulting in significant mortality are
frequent. Overall, the population dynamics during stage one is highly reminiscent of that
commonly observed in animals.

Stage Two

World population 10,000 BC - 2000 AD


This stage leads to a fall in death rates and an increase in population.[9] The changes leading to
this stage in Europe were initiated in the Agricultural Revolution of the 18th century and were
initially quite slow. In the 20th century, the falls in death rates in developing countries tended to
be substantially faster. Countries in this stage include Yemen, Afghanistan, the Palestinian
territories, Bhutan and Laos and much of Sub-Saharan Africa (but do not include South Africa,
Zimbabwe, Botswana, Swaziland, Lesotho, Namibia, Kenya and Ghana, which have begun to
move into stage 3).[10]
The decline in the death rate is due initially to two factors:

First, improvements in the food supply brought about by higher yields in agricultural
practices and better transportation prevent death due to starvation and lack of water.
Agricultural improvements included crop rotation, selective breeding, and seed drill
technology.
Second, significant improvements in public health reduce mortality, particularly in
childhood. These are not so much medical breakthroughs (Europe passed through stage
two before the advances of the mid-20th century, although there was significant medical
progress in the 19th century, such as the development of vaccination) as they are
improvements in water supply, sewerage, food handling, and general personal hygiene
following from growing scientific knowledge of the causes of disease and the improved
education and social status of mothers.

A consequence of the decline in mortality in Stage Two is an increasingly rapid rise in population
growth (a "population explosion") as the gap between deaths and births grows wider. Note that
this growth is not due to an increase in fertility (or birth rates) but to a decline in deaths. This
change in population occurred in north-western Europe during the 19th century due to the
Industrial Revolution. During the second half of the 20th century less-developed countries
entered Stage Two, creating the worldwide population explosion that has demographers
concerned today. In this stage of the DT, countries are vulnerable to becoming failed states in the
absence of progressive governments.

Population pyramid of Angola 2005


Another characteristic of Stage Two of the demographic transition is a change in the age
structure of the population. In Stage One, the majority of deaths are concentrated in the first 5-10
years of life. Therefore, more than anything else, the decline in death rates in Stage Two entails
the increasing survival of children and a growing population. Hence, the age structure of the
population becomes increasingly youthful and more of these children enter the reproductive
cycle of their lives while maintaining the high fertility rates of their parents. The bottom of the
"age pyramid" widens first, accelerating population growth. The age structure of such a
population is illustrated by using an example from the Third World today.

Stage Three
Stage Three moves the population towards stability through a decline in the birth rate.[11] Several
factors contribute to this eventual decline, although some of them remain speculative:

In rural areas continued decline in childhood death means that at some point parents
realize they need not require so many children to be born to ensure a comfortable old age.
As childhood death continues to fall and incomes increase parents can become
increasingly confident that fewer children will suffice to help in family business and care
for them in old age.
Increasing urbanization changes the traditional values placed upon fertility and the value
of children in rural society. Urban living also raises the cost of dependent children to a
family. A recent theory suggests that urbanization also contributes to reducing the birth
rate because it disrupts optimal mating patterns. A 2008 study in Iceland found that the
most fecund marriages are between distant cousins. Genetic incompatibilities inherent in
more distant outbreeding make reproduction harder.[12]
In both rural and urban areas, the cost of children to parents is exacerbated by the
introduction of compulsory education acts and the increased need to educate children so
they can take up a respected position in society. Children are increasingly prohibited
under law from working outside the household and make an increasingly limited
contribution to the household, as school children are increasingly exempted from the
expectation of making a significant contribution to domestic work. Even in equatorial
Africa, children now need to be clothed, and may even require school uniforms. Parents
begin to consider it a duty to buy children books and toys. Partly due to education and

access to family planning, people begin to reassess their need for children and their
ability to raise them.[8]
A major factor in reducing birth rates in stage 3 countries such as Malaysia is the availability of
family planning facilities.
Increasing female literacy and employment lowers the uncritical acceptance of
childbearing and motherhood as measures of the status of women. Working women have
less time to raise children; this is particularly an issue where fathers traditionally make
little or no contribution to child-raising, such as southern Europe or Japan. Valuation of
women beyond childbearing and motherhood becomes important.
Improvements in contraceptive technology are now a major factor. Fertility decline is
caused as much by changes in values about children and sex as by the availability of
contraceptives and knowledge of how to use them.
The resulting changes in the age structure of the population include a reduction in the youth
dependency ratio and eventually population aging. The population structure becomes less
triangular and more like an elongated balloon. During the period between the decline in youth
dependency and rise in old age dependency there is a demographic window of opportunity that
can potentially produce economic growth through an increase in the ratio of working age to
dependent population; the demographic dividend.
However, unless factors such as those listed above are allowed to work, a society's birth rates
may not drop to a low level in due time, which means that the society cannot proceed to Stage
Four and is locked in what is called a demographic trap.
Countries that have experienced a fertility decline of over 40% from their pre-transition levels
include: Costa Rica, El Salvador, Panama, Jamaica, Mexico, Colombia, Ecuador, Guyana,
Philippines, Indonesia, Malaysia, Sri Lanka, Turkey, Azerbaijan, Turkmenistan, Uzbekistan,
Egypt, Tunisia, Algeria, Morocco, Lebanon, South Africa, India, Saudi Arabia, and many Pacific
islands.
Countries that have experienced a fertility decline of 25-40% include: Honduras, Guatemala,
Nicaragua, Paraguay, Bolivia, Vietnam, Myanmar, Bangladesh, Tajikistan, Jordan, Qatar,
Albania, United Arab Emirates, Zimbabwe, and Botswana.
Countries that have experienced a fertility decline of 10-25% include: Haiti, Papua New Guinea,
Nepal, Pakistan, Syria, Iraq, Libya, Sudan, Kenya, Ghana and Senegal.[10]

Stage Four
This occurs where birth and death rates are both low, leading to a total population which is high
and stable. Death rates are low for a number of reasons, primarily lower rates of diseases and
higher production of food. The birth rate is low because people have more opportunities to
choose if they want children; this is made possible by improvements in contraception or women
gaining more independence and work opportunities.[13] Some theorists[who?] consider there are only

4 stages and that the population of a country will remain at this level. The DTM is only a
suggestion about the future population levels of a country, not a prediction.
Countries that are at this stage (Total Fertility Rate of less than 2.5 in 1997) include: United
States, Canada, Argentina, Australia, New Zealand, most of Europe, Bahamas, Puerto Rico,
Trinidad and Tobago, Brazil, Sri Lanka, South Korea, Singapore, Iran, China, Turkey, Thailand
and Mauritius.[10]

Stage Five and/or Six


See also: Aging of Europe and Aging of Japan

United Nation's population projections by location.


Note the vertical axis is logarithmic and represents millions of people.
The original Demographic Transition model has just four stages, but additional stages have been
proposed. Both more-fertile and less-fertile futures have been claimed as a Stage Five.
Some countries have sub-replacement fertility (that is, below 2.1 children per woman).
Replacement fertility is typically 2.1 because this replaces the two parents and adds population to
compensate for deaths (i.e. members of the population who die without reproducing) with the
additional 0.1. Many European and East Asian countries now have higher death rates than birth
rates. Population aging and population decline may eventually occur, assuming that the fertility
rate does not change and sustained mass immigration does not occur.
In an article in the August 2009 issue of Nature, Myrskyla, Kohler and Billari show that the
previously negative relationship between national wealth (as measured by the Human
Development Index (HDI)) and birth rates has become J-shaped. Development promotes fertility
decline at low and medium HDI levels, but advanced HDI promotes a rebound in fertility. In
many countries with very high levels of development fertility rates are now approaching two
children per woman although there are exceptions, notably Germany, Italy, Japan.

In the current century, most developed countries have increased fertility. From the point of view
of evolutionary biology, richer people having fewer children is unexpected, as natural selection
would be expected to favor individuals who are willing and able to convert plentiful resources
into plentiful fertile descendants.

Quality-adjusted life year


The quality-adjusted life year or quality-adjusted life-year (QALY) is a measure of disease
burden, including both the quality and the quantity of life lived.[1][2] It is used in assessing the
value for money of a medical intervention. According to Pliskin et al., the QALY model requires
utility-independent, risk-neutral, and constant proportional tradeoff behaviour.[3]
The QALY is based on the number of years of life that would be added by the intervention. Each
year in perfect health is assigned the value of 1.0 down to a value of 0.0 for being dead. If the
extra years would not be lived in full health, for example if the patient would lose a limb, or be
blind or have to use a wheelchair, then the extra life-years are given a value between 0 and 1 to
account for this.[citation needed] Under certain methods, such as the EQ-5D, the QALY value can
be negative.

Use
The QALY is often used in cost-utility analysis to calculate the ratio of cost to QALYs saved for
a particular health care intervention. This is then used to allocate healthcare resources, with an
intervention with a lower incremental cost-effectiveness ratio ("ICER", the extra cost per QALY
gained) being preferred over an intervention with a higher ratio.[citation needed]

Calculation
The QALY is a measure of the value of health outcomes. Since health is a function of length of
life and quality of life, the QALY was developed as an attempt to combine the value of these
attributes into a single index number. The basic idea underlying the QALY is simple: it assumes
that a year of life lived in perfect health is worth 1 QALY (1 year of life x 1 utility value = 1
QALY) and that a year of life lived in a state of less than this perfect health is worth less than 1.
In order to determine the exact QALY value, it is sufficient to multiply the utility value
associated with a given state of health by the years lived in that state. QALYs are therefore
expressed in terms of "years lived in perfect health": half a year lived in perfect health is
equivalent to 0.5 QALYs (0.5 years x 1 utility), the same as 1 year of life lived in a situation
with utility 0.5 (e.g. bedridden) (1 year x 0.5 utility). QALYs can then be incorporated with
medical costs to arrive at a final common denominator of cost/QALY. This parameter can be
used to develop a cost-effectiveness analysis of any treatment.
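A minimal sketch of these two calculations follows; the utilities, durations, and costs are hypothetical illustrations, not published estimates.

```python
# Sketch of a QALY and ICER calculation with hypothetical inputs.

def qalys(years, utility):
    """QALYs = years lived in a health state x utility of that state (1.0 = perfect health)."""
    return years * utility

print(qalys(1.0, 1.0))   # 1.0  (one year in perfect health)
print(qalys(1.0, 0.5))   # 0.5  (one year bedridden, utility 0.5)
print(qalys(0.5, 1.0))   # 0.5  (half a year in perfect health)

def icer(cost_new, cost_old, qalys_new, qalys_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return (cost_new - cost_old) / (qalys_new - qalys_old)

# Hypothetical intervention: costs 10,000 more and yields 0.5 extra QALYs.
print(icer(30_000, 20_000, 2.0, 1.5))   # 20000.0 per QALY gained
```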

Meaning
The concept of the QALY is credited to work by Klarman[4] and later Fanshel and Bush[5] and
Torrance,[6] who suggested the idea of length of life adjusted by indices of functionality or health.
[7]
It was officially named the QALY in print in an article by Zeckhauser and Shepard.[8] It was
later promoted through medical technology assessment conducted by the US Congress Office of
Technology Assessment.

Then, in 1980, Pliskin proposed a justification of the construction of the QALY indicator using
multiattribute utility theory: if a set of conditions pertaining to the agent's preferences on life
years and quality of life is verified, then it is possible to express the agent's preferences about
pairs (number of life years, health state) by an interval (Neumannian) utility function. This utility
function is equal to the product of an interval utility function on life years and an interval utility
function on health state. Because of these theoretical assumptions, the meaning and usefulness of
the QALY is debated.[9][10][11] Perfect health is hard, if not impossible, to define. Some argue
that there are health states worse than being dead, and that therefore there should be negative
values possible on the health spectrum (indeed, some health economists have incorporated
negative values into calculations). Determining the level of health depends on measures that
some argue place disproportionate importance on physical pain or disability over mental
health.[12] The effects of a patient's health on the quality of life of others (e.g. caregivers or
family) do not figure into these calculations.

Sullivan's Index
Sullivan's index is a method to compute life expectancy free of disability.[1] Health expectancy
calculated by Sullivan's method is the number of remaining years, at a particular age, that an
individual can expect to live in a healthy state.[2] It is computed by subtracting the probable
duration of bed disability and inability to perform major activities from the life expectancy. The
data for the calculation are obtained from population surveys and a period life table. Sullivan's
index collects mortality and disability data separately, and these data are often readily
available. The Sullivan health expectancy reflects the current health of a real population adjusted
for mortality levels and independent of age structure.[3]
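A rough sketch of Sullivan's method is given below, assuming hypothetical life-table person-years (Lx), survivors at the starting age (lx), and surveyed disability prevalences; the calculation simply removes the proportion of each age interval spent with disability before dividing by the survivors at the age of interest.

```python
# Sketch of Sullivan's method with hypothetical inputs.

person_years    = [480_000, 450_000, 400_000, 300_000]   # Lx above the age of interest (hypothetical)
disability_prev = [0.05, 0.10, 0.20, 0.40]                # proportion of each interval spent disabled (hypothetical)
survivors       = 95_000                                  # lx at the age of interest (hypothetical)

total_le = sum(person_years) / survivors
disability_free_le = sum(L * (1 - p) for L, p in zip(person_years, disability_prev)) / survivors

print(round(total_le, 1))            # total remaining life expectancy
print(round(disability_free_le, 1))  # Sullivan health expectancy (disability-free years)
```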

Calculating Potential Years of Life Lost (PYLL)


The PYLL is an indicator of premature mortality. It represents the total number of years NOT
lived by an individual who died before age 75.

This indicator gives more importance to causes of death that occurred at younger ages
than to those that occurred at older ages.

The upper age limit of 75 is used to approximate the life expectancy of Canadians for
both sexes combined; that is, an individual in good health is expected to live to about
age 75 in Canada.

Deaths occurring in individuals age 75 or older are NOT included in the calculation.

Infant deaths (deaths among infants under 1 year of age) are included in the calculation
due to their very small numbers. Other methods exclude these deaths since they are
often due to causes that have a different etiology from deaths at later ages.

PYLL can be calculated in two ways. The Core Indicators for Public Health in Ontario uses
Method A.
Method A (Individual):
The PYLL due to death is calculated for each person who died before age 75. For example, a
person who died at age 20 would contribute 55 potential years of life lost. Deaths occurring in
individuals age 75 or older are NOT included in the calculation. Potential years of life lost
correspond to the sum of the PYLL contributed for each individual. The rate is obtained by
dividing total potential years of life lost by the total population less than 75 years of age.
Method of Calculation:

Individual   Age at Death (in years)   PYLL Contributed (75 - age at death)
1            6 months                  75 - 0.5 = 74.5
2            55                        75 - 55 = 20
3            15                        75 - 15 = 60
4            85 *                      (not included)
5            60                        75 - 60 = 15
SUM of PYLL                            169.5

Note: * refers to deaths that DO NOT contribute to PYLL as deaths occurred to individuals 75
years of age or older.
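As a minimal sketch, Method A can be reproduced directly from the ages at death in the table above (6 months recorded as 0.5 years; the death at age 85 contributes nothing).

```python
# Sketch of Method A (individual-level PYLL[75]) using the table's ages at death.

REFERENCE_AGE = 75
ages_at_death = [0.5, 55, 15, 85, 60]

contributions = [max(REFERENCE_AGE - age, 0) for age in ages_at_death]
print(contributions)        # [74.5, 20, 60, 0, 15]
print(sum(contributions))   # 169.5
```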
Method B (Age Group):
The PYLL due to death is calculated for each age group (<1, 1-4, 5-9, ..., and 70-74) by
multiplying the number of deaths by the difference between age 75 and the mean age at death
in each age group. Potential years of life lost correspond to the sum of the products obtained
for each age group. The rate is obtained by dividing total potential years of life lost by the
total population under 75 years old.
Method of Calculation:
Age      # of Deaths (1)   Mean Age at Death (2)   75 - Mean Age at Death (3)   PYLL (1) x (3)
<1       4                 0.5                     74.5                         298.0
1-4      28                3.0                     72.0                         2,016.0
5-9      52                7.5                     67.5                         3,510.0
10-14    64                12.5                    62.5                         4,000.0
15-19    315               17.5                    57.5                         18,112.5
20-24    410               22.5                    52.5                         21,525.0
25-29    308               27.5                    47.5                         14,630.0
30-34    243               32.5                    42.5                         10,327.5
35-39    171               37.5                    37.5                         6,412.5
40-44    131               42.5                    32.5                         4,257.5
45-49    116               47.5                    27.5                         3,190.0
50-54    85                52.5                    22.5                         1,912.5
55-59    85                57.5                    17.5                         1,487.5
60-64    86                62.5                    12.5                         1,075.0
65-69    64                67.5                    7.5                          480.0
70-74    70                72.5                    2.5                          175.0
SUM of PYLL                                                                     93,409.0

(1) Calculate the mean age for each age group (column 2) and subtract from the selected age,
75 (column 3)
(2) Calculate the potential years of life lost for each age group by multiplying the number of
deaths (column 1) by the remaining years of life lost (column 3)
(3) Calculate the PYLL rate by dividing the sum of the potential years of life lost by age
group (93,409) by the total population for the ages selected (12,975,615).
Rate per 1,000 persons
= Total PYLL divided by Population under age 75
= 93,409.0/12,975,615
= 7.2 per 1,000
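The age-grouped calculation above can be reproduced with a short script. The death counts and mean ages are taken from the table; the under-1 death count, not shown explicitly in the table, is inferred from its PYLL (298.0 / 74.5 = 4).

```python
# Sketch of Method B (age-grouped PYLL) reproducing the table above.

deaths   = [4, 28, 52, 64, 315, 410, 308, 243,
            171, 131, 116, 85, 85, 86, 64, 70]
mean_age = [0.5, 3.0, 7.5, 12.5, 17.5, 22.5, 27.5, 32.5,
            37.5, 42.5, 47.5, 52.5, 57.5, 62.5, 67.5, 72.5]
population_under_75 = 12_975_615

total_pyll = sum(n * (75 - a) for n, a in zip(deaths, mean_age))
rate_per_1000 = 1000 * total_pyll / population_under_75

print(total_pyll)               # 93409.0
print(round(rate_per_1000, 1))  # 7.2
```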

Time trend

Time-trend analysis, time series designs


Epidemiology: Time-trend Analysis
Time-trend designs are a form of longitudinal ecological study, and can provide a dynamic view
of a population's health status. Data are collected from a population over time to look for trends
and changes. Like other ecological studies, the data are collected at a population level and can be
used to generate hypotheses for further research, rather than demonstrating causality.
Ecological studies are described elsewhere in these notes, but there are four principal reasons for
carrying out between-group studies:1

To investigate differences between populations


To study group-specific effects, for example of a public health intervention
aimed at a group

Availability of group-level data, such as healthcare utilisation

Cheap and quick if routine data are available

In a time-trend analysis, comparisons are made between groups to help draw conclusions about
the effect of an exposure on different populations. Observations are recorded for each group at
equal time intervals, for example monthly. Examples of measurements include prevalence of
disease, levels of pollution, or mean temperature in a region.
Uses of time-trend analysis
Trends in factors such as rates of disease and death, as well as behaviours such as smoking are
often used by public health professionals to assist in healthcare needs assessments, service
planning, and policy development. Examining data over time also makes it possible to predict
future frequencies and rates of occurrence.
Studies of time trends may focus on any of the following:

Patterns of change in an indicator over time, for example whether usage of a
service has increased or decreased over time, and if it has, how quickly or
slowly the increase or decrease has occurred

Comparing one time period to another time period, for example evaluating
the impact of a smoking cessation programme by comparing smoking rates
before and after the event

Comparing one geographical area or population to another

Making future projections, for example to aid the planning of healthcare
services by estimating likely resource requirements

Analysis of time-trend studies


The most obvious first step in assessing a trend is to plot the observations of interest by year (or
some other time period deemed appropriate). The observations can also be examined in tabular
form. These steps form the basis of subsequent analysis and provide an overview of the general
shape of the trend, help identify any outliers in the data, and allow the researcher to become
familiar with the rates being studied.2
Detailed knowledge of the statistical methods used in analysis is beyond the scope of MFPH Part
A, but methods include:

Chi-square test for linear trend


Regression analysis



Time series analysis
Time series analysis refers to a particular collection of specialised regression methods that use
integrated moving averages and other smoothing techniques to illustrate trends in the data. It
involves a complex process that incorporates information from past observations and past errors
in those observations into the estimation of predicted values.
Moving averages provide a useful way of presenting time series data, highlighting any long-term
trends whilst smoothing out any short-term fluctuations. They are also commonly used to analyse
trends in financial data. The calculation of a simple moving average is sketched below.
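A minimal sketch of a centred moving average and a simple linear trend fit follows; the annual rates are hypothetical and the three-year window is an arbitrary illustration.

```python
import numpy as np

# Hypothetical annual rates (e.g. per 1,000 population) for 2010-2019.
years = np.arange(2010, 2020)
rates = np.array([7.2, 7.5, 7.1, 6.8, 6.9, 6.4, 6.5, 6.1, 6.0, 5.8])

# 3-year centred moving average: one smoothed value per full 3-year window.
window = 3
moving_avg = np.convolve(rates, np.ones(window) / window, mode="valid")
print(moving_avg.round(2))

# Simple linear trend fitted by least squares: the slope is the average
# annual change in the rate (negative here, i.e. a declining trend).
slope, intercept = np.polyfit(years, rates, 1)
print(round(slope, 3))
```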
Presentation of trend data
Presentations of time-trend data should usually include the following:

Graphical plots displaying the observed data over time


Comment on any statistical methods used to transform the data

Report average percent change

An interpretation of the trends seen

Interpretation of trend data


The results of all ecological studies, including time-series designs, should be interpreted with
caution:1

Data on exposure and outcome may be collected in different ways for


different populations
Migration of populations between groups during the study period may dilute
any difference between the groups

Such studies usually rely on routine data sources, which may have been
collected for other purposes

Ecological studies do not allow us to answer questions about individual risks

Meta-analysis; graphs; interpretation


In statistics, meta-analysis comprises statistical methods for contrasting and combining results
from different studies, in the hope of identifying patterns among study results, sources of
disagreement among those results, or other interesting relationships that may come to light in the
context of multiple studies.[1] Meta-analysis can be thought of as "conducting research about
previous research." In its simplest form, meta-analysis is done by identifying a common
statistical measure that is shared between studies, such as effect size or p-value, and calculating a
weighted average of that common measure. This weighting is usually related to the sample sizes
of the individual studies, although it can also include other factors, such as study quality.
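As a minimal illustration of this weighted-average idea, the sketch below pools hypothetical study effect sizes with inverse-variance (fixed-effect) weights; the effect sizes and standard errors are invented for illustration only.

```python
import numpy as np

# Hypothetical per-study effect estimates and their standard errors.
effects = np.array([0.30, 0.45, 0.25, 0.50])
ses     = np.array([0.10, 0.15, 0.08, 0.20])

weights   = 1.0 / ses**2                            # more precise studies get more weight
pooled    = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(round(pooled, 3))     # pooled (fixed-effect) estimate
print(round(pooled_se, 3))  # its standard error, smaller than any single study's
```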
The motivation of a meta-analysis is to aggregate information in order to achieve a higher
statistical power for the measure of interest, as opposed to a less precise measure derived from a
single study. In performing a meta-analysis, an investigator must make choices many of which
can affect its results, including deciding how to search for studies, selecting studies based on a
set of objective criteria, dealing with incomplete data, analyzing the data, and accounting for or
choosing not to account for publication bias. [2]
Meta-analyses are often, but not always, important components of a systematic review
procedure. For instance, a meta-analysis may be conducted on several clinical trials of a medical
treatment, in an effort to obtain a better understanding of how well the treatment works. Here it is
convenient to follow the terminology used by the Cochrane Collaboration,[3] and use "meta-analysis" to refer to statistical methods of combining evidence, leaving other aspects of 'research
synthesis' or 'evidence synthesis', such as combining information from qualitative studies, for the
more general context of systematic reviews.

Advantages
Conceptually, a meta-analysis uses a statistical approach to combine the results from multiple
studies in an effort to increase power (over individual studies), improve estimates of the size of
the effect and/or to resolve uncertainty when reports disagree. Basically, it produces a weighted
average of the included study results and this approach has several advantages:

Results can be generalized to a larger population,


The precision and accuracy of estimates can be improved as more data is used. This, in
turn, may increase the statistical power to detect an effect.

Inconsistency of results across studies can be quantified and analyzed. For instance, does
inconsistency arise from sampling error, or are study results (partially) influenced by
between-study heterogeneity?

Hypothesis testing can be applied on summary estimates,

Moderators can be included to explain variation between studies,

The presence of publication bias can be investigated

Pitfalls
A meta-analysis of several small studies does not predict the results of a single large study.[11]
Some have argued that a weakness of the method is that sources of bias are not controlled by the
method: a good meta-analysis of badly designed studies will still result in bad statistics.[12] This
would mean that only methodologically sound studies should be included in a meta-analysis, a
practice called 'best evidence synthesis'.[12] Other meta-analysts would include weaker studies,
and add a study-level predictor variable that reflects the methodological quality of the studies to
examine the effect of study quality on the effect size.[13] However, others have argued that a
better approach is to preserve information about the variance in the study sample, casting as wide
a net as possible, and that methodological selection criteria introduce unwanted subjectivity,
defeating the purpose of the approach.[14]

Publication bias: the file drawer problem

A funnelplot expected without the file drawer problem. The largest studies converge
on a null result, while smaller studies show more random variability.

A funnelplot expected with the file drawer problem. The largest studies still cluster
around the null result, but the bias against publishing negative studies has caused
the literature as a whole to appear unjustifiably favourable to the hypothesis.

Another potential pitfall is the reliance on the available corpus of published studies, which may
create exaggerated outcomes due to publication bias, as studies which show negative results or
insignificant results are less likely to be published. For example, one may have overlooked
dissertation studies or studies that have never been published. This is not easily solved, as one
cannot know how many studies have gone unreported.[15]
This file drawer problem results in the distribution of effect sizes that are biased, skewed or
completely cut off, creating a serious base rate fallacy, in which the significance of the published
studies is overestimated, as other studies were either not submitted for publication or were
rejected. This should be seriously considered when interpreting the outcomes of a meta-analysis.
[15][16]

The distribution of effect sizes can be visualized with a funnel plot, which is a scatter plot of
sample size and effect size. For a given effect level, the smaller the study, the higher the
probability of finding that effect by chance; at the same time, the higher the effect level, the
lower the probability that a large study produces that positive result by chance. If many negative
studies are not published, the remaining positive studies give rise to a funnel plot in which effect
size is inversely proportional to sample size; in other words, the higher the effect size, the
smaller the sample size. An important part of the apparent effect is then due to chance that is not
balanced out in the plot, because the unpublished negative data are absent. In contrast, when
most studies are published, there is no reason for the effect shown to be biased by study size, so
a symmetric funnel plot results. Thus, if no publication bias is present, one would expect no
relation between sample size and effect size.[17] A negative relation between sample size and
effect size would imply that studies that found significant effects were more likely to be
published and/or submitted for publication. Several procedures are available that attempt to
correct for the file drawer problem once it is identified, such as guessing at the cut-off part of
the distribution of study effects.
Methods for detecting publication bias have been controversial as they typically have low power
for detection of bias, but also may create false positives under some circumstances.[18] For
instance small study effects, wherein methodological differences between smaller and larger
studies exist, may cause differences in effect sizes between studies that resemble publication
bias.[clarification needed] However, small study effects may be just as problematic for the interpretation
of meta-analyses, and the imperative is on meta-analytic authors to investigate potential sources
of bias. A Tandem Method for analyzing publication bias has been suggested for cutting down
false positive error problems.[19] This Tandem method consists of three stages. Firstly, one
calculates Orwin's fail-safe N, to check how many studies should be added in order to reduce the
test statistic to a trivial size. If this number of studies is larger than the number of studies used in
the meta-analysis, it is a sign that there is no publication bias, as in that case, one needs a lot of
studies to reduce the effect size. Secondly, one can do an Egger's regression test, which tests
whether the funnel plot is symmetrical. As mentioned before: a symmetrical funnel plot is a sign
that there is no publication bias, as the effect size and sample size are not dependent. Thirdly, one

can do the trim-and-fill method, which imputes data if the funnel plot is asymmetrical. Important
to note is that these are just a couple of methods that can be used, but several more exist.
Nevertheless, it is suggested that 25% of meta-analyses in the psychological sciences may have
publication bias.[19] However, low power problems likely remain at issue, and estimations of
publication bias may remain lower than the true amount.
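As a rough sketch of the idea behind Egger's regression test mentioned above, the code below regresses the standardized effect on precision for a set of hypothetical studies; a formal test would also compute the standard error and p-value of the intercept, which is omitted here.

```python
import numpy as np

# Hypothetical study effects and standard errors.
effects = np.array([0.42, 0.35, 0.28, 0.55, 0.60, 0.70])
ses     = np.array([0.05, 0.08, 0.10, 0.18, 0.22, 0.30])

# Regress the standardized effect (effect / SE) on precision (1 / SE);
# an intercept far from zero points towards funnel-plot asymmetry.
standardized = effects / ses
precision = 1.0 / ses
slope, intercept = np.polyfit(precision, standardized, 1)

print(round(intercept, 2))  # large absolute values suggest asymmetry
```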
Most discussions of publication bias focus on journal practices favoring publication of
statistically significant findings. However, questionable research practices, such as reworking
statistical models until significance is achieved, may also favor statistically significant findings
in support of researchers' hypotheses.[20][21] Questionable researcher practices aren't necessarily
sample-size dependent, and as such are unlikely to be evident on a funnel plot and may go
undetected by most publication bias detection methods currently in use.
Other weaknesses are Simpson's paradox (two smaller studies may point in one direction, and the
combination study in the opposite direction) and subjectivity in the coding of an effect or
decisions about including or rejecting studies.[22] There are two different ways to measure effect:
correlation or standardized mean difference. The interpretation of effect size is arbitrary, and
there is no universally agreed upon way to weigh the risk. It has not been determined if the
statistically most accurate method for combining results is the fixed, random or quality effects
model.[citation needed]

Agenda-driven bias
The most severe fault in meta-analysis[23] often occurs when the person or persons doing the
meta-analysis have an economic, social, or political agenda such as the passage or defeat of
legislation. People with these types of agendas may be more likely to abuse meta-analysis due to
personal bias. For example, researchers favorable to the author's agenda are likely to have their
studies cherry-picked while those not favorable will be ignored or labeled as "not credible". In
addition, the favored authors may themselves be biased or paid to produce results that support
their overall political, social, or economic goals in ways such as selecting small favorable data
sets and not incorporating larger unfavorable data sets. The influence of such biases on the
results of a meta-analysis is possible because the methodology of meta-analysis is highly
malleable.[22]
A 2011 study done to disclose possible conflicts of interests in underlying research studies used
for medical meta-analyses reviewed 29 meta-analyses and found that conflicts of interests in the
studies underlying the meta-analyses were rarely disclosed. The 29 meta-analyses included 11
from general medicine journals, 15 from specialty medicine journals, and three from the
Cochrane Database of Systematic Reviews. The 29 meta-analyses reviewed a total of 509
randomized controlled trials (RCTs). Of these, 318 RCTs reported funding sources, with 219
(69%) receiving funding from industry[clarification needed]. Of the 509 RCTs, 132 reported author
conflict of interest disclosures, with 91 studies (69%) disclosing one or more authors having
industry financial ties. The information was, however, seldom reflected in the meta-analyses.
Only two (7%) reported RCT funding sources and none reported RCT author-industry ties. The
authors concluded that without acknowledgment of COI due to industry funding or author-industry
financial ties from RCTs included in meta-analyses, readers' understanding and appraisal of the
evidence from the meta-analysis may be compromised.[24]

Steps in a meta-analysis
1. Formulation of the problem
2. Search of literature
3. Selection of studies ('incorporation criteria')

Based on quality criteria, e.g. the requirement of randomization and blinding


in a clinical trial
Selection of specific studies on a well-specified subject, e.g. the treatment of
breast cancer.
Decide whether unpublished studies are included to avoid publication bias
(file drawer problem)

4. Decide which dependent variables or summary measures are allowed. For instance:

Differences (discrete data)


Means (continuous data)

Hedges' g is a popular summary measure for continuous data that is
standardized in order to eliminate scale differences, but it incorporates an
index of variation between groups (a worked sketch is given after this list):

g = (m_t - m_c) / s*

in which m_t is the treatment mean, m_c is the control mean, and s*^2 the
pooled variance.

5. Selection of a meta-regression statistical model: e.g. simple regression, fixed-effect
meta-regression or random-effect meta-regression. Meta-regression is a tool used in meta-analysis to
examine the impact of moderator variables on study effect size using regression-based
techniques. Meta-regression is more effective at this task than are standard regression techniques.
For reporting guidelines, see the Preferred Reporting Items for Systematic Reviews and
Meta-Analyses (PRISMA) statement.
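As a rough illustration of the standardized mean difference named in step 4, the sketch below computes a Hedges'-g-style estimate from hypothetical group summaries; the small-sample correction factor shown is the usual approximation, and all numbers are invented.

```python
import math

# Hypothetical group summaries for one study.
mean_t, sd_t, n_t = 12.0, 4.0, 30   # treatment group
mean_c, sd_c, n_c = 10.0, 5.0, 30   # control group

# Pooled variance and the standardized mean difference.
pooled_var = ((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)
g = (mean_t - mean_c) / math.sqrt(pooled_var)

# Usual approximate small-sample correction factor.
g_corrected = g * (1 - 3 / (4 * (n_t + n_c) - 9))

print(round(g, 2), round(g_corrected, 2))   # about 0.44 either way here
```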

Forest graph

Funnel plot

An example of a funnel plot.

A funnel plot is a graph designed to check for the existence of publication bias in systematic
reviews and meta-analyses. In the absence of publication bias, it assumes that the largest studies
will be plotted near the average, and smaller studies will be spread evenly on both sides of the
average, creating a roughly funnel-shaped distribution. Deviation from this shape can indicate
publication bias.

Quotation
Funnel plots, introduced by Light and Pillemer in 1984[1] and discussed in detail by Egger and
colleagues,[2][3] are useful adjuncts to meta-analyses. A funnel plot is a scatterplot of treatment
effect against a measure of study size. It is used primarily as a visual aid for detecting bias or
systematic heterogeneity. A symmetric inverted funnel shape arises from a well-behaved data
set, in which publication bias is unlikely. An asymmetric funnel indicates a relationship between
treatment effect and study size. This suggests the possibility of either publication bias or a
systematic difference between smaller and larger studies (small study effects). Asymmetry can
also arise from use of an inappropriate effect measure. Whatever the cause, an asymmetric funnel
plot leads to doubts over the appropriateness of a simple meta-analysis and suggests that there
needs to be investigation of possible causes.
A variety of choices of measures of study size is available, including total sample size, standard
error of the treatment effect, and inverse variance of the treatment effect (weight). Sterne and
Egger have compared these with others, and conclude that the standard error is to be
recommended.[3] When the standard error is used, straight lines may be drawn to define a region
within which 95% of points might lie in the absence of both heterogeneity and publication bias.[3]
In common with confidence interval plots, funnel plots are conventionally drawn with the
treatment effect measure on the horizontal axis, so that study size appears on the vertical axis,
breaking with the general rule. Since funnel plots are principally visual aids for detecting
asymmetry along the treatment effect axis, this makes them considerably easier to interpret.
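A minimal sketch of how such a funnel plot can be drawn follows, with the treatment effect on the horizontal axis and the standard error (inverted) on the vertical axis; the study effects, standard errors, and the 95% guide lines around a fixed-effect pooled estimate are all hypothetical illustrations.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical study effects and standard errors.
effects = np.array([0.20, 0.35, 0.10, 0.45, 0.30, 0.55, 0.25, 0.40])
ses     = np.array([0.05, 0.08, 0.10, 0.12, 0.15, 0.20, 0.07, 0.18])

pooled = np.sum(effects / ses**2) / np.sum(1 / ses**2)   # fixed-effect pooled estimate

# 95% guide lines around the pooled estimate, as a function of standard error.
se_grid = np.linspace(0.001, ses.max() * 1.1, 100)
plt.scatter(effects, ses)
plt.plot(pooled - 1.96 * se_grid, se_grid, linestyle="--")
plt.plot(pooled + 1.96 * se_grid, se_grid, linestyle="--")
plt.gca().invert_yaxis()                 # larger (more precise) studies at the top
plt.xlabel("Treatment effect")
plt.ylabel("Standard error")
plt.show()
```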

Criticism
The funnel plot is not without problems. If high precision studies really are different from low precision studies with respect to effect size (e.g., due to different populations examined), a funnel plot may give a wrong impression of publication bias.[4] The appearance of the funnel plot can change quite dramatically depending on the scale on the y-axis: whether it is the inverse square error or the trial size.

META-ANALYSIS IN MEDICAL RESEARCH


The objectives of this paper are to provide an introduction to meta-analysis and to discuss the
rationale for this type of research and other general considerations. Methods used to produce a
rigorous meta-analysis are highlighted and some aspects of presentation and interpretation of
meta-analysis are discussed.
Meta-analysis is a quantitative, formal, epidemiological study design used to systematically
assess previous research studies to derive conclusions about that body of research. Outcomes
from a meta-analysis may include a more precise estimate of the effect of treatment or risk factor
for disease, or other outcomes, than any individual study contributing to the pooled analysis. The
examination of variability or heterogeneity in study results is also a critical outcome. The
benefits of meta-analysis include a consolidated and quantitative review of a large, and often
complex, sometimes apparently conflicting, body of literature. The specification of the outcome
and hypotheses that are tested is critical to the conduct of meta-analyses, as is a sensitive
literature search. A failure to identify the majority of existing studies can lead to erroneous
conclusions; however, there are methods of examining data to identify the potential for studies to
be missing; for example, by the use of funnel plots. Rigorously conducted meta-analyses are
useful tools in evidence-based medicine. The need to integrate findings from many studies
ensures that meta-analytic research is desirable and the large body of research now generated
makes the conduct of this research feasible.

Null result
In science, a null result is a result without the expected content: that is, the proposed result is
absent.[1] It is an experimental outcome which does not show an otherwise expected effect. This
does not imply a result of zero or nothing, simply a result that does not support the hypothesis.
The term is a translation of the scientific Latin nullus resultarum, meaning "no consequence".
In statistical hypothesis testing, a null result occurs when an experimental result is not
significantly different from what is to be expected under the null hypothesis. While some effect
may in fact be observed, its probability (under the null hypothesis) does not exceed the
significance level, i.e., the threshold set prior to testing for rejection of the null hypothesis. The
significance level varies, but is often set at 0.05 (5%).
As an example in physics, the results of the Michelson–Morley experiment were of this type, as it did not detect the expected velocity relative to the postulated luminiferous aether. This experiment's famous failed detection, commonly referred to as the null result, contributed to the development of special relativity. Note that the experiment did in fact appear to measure a non-zero "drift", but the value was far too small to account for the theoretically expected results; it is generally thought to be inside the noise level of the experiment.

Journal of Negative Results in Biomedicine


The Journal of Negative Results in Biomedicine is an open access peer-reviewed medical
journal. It publishes papers that promote a discussion of unexpected, controversial, provocative
and/or negative results in the context of current research. The journal was founded in 2002. As of
February 2014, all referees must agree to sign their reviews, and links to all signed reviews are
provided in the published articles

SYSTEMATIC REVIEW
A systematic review (also systematic literature review or structured
literature review, SLR) is a literature review focused on a research question that
tries to identify, appraise, select and synthesize all high quality research evidence
relevant to that question. Systematic reviews of high-quality randomized controlled
trials are crucial to evidence-based medicine.[1] An understanding of systematic
reviews and how to implement them in practice is becoming mandatory for all
professionals involved in the delivery of health care. Besides health interventions,
systematic reviews may concern clinical tests, public health interventions, social
interventions, adverse effects, and economic evaluations.[2][3] Systematic reviews
are not limited to medicine and are quite common in all other sciences where data
are collected, published in the literature, and an assessment of methodological
quality for a precisely defined subject would be helpful.

Characteristics
A systematic review aims to provide an exhaustive summary of current literature relevant to a
research question. The first step of a systematic review is a thorough search of the literature for
relevant papers. The Methodology section of the review will list the databases and citation
indexes searched, such as Web of Science, Embase, and PubMed, as well as any hand searched
individual journals. Next, the titles and the abstracts of the identified articles are checked against
pre-determined criteria for eligibility and relevance. This list will always depend on the research
problem. Each included study may be assigned an objective assessment of methodological
quality preferably using a method conforming to the Preferred Reporting Items for Systematic
Reviews and Meta-Analyses (PRISMA) statement (the current guideline)[5] or the high quality
standards of Cochrane collaboration.[6]
Systematic reviews often, but not always, use statistical techniques (meta-analysis) to combine
results of the eligible studies, or at least use scoring of the levels of evidence depending on the
methodology used. An additional rater may be consulted to resolve any scoring differences
between raters.[4] Systematic review is often applied in the biomedical or healthcare context, but
it can be applied in any field of research. Groups like the Campbell Collaboration are promoting
the use of systematic reviews in policy-making beyond just healthcare.
A systematic review uses an objective and transparent approach for research synthesis, with the
aim of minimizing bias. While many systematic reviews are based on an explicit quantitative
meta-analysis of available data, there are also qualitative reviews which adhere to the standards
for gathering, analyzing and reporting evidence. The EPPI-Centre has been influential in
developing methods for combining both qualitative and quantitative research in systematic
reviews.[7]
Recent developments in systematic reviews include realist reviews,[8] and the meta-narrative approach.[9][10] These approaches try to overcome the problems of methodological and epistemological heterogeneity in the diverse literatures existing on some subjects. The PRISMA statement[11] suggests a standardized way to ensure a transparent and complete reporting of systematic reviews, and is now required for this kind of research by more than 170 medical journals worldwide.[12]

Cochrane Collaboration
The Cochrane Collaboration is a group of over 31,000 specialists in healthcare who
systematically review randomised trials of the effects of prevention, treatments and rehabilitation
as well as health systems interventions. When appropriate, they also include the results of other
types of research. Cochrane Reviews are published in The Cochrane Database of Systematic
Reviews section of The Cochrane Library. The 2010 impact factor for The Cochrane Database of
Systematic Reviews was 6.186, and it was ranked 10th in the Medicine, General & Internal
category.[13]
The Cochrane Collaboration provides a handbook for systematic reviewers of interventions
which "provides guidance to authors for the preparation of Cochrane Intervention reviews."[14]
The Cochrane Handbook outlines eight general steps for preparing a systematic review:[14]
1. Defining the review question(s) and developing criteria for including studies
2. Searching for studies
3. Selecting studies and collecting data
4. Assessing risk of bias in included studies
5. Analysing data and undertaking meta-analyses
6. Addressing reporting biases
7. Presenting results and "summary of findings" tables
8. Interpreting results and drawing conclusions
The Cochrane Handbook forms the basis of two sets of standards for the conduct and reporting
of Cochrane Intervention Reviews (MECIR - Methodological Expectations of Cochrane
Intervention Reviews)[15]

Strengths and weaknesses


While systematic reviews are regarded as the strongest form of medical evidence, a review of
300 studies found that not all systematic reviews were equally reliable, and that their reporting
can be improved by a universally agreed upon set of standards and guidelines.[16]
A further study by the same group found that of 100 systematic reviews monitored, 7% needed updating at the time of publication, another 4% within a year, and another 11% within 2 years; this figure was higher in rapidly changing fields of medicine, especially cardiovascular medicine.[17]

A 2003 study suggested that extending searches beyond major databases, perhaps into grey
literature, would increase the effectiveness of reviews.[18]
Systematic reviews are increasingly prevalent in other fields, such as international development research.[19] Subsequently, a number of donors, most notably the UK Department for International Development (DFID) and AusAid, are focusing more attention and resources on testing the appropriateness of systematic reviews in assessing the impacts of development and humanitarian interventions.[19]
One concern is that the methods used to conduct a systematic review are sometimes changed once researchers see the available trials they are going to include.[20]

Galbraith plot
In statistics, a Galbraith plot (also known as Galbraith's radial plot or just radial plot), is one
way of displaying several estimates of the same quantity that have different standard errors.

Example of a Galbraith radial plot.

It can be used to examine heterogeneity in a meta-analysis, as an alternative or supplement to a forest plot.
A Galbraith plot is produced by first calculating the standardized estimates or z-statistics by dividing each estimate by its standard error (SE). The Galbraith plot is then a scatter plot of each z-statistic (vertical axis) against 1/SE (horizontal axis). Larger studies (with smaller SE and larger 1/SE) will be observed to aggregate away from the origin.
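The construction just described can be sketched in a few lines; the estimates and standard errors below are invented, and the +/-2 reference lines are only a rough guide, not the exact bands used in published radial plots.

# Minimal Galbraith (radial) plot sketch based on the construction described
# above: z-statistic (estimate / SE) on the vertical axis against precision
# (1 / SE) on the horizontal axis. The study data are invented.
import numpy as np
import matplotlib.pyplot as plt

estimates = np.array([0.20, 0.35, 0.10, 0.50, 0.28])  # hypothetical effect estimates
ses = np.array([0.10, 0.05, 0.20, 0.25, 0.08])        # hypothetical standard errors

precision = 1.0 / ses
z_stats = estimates / ses

plt.scatter(precision, z_stats)
plt.axhline(2, linestyle="--", color="grey")   # rough +/- 2 reference band
plt.axhline(-2, linestyle="--", color="grey")
plt.xlabel("1 / SE (precision)")
plt.ylabel("z-statistic (estimate / SE)")
plt.title("Galbraith radial plot (sketch)")
plt.show()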

Surveillance
Surveillance is the monitoring of the behavior, activities, or other changing information, usually
of people for the purpose of influencing, managing, directing, or protecting them.[2] This can
include observation from a distance by means of electronic equipment (such as CCTV cameras),
or interception of electronically transmitted information (such as Internet traffic or phone calls);
and it can include simple, relatively no- or low-technology methods such as human intelligence
agents and postal interception. The word surveillance comes from a French phrase for "watching
over" ("sur" means "from above" and "veiller" means "to watch"), and is in contrast to more
recent developments such as sousveillance.
Surveillance is very useful to governments and law enforcement to maintain social control,
recognize and monitor threats, and prevent/investigate criminal activity. With the advent of
programs such as the Total Information Awareness program and ADVISE, technologies such as
high speed surveillance computers and biometrics software, and laws such as the
Communications Assistance for Law Enforcement Act, governments now possess an
unprecedented ability to monitor the activities of their subjects.[6]
However, many civil rights and privacy groups, such as the Electronic Frontier Foundation and
American Civil Liberties Union, have expressed concern that by allowing continual increases in
government surveillance of citizens we will end up in a mass surveillance society, with
extremely limited, or non-existent political and/or personal freedoms. Fears such as this have led
to numerous lawsuits such as Hepting v. AT&T

GDP
Gross domestic product (GDP) is defined by OECD as "an aggregate measure of
production equal to the sum of the gross values added of all resident institutional
units engaged in production (plus any taxes, and minus any subsidies, on products
not included in the value of their outputs)."

GDP estimates are commonly used to measure the economic performance of a whole country or
region, but can also measure the relative contribution of an industry sector. This is possible
because GDP is a measure of 'value added' rather than sales; it adds each firm's value added (the
value of its output minus the value of goods that are used up in producing it). For example, a
firm buys steel and adds value to it by producing a car; double counting would occur if GDP
added together the value of the steel and the value of the car.[3] Because it is based on value
added, GDP also increases when an enterprise reduces its use of materials or other resources
('intermediate consumption') to produce the same output.
The more familiar use of GDP estimates is to calculate the growth of the economy from year to
year (and recently from quarter to quarter). The pattern of GDP growth is held to indicate the
success or failure of economic policy and to determine whether an economy is 'in recession'.


History

The concept of GDP was first developed by Simon Kuznets for a US Congress report in 1934.[4]
In this report, Kuznets warned against its use as a measure of welfare (see below under
limitations and criticisms). After the Bretton Woods conference in 1944, GDP became the main
tool for measuring a country's economy.[5] At that time Gross National Product (GNP) was the
preferred estimate, which differed from GDP in that it measured production by a country's
citizens at home and abroad rather than its 'resident institutional units' (see OECD definition
above). The switch to GDP came in the 1990s.
The history of the concept of GDP should be distinguished from the history of changes in ways
of estimating it. The value added by firms is relatively easy to calculate from their accounts, but
the value added by the public sector, by financial industries, and by intangible asset creation is

more complex. These activities are increasingly important in developed economies, and the
international conventions governing their estimation and their inclusion or exclusion in GDP
regularly change in an attempt to keep up with industrial advances. In the words of one academic
economist "The actual number for GDP is therefore the product of a vast patchwork of statistics
and a complicated set of processes carried out on the raw data to fit them to the conceptual
framework."[6]
Angus Maddison calculated historical GDP figures going back to 1830 and before.

GDP CAN BE DETERMINED


GDP can be determined in three ways, all of which should, in principle, give the same result.
They are the production (or output or value added) approach, the income approach, or the
expenditure approach.
The most direct of the three is the production approach, which sums the outputs of every class of
enterprise to arrive at the total. The expenditure approach works on the principle that all of the
product must be bought by somebody, therefore the value of the total product must be equal to
people's total expenditures in buying things. The income approach works on the principle that the
incomes of the productive factors ("producers," colloquially) must be equal to the value of their
product, and determines GDP by finding the sum of all producers' incomes.[7]

Production approach
This approach mirrors the OECD definition given above.
1. Estimate the gross value of domestic output out of the many various
economic activities;
2. Determine the intermediate consumption, i.e., the cost of material, supplies
and services used to produce final goods or services.
3. Deduct intermediate consumption from gross value to obtain the gross value
added.

Gross value added = gross value of output - value of intermediate consumption.


Value of output = value of the total sales of goods and services plus value of changes in the
inventories.
The sum of the gross value added in the various economic activities is known as "GDP at factor
cost".
GDP at factor cost plus indirect taxes less subsidies on products = "GDP at producer price".

For measuring output of domestic product, economic activities (i.e. industries) are classified into
various sectors. After classifying economic activities, the output of each sector is calculated by
any of the following two methods:
1. By multiplying the output of each sector by their respective market price and
adding them together
2. By collecting data on gross sales and inventories from the records of
companies and adding them together

The gross value of all sectors is then added to get the gross value added (GVA) at factor cost.
Subtracting each sector's intermediate consumption from gross output gives the GDP at factor
cost. Adding indirect tax minus subsidies in GDP at factor cost gives the "GDP at producer
prices".

Income approach
The second way of estimating GDP is to use "the sum of primary incomes distributed by resident
producer units".[2]
If GDP is calculated this way it is sometimes called gross domestic income (GDI), or GDP (I).
GDI should provide the same amount as the expenditure method described later. (By definition,
GDI = GDP. In practice, however, measurement errors will make the two figures slightly off
when reported by national statistical agencies.)
This method measures GDP by adding incomes that firms pay households for factors of
production they hire - wages for labour, interest for capital, rent for land and profits for
entrepreneurship.
The US "National Income and Expenditure Accounts" divide incomes into five categories:
1. Wages, salaries, and supplementary labour income
2. Corporate profits
3. Interest and miscellaneous investment income
4. Farmers' incomes
5. Income from non-farm unincorporated businesses

These five income components sum to net domestic income at factor cost.
Two adjustments must be made to get GDP:
1. Indirect taxes minus subsidies are added to get from factor cost to market
prices.
2. Depreciation (or capital consumption allowance) is added to get from net
domestic product to gross domestic product.

Total income can be subdivided according to various schemes, leading to various formulae for
GDP measured by the income approach. A common one is:
Nominal GDP, income approach:[2]
GDP = compensation of employees + gross operating surplus + gross mixed income + taxes less subsidies on production and imports
GDP = COE + GOS + GMI + T(P&M) - S(P&M)

Compensation of employees (COE) measures the total remuneration to


employees for work done. It includes wages and salaries, as well as employer
contributions to social security and other such programs.
Gross operating surplus (GOS) is the surplus due to owners of
incorporated businesses. Often called profits, although only a subset of total
costs are subtracted from gross output to calculate GOS.
Gross mixed income (GMI) is the same measure as GOS, but for
unincorporated businesses. This often includes most small businesses.

The sum of COE, GOS and GMI is called total factor income; it is the income of all of the
factors of production in society. It measures the value of GDP at factor (basic) prices. The
difference between basic prices and final prices (those used in the expenditure calculation) is the
total taxes and subsidies that the government has levied or paid on that production. So adding
taxes less subsidies on production and imports converts GDP at factor cost to GDP(I).
Total factor income is also sometimes expressed as:
Total factor income = employee compensation + corporate profits +
proprietor's income + rental income + net interest [9]

Yet another formula for GDP by the income method is:[citation needed]
GDP = R + I + P + SA + W
where R : rents
I : interests
P : profits
SA : statistical adjustments (corporate income taxes, dividends, undistributed corporate profits)
W : wages.
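The same logic can be illustrated numerically. The figures below are invented; they only show how the five income categories and the two adjustments described above combine.

# Illustrative sketch of the income approach described above: the five income
# categories sum to net domestic income at factor cost, then two adjustments
# (indirect taxes less subsidies, and depreciation) give GDP. Numbers invented.
incomes = {
    "wages_salaries_supplementary": 600.0,
    "corporate_profits": 150.0,
    "interest_and_investment_income": 80.0,
    "farmers_incomes": 30.0,
    "non_farm_unincorporated": 60.0,
}

net_domestic_income_factor_cost = sum(incomes.values())   # 920.0

indirect_taxes_less_subsidies = 40.0   # factor cost -> market prices
depreciation = 90.0                    # net -> gross domestic product

gdp_income_approach = (net_domestic_income_factor_cost
                       + indirect_taxes_less_subsidies
                       + depreciation)

print("GDP (income approach):", gdp_income_approach)  # 1050.0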

Expenditure approach
The third way to estimate GDP is to calculate the sum of the final uses of goods and services (all
uses except intermediate consumption) measured in purchasers' prices.[2]

In economics, most things produced are produced for sale and then sold. Therefore, measuring
the total expenditure of money used to buy things is a way of measuring production. This is
known as the expenditure method of calculating GDP. Note that if you knit yourself a sweater, it
is production but does not get counted as GDP because it is never sold. Sweater-knitting is a
small part of the economy, but if one counts some major activities such as child-rearing
(generally unpaid) as production, GDP ceases to be an accurate indicator of production.
Similarly, if there is a long term shift from non-market provision of services (for example
cooking, cleaning, child rearing, do-it yourself repairs) to market provision of services, then this
trend toward increased market provision of services may mask a dramatic decrease in actual
domestic production, resulting in overly optimistic and inflated reported GDP. This is
particularly a problem for economies which have shifted from production economies to service
economies.

Components of GDP by expenditure


GDP (Y) is the sum of consumption (C), investment (I), government spending (G) and net exports (X - M).
Y = C + I + G + (X - M)

Here is a description of each GDP component:

C (consumption) is normally the largest GDP component in the economy, consisting of private expenditures in the economy (household final consumption expenditure). These personal expenditures fall under one of the following categories: durable goods, non-durable goods, and services. Examples include food, rent, jewelry, gasoline, and medical expenses, but not the purchase of new housing.

I (investment) includes, for instance, business investment in equipment, but does not include exchanges of existing assets. Examples include construction of a new mine, purchase of software, or purchase of machinery and equipment for a factory. Spending by households (not government) on new houses is also included in investment. In contrast to its colloquial meaning, "investment" in GDP does not mean purchases of financial products. Buying financial products is classed as 'saving', as opposed to investment. This avoids double-counting: if one buys shares in a company, and the company uses the money received to buy plant, equipment, etc., the amount will be counted toward GDP when the company spends the money on those things; to also count it when one gives it to the company would be to count two times an amount that only corresponds to one group of products. Buying bonds or stocks is a swapping of deeds, a transfer of claims on future production, not directly an expenditure on products.

G (government spending) is the sum of government expenditures on final goods and services. It includes salaries of public servants, purchases of weapons for the military and any investment expenditure by a government. It does not include any transfer payments, such as social security or unemployment benefits.

X (exports) represents gross exports. GDP captures the amount a country produces, including goods and services produced for other nations' consumption; therefore exports are added.

M (imports) represents gross imports. Imports are subtracted since imported goods will be included in the terms G, I, or C, and must be deducted to avoid counting foreign supply as domestic.

A fully equivalent definition is that GDP (Y) is the sum of final consumption expenditure (FCE), gross capital formation (GCF), and net exports (X - M).
Y = FCE + GCF + (X - M)

FCE can then be further broken down by three sectors (households, governments and non-profit
institutions serving households) and GCF by five sectors (non-financial corporations, financial
corporations, households, governments and non-profit institutions serving households). The
advantage of this second definition is that expenditure is systematically broken down, firstly, by
type of final use (final consumption or capital formation) and, secondly, by sectors making the
expenditure, whereas the first definition partly follows a mixed delimitation concept by type of
final use and sector.
Note that C, G, and I are expenditures on final goods and services; expenditures on intermediate
goods and services do not count. (Intermediate goods and services are those used by businesses
to produce other goods and services within the accounting year.[10] )
According to the U.S. Bureau of Economic Analysis, which is responsible for calculating the national accounts in the United States, "In general, the source data for the expenditures components are considered more reliable than those for the income components" [see the income approach, above].
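A small numerical sketch of the expenditure approach, together with the FCE/GCF decomposition given above, might look as follows. All figures are invented, and treating all of G as final consumption expenditure is a simplification (in practice government investment belongs to gross capital formation).

# Minimal sketch of the expenditure approach described above, with the
# equivalent FCE/GCF breakdown as a cross-check. All figures are invented.
C = 650.0    # household final consumption expenditure
I = 200.0    # gross investment (business equipment, new housing, ...)
G = 180.0    # government spending on final goods and services
X = 120.0    # gross exports
M = 100.0    # gross imports

gdp_expenditure = C + I + G + (X - M)

# Equivalent decomposition: final consumption expenditure (households plus
# government, simplified) plus gross capital formation plus net exports.
FCE = C + G
GCF = I
gdp_alternative = FCE + GCF + (X - M)

print(gdp_expenditure, gdp_alternative)   # both 1050.0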
GROSS NATIONAL INCOME (GNI)
The Gross national income (GNI) is the total domestic and foreign output claimed by residents of a country, consisting of gross domestic product (GDP) plus factor incomes earned from abroad by residents, minus income earned in the domestic economy by non-residents.

Gross national product

Gross national product (GNP) is the market value of all the products and services produced in
one year by labour and property supplied by the citizens of a country. Unlike Gross Domestic
Product (GDP), which defines production based on the geographical location of production, GNP
allocates production based on location of ownership.
GNP does not distinguish between qualitative improvements in the state of the technical arts
(e.g., increasing computer processing speeds), and quantitative increases in goods (e.g., number
of computers produced), and considers both to be forms of "economic growth".[1]
When a country's capital or labour resources are employed outside its borders, or
when a foreign firm is operating in its territory, GDP and GNP can produce different
measures of total output. In 2009 for instance, the United States estimated its GDP
at $14.119 trillion, and its GNP at $14.265 trillion.
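Rearranging the relationship implied above (GNP equals GDP plus net factor income from abroad), the 2009 US figures imply a net factor income from abroad of roughly $0.146 trillion:

\[
\text{GNP} = \text{GDP} + \text{NFIA}
\quad\Rightarrow\quad
\text{NFIA} = 14.265 - 14.119 \approx \$0.146\ \text{trillion}.
\]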

Meta analysis
In statistics, a meta-analysis refers to methods that focus on contrasting and combining
results from different studies, in the hope of identifying patterns among study results,
sources of disagreement among those results, or other interesting relationships that
may come to light in the context of multiple studies. [1] In its simplest form, meta-analysis
is normally done by identification of a common measure of effect size. A weighted
average of that common measure is the output of a meta-analysis. The weighting is
related to sample sizes within the individual studies. More generally there are other
differences between the studies that need to be allowed for, but the general aim of a
meta-analysis is to more powerfully estimate the true effect size as opposed to a less
precise effect size derived in a single study under a given single set of assumptions and
conditions. A meta-analysis therefore gives a thorough summary of several studies that
have been done on the same topic, and provides the reader with extensive information
on whether an effect exists and what size that effect has.
Meta analysis can be thought of as "conducting research about research."
Meta-analyses are often, but not always, important components of a systematic
review procedure. For instance, a meta-analysis may be conducted on several clinical
trials of a medical treatment, in an effort to obtain a better understanding of how well the
treatment works. Here it is convenient to follow the terminology used by the Cochrane
Collaboration,[2] and use "meta-analysis" to refer to statistical methods of combining
evidence, leaving other aspects of 'research synthesis' or 'evidence synthesis', such as
combining information from qualitative studies, for the more general context
of systematic reviews.
Meta-analysis forms part of a framework called estimation statistics which relies
on effect sizes, confidence intervals and precision planning to guide data analysis, and
is an alternative to null hypothesis significance testing.

Advantages of meta analysis


Conceptually, a meta-analysis uses a statistical approach to combine the results from
multiple studies in an effort to increase power (over individual studies), improve
estimates of the size of the effect and/or to resolve uncertainty when reports disagree.

Basically, it produces a weighted average of the included study results and this
approach has several advantages:

Results can be generalized to a larger population,

The precision and accuracy of estimates can be improved as more data is used. This, in turn, may increase the statistical power to detect an effect.

Inconsistency of results across studies can be quantified and analyzed. For instance, does inconsistency arise from sampling error, or are study results (partially) influenced by between-study heterogeneity?

Hypothesis testing can be applied on summary estimates,

Moderators can be included to explain variation between studies,

The presence of publication bias can be investigated.

Pitfalls
A meta-analysis of several small studies does not predict the results of a single large study.[9] Some have argued that a weakness of the method is that sources of bias are not controlled by the method: a good meta-analysis of badly designed studies will still result in bad statistics.[10] This would mean that only methodologically sound studies should be included in a meta-analysis, a practice called 'best evidence synthesis'.[10] Other meta-analysts would include weaker studies, and add a study-level predictor variable that reflects the methodological quality of the studies to examine the effect of study quality on the effect size.[11] However, others have argued that a better approach is to preserve information about the variance in the study sample, casting as wide a net as possible, and that methodological selection criteria introduce unwanted subjectivity, defeating the purpose of the approach.[12]

Steps of meta analysis

1. Formulation of the problem

2. Search of literature

3. Selection of studies ('incorporation criteria')

Based on quality criteria, e.g. the requirement of randomization and blinding in a clinical trial

Selection of specific studies on a well-specified subject, e.g. the treatment of breast cancer

Decide whether unpublished studies are included to avoid publication bias (file drawer problem)

4. Decide which dependent variables or summary measures are allowed. For instance:

Differences (discrete data)

Means (continuous data)

Hedges' g is a popular summary measure for continuous data that is standardized in order to eliminate scale differences, but it incorporates an index of variation between groups (see the sketch after this list):

g = (x̄t - x̄c) / s

in which x̄t is the treatment mean, x̄c is the control mean, and s² the pooled variance.

5. Selection of a meta-regression statistical model, e.g. simple regression, fixed-effect meta-regression or random-effect meta-regression. Meta-regression is a tool used in meta-analysis to examine the impact of moderator variables on study effect size using regression-based techniques. Meta-regression is more effective at this task than are standard regression techniques.
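As referenced in step 4, here is a minimal sketch of computing Hedges' g from raw group data. The scores are invented, and the small-sample correction factor J = 1 - 3/(4·df - 1) is a standard refinement that the text above does not spell out.

# Minimal sketch of computing Hedges' g from two groups of raw scores.
# The data are invented; the small-sample correction factor is the usual
# J = 1 - 3/(4*df - 1) refinement, not spelled out in the text above.
import math
from statistics import mean, variance

def hedges_g(treatment, control):
    n_t, n_c = len(treatment), len(control)
    # Pooled standard deviation (square root of the pooled variance)
    pooled_var = (((n_t - 1) * variance(treatment) +
                   (n_c - 1) * variance(control)) / (n_t + n_c - 2))
    s = math.sqrt(pooled_var)
    g = (mean(treatment) - mean(control)) / s
    # Small-sample bias correction
    j = 1 - 3 / (4 * (n_t + n_c - 2) - 1)
    return j * g

if __name__ == "__main__":
    treatment = [5.1, 6.2, 5.8, 6.0, 5.5]   # hypothetical treatment scores
    control = [4.8, 5.0, 5.2, 4.9, 5.3]     # hypothetical control scores
    print(round(hedges_g(treatment, control), 3))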

Meta-analysis combines the results of several studies.

What is meta-analysis?
Meta-analysis is the use of statistical methods to combine results
of individual studies. This allows us to make the best use of all the
information we have gathered in our systematic review by increasing
the power of the analysis. By statistically combining the results of
similar studies we can improve the precision of our estimates of
treatment effect, and assess whether treatment effects are similar in
similar situations. The decision about whether or not the results of individual studies are similar enough to be combined in a meta-analysis is essential to the validity of the result, and will be covered in the next module on heterogeneity. In this module we will look at the process of combining studies and outline the various methods available.
There are many approaches to meta-analysis. We have discussed
already that meta-analysis is not simply a matter of adding up
numbers of participants across studies (although unfortunately some
non-Cochrane reviews do this). This is the 'pooling participants' or
'treat-as-one-trial' method and we will discuss it in a little more
detail now.

Pooling participants (not a valid approach to meta-analysis).


This method effectively considers the participants in all the studies
as if they were part of one big study. Suppose the studies are
randomised controlled trials: we could look at everyone who
received the experimental intervention by adding up the
experimental group events and sample sizes and compare them with
everyone who received the control intervention. This is a tempting
way to 'pool results', but let's demonstrate how it can produce the
wrong answer.
A Cochrane review of trials of daycare for pre-school children
included the following two trials. For this example we will focus on
the outcome of whether a child was retained in the same class after
a period in either a daycare treatment group or a non-daycare
control group. In the first trial (Gray 1970), the risk difference is
-0.16, so daycare looks promising:
Gray 1970    Retained    Total    Risk     Risk difference
Daycare      19          36       0.528    -0.16
Control      13          19       0.684

In the second trial (Schweinhart 1993) the absolute risk of being retained in the same class is considerably lower, but the risk difference, while small, still lies on the side of a benefit of daycare:

Schweinhart 1993    Retained    Total    Risk      Risk difference
Daycare             6           58       0.1034    -0.004
Control             7           65       0.1077

What would happen if we pooled all the children as if they were part
of a single trial?
Pooled results    Retained    Total    Risk     Risk difference
Daycare           25          94       0.266    +0.03  WRONG!
Control           20          84       0.238

We don't add up patients across trials.

It suddenly looks as if daycare may be harmful: the risk difference is now bigger than 0. This is called Simpson's paradox (or bias), and is why we don't pool participants directly across studies. The first rule of meta-analysis is to keep participants within each study grouped together, so as to preserve the effects of randomisation and compare like with like. Therefore, we must take the comparison of risks within each of the two trials and somehow combine these. In practice, this means we need to calculate a single measure of treatment effect from each study before contemplating meta-analysis. For example, for a dichotomous outcome (like being retained in the same class) we calculate a risk ratio, the risk difference or the odds ratio for each study separately, then pool these estimates of effect across the studies.

Simple average of treatment effects (not used in Cochrane reviews)


We don't use simple averages to calculate a meta-analysis.

If we obtain a treatment effect separately from each study, what do we do with them in the meta-analysis? How about taking the average? The average of the risk differences in the two trials above is (-0.004 - 0.16) / 2 = -0.082. This may seem fair at first, but the second trial randomised more than twice as many children as the first, so the contribution of each randomised child in the first trial is diminished. It is not uncommon for a meta-analysis to contain trials of vastly different sizes. To give each one the same influence cannot be reasonable. So we need a better method than a simple average.
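One such better method is to weight each trial's effect estimate by the inverse of its variance, so that larger, more precise trials count for more. The sketch below applies this generic fixed-effect calculation to the two risk differences above; it is not necessarily the exact method a Cochrane review would use, and the Schweinhart retained counts (6 and 7) are the values implied by the pooled table.

# Sketch of weighting the two daycare risk differences by inverse variance,
# instead of a simple average. This is a generic fixed-effect calculation,
# not necessarily the exact method a Cochrane review would apply.
import math

def risk_difference(events_t, n_t, events_c, n_c):
    p_t, p_c = events_t / n_t, events_c / n_c
    rd = p_t - p_c
    var = p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c
    return rd, var

# Gray 1970: 19/36 daycare vs 13/19 control
rd1, v1 = risk_difference(19, 36, 13, 19)
# Schweinhart 1993: 6/58 daycare vs 7/65 control (counts implied by the pooled table)
rd2, v2 = risk_difference(6, 58, 7, 65)

w1, w2 = 1 / v1, 1 / v2
pooled_rd = (w1 * rd1 + w2 * rd2) / (w1 + w2)
pooled_se = math.sqrt(1 / (w1 + w2))

print(round(rd1, 3), round(rd2, 3))        # approx -0.156 and -0.004
print(round(pooled_rd, 3), round(pooled_se, 3))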

Definition:

What is a meta-analysis? A meta-analysis is a type of research study in which the researcher compiles numerous previously published studies on a particular research question and re-analyzes the results to find the general trend for results across the studies. A meta-analysis is a useful tool because it can help overcome the problem of small sample sizes in the original studies, and can help identify trends in an area of the research literature that may not be evident by merely reading the published studies.

Graphs
Economic growth
Definition of 'Economic Growth'
An increase in the capacity of an economy to produce goods and services,
compared from one period of time to another. Economic growth can be
measured in nominal terms, which include inflation, or in real terms,
which are adjusted for inflation. For comparing one country's economic
growth to another, GDP or GNP per capita should be used as these take
into account population differences between countries.

Increase in a country's productive capacity, as measured by comparing gross national product (GNP) in a year with the GNP in the previous year.
Increase in the capital stock, advances in technology, and improvement in the quality and level of literacy are considered to be the principal causes of economic growth. In recent years, the idea of sustainable development has brought in additional factors such as environmentally sound processes that must be taken into account in growing an economy.

Economic growth is the increase in the market value of the goods and services
produced by an economy over time. It is conventionally measured as the percent rate of
increase in real gross domestic product, or real GDP.[1] Of more importance is the
growth of the ratio of GDP to population (GDP per capita), which is also called per
capita income. An increase in per capita income is referred to as intensive growth. GDP
growth caused only by increases in population or territory is called extensive growth.[2]
Growth is usually calculated in real terms, i.e., inflation-adjusted terms, to eliminate the distorting effect of inflation on the price of goods produced. In economics, "economic growth" or "economic growth theory" typically refers to growth of potential output, i.e., production at "full employment".
As an area of study, economic growth is generally distinguished from development economics. The former is primarily the study of how countries can advance their economies. The latter is the study of the economic aspects of the development process in low-income countries. See also Economic development.
Since economic growth is measured as the annual percent change of gross domestic product (GDP), it has all the advantages and drawbacks of that measure. For example, GDP only measures the market economy, which tends to overstate growth during the changeover from a farming economy with household production.[3] An adjustment was made for food grown on and consumed on farms, but no correction was made for other household production. Also, there is no allowance in GDP calculations for depletion of natural resources.
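As a worked illustration of the nominal/real distinction above (with invented numbers): if nominal GDP grows by 7.1% while prices rise by 4%, real growth is approximately

\[
g_{\text{real}} = \frac{1 + g_{\text{nominal}}}{1 + \pi} - 1
= \frac{1.071}{1.04} - 1 \approx 0.030 = 3.0\%.
\]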

Pros
1. Quality of life

Cons
1. Resource depletion
2. Environmental impact
3. Global warming

Inflation graphs

Growth rate decreases vs inflation increases?

Inflation and Economic Growth


David Henderson explains:

The idea that an increase in economic growth leads to an increase in inflation and that decreased growth reduces inflation is reflected
endlessly in the media. On April 28, for example, AP writer Rajesh Mahapatra
claimed that high economic growth of more than 8.5% annually in India
since 2003 has spurred demand and caused prices to rise. This makes no
sense.
All other things being equal, an increase in economic growth must cause
inflation to drop, and a reduction in growth must cause inflation to rise. In his
congressional testimony yesterday, Federal Reserve chairman Ben Bernanke
thankfully did not state that the higher economic growth he expects will lead
to higher inflation. Although he didn't connect growth and inflation at all, Mr.
Bernanke has long understood that higher growth leads to lower inflation.
Here's why. Inflation, as the old saying goes, is caused by too much money
chasing too few goods. Just as more money means higher prices, fewer
goods also mean higher prices. The connection between the level of
production and the level of prices also holds for the rate of change of
production (that is, the rate of economic growth) and the rate of change of
prices (that is, the inflation rate).
Some simple arithmetic will clarify. Start with the famous equation of
exchange, MV = Py, where M is the money supply; V is the velocity of money,
that is, the speed at which money circulates; P is the price level; and y is
the real output of the economy (real GDP.) A version of this equation,
incidentally, was on the license plate of the late economist Milton Friedman,

who made a large part of his academic reputation by reviving, and giving
evidence for, the role of money growth in causing inflation.
If the growth rate of real GDP increases and the growth rates of M and V are
held constant, the growth rate of the price level must fall. But the growth
rate of the price level is just another term for the inflation rate; therefore,
inflation must fall. An increase in the rate of economic growth means more
goods for money to chase, which puts downward pressure on the inflation
rate. If for example the money supply grows at 7% a year and velocity is
constant and if annual economic growth is 3%, inflation must be 4% (more
exactly, 3.9%). If, however, economic growth rises to 4%, inflation falls to 3%
(actually, 2.9%.)
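Henderson's arithmetic can be restated in growth-rate form of the equation of exchange MV = Py:

\[
\frac{\Delta M}{M} + \frac{\Delta V}{V} \approx \frac{\Delta P}{P} + \frac{\Delta y}{y}
\quad\Rightarrow\quad
\pi \approx 7\% + 0\% - 3\% = 4\%
\quad\left(\text{exactly } \tfrac{1.07}{1.03} - 1 \approx 3.9\%\right).
\]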

The April numbers for the index of industrial production (IIP), released on Thursday,
brought some cheer on the growth front. The IIP grew by 3.4 per cent, its highest in
a long time. April, of course, was a month in which the entire country was deep in
electioneering. Therefore, some sort of stimulus from all the campaign spending
might have been reasonable to expect. The biggest beneficiary of this was the
category of "electrical machinery", which grew by over 66 per cent year on year,
reflecting all those campaign rallies, with their generators and audio equipment.
The other significant contributor to the growth in the overall index was electricity,
which grew by almost 12 per cent year on year, significantly higher than its growth
during 2013-14. Typically, a growth acceleration that relies heavily on one or two
sectoral surges does not have much staying power. It would require an across-the-board show of resurgence to allow people to conclude that a sustainable recovery
was under way. That is clearly not happening yet. However, these numbers do
reinforce the perception that things are not getting worse as far as growth is
concerned.
Likewise, there was some room for relief on the inflation front. The consumer price
index, or CPI, numbers for May 2014 showed headline inflation declining slightly,
from 8.6 per cent in April to 8.3 per cent in May. The Central Statistical Office is now
separately reporting a sub-index labelled consumer food price index, or CFPI, which
provides some convenience to observers. The index itself, though, offers little cheer.
It came down modestly between April and May, largely explaining the decline in the
headline rate, but is still significantly above nine per cent. At a time when there are
concerns about the performance of the monsoon and the impact of that on food

prices, these numbers should be a major cause of worry for the government. Milk,
eggs, fish and meat, vegetables and fruit contributed to the persistence of food
inflation. But cereals are also kicking in, as they have been for the past couple of
years, and the government must use its large stocks of rice and wheat quickly to
dampen at least this source of food inflation. It would be unconscionable not to do
so when risks of a resurgence of inflation are high. The larger point on inflation,
though, is how stubborn the rate is despite sluggish growth and high interest rates.
The limitations of monetary policy are being repeatedly underscored.
Against this backdrop, the government's prioritisation of its fight against inflation is
an extremely important development. It has to move quickly from intent to action
on a variety of reforms, from procurement policy to subsidies and to investment in
rural infrastructure. Many of these will generate benefits only over the medium
term. So those expecting a growth stimulus from the Reserve Bank of India any time
soon are bound to be in for a disappointment. Even so, room for optimism should
come from the fact that this government does have the capacity to design and
execute long-term strategies with complete credibility. The simple equation that it
needs to keep in mind is that inflation will not subside unless food prices moderate
and growth will not recover unless inflation subsides.

Which study design is good

The Best Study Design... For Dummies
When I had those tired looks again, my mother in law recommended coenzyme Q, which research had proven to have wondrous effects on tiredness. Indeed many sites and magazines advocate this natural energy producing nutrient which mobilizes your mitochondria for cellular energy! Another time she asked me if I thought komkommerslank (cucumber pills for slimming) would work to lose some extra weight. She took my NO for granted.
It is often difficult to explain people that not all research is equally good, and that outcomes
are not always equally significant (both statistically and clinically). It is even more difficult
to understand levels of evidence and why we should even care. Pharmaceutical
Industries (especially the supplement-selling ones) take advantage of this ignorance and are
very successful in selling their stories and pills.
If properly conducted, the Randomized Controlled Trial (RCT) is the best study-design to
examine the clinical efficacy of health interventions. An RCT is an experimental study where
individuals who are similar at the beginning are randomly allocated to two or more
treatment groups and the outcomes of the groups are compared after sufficient follow-up
time. However an RCT may not always be feasible, because it may not be ethical or
desirable to randomize people or to expose them to certain interventions.

Observational studies provide weaker empirical evidence, because the allocation of factors is not under control of the investigator, but just happens or is chosen (e.g. smoking). Of the observational studies, cohort studies provide stronger evidence than case-control studies, because in cohort studies factors are measured before the outcome, whereas in case-control studies factors are measured after the outcome.
Most people find such a description of study types and levels of evidence too theoretical and
not appealing.
Last year I was challenged to tell about how doctors search medical information (central theme = Google) for, and here it comes, the Society of History and ICT.
To explain to the audience why it is important for clinicians to find the best evidence and how methodological filters can be used to sift through the overwhelming amount of information in, for instance, PubMed, I had to introduce RCTs and the levels of evidence. To explain it to them I used an example that struck me when I first read about it.
I showed them the following slide:

And clarified: Beta-carotene is a vitamin in carrots and many other vegetables, but you can also buy it in pure form as pills. There is reason to believe that beta-carotene might help to prevent lung cancer in cigarette smokers. How do you think you can find out whether beta-carotene will have this effect?

Suppose you have two neighbors, both heavy smokers of the same age, both males. The neighbor who doesn't eat many vegetables gets lung cancer, but the neighbor who eats a lot of vegetables and is fond of carrots doesn't. Do you think this provides good evidence that beta-carotene prevents lung cancer?
There is laughter in the room, so they don't believe in n=1 experiments/case reports. (Still, how many people don't think smoking necessarily does any harm because their chain-smoking father reached his nineties in good health.)
I show them the following slide with the lowest box only.

O.k. What about this study? I've a group of lung cancer patients, who smoke(d) heavily. I ask them to fill in a questionnaire about their eating habits in the past and take a blood sample, and I do the same with a similar group of smokers without cancer (controls). Analysis shows that smokers developing lung cancer eat much less beta-carotene-containing vegetables and have lower blood levels of beta-carotene than the smokers not developing cancer. Does this mean that beta-carotene is preventing lung cancer?
Humming in the audience, till one man says: perhaps some people don't remember exactly what they eat, and then several people object that it is just an association and you do not yet know whether beta-carotene really causes this. Right! I show the box patient-control (case-control) studies.

Then consider this study design. I follow a large cohort of healthy heavy smokers and look at their eating habits (including use of supplements) and take regular blood samples. After a long follow-up some heavy smokers develop lung cancer whereas others don't. Now it turns out that the group that did not develop lung cancer had significantly more beta-carotene in their blood and ate larger amounts of beta-carotene-containing food. What do you think about that then?
Now the room is a bit quiet, there is some hesitation. Then someone says: well, it is more convincing, and finally the chair says: but it may still not be the carrots, but something else in their food, or they may just have other healthy living habits (including eating carrots). Cohort study appears on the slide. (What a perfect audience!)

O.k. you're not convinced that these study designs give conclusive evidence. How could we then establish that beta-carotene lowers the risk of lung cancer in heavy smokers? Suppose you really wanted to know, how do you set up such a study?
Grinning. Someone says: by giving half of the smokers beta-carotene and the other half nothing. Or a placebo, someone else says. Right! Randomized Controlled Trial is on top of the slide. And there is not much room left for another box, so we are there. I only add that the best way to do it is to do it double blinded.
Than I reveal that all this research has really been done. There have been numerous
observational studies (case-control as well cohorts studies) showing a consistent negative
correlation between the intake of beta-carotene and the development of lung cancer in
heavy smokers. The same has been shown for vitamin E.

Knowing that, I asked the public: Would you as a heavy smoker participate in a trial
where you are randomly assigned to one of the following groups: 1. beta-carotene, 2.
vitamin E, 3. both or 4. neither vitamin (placebo)?
The recruitment fails. Some people say they don't believe in supplements, others say that it would be far more effective if smokers quit smoking (laughter). Just 2 individuals said they would at least consider it. But they thought there was a snag in it, and they were right. Such studies have been done, and did not give the expected positive results.
In the first large RCT (appr. 30,000 male smokers!), the ATBC Cancer Prevention Study, beta-carotene instead increased the incidence of lung cancer by 18 percent and overall mortality by 8 percent (although harmful effects faded after men stopped taking the pills). Similar results were obtained in the CARET study, but not in a 3rd RCT, the Physicians' Health Trial, the only difference being that the latter trial was performed with both smokers and non-smokers.
It is now generally thought that cigarette smoke causes beta-carotene to breakdown in
detrimental products, a process that can be halted by other anti-oxidants (normally present
in food). Whether vitamins act positively (anti-oxidant) or negatively (pro-oxidant) depends
very much on the dose and the situation and on whether there is a shortage of such
supplements or not.
I found that this way of explaining study designs to well-educated laymen was very effective and fun!
The take-home message is that no matter how reproducibly the observational studies seem to indicate a certain effect, better evidence is obtained by randomized controlled trials. It also shows that scientists should be very prudent about translating observational findings directly into particular lifestyle advice.
On the other hand, I wonder whether all hypotheses have to be tested in a costly RCT (the costs for the ATBC trial were $46 million). Shouldn't there be very, very solid grounds to start a prevention study with dietary supplements in healthy individuals? Aren't there any dangers? Personally I think we should be very restrictive about these chemopreventive studies. Till now most chemopreventive studies have not met the high expectations, anyway.
And what about coenzyme-Q and komkommerslank? Besides that I do not expect the evidence to be convincing, tiredness can obviously be best combated by rest, and I already eat enough cucumbers. ;)
To be continued...

Ecological studies are studies of risk-modifying factors on health or other outcomes based on populations defined either geographically or temporally. Both risk-modifying factors and outcomes are averaged for the populations in each geographical or temporal unit and then compared using standard statistical methods.
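As a minimal sketch of that two-step operation (average within each population unit, then compare the unit-level averages with a standard statistic such as a Pearson correlation), with entirely invented records:

# Minimal sketch of an ecological analysis as described above: individual
# records are averaged within each population unit (here, country), and the
# unit-level averages are then compared with a standard statistic (Pearson
# correlation). All records are invented.
from collections import defaultdict
from statistics import mean, stdev

# (country, individual exposure, individual outcome) - invented records
records = [
    ("A", 2.0, 10.0), ("A", 3.0, 12.0), ("A", 2.5, 11.0),
    ("B", 5.0, 20.0), ("B", 6.0, 22.0), ("B", 5.5, 19.0),
    ("C", 8.0, 30.0), ("C", 7.5, 28.0), ("C", 9.0, 33.0),
]

by_country = defaultdict(lambda: ([], []))
for country, exposure, outcome in records:
    by_country[country][0].append(exposure)
    by_country[country][1].append(outcome)

# Population-level averages, one pair per country
exp_means = [mean(exps) for exps, _ in by_country.values()]
out_means = [mean(outs) for _, outs in by_country.values()]

# Pearson correlation of the country-level averages
n = len(exp_means)
cov = sum((x - mean(exp_means)) * (y - mean(out_means))
          for x, y in zip(exp_means, out_means)) / (n - 1)
r = cov / (stdev(exp_means) * stdev(out_means))
print(round(r, 3))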
Ecological studies have often found links between risk-modifying factors and health
outcomes well in advance of other epidemiological or laboratory approaches. Several
examples are given here.
The study by John Snow regarding a cholera outbreak in London is considered the first
ecological study to solve a health issue. He used a map of deaths from cholera to
determine that the source of the cholera was a pump on Broad Street. He had the pump
handle removed in 1854 and people stopped dying there [Newsom, 2006]. It was only
when Robert Koch discovered bacteria years later that the mechanism of cholera
transmission was understood.[1]
Dietary risk factors for cancer have also been studied using both geographical and
temporal ecological studies. Multi-country ecological studies of cancer incidence and
mortality rates with respect to national diets have shown that some dietary factors such
as animal products (meat, milk, fish and eggs), added sweeteners/sugar, and some fats
appear to be risk factors for many types of cancer, while cereals/grains and vegetable
products as a whole appear to be risk reduction factors for many types of cancer.[2][3]
Temporal changes in Japan in the types of cancer common in Western developed countries have been linked to the nutrition transition to the Western diet.[4]
An important advancement in the understanding of risk-modifying factors for cancer was
made by examining maps of cancer mortality rates. The map of colon cancer mortality
rates in the United States was used by the brothers Cedric and Frank C. Garland to
propose the hypothesis that solar ultraviolet B (UVB) radiation, through vitamin D
production, reduced the risk of cancer (the UVB-vitamin D-cancer hypothesis). [5] Since
then many ecological studies have been performed relating the reduction of incidence
or mortality rates of over 20 types of cancer to higher solar UVB doses. [6]
Links between diet and Alzheimer's disease have been studied using both geographical
and temporal ecological studies. The first paper linking diet to risk of Alzheimer's
disease was a multicountry ecological study published in 1997. [7] It used prevalence of
Alzheimer's disease in 11 countries along with dietary supply factors, finding that total
fat and total energy (caloric) supply were strongly correlated with prevalence, while fish
and cereals/grains were inversely correlated (i.e., protective). Diet is now considered an
important risk-modifying factor for Alzheimer's disease. [8] Recently it was reported that
the rapid rise of Alzheimer's disease in Japan between 1985 and 2007 was likely due to
the nutrition transition from the traditional Japanese diet to the Western diet. [9]
Another example of the use of temporal ecological studies relates to influenza. John
Cannell and associates hypothesized that the seasonality of influenza was largely
driven by seasonal variations in solar UVB doses and calcidiol levels.[10] A randomized
controlled trial involving Japanese school children found that taking 1000 IU per day
vitamin D3 reduced the risk of type A influenza by two-thirds. [11]
Ecological studies are particularly useful for generating hypotheses since they can use
existing data sets and rapidly test the hypothesis. The advantages of the ecological
studies include the large number of people that can be included in the study and the
large number of risk-modifying factors that can be examined.
The term ecological fallacy means that the findings for the groups may not apply to
individuals in the group. However, this term also applies to observational studies
and randomized controlled trials. All epidemiological studies include some people who
have health outcomes related to the risk-modifying factors studied and some who do
not. For example, genetic differences affect how people respond to pharmaceutical
drugs. Thus, concern about the ecological fallacy should not be used to disparage
ecological studies. The more important consideration is that ecological studies should
include as many known risk-modifying factors for any outcome as possible, adding
others if warranted. Then the results should be evaluated by other methods, using, for
example, Hill's criteria for causality in a biological system.

The ecological fallacy may occur when conclusions about individuals are drawn from analyses
conducted on grouped data. The nature of this type of analysis tends to overestimate the
degree of association between variables.
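A toy simulation can make the point concrete (a minimal sketch with synthetic data and invented group means): the group-level averages can correlate perfectly even though the association among individuals within each group runs in the opposite direction.

```python
# Toy illustration of the ecological fallacy with synthetic data:
# group averages correlate positively (r = 1.00 by construction),
# while the individual-level association within groups is negative.
import numpy as np

rng = np.random.default_rng(0)
group_mean_x = np.array([1.0, 3.0, 5.0, 7.0])   # e.g. average exposure per region
group_mean_y = 2.0 * group_mean_x               # region-level outcome tracks exposure

x_all, y_all = [], []
for mx, my in zip(group_mean_x, group_mean_y):
    x = mx + rng.normal(0.0, 4.0, 200)                     # individual exposures
    y = my - 1.5 * (x - mx) + rng.normal(0.0, 1.0, 200)    # negative slope within group
    x_all.append(x)
    y_all.append(y)

individual_r = np.corrcoef(np.concatenate(x_all), np.concatenate(y_all))[0, 1]
group_r = np.corrcoef(group_mean_x, group_mean_y)[0, 1]
print(f"group-level r = {group_r:.2f}")            # 1.00 by construction
print(f"individual-level r = {individual_r:.2f}")  # negative: within-group trend dominates
```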

Survival rate.
Life table
In actuarial science and demography, a life table (also called a mortality
table or actuarial table) is a table which shows, for each age, what the probability is
that a person of that age will die before his or her next birthday ("probability of death").
From this starting point, a number of inferences can be derived, for example: the probability of
surviving any particular year of age, and the remaining life expectancy for people at different
ages (see the sketch below).
Life tables are also used extensively in biology and epidemiology. The concept is also of
importance in product life cycle management.
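As a minimal sketch of how such inferences can be computed (the mortality rates below are invented for illustration, and the simple curtate life expectancy is used):

```python
# Minimal life-table sketch with invented mortality rates.
# qx[i] = probability that a person aged i dies before reaching age i+1.
import numpy as np

qx = np.array([0.01, 0.012, 0.015, 0.02, 0.03, 0.05, 0.08, 0.13, 0.22, 1.0])
ages = np.arange(len(qx))          # ages 0..9 in this toy example

px = 1.0 - qx                      # probability of surviving each year of age
lx = np.concatenate(([1.0], np.cumprod(px)[:-1]))   # proportion still alive at each age

def remaining_life_expectancy(x: int) -> float:
    # Curtate life expectancy at age x: expected number of whole future years lived,
    # e_x = sum over k >= 1 of P(survive from age x to age x+k)
    surv = np.cumprod(px[x:])
    return float(surv.sum())

for age in (0, 5, 8):
    print(f"age {age}: l_x = {lx[age]:.3f}, e_x = {remaining_life_expectancy(age):.2f} years")
```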

Using the data from Table 1, the chart shows survival curves, with age ranging from 20 to 90
years and the number of future years ranging from 5 to 25.
These curves show the probability that someone who has reached a given age will
live at least that many more years, and can be used to discuss annuity issues from the boomer
viewpoint, where an increase in group size will have major effects.
For those in the age range covered by the chart, the "5 yr" curve indicates the group
that will reach beyond the life expectancy. This curve represents the need for support
that covers longevity requirements.
The "20 yr" and "25 yr" curves indicate the continuing diminishing of the life
expectancy value as "age" increases. The differences between the curves are very
pronounced starting around the age of 50 to 55 and ought to be used for planning
based upon expectation models.
The "10 yr" and "15 yr" curves can be thought of as the trajectory that is followed by
the life expectancy curve related to those along the median which indicates that the age
of 90 is not out of the question.

A "life table" is a kind of bookkeeping system that ecologists often use to keep
track of stage-specific mortality in the populations they study.

It is an especially

useful approach in entomology where developmental stages are discrete and


mortality rates may vary widely from one life stage to another.

From a pest

management standpoint, it is very useful to know when (and why) a pest


population suffers high mortality -- this is usually the time when it is most
vulnerable.

By managing the natural environment to maximize this vulnerability,

pest populations can often be suppressed without any other control methods.
To create a life table, an ecologist follows the life history of many individuals in a
population, keeping track of how many offspring each female produces, when each
one dies, and what caused its death. After amassing data from different
populations, different years, and different environmental conditions, the ecologist
summarizes this data by calculating average mortality within each developmental
stage.
For example, in a hypothetical insect population, an average female will lay 200
eggs before she dies. Half of these eggs (on average) will be consumed by
predators, 90% of the larvae will die from parasitization, and three-fifths of the
pupae will freeze to death in the winter. (These numbers are averages, but they
are based on a large database of observations.) A life table can be created from
the above data.
Start with a cohort of 200 eggs (the progeny of Mrs. Average Female).
This number represents the maximum biotic potential of the species (i.e. the
greatest number of offspring that could be produced in one generation under ideal
conditions).

The first line of the life table lists the main cause(s) of death, the
number dying, and the percent mortality during the egg stage. In this example,
an average of only 100 individuals survive the egg stage and become larvae.
The second line of the table lists the mortality experience of these 100 larvae: only
10 of them survive to become pupae (90% mortality of the larvae). The third
line of the table lists the mortality experience of the 10 pupae -- three-fifths die of
freezing. This leaves only 4 individuals alive in the adult stage to reproduce. If
we assume a 1:1 sex ratio, then there are 2 males and 2 females to start the next
generation.
If there is no mortality of these females, they will each lay an average of 200 eggs
to start the next generation. Thus there are two females in the cohort to replace
the one original female -- this population is DOUBLING in size each generation!!
In ecology, the symbol "R" (capital R) is known as the replacement rate.

It is a

way to measure the change in reproductive capacity from generation to


generation.

The value of "R" is simply the number of reproductive daughters that

each female produces over her lifetime:

Number of daughters
R = ------------------------------Number of mothers
If the value of "R" is less than 1, the population is decreasing -- if this situation
persists for any length of time the population becomes extinct.
If the value of "R" is greater than 1, the population is increasing -- if this situation
persists for any length of time the population will grow beyond the environment's
carrying capacity. (Uncontrolled population growth is usually a sign of a disturbed
habitat, an introduced species, or some other type of human intervention.)
If the value of "R" is equal to 1, the population is stable -- most natural populations
are very close to this value.
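As a rough worked illustration (a minimal sketch in Python, using only the hypothetical numbers from the insect example above), the life table and the replacement rate "R" can be tabulated as follows:

```python
# Sketch of the hypothetical insect life table described above:
# 200 eggs, 50% eaten by predators, 90% of larvae parasitized,
# three-fifths of pupae freeze; a 1:1 sex ratio is assumed.
stages = [
    # (stage, main cause of death, proportion dying in that stage)
    ("egg",   "predation",      0.50),
    ("larva", "parasitization", 0.90),
    ("pupa",  "freezing",       0.60),
]

cohort = 200          # progeny of one average female (maximum biotic potential)
mothers = 1
alive = cohort

print(f"{'stage':<6} {'cause of death':<15} {'number dying':>12} {'% mortality':>12}")
for stage, cause, mortality in stages:
    dying = alive * mortality
    print(f"{stage:<6} {cause:<15} {dying:>12.0f} {mortality:>12.0%}")
    alive -= dying

daughters = alive / 2                 # 1:1 sex ratio among the surviving adults
R = daughters / mothers
print(f"adults surviving: {alive:.0f}, daughters: {daughters:.0f}, R = {R:.1f}")
```

Running this reproduces the result described above: 4 surviving adults, 2 daughters per original female, so R = 2.0 and the population doubles each generation.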

Practice Problem:
A typical female of the bubble gum maggot (Bubblicious blowhardi Meyer) lays 250
eggs. On average, 32 of these eggs are infertile and 64 are killed by parasites.
Of the survivors, 64 die as larvae due to habitat destruction (gum is cleared away
by the janitorial staff) and 87 die as pupae because the gum gets too hard.
Construct a life table for this species and calculate a value for "R", the replacement
rate (assume a 1:1 sex ratio). Is this population increasing, decreasing, or
remaining stable?

How to compare life table, survival rate.


Relative Risk
Y-Y analysis
Forest Graph
A forest plot (or blobbogram[1]) is a graphical display designed to illustrate the relative
strength of treatment effects in multiple quantitative scientific studies addressing the
same question. It was developed for use in medical research as a means of graphically
representing a meta-analysis of the results of randomized controlled trials. In the last
twenty years, similar meta-analytical techniques have been applied in observational
studies (e.g. environmental epidemiology) and forest plots are often used in presenting
the results of such studies also.
Although forest plots can take several forms, they are commonly presented with two
columns. The left-hand column lists the names of the studies (frequently randomized
controlled trials or epidemiological studies), commonly in chronological order from the
top downwards. The right-hand column is a plot of the measure of effect (e.g. an odds
ratio) for each of these studies (often represented by a square) incorporating confidence
intervals represented by horizontal lines. The graph may be plotted on a natural
logarithmic scale when using odds ratios or other ratio-based effect measures, so that
the confidence intervals are symmetrical about the means from each study and to
ensure undue emphasis is not given to odds ratios greater than 1 when compared to
those less than 1. The area of each square is proportional to the study's weight in the
meta-analysis. The overall meta-analysed measure of effect is often represented on the
plot as a dashed vertical line. This meta-analysed measure of effect is commonly plotted
as a diamond, the lateral points of which indicate confidence intervals for this estimate.
A vertical line representing no effect is also plotted. If the confidence intervals for
individual studies overlap with this line, it demonstrates that at the given level of
confidence their effect sizes do not differ from no effect for the individual study. The
same applies for the meta-analysed measure of effect: if the points of the diamond
overlap the line of no effect the overall meta-analysed result cannot be said to differ
from no effect at the given level of confidence.
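As a rough sketch of how such a plot can be drawn (a hypothetical meta-analysis: study names, odds ratios, confidence limits and weights are all invented, and the pooled estimate is drawn as a square rather than the conventional diamond), using Python with numpy and matplotlib:

```python
# Minimal forest-plot sketch with invented data.
import matplotlib.pyplot as plt
import numpy as np

studies = ["Study A (2001)", "Study B (2005)", "Study C (2009)", "Pooled estimate"]
odds_ratios = np.array([0.80, 1.10, 0.65, 0.82])
ci_lower = np.array([0.55, 0.80, 0.45, 0.68])
ci_upper = np.array([1.15, 1.50, 0.95, 0.99])
weights = np.array([30, 45, 25, 100])          # relative weights (per cent)

y = np.arange(len(studies))[::-1]              # list studies from the top downwards
fig, ax = plt.subplots()

# Horizontal lines for the confidence intervals, squares sized by study weight
ax.hlines(y, ci_lower, ci_upper, color="black")
ax.scatter(odds_ratios, y, s=weights * 4, marker="s", color="black")

ax.axvline(1.0, color="grey")                        # vertical line of no effect
ax.axvline(odds_ratios[-1], linestyle="--", color="grey")  # pooled estimate

ax.set_xscale("log")                           # symmetric CIs for ratio measures
ax.set_yticks(y)
ax.set_yticklabels(studies)
ax.set_xlabel("Odds ratio (log scale)")
plt.tight_layout()
plt.show()
```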

Forest plots date back to at least the 1970s. One plot is shown in a 1985 book about
meta-analysis.[2]:252 The first use in print of the word "forest plot" may be in an abstract
for a poster at the Pittsburgh (USA) meeting of the Society for Clinical Trials in May
1996.[3] An informative investigation on the origin of the notion "forest plot" was
published in 2001.[4] The name refers to the forest of lines produced. In September
1990, Richard Peto joked that the plot was named after a breast cancer researcher
called Pat Forrest and as a result the name has sometimes been spelt "forrest plot".[4]

Effective human resources management


Strategic Human Resource Management is done by linking HRM with strategic
goals and objectives in order to improve business performance and develop
organizational cultures that foster innovation and flexibility. It involves planning HR
activities and deployment in such a way as to enable organizations to achieve their
goals. Human Resource activities such as recruitment, selection, training and
rewarding personnel are carried out with the company's goals and
objectives in view. Organizations focus on identifying, analyzing and balancing two sorts
of forces: the organization's external opportunities and threats on one hand,
and its internal strengths and weaknesses on the other. Alignment of the Human
Resource system with the strategic goals of the firm has helped organizations
achieve ambitious targets.

Effective Human Resource Management is the Center for Effective Organizations' (CEO) sixth
report of a fifteen-year study of HR management in today's organizations. The only long-term
analysis of its kind, this book compares the findings from CEO's earlier studies to new data
collected in 2010. Edward E. Lawler III and John W. Boudreau measure how HR management
is changing, paying particular attention to what creates a successful HR function, one that
contributes to a strategic partnership and overall organizational effectiveness. Moreover, the
book identifies best practices in areas such as the design of the HR organization and HR metrics.
It clearly points out how the HR function can and should change to meet the future demands of
a global and dynamic labor market.
For the first time, the study features comparisons between U.S.-based firms and companies in
China, Canada, Australia, the United Kingdom, and other European countries. With this new
analysis, organizations can measure their HR organization against a worldwide sample,
assessing their positioning in the global marketplace, while creating an international standard
for HR management.

Policy?
1. Politics: (1) The basic principles by which a government is guided.
(2) The declared objectives that a government or party seeks to achieve and preserve in the interest of
national community. See also public policy.
2. Insurance: The formal contract issued by an insurer that contains terms and conditions of the
insurance cover and serves as its legal evidence.
3. Management: The set of basic principles and associated guidelines, formulated and enforced by the
governing body of an organization, to direct and limit its actions in pursuit of long-term goals. See
also corporate policy.

A policy is a principle or protocol to guide decisions and achieve rational outcomes. A
policy is a statement of intent, and is implemented as a procedure [1] or protocol. Policies
are generally adopted by the board or senior governance body within an organization,
whereas procedures or protocols would be developed and adopted by senior executive
officers. Policies can assist in both subjective and objective decision making. Policies to
assist in subjective decision making would usually assist senior management with
decisions that must consider the relative merits of a number of factors before making
decisions and as a result are often hard to objectively test e.g. work-life balance policy.
In contrast policies to assist in objective decision making are usually operational in
nature and can be objectively tested e.g. password policy.[citation needed]
The term may apply to government, private sector organizations and groups, as well as
individuals. Presidential executive orders, corporate privacy policies, and
parliamentary rules of order are all examples of policy. Policy differs from rules or law.
While law can compel or prohibit behaviors (e.g. a law requiring the payment of taxes
on income), policy merely guides actions toward those that are most likely to achieve a
desired outcome.[citation needed]
Policy or policy study may also refer to the process of making important organizational
decisions, including the identification of different alternatives such as programs or
spending priorities, and choosing among them on the basis of the impact they will have.
Policies can be understood as political, management, financial, and administrative
mechanisms arranged to reach explicit goals. In public corporate finance, a critical
accounting policy is a policy for a firm/company or an industry which is considered to
have a notably high subjective element, and that has a material impact on the financial
statements.[citation needed]

Micro-planning
Micro Planning: A tool to empower people
Micro-planning is a comprehensive planning approach wherein the community
prepares development plans themselves, considering the priority needs of the village.
Inclusion and participation of all sections of the community is central to micro-planning,
thus making it an integral component of decentralized governance. For village
development to be sustainable and participatory, it is imperative that the community
owns its village development plans and that the community ensures that development
is in consonance with its needs.
However, from our experience of working with the panchayats in Mewat, we realized
that this bottom-up planning approach was never followed in making village
development plans in the past. Many a time, the elected panchayat representatives
had not even heard of this term.
Acknowledging the significance of micro-planning for village development, IRRAD's
Capacity Building Center organized a week-long training workshop on micro-planning
for elected representatives of panchayats and IRRAD's staff working with panchayats
in the villages. The aim of this workshop was to educate the participants about the
concept of micro-planning and its importance in a decentralized governance system.
As part of this workshop, the participants were given a detailed explanation of the concept,
the why and how of micro-planning, and the difference between micro-planning and
traditional planning approaches. To give practical exposure to the participants, a three-day
micro-planning exercise was carried out in Untaka Village of Nuh Block, Mewat.
The objective of this exposure was to show participants how micro-planning is carried
out and what challenges may arise during its conduct, and to prepare the village
development plan following the micro-planning approach.
The village sarpanch led the process from the front, and the entire village and
panchayat members participated wholeheartedly in this exercise. Participatory Rural
Appraisal (PRA) technique which incorporates the knowledge and opinions of rural
people in the planning and management of development projects and programmes was
used to gather information and prioritize development works. Resource, social and
development issue prioritization maps were prepared by the villagers after analyzing
the collected information. The villagers further identified the problems associated with
village development and recommended solutions for specific problems while working
in groups. The planning process went on for two days subsequent to which a Gram
Sabha (village committee), the first power unit in the panchayati raj system, was
organized on the third day. About 250 people participated in the Gram Sabha
including 65 women and 185 men. The sarpanch shared the final village analysis and
development plans with the villagers present in Gram Sabha and asked for their inputs
and suggestions. After incorporating the suggestions received, a plan was prepared
and submitted to Block Development Office for final approval and sanction of funds.
"After the successful conduct of Gram Sabha in our village, we now need to build
synergies with the district level departments to implement the plans drawn in the
meeting," said the satisfied Sarpanch of Untka after experiencing the conduct of micro
planning exercise in their village.

Macro-planning
Macro Planning and Policy Division (MPPD) is responsible for setting macroeconomic policies and
strategies in consultation with key agencies, such as the Reserve Bank of Fiji (RBF) and Ministry of
Finance. The Division analyzes and forecasts movements in macroeconomic indicators and accounts,
including Gross Domestic Product (GDP), Exports and Imports, and the Balance of Payments (BOP).
Macroeconomic forecasting involves making assessments on production data in the various sectors of the
economy for compilation of quarterly forecasts of the National Accounts.

The Division is also involved in undertaking assessments and research on macroeconomic indicators,
internal and external shocks and structural reform measures, which include areas such as investment, the labour
market, goods market, trade, public enterprises, and the public service.
The Macro Policy and Planning Division:
- provides technical and policy advice;
- produces macroeconomic forecasts of Gross Domestic Product, Exports, Imports and Balance of Payments on a quarterly basis;
- participates effectively in policy development meetings and consultative forums; and
- undertakes research on topical issues and provides pre-budget macroeconomic analyses and advice.

1. Macro lesson planning


The term macro comes from Greek makros meaning long,
large. For teachers, macro lesson planning means coming
up with the curriculum for the semester/month/year/etc. Not all
teachers feel they are responsible for this as many schools
have set curriculums and/or textbooks determined by the
academic coordinator. However, even in these cases,
teachers may be called upon to devise a curriculum for a new
class, modify an older curriculum, or map out themes to
match the target lessons within the curriculum.
At my old school, for instance, I had the chance to develop the
curriculum for a TOEIC Intermediate and a TOEFL Advanced
class when they were first introduced at our school. I've also
modified older curricula (or curriculums, if you prefer; both are
acceptable) for various levels because of students' changing
needs. And finally, my old school kindly granted the teachers
one day a month of paid prep time/new student intake, where
we'd decide on the themes that we'd be using for our class to
ensure there wasn't too much overlap with other classes. We
did have a set curriculum in terms of grammar points, but
themes and supplementary materials were up to us. Doing a
bit of planning before the semester started ensured that we
stayed organized and kept the students' interest throughout
the semester.
Another benefit of macro lesson planning is that teachers can
share the overall goals of the course with their students on the
first day, and they can reiterate those goals as the semester
progresses. Students often lose sight of the big picture and
get discouraged with their English level, and having clear
goals that they see themselves reaching helps prevent this.

2. Micro lesson planning


The term micro comes from the Greek mikros meaning
small, little. In the ELT industry, micro lesson
planning refers to planning one specific lesson based on one
target (e.g., the simple past). It involves choosing a topic or
grammar point and building a full lesson to complement it. A
typical lesson plan involves a warm-up activity, which
introduces the topic or elicits the grammar naturally, followed
by an explanation/lesson of the point to be covered. Next,
teachers devise a few activities that allow students to practice
the target point, preferably through a mix of skills (speaking,
listening, reading, writing). Finally, teachers should plan a brief
wrap-up activity that brings the lesson to a close. This could
be as simple as planning to ask students to share their
answers from the final activity as a class.
Some benefits of micro lesson planning include classes that
run smoothly and students who don't get bored. Lesson
planning ensures that you'll be prepared for every class and
that you'll have a variety of activities on hand for whatever
situation may arise (well, the majority of situations; I'm sure
we've all had those classes where an activity we thought
would rock ends up as an epic fail).
For more information on micro lesson planning, check
out How to Make a Lesson Plan, a blog post I wrote last year,
where I emphasized the importance of planning fun,
interesting fillers so that students stay engaged. I also
provided links in that post to many examples of activities you
can use for warm-ups, main activities, fillers, homework, etc.
There is also a good template for a typical lesson plan
at docstoc.

Can anyone think of other benefits of macro or micro lesson
planning? Does anyone have a different definition of these
terms? Let us know below.
Happy planning!
Tanya
Macro is big and micro is very small. Macroeconomics depends on big projects like steel mills,
big industrial units, national highway projects etc. which aim at producing goods and services in
very large quantities and serve a wide area. These take time to produce results because of the
size of the projects. Microeconomics is on a small scale, limited to a specific area or location and
purpose, and normally produces results in a much shorter time. The best example of
microeconomics is the Grameen Bank of Bangladesh started by Md. Yunus, who also got
international awards for his initiative. The concept of microcredit was pioneered by the
Bangladesh-based Grameen Bank, which broke away from the age-old belief that low income
amounted to low savings and low investment. It started what came to be a system which
followed this sequence: low income, credit, investment, more income, more credit, more
investment, more income. It is owned by the poor borrowers of the bank, who are mostly women.
Borrowers of Grameen Bank at present own 95 per cent of the total equity and the balance 5%
is held by the Govt. Microeconomics was also one of the policies of Mahatma Gandhi, who wanted
planning to start from the local village level and spread throughout the country; unfortunately this has not
happened and even now the results of development have not percolated to the common man,
particularly in the rural areas.

Macro planning vs. micro planning


Ideally, lesson planning should be done at two levels: macro planning and micro planning. The
former is planning over time, for instance, the planning for a month, a term, or the whole course.
The latter is planning for a specific lesson, which usually lasts 40 or 50 minutes. Of course, there
is no clear cut difference between these two types of planning. Micro planning should be based
on macro planning, and macro planning is apt to be modified as lessons go on.
Read through the following items and decide which belong to macro planning and which
belong to micro planning. Some could belong to both. When you have finished, compare
your decisions with your partner.
TASK 2: Thinking and sharing activity

1. Write down lesson notes to guide teaching.
2. Decide on the overall aims of a course or programme.
3. Design activities and procedures for a lesson.
4. Decide which language points to cover in a lesson.
5. Study the textbooks and syllabus chosen by the institute.
6. Decide which skills are to be practised.
7. Prepare teaching aids.
8. Allocate time for activities.
9. Prepare games or songs for a lesson.
10. Prepare supplementary materials.

In a sense, macro planning is not writing lesson plans for specific lessons but rather familiarizing
oneself with the context in which language teaching is taking place. Macro planning involves the
following:
1) Knowing about the course: The teacher should get to know which language areas and language
skills should be taught or practised in the course, what materials and teaching aids are available,
and what methods and techniques can be used.
2) Knowing about the institution: The teacher should get to know the institution's arrangements
regarding time, length, frequency of lessons, physical conditions of classrooms, and exam
requirements.
3) Knowing about the learners: The teacher should acquire information about the students' age
range, sex ratio, social background, motivation, attitudes, interests, learning needs and other
individual factors.
4) Knowing about the syllabus: The teacher should be clear about the purposes, requirements and
targets specified in the syllabus.
Much of macro planning is done prior to the commencement of a course. However, macro
planning is a job that never really ends until the end of the course.
Macro planning provides general guidance for language teachers. However, most teachers have
more confidence if they have a kind of written plan for each lesson they teach. All teachers have
different personalities and different teaching strategies, so it is very likely their lesson plans
would differ from each other. However, there are certain guidelines that we can follow and certain
elements that we can incorporate in our plans to help us create purposeful, interesting and
motivating lessons for our learners.

Components of policy/ planning


Five essential components
The five essential components that ensure an effective P&P program include the organizational
documentation process, the information plan or architecture, the documentation approach, P&P
expertise, and technologies (tools).
Definition of P&P program
A policies and procedures (P&P) program refers to the context in which an organization formally plans, designs,
implements, manages, and uses P&P communication in support of performance-based learning and on-going
reference.

Description of components
The five components of a formal P&P program are described below:
- An organizational documentation process, which describes how members of the organization interact in the development and maintenance of the life span of P&P content
- The information plan or architecture, which identifies the coverage and organization of subject matter and related topics to be included
- The documentation approach, which designates how P&P content will be designed and presented, including the documentation methods, techniques, formats, and styles
- The P&P expertise necessary for planning, designing, developing, coordinating, implementing, and publishing P&P content, as well as the expertise needed for managing the program and the content development projects
- The designated technologies for developing, publishing, storing, accessing, and managing content, as well as for monitoring content usage.
Implementing components
Every organization is usually at a different maturity stage for their P&P investment. Therefore, before establishing or
enhancing a current P&P program, it is important to obtain an objective assessment of the organizational maturity,
including where your P&P program is now and where it needs to be in the future. Once the maturity level is
established, it is then necessary to develop a strategic P&P program plan. The strategic plan will enable your
organization to achieve the necessary level of maturity for each component and ensure that your organization will
maximize the value of its P&P investment.

Conclusion
Organizations with informal P&P programs do not usually reap the benefits that formal P&P programs provide. An
effective P&P program must include five components. It is essential to have an objective P&P program assessment to
determine the existing P&P maturity grade and where it should be. The P&P strategic plan is the basis for achieving a
higher level of performance in your P&P program

The following information is provided as a template to assist learners to draft a policy. However,
it must be remembered that policies are written to address specific issues, and therefore the
structure and components of a policy will differ considerably according to the need. A policy
document may be many pages or it may be a single page with just a few simple statements.
The following template is drawn from an Information Bulletin "Policy and Planning" by Sport
and Recreation Victoria. It is suggested that there are nine components. The brief example given
with each component should not be construed as a complete policy.

Component 1: A statement of what the organisation seeks to achieve for its clients.
Brief example: The following policy aims to ensure that XYZ Association Inc. fulfills the expectation of its members for quality services in sport and recreation delivery.

Component 2: Underpinning principles, values and philosophies.
Brief example: The underpinning principle of this policy is that the provision of quality services is of the utmost importance in building membership and participation. Satisfied members are more likely to continue participation, contribute to the organisation and renew their memberships each year.

Component 3: Broad service objectives which explain the areas in which the organisation will be dealing.
Brief example: This policy aims to improve the quality of services provided by XYZ Assoc. Inc. in: the organisation and management of programs and services; and the management of association resources.

Component 4: Strategies to achieve each objective.
Brief example: Strategies to improve the quality of services in program and event management include: provision of training for event officials; implementing a participant survey; and fostering a culture of continuous improvement. Strategies to improve the quality of services through the better management of resources include: implementation of best practice consultation and planning processes; professional development opportunities for the human resources of the organisation; instituting a risk management program; and the maintenance of records and databases to assist in the management process.

Component 5: Specific actions to be taken.
Brief example: This policy recommends the following actions: participants are surveyed on a once-yearly basis for satisfaction with programs and services; the quality of services to participants is reviewed annually as part of the strategic planning process; the operational planning process includes scheduling events for the professional development of staff; the risk management program is reviewed on a yearly basis, with the review involving risk management professionals; and all clubs are consulted in the maintenance, distribution and usage of physical and financial resources.

Component 6: Desired outcomes of specific actions.
Brief example: The desired outcomes of this policy are as follows: increased satisfaction of participants with the association's events and programs; the best utilisation of the association's resources in line with the expectations of members; and the better management of risks associated with services delivery.

Component 7: Performance indicators.
Brief example: The success of this policy may be measured in terms of: an increase in the average membership duration; an increase in participation in association events; an increase in the number of volunteer officials; and a reduction in injuries.

Component 8: Management plans and day-to-day operational rules covering all aspects of services delivery.
Brief example: This section of the policy provides further information and detail on how the policy is to be implemented and observed on a day-to-day basis.

Component 9: A review program.
Brief example: This policy should be reviewed annually. The review process should include an examination of the performance indicators, consultation with members of the association, and a discussion forum involving the management committee and risk management professionals.

(These hypothetical examples are for illustration. There is no substitute for research and consultation in the development of effective policies.)

Health care financing

Health Care Financing, Efficiency, and Equity


This paper examines the efficiency and equity implications of
alternative health care system financing strategies. Using data across
the OECD, I find that almost all financing choices are compatible with
efficiency in the delivery of health care, and that there has been no
consistent and systematic relationship between financing and cost
containment. Using data on expenditures and life expectancy by
income quintile from the Canadian health care system, I find that
universal, publicly-funded health insurance is modestly redistributive.
Putting $1 of tax funds into the public health insurance system
effectively channels between $0.23 and $0.26 toward the lowest
income quintile people, and about $0.50 to the bottom two income
quintiles. Finally, a review of the literature across the OECD suggests
that the progressivity of financing of the health insurance system has
limited implications for overall income inequality, particularly over time.

Health financing systems are critical for reaching universal health coverage. Health financing
levers to move closer to universal health coverage lie in three interrelated areas:

- raising funds for health;
- reducing financial barriers to access through prepayment and subsequent pooling of funds in preference to direct out-of-pocket payments; and
- allocating or using funds in a way that promotes efficiency and equity.
Developments in these key health financing areas will determine whether health services exist
and are available for everyone and whether people can afford to use health services when they
need them.
Guided by the World Health Assembly resolution WHA64.9 from May 2011 and based on the
recommendations from the World Health Report 2010 Health systems financing: The path to
universal coverage, WHO is supporting countries in the development of health financing systems that
can bring them closer to universal coverage.

HEALTH CARE FINANCING


Management Sciences for Health (MSH) helps governments and nongovernmental
organizations assess their current financial situation and systems,
understand service costs, develop financing solutions, and use funds
more effectively and efficiently. MSH believes in integrated approaches to
health finance and works with sets of policy levers that will produce the
best outcomes, including government regulations, budgeting
mechanisms, insurance payment methods, and provider and patient
incentives.

Healthcare Financing
The Need
More than 120 million people in Pakistan do not have health coverage. This pushes the poor into
debt and an inevitable medical-poverty trap. Two-thirds of households surveyed over the last three
years reported that they were affected by one or more health problems and went into debt to
finance the cost. Many who cannot afford treatment, particularly women, forego medical treatment
altogether.
The Solution
To fill this vacuum in healthcare financing, the American Pakistan Foundation has partnered with
Heartfile Health Financing to support their groundbreaking work in healthcare reform and health
financing for the poor in Pakistan.
Heartfile is an innovative program that utilizes a custom-made technology platform to transfer funds
for treatment costs of the poor. The system, founded by Dr. Sania Nishtar, is highly transparent and
effective by providing a direct connection between the donor, healthcare facility, and beneficiary
patient.

Success Stories
At the age of 15 Majjid was the only breadwinner of his family. After being hit by a tractor he was out
of a job with a starving family and no money for an operation. Through Heartfile he was able to get
the treatment he needed and stay out of debt.

The Process
Heartfile is contacted via text or email when a person of dire financial need is admitted into one of a
list of preregistered hospitals.
Within 24 hours a volunteer is mobilized to see the patient, assess poverty status and the eligibility
by running their identity card information through the national database authority.

Once eligibility is established, the patient is sent funds within 72 hours through a cash transfer to
their service provider.
Donors to Heartfile have full control over their donation through a web database that allows them to
decide where they want their funds to go. They are connected to the people they support through a
personal donation page that allows them to see exactly how their funds were used.

Hill's Criteria of Causation


Hill's Criteria of Causation outline the minimal conditions
needed to establish a causal relationship between two
items. These criteria were originally presented by Austin
Bradford Hill (1897-1991), a British medical statistician, as a
way of determining the causal link between a specific factor
(e.g., cigarette smoking) and a disease (such as emphysema
or lung cancer). Hill's Criteria form the basis of modern
epidemiological research, which attempts to establish
scientifically valid causal connections between potential
disease agents and the many diseases that afflict
humankind. While the criteria established by Hill (and
elaborated by others) were developed as a research tool in
the medical sciences, they are equally applicable to
sociology, anthropology and other social sciences, which
attempt to establish causal relationships among social
phenomena. Indeed, the principles set forth by Hill form the
basis of evaluation used in all modern scientific research.
While it is quite easy to claim that agent "A" (e.g., smoking)
causes disease "B" (lung cancer), it is quite another matter
to establish a meaningful, statistically valid connection
between the two phenomena. It is just as necessary to ask if
the claims made within the social and behavioral sciences
live up to Hill's Criteria as it is to ask the question in
epidemiology (which is also a social and behavioral
science). While it is quite easy to claim that population
growth causes poverty or that globalization causes
underdevelopment in Third World countries, it is quite
another thing to demonstrate scientifically that such causal
relationships, in fact, exist. Hill's Criteria simply provide an
additional valuable measure by which to evaluate the many
theories and explanations proposed within the social
sciences.

Hill's Criteria
Hill's Criteria* are presented here as they have been applied in
epidemiological research, followed by examples which illustrate how
they would be applied to research in the social and behavioral sciences.

1. Temporal Relationship:
Exposure always precedes the
outcome. If factor "A" is believed to
cause a disease, then it is clear that
factor "A" must necessarily always
precede the occurrence of the disease.
This is the only absolutely essential
criterion. This criterion negates the
validity of all functional explanations
used in the social sciences, including
the functionalist explanations that
dominated British social anthropology
for so many years and the ecological
functionalism that pervades much
American cultural ecology.

2. Strength:
This is defined by the size of the association as measured by
appropriate statistical tests. The
stronger the association, the more
likely it is that the relation of "A" to
"B" is causal. For example, the more
highly correlated hypertension is with
a high sodium diet, the stronger is the
relation between sodium and
hypertension. Similarly, the higher the
correlation between patrilocal
residence and the practice of male
circumcision, the stronger is the
relation between the two social
practices.

3. Dose-Response Relationship:
An increasing amount of exposure
increases the risk. If a dose-response
relationship is present, it is strong
evidence for a causal relationship.
However, as with specificity (see
below), the absence of a dose-response relationship does not rule
out a causal relationship. A threshold
may exist above which a relationship
may develop. At the same time, if a
specific factor is the cause of a
disease, the incidence of the disease
should decline when exposure to the
factor is reduced or eliminated. An
anthropological example of this would
be the relationship between
population growth and agricultural
intensification. If population growth is
a cause of agricultural intensification,
then an increase in the size of a
population within a given area should
result in a commensurate increase in
the amount of energy and resources
invested in agricultural production.
Conversely, when a population decrease occurs, we should see a
commensurate reduction in the
investment of energy and resources
per acre. This is precisely what
happened in Europe before and after
the Black Plague. The same analogy
can be applied to global
temperatures. If increasing levels of
CO2 in the atmosphere causes
increasing global temperatures, then
"other things being equal", we should
see both a commensurate increase
and a commensurate decrease in
global temperatures following an
increase or decrease respectively in
CO2 levels in the atmosphere.

4. Consistency:
The association is consistent when
results are replicated in studies in
different settings using different
methods. That is, if a relationship is
causal, we would expect to find it
consistently in different studies and
among different populations. This is
why numerous experiments have to be
done before meaningful statements
can be made about the causal
relationship between two or more
factors. For example, it required
thousands of highly technical studies
of the relationship between cigarette
smoking and cancer before a definitive
conclusion could be made that
cigarette smoking increases the risk of
(but does not cause) cancer. Similarly,
it would require numerous studies of
the difference between male and
female performance of specific
behaviors by a number of different
researchers and under a variety of
different circumstances before a conclusion could be made regarding
whether a gender difference exists in the performance of such behaviors.

5. Plausibility:
The association agrees with currently
accepted understanding of
pathological processes. In other
words, there needs to be some
theoretical basis for positing an
association between a vector and
disease, or one social phenomenon
and another. One may, by chance,
discover a correlation between the
price of bananas and the election of
dog catchers in a particular
community, but there is not likely to be
any logical connection between the
two phenomena. On the other hand,
the discovery of a correlation between
population growth and the incidence
of warfare among Yanomamo villages
would fit well with ecological theories
of conflict under conditions of
increasing competition over
resources. At the same time, research
that disagrees with established theory
is not necessarily false; it may, in fact,
force a reconsideration of accepted
beliefs and principles.

6. Alternate Explanations:
In judging whether a reported
association is causal, it is necessary
to determine the extent to which
researchers have taken other possible
explanations into account and have
effectively ruled out such alternate
explanations. In other words, it is always necessary to consider multiple
hypotheses before making
conclusions about the causal
relationship between any two items
under investigation.

7. Experiment:
The condition can be altered
(prevented or ameliorated) by an
appropriate experimental regimen.

8. Specificity:
This is established when a single
putative cause produces a specific
effect. This is considered by some to
be the weakest of all the criteria. The
diseases attributed to cigarette
smoking, for example, do not meet this criterion. When specificity of an
association is found, it provides
additional support for a causal
relationship. However, absence of
specificity in no way negates a causal
relationship. Because outcomes (be
they the spread of a disease, the
incidence of a specific human social
behavior or changes in global
temperature) are likely to have
multiple factors influencing them, it is
highly unlikely that we will find a one-to-one cause-effect relationship
between two phenomena. Causality is
most often multiple. Therefore, it is
necessary to examine specific causal
relationships within a
larger systemic perspective.

9. Coherence:
The association should be compatible
with existing theory and knowledge.
In other words, it is necessary to
evaluate claims of causality within the
context of the current state of
knowledge within a given field and in
related fields. What do we have to sacrifice about what we currently
know in order to accept a particular claim of causality? What, for example,
do we have to reject regarding our
current knowledge in geography,
physics, biology and anthropology in
order to accept the Creationist claim
that the world was created as
described in the Bible a few thousand
years ago? Similarly, how consistent
are racist and sexist theories of
intelligence with our current
understanding of how genes work and
how they are inherited from one
generation to the next? However, as
with the issue of plausibility, research
that disagrees with established theory and knowledge is not automatically
false. It may, in fact, force a
reconsideration of accepted beliefs
and principles. All currently accepted
theories, including Evolution,
Relativity and non-Malthusian
population ecology, were at one time
new ideas that challenged orthodoxy.
Thomas Kuhn has referred to such
changes in accepted theories
as "Paradigm Shifts".

The Bradford Hill criteria, otherwise known as Hill's criteria for causation, are a
group of minimal conditions necessary to provide adequate evidence of a causal
relationship between an incidence and a consequence, established by
the English epidemiologist Sir Austin Bradford Hill (1897–1991) in 1965.
The list of the criteria is as follows:
1. Strength: A small association does not mean that there is not a causal effect, though the larger the association, the more likely that it is causal.[1] (A rough numerical sketch follows this list.)
2. Consistency: Consistent findings observed by different persons in different places with different samples strengthen the likelihood of an effect.[1]
3. Specificity: Causation is likely if there is a very specific population at a specific site and disease with no other likely explanation. The more specific an association between a factor and an effect is, the bigger the probability of a causal relationship.[1]
4. Temporality: The effect has to occur after the cause (and if there is an expected delay between the cause and expected effect, then the effect must occur after that delay).[1]
5. Biological gradient: Greater exposure should generally lead to greater incidence of the effect. However, in some cases, the mere presence of the factor can trigger the effect. In other cases, an inverse proportion is observed: greater exposure leads to lower incidence.[1]
6. Plausibility: A plausible mechanism between cause and effect is helpful (but Hill noted that knowledge of the mechanism is limited by current knowledge).[1]
7. Coherence: Coherence between epidemiological and laboratory findings increases the likelihood of an effect. However, Hill noted that "... lack of such [laboratory] evidence cannot nullify the epidemiological effect on associations".[1]
8. Experiment: "Occasionally it is possible to appeal to experimental evidence".[1]
9. Analogy: The effect of similar factors may be considered.[1]
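As a rough numerical illustration of the Strength criterion, the sketch below computes a relative risk and an approximate 95% confidence interval from a hypothetical 2x2 table; all counts are invented, and the standard Wald interval on the log scale is used.

```python
# Strength of association: relative risk with an approximate 95% CI
# from a hypothetical 2x2 table (all counts are invented).
import math

a, b = 90, 910      # exposed:   diseased, not diseased
c, d = 30, 970      # unexposed: diseased, not diseased

risk_exposed = a / (a + b)
risk_unexposed = c / (c + d)
rr = risk_exposed / risk_unexposed

# Wald confidence interval on the log scale
se_log_rr = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
lower = math.exp(math.log(rr) - 1.96 * se_log_rr)
upper = math.exp(math.log(rr) + 1.96 * se_log_rr)
print(f"RR = {rr:.2f}, 95% CI ({lower:.2f}, {upper:.2f})")
```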

Dioxins and their effects on human health


Key Facts
- Dioxins are a group of chemically-related compounds that are persistent environmental pollutants (POPs).
- Dioxins are found throughout the world in the environment and they accumulate in the food chain, mainly in the fatty tissue of animals.
- More than 90% of human exposure is through food, mainly meat and dairy products, fish and shellfish. Many national authorities have programmes in place to monitor the food supply.
- Dioxins are highly toxic and can cause reproductive and developmental problems, damage the immune system, interfere with hormones and also cause cancer.
- Due to the omnipresence of dioxins, all people have background exposure, which is not expected to affect human health. However, due to the highly toxic potential, efforts need to be undertaken to reduce current background exposure.
- Prevention or reduction of human exposure is best done via source-directed measures, i.e. strict control of industrial processes to reduce formation of dioxins.
Background
Dioxins are environmental pollutants. They belong to the so-called dirty dozen - a group
of dangerous chemicals known as persistent organic pollutants (POPs). Dioxins are of
concern because of their highly toxic potential. Experiments have shown they affect a
number of organs and systems.
Once dioxins enter the body, they last a long time because of their chemical stability and
their ability to be absorbed by fat tissue, where they are then stored in the body. Their
half-life in the body is estimated to be 7 to 11 years. In the environment, dioxins tend to
accumulate in the food chain. The higher an animal is in the food chain, the higher the
concentration of dioxins.
The chemical name for dioxin is 2,3,7,8-tetrachlorodibenzo-para-dioxin (TCDD). The
name "dioxins" is often used for the family of structurally and chemically
related polychlorinated dibenzo-para-dioxins (PCDDs) and polychlorinated dibenzofurans
(PCDFs). Certain dioxin-like polychlorinated biphenyls (PCBs) with similar toxic properties
are also included under the term dioxins. Some 419 types of dioxin-related compounds
have been identified but only about 30 of these are considered to have significant
toxicity, with TCDD being the most toxic.
Sources of dioxin contamination
Dioxins are mainly by-products of industrial processes but can also result from natural
processes, such as volcanic eruptions and forest fires. Dioxins are unwanted by-products
of a wide range of manufacturing processes including smelting, chlorine bleaching of
paper pulp and the manufacturing of some herbicides and pesticides. In terms of dioxin
release into the environment, uncontrolled waste incinerators (solid waste and hospital
waste) are often the worst culprits, due to incomplete burning. Technology is available
that allows for controlled waste incineration with low dioxin emissions.
Although formation of dioxins is local, environmental distribution is global. Dioxins are
found throughout the world in the environment. The highest levels of these compounds

are found in some soils, sediments and food, especially dairy products, meat, fish and
shellfish. Very low levels are found in plants, water and air.
Extensive stores of PCB-based waste industrial oils, many with high levels of PCDFs, exist
throughout the world. Long-term storage and improper disposal of this material may
result in dioxin release into the environment and the contamination of human and animal
food supplies. PCB-based waste is not easily disposed of without contamination of the
environment and human populations. Such material needs to be treated as hazardous
waste and is best destroyed by high temperature incineration in specialised facilities.
Dioxin contamination incidents
Many countries monitor their food supply for dioxins. This has led to early detection of
contamination and has often prevented impact on a larger scale. In many instances
dioxin contamination is introduced via contaminated animal feed, e.g. incidences of
increased dioxin levels in milk or animal feed were traced back to clay, fat or citrus pulp
pellets used in the production of the animal feed.
Some dioxin contamination events have been more significant, with broader implications
in many countries.
In late 2008, Ireland recalled many tons of pork meat and pork products when up to 200
times the safe limit of dioxins were detected in samples of pork. This led to one of the
largest food recalls related to a chemical contamination. Risk assessments performed by
Ireland indicated no public health concern. The contamination was also traced back to
contaminated feed.
In 1999, high levels of dioxins were found in poultry and eggs from Belgium.
Subsequently, dioxin-contaminated animal-based food (poultry, eggs, pork), were
detected in several other countries. The cause was traced to animal feed contaminated
with illegally disposed PCB-based waste industrial oil.
Large amounts of dioxins were released in a serious accident at a chemical factory in
Seveso, Italy, in 1976. A cloud of toxic chemicals, including 2,3,7,8-tetrachlorodibenzo-p-dioxin,
or TCDD, was released into the air and eventually contaminated an area of 15
square kilometres where 37 000 people lived.
Extensive studies in the affected population are continuing to determine the long-term
human health effects from this incident.
TCDD has also been extensively studied for health effects linked to its presence as a
contaminant in some batches of the herbicide Agent Orange, which was used as a
defoliant during the Vietnam War. A link to certain types of cancers and also to diabetes is
still being investigated.
Although all countries can be affected, most contamination cases have been reported in
industrialized countries where adequate food contamination monitoring, greater
awareness of the hazard and better regulatory controls are available for the detection of
dioxin problems.
A few cases of intentional human poisoning have also been reported. The most notable
incident is the 2004 case of Viktor Yushchenko, President of Ukraine, whose face was
disfigured by chloracne.
Effects of dioxins on human health
Short-term exposure of humans to high levels of dioxins may result in skin lesions, such
as chloracne and patchy darkening of the skin, and altered liver function. Long-term
exposure is linked to impairment of the immune system, the developing nervous system,
the endocrine system and reproductive functions.

Chronic exposure of animals to dioxins has resulted in several types of cancer. TCDD was
evaluated by the WHO's International Agency for Research on Cancer (IARC) in 1997 and
2012. Based on animal data and on human epidemiology data, TCDD was classified by
IARC as a "known human carcinogen". However, TCDD does not affect genetic material
and there is a level of exposure below which cancer risk would be negligible.
Due to the omnipresence of dioxins, all people have background exposure and a certain
level of dioxins in the body, leading to the so-called body burden. Current normal
background exposure is not expected to affect human health on average. However, due
to the high toxic potential of this class of compounds, efforts need to be undertaken to
reduce current background exposure.
Sensitive groups
The developing fetus is most sensitive to dioxin exposure. Newborns, with rapidly
developing organ systems, may also be more vulnerable to certain effects. Some people
or groups of people may be exposed to higher levels of dioxins because of their diet (e.g.,
high consumers of fish in certain parts of the world) or their occupation (e.g., workers in
the pulp and paper industry, in incineration plants and at hazardous waste sites).
Prevention and control of dioxin exposure
Proper incineration of contaminated material is the best available method of preventing
and controlling exposure to dioxins. It can also destroy PCB-based waste oils. The
incineration process requires high temperatures, over 850 °C. For the destruction of large
amounts of contaminated material, even higher temperatures - 1000 °C or more - are
required.
Prevention or reduction of human exposure is best done via source-directed measures,
i.e. strict control of industrial processes to reduce formation of dioxins as much as
possible. This is the responsibility of national governments. The Codex Alimentarius
Commission adopted a Code of Practice for Source Directed Measures to Reduce
Contamination of Foods with Chemicals (CAC/RCP 49-2001) in 2001. Later in 2006 a Code
of Practice for the Prevention and Reduction of Dioxin and Dioxin-like PCB Contamination
in Food and Feeds (CAC/RCP 62-2006) was adopted.
More than 90% of human exposure to dioxins is through the food supply, mainly meat
and dairy products, fish and shellfish. Therefore, protecting the food supply is critical. One
approach includes source-directed measures to reduce dioxin emissions. Secondary
contamination of the food supply needs to be avoided throughout the food-chain. Good
controls and practices during primary production, processing, distribution and sale are all
essential in the production of safe food.
As indicated through the examples listed above, contaminated animal feed is often the
root cause of food contamination.
Food and feed contamination monitoring systems must be in place to ensure that
tolerance levels are not exceeded. It is the role of national governments to monitor the
safety of food supply and to take action to protect public health. When contamination is
suspected, countries should have contingency plans to identify, detain and dispose of
contaminated feed and food. The affected population should be examined in terms of
exposure (e.g. measuring the contaminants in blood or human milk) and effects (e.g.
clinical surveillance to detect signs of ill health).
What should consumers do to reduce their risk of exposure?

Trimming fat from meat and consuming low fat dairy products may decrease the
exposure to dioxin compounds. Also, a balanced diet (including adequate amounts of
fruits, vegetables and cereals) will help to avoid excessive exposure from a single source.
This is a long-term strategy to reduce body burdens and is probably most relevant for
girls and young women to reduce exposure of the developing fetus and when
breastfeeding infants later on in life. However, the possibility for consumers to reduce
their own exposure is somewhat limited.
What does it take to identify and measure dioxins in the environment and food?
The quantitative chemical analysis of dioxins requires sophisticated methods that are
available only in a limited number of laboratories around the world. The analysis costs are
very high and vary according to the type of sample, but range from over US$ 1000 for the
analysis of a single biological sample to several thousand US dollars for the
comprehensive assessment of release from a waste incinerator.
Increasingly, biological (cell- or antibody-based) screening methods are being developed,
and the use of such methods for food and feed samples is increasingly being validated.
Such screening methods allow more analyses at a lower cost, and in case of a positive
screening test, confirmation of results must be carried out by more complex chemical
analysis.
WHO activities related to dioxins
Reducing dioxin exposure is an important public health goal for disease reduction. To
provide guidance on acceptable levels of exposure, WHO has held a series of expert
meetings to determine a tolerable intake of dioxins.
In the latest expert meetings held in 2001, the Joint FAO/WHO Expert Committee on Food
Additives (JECFA) performed an updated comprehensive risk assessment of PCDDs,
PCDFs, and dioxin-like PCBs.
In order to assess long- or short-term risks to health due to these substances, total or
average intake should be assessed over months, and the tolerable intake should be
assessed over a period of at least 1 month. The experts established a provisional
tolerable monthly intake (PTMI) of 70 picogram/kg per month. This level is the amount of
dioxins that can be ingested over a lifetime without detectable health effects.
WHO, in collaboration with the Food and Agriculture Organization (FAO), through the
Codex Alimentarius Commission, has established a Code of Practice for the Prevention
and Reduction of Dioxin and Dioxin-like PCB Contamination in Foods and Feed. This
document gives guidance to national and regional authorities on preventive measures.
WHO is also responsible for the Global Environment Monitoring System's Food
Contamination Monitoring and Assessment Programme. Commonly known as GEMS/Food,
the programme provides information on levels and trends of contaminants in food
through its network of participating laboratories in over 50 countries around the world.
Dioxins are included in this monitoring programme.
WHO also conducted periodic studies on levels of dioxins in human milk. These studies
provide an assessment of human exposure to dioxins from all sources. Recent exposure
data indicate that measures introduced to control dioxin release in a number of
developed countries have resulted in a substantial reduction in exposure over the past
two decades.
WHO is continuing these studies now in collaboration with the United Nations
Environment Programme (UNEP), in the context of the Stockholm Convention, an
international agreement to reduce emissions of certain persistent organic pollutants
(POPs), including dioxins. A number of actions are being considered to reduce the
production of dioxins during incineration and manufacturing processes. WHO and UNEP
are now undertaking global breast milk surveys, including in many developing countries,
to monitor trends in dioxin contamination across the globe and the effectiveness of
measures implemented under the Stockholm Convention.
Dioxins occur as a complex mixture in the environment and in food. In order to assess the
potential risk of the whole mixture, the concept of toxic equivalence has been applied to
this group of contaminants.
During the last 15 years, WHO, through the International Programme on Chemical Safety
(IPCS), has established and regularly re-evaluated toxic equivalency factors (TEFs) for
dioxins and related compounds through expert consultations. WHO-TEF values have been
established which apply to humans, mammals, birds and fish.

Poisson distribution
In probability theory and statistics, the Poisson distribution, named after the French
mathematician Siméon Denis Poisson, is a discrete probability distribution that expresses the
probability of a given number of events occurring in a fixed interval of time and/or space if these
events occur with a known average rate and independently of the time since the last event.[1]
The Poisson distribution can also be used for the number of events in other specified intervals
such as distance, area or volume.
For instance, an individual keeping track of the amount of mail they receive each day may notice that
they receive an average number of 4 letters per day. If receiving any particular piece of mail doesn't
affect the arrival times of future pieces of mail, i.e., if pieces of mail from a wide range of sources
arrive independently of one another, then a reasonable assumption is that the number of pieces of
mail received per day obeys a Poisson distribution.[2] Other examples that may follow a Poisson distribution: the
number of phone calls received by a call center per hour, the number of decay events per second
from a radioactive source, or the number of taxis passing a particular street corner per hour.
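As a rough illustration of the mail example, the following Python sketch (poisson_pmf is an
illustrative helper, not a library call) evaluates the Poisson probability mass function
P(k) = lambda^k * e^(-lambda) / k! for an average rate of 4 letters per day:

from math import exp, factorial

def poisson_pmf(k, lam):
    """Probability of observing exactly k events when events occur
    independently at a known average rate lam per interval."""
    return lam ** k * exp(-lam) / factorial(k)

# Mail example: an average of 4 letters per day.
lam = 4
for k in range(9):
    print(f"P({k} letters) = {poisson_pmf(k, lam):.3f}")

# The probabilities over all k sum to 1 (here summed over k = 0..40).
print(sum(poisson_pmf(k, lam) for k in range(41)))  # ~1.0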

What is Uniform Distribution?


Uniform distribution is a statistical distribution in which every possible outcome has an equal
chance, or likelihood, of occurring (1 out of the total number of outcomes). For example, imagine a
man standing on a street corner handing a $50 bill to a lucky passerby. If it were completely
random, then every person that walked by would have an equal chance of getting the $50 bill. This is
an example of a uniform probability distribution. It's uniform because everyone has an equal chance
(probability percent is equal to one divided by the number of people walking by). If the man favored
tall people or dark-haired people and was more likely to give them the money instead of others, well,
that would not be uniform, because some would have a higher probability of getting the $50 bill than
others.
A graph of this example (see Graph 1.1 below) looks like a large rectangle made of smaller thinner
rectangles, with the width of the larger rectangle equal to the number of people walking by and the
height equal to the probability of each person getting the $50 bill (the probability is equal to 1 divided
by the number of people).

Graph 1.1 - Uniform probability distribution

In statistics, graphs of uniform distributions all have this flat characteristic in which the top and sides
are parallel to the x and y axes. Here's another graph showing the probability distribution when
rolling a fair die, meaning each side has an equal chance, or probability, of turning up. Because there
are six sides to a die, there are six possible outcomes, with each outcome having a probability of
1/6 (16.7%).
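A small Python sketch of the fair-die case: each of the six outcomes has probability 1/6, and a
simulation of many rolls should show roughly equal observed frequencies (the simulation is
illustrative only):

import random
from collections import Counter

faces = [1, 2, 3, 4, 5, 6]
theoretical = 1 / len(faces)          # 1/6, about 0.167 for every face

rolls = [random.choice(faces) for _ in range(60_000)]
counts = Counter(rolls)

for face in faces:
    observed = counts[face] / len(rolls)
    print(f"face {face}: theoretical {theoretical:.3f}, observed {observed:.3f}")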

Introduction to ROC Curves


The sensitivity and specificity of a diagnostic test depend on more than just the
"quality" of the test--they also depend on the definition of what constitutes an
abnormal test. Consider an idealized graph showing the number of patients
with and without a disease arranged according to the value of a diagnostic test. The
distributions overlap--the test (like most) does not distinguish normal from disease
with 100% accuracy. The area of overlap indicates where the test cannot distinguish
normal from disease. In practice, we choose a cutpoint above which we consider
the test to be abnormal and below which we consider the test to be normal. The
position of the cutpoint will determine the number of true positives, true negatives,
false positives and false negatives. We may wish to use different cutpoints for
different clinical situations if we wish to minimize one of the erroneous types of
test results.
We can use the hypothyroidism data from the likelihood ratio section to illustrate how
sensitivity and specificity change depending on the choice of T4 level that defines
hypothyroidism. Recall the data on patients with suspected hypothyroidism reported
by Goldstein and Mushlin (J Gen Intern Med 1987;2:20-24.). The data on T4 values in
hypothyroid and euthyroid patients are shown graphically (below left) and in a
simplified tabular form (below right).
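To make the effect of the cutpoint concrete, here is a hedged Python sketch using made-up test
values (not the Goldstein and Mushlin data); values below the cutpoint are called abnormal, and
moving the cutpoint trades sensitivity against specificity:

# Hypothetical test values (lower values suggest disease, as with T4).
diseased = [3.1, 4.0, 4.8, 5.5, 6.2, 7.0, 8.1]
healthy  = [5.9, 6.8, 7.4, 8.0, 8.6, 9.1, 9.9, 10.5]

def sens_spec(cutpoint):
    """Call the test 'abnormal' (positive) when the value is below the cutpoint."""
    tp = sum(v < cutpoint for v in diseased)   # diseased, correctly called positive
    fn = len(diseased) - tp                    # diseased, missed
    fp = sum(v < cutpoint for v in healthy)    # healthy, falsely called positive
    tn = len(healthy) - fp                     # healthy, correctly called negative
    return tp / (tp + fn), tn / (tn + fp)

# Raising the cutpoint increases sensitivity and decreases specificity.
for cut in (5, 7, 9):
    sens, spec = sens_spec(cut)
    print(f"cutpoint {cut}: sensitivity {sens:.2f}, specificity {spec:.2f}")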

Description
Allows you to create a ROC curve and a complete sensitivity/specificity report. The ROC curve is a fundamental tool
for diagnostic test evaluation.
In a ROC curve the true positive rate (Sensitivity) is plotted as a function of the false positive rate (100-Specificity)
for different cut-off points of a parameter. Each point on the ROC curve represents a sensitivity/specificity pair
corresponding to a particular decision threshold. The area under the ROC curve (AUC) is a measure of how well
a parameter can distinguish between two diagnostic groups (diseased/normal).
Theory summary
The diagnostic performance of a test, or the accuracy of a test in discriminating diseased cases from normal cases,
is evaluated using Receiver Operating Characteristic (ROC) curve analysis (Metz, 1978; Zweig & Campbell,
1993). ROC curves can also be used to compare the diagnostic performance of two or more laboratory or
diagnostic tests (Griner et al., 1981).
When you consider the results of a particular test in two populations, one population with a disease, the other
population without the disease, you will rarely observe a perfect separation between the two groups. Indeed, the
distribution of the test results will overlap, as shown in the following figure.

For every possible cut-off point or criterion value you select to discriminate between the two populations, there
will be some cases with the disease correctly classified as positive (TP = True Positive fraction), but some cases
with the disease will be classified negative (FN = False Negative fraction). On the other hand, some cases
without the disease will be correctly classified as negative (TN = True Negative fraction), but some cases without
the disease will be classified as positive (FP = False Positive fraction).

The ROC curve

In a Receiver Operating Characteristic (ROC) curve the true positive rate (Sensitivity) is plotted as a function of
the false positive rate (100-Specificity) for different cut-off points. Each point on the ROC curve represents a
sensitivity/specificity pair corresponding to a particular decision threshold. A test with perfect discrimination
(no overlap in the two distributions) has a ROC curve that passes through the upper left corner (100%
sensitivity, 100% specificity). Therefore the closer the ROC curve is to the upper left corner, the higher the
overall accuracy of the test (Zweig & Campbell, 1993).

This type of graph is called a Receiver Operating Characteristic curve (or ROC
curve.) It is a plot of the true positive rate against the false positive rate for the
different possible cutpoints of a diagnostic test.
An ROC curve demonstrates several things:
1. It shows the tradeoff between sensitivity and specificity (any increase in
sensitivity will be accompanied by a decrease in specificity).
2. The closer the curve follows the left-hand border and then the top border of the
ROC space, the more accurate the test.
3. The closer the curve comes to the 45-degree diagonal of the ROC space, the
less accurate the test.
4. The slope of the tangent line at a cutpoint gives the likelihood ratio (LR) for
that value of the test. You can check this out on the graph above. Recall that the
LR for T4 < 5 is 52. This corresponds to the far left, steep portion of the curve.
The LR for T4 > 9 is 0.2. This corresponds to the far right, nearly horizontal
portion of the curve.
5. The area under the curve is a measure of test accuracy.

Sensitivity (with optional 95% Confidence Interval): Probability that a test result will be positive when the
disease is present (true positive rate).
Specificity (with optional 95% Confidence Interval): Probability that a test result will be negative when
the disease is not present (true negative rate).

Positive likelihood ratio (with optional 95% Confidence Interval): Ratio between the probability of a
positive test result given the presence of the disease and the probability of a positive test result given the
absence of the disease.
Negative likelihood ratio (with optional 95% Confidence Interval): Ratio between the probability of a
negative test result given the presence of the disease and the probability of a negative test result given
the absence of the disease.
Positive predictive value (with optional 95% Confidence Interval): Probability that the disease is
present when the test is positive.
Negative predictive value (with optional 95% Confidence Interval): Probability that the disease is not
present when the test is negative.
Cost*: The average cost resulting from the use of the diagnostic test at that decision level. Note that the
cost reported here excludes the "overhead cost", i.e. the cost of doing the test, which is constant at all
decision levels.
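All of the report items above can be computed from the four cells of a two-by-two table. A minimal
Python sketch (test_metrics is an illustrative helper; the counts are the hypothetical ANA/SLE table
used in the likelihood ratio section later in this document):

def test_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, likelihood ratios and predictive values
    from the four cells of a two-by-two table."""
    sens = tp / (tp + fn)              # true positive rate
    spec = tn / (tn + fp)              # true negative rate
    return {
        "sensitivity": sens,
        "specificity": spec,
        "LR+": sens / (1 - spec),      # positive likelihood ratio
        "LR-": (1 - sens) / spec,      # negative likelihood ratio
        "PPV": tp / (tp + fp),         # positive predictive value
        "NPV": tn / (tn + fn),         # negative predictive value
    }

# ANA/SLE example (TP=2822, FP=6798, FN=58, TN=90322 in a population of 100 000):
# LR+ is about 14, LR- about 0.02, PPV about 0.29, matching the worked example below.
for name, value in test_metrics(2822, 6798, 58, 90322).items():
    print(f"{name}: {value:.3f}")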

The Area Under an ROC Curve


The graph at right shows three ROC curves representing excellent, good, and
worthless tests plotted on the same graph. The accuracy of the test depends on how
well the test separates the group being tested into those with and without the disease
in question. Accuracy is measured by the area under the ROC curve. An area of 1
represents a perfect test; an area of .5 represents a worthless test. A rough guide for
classifying the accuracy of a diagnostic test is the traditional academic point system:

.90-1 = excellent (A)
.80-.90 = good (B)
.70-.80 = fair (C)
.60-.70 = poor (D)
.50-.60 = fail (F)
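The area under a ROC curve can be estimated numerically from a set of (false positive rate, true
positive rate) points, for example with the trapezoidal rule. A minimal Python sketch with
illustrative points (not taken from the text):

def auc_trapezoid(points):
    """Area under a ROC curve given (FPR, TPR) points, by the trapezoidal rule."""
    pts = sorted(points)               # order by increasing false positive rate
    area = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:]):
        area += (x2 - x1) * (y1 + y2) / 2
    return area

# Illustrative ROC points from (0,0) to (1,1).
roc_points = [(0.0, 0.0), (0.1, 0.55), (0.25, 0.75), (0.5, 0.9), (1.0, 1.0)]
print(f"AUC = {auc_trapezoid(roc_points):.2f}")   # about 0.81, "good" on the scale above

# A worthless test lies on the diagonal:
print(auc_trapezoid([(0.0, 0.0), (0.5, 0.5), (1.0, 1.0)]))  # 0.5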

Measures of Effect Size of an Intervention

A key question needed to interpret the results of a clinical trial is whether the
measured effect size is clinically important. Three commonly used measures of effect
size are relative risk reduction (RRR), absolute risk reduction (ARR), and
the number needed to treat (NNT) to prevent one bad outcome. These terms are
defined below. The material in this section is adapted from Evidence-based medicine:
How to practice and teach EBM by DL Sackett, WS Richardson, W Rosenberg and
RB Haynes. 1997, New York: Churchill Livingston.
Consider the data from the Diabetes Control and Complications Trial (DCCT-Ann
Intern Med 1995;122:561-8.). Neuropathy occurred in 9.6% of the usual care group
and in 2.8% of the intensively treated group. These rates are sometimes referred to
as risks by epidemiologists. For our purposes, risk can be thought of as the rate of
some outcome.
Relative risk reduction

Relative risk reduction measures how much the risk is reduced in the experimental group
compared to a control group. For example, if 60% of the control group died and 30%
of the treated group died, the treatment would have a relative risk reduction of 0.5 or
50% (the rate of death in the treated group is half of that in the control group).
The formula for computing relative risk reduction is: (CER - EER)/CER. CER is the
control group event rate and EER is the experimental group event rate. Using the
DCCT data, this would work out to (0.096 - 0.028)/0.096 = 0.71 or 71%. This means
that neuropathy was reduced by 71% in the intensive treatment group compared with
the usual care group.
One problem with the relative risk measure is that without knowing the level of risk in
the control group, one cannot assess the effect size in the treatment group. Treatments
with very large relative risk reductions may have a small effect in conditions where
the control group has a very low bad outcome rate. On the other hand, modest relative
risk reductions can assume major clinical importance if the baseline (control) rate of
bad outcomes is large.
Absolute risk reduction

Absolute risk reduction is just the absolute difference in outcome rates between the
control and treatment groups: CER - EER. The absolute risk reduction does not
involve an explicit comparison to the control group as in the relative risk reduction
and thus does not confound the effect size with the baseline risk. However, it is a less
intuitive measure to interpret.
For the DCCT data, the absolute risk reduction for neuropathy would be (0.096 - 0.028)
= 0.068 or 6.8%. This means that for every 100 patients enrolled in the
intensive treatment group, about seven bad outcomes would be averted.
Number needed to treat

The number needed to treat is basically another way to express the absolute risk
reduction. It is just 1/ARR and can be thought of as the number of patients that would
need to be treated to prevent one additional bad outcome. For the DCCT data, NNT =
1/.068 = 14.7. Thus, for every 15 patients treated with intensive therapy, one case of
neuropathy would be prevented.
The NNT concept has been gaining in popularity because of its simplicity to compute
and its ease of interpretation. NNT data are especially useful in comparing the results of
multiple clinical trials, in which the relative effectiveness of the treatments is readily
apparent. For example, the NNT to prevent stroke by treating patients with very high
blood pressures (DBP 115-129) is only 3 but rises to 128 for patients with less severe
hypertension (DBP 90-109).
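A minimal Python sketch putting the three measures together, using the DCCT neuropathy rates
quoted above (control event rate 9.6%, intensive treatment event rate 2.8%); effect_sizes is just an
illustrative helper name:

def effect_sizes(cer, eer):
    """Relative risk reduction, absolute risk reduction and number needed to treat
    from a control event rate (cer) and an experimental event rate (eer)."""
    rrr = (cer - eer) / cer     # relative risk reduction
    arr = cer - eer             # absolute risk reduction
    nnt = 1 / arr               # number needed to treat
    return rrr, arr, nnt

# DCCT example: neuropathy in 9.6% of usual care vs 2.8% of intensive treatment.
rrr, arr, nnt = effect_sizes(0.096, 0.028)
print(f"RRR = {rrr:.0%}, ARR = {arr:.1%}, NNT = {nnt:.1f}")   # RRR = 71%, ARR = 6.8%, NNT = 14.7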

Introduction to Likelihood Ratios


Before you read this section, you should understand the concepts of sensitivity,
specificity, pretest probability, predictive value of a positive test, and predictive value
of a negative test. You should be comfortable working the problems on the 2 by 2
table practice page.
Likelihood ratios are an alternate method of assessing the performance of a diagnostic
test. As with sensitivity and specificity, two measures are needed to describe a
dichotomous test (one with only two possible results). These two measures are
the likelihood ratio of a positive test and the likelihood ratio of a negative test.
Before defining these terms, it might help to list a few advantages of learning and
using the likelihood ratio method. After all, if you already know how to compute
posttest probability using sensitivity and specificity, why bother with likelihood
ratios?

Advantages of the likelihood ratio approach

1. The likelihood ratio form of Bayes Theorem is easy to
remember: Posttest Odds = Pretest Odds x LR.
2. Likelihood ratios can deal with tests with more than two
possible results (not just normal/abnormal).
3. The magnitude of the likelihood ratio gives an intuitive sense of
how strongly a given test result will raise (rule-in) or lower
(rule-out) the likelihood of disease.
4. Computing posttest odds after a series of diagnostic tests is
much easier than using the sensitivity/specificity
method: Posttest Odds = Pretest Odds x LR1 x LR2 x
LR3 ... x LRn.
General definition of likelihood ratio

The likelihood ratio is a ratio of two probabilities:


LR = The probability of a given test result among people with a disease divided
by the probability of that test result among people without the disease.
In probability notation: LR = P(Ti|D+) / P(Ti|D-).
Don't worry if this does not make much sense yet. The next two sections will apply
likelihood ratios using both simple and more complex examples. Their meaning and
utility should become more apparent then.

Likelihood ratios for tests with only two possible results


Since sensitivity and specificity can only deal with dichotomous tests (those with only
two possible results), we will first consider how to apply likelihood ratios to the same
types of problems.
Example 1

Consider the use of the ANA (antinuclear antibody) test in the diagnosis of SLE
(systemic lupus erythematosus). In a rheumatology practice, the prevalence of SLE in
patients on whom an ANA test was done was 2.88%. The sensitivity of the ANA for
SLE is 98% and the specificity is 93%. Suppose a patient of this rheumatologist has a
positive ANA. What is the probability of SLE?
Traditional Method
The traditional way to solve this problem would be to draw a two by two table and fill
it in with a hypothetical population of, say, 100000 patients. Knowing the prevalence
of SLE is 2.88%, the column totals of patients with and without SLE can be easily
computed as shown:
                SLE     No SLE   Total
Positive ANA    TP      FP
Negative ANA    FN      TN
Total           2880    97120    100000

Multiplying the sensitivity (0.98) by the number with SLE (2880) yields the number
of true positives (2822). Multiplying the specificity (0.93) by the number without SLE
(97120) yields the number of true negatives (90322).
                SLE     No SLE   Total
Positive ANA    2822    FP
Negative ANA    FN      90322
Total           2880    97120    100000

The rest of the table entries are filled in by simple addition and subtraction:
                SLE     No SLE   Total
Positive ANA    2822    6798     9620
Negative ANA    58      90322    90380
Total           2880    97120    100000

We can now answer the question of posttest probability given a positive test as
2822/9620 = 0.293.
Likelihood ratio method
The likelihood ratio of a positive ANA test is 14 and the likelihood ratio of a negative
ANA test is 0.02. These numbers, as with the sensitivity and specificity, are obtained
from the literature -- they are properties of the diagnostic test. From the likelihood
ratio form of Bayes theorem above, we can see that multiplying the pretest odds by 14
will give posttest odds. But wait, 0.0288 times 14 = 0.40. This is not the answer we
got using the traditional method.
The source of the discrepancy is that likelihood ratios are multiplied by the
pretest odds, not the pretest probability. We must first convert the pretest probability
of 0.0288 to odds. The formula is:
Odds = Probability / (1 - Probability)
Thus, pretest odds = 0.0288 / 0.9712. This is about equal to 0.03 to 1.
We can now apply the likelihood ratio for a positive ANA to compute the posttest
odds: 0.03 x 14 = 0.42 to 1. We still do not have the answer we got above because we
now have to convert the odds back to a probability. The formula is:
Probability = Odds / (1 + Odds)
Posttest probability = 0.42 / 1.42 = 0.296 -- essentially the same answer as with the
traditional method.
Try changing the prior probability or likelihood ratio values and recomputing the posttest
probability. Once you understand the difference between odds and probability, using likelihood
ratios is much easier than working through two by two tables.
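A minimal Python sketch of that bookkeeping (the helper names prob_to_odds, odds_to_prob and
posttest_probability are just illustrative), reproducing the ANA example:

def prob_to_odds(p):
    return p / (1 - p)

def odds_to_prob(odds):
    return odds / (1 + odds)

def posttest_probability(pretest_prob, likelihood_ratio):
    """Posttest odds = pretest odds x LR, then convert back to a probability."""
    return odds_to_prob(prob_to_odds(pretest_prob) * likelihood_ratio)

# ANA example: prevalence (pretest probability) 2.88%, LR of a positive ANA = 14.
print(round(posttest_probability(0.0288, 14), 3))    # ~0.293, as in the table method
# LR of a negative ANA = 0.02:
print(round(posttest_probability(0.0288, 0.02), 4))  # ~0.0006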

Computing likelihood ratios from sensitivity and specificity
Often, the literature reports diagnostic test characteristics as sensitivity and specificity
rather than likelihood ratios. You could compute them by constructing a two by two
table as we did above.
                SLE     No SLE   Total
Positive ANA    2822    6798     9620
Negative ANA    58      90322    90380
Total           2880    97120    100000

Going back to the original definition of likelihood ratio, we can compute the
probability of a positive ANA test in patients with SLE: (2822 / 2880) or 0.98. We can
also compute the probability of a positive ANA test in patients without SLE: (6798 /
97120) or 0.07. The likelihood ratio for a positive ANA is then 0.98 / 0.07 or 14.
Using an analogous approach, you should be able to compute the likelihood ratio for a
negative ANA (0.02). In more general terms:
LR+ = Sensitivity / (1 - Specificity)
LR- = (1 - Sensitivity) / (Specificity)

Likelihood ratios for tests with more than two possible results
Most laboratory tests are reported on a numerical scale -- not merely as normal or
abnormal. For ease of interpretation, we often choose a value for the upper (or lower)
limit of normal. Grouping the "normal" and "abnormal" values allows us to compute
the sensitivity and specificity of the test. When we do this, however, we lose
substantial information. Consider two patients with suspected hypothyroidism: one
has a thyroxine (T4) of 5 and the other a T4 of 9 (lower limit of normal for T4 = 4.5).
By our criterion, both patients would be considered "normal." Common sense says
that the first patient is much more likely to be hypothyroid than the second. But, using
sensitivity and specificity numbers based on normal versus abnormal, we get the same
posttest probabilities. The likelihood ratio method can take into account test results at
multiple different levels of severity.
Example 2: Patients with Suspected Hypothyroidism

Consider the following data on patients with suspected hypothyroidism reported by
Goldstein and Mushlin (J Gen Intern Med 1987;2:20-24.). They measured T4 and
TSH values in ambulatory patients with suspected hypothyroidism and used the TSH
values as a gold standard for determining which patients were truly hypothyroid.
T4 value      Hypothyroid   Euthyroid
5 or less     18            1
5.1 - 7       7             17
7.1 - 9       4             36
9 or more     3             39
Totals:       32            93

Notice that these authors found considerable overlap in T4 values among the
hypothyroid and euthyroid patients. Further, the lower the T4 value, the more likely
the patients are to be hypothyroid. We can compute likelihood ratios for each of the
four groupings of test results by recalling the definition of a likelihood ratio:

LRi = P(Ti|D+) / P(Ti|D-)


For example, for the 5 or less group, LR 5 or less = (18/32) / (1/93) = 52.
Here is the table with likelihood ratio numbers added:
T4 value      Hypothyroid   Euthyroid   Likelihood Ratio
5 or less     18            1           52
5.1 - 7       7             17          1.2
7.1 - 9       4             36          0.3
9 or more     3             39          0.2
Totals:       32            93

Notice that the likelihood ratios give you an intuitive feel for how a given test result
affects the likelihood of disease. Likelihood ratios greater than one increase the
likelihood; those less than one decrease the likelihood. Values near one indicate a
result that does not substantially change disease likelihood. As an exercise, compute
the posttest probability of hypothyroidism for a patient with a 0.1 pretest
probability given each of the possible results shown above (a worked version follows).
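Here is a worked version of that exercise, a minimal Python sketch (posttest_probability is the same
illustrative helper as in the earlier sketch) using a pretest probability of 0.1 and the likelihood
ratios from the table above:

def posttest_probability(pretest_prob, lr):
    odds = pretest_prob / (1 - pretest_prob)   # probability -> odds
    post_odds = odds * lr                      # Bayes theorem, odds form
    return post_odds / (1 + post_odds)         # odds -> probability

likelihood_ratios = {"T4 <= 5": 52, "T4 5.1-7": 1.2, "T4 7.1-9": 0.3, "T4 >= 9": 0.2}

for level, lr in likelihood_ratios.items():
    p = posttest_probability(0.1, lr)
    print(f"{level}: posttest probability {p:.2f}")
# T4 <= 5 raises 0.1 to about 0.85; the other categories leave it near or below 0.1.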
Example 3: Patients with Suspected Pulmonary Embolism

Likelihood ratios also work well for tests with multiple qualitative results such as a
ventilation perfusion (V/Q) scan which can be interpreted as normal, low probability,
intermediate probability, and high probability of pulmonary embolism. For example,
the PIOPED Study (JAMA 1990;263:2753-2759) compared the V/Q scan with
angiography and reported the following data:
Scan Category                               Sensitivity, %   Specificity, %
High probability                            41               97
High or intermediate probability            82               52
High, intermediate, or low probability      98               10

Now suppose you have a patient with a 30% pretest probability of pulmonary
embolism who has an intermediate probability V/Q scan. What is the posttest
probability of disease? Try computing the likelihood ratio for a high or
intermediate probability scan from the sensitivity and specificity data, then use it to
work through the posttest probability of disease.
This result, however, is not the best use of the available data because it lumps the high
probability and intermediate probability scans together so that a sensitivity and
specificity can be reported. The paper also lists the raw data by individual test
category. From these data (shown below in the two left columns), you should be able
to compute the likelihood ratio for each test result. This is shown below in the right
column.
Scan Category             P.E. present   P.E. absent   Likelihood ratio
High probability          102            14            13.9
Intermediate probability  105            217           0.93
Low probability           39             199           0.37
Normal or near normal     5              50            0.19
Total                     251            480

Now we can compute the posttest probability for our patient with a 30% pretest
probability and an intermediate probability scan. Work through the calculation below:
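A minimal Python sketch of the two calculations being compared (the lumped likelihood ratio of
about 1.7 comes from the 82%/52% sensitivity/specificity pair above, and about 0.93 is the
category-specific likelihood ratio for an intermediate scan):

def posttest_probability(pretest_prob, lr):
    odds = pretest_prob / (1 - pretest_prob)
    post_odds = odds * lr
    return post_odds / (1 + post_odds)

pretest = 0.30

# Lumped approach: LR+ for "high or intermediate probability" scans.
sens, spec = 0.82, 0.52
lr_lumped = sens / (1 - spec)                       # about 1.7
print(round(posttest_probability(pretest, lr_lumped), 2))        # about 0.42

# Category-specific approach: LR of an intermediate probability scan.
lr_intermediate = (105 / 251) / (217 / 480)         # about 0.93
print(round(posttest_probability(pretest, lr_intermediate), 2))  # about 0.28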
This posttest probability is lower than the one obtained previously because we are using all of
the information in the data we have available. The likelihood ratio approach allows us
to work with individual test results without having to choose an arbitrary cutpoint by
which to dichotomize the results into "positive" and "negative." Also notice again the
intuitive value of the likelihood ratio number. An intermediate probability scan has a
likelihood ratio very close to 1. This means that intermediate probability scans should
not appreciably change your pretest diagnostic suspicion.

ROC space
The contingency table can derive several evaluation "metrics" (see infobox). To draw a ROC curve, only the
true positive rate (TPR) and false positive rate (FPR) are needed (as functions of some classifier parameter).
The TPR defines how many correct positive results occur among all positive samples available during the test.
FPR, on the other hand, defines how many incorrect positive results occur among all negative samples
available during the test.
A ROC space is defined by FPR and TPR as x and y axes respectively, which depicts relative trade-offs
between true positive (benefits) and false positive (costs). Since TPR is equivalent to sensitivity and FPR is
equal to 1 - specificity, the ROC graph is sometimes called the sensitivity vs (1 - specificity) plot. Each
prediction result or instance of a confusion matrix represents one point in the ROC space.
The best possible prediction method would yield a point in the upper left corner or coordinate (0,1) of the ROC
space, representing 100% sensitivity (no false negatives) and 100% specificity (no false positives). The (0,1)
point is also called a perfect classification. A completely random guess would give a point along a diagonal line
(the so-called line of no-discrimination) from the left bottom to the top right corners (regardless of the positive
and negative base rates). An intuitive example of random guessing is a decision by flipping coins (heads or
tails). As the size of the sample increases, a random classifier's ROC point migrates towards (0.5,0.5).
The diagonal divides the ROC space. Points above the diagonal represent good classification results (better
than random), points below the line poor results (worse than random). Note that the output of a consistently
poor predictor could simply be inverted to obtain a good predictor.
Let us look into four prediction results from 100 positive and 100 negative instances:
A:
                 Actual P   Actual N   Total
Predicted P      TP=63      FP=28      91
Predicted N      FN=37      TN=72      109
Total            100        100        200
TPR = 0.63, FPR = 0.28, PPV = 0.69, F1 = 0.66, ACC = 0.68

B:
                 Actual P   Actual N   Total
Predicted P      TP=77      FP=77      154
Predicted N      FN=23      TN=23      46
Total            100        100        200
TPR = 0.77, FPR = 0.77, PPV = 0.50, F1 = 0.61, ACC = 0.50

C:
                 Actual P   Actual N   Total
Predicted P      TP=24      FP=88      112
Predicted N      FN=76      TN=12      88
Total            100        100        200
TPR = 0.24, FPR = 0.88, PPV = 0.21, F1 = 0.22, ACC = 0.18

C':
                 Actual P   Actual N   Total
Predicted P      TP=76      FP=12      88
Predicted N      FN=24      TN=88      112
Total            100        100        200
TPR = 0.76, FPR = 0.12, PPV = 0.86, F1 = 0.81, ACC = 0.82

Plots of the four results above in the ROC space are given in the figure. The result of method A clearly shows
the best predictive power among A, B, and C. The result of B lies on the random guess line (the diagonal line),
and it can be seen in the table that the accuracy of B is 50%. However, when C is mirrored across the center
point (0.5,0.5), the resulting method C' is even better than A. This mirrored method simply reverses the
predictions of whatever method or test produced the C contingency table. Although the original C method has
negative predictive power, simply reversing its decisions leads to a new predictive method C' which has
positive predictive power. When the C method predicts p or n, the C' method would predict n or p, respectively.
In this manner, the C' test would perform the best. The closer a result from a contingency table is to the upper
left corner, the better it predicts, but the distance from the random guess line in either direction is the best
indicator of how much predictive power a method has. If the result is below the line (i.e. the method is worse
than a random guess), all of the method's predictions must be reversed in order to utilize its power, thereby
moving the result above the random guess line.
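The metrics quoted in these tables can be recomputed directly from the four cells of each
contingency table. A minimal Python sketch (roc_metrics is an illustrative helper; small rounding
differences from the quoted values are possible):

def roc_metrics(tp, fp, fn, tn):
    tpr = tp / (tp + fn)                      # true positive rate (sensitivity)
    fpr = fp / (fp + tn)                      # false positive rate (1 - specificity)
    ppv = tp / (tp + fp)                      # positive predictive value (precision)
    f1 = 2 * ppv * tpr / (ppv + tpr)          # harmonic mean of precision and recall
    acc = (tp + tn) / (tp + fp + fn + tn)     # accuracy
    return tpr, fpr, ppv, f1, acc

predictors = {
    "A":  (63, 28, 37, 72),
    "B":  (77, 77, 23, 23),
    "C":  (24, 88, 76, 12),
    "C'": (76, 12, 24, 88),
}

for name, cells in predictors.items():
    tpr, fpr, ppv, f1, acc = roc_metrics(*cells)
    print(f"{name}: TPR={tpr:.2f} FPR={fpr:.2f} PPV={ppv:.2f} F1={f1:.2f} ACC={acc:.2f}")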

Receiver operating characteristic


In statistics, a receiver operating characteristic (ROC), or ROC curve, is a graphical plot that
illustrates the performance of a binary classifier system as its discrimination threshold is varied. The
curve is created by plotting the true positive rate against the false positive rate at various threshold
settings. (The true-positive rate is also known as sensitivity in biomedicine, or recall in machine
learning. The false-positive rate is also known as the fall-out and can be calculated as 1 - specificity).
The ROC curve is thus the sensitivity as a function of fall-out. In general, if the probability
distributions for both detection and false alarm are known, the ROC curve can be generated by
plotting the cumulative distribution function (area under the probability distribution from -∞
to the discrimination threshold) of the detection probability on the y-axis versus the cumulative
distribution function of the false-alarm probability on the x-axis.
ROC analysis provides tools to select possibly optimal models and to discard suboptimal ones
independently from (and prior to specifying) the cost context or the class distribution. ROC analysis
is related in a direct and natural way to cost/benefit analysis of diagnostic decision making.
The ROC curve was first developed by electrical engineers and radar engineers during World War II
for detecting enemy objects in battlefields and was soon introduced to psychology to account for
perceptual detection of stimuli. ROC analysis since then has been used
in medicine, radiology, biometrics, and other areas for many decades and is increasingly used
in machine learning and data mining research.
The ROC is also known as a relative operating characteristic curve, because it is a comparison of
two operating characteristics (TPR and FPR) as the criterion changes. [1]

Electronic Waste Management


Each year in California hundreds of thousands of computers, monitors, copiers, fax machines, printers, televisions,
and other electronic items become "obsolete" in the eyes of consumers. Rapid advances in technology and an
expanding demand for new features accelerate the generation of "old" electronic equipment ("e-waste"). The result is
a growing challenge for businesses, residents, and local governments as they search for ways to reuse, recycle, or
properly dispose of this equipment.
To meet this challenge, many communities are initiating electronic product collection programs, manufacturers are
developing recycling programs for their customers, and innovative companies are finding new markets for the old
equipment.
Get updates, information and guidance on the implementation of the Electronic Waste Recycling Act of 2003!
Many components of electronic equipment--including metals, plastic, and glass--can be recycled, while others may
present environmental hazards if not managed correctly. This site provides information and resources on how to
properly manage your electronic products.

E-waste is the most rapidly growing segment of the municipal solid waste stream.

E-waste contains many valuable, recoverable materials such as aluminum, copper, gold, silver, plastics, and
ferrous metals. In order to conserve natural resources and the energy needed to produce new electronic
equipment from virgin resources, electronic equipment can be refurbished, reused, and recycled instead of being
landfilled.

E-waste also contains toxic and hazardous materials including mercury, lead, cadmium, beryllium, chromium, and
chemical flame retardants, which have the potential to leach into our soil and water.

What are the benefits and advantages of recycling e-waste?


There are several!

Conserves natural resources. Recycling recovers valuable materials from old electronics that can be
used to make new products. As a result, we save energy, reduce pollution, reduce greenhouse gas
emissions, and save resources by extracting fewer raw materials from the earth.

Protects your surroundings. Safe recycling of outdated electronics promotes sound management of toxic
chemicals such as lead and mercury.

Helps others. Donating your used electronics benefits your community by passing on ready-to-use or
refurbished equipment to those who need it.

Creates jobs. eCycling creates jobs for professional recyclers and refurbishers and creates new markets
for the valuable components that are dismantled.

Saves landfill space. E-waste is a growing waste stream. By recycling these items, landfill space is
conserved.

Electronic waste management options hierarchy:


1. Reuse of whole units: Reuse functioning electronic equipment by donating it to someone who can still use it.

2. Repair/refurbishment/remanufacturing of units.

3. Recovery/reuse of functional peripherals or components.

4. Recycling of constituent materials: Recycle those components that cannot be repaired.

5. Last: Responsible disposal of hazardous and non-hazardous waste in permitted landfills.

Do you know where your cell phones and laptops go to die? If they are not recycled or disposed of,
they pose a real threat to the people living on this planet.
The global pile-up of e-waste is getting out of control. While there are various predictions from the UN
about the future size of the waste stream, there are hardly any suggestions for counter-measures to minimize
the load. On top of that, new-generation electronic devices like smartphones and laptops have
an average lifespan of less than 2 years. This means a lot of e-waste will haunt us soon. Hence, it is
important that every citizen in Philadelphia should know a few facts about the problem. Ladies and
gentlemen, let's embrace the horror!

Where is it going?
Electronic waste disposal and management has grown into a globalized business because around
80% of this waste is shipped to third-world countries where it is further processed before being
dumped into landfills. There is a reason why e-waste continues to have some value in
developing countries: it is sorted or burnt to extract and sell scrap metal.

Why is proper e-waste disposal important?


There is no doubt that e-waste is full of toxic elements. Up to 60 elements from the
periodic table can be found in e-waste items. Moreover, flame retardants and toxic complex
chemicals further pose difficulties in effective disposal. For instance, cadmium is a hazardous
chemical which can pose real threats when exposed to humans or the environment. Hence
e-waste disposal is a big challenge. Other harmful components in e-waste include:

Lead
PVC
Beryllium
Mercury

What can it be turned into?


As we all know, the metals in e-waste devices are highly valuable, but the biggest impediment is the
lack of safety protocols to extract them properly. The metals from cell phones and batteries can be
recycled to manufacture new phones and other electronic items. In fact, some companies in
Philadelphia extract metals like platinum, gold and selenium from used devices so that they can be
used in the processing of refurbished devices.

Know about your recycler


Before you throw away your e-junk, make sure it does not end up with a fake recycler. The recycling
of this waste can be a lucrative business if the junk is exported to developing countries where
scrap metal is reused. If it is sent off to be disassembled there, there are often neither safety
precautions nor enough money for the people who actually work on this waste.

The disparity of laws governing e-waste disposal


It's ironic how hard it is to come by information on e-waste disposal. Several attempts by the federal
government to develop laws for electronic waste management have never come to fruition.
Philadelphia is also part of the Coalition for American Electronics Recycling (CAER), which operates
218 facilities in almost 34 states. This organization is trying to get a bill passed for e-waste
disposal, but so far only 25 states have passed and enacted laws regarding e-waste recycling.

A new approach for trade-ins


All the big guns of the tech industry are now becoming a part of trade-in programs. Apple announced
one such program that provides trade-in value for the iPhone 4: if a customer intends to buy a newer
iPhone, the company will transfer the credit towards the newer product. The companies that took part
and initiated trade-ins include Dell, Apple and Google.

Guiyu, China is the e-waste capital


Did you know that China is the country that generates the second-largest amount of e-waste, after the US?
That might not be a surprise for some, but did you know that it is also the place where most of the
electronic waste from the US is dumped? Guiyu is a town in Guangdong Province, a
manufacturing zone that has also served as an electronic waste dumping hub for years. Now the
region is extremely polluted because of the burning of circuit boards and the use of hazardous
chemicals to recover valuable metals. This poses a danger for the visitors and residents living in the
surrounding areas.

The awareness is low


In the absence of federal laws, the state laws are not counted as a top priority. Against this backdrop, it is
really hard for the recyclers in Philadelphia to understand how the world deals with
e-waste. As the lifespan of our devices gets shorter and the average number of electronics grows, it is
getting harder for the government in Philadelphia to estimate how much waste is actually out there.
There are some companies that claim to have recycled some of the e-waste products, but in reality,
the products are actually sold to other developing countries that rarely follow safety codes for extraction
and recycling. Furthermore, the EPA has confirmed that there is no reliable data available on the export
of e-waste.
Do you know what becomes of the e-waste discarded by you? Is it piled up locally or burnt without a
safety protocol? Stay informed, and share this e-waste fact file to spread the word.

Electronic waste or e-waste describes discarded electrical or electronic devices. Used


electronics which are destined for reuse, resale, salvage, recycling or disposal are also considered
e-waste. Informal processing of electronic waste in developing countries may cause serious
health and pollution problems, as these countries have limited regulatory oversight of e-waste
processing.
Electronic scrap components, such as CRTs, may contain contaminants such as lead, cadmium,
beryllium, or brominated flame retardants. Even in developed countries recycling and disposal of
e-waste may involve significant risk to workers and communities and great care must be taken to
avoid unsafe exposure in recycling operations and leaking of materials such as heavy metals
from landfills and incinerator ashes. [1]


Definition

Hoarding (left), disassembling (center) and collecting (right) electronic waste in Bengaluru, India
"Electronic waste" may be defined as discarded computers, office electronic equipment,
entertainment device electronics, mobile phones, television sets, and refrigerators. This includes
used electronics which are destined for reuse, resale, salvage, recycling, or disposal. Others are
re-usables (working and repairable electronics) and secondary scrap (copper, steel, plastic, etc.)
to be "commodities", and reserve the term "waste" for residue or material which is dumped by
the buyer rather than recycled, including residue from reuse and recycling operations. Because
loads of surplus electronics are frequently commingled (good, recyclable, and non-recyclable),
several public policy advocates apply the term "e-waste" broadly to all surplus electronics.
Cathode ray tubes (CRTs) are considered one of the hardest types to recycle.[2]
CRTs have relatively high concentration of lead and phosphors (not to be confused with
phosphorus), both of which are necessary for the display. The United States Environmental
Protection Agency (EPA) includes discarded CRT monitors in its category of "hazardous

household waste"[3] but considers CRTs that have been set aside for testing to be commodities if
they are not discarded, speculatively accumulated, or left unprotected from weather and other
damage.
The EU and its member states operate a system via the European Waste Catalogue (EWC), a
European Council Directive, which is interpreted into member state law. In the UK (an EU
member state), this takes the form of the List of Wastes Directive. However, the list (and EWC)
gives a broad definition (EWC Code 16 02 13*) of hazardous electronic wastes, requiring "waste
operators" to employ the Hazardous Waste Regulations (Annex 1A, Annex 1B) for a refined
definition. Constituent materials in the waste also require assessment via the combination of
Annex II and Annex III, again allowing operators to further determine whether a waste is
hazardous.[4]
Debate continues over the distinction between "commodity" and "waste" electronics definitions.
Some exporters are accused of deliberately leaving difficult-to-recycle, obsolete, or non-repairable
equipment mixed in loads of working equipment (though this may also come through
ignorance, or to avoid more costly treatment processes). Protectionists may broaden the
definition of "waste" electronics in order to protect domestic markets from working secondary
equipment.
The high value of the computer recycling subset of electronic waste (working and reusable
laptops, desktops, and components like RAM) can help pay the cost of transportation for a larger
number of worthless pieces than can be achieved with display devices, which have less (or
negative) scrap value. A 2011 report, "Ghana E-Waste Country Assessment",[5] found that of
215,000 tons of electronics imported to Ghana, 30% were brand new and 70% were used. Of the
used product, the study concluded that 15% was not reused and was scrapped or discarded. This
contrasts with published but uncredited claims that 80% of the imports into Ghana were being
burned in primitive conditions.

Amount of electronic waste world-wide

A fragment of discarded circuit board.


Rapid changes in technology, changes in media (tapes, software, MP3), falling prices, and
planned obsolescence have resulted in a fast-growing surplus of electronic waste around the
globe. Technical solutions are available, but in most cases a legal framework, a collection,
logistics, and other services need to be implemented before a technical solution can be applied.

Display units (CRT, LCD, LED monitors), processors (CPU, GPU, or APU chips), memory
(DRAM or SRAM), and audio components have different useful lives. Processors are most
frequently out-dated (by software no longer being optimized) and are more likely to become
"e-waste", while display units are most often replaced while working without repair attempts, due to
changes in wealthy nation appetites for new display technology.
An estimated 50 million tons of E-waste are produced each year.[1] The USA discards 30 million
computers each year and 100 million phones are disposed of in Europe each year. The
Environmental Protection Agency estimates that only 15-20% of e-waste is recycled, the rest of
these electronics go directly into landfills and incinerators.[6][7]
According to a report by UNEP titled, "Recycling - from E-Waste to Resources," the amount of
e-waste being produced - including mobile phones and computers - could rise by as much as 500
percent over the next decade in some countries, such as India.[8] The United States is the world
leader in producing electronic waste, tossing away about 3 million tons each year.[9] China
already produces about 2.3 million tons (2010 estimate) domestically, second only to the United
States. And, despite having banned e-waste imports, China remains a major e-waste dumping
ground for developed countries.[9]
Electrical waste contains hazardous but also valuable and scarce materials. Up to 60 elements
can be found in complex electronics.
In the United States, an estimated 70% of heavy metals in landfills comes from discarded
electronics.[10][11]
While there is agreement that the number of discarded electronic devices is increasing, there is
considerable disagreement about the relative risk (compared to automobile scrap, for example),
and strong disagreement whether curtailing trade in used electronics will improve conditions, or
make them worse. According to an article in Motherboard, attempts to restrict the trade have
driven reputable companies out of the supply chain, with unintended consequences.[12]

Global trade issues


See also: Global Waste Trade
See also: Electronic waste by country

Electronic waste is often exported to developing countries.

4.5-volt, D, C, AA, AAA, AAAA, A23, 9-volt, CR2032, and LR44 cells are all recyclable in
most countries.
One theory is that increased regulation of electronic waste and concern over the environmental
harm in mature economies creates an economic disincentive to remove residues prior to export.
Critics of trade in used electronics maintain that it is still too easy for brokers calling themselves
recyclers to export unscreened electronic waste to developing countries, such as China,[13] India
and parts of Africa, thus avoiding the expense of removing items like bad cathode ray tubes (the
processing of which is expensive and difficult). The developing countries have become toxic
dump yards of e-waste. Proponents of international trade point to the success of fair trade
programs in other industries, where cooperation has led to creation of sustainable jobs, and can
bring affordable technology in countries where repair and reuse rates are higher.
Defenders of the trade in used electronics say that extraction of metals from virgin mining has
been shifted to developing countries. Recycling of copper, silver, gold, and other materials from
discarded electronic devices is considered better for the environment than mining. They also
state that repair and reuse of computers and televisions has become a "lost art" in wealthier
nations, and that refurbishing has traditionally been a path to development.
South Korea, Taiwan, and southern China all excelled in finding "retained value" in used goods,
and in some cases have set up billion-dollar industries in refurbishing used ink cartridges, single-use cameras, and working CRTs. Refurbishing has traditionally been a threat to established
manufacturing, and simple protectionism explains some criticism of the trade. Works like "The
Waste Makers" by Vance Packard explain some of the criticism of exports of working product,
for example the ban on import of tested working Pentium 4 laptops to China, or the bans on
export of used surplus working electronics by Japan.
Opponents of surplus electronics exports argue that lower environmental and labor standards,
cheap labor, and the relatively high value of recovered raw materials leads to a transfer of
pollution-generating activities, such as smelting of copper wire. In China, Malaysia, India,
Kenya, and various African countries, electronic waste is being sent to these countries for
processing, sometimes illegally. Many surplus laptops are routed to developing nations as
"dumping grounds for e-waste".[14]
Because the United States has not ratified the Basel Convention or its Ban Amendment, and has
few domestic federal laws forbidding the export of toxic waste, the Basel Action Network
estimates that about 80% of the electronic waste directed to recycling in the U.S. does not get
recycled there at all, but is put on container ships and sent to countries such as China.[15][16][17][18]
This figure is disputed as an exaggeration by the EPA, the Institute of Scrap Recycling
Industries, and the World Reuse, Repair and Recycling Association.


Independent research by Arizona State University showed that 87-88% of imported used
computers did not have a higher value than the best value of the constituent materials they
contained, and that "the official trade in end-of-life computers is thus driven by reuse as opposed
to recycling".[19]

Guiyu waste dump


Main article: Electronic waste in China

The E-waste centre of Agbogbloshie, Ghana, where electronic waste is burnt and disassembled
with no safety or environmental considerations.
Guiyu in the Shantou region of China is a huge electronic waste processing area.[15][20][21] It is often
referred to as the e-waste capital of the world. The city employs over 150,000 e-waste workers
who work through 16-hour days disassembling old computers and recapturing whatever metals
and parts they can reuse or sell. The thousands of individual workshops employ laborers to snip
cables, pry chips from circuit boards, grind plastic computer cases into particles, and dip circuit
boards in acid baths to dissolve the lead, cadmium, and other toxic metals. Others work to strip
insulation from all wiring in an attempt to salvage tiny amounts of copper wire.[22] Uncontrolled
burning, disassembly, and disposal cause a variety of environmental problems, such as
groundwater contamination, atmospheric pollution, and water pollution either by immediate
discharge or through surface runoff (especially near coastal areas), as well as health problems,
including occupational safety and health effects among those directly and indirectly involved,
due to the methods of processing the waste.
Only limited investigations have been carried out on the health effects of Guiyu's poisoned
environment. One of them was carried out by Professor Huo Xia, of the Shantou University
Medical College, which is an hour and a half's drive from Guiyu. She tested 165 children for
concentrations of lead in their blood. 82% of the Guiyu children had blood lead levels of more
than 100. Anything above that figure is considered unsafe by international health experts. The
average reading for the group was 149.[23]
High levels of lead in young children's blood can impact IQ and the development of the central
nervous system. The highest concentrations of lead were found in the children of parents whose
workshop dealt with circuit boards and the lowest was among those who recycled plastic.[23]

Six of the many villages in Guiyu specialize in circuit-board disassembly, seven in plastics and
metals reprocessing, and two in wire and cable disassembly. The environmental group Greenpeace
sampled dust, soil, river sediment and groundwater in Guiyu, where e-waste recycling is done, and
found soaring levels of toxic heavy metals and organic contaminants in the samples.[24] Lai Yun,
a campaigner for the group, found "over 10 poisonous metals, such as lead, mercury and cadmium,
in Guiyu town."
Guiyu is only one example of a digital dump; similar sites can be found across the world,
particularly in Asia and Africa. With the amount of e-waste growing rapidly each year, urgent
solutions are required. While the waste continues to flow into digital dumps like Guiyu, there are
measures that can help reduce the flow of e-waste.[23]
A preventative step that major electronics firms should take is to remove the worst chemicals from
their products in order to make them safer and easier to recycle. It is important that all companies
take full responsibility for their products and, once the products reach the end of their useful life,
take their goods back for reuse or safe recycling.

Trade
Proponents of the trade say that growth of internet access correlates more strongly with the trade
than poverty does. Haiti is poor and closer to the port of New York than southeast Asia, yet far
more electronic waste is exported from New York to Asia than to Haiti. Thousands of men, women,
and children are employed in reuse, refurbishing, repair, and remanufacturing, industries that are
unsustainable and in decline in developed countries. Denying developing nations access to used
electronics may deny them sustainable employment, affordable products, and internet access, or
force them to deal with even less scrupulous suppliers. In a series of seven articles for The
Atlantic, Shanghai-based reporter Adam Minter describes many of these computer repair and
scrap separation activities as objectively sustainable.[25]
Opponents of the trade argue that developing countries utilize methods that are more harmful and
more wasteful. An expedient and prevalent method is simply to toss equipment onto an open fire,
in order to melt plastics and to burn away non-valuable metals. This releases carcinogens and
neurotoxins into the air, contributing to an acrid, lingering smog. These noxious fumes include
dioxins and furans.[26] Bonfire refuse can be disposed of quickly into drainage ditches or
waterways feeding the ocean or local water supplies.[18][27]
In June 2008, a container of electronic waste en route from the Port of Oakland in the U.S. to
Sanshui District in mainland China was intercepted in Hong Kong by Greenpeace.[28] Concerns
over exports of electronic waste were raised in press reports in India,[29][30] Ghana,[31][32][33]
Côte d'Ivoire,[34] and Nigeria.[35]

Environmental impact

Old keyboards
The processes of dismantling and disposing of electronic waste in developing countries lead to a
number of environmental impacts. Liquid and atmospheric releases end up in bodies of water,
groundwater, soil, and air, and therefore in land and sea animals, both domesticated and wild, in
crops eaten by animals and humans, and in drinking water.[36]
One study of environmental effects in Guiyu, China found the following:

Airborne dioxins: one type found at 100 times levels previously measured
Levels of carcinogens in duck ponds and rice paddies exceeded international standards for agricultural areas, and cadmium, copper, nickel, and lead levels in rice paddies were above international standards
Heavy metals found in road dust: lead over 300 times that of a control village's road dust, and copper over 100 times[37]

The environmental impact of the processing of different electronic waste components

E-waste component: Cathode ray tubes (used in TVs, computer monitors, ATMs, video cameras, and more)
Process used: Breaking and removal of the yoke, then dumping
Potential environmental hazard: Lead, barium and other heavy metals leaching into the groundwater, and release of toxic phosphor

E-waste component: Printed circuit boards (thin plates on which chips and other electronic components are placed)
Process used: De-soldering and removal of computer chips; open burning and acid baths to remove the remaining metals after the chips are removed
Potential environmental hazard: Air emissions, as well as discharge into rivers, of glass dust, tin, lead, brominated dioxins, beryllium, cadmium, and mercury

E-waste component: Chips and other gold-plated components
Process used: Chemical stripping using nitric and hydrochloric acid, and burning of chips
Potential environmental hazard: Hydrocarbons, heavy metals and brominated substances discharged directly into rivers, acidifying fish and flora; tin and lead contamination of surface water and groundwater; air emissions of brominated dioxins, heavy metals and hydrocarbons

E-waste component: Plastics from printers, keyboards, monitors, etc.
Process used: Shredding and low-temperature melting for reuse
Potential environmental hazard: Emissions of brominated dioxins, heavy metals and hydrocarbons

E-waste component: Computer wires
Process used: Open burning and stripping to remove copper
Potential environmental hazard: Hydrocarbon ash released into air, water and soil

[38]

Information security
E-waste presents a potential security threat to individuals and exporting countries. Hard drives
that are not properly erased before the computer is disposed of can be reopened, exposing
sensitive information. Credit card numbers, private financial data, account information, and
records of online transactions can be accessed by anyone willing to retrieve them. Organized
criminals in Ghana commonly search the drives for information to use in local scams.[39]
Government contracts have been discovered on hard drives found in Agbogbloshie. Multimillion-dollar
agreements from United States security institutions such as the Defense Intelligence Agency (DIA),
the Transportation Security Administration and Homeland Security have all resurfaced in
Agbogbloshie.[39][40]
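One mitigation is to sanitize drives before equipment is handed over for reuse or recycling. As a rough illustration only (not a method described in the sources above), the sketch below overwrites a single file with random bytes before deleting it; the function name overwrite_and_delete is hypothetical, and real end-of-life sanitization relies on whole-drive overwriting, degaussing, or physical destruction rather than file-level tricks, especially on SSDs.

import os
import secrets

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    # Overwrite the file with random bytes several times, then delete it.
    # Illustrative only: file-level overwriting does not guarantee erasure on
    # SSDs or journaling filesystems.
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))
            f.flush()
            os.fsync(f.fileno())
    os.remove(path)

# Example usage with a throwaway file.
if __name__ == "__main__":
    with open("demo_secret.txt", "w") as f:
        f.write("credit card 1234-5678-9012-3456")
    overwrite_and_delete("demo_secret.txt")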

E-waste management
Recycling

Computer monitors are typically packed into low stacks on wooden pallets for recycling and then
shrink-wrapped.[26]
See also: Computer recycling
Today the electronic waste recycling business is, in all areas of the developed world, a large and
rapidly consolidating business. Properly disposing of or reusing electronics can help prevent
health problems, create jobs, and reduce greenhouse-gas emissions.[41]
Part of this evolution has involved greater diversion of electronic waste away from energy-intensive
downcycling processes (e.g., conventional recycling), in which equipment is reverted to raw
material form. Such recycling is done by sorting, dismantling, and recovery of valuable materials.[42]
This diversion is achieved through reuse and refurbishing. The environmental and social
benefits of reuse include diminished demand for new products and virgin raw materials (with
their own environmental issues); reduced consumption of pure water and electricity in the
associated manufacturing; less packaging per unit; availability of technology to wider swaths of
society due to greater affordability of products; and diminished use of landfills.
Audiovisual components, televisions, VCRs, stereo equipment, mobile phones, other handheld
devices, and computer components contain valuable elements and substances suitable for
reclamation, including lead, copper, and gold.
One of the major challenges is recycling the printed circuit boards from electronic waste. The
circuit boards contain precious metals such as gold, silver and platinum, as well as base metals
such as copper, iron and aluminium. One way e-waste is processed is by melting circuit boards,
burning cable sheathing to recover copper wire, and open-pit acid leaching to separate metals of
value.[43] The conventional method employed is mechanical shredding and separation, but the
recycling efficiency is low. Alternative methods such as cryogenic decomposition have been
studied for printed circuit board recycling,[44] and other methods are still under investigation.

Consumer awareness efforts



The U.S. Environmental Protection Agency encourages electronics recyclers to become certified
by demonstrating to an accredited, independent third-party auditor that they meet specific
standards for safely recycling and managing electronics; this works to ensure that the highest
environmental standards are maintained. Two certification programs for electronics recyclers
currently exist and are endorsed by the EPA: Responsible Recyclers Practices (R2) and
e-Stewards. Customers are encouraged to choose certified electronics recyclers. Responsible
electronics recycling reduces environmental and human health impacts, increases the use of
reusable and refurbished equipment, and reduces energy use while conserving limited resources.
Certified companies meet strict environmental standards which maximize reuse and recycling,
minimize exposure to human health or the environment, ensure safe management of materials,
and require destruction of all data on used electronics. Certified electronics recyclers have
demonstrated through audits and other means that they continually meet these standards and
safely manage used electronics. Once certified, the recycler is held to the standard through
continual oversight by the independent accredited certifying body. A certification board accredits
and oversees the certifying bodies to ensure that they meet specific responsibilities and are
competent to audit and provide certification.[45]
Some U.S. retailers offer opportunities for consumer recycling of discarded electronic devices. [46]
[47]

In the US, the Consumer Electronics Association (CEA) urges consumers to dispose properly of
end-of-life electronics through its recycling locator at www.GreenerGadgets.org. This list only
includes manufacturer and retailer programs that use the strictest standards and third-party
certified recycling locations, to provide consumers assurance that their products will be recycled
safely and responsibly. CEA research has found that 58 percent of consumers know where to take
their end-of-life electronics, and the electronics industry would like to see that level of awareness
increase. Consumer electronics manufacturers and retailers sponsor or operate more than 5,000
recycling locations nationwide and have vowed to recycle one billion pounds annually by
2016,[48] a sharp increase from the 300 million pounds the industry recycled in 2010.
The Sustainable Materials Management Electronic Challenge was created by the United States
Environmental Protection Agency (EPA). Participants of the Challenge are manufacturers of
electronics and electronic retailers. These companies collect end-of-life (EOL) electronics at
various locations and send them to a certified, third-party recycler. Program participants are then
able to publicly promote and report 100% responsible recycling for their companies.[49]
The Electronics TakeBack Coalition[50] is a campaign aimed at protecting human health and
limiting environmental effects where electronics are being produced, used, and discarded. The
ETBC aims to place responsibility for disposal of technology products on electronic
manufacturers and brand owners, primarily through community promotions and legal
enforcement initiatives. It provides recommendations for consumer recycling and a list of
recyclers judged environmentally responsible.[51]
The Certified Electronics Recycler program[52] for electronic recyclers is a comprehensive,
integrated management system standard that incorporates key operational and continual
improvement elements for quality, environmental and health and safety (QEH&S) performance.
The grassroots Silicon Valley Toxics Coalition focuses on promoting human health and addresses
environmental justice problems resulting from toxins in technologies.
The World Reuse, Repair, and Recycling Association (wr3a.org) is an organization dedicated to
improving the quality of exported electronics, encouraging better recycling standards in
importing countries, and improving practices through "Fair Trade" principles.
Take Back My TV[53] is a project of The Electronics TakeBack Coalition and grades television
manufacturers to find out which are responsible and which are not.
The e-Waste Association of South Africa (eWASA)[54] has been instrumental in building a
network of e-waste recyclers and refurbishers in the country. It continues to drive the sustainable,
environmentally sound management of all e-waste in South Africa.
E-Cycling Central is a website from the Electronic Industry Alliance which allows users to search
for electronic recycling programs in their state. It lists recyclers by state, making it possible to
find reuse, recycling, or donation programs across the country.[55]
Ewasteguide.info is a Switzerland-based website dedicated to improving the e-waste situation in
developing and transitioning countries. The site contains news, events, case studies, and more.[56]

StEP: Solving the E-Waste Problem is an initiative founded by various UN organizations to
develop strategies to solve the e-waste problem; its website follows the initiative's activities and
programs.[42][57]

Processing techniques

Recycling the lead from batteries.


In many developed countries, electronic waste processing usually first involves dismantling the
equipment into various parts (metal frames, power supplies, circuit boards, plastics), often by
hand, but increasingly by automated shredding equipment. A typical example is the NADIN
electronic waste processing plant in Novi Iskar, Bulgaria, the largest facility of its kind in
Eastern Europe.[58][59] The advantage of this process is the human ability to recognize and save
working and repairable parts, including chips, transistors, RAM, etc. The disadvantage is that the
labor is cheapest in countries with the lowest health and safety standards.
In an alternative bulk system,[60] a hopper conveys material for shredding into an unsophisticated
mechanical separator, with screening and granulating machines to separate constituent metal and
plastic fractions, which are sold to smelters or plastics recyclers. Such recycling machinery is
enclosed and employs a dust collection system. Some of the emissions are caught by scrubbers
and screens. Magnets, eddy currents, and trommel screens are employed to separate glass,
plastic, and ferrous and nonferrous metals, which can then be further separated at a smelter.
Leaded glass from CRTs is reused in car batteries, ammunition, and lead wheel weights,[26] or sold
to foundries as a fluxing agent in processing raw lead ore. Copper, gold, palladium, silver and tin
are valuable metals sold to smelters for recycling. Hazardous smoke and gases are captured,
contained and treated to mitigate environmental threat. These methods allow for safe reclamation
of all valuable computer construction materials.[18] Hewlett-Packard product recycling solutions
manager Renee St. Denis describes its process as: "We move them through giant shredders about
30 feet tall and it shreds everything into pieces about the size of a quarter. Once your disk drive
is shredded into pieces about this big, it's hard to get the data off".[61]
An ideal electronic waste recycling plant combines dismantling for component recovery with
increased cost-effective processing of bulk electronic waste.
Reuse is an alternative option to recycling because it extends the lifespan of a device. Devices
still need eventual recycling, but by allowing others to purchase used electronics, recycling can
be postponed and value gained from device use.

Benefits of recycling
Recycling raw materials from end-of-life electronics is the most effective solution to the growing
e-waste problem. Most electronic devices contain a variety of materials, including metals that
can be recovered for future uses. By dismantling and providing reuse possibilities, intact natural
resources are conserved and air and water pollution caused by hazardous disposal is avoided.
Additionally, recycling reduces the amount of greenhouse gas emissions caused by the
manufacturing of new products.[62]
Benefits of recycling are extended when responsible recycling methods are used. In the U.S.,
responsible recycling aims to minimize the dangers to human health and the environment that
disposed and dismantled electronics can create. Responsible recycling ensures best management
practices of the electronics being recycled, worker health and safety, and consideration for the
environment locally and abroad.[63]

Electronic waste substances

Several sizes of button and coin cells, with two 9-volt batteries for size comparison. They are all
recycled in many countries since they contain lead, mercury and cadmium.
Some computer components can be reused in assembling new computer products, while others
are reduced to metals that can be reused in applications as varied as construction, flatware, and
jewelry.[61]
Substances found in large quantities include epoxy resins, fiberglass, PCBs, PVC (polyvinyl
chlorides), thermosetting plastics, lead, tin, copper, silicon, beryllium, carbon, iron and
aluminium.
Elements found in small amounts include cadmium, mercury, and thallium.[64]
Elements found in trace amounts include americium, antimony, arsenic, barium, bismuth, boron,
cobalt, europium, gallium, germanium, gold, indium, lithium, manganese, nickel, niobium,
palladium, platinum, rhodium, ruthenium, selenium, silver, tantalum, terbium, thorium, titanium,
vanadium, and yttrium.
Almost all electronics contain lead and tin (as solder) and copper (as wire and printed circuit
board tracks), though the use of lead-free solder is now spreading rapidly. The following are
ordinary applications:

Hazardous

Recyclers in the street in São Paulo, Brazil, with old computers

Americium: The radioactive source in smoke alarms. It is known to be carcinogenic.

Mercury: Found in fluorescent tubes (numerous applications), tilt switches (mechanical doorbells, thermostats),[65] and flat screen monitors. Health effects include sensory impairment, dermatitis, memory loss, and muscle weakness. Exposure in utero causes fetal deficits in motor function, attention and verbal domains.[66] Environmental effects in animals include death, reduced fertility, and slower growth and development.

Sulphur: Found in lead-acid batteries. Health effects include liver damage, kidney damage, heart damage, and eye and throat irritation. When released into the environment, it can create sulphuric acid.

BFRs: Used as flame retardants in plastics in most electronics. They include PBBs, PBDE, DecaBDE, OctaBDE and PentaBDE. Health effects include impaired development of the nervous system, thyroid problems and liver problems. Environmental effects in animals are similar to the effects in humans. PBBs were banned from 1973 to 1977, and PCBs were banned during the 1980s.

Cadmium: Found in light-sensitive resistors, corrosion-resistant alloys for marine and aviation environments, and nickel-cadmium batteries. The most common form of cadmium is found in nickel-cadmium rechargeable batteries, which tend to contain between 6 and 18% cadmium. The sale of nickel-cadmium batteries has been banned in the European Union except for medical use. When not properly recycled, cadmium can leach into the soil, harming microorganisms and disrupting the soil ecosystem. Exposure is caused by proximity to hazardous waste sites and factories, and by work in the metal refining industry. Inhalation of cadmium can cause severe damage to the lungs and is also known to cause kidney damage.[67] Cadmium is also associated with deficits in cognition, learning, behavior, and neuromotor skills in children.[66]

Lead: Found in solder, CRT monitor glass, lead-acid batteries and some formulations of PVC.[68] A typical 15-inch cathode ray tube may contain 1.5 pounds of lead,[3] but other CRTs have been estimated as having up to 8 pounds of lead.[26] Adverse effects of lead exposure include impaired cognitive function, behavioral disturbances, attention deficits, hyperactivity, conduct problems and lower IQ.[66]

Beryllium oxide: Filler in some thermal interface materials such as thermal grease used on heatsinks for CPUs and power transistors,[69] magnetrons, X-ray-transparent ceramic windows, heat transfer fins in vacuum tubes, and gas lasers.

Perfluorooctanoic acid (PFOA): Found in non-stick cookware (PTFE), used as an antistatic additive in industrial applications, and found in electronics. PFOAs are formed synthetically and through environmental degradation and, in mice, after oral uptake. Studies in mice have found the following health effects: hepatotoxicity, developmental toxicity, immunotoxicity, hormonal effects and carcinogenic effects. Studies have found increased maternal PFOA levels to be associated with an increased risk of spontaneous abortion (miscarriage) and stillbirth. Increased maternal levels of PFOA are also associated with decreases in mean gestational age (preterm birth), mean birth weight (low birth weight), mean birth length (small for gestational age), and mean APGAR score.[70]

Hexavalent chromium: A known carcinogen after occupational inhalation exposure.[66]

There is also evidence of cytotoxic and genotoxic effects of some of these chemicals, which have
been shown to inhibit cell proliferation, cause cell membrane lesions, cause DNA single-strand
breaks, and elevate reactive oxygen species (ROS) levels.[71]

DNA breaks can increase the likelihood of developing cancer (if the damage is to a tumor suppressor gene).
DNA damage is a particular problem in non-dividing or slowly dividing cells, where unrepaired damage tends to accumulate over time. In rapidly dividing cells, on the other hand, unrepaired DNA damage that does not kill the cell by blocking replication tends to cause replication errors and thus mutation.
Elevated reactive oxygen species (ROS) levels can cause damage to cell structures (oxidative stress).[71]

Generally non-hazardous

An iMac G4 that has been repurposed into a lamp (photographed next to a Mac Classic and a flip
phone).
Aluminium: nearly all electronic goods using more than a few watts of power (heatsinks),
electrolytic capacitors.
Copper: copper wire, printed circuit board tracks, component leads.

Germanium: 1950s–1960s transistorized electronics (bipolar junction transistors).

Gold: connector plating, primarily in computer equipment.

Iron: steel chassis, cases, and fixings.

Lithium: lithium-ion batteries.

Nickel: nickel-cadmium batteries.

Silicon: glass, transistors, ICs, printed circuit boards.

Tin: solder, coatings on component leads.

Zinc: plating for steel parts.

See also
Environment portal
Electronics portal

2000s commodities boom


Computer Recycling

Digger gold

eDay

Electronic waste in Japan

Green computing

Mobile phone recycling

Material safety data sheet

Polychlorinated biphenyls

Retrocomputing

Policy and conventions:

Basel Action Network (BAN)


Basel Convention

China RoHS

e-Stewards

Restriction of Hazardous Substances Directive (RoHS)

Soesterberg Principles

Sustainable Electronics Initiative (SEI)

Waste Electrical and Electronic Equipment Directive

Organizations

Asset Disposal and Information Security Alliance (ADISA)[72]

Empa

IFixit

International Network for Environmental Compliance and Enforcement

Institute of Scrap Recycling Industries (ISRI)

Solving the E-waste Problem

World Reuse, Repair and Recycling Association

General:

Retail hazardous waste

Computer recycling

Sustainable Electronics Initiative (SEI)

Waste

Electronic waste in New Zealand

E-Cycling

"E-cycling" or "E-waste" is an initiative by the United States Environmental Protection Agency


(EPA) which refers to donations, reuse, shredding and general collection of used electronics.
Generically, the term refers to the process of collecting, brokering, disassembling, repairing and
recycling the components or metals contained in used or discarded electronic equipment,
otherwise known as electronic waste (e-waste). "E-cyclable" items include, but are not limited to:
televisions, computers, microwave ovens, vacuum cleaners, telephones and cellular phones,
stereos, and VCRs and DVDs just about anything that has a cord, light or takes some kind of
battery.
Investment in e-cycling facilities has been increasing recently due to technology's rapid rate of
obsolescence, concern over improper methods, and opportunities for manufacturers to influence
the secondary market (used and reused products). Higher metal prices have also led to more
recycling taking place. The controversy around methods stems from a lack of agreement over
preferred outcomes. World markets with lower disposable incomes consider 75% repair and reuse
to be valuable enough to justify 25% disposal. Debate and certification standards may be leading
to better definitions, though civil law contracts governing the expected process are still vital to
any process as poorly defined as "e-cycling".

eDay

eDay is an annual New Zealand initiative, started by Computer Access New Zealand (CANZ),
aimed at raising awareness of the potential dangers associated with electronic waste and at
offering the opportunity for such waste to be disposed of in an environmentally friendly fashion.

eDay was first held in Wellington in 2006 as a pilot sponsored by Dell; the event brought in 54
tonnes (119,000 lb) of old computers, mobile phones and other non-biodegradable electronic
material.[1] In 2007 the initiative was extended to cover 12 locations, making it a national
initiative,[2] and 946 tonnes (2,086,000 lb) were collected.[3] eDay 2008 was held on October 4
and extended to 32 centres.[4] In 2009 an estimated 966 tonnes (2,130,000 lb) was collected at 38
locations around the country.[5]

The initiative was started to minimise the amount of electronic waste being disposed of in
landfills. It was based on evidence from reports that there were an estimated 16 million electronic
devices in use in New Zealand, that 1 million new devices were being introduced every year, and
that the majority of these devices were being disposed of in landfills rather than recycled.[6][7] A
separate report found that half of New Zealand schools did not recycle outdated and replaced
equipment, opting instead to deposit it in landfills.[7][8] When such equipment is disposed of in
landfills, harmful chemicals it contains, such as mercury, lead and cadmium, can contaminate
groundwater and come into contact with humans or animals; these toxins are capable of causing
serious health issues, such as nervous system and brain damage.[4][9] When recycled, the
chemicals are disposed of safely and potentially valuable parts can be reused.

On the day, drive-thru collection points are established and volunteers operate each centre.
Businesses, schools and the public are encouraged to dispose of old computer hardware, mobile
phones and printer cartridges. As well as collecting material, the initiative is also designed to
increase awareness about the harmful effects of electronic waste.
CANZ was awarded the New Zealand Ministry for the Environment 2008 Green Ribbon Award
for community action and involvement.
Computer recycling, electronic recycling or e-waste recycling is the recycling of computers and
other electronic devices. It involves the complete deconstruction of electronic devices so that
materials can be extracted from old and obsolete electronics rather than mined as virgin raw
materials.

Recycling methods

Data erasure
Data remanence

Degaussing

Digger gold

Electronic waste

Polychlorinated biphenyls

Trashware

Electronic Waste Recycling Fee

Material safety data sheet

Computer technology for developing areas

CBL Data Recovery

Policy and conventions


Basel Convention
Electronic Waste Recycling Act

Restriction of Hazardous Substances Directive (RoHS)

China RoHS

Waste Electrical and Electronic Equipment Directive (WEEE directive)

Sustainable Electronics Initiative (SEI)

Organisations
Camara
Computers For Schools

eDay

Empower Up

Free Geek

International Network for Environmental Compliance and Enforcement

Nonprofit Technology Resources

Silicon Valley Toxics Coalition

Solving the E-waste Problem

World Computer Exchange

The word dioxin can refer in a general way to compounds which have a dioxin core skeletal
structure with substituent molecular groups attached to it. For example, dibenzo-1,4-dioxin is a
compound whose structure consists of two benzo- groups fused onto a 1,4-dioxin ring.

Polychlorinated dibenzodioxins[edit]
Main article: polychlorinated dibenzodioxins
Because of their extreme importance as environmental pollutants, current scientific literature
uses the name dioxins commonly for simplification to denote the chlorinated derivatives of
dibenzo-1,4-dioxin, more precisely the polychlorinated dibenzodioxins (PCDDs), among which
2,3,7,8-tetrachlorodibenzodioxin (TCDD), a tetrachlorinated derivative, is the best known. The
polychlorinated dibenzodioxins, which can also be classified in the family of halogenated
organic compounds, have been shown to bioaccumulate in humans and wildlife due to their
lipophilic properties, and are known teratogens, mutagens, and carcinogens.
PCDDs are formed through combustion, chlorine bleaching and manufacturing processes.[3] The
combination of heat and chlorine creates dioxin.[3] Since chlorine is often a part of the Earth's
environment, natural ecological activity such as volcanic activity and forest fires can lead to the
formation of PCDDs.[3] Nevertheless, PCDDs are mostly produced by human activity.[3]
Famous PCDD exposure cases include Agent Orange sprayed over vegetation by the British
military in Malaya during the Malayan Emergency and the U.S. military in Vietnam during the
Vietnam War, the Seveso disaster, and the poisoning of Viktor Yushchenko.
Polychlorinated dibenzofurans are a class of compounds related to PCDDs and are often included
within the general term dioxins.
The Basel Convention on the Control of Transboundary Movements of Hazardous Wastes
and Their Disposal, usually known as the Basel Convention, is an international treaty that was
designed to reduce the movements of hazardous waste between nations, and specifically to
prevent transfer of hazardous waste from developed to less developed countries (LDCs). It does
not, however, address the movement of radioactive waste. The Convention is also intended to
minimize the amount and toxicity of wastes generated, to ensure their environmentally sound
management as closely as possible to the source of generation, and to assist LDCs in
environmentally sound management of the hazardous and other wastes they generate.
The Convention was opened for signature on 22 March 1989, and entered into force on 5 May
1992. As of January 2015, 182 states and the European Union are parties to the Convention.
Haiti and the United States have signed the Convention but not ratified it.

History

With the tightening of environmental laws (for example, RCRA) in developed nations in the
1970s, disposal costs for hazardous waste rose dramatically. At the same time, globalization of
shipping made transboundary movement of waste more accessible, and many LDCs were
desperate for foreign currency. Consequently, the trade in hazardous waste, particularly to LDCs,
grew rapidly.
One of the incidents which led to the creation of the Basel Convention was the Khian Sea waste
disposal incident, in which a ship carrying incinerator ash from the city of Philadelphia in the
United States dumped half of its load on a beach in Haiti before being forced away. It sailed for
many months, changing its name several times. Unable to unload the cargo in any port, the crew
was believed to have dumped much of it at sea.
Another is the 1988 Koko case in which 5 ships transported 8,000 barrels of hazardous waste
from Italy to the small town of Koko in Nigeria in exchange for $100 monthly rent which was
paid to a Nigerian for the use of his farmland.
These practices have been deemed "Toxic Colonialism" by many developing countries.
At its meeting of 27 November to 1 December 2006, the Conference of the Parties to the Basel
Convention focused on issues of electronic waste and the dismantling of ships.
According to Maureen Walsh, only around 4% of hazardous wastes that come from OECD
countries are actually shipped across international borders.[3] These wastes include, among others,
chemical waste, radioactive waste, municipal solid waste, asbestos, incinerator ash, and old tires.
Of internationally shipped waste that comes from developed countries, more than half is shipped
for recovery and the remainder for final disposal.
Increased trade in recyclable materials has led to an increase in a market for used products such
as computers. This market is valued in billions of dollars. At issue is the distinction when used
computers stop being a "commodity" and become a "waste".
As of January 2015, there are 183 parties to the treaty, which includes 180 UN member states
plus the Cook Islands, the European Union, and the State of Palestine. The 13 UN member states
that are not party to the treaty are Angola, East Timor, Fiji, Grenada, Haiti, San Marino, Sierra
Leone, Solomon Islands, South Sudan, Tajikistan, Tuvalu, United States, and Vanuatu.

Solving the E-waste Problem (StEP) is an international initiative, created to develop solutions
to address issues associated with Waste Electrical and Electronic Equipment (WEEE). Some of
the most eminent players in the fields of Production, Reuse and Recycling of Electrical and
Electronic Equipment (EEE), government agencies and NGOs as well as UN Organisations
count themselves among its members. StEP encourages the collaboration of all stakeholders

connected with e-waste, emphasising a holistic, scientific yet applicable approach to the
problem.

History
The volume of Waste Electrical and Electronic Equipment (WEEE) is increasing every day and is
becoming a serious environmental problem that has yet to be recognised by the greater public.
StEP was started to guarantee the neutrality required to give analysis and recommendations the
necessary credibility. After a starting period of three years, initiated by the United Nations
University (UNU), promotion team wetzlar and Hewlett-Packard, the StEP Initiative had its
official launch in March 2007.

Aims and means


"One of the most important aims of the StEP Initiative is to elaborate a set of global guidelines
for the treatment of e-waste and the promotion of sustainable material recycling." (press
communiqué of the initiative)
The initiative comprises five cooperating Task Forces, each addressing specific aspects of e-waste
while covering the entire life-cycle of electric and electronic equipment. In all its activities, the
initiative places emphasis on working with policy-making bodies to allow results from its
research to impact current practices. StEP is coordinated by the science and research body of the
UN system, the United Nations University (UNU). "The long-term goal of StEP is to develop,
based on scientific analysis, a globally accepted standard for the refurbishment and recycling of
e-waste. Herewith, StEP's aim is to reduce dangers to humans and the environment, which result
from inadequate and irresponsible treatment practices, and advance resource efficiency."
(Ruediger Kuehr, Executive Secretary of the StEP Initiative) To achieve this, StEP conceives and
implements projects based on the results of multidisciplinary dialogues. The projects seek to
develop sustainable solutions that reduce environmental risk and enhance development.

Organization of the initiative


The supreme body of the StEP Initiative is its General Assembly, which decides its general
direction and development. This General Assembly is based on a Memorandum of
Understanding, which is signed by all members and states the guiding principles of StEP. A
Secretariat, hosted by the UNU in Bonn, is mandated with the accomplishment of the day-to-day
managerial work of the initiative. A Steering Committee, composed of representatives from key
stakeholders, monitors the progress of the Initiative. The core work is accomplished by the five
Task Forces (TF): Policy, ReDesign, ReUse, ReCycle and Capacity Building. These
Task Forces conduct research and analysis in their respective domains and seek to implement
innovative projects.

TF1 Policy: The aim of this Task Force is to assess and analyse current governmental
approaches and regulations related to WEEE. Starting from this analysis, recommendations for
future regulating activities shall be formulated.
TF2 ReDesign: This Task Force works on the design of EEE, focusing on the reduction of
negative consequences of electrical and electronic appliances throughout their entire life cycle.
The Task Force especially takes heed of the situation in developing countries.
TF3 ReUse: The focus of this Task Force lies in the development of sustainable, transmissible
principles and standards for the reuse of EEE.
TF4 ReCycle: The objective of this Task Force is to improve infrastructures, systems and
technologies to realize a sustainable recycling on a global level.
TF5 Capacity Building: The aim of this Task Force is to draw attention to the problems
connected to WEEE. This aim shall be achieved by making the results of the research of the Task
Forces and other stakeholders publicly available. In doing so, the Task Force relies on personal
networks, the internet, collaborative working tools etc.

Guiding Principles
"1. StEPs work is founded on scientific assessments and incorporates a comprehensive view of
the social, environmental and economic aspects of e-waste.
2. StEP conducts research on the entire life-cycle of electronic and electrical equipment and
their corresponding global supply, process and material flows.
3. StEP's research and pilot projects are meant to contribute to the solution of e-waste problems.
4. StEP condemns all illegal activities related to e-waste including illegal shipments and reuse/
recycling practices that are harmful to the environment and human health.
5. StEP seeks to foster safe and eco/energy-efficient reuse and recycling practices around the
globe in a socially responsible manner."

Household hazardous waste (HHW), sometimes called retail hazardous waste or "home
generated special materials", is post-consumer waste which qualifies as hazardous waste when
discarded. It includes household chemicals and other substances for which the owner no longer
has a use, such as consumer products sold for home care, personal care, automotive care, pest
control and other purposes. These products exhibit many of the same dangerous characteristics as
fully regulated hazardous waste due to their potential for reactivity, ignitability, corrosivity,
toxicity, or persistence. Examples include drain cleaners, oil paint, motor oil, antifreeze, fuel,
poisons, pesticides, herbicides and rodenticides, fluorescent lamps, lamp ballasts, smoke
detectors, medical waste, some types of cleaning chemicals, and consumer electronics (such as
televisions, computers, and cell phones).
Certain items such as batteries and fluorescent lamps can be returned to retail stores for disposal.
The Rechargeable Battery Recycling Corporation (RBRC) maintains a list of battery recycling
locations, and local environmental organizations typically keep lists of fluorescent lamp recycling
locations. The classification "household hazardous waste" has been used for decades, but it does
not accurately reflect the larger group of materials that during the past several years have come to
be handled under such programs: items such as latex paint and other non-hazardous household
products that do not generally exhibit hazardous characteristics are routinely included in
"household hazardous waste" disposal programs. The term "home generated special materials"
more accurately identifies the broader range of items that public agencies are targeting as
recyclable and/or that should not be disposed of in a landfill.

The Stockholm Convention on Persistent Organic Pollutants is an international environmental
treaty, signed in 2001 and effective from May 2004, that aims to eliminate or restrict the
production and use of persistent organic pollutants (POPs).
History

In 1995, the Governing Council of the United Nations Environment Programme (UNEP) called
for global action to be taken on POPs, which it defined as "chemical substances that persist in the
environment, bio-accumulate through the food web, and pose a risk of causing adverse effects to
human health and the environment".
Following this, the Intergovernmental Forum on Chemical Safety (IFCS) and the International
Programme on Chemical Safety (IPCS) prepared an assessment of the 12 worst offenders, known
as the dirty dozen.
The Intergovernmental Negotiating Committee (INC) met five times between June 1998 and
December 2000 to elaborate the convention, and delegates adopted the Stockholm Convention on
POPs at the Conference of the Plenipotentiaries convened on 22–23 May 2001 in Stockholm, Sweden.
The negotiations for the Convention were completed on 23 May 2001 in Stockholm. The
convention entered into force on 17 May 2004 with ratification by an initial 128 parties and 151
signatories. Co-signatories agree to outlaw nine of the dirty dozen chemicals, limit the use of
DDT to malaria control, and curtail inadvertent production of dioxins and furans.
Parties to the convention have agreed to a process by which persistent toxic compounds can be
reviewed and added to the convention, if they meet certain criteria for persistence and
transboundary threat. The first set of new chemicals to be added to the Convention were agreed
at a conference in Geneva on 8 May 2009.

As of May 2013, there are 179 parties to the Convention (178 states and the European Union).
Notable non-ratifying states include the United States, Israel, Malaysia, Italy and Iraq.
The Stockholm Convention was adopted into EU legislation in Regulation (EC) No 850/2004.

Bhopal gas tragedy latter outcomes and effects

Abstract
On December 3, 1984, more than 40 tons of methyl isocyanate gas leaked from a pesticide plant
in Bhopal, India, immediately killing at least 3,800 people and causing significant morbidity and
premature death for many thousands more. The company involved in what became the worst
industrial accident in history immediately tried to dissociate itself from legal responsibility.
Eventually it reached a settlement with the Indian Government through mediation of that
country's Supreme Court and accepted moral responsibility. It paid $470 million in compensation,
a relatively small amount based on significant underestimations of the long-term health
consequences of exposure and the number of people exposed. The disaster indicated a need for
enforceable international standards for environmental safety, preventative strategies to avoid
similar accidents and industrial disaster preparedness.
Since the disaster, India has experienced rapid industrialization. While some positive changes in
government policy and behavior of a few industries have taken place, major threats to the
environment from rapid and poorly regulated industrial growth remain. Widespread
environmental degradation with significant adverse human health consequences continues to
occur throughout India.
December 2004 marked the twentieth anniversary of the massive toxic gas leak from Union
Carbide Corporation's chemical plant in Bhopal in the state of Madhya Pradesh, India that killed
more than 3,800 people. This review examines the health effects of exposure to the disaster, the
legal response, the lessons learned and whether or not these are put into practice in India in terms
of industrial development, environmental management and public health.

Long-term health effects


Some data about the health effects are still not available. The Indian Council of Medical Research
(ICMR) was forbidden to publish health effect data until 1994.[5] A total of 36 wards were
marked by the authorities as being "gas affected," affecting a population of 520,000. Of these,
200,000 were below 15 years of age, and 3,000 were pregnant women. The official immediate
death toll was 2,259, and by 1991, 3,928 deaths had been officially certified. Others estimate that
8,000 died within two weeks.[4][5] The government of Madhya Pradesh confirmed a total of
3,787 deaths related to the gas release.[3]

Later, the affected area was expanded to include 700,000 citizens. A government affidavit in 2006
stated the leak caused 558,125 injuries, including 38,478 temporary partial injuries and
approximately 3,900 severely and permanently disabling injuries.[7]
A cohort of 80,021 exposed people was registered, along with a control group, a cohort of 15,931
people from areas not exposed to MIC. Nearly every year since 1986, they have answered the
same questionnaire. It shows increased mortality and morbidity in the exposed group. However,
bias and confounding factors cannot be excluded from the study. Because of migration and other
factors, 75% of the cohort has been lost, as those who moved away were not followed.[5][21]
A number of clinical studies have been performed. Their quality varies, but the different reports
support each other.[5] Studied and reported long-term health effects are:

Eyes: chronic conjunctivitis, scars on the cornea, corneal opacities, early cataracts

Respiratory tract: obstructive and/or restrictive disease, pulmonary fibrosis, aggravation of TB and chronic bronchitis

Neurological system: impairment of memory, finer motor skills, numbness, etc.

Psychological problems: post-traumatic stress disorder (PTSD)

Children's health: peri- and neonatal death rates increased; failure to grow, intellectual impairment, etc.

Missing or insufficient fields of research are female reproduction, chromosomal aberrations,
cancer, immune deficiency, neurological sequelae, post-traumatic stress disorder (PTSD) and
children born after the disaster. Late cases that might never be highlighted are respiratory
insufficiency, cardiac insufficiency (cor pulmonale), cancer and tuberculosis.

Health care
The Government of India has focused primarily on increasing hospital-based services for gas
victims, and hospitals were built after the disaster. When UCC wanted to sell its shares in UCIL,
it was directed by the Supreme Court to finance a 500-bed hospital for the medical care of the
survivors. Thus, Bhopal Memorial Hospital and Research Centre (BMHRC) was inaugurated in
1998 and was obliged to give free care to survivors for eight years. BMHRC was a 350-bed
super-speciality hospital where heart surgery and haemodialysis were done. However, there was a
dearth of gynaecology, obstetrics and paediatrics. Eight mini-units (outreach health centres) were
started, and free health care for gas victims was to be offered until 2006.[5] The management has
also faced problems with strikes, and the quality of the health care has been disputed.[22][23]
Sambhavna Trust is a charitable trust, registered in 1995, that gives modern as well as ayurvedic
treatments to gas victims, free of charge.[5][24]

Environmental rehabilitation
When the factory was closed in 1986, pipes, drums and tanks were sold. The MIC and the Sevin
plants are still there, as are stores of different residues. Insulation material is falling apart and
spreading.[5] The area around the plant was used as a dumping area for hazardous chemicals. In
1982, tubewells in the vicinity of the UCIL factory had to be abandoned, and tests performed in
1989 by UCC's laboratory revealed that soil and water samples collected from near the factory
and inside the plant were toxic to fish.[25] Several other studies have also shown polluted soil and
groundwater in the area. Reported polluting compounds include 1-naphthol, naphthalene, Sevin,
tarry residue, mercury, toxic organochlorines, volatile organochlorine compounds, chromium,
copper, nickel, lead, hexachloroethane, hexachlorobutadiene, and the pesticide HCH.[5]
In order to provide safe drinking water to the population around the UCIL factory, the
Government of Madhya Pradesh presented a scheme for improvement of the water supply.[26] In
December 2008, the Madhya Pradesh High Court decided that the toxic waste should be
incinerated at Ankleshwar in Gujarat, which was met by protests from activists all over India.[27]
On 8 June 2012, the Centre for incineration of toxic Bhopal waste agreed to pay ₹250 million
(US$4.2 million) to dispose of the UCIL chemical plant's waste in Germany.[28] On 9 August
2012, the Supreme Court directed the Union and Madhya Pradesh governments to take immediate
steps for disposal of the toxic waste lying around and inside the factory within six months.[29]
A U.S. court rejected the lawsuit blaming UCC for causing soil and water pollution around the
site of the plant and ruled that responsibility for remedial measures or related claims rested with
the State Government and not with UCC.[30] In 2005, the state government invited various
Indian architects to enter their "concept for development of a memorial complex for Bhopal gas
tragedy victims at the site of Union Carbide". In 2011, a conference with participants from
European universities was held on the site with the same aim.[31][32]

Occupational and habitation rehabilitation


Of the 50 planned work-sheds for gas victims, 33 were started, and all except one were closed
down by 1992. In 1986, the MP government invested in the Special Industrial Area Bhopal. Of the
planned 200 work sheds, 152 were built, and in 2000, 16 were partially functioning. It was
estimated that 50,000 persons needed alternative jobs, and that fewer than 100 gas victims had
found regular employment under the government's scheme. The government also planned 2,486
flats in two- and four-story buildings in what is called the "widow's colony" outside Bhopal. The
water did not reach the upper floors, and it was not possible to keep cattle, which had been the
residents' primary occupation. Infrastructure like buses, schools, etc. was missing for at least a
decade.[5]

Economic rehabilitation
Immediate relief was decided on two days after the tragedy. Relief measures commenced in 1985,
when food was distributed for a short period along with ration cards.[5] The Madhya Pradesh
government's finance department allocated ₹874 million (US$15 million) for victim relief in July
1985.[33][34] Widow pensions of ₹200 (US$3.30) per month (later ₹750 (US$13)) were provided.
The government also decided to pay ₹1,500 (US$25) to families with a monthly income of ₹500
(US$8.40) or less. As a result of the interim relief, more children were able to attend school, more
money was spent on treatment and food, and housing also eventually improved. From 1990,
interim relief of ₹200 (US$3.30) was paid to everyone in the family who was born before the
disaster.[5]
The final compensation, including interim relief, for personal injury was for the majority ₹25,000
(US$420). For death claims, the average sum paid out was ₹62,000 (US$1,000). Each claimant
was to be categorised by a doctor. In court, the claimants were expected to prove "beyond
reasonable doubt" that death or injury in each case was attributable to exposure. In 1992, 44
percent of the claimants still had to be medically examined.[5]
By the end of October 2003, according to the Bhopal Gas Tragedy Relief and Rehabilitation
Department, compensation had been awarded to 554,895 people for injuries received and to
15,310 survivors of those killed. The average amount paid to families of the dead was $2,200.[35]
In 2007, 1,029,517 cases were registered and decided; 574,304 cases were awarded and 455,213
were rejected. The total compensation awarded was ₹15,465 million (US$260 million).[26] On 24
June 2010, the Union Cabinet of the Government of India approved a ₹12,650 million (US$210
million) aid package which would be funded by Indian taxpayers through the government.[36]

Other impacts
In 1985, Henry Waxman, a California Democrat, called for a U.S. government inquiry
into the Bhopal disaster, which resulted in U.S. legislation regarding the accidental
release of toxic chemicals in the United States. [37]

MATCH, PATCH, MAPP

Multilevel Approach To Community Health (MATCH)

Matching Alcoholism Treatments To Client Heterogeneity (MATCH)

Multilevel Approach To Community Health refers to an ecological planning viewpoint which
recognises that intervention approaches should be aimed at a number of objectives and
individuals. It is usually abbreviated as MATCH.

Planned Approach To Community Health (PATCH)

Mobilizing for Action through Planning and Partnerships (MAPP)
Mobilizing for Action through Planning and Partnerships (MAPP) is a community-driven
strategic planning process for improving community health. Facilitated by public health leaders,
this framework helps communities apply strategic thinking to prioritize public health issues and
identify resources to address them. MAPP is not an agency-focused assessment process; rather, it
is an interactive process that can improve the efficiency, effectiveness, and ultimately the
performance of local public health systems.


Prime minister youth loan scheme SMEDA


Small and Medium Enterprises Development Authority (SMEDA)

Prime Minister's Youth Business Loan Introduction


Prime Ministers Youth Business Loan, for young entrepreneurs between the age group of 21 45 years, is designed to provide subsidised financing at 8.0% mark-up per annum for one
hundred thousand (100,000) beneficiaries, through designated financial institutions, initially
through National Bank of Pakistan (NBP) and First Women Bank Ltd. (FWBL).
Small business loans with a tenure of up to 8 years, a first-year grace period, and a debt-to-equity ratio of 90:10 will be disbursed to SME beneficiaries across Pakistan, covering Punjab, Sindh, Khyber Pakhtunkhwa, Balochistan, Gilgit-Baltistan, Azad Jammu & Kashmir and the Federally Administered Tribal Areas (FATA). The scheme has a 50% quota for women and a 5% quota for families of Shaheeds, widows and disabled persons.
SMEDA has been tasked with an advisory role in the implementation of the PM's scheme by providing more than fifty-five updated pre-feasibility studies for reference by loan beneficiaries and participating banks, so that they can optimally utilise their financial resources. SMEDA shall continue to add further pre-feasibility studies. However, it is not necessary to develop a project based on these pre-feasibilities; other projects will also be entertained by the banks.

Integrated health program

Integrated Health Program (IHP)


The Integrated Health Program is a collaboration between University Health
Services and the Counseling and Mental Health Center at The University of Texas
at Austin. This program brings mental health providers to University Health
Services, creating a holistic team approach in the treatment of UHS patients. The
Integrated Health Program has also initiated classes, programs, and other
interventions available to students through the Counseling and Mental Health
Center.
The Integrated Health team consists of psychologists, clinical social workers, and
psychiatrists. This program uses the concept of mindfulness as the foundation of its
approach to health. We emphasize a broad definition of health which views optimal
health as the integration of physical, psychological, emotional, relational, and
spiritual well-being.

What the Integrated Health Program Offers

Students
- Initial consultations with IH counselors to address students' health goals
- Brief individual counseling
- Guidance in managing issues such as anxiety, depression, and self-esteem
- Transitional support services and referrals to helpful resources at the University
- MindBody Lab for self-paced stress reduction interventions
- Psychiatric services, when indicated
- Referrals for other needed services in the Austin community
Integrated health care services

At Integrated Healthcare Services (IHS), we work hard to help you solve your problems and
make your job easier while improving patient outcomes and satisfaction.
Through our unique and comprehensive Workers' Compensation Medical Cost Containment
Program, IHS can really make a difference when it comes to saving you time and money.
IHS provides a full array of durable medical equipment, medical supplies and services, and
specialized management services to meet your unique needs.

ISIS model of sustainable health system


Information Society Integrated System (ISIS)
Institute for Science and International Security (ISIS)
Internet Student Information System
Independent Schools Information Service

Growth chart preparation and interpretation

Interpreting/Understanding the Growth Chart (using height as the example)
A growth chart shows how a child's height compares to that of other children of the same age and sex. After the age of 2, most children maintain fairly steady growth until they hit puberty. They generally follow close to the same percentile they had at the age of 2. Children over 2 years of age who move away from their established growth curve (losing or gaining more than 15 percentile points) should be thoroughly evaluated and followed by a doctor, no matter how tall they are. Here is an example of a growth chart and an explanation of how to read it.
How do I figure out what percentile my child is in?
On each growth chart there is a series of lines curving from the lower left and climbing up to the right side of the chart. These lines help you follow along so that you can see where your child falls on a growth curve.
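As a rough illustration of what the percentile lines represent, the sketch below (Python, not from the source) converts a height into an approximate percentile using a simple normal approximation. The reference mean and SD are hypothetical placeholders; real charts such as the WHO and CDC references use age- and sex-specific LMS parameters rather than a single mean and SD.

from statistics import NormalDist

def height_percentile(height_cm, ref_mean_cm, ref_sd_cm):
    """Approximate percentile (0-100) of a child's height against a reference."""
    z = (height_cm - ref_mean_cm) / ref_sd_cm
    return NormalDist().cdf(z) * 100

# Hypothetical reference values, for illustration only
print(round(height_percentile(110.0, ref_mean_cm=108.0, ref_sd_cm=4.5)))  # about the 67th percentile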

Accident prevention Haddon matrix


The Haddon Matrix is the most commonly used paradigm in the injury prevention field.
Developed by William Haddon in 1970, the matrix looks at factors related to personal attributes,
vector or agent attributes, and environmental attributes before, during and after an injury or
death. By utilizing this framework, one can then think about evaluating the relative importance
of different factors and design interventions.[1]
A typical Haddon Matrix (road traffic example):

Pre-crash phase
  Human factors: information; attitudes; impairment; police enforcement
  Vehicle and equipment factors: roadworthiness; lighting; braking; speed management
  Environmental factors: road design and road layout; speed limits; pedestrian facilities

Crash phase
  Human factors: use of restraints; impairment
  Vehicle and equipment factors: occupant restraints; other safety devices; crash-protective design
  Environmental factors: crash-protective roadside objects

Post-crash phase
  Human factors: first-aid skills; access to medics
  Vehicle and equipment factors: ease of access; fire risk
  Environmental factors: rescue facilities; congestion
BMI cut-off points for underweight, overweight and four levels of obesity
BMI classification

Body Mass Index (BMI) is a simple index of weight-for-height that is commonly used to classify underweight, overweight and obesity in adults. It is defined as the weight in kilograms divided by the square of the height in metres (kg/m²). For example, an adult who weighs 70 kg and whose height is 1.75 m will have a BMI of 22.9:

BMI = 70 kg / (1.75 m)² = 70 / 3.06 = 22.9
Table 1: The International Classification of adult underweight, overweight and obesity according to BMI

Classification      | BMI (kg/m²), principal cut-off points | BMI (kg/m²), additional cut-off points
Underweight         | <18.50                                | <18.50
  Severe thinness   | <16.00                                | <16.00
  Moderate thinness | 16.00-16.99                           | 16.00-16.99
  Mild thinness     | 17.00-18.49                           | 17.00-18.49
Normal range        | 18.50-24.99                           | 18.50-22.99; 23.00-24.99
Overweight          | ≥25.00                                | ≥25.00
  Pre-obese         | 25.00-29.99                           | 25.00-27.49; 27.50-29.99
Obese               | ≥30.00                                | ≥30.00
  Obese class I     | 30.00-34.99                           | 30.00-32.49; 32.50-34.99
  Obese class II    | 35.00-39.99                           | 35.00-37.49; 37.50-39.99
  Obese class III   | ≥40.00                                | ≥40.00
Source: Adapted from WHO, 1995, WHO, 2000 and WHO 2004.
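A minimal sketch, in Python, of the BMI calculation and the classification by the principal cut-off points in Table 1 (function and variable names are illustrative):

def bmi(weight_kg, height_m):
    """BMI = weight (kg) divided by the square of height (m)."""
    return weight_kg / (height_m ** 2)

def classify_bmi(value):
    """Classify using the WHO principal cut-off points from Table 1."""
    if value < 18.5:
        return "Underweight"
    if value < 25.0:
        return "Normal range"
    if value < 30.0:
        return "Overweight (pre-obese)"
    if value < 35.0:
        return "Obese class I"
    if value < 40.0:
        return "Obese class II"
    return "Obese class III"

b = bmi(70, 1.75)                    # 22.86..., i.e. 22.9 as in the worked example
print(round(b, 1), classify_bmi(b))  # 22.9 Normal range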

BMI values are age-independent and the same for both sexes. However, BMI may not
correspond to the same degree of fatness in different populations due, in part, to different body
proportions. The health risks associated with increasing BMI are continuous and the
interpretation of BMI gradings in relation to risk may differ for different populations.
In recent years there has been a growing debate on whether different BMI cut-off points should be developed for different ethnic groups, due to the increasing evidence that the associations between BMI, percentage of body fat, and body fat distribution differ across populations and that, therefore, health risks increase below the cut-off point of 25 kg/m² that defines overweight in the current WHO classification.
There had been two previous attempts to interpret the BMI cut-offs in Asian and Pacific populations, which contributed to the growing debate. Therefore, to shed light on this debate, WHO convened the Expert Consultation on BMI in Asian populations (Singapore, 8-11 July 2002).[3,4]

The WHO Expert Consultation concluded that the proportion of Asian people with a high risk of type 2 diabetes and cardiovascular disease is substantial at BMIs lower than the existing WHO cut-off point for overweight (≥25 kg/m²). However, the cut-off point for observed risk varies from 22 kg/m² to 25 kg/m² in different Asian populations, and for high risk it varies from 26 kg/m² to 31 kg/m². The Consultation, therefore, recommended that the current WHO BMI cut-off points (Table 1) should be retained as the international classification.[5]
However, the cut-off points of 23, 27.5, 32.5 and 37.5 kg/m² are to be added as points for public health action. It was therefore recommended that countries should use all categories (i.e. 18.5, 23, 25, 27.5, 30, 32.5 kg/m², and in many populations, 35, 37.5, and 40 kg/m²) for reporting purposes, with a view to facilitating international comparisons.

Discussion updates
A WHO working group was formed by the WHO Expert Consultation and is currently
undertaking a further review and assessment of available data on the relation between waist
circumference and morbidity and the interaction between BMI, waist circumference, and health
risk.[5]

SAAL seasonal awareness alert letter, measles and dengue stage measured
in terms of DEWS, DMIS

Chlorination graph stage 1, 2, 3, 4

Theory
Disinfection with chlorine is very popular in water and wastewater treatment because of its low cost, its ability to form a residual, and its effectiveness at low concentrations. Although it is used as a disinfectant, it is a dangerous and potentially fatal chemical if used improperly.
Although the disinfection process may seem simple, it is actually quite complicated. Chlorination in wastewater treatment systems is a fairly complex science which requires knowledge of the plant's effluent characteristics. When free chlorine is added to the wastewater, it takes on various forms depending on the pH of the wastewater. It is important to understand the forms of chlorine which are present because each has a different disinfecting capability. The acid form, HOCl, is a much stronger disinfectant than the hypochlorite ion, OCl-. The graph below depicts the chlorine fractions at different pH values (drawing by Erik Johnston).
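As a rough sketch (in Python, not from the source), the HOCl/OCl- split at a given pH can be estimated from the dissociation equilibrium, assuming a pKa of about 7.5 at 25 °C (the exact value varies slightly with temperature); this reproduces the general shape of the chlorine-fraction curve referred to above.

def hocl_fraction(pH, pKa=7.5):
    """Fraction of free chlorine present as HOCl (the stronger disinfectant)."""
    return 1.0 / (1.0 + 10 ** (pH - pKa))

for pH in (6.0, 7.0, 7.5, 8.0, 9.0):
    print(f"pH {pH}: {hocl_fraction(pH):.0%} HOCl, {1 - hocl_fraction(pH):.0%} OCl-")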

Ammonia present in the effluent can also cause problems, as chloramines are formed, which have very little disinfecting power. Methods to control which forms of chlorine are present include adjusting the pH of the wastewater prior to chlorination or simply adding a larger amount of chlorine. Adjusting the pH allows operators to favour the most desirable form of chlorine, hypochlorous acid, which has the greatest disinfecting power. Adding larger amounts of chlorine combats the chloramines because the ammonia present bonds to part of the chlorine, while further chlorine additions remain as hypochlorous acid or hypochlorite ion.
a) Chlorine gas, when exposed to water, reacts readily to form hypochlorous acid, HOCl, and hydrochloric acid: Cl2 + H2O -> HOCl + HCl
b) If the pH of the wastewater is greater than 8, the hypochlorous acid will dissociate to yield hypochlorite ion: HOCl <-> H+ + OCl-. If, however, the pH is much less than 7, then HOCl will not dissociate.
c) If ammonia is present in the wastewater effluent, then the hypochlorous acid will react to form one of three types of chloramines depending on the pH, temperature, and reaction time.
Monochloramine and dichloramine are formed in the pH range of 4.5 to 8.5; however, monochloramine is most common when the pH is above 8. When the pH of the wastewater is below 4.5, the most common form of chloramine is trichloramine (nitrogen trichloride), which produces a very foul odor. The equations for the formation of the different chloramines are as follows (Reynolds & Richards, 1996):
Monochloramine: NH3 + HOCl -> NH2Cl + H2O
Dichloramine: NH2Cl + HOCl -> NHCl2 + H2O
Trichloramine: NHCl2 + HOCl -> NCl3 + H2O
Chloramines are an effective disinfectant against bacteria but not against viruses. As a result, it is necessary to add more chlorine to the wastewater to prevent the persistence of chloramines and to form other, stronger disinfectants.
d) The final step is that additional free chlorine reacts with the chloramines to produce hydrogen ions, water, and nitrogen gas, which comes out of solution. In the case of monochloramine, the following reaction occurs:
2NH2Cl + HOCl -> N2 + 3HCl + H2O
Thus, added free chlorine reduces the concentration of chloramines in the disinfection process. Instead, the chlorine added beyond this point remains as the stronger disinfectant, hypochlorous acid.
Perhaps the most important stage of the wastewater treatment process is the disinfection stage. This stage is most critical because it has the greatest effect on public health as well as on the health of the world's aquatic systems. It is important to realize that wastewater treatment is not a cut-and-dried process but requires in-depth knowledge about the type of wastewater being treated and its characteristics to obtain optimum results. (White, 1972)

The graph shown above depicts the chlorine residual as a function of increasing
chlorine dosage with descriptions of each zone given below (Drawing by Erik
Johnston, adapted from Reynolds and Richards, 1996).
Zone I: Chlorine is reduced to chlorides.
Zone II: Chloramines are formed.
Zone III: Chloramines are broken down and converted to nitrogen gas which
leaves the system (Breakpoint).
Zone IV: Free residual.
Therefore, it is very important to understand the amount and type of chlorine that must be added to overcome the reduction in disinfectant strength that results from the wastewater's characteristics.
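As a hedged illustration of the breakpoint idea (not from the source), the sketch below estimates the chlorine dose needed to pass Zone III for a given ammonia-nitrogen concentration, assuming a rule-of-thumb chlorine-to-ammonia-nitrogen weight ratio of roughly 8:1 to 10:1; actual plant requirements depend on pH, temperature and other chlorine demands.

def breakpoint_dose_mg_per_L(ammonia_n_mg_per_L, cl_to_n_ratio=9.0):
    """Approximate Cl2 dose (mg/L) needed to reach breakpoint for a given NH3-N level.
    The 9:1 weight ratio is an assumed mid-range rule of thumb, not a plant-specific value."""
    return ammonia_n_mg_per_L * cl_to_n_ratio

print(breakpoint_dose_mg_per_L(1.2))  # roughly 10.8 mg/L Cl2 for 1.2 mg/L ammonia-nitrogen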

Implementation
Water Treatment
The following is a schematic of a water treatment plant (Drawing by Matt Curtis).

In water treatment, pre-chlorination is utilized mainly in situations where the inflow is taken from a surface water source such as a river, lake, or reservoir. Chlorine is usually added in the rapid mixing chamber and effectively prevents the majority of algal growth. Algal growth is a problem in water treatment plants because it builds up on the filter media and increases the head loss, which means that the filters need to be backwashed more frequently. In addition, the algal growth on the filter media causes taste and odor problems in the treated water. (Reynolds & Richards, 1996)
In the picture to the left, a residual monitor checks the chlorine level in the water leaving the treatment plant. A minimum value is required to prevent regrowth of bacteria throughout the distribution system, and a maximum value is established to prevent taste, odor, and health problems (photo by Matt Curtis).

Post-chlorination is almost always done in water treatment, but it can be replaced with chlorine dioxide or chloramines. In this stage chlorine is fed to the drinking water stream, which is then sent to the chlorine contact basin to allow the chlorine a long enough detention time to kill the viruses, bacteria, and protozoa that were not removed or rendered inactive in the prior stages of treatment (photo by Matt Curtis).
Drinking water requires a large addition of chlorine because there must be a residual amount of chlorine in the water that will carry through the distribution system until it reaches the tap of the user. After post-chlorination, the water is retained in a clear well prior to distribution. In the picture to the right, the clear pipe with the floater indicates the height of the water within the clear well. (Reynolds & Richards, 1996)

Survival paradox e.g. sometimes obese live more

The male-female health-survival paradox: a survey and register study of the impact of sex-specific selection and information bias
This study examined whether the health-survival paradox could be due partially to sex-specific selection and information bias in surveys. METHODS: The study is based on the linkage of three population-based surveys of 15,330 Danes aged 46-102 years with health registers covering the total Danish population regarding hospitalizations within the last 2 years and prescription medicine use within 6 months before the baseline surveys. RESULTS: Men had higher participation rates than women at all ages. Hospitalized women and women taking medications had higher participation rates compared with non-hospitalized women (difference = 0.7%-3.0%) and female non-users (difference = 0.8%-7.6%), respectively, whereas no consistent pattern was found among men according to hospitalization or medication use status. Men used fewer medications than women, but they underreported medication use to a similar degree as did women. CONCLUSIONS: Hospitalized women, as well as women using prescription medicine, were slightly overrepresented in the surveys. Hence, the study found some evidence that selection bias in surveys may contribute to the explanation of the health-survival paradox, but its contribution is likely to be small. However, there was no evidence of sex-specific reporting of medication use among study participants.

Obesity-survival paradox-still a controversy?


Since the original description of the obesity-survival paradox in 1999, which suggested a survival advantage for overweight and obese patients undergoing hemodialysis, a large body of evidence supporting the paradox has accumulated. The reason for the paradox has yet to be defined. Better nutrition may be a partial explanation, or it may be that in the uremic milieu excessive fat and surplus calories confer some survival advantage. The "surplus calorie theory" as a potential mechanism for the paradox is of great interest. If proven correct, it might explain why peritoneal dialysis patients, who receive excessive calories through dialysis, do not exhibit the paradox; more importantly, therapy could then be directed at enhancing caloric intake by renal failure patients to engender a better survival outcome. Finally, other clinical settings, for example congestive heart failure, have their own obesity-survival paradox. Thus, the paradox appears to be a wider phenomenon and might merely be the external expression of a larger principle yet to be uncovered.

Demographic graph preindustrial, early industrial, late industrial, developed

Summary of the theory

Demographic change in Sweden from 1735 to 2000.


Red line: crude death rate (CDR), blue line: (crude) birth rate (CBR)

The transition involves four stages, or possibly five.

In stage one, pre-industrial society, death rates and birth rates are high and
roughly in balance. All human populations are believed to have had this
balance until the late 18th century, when this balance ended in Western
Europe.[6] In fact, growth rates were less than 0.05% at least since the
Agricultural Revolution over 10,000 years ago. [6] Birth and death rates both
tend to be very high in this stage.[6] Because both rates are approximately in
balance, population growth is typically very slow in stage one. [6]
In stage two, that of a developing country, the death rates drop rapidly due to
improvements in food supply and sanitation, which increase life spans and
reduce disease. The improvements specific to food supply typically include
selective breeding and crop rotation and farming techniques. [6] Other
improvements generally include access to technology, basic healthcare, and
education. For example, numerous improvements in public health reduce
mortality, especially childhood mortality. [6] Prior to the mid-20th century,
these improvements in public health were primarily in the areas of food
handling, water supply, sewage, and personal hygiene. [6] One of the variables
often cited is the increase in female literacy combined with public health
education programs which emerged in the late 19th and early 20th centuries.
[6]
In Europe, the death rate decline started in the late 18th century in
northwestern Europe and spread to the south and east over approximately
the next 100 years.[6] Without a corresponding fall in birth rates this produces
an imbalance, and the countries in this stage experience a large increase in
population.

In stage three, birth rates fall due to access to contraception, increases in wages, urbanization, a reduction in subsistence agriculture, an increase in the status and education of women, a reduction in the value of children's work, an increase in parental investment in the education of children, and other social changes. Population growth begins to level off. The birth rate decline in
developed countries started in the late 19th century in northern Europe. [6]
While improvements in contraception do play a role in birth rate decline, it
should be noted that contraceptives were not generally available nor widely
used in the 19th century and as a result likely did not play a significant role in
the decline then.[6] Birth rate decline is also driven by a transition in values, not just by the availability of contraceptives.[6]

During stage four there are both low birth rates and low death rates. Birth
rates may drop to well below replacement level as has happened in countries
like Germany, Italy, and Japan, leading to a shrinking population, a threat to
many industries that rely on population growth. As the large group born
during stage two ages, it creates an economic burden on the shrinking
working population. Death rates may remain consistently low or increase
slightly due to increases in lifestyle diseases due to low exercise levels and
high obesity and an aging population in developed countries. By the late 20th
century, birth rates and death rates in developed countries leveled off at
lower rates.[5]
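As a simple worked illustration (not from the source) of how the birth and death rates in these stages translate into growth, the sketch below computes the crude rate of natural increase and the implied doubling time for a hypothetical stage-two country.

import math

def natural_increase_pct(cbr, cdr):
    """Crude rate of natural increase (% per year) from crude birth/death rates per 1,000."""
    return (cbr - cdr) / 10.0

def doubling_time_years(growth_pct):
    """Years for the population to double at a constant growth rate."""
    return math.log(2) / math.log(1 + growth_pct / 100)

g = natural_increase_pct(cbr=40, cdr=15)            # hypothetical values: 2.5 % per year
print(round(g, 1), round(doubling_time_years(g)))   # 2.5  28 (years to double)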

As with all models, this is an idealized picture of population change in these countries. The
model is a generalization that applies to these countries as a group and may not accurately
describe all individual cases. The extent to which it applies to less-developed societies today
remains to be seen. Many countries such as China, Brazil and Thailand have passed through the
Demographic Transition Model (DTM) very quickly due to fast social and economic change.
Some countries, particularly African countries, appear to be stalled in the second stage due to
stagnant development and the effect of AIDS.

Demographic Transition Theory


As societies develop, they transition from high birth and
high death rates to low birth and low death rates.
Key Points

During the transition, birth rates remain high while death rates drop, leading to
population growth.
During the pre-industrial stage, societies have high birth rates and high death rates.

During the industrial revolution, societies have high birth rates but death rates begin to
fall, leading to population growth.

During the post-industrial stage, societies have low birth rates and low death rates and
population stabilizes.

Terms

The demographic dividend

The demographic dividend is a window of opportunity in the development of a society or nation that opens up as fertility rates decline: faster rates of economic growth and human development become possible when the changing age structure is combined with effective policies and markets.

Economic development

Economic development generally refers to the sustained, concerted actions of policymakers and communities that promote the standard of living and economic health of a specific area.

Examples

Most of Western Europe has undergone a demographic transition in the past four
centuries. Prior to the Industrial Revolution, before the 1700s, the European population
was stable as both birth and death rates were high. The Industrial Revolution in the 1700s
and 1800s led to a population explosion as the birth rate remained high while the death
rate fell rapidly. By the 1900s and 2000s, the birth rate dropped to match the death rate,
and the population stabilized or, occasionally, began to shrink slightly.


According to Thomas Malthus, population growth is limited by available resources. But if that's
so, why is the world's population growing so rapidly in the regions that have the fewest
resources? In part, this puzzle can be explained by the demographic transition.
The Demographic Transition
This model illustrates the demographic transition, as birth and death rates rise and
fall but eventually reach equilibrium.

The demographic transition is a model and theory describing the transition from high birth and
death rates to low birth and death rates that occurs as part of the economic development of a
country. As countries industrialize, they undergo a transition during which death rates fall but

birth rates remain high. Consequently, population grows rapidly. This transition can be broken
down into four stages.

Stage One: The Pre-Industrial Stage


During the pre-industrial stage, societies have high birth and death rates. Because both rates are
high, population grows slowly and also tends to be very young: Many people are born, but few
live very long.
In pre-industrial society, children are an economic benefit to families, reinforcing high birth rates. Children contribute to the household economy by carrying water and firewood, caring for younger siblings, cleaning, cooking, working in fields, and doing other household chores. With few educational opportunities, raising children costs little more than feeding them. As they become adults, children become major contributors to the family income and also the primary form of insurance for adults in old age.

Stage Two: The Industrial Revolution


In stage two, as countries begin to industrialize, death rates drop rapidly. The decline in the death
rate is due initially to two factors: Improved food production and improved health and sanitation.
Food production is improved by more efficient agricultural practices and better transportation
and food distribution, which collectively prevent death due to starvation and lack of water.
Health improves with improved sanitation, especially water supply, sewerage, food handling, and
general personal hygiene, as well as medical progress.
As death rates fall, birth rates remain high, resulting in a population explosion. Population
growth is not due to increasing fertility, but to decreasing deaths: Many people continue to be
born, but now, more of them live longer. Falling death rates also change the age structure of the
population. In stage one, mortality is especially high among children between five and 10 years
old. The decline in death rates in stage two improves the odds of survival for children. Hence, the
age structure of the population becomes increasingly youthful.
In Western Europe, stage two occurred during the nineteenth century, with the Industrial
Revolution. Many less-developed countries entered stage two during the second half of the
twentieth century, creating the recent worldwide population explosion.

Stage Three: Post-Industrial Revolution


During the post-industrial stage, birth rates fall, eventually balancing the lower death rates.
Falling birth rates coincide with many other social and economic changes, including better
access to contraception, higher wages, urbanization, commercialization of agriculture, a
reduction in the value of children's work, and greater parental investment in the education of
children. Increasing female literacy and employment lower the uncritical acceptance of
childbearing and motherhood as measures of the status of women. Although the correlation
between birth rates and these changes is widely observed, it is not certain whether

industrialization and higher incomes lead to lower population or whether lower populations lead
to industrialization and higher incomes.
As birth rates fall, the age structure of the population changes again. Families have fewer
children to support, decreasing the youth dependency ratio. But as people live longer, the
population as a whole grows older, creating a higher rate of old age dependency. During the
period between the decline in youth dependency and rise in old age dependency, there is a
demographic window of opportunity called the demographic dividend: The population has fewer
dependents (young and old) and a higher proportion of working-age adults, yielding increased
economic growth. This phenomenon can further the correlation between demographic transition
and economic development.

Stage Four: Stabilization


During stage four, population growth stabilizes as birth rates fall into line with death rates. In
some cases, birth rates may even drop below replacement level, resulting in a shrinking
population. Death rates in developed countries may remain consistently low or increase slightly
due to lifestyle diseases related to low exercise levels and high obesity and an aging population.
As population growth slows, the large generations born during the previous stages put a growing
economic burden on the smaller, younger working population. Thus, some countries in stage four
may have difficulty funding pensions or other social security measures for retirees.

SWOT analysis for situation analysis

Six Steps to a Strategic Situation Analysis with SWOT


Written by Ramana Metlapall

Creating a Company Strategy: The Challenge


Jack, a new senior vice president, had been tasked with preparing a situation analysis report that
senior management would use to help form company strategy. Since Jack had never contributed
to company strategy before, he reviewed his predecessor's analysis. He discovered that the
previous year's report did not capture the information that senior management would need, nor
did it include a way to present that information in a manner that management could use to create
a clear strategy. Additionally, the analysis did not reflect his team's capabilities or ideas.
It became apparent to Jack that his predecessor hadn't used the situation analysis report as an
opportunity to create a collaborative exercise that involved all the team members. This explained
something else he had observed when he had taken over: his team didn't know anything about
the previous situation analysis and had no idea how it had contributed to the company's strategy.
Not surprisingly, his team felt disconnected from the company's current strategy and goals.

Assessing the Situation


Jack realized the necessity of acquiring tools to elicit and articulate his team's contributions and to present them effectively to management; otherwise it was unlikely that his input would be incorporated into the company's strategic plans. Without that input, Jack's team would lack the focus, commitment and buy-in essential to the success of any strategic overhaul. And if the process didn't motivate and energize his subordinates, involvement in preparing the report would be viewed as one more task imposed on them that would negatively impact their day-to-day responsibilities.

The Solution: The Six-Step SWOT Analysis


The team adopted the SWOT Analysis framework as part of a series of collaborative work sessions that would effectively prepare their current situation analysis report. SWOT analysis is a strategic planning method used to evaluate the Strengths, Weaknesses, Opportunities and Threats involved in a project or business venture, indicating where opportunities should be pursued, where certain threats or risks should be avoided, and whether resources are allocated properly. With my help, Jack implemented the following six-step process:

Step 1. Organize Teams of Four to Six Individuals


Once the teams are divided, designate a facilitation team that will be responsible for
orchestrating the multiple meetings of the other teams, including coordinating the logistics
(meeting rooms, pens, flip charts, etc.) for the meetings.

Step 2. Provide Pre-work to Prepare the Participants


Create detailed event information and send the package to meeting participants in advance.
Include listings of all the meetings, agendas for each of those meetings, and the purpose and
objectives of the process. Recommend document sources that staff can use to augment their
individual thoughts on internal and external factors influencing the business (e.g., analyst
observations on industry environment, reports on macroeconomic conditions, market
segmentation data, internal performance metrics). Ask participants to group key factors under
categories that you provide (e.g., resources, competencies, managerial deficiencies, inadequately
skilled resources). This prompts the participants to identify broader categories that specific
factors fall into.

Step 3. Conduct Round-robin Meetings to Collect Input on


Internal Factors
A facilitator for each team will ask each participant to provide a list of internal factors. Write the
internal factors on individual sticky notes or 3x5 cards and place them so they are displayed
clearly. Group the factors that enhance the company's situation under Strengths and those that
weaken the situation and competitive position under Weaknesses. Facilitate a discussion that
generates more factors, deepens understandings of the factors and uncovers relationships
between them. Record the results of these discussions on the related notes or cards.

Step 4. Conduct Round-robin Meetings to Collect Ideas on


External Factors
Have the teams repeat the round-robin exercise looking at external factors. Group these under the
headings of Threats and Opportunities. Threats are those factors that are possibly detrimental to
the organization's competitive position in the marketplace; opportunities are factors that enhance
the company's position. Make it clear that some factors can appear on more than one list. For
example, an opportunity can also be interpreted as a threat (if, for example, the opportunity is
seized by a competitor).

Step 5. Vote on Top Strengths, Weaknesses, Threats and


Opportunities
Collate the lists from the individual teams and put them in a central location where all of the
teams can review them. Schedule a time for participants to vote on the top three strengths,
weaknesses, opportunities and threats.

Step 6. Prioritize Strategic Alternatives


Have the teams brainstorm, using the "top three" lists created in the previous steps. For each
opportunity, have participants identify the company's relevant strengths and weaknesses. Repeat

the process for each threat, identifying the strengths that the company can use to defend itself
from the threats and the weaknesses that leave the company exposed. When the company has
corresponding strengths and few weaknesses, this opportunity should be pursued vigorously. On
the other hand, the company should consider exiting those areas where it has many threats and
many weaknesses (especially if the threats target the company's weaknesses). Where it makes
sense for the company to stay in threatened areas, the teams should recommend how existing
strengths can be redeployed or acquired. Where there are opportunities worth pursuing, but the
company lacks strengths, recommendations can be prepared that include partnering with other
organizations or acquiring the necessary skills or resources through other means.

Final Assessment
Jack personally orchestrated the process, setting up a series of half-day work sessions that
involved his direct reports and several members of the functional areas reporting to him. He had
the groups use SWOT analysis as a key job aid in their work sessions, supported by facilitators
who understood the process. Jack also brought in outside facilitators to elicit objective opinions
and discussions.
The teams, which in previous years dreaded the paperwork demanded in creating a situation
analysis, were now energized by the interactive work sessions. Furthermore, they left the
meetings feeling their ideas would be used in the strategic plan that the corporation adopted and
that any resulting strategy was going to be their strategy. They weren't disappointed, either.
Because the resulting report summarized key factors and tied them directly to strategic
alternatives, the document had a significant impact on the development of the company's
strategic plan.

Health economics: dollars per life year gained, dollars per quality-adjusted life year gained. These are cost-effectiveness measures; outcomes can also be measured in terms of DALYs

The quality-adjusted life year or quality-adjusted life-year (QALY) is a measure of disease burden, including both the quality and the quantity of life lived.[1][2] It is used in assessing the value for money of a medical intervention. According to Pliskin et al., the QALY model requires utility independence, risk neutrality, and constant proportional tradeoff behaviour.[3]
The QALY is based on the number of years of life that would be added by the intervention. Each year in perfect health is assigned the value of 1.0, down to a value of 0.0 for being dead. If the extra years would not be lived in full health, for example if the patient would lose a limb, be blind or have to use a wheelchair, then the extra life-years are given a value between 0 and 1 to account for this.[citation needed] Under certain methods, such as the EQ-5D, the QALY value can be negative.
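A minimal sketch of the "dollars per QALY gained" calculation implied by the heading above; all figures are hypothetical.

def qalys_gained(extra_years, utility_weight):
    """QALYs gained = added life-years weighted by health-related quality of life (0-1)."""
    return extra_years * utility_weight

def cost_per_qaly(incremental_cost, qalys):
    """Cost per QALY gained for an intervention versus its comparator."""
    return incremental_cost / qalys

q = qalys_gained(extra_years=4, utility_weight=0.75)    # 3.0 QALYs
print(cost_per_qaly(incremental_cost=30_000, qalys=q))  # 10000.0 dollars per QALY gained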
The disability-adjusted life year (DALY) is a measure of overall disease burden, expressed as
the number of years lost due to ill-health, disability or early death.
Originally developed by Harvard University for the World Bank in 1990, the World Health
Organization subsequently adopted the method in 1996 as part of the Ad hoc Committee on
Health Research "Investing in Health Research & Development" report. The DALY is becoming
increasingly common in the field of public health and health impact assessment (HIA). It
"extends the concept of potential years of life lost due to premature death...to include equivalent
years of 'healthy' life lost by virtue of being in states of poor health or disability."[2] In so doing,
mortality and morbidity are combined into a single, common metric.
Traditionally, health liabilities were expressed using one measure: (expected or average number
of) 'Years of Life Lost' (YLL). This measure does not take the impact of disability into account,
which can be expressed by: 'Years Lived with Disability' (YLD). DALYs are calculated by taking
the sum of these two components. In a formula:
DALY = YLL + YLD.[3]
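A minimal sketch of the DALY = YLL + YLD calculation with hypothetical inputs (ignoring the age weighting and discounting that some DALY formulations also apply):

def yll(deaths, life_expectancy_remaining):
    """Years of Life Lost = deaths x standard remaining life expectancy at age of death."""
    return deaths * life_expectancy_remaining

def yld(incident_cases, disability_weight, duration_years):
    """Years Lived with Disability = cases x disability weight x average duration."""
    return incident_cases * disability_weight * duration_years

dalys = yll(deaths=100, life_expectancy_remaining=30) + \
        yld(incident_cases=1_000, disability_weight=0.2, duration_years=5)
print(dalys)  # 3000 + 1000 = 4000 DALYs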

The DALY relies on an acceptance that the most appropriate measure of the effects of chronic
illness is time, both time lost due to premature death and time spent disabled by disease. One
DALY, therefore, is equal to one year of healthy life lost. Japanese life expectancy statistics are
used as the standard for measuring premature death, as the Japanese have the longest life
expectancies.[4]

Looking at the burden of disease via DALYs can reveal surprising things about a population's
health. For example, the 1990 WHO report indicated that 5 of the 10 leading causes of disability
were psychiatric conditions. Psychiatric and neurologic conditions account for 28% of all years
lived with disability, but only 1.4% of all deaths and 1.1% of years of life lost. Thus, psychiatric
disorders, while traditionally not regarded as a major epidemiological problem, are shown by
consideration of disability years to have a huge impact on populations.

Cost-benefit analysis: outcome measured in monetary terms - not suitable for health

Cost-benefit analysis (CBA), sometimes called benefit-cost analysis (BCA), is a systematic approach to estimating the strengths and weaknesses of alternatives that satisfy transactions, activities or functional requirements for a business. It is a technique used to determine the options that provide the best approach for adoption and practice in terms of benefits in labour, time, cost savings, etc. (David, Ngulube and Dube, 2013). CBA is also defined as a systematic process for calculating and comparing the benefits and costs of a project, decision or government policy (hereafter, "project").
Broadly, CBA has two purposes:
1. To determine if it is a sound investment/decision (justification/feasibility).
2. To provide a basis for comparing projects. It involves comparing the total expected cost of each option against the total expected benefits, to see whether the benefits outweigh the costs, and by how much.[1]

CBA is related to, but distinct from cost-effectiveness analysis. In CBA, benefits and costs are
expressed in monetary terms, and are adjusted for the time value of money, so that all flows of
benefits and flows of project costs over time (which tend to occur at different points in time) are
expressed on a common basis in terms of their "net present value."
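A short sketch of the net present value comparison described above; the cash flows and the 5% discount rate are hypothetical.

def npv(cash_flows, rate):
    """Net present value of yearly net flows (benefits minus costs); cash_flows[t] is year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Year 0: project cost of 100,000; years 1-5: benefits of 30,000 per year
flows = [-100_000] + [30_000] * 5
print(round(npv(flows, rate=0.05)))  # about 29,884, so benefits outweigh costs at this rate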
Closely related, but slightly different, formal techniques include cost-effectiveness analysis, cost-utility analysis, economic impact analysis, fiscal impact analysis, and social return on investment (SROI) analysis.

Social marketing models in relation to health: product, price, place, promotion
The existing literature suggests that theories and models can serve as valuable
frameworks for the design and evaluation of health interventions. However,
evidence on the use of theories and models in social marketing interventions is
sparse. The purpose of this systematic review is to identify to what extent papers
about social marketing health interventions report using theory, which theories are
most commonly used, and how theory was used. A systematic search was
conducted for articles that reported social marketing interventions for the
prevention or management of cancer, diabetes, heart disease, HIV, STDs, and
tobacco use, and behaviors related to reproductive health, physical activity,
nutrition, and smoking cessation. Articles were included if they were published in English after 1990, reported an evaluation, and met the six social marketing benchmark criteria (behavior change, consumer research, segmentation and targeting, exchange, competition and marketing mix). Twenty-four articles, describing 17 interventions,
met the inclusion criteria. Of these 17 interventions, 8 reported using theory and 7
stated how it was used. The transtheoretical model/stages of change was used more
often than other theories. Findings highlight an ongoing lack of use or
underreporting of the use of theory in social marketing campaigns and reinforce the
call to action for applying and reporting theory to guide and evaluate interventions.

Social marketing seeks to develop and integrate marketing concepts with other approaches to
influence behaviors that benefit individuals and communities for the greater social good. It seeks
to integrate research, best practice, theory, audience and partnership insight, to inform the
delivery of competition sensitive and segmented social change programs that are effective,
efficient, equitable and sustainable.[1]
Although "social marketing" is sometimes seen only as using standard commercial marketing
practices to achieve non-commercial goals, this is an oversimplification. The primary aim of
social marketing is "social good", while in "commercial marketing" the aim is primarily
"financial". This does not mean that commercial marketers can not contribute to achievement of
social good.

Increasingly, social marketing is being described as having "two parents": a "social parent", including social science and social policy approaches, and a "marketing parent", including commercial and public sector marketing approaches.

What is social marketing?


In the preface to Marketing Social Change, Andreasen defines social marketing as "the application of proven concepts and techniques drawn from the commercial sector to promote changes in diverse socially important behaviors such as drug use, smoking, sexual behavior... This marketing approach has an immense potential to affect major social problems if we can only learn how to harness its power."1 By proven techniques Andreasen meant methods drawn from behavioural theory, persuasion psychology, and marketing science with regard to health behaviour, human reactions to messages and message delivery, and the "marketing mix", or "four Ps", of marketing (place, price, product, and promotion).2 These methods include using behavioural theory to influence behaviour that affects health; assessing factors that underlie the receptivity of audiences to messages, such as the credibility and likeability of the argument; and strategic marketing of messages that aim to change the behaviour of target audiences using the four Ps.3

How is social marketing applied to health?


Social marketing is widely used to influence health behaviour. Social marketers use a wide range
of health communication strategies based on mass media; they also use mediated (for example,
through a healthcare provider), interpersonal, and other modes of communication; and marketing
methods such as message placement (for example, in clinics), promotion, dissemination, and
community level outreach. Social marketing encompasses all of these strategies.
Communication channels for health information have changed greatly in recent years. One-way
dissemination of information has given way to a multimodal transactional model of
communication. Social marketers face challenges such as increased numbers and types of health
issues competing for the public's attention; limitations on people's time; and increased numbers
and types of communication channels, including the internet.4 A multimodal approach is the most
effective way to reach audiences about health issues.5
Figure 1 summarises the basic elements or stages of social marketing.6 The six basic stages are:
developing plans and strategies using behavioural theory; selecting communication channels and
materials based on the required behavioural change and knowledge of the target audience;
developing and pretesting materials, typically using qualitative methods; implementing the
communication programme or campaign; assessing effectiveness in terms of exposure and
awareness of the audience, reactions to messages, and behavioural outcomes (such as improved
diet or not smoking); and refining the materials for future communications. The last stage feeds
back into the first to create a continuous loop of planning, implementation, and improvement.

Fig 1
Social marketing wheel

Audience segmentation
One of the key decisions in social marketing that guides the planning of most health
communications is whether to deliver messages to a general audience or whether to segment
into target audiences. Audience segmentation is usually based on sociodemographic, cultural,
and behavioural characteristics that may be associated with the intended behaviour change. For
example, the National Cancer Institute's five a day for better health campaign developed
specific messages aimed at Hispanic people, because national data indicate that they eat fewer
fruits and vegetables and may have cultural reasons that discourage them from eating locally
available produce.6
The broadest approach to audience segmentation is targeted communications, in which
information about population groups is used to prepare messages that draw attention to a generic
message but are targeted using a person's name (for example, marketing by mass mail). This
form of segmentation is used commercially to aim products at specific customer profiles (for
example, upper middle income women who have children and live in suburban areas). It has
been used effectively in health promotion to develop socially desirable images and prevention
messages (fig 2).

Fig 2
Image used in the American Legacy Foundation's Truth antismoking campaign
aimed at young people

Tailored communications are a more specific, individualised form of segmentation. Tailoring


can generate highly customised messages on a large scale. Over the past 10-15 years, tailored
health communications have been used widely for public health issues. Such communications have been defined as "any combination of information and behavior change strategies intended to reach one specific person, based on characteristics that are unique to that person, related to the outcome of interest, and derived from an individual assessment."7 Because tailored materials
consider specific cognitive and behavioural patterns as well as individual demographic
characteristics, they are more precise than targeted materials but are more limited in population
reach and may be more expensive to develop and implement.

Media trends and adapting commercial marketing


As digital sources of health information continue to proliferate, people with low income and low
education will find it more difficult to access health information. This digital divide affects a
large proportion of people in the United States and other Western nations. Thus, creating
effective health messages and rapidly identifying and adapting them to appropriate audiences
(which are themselves rapidly changing) is essential to achieving the Healthy People 2010 goal
of reducing health disparity within the US population.8
In response, social marketers have adapted commercial marketing for health purposes. Social marketing now uses commercial marketing techniques, such as analysing target audiences, identifying the objectives of targeted behaviour changes, tailoring messages, and adapting strategies like branding, to promote the adoption and maintenance of health behaviours. Key trends include the recognition that messages on health behaviour vary along a continuum from prevention to promotion and maintenance, as reflected by theories such as the transtheoretical model9; the need for unified message strategies and methods of measuring reactions and outcomes10; and competition between health messages and messages that promote unhealthy behaviour from product marketers and others.11

Prevention versus promotion


Social marketing messages can aim to prevent risky health behaviour through education or the promotion of behavioural alternatives. Early anti-drug messages in the US sought to prevent, whereas the antismoking campaigns of the US Centers for Disease Control and Prevention and the American Legacy Foundation offered socially desirable lifestyle alternatives ("be cool by not smoking").12,13 The challenge for social marketing is how best to compete against product advertisers with bigger budgets and more ways to reach consumers.

Competing for attention


Social marketing aimed at changing health behaviour encounters external and internal
competition. Digital communications proffer countless unhealthy eating messages along with
seductive lifestyle images associated with cigarette brands. Cable television, the web, and video
games offer endless opportunities for comorbid behaviour. At the same time, product marketers
add to the confusion by marketing reduced risk cigarettes or obscure benefits of foods (such as
low salt content in foods high in saturated fat).

How is social marketing used to change health behaviour?


Social marketing uses behavioural, persuasion, and exposure theories to target changes in health
risk behaviour. Social cognitive theory based on response consequences (of individual
behaviour), observational learning, and behavioural modelling is widely used.14 Persuasion
theory indicates that people must engage in message elaboration (developing favourable

thoughts about a message's arguments) for long term persuasion to occur.3 Exposure theorists
study how the intensity of and length of exposure to a message affects behaviour.10
Social marketers use theory to identify behavioural determinants that can be modified. For
example, social marketing aimed at obesity might use behavioural theory to identify connections
between behavioural determinants of poor nutrition, such as eating habits within the family,
availability of food with high calorie and low nutrient density (junk food) in the community, and
the glamorisation of fast food in advertising. Social marketers use such factors to construct
conceptual frameworks that model complex pathways from messages to changes in behaviour
(fig 3).

Fig 3
Example of social marketing conceptual framework

In applying theory based conceptual models, social marketers again use commercial marketing
strategies based on the marketing mix.2 For example, they develop brands on the basis of health
behaviour and lifestyles, as commercial marketers would with products. Targeted and tailored
message strategies have been used in antismoking campaigns to build brand equity: a set of attributes that a consumer has for a product, service, or (in the case of health campaigns) set of behaviours.13 Brands underlying the VERB campaign (which encourages young people to be
physically active) and Truth campaigns were based on alternative healthy behaviours, marketed
using socially appealing images that portrayed healthy lifestyles as preferable to junk food or fast
food and cigarettes.14,15

Can social marketing change health behaviour?


The best evidence that social marketing is effective comes from studies of mass communication
campaigns. The lessons learned from these campaigns can be applied to other modes of
communication, such as communication mediated by healthcare providers and interpersonal
communication (for example, mass nutrition messages can be used in interactions between
doctors and patients).
Social marketing campaigns can change health behaviour and behavioural mediators, but the
effects are often small.5 For example, antismoking campaigns, such as the American Legacy
Foundation's Truth campaign, can reduce the number of people who start smoking and progress
to established smoking.16 From 1999 to 2002, the prevalence of smoking in young people in the
US decreased from 25.3% to 18%, and the Truth campaign was responsible for about 22% of that
decrease.16

Summary points
- Social marketing uses commercial marketing strategies such as audience segmentation and branding to change health behaviour
- Social marketing is an effective way to change health behaviour in many areas of health risk
- Doctors can reinforce these messages during their direct and indirect contact with patients
This is a small effect by clinical standards, but it shows that social marketing can have a big
impact at the population level. For example, if the number of young people in the US was 40
million, 10.1 million would have smoked in 1999, and this would be reduced to 7.2 million by
2002. In this example, the Truth campaign would be responsible for nearly 640 000 young
people not starting to smoke; this would result in millions of added life years and reductions in
healthcare costs and other social costs.
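The arithmetic in this example can be reproduced directly; the 40 million population figure is the text's own illustrative assumption.

# Reproducing the worked example above
young_people = 40_000_000
smokers_1999 = 0.253 * young_people      # about 10.1 million
smokers_2002 = 0.180 * young_people      # about 7.2 million
decline = smokers_1999 - smokers_2002    # about 2.9 million fewer smokers
attributable_to_truth = 0.22 * decline   # share attributed to the Truth campaign
print(round(attributable_to_truth))      # roughly 642,000, i.e. "nearly 640 000"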
In a study of 48 social marketing campaigns in the US based on the mass media, the average
campaign accounted for about 9% of the favourable changes in health risk behaviour, but the
results were variable.17 Non-coercive campaigns (those that simply delivered health
information) accounted for about 5% of the observed variation.17
A study of 17 recent European health campaigns on a range of topics including promotion of
testing for HIV, admissions for myocardial infarction, immunisations, and cancer screening also
found small but positive effects.18 This study showed that behaviours that need to be changed
once or only a few times are easier to promote than those that must be repeated and maintained
over time.19 Some examples (such as breast feeding, taking vitamin A supplements, and
switching to skimmed milk) have shown greater effect sizes, and they seem to have higher rates
of success.19,20

Implications for healthcare practitioners


This brief overview indicates that social marketing practices can be useful in healthcare practice.
Firstly, during social marketing campaigns, such as antismoking campaigns, practitioners should
reinforce media messages through brief counselling. Secondly, practitioners can make a valuable
contribution by providing another communication channel to reach the target audience. Finally,
because practitioners are a trusted source of health information, their reinforcement of social
marketing messages adds value beyond the effects of mass communication.


Use of Social Marketing to Develop Culturally Innovative Diabetes Interventions
Rosemary Thackeray, PhD, MPH and Brad L. Neiger, PhD, CHES (Brigham Young University, Department of Health Science)
Abstract
Diabetes continues to increase in magnitude throughout the United States and abroad. It is
expected to increase by 165% from 2000 to 2050. Diabetes poses a particular burden to those in
ethnic minority populations. African Americans, Hispanics, and American Indians are more
likely to be affected by diabetes, to be less active in health-promoting behavior, and to have
fewer resources to address related complications compared with whites.
Because diabetes disproportionately affects ethnic minorities in the United States, it is imperative
that interventions be tailored to these audiences. To develop effective interventions, program
developers must identify an audience-centered planning process that provides a foundation for
culturally innovative interventions.
Social marketing efforts in both domestic and international settings have been successful at
improving the lives and health status of targeted individuals and communities. This article
describes how the social marketing process can be used to create interventions that are culturally
innovative and relevant. The Social Marketing Assessment and Response Tool (SMART) model
is used to establish a relationship between social marketing and culturally specific interventions.
The model incorporates a systematic and sequential process that includes preliminary planning;
audience, channel, and market analyses; materials development and pretesting; implementation;
and evaluation. Diabetes interventions that are developed and implemented with this approach
hold promise as solutions that are more likely to be adopted by targeted audiences and to result in
the desired health status changes.
Communication is a central aspect of health promotion and the opportunity for mass communication makes the media a popular option amongst health promoters. The media in this context includes any non-personal channels of communication, from leaflets to television commercials to teaching packs. These channels can be employed directly using deliberately
designed media materials. Alternatively, they may be used indirectly by stimulating editorial
interest and comment on a particular issue. This paper will make some suggestions for improving
the use of the media in health promotion.
In seeking guidance about how to make best use of the media, health promoters can turn to a
number of disciplines, including education, medicine, social psychology and communication
theory. Another obvious source of insights is commercial marketing, where purposeful media
communication, most apparent in the form of advertising, is in continuous use. It is this source of advice that we want to examine here.
In recent years much has been written about the application of commercial marketing approaches
to social issues-so called social marketing. This paper will begin by examining the origins of
social marketing. It will then discuss its key concepts and consider how these might help health
promoters communicate more effectively.
However, to interest active health promoters, the discussion needs to progress beyond ideas and concepts, beyond theory, and demonstrate the practical benefits that social marketing can provide in real-life situations. The paper will therefore continue by examining the case histories of a number
of media based health promotion campaigns and illustrate how the most basic of social
marketing insights, consumer orientation, has contributed to them.

Policy document of health sector reforms

Health Sector Reforms


The focus is on:

Guidance on approaches to expand coverage of Essential Health Services through a package of quality health services to improve access and reduce morbidity and mortality rates, especially among women and children.

Advocacy for provision of higher-quality basic health services and an increased focus on serving the disadvantaged areas and the vulnerable groups.

Promotion of full community participation in health services delivery through their involvement in the planning, operation and control of formal health services delivery.

Promotion of the contracting of health services to take advantage of private sector resources in expanding the availability of and access to quality health services.

The Regional Office has developed a guideline on Monitoring and Evaluation of Health Sector Reforms. The purpose of this guideline is therefore to provide planners and policymakers in the African Region with guidance on monitoring and evaluating the process and progress of health sector reforms within and across countries on a regular basis. The latest countries to receive support from AFRO for their health sector reforms are DRC and Kenya.
Public Policy Reforms
Service Delivery Reforms
Universal Coverage Reforms
Leadership Reforms

Qualities of a good leader

Top 10 Qualities That Make A Great Leader



Having a great idea and assembling a team to bring that concept to life is the first step in creating a successful business venture. While finding a new and unique idea is rare enough, the ability to successfully execute this idea is what separates the dreamers from the entrepreneurs. However you see yourself, whatever your age may be, as soon as you make that exciting first hire, you have taken the first steps in becoming a powerful leader. When money is tight, stress levels are high, and the visions of instant success don't happen like you thought, it's easy to let those emotions get to you, and thereby your team. Take a breath, calm yourself down, and remind yourself of the leader you are and would like to become. Here are some key qualities that every good leader should possess and learn to emphasize.
Honesty
Whatever ethical plane you hold yourself to, when you are responsible for a team of people, it's
important to raise the bar even higher. Your business and its employees are a reflection of
yourself, and if you make honest and ethical behavior a key value, your team will follow suit.
As we do at RockThePost, the crowdfunding platform for entrepreneurs and small businesses I
co-founded, try to make a list of values and core beliefs that both you and your brand represent,
and post this in your office. Promote a healthy interoffice lifestyle, and encourage your team to
live up to these standards. By emphasizing these standards, and displaying them yourself, you will hopefully shape the office environment into a friendly and helpful workspace.
Ability to Delegate
Finessing your brand vision is essential to creating an organized and efficient business, but if you
don't learn to trust your team with that vision, you might never progress to the next stage. It's
important to remember that trusting your team with your idea is a sign of strength, not weakness.
Delegating tasks to the appropriate departments is one of the most important skills you can
develop as your business grows. The emails and tasks will begin to pile up, and the more you
stretch yourself thin, the lower the quality of your work will become, and the less you will
produce.
The key to delegation is identifying the strengths of your team, and capitalizing on them. Find
out what each team member enjoys doing most. Chances are if they find that task more
enjoyable, they will likely put more thought and effort behind it. This will not only prove to your
team that you trust and believe in them, but will also free up your time to focus on the higher-level tasks that should not be delegated. It's a fine balance, but one that will have a huge impact
on the productivity of your business.
Communication
Knowing what you want accomplished may seem clear in your head, but if you try to explain it
to someone else and are met with a blank expression, you know there is a problem. If this has
been your experience, then you may want to focus on honing your communication skills. Being
able to clearly and succinctly describe what you want done is extremely important. If you can't relate your vision to your team, you won't all be working towards the same goal.
Training new members and creating a productive work environment all depend on healthy lines
of communication. Whether that stems from an open door policy to your office, or making it a
point to talk to your staff on a daily basis, making yourself available to discuss interoffice issues
is vital. Your team will learn to trust and depend on you, and will be less hesitant to work harder.
Sense of Humor
If your website crashes, you lose that major client, or your funding dries up, guiding your team
through the process without panicking is as challenging as it is important. Morale is linked to
productivity, and it's your job as the team leader to instill a positive energy. That's where your sense of humor will finally pay off. Encourage your team to laugh at the mistakes instead of crying. If you are constantly learning to find the humor in the struggles, your work environment will become a happy and healthy space that your employees look forward to working in, rather than dreading. Make it a point to crack jokes with your team and encourage personal discussions of weekend plans and trips. It's these short breaks from the task at hand that help keep productivity levels high and morale even higher.
At RockThePost, we place a huge emphasis on humor and a light atmosphere. Our office is dog
friendly, and we really believe it is the small, light hearted moments in the day that help keep our
work creative and fresh. One tradition that we like, and that brings the team closer, is planning a fun prank on all new employees on their first day. It breaks the ice and immediately creates that sense of familiarity.
Confidence
There may be days where the future of your brand is worrisome and things aren't going
according to plan. This is true with any business, large or small, and the most important thing is
not to panic. Part of your job as a leader is to put out fires and maintain the team morale. Keep
up your confidence level, and assure everyone that setbacks are natural and the important thing is
to focus on the larger goal. As the leader, by staying calm and confident, you will help keep the
team feeling the same. Remember, your team will take cues from you, so if you exude a level of
calm damage control, your team will pick up on that feeling. The key objective is to keep
everyone working and moving ahead.
Commitment

If you expect your team to work hard and produce quality content, you're going to need to lead
by example. There is no greater motivation than seeing the boss down in the trenches working
alongside everyone else, showing that hard work is being done on every level. By proving your
commitment to the brand and your role, you will not only earn the respect of your team, but will
also instill that same hardworking energy among your staff. It's important to show your commitment not only to the work at hand, but also to your promises. If you pledged to host a holiday party, or uphold summer Fridays, keep your word. You want to create a reputation not just for working hard, but also for being a fair leader. Once you have gained the respect of your
team, they are more likely to deliver the peak amount of quality work possible.
Positive Attitude
You want to keep your team motivated towards the continued success of the company, and keep
the energy levels up. Whether that means providing snacks, coffee, relationship advice, or even
just an occasional beer in the office, remember that everyone on your team is a person. Keep the
office mood a fine balance between productivity and playfulness.
If your team is feeling happy and upbeat, chances are they won't mind staying that extra hour to
finish a report, or devoting their best work to the brand.
Creativity
Some decisions will not always be so clear-cut. You may be forced at times to deviate from your
set course and make an on-the-fly decision. This is where your creativity will prove to be vital. It
is during these critical situations that your team will look to you for guidance and you may be
forced to make a quick decision. As a leader, it's important to learn to think outside the box and to choose which of two bad choices is the best option. Don't immediately choose the first or easiest possibility; sometimes it's best to give these issues some thought, and even turn to your team for
guidance. By utilizing all possible options before making a rash decision, you can typically reach
the end conclusion you were aiming for.
Intuition
When leading a team through uncharted waters, there is no roadmap on what to do. Everything is
uncertain, and the higher the risk, the higher the pressure. That is where your natural intuition has
to kick in. Guiding your team through the process of your day-to-day tasks can be honed down to
a science. But when something unexpected occurs, or you are thrown into a new scenario, your
team will look to you for guidance. Drawing on past experience is a good reflex, as is reaching
out to your mentors for support. Eventually though, the tough decisions will be up to you to
decide and you will need to depend on your gut instinct for answers. Learning to trust yourself is
as important as your team learning to trust you.
Ability to Inspire
Creating a business often involves a bit of forecasting. Especially in the beginning stages of a
startup, inspiring your team to see the vision of the successes to come is vital. Make your team

feel invested in the accomplishments of the company. Whether everyone owns a piece of equity,
or you operate on a bonus system, generating enthusiasm for the hard work you are all putting in
is so important. Being able to inspire your team is great for focusing on the future goals, but it is
also important for the current issues. When you are all mired deep in work, morale is low, and
energy levels are fading, recognize that everyone needs a break now and then. Acknowledge the
work that everyone has dedicated and commend the team on each of their efforts. It is your job to
keep spirits up, and that begins with an appreciation for the hard work.

Leadership Qualities Everyone Can Use


Here are a few of the qualities and traits of great leaders that you can learn and practice:

Self-assessment: Effective leaders periodically take stock of their personal strengths and shortcomings. They ask: What do I like to do? What am I really good at? What are my areas of weakness, and what do I dislike doing?
Knowing your areas of weakness does not make you weak; on the contrary, it
allows you to delegate to others who have those abilities, in order to achieve
the common goal. Rather than clinging to the false belief that they can do it
all, great leaders hire people who complement, rather than supplement, their
skills. Working on your areas of weakness will improve your leadership ability, and recognizing them makes you more human.
Sharp perception: Do you know how people really perceive you? Effective
leaders do. They have an easy level of honest communication with their
teams and their peers, and a thorough understanding of how they are
perceived. Testing others' perception of you can be as simple as observing
their behavior. Are your co-workers and team members relaxed around you?
Does all conversation stop when you enter the room?
If you really want to know what people think, just ask them. You may receive
feedback that you're not listening or showing appreciation as well as you could be. If you've established an environment of honest and open
communication, you should be able to ask about your good qualities and the
areas you need to improve on. Your staff will appreciate your effort.

Responsive to the group's needs: Being perceptive can also help a leader
be more effective in knowing the needs of the team. Some teams value trust
over creativity; others prefer a clear communicator to a great organizer.
Building a strong team is easier when you know the values and goals of each
individual, as well as what they need from you as their leader.

Knowing the organization: Effective leaders know the organization's overall purpose and goals, and the agreed-upon strategies to achieve these goals; they also know how their team fits into the big picture, and the part they play in helping the organization grow and thrive. Full knowledge of your organization inside and out is vital to becoming an effective leader.

Learning Negotiation, Team Building, Motivation and Goal Setting Skills


Today's business professionals know that in order to achieve success, they must commit to
lifelong learning and skill building. Enrolling in online business courses is one route to
improving your leadership skill set, and earning valuable leadership certification.
Business courses that offer leadership certification often include professional instruction in these
essential areas:

Communication: Good communication skills are required at every level of business, but leaders must possess outstanding communication skills. Luckily, this is a skill that can be learned.

Motivating teams: Inspiring others is the mark of an effective leader. Motivation is best done by example and guidance, not by issuing commands.

Team building: Putting together strong teams that work well is another trait of great leaders. The opposite is also true: if a team is weak and dysfunctional, it is generally a failure in leadership.

Risk taking: You can learn how to assess risk and run scenarios that will help you make better decisions. Great leaders take the right risks at the right time.

Vision and goal setting: A team depends on its leader to tell them where they are going, why they are going, and how they're going to get there. People are more motivated when a leader articulates his or her vision for a project or for the organization, along with the steps or goals needed to achieve it.

Online Business Courses Can Help You Become an Effective Leader


Becoming an effective leader is not a one-time thing. It takes time to learn and practice
leadership skills until they become a part of you. Why not approach the leadership process as a
lifelong venture? Enrolling in negotiation courses, online business courses and leadership
certification courses demonstrates a commitment to upgrading your skills and improving your
leadership abilities.
When you practice these leadership skills, you can become more effective at any stage of your
career, regardless of the size of your organization. There are opportunities to learn leadership
skills all around you; take advantage of them to improve your career and leadership prospects.
Over the last 20 years of having worked with many exceptional managers and leaders of some of
the finest companies in the world, I have tried summarizing key leadership traits which I have
observed and experienced. In my opinion it boils down to the following 17 qualities and views
which successful leaders have in common:

Fail young, often, and hard: Learn from mistakes, admit them, and stay humble.
Think the impossible to realize the maximum possible: Be bold and brave.
Exercise tough empathy towards your team: Give them what they need in your opinion, and not necessarily what they want.
Be effective and efficient at the same time: Do the right things in the right way.
Practice execution as an art: Be focused on making decisions, and implement them until the very end in the best possible manner.
Embrace a Poet and Peasant approach: Have a strategic mindset and simultaneously don't mind diving into details and rolling up your sleeves.
Stay human, approachable, and show respect: Choose being people-focused over task-focused, even and especially when push comes to shove.
Be resilient and display a can-do attitude: If something does not work, try something else. Be positive and radiate confidence and strength.
Over-communicate and you'll over-perform: Teams, peers, business partners, etc. need clarity and transparency.
Recruit, develop and empower the best-fitting people: Make sure that there is a cultural and mental fit between company and employees, built on a psychological contract.
Work hard, work smart and have fun: No output without input. At the same time you should love and enjoy what you do. Only then can you be highly passionate and committed.
Under-promise and over-deliver: Walk your talk.
Inspire: Think, behave and communicate beyond pure targets and figures. Stimulate people around you to play and to experiment.
Stay true to yourself and your core values: Adapt yourself if necessary, but never bend yourself; otherwise you might break and lose your heart and soul.
Believe in the good: Always stay open-minded and curious without being naive.
It's all about the long term: If needed, forgo and sacrifice short-term profit and benefits for the sake of long-term growth and sustainability.
Lead a holistically fulfilled life: Life is much more than work and making a career. Spend enough time with family, friends, and loved ones. Relish your hobbies and passions without bad feelings.
Andreas von der Heydt is the Country Manager of Amazon BuyVIP in Germany. Before that he held senior management positions at L'Oréal. He's a leadership expert, management coach and NLP master. He also founded Consumer Goods Club. Andreas has worked and lived in Europe, the U.S. and Asia.

What is motivation & its types?

Motivation is the driving force that causes the flux from desire to will in life. For example,
hunger is a motivation that elicits a desire to eat.
Motivation has been shown to have roots in physiological, behavioral, cognitive, and social
areas. Motivation may be rooted in a basic impulse to optimize well-being, minimize physical
pain and maximize pleasure. It can also originate from specific physical needs such as eating,
sleeping or resting, and sex.
Motivation is an inner drive to behave or act in a certain manner. These inner conditions such as
wishes, desires and goals, activate to move in a particular direction in behavior.
There are two types of motivation: intrinsic and extrinsic. It's important to
understand that we are not all the same; thus effectively motivating your employees requires that
you gain an understanding of the different types of motivation. Such an understanding will
enable you to better categorize your team members and apply the appropriate type of motivation.
You will find each member different, and each member's motivational needs will be varied as well. Some people respond best to intrinsic motivation, which means "from within": they will meet any obligation in an area they are passionate about. Others, quite the reverse, respond better to extrinsic motivation: for them, difficult tasks can be dealt with provided there is a reward upon completion of the task. Become an expert in determining which type will work best with which team members.

Intrinsic Motivation
Intrinsic motivation means that the individual's motivational stimuli are coming from within. The individual has the desire to perform a specific task because its results are in accordance with his or her belief system or fulfil a desire, and therefore importance is attached to it.
Our deep-rooted desires have the highest motivational power. Below are some examples:
Acceptance: We all need to feel that we, as well as our decisions, are accepted by our co-workers.
Curiosity: We all have the desire to be in the know.
Honor: We all need to respect the rules and to be ethical.
Independence: We all need to feel we are unique.
Order: We all need to be organized.
Power: We all have the desire to be able to have influence.
Social contact: We all need to have some social interactions.
Social status: We all have the desire to feel important.

Extrinsic Motivation
Extrinsic motivation means that the individual's motivational stimuli are coming from
outside. In other words, our desires to perform a task are controlled by an outside source.
Note that even though the stimuli are coming from outside, the result of performing the
task will still be rewarding for the individual performing the task.
Extrinsic motivation is external in nature. The most well-known and the most debated
motivation is money. Below are some other examples:

Employee of the month award


Benefit package

Bonuses

Organized activities

Read more: http://www.leadership-central.com/types-ofmotivation.html

Population projection graph and role of unmet need

World population growth


World population continues to increase. With current world population now
over 6 billion people, there is significant pressure for excess population to
migrate from more densely populated countries to those less populated.

U.S. population growth

The top line of the following graph shows actual U.S. population from 1970 to
1993, and the U.S. Census Bureau "medium projection" of total population size
from 1994 to 2050.2 It assumes fertility, mortality, and mass immigration levels
will remain similar to 1993. In fact, overall immigration has continued to rise
significantly, meaning that population growth will actually be higher than
shown below.

[Graph] Sources: U.S. Census Bureau; demographer Leon Bouvier11; Roy Beck, Numbers USA.

The green lower portion of the graph represents growth from 1970 Americans
and their descendants. There were 203 million people living in the U.S. in 1970.
The projection of growth in 1970-stock Americans and their descendants from
1994 to 2050 is based on recent native-born fertility and mortality rates. This
growth would occur despite below replacement-level fertility rates because of
population momentum, where today's children will grow up to have their own
children. This segment of Americans is on track to peak at 247 million in 2030
and then gradually decline.11
The red upper portion of the graph represents the difference between the number
of 1970-stock Americans and the total population. The tens of millions of people

represented by this block are the immigrants who have arrived, or are projected
to arrive, since 1970, plus their descendants, minus deaths. They are projected to comprise 70% of all U.S. population growth between 1993 and 2050.33

Immigration numbers
History shows the U.S. has traditionally allowed relatively small numbers to
immigrate, thus allowing for decades of assimilation. After the peak of about 8.7
million in the first decade of the 20th century (the "great wave"), numbers went
steadily down. Immigration averaged only 195,000 per year from 1921 through
1970!40

[Graph: annual U.S. immigration, averaging 195,000 per year from 1921-1970. Projections and graph courtesy Population Environment Balance. Sources: U.S. Census Bureau2; Statistical Yearbook40, Bureau of Citizenship and Immigration Services.]

It is helpful to put current immigration statistics in perspective. With the change in immigration law in 1965, mass immigration levels have drifted upward from 250,000 per year to over 1 million per year. In other words, in one year we accept a number equal to what we formerly took in five years; in
two years what took a decade, etc. In response to such concerns a national
bipartisan committee headed by the late Barbara Jordan concluded that the
numbers should be reduced. A recently released RAND report recommends
that the level be reduced.
It is interesting to see how this plays out in the real world. According to
journalist Roy Beck, in California it is necessary to construct a new classroom
every hour of the day, 24 hours per day, 365 days of the year, to accommodate
immigrant children. The financial cost is borne by native households, who
according to a National Academy of Science report, pay an additional $1200
per year in taxes because of mass immigration. Even so, the primary concern to
environmentalists and Sierra Club members is the tremendous environmental
impact that will be incurred as a consequence of continued U.S. population
growth.
Cannot solve third-world population problems
U.S. overimmigration does not relieve overpopulation problems in third-world
countries. Over 4.9 billion people live in countries poorer than Mexico.43 Each
year the populations of the world's impoverished nations grow by tens of
millions. Mexico grows by 2.5 million per year, Latin America by 9.3 million,
South America by 5.4 million, and China by 8.3 million.4 U.S. overimmigration cannot have any significant effect on these numbers, even at current high mass immigration levels of over 1,000,000 per year.

Exponential growth
U.S. population is projected to double.2 Although current population growth
rates are not strictly exponential, an analysis of exponential growth44 reveals
how quickly a population can grow. Exponential growth is like compound
interest. With a 1% growth rate, population will double in 70 years, a 2% growth rate will cause doubling in 35 years, and a 10% growth rate in 7 years. (Divide 70 by the percentage number to get the approximate doubling time.)
U.S. population has grown by 1.2% per year over the last 50 years. This "low"
growth rate means it has taken only 58 years for our population to double.
We can expect this doubling to continue, drastically magnified by the impact of
unrealistically high levels of mass immigration.

Rate of Population Increase | Years Required to Double Population
0.01% | 6,930
0.1%  | 693
0.5%  | 139
1.0%  | 70
1.5%  | 47
2.0%  | 35
2.5%  | 28
2.8%  | 25
3.0%  | 23
3.5%  | 20
4.0%  | 18

For a more thorough explanation of exponential growth and doubling times, see Exponential Growth and The Rule of 70.44
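
To make the arithmetic behind the table explicit, here is a minimal Python sketch (illustrative only, not part of the original source) that compares the rule-of-70 approximation with the exact doubling time ln(2)/ln(1 + r):

import math

def doubling_time_exact(rate_percent):
    # Exact doubling time in years for a constant annual growth rate.
    return math.log(2) / math.log(1 + rate_percent / 100.0)

def doubling_time_rule_of_70(rate_percent):
    # Rule-of-70 approximation: divide 70 by the percentage growth rate.
    return 70.0 / rate_percent

for rate in (0.01, 0.1, 0.5, 1.0, 1.2, 2.0, 3.0, 4.0):
    print(f"{rate:5.2f}%  exact: {doubling_time_exact(rate):7.1f} y"
          f"   rule of 70: {doubling_time_rule_of_70(rate):7.1f} y")

At a 1.2% annual growth rate (the recent U.S. average cited above), both methods give roughly 58 years, matching the doubling time quoted in the text.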

Overimmigration Caused 60% of U.S. Population Growth
The immigration share of U.S. population growth rises continuously as births
to recent immigrants are added to the annual flow of new arrivals. The usually
reported numbers reflect annual flow. But this flow does not fully represent the
impact of mass immigration on population size because the downstream
effects, i.e., family formation and births, are ignored.
Total immigration impact is annual immigration plus births to the foreign born
minus deaths and emigration of immigrants. The native-born account is births
minus deaths and emigration of this sector. Annual population growth is the

sum of the immigrant and native born accounts. These calculations for the year
1994, using National Center for Health Statistics (1996) figures on births and
deaths14 and Center for Immigration Studies (1995) figures on immigration,
yield startling results. The foreign born are about ten percent of the population
but had over 18 percent of births. Mass immigration and children born to the
foreign-born sector, in 1994, accounted for a net increase of 1.6 million
persons, or sixty percent, of the United States' annual population growth.

1994 Category     | Native Born | Foreign Born | Total
Immigration       | -           | 1,206,000    | 1,206,000
Births            | 3,264,505   | 731,262      | 3,995,767
Deaths            | -2,074,136  | -204,858     | -2,278,994
Emigration        | -125,000    | -125,000     | -250,000 (est.)
Population Growth | 1,065,369   | 1,607,404    | 2,672,773
Percentage Share  | 40%         | 60%          | 100%

Analysis: Carrying Capacity Network and Dr. Virginia Abernathy.
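
The accounting rule described above (growth in each sector equals births minus deaths and emigration, plus immigration for the foreign-born sector) can be checked directly against the table. A minimal Python sketch, using only the figures printed in the table above, is:

# 1994 U.S. population-growth accounting, figures taken from the table above
native = {"births": 3_264_505, "deaths": -2_074_136, "emigration": -125_000}
foreign = {"immigration": 1_206_000, "births": 731_262,
           "deaths": -204_858, "emigration": -125_000}

native_growth = sum(native.values())           # 1,065,369
foreign_growth = sum(foreign.values())         # 1,607,404
total_growth = native_growth + foreign_growth  # 2,672,773

print(f"Native-born growth:  {native_growth:,}")
print(f"Foreign-born growth: {foreign_growth:,}")
print(f"Total growth:        {total_growth:,}")
print(f"Immigration share:   {foreign_growth / total_growth:.0%}")  # about 60%

The computed share of roughly 60% reproduces the figure in the heading above.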

Confusion About Numbers


Although the overall number of legal immigrants into the U.S. is readily
available, there is no easy answer to the numbers authorized in each legal
immigrant category.
Despite much effort, no statement has been found that explains the number of
immigrants that legally may be admitted to the U.S. each year. Even the 1994
U.S. Commission on Immigration Reform, chaired by Barbara Jordan,
produced no such statement.45, 46, 47
The difficulty arises because the numbers vary from year to year due to a
mixture of statutory law, administrative procedures, and prior year admissions.
The following table presents current data as accurately as possible, and
includes the Jordan Commission's recommendations.45, 46, 47

Legal Immigrant Category | 1996 U.S. Admissions | Current Legal Limits       | Jordan Commission Recommended
Family sponsored         | 596,264              | Limits are "pierceable"    | 400,000
Employment-based         | 117,499              | About 140,000              | 100,000
Diversity programs       | 58,790               | About 55,000               | -
Refugee adjustments      | 118,528              | About 125,000 +/- 25,000   | 50,000
Asylee adjustments       | 10,037               | No practical limit         | -
Other                    | 14,598               | 20,000+                    | -
Total                    | 915,900              | -                          | 550,000+

Analysis courtesy Colorado Population Coalition
Additional data: Federation for American Immigration Reform

For all practical purposes, the U.S. does not have overall limits on mass
immigration, and this is a major reason why the numbers have grown, and will
continue to grow, and why the issue needs to be addressed. Rather, the numbers
result from the wide range of adult "extended family reunification" categories
that have no limits on them. Historically, there were some "caps"
included in immigration expansion legislation, but these caps were "pierceable"
if the need arose, and since the need always arose, these caps turned out to be
meaningless.
Categories, to the extent they exist, are few and small, and are not intended to
limit mass immigration, but rather to ensure that certain nationalities that were
being squeezed out by the extended family/clan "family reunification"
onslaught, have some access to immigration.
This is not to say that a focus on categories or quotas - ratios of categories - is
more important than a focus on overall numbers. Quotas aren't an
environmental issue, they are a social and legislative issue, and indeed, quotas
haven't been implemented for years. The proportion of immigrants allowed
under law to enter into the U.S. is not an environmental issue, and there is no
reason for environmentalists and the Sierra Club to become involved in this
social issue. There is, however, clear reason for environmentalists and the
Sierra Club to be concerned with overpopulation as a fundamental
environmental issue, and to address both of its causes: increase from natural
births and overall immigration numbers.

Causes of U.S. population growth


The legacy of U.S. overpopulation we are leaving to future generations does
not have to happen if we recognize and address the causes of our population
growth.

What is the chemoprophylaxis & treatment of leprosy?


Introduction
Leprosy (Hansen's disease (HD) ) is a chronic infectious disease, caused by the bacillus
Mycobacterium leprae, which affects the skin and peripheral nerves leading to skin lesions, loss
of sensation, and nerve damage. This in turn can lead to secondary impairments or deformities of
the eyes, hands and feet. For treatment purposes, leprosy is classified as either paucibacillary
(PB) or multibacillary (MB) leprosy. The standard treatment for leprosy is multidrug therapy
(MDT) [1]. PB patients are treated for 6 months with dapsone and rifampicin; MB patients are
treated for 12 months with dapsone, rifampicin and clofazimine.
The World Health Organisation (WHO) had set a goal in the early 1990s to eliminate leprosy as a
public health problem by the year 2000. Elimination was defined as reducing the global
prevalence of the disease to less than 1 case per 10 000 population [2]. The WHO elimination
strategy was based on increasing the geographical coverage of MDT and patients' accessibility to
the treatment. The expectation existed that reduction in prevalence through expanding MDT
coverage would eventually also lead to reduction in incidence of the disease and ultimately to
elimination in terms of zero incidence of the disease. An important assumption underlying the
WHO leprosy elimination strategy was that MDT would reduce transmission of M. leprae
through a reduction of the number of contagious individuals in the community [3].
Unfortunately, there is no convincing evidence for this hypothesis [4].
With a total of 249 007 new patients detected globally in 2008 [5], it remains necessary to
develop new and effective interventions to interrupt the transmission of M. leprae. BCG
vaccination against tuberculosis offers some but not full protection against leprosy and in the
absence of another more specific vaccination against the bacillus other strategies need to be
developed, such as preventive treatment (chemoprophylaxis) of possible sub-clinically infected
people at risk of developing leprosy. Recently, the results were published of a randomised controlled trial into the effectiveness of single dose rifampicin (SDR) in preventing leprosy in
contacts of patients [6]. It was shown that this intervention is effective at preventing the
development of leprosy at two years and that the initial effect was maintained afterwards.
In order to assess the economic benefits of SDR as an intervention in the control of leprosy, we
performed a cost-effectiveness analysis. We provide an overview of the direct costs of this new
chemoprophylaxis intervention and calculate the cost-effectiveness compared to standard MDT
provision only.

Discussion
Chemoprophylaxis with single dose rifampicin for preventing leprosy among contacts is a cost-effective prevention strategy. At program level, an incremental $6 009 was invested and 38 incremental leprosy cases were prevented, resulting in an ICER of $158 per additional leprosy case prevented.
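
The ICER reported above is simply the incremental cost divided by the incremental effect. A minimal Python sketch of that calculation, using the figures from this study, is:

def icer(incremental_cost, incremental_effect):
    # Incremental cost-effectiveness ratio: extra cost per extra unit of effect.
    return incremental_cost / incremental_effect

extra_cost_usd = 6009      # additional programme cost of the SDR intervention
cases_prevented = 38       # additional leprosy cases prevented

print(f"ICER: ${icer(extra_cost_usd, cases_prevented):.0f} per leprosy case prevented")
# prints approximately $158, matching the figure reported above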

This is the first report on cost-effectiveness of single dose rifampicin as chemoprophylaxis in


contacts of leprosy patients. The analysis is based on the results of a large randomized controlled
trial in Bangladesh [6]. For the analysis, the health care perspective was taken because indirect
cost data were largely unavailable. The health care perspective excludes indirect costs (patient
costs), such as travel costs, loss of income due to illness and clinic visits, and long term
consequences of disability. Estimating these costs was beyond the scope of this study, but
inclusion would have rendered the intervention even more cost-effective. Another limitation of
the study is that a static approach was taken to the analysis, measuring the effect of the
intervention after two years only. After these two years, there was no further reduction of new
cases in the chemoprophylaxis arm of the trial compared to the placebo arm. Because leprosy is
an infectious disease, with person-to-person transmission of M. leprae, one can expect that
prevention of primary cases (as recorded in the trial) will lead to further prevention of secondary
cases. In time, this would lead to further cost-effectiveness of the intervention. Unfortunately, we
could not apply such a dynamic analysis approach because there is insufficient information about
the long term effects of the intervention, including the number of secondary cases prevented and
the number of primary cases prevented after two years that will eventually develop leprosy after
a longer period of time, beyond the 4 years observation period of the trial.
It is also important to understand that the results of the COLEP trial reflect a comparison
between the chemoprophylaxis intervention and standard MDT treatment plus contact surveys at
2-year intervals with treatment of newly diagnosed cases among contacts. A contact survey in
itself is an intervention that reduces transmission in contact groups and thus new leprosy patients
among contacts. The provision of chemoprophylaxis to contacts requires contact tracing, but
contact tracing is not part of leprosy control programs in many countries and doing so would
increase program costs considerably. WHO however, recognizes the importance of contact
tracing and now recommends that it is introduced in all control programs [21]. This would then
also lay a good foundation for introducing chemoprophylaxis.
WHO reports regarding cost-effectiveness analyses recommend using disability adjusted life
years (DALY) as outcome measure for such studies [22]. In leprosy two measures are common to
express disability: WHO grade 1 and 2 [23]. The disability weight for grade 2 disability (visible
deformity) has been determined at 0.153 [24], but no weight is available for grade 1. Of all
newly detected leprosy cases, a relatively low percentage (2–35%) have grade 2 disability [25].
In our study we chose the number of leprosy cases prevented as the outcome, because there is little information available about survival of patients with grade 2 disability and also because the choice for DALYs would have given a less favourable result due to the low weight of leprosy disability.
There are a number of issues to take into account when relating the outcome of this study to
other countries. Firstly, the cost level to conduct leprosy control will differ per country, due to
economic standard, budget allocated to primary health care, salaries of health care workers, etc.
In our calculation, program costs were similar for both the standard MDT treatment and
chemoprophylaxis intervention, but these costs will vary per country. The treatment costs are
based on real cost estimates and will vary less between countries and programs. Therefore the
actual costs will differ, but the conclusion that the intervention is cost-effective is very likely to
remain the same. Secondly, the clinical presentation of leprosy differs between countries and

regions. Globally the distribution is around 40% for MB and 60% for PB in newly detected
leprosy cases, but with widely varying ratios between countries [25]. Since costs for treating PB
and MB leprosy are different, these differences are likely to affect the outcome of the cost-effectiveness analysis. Thirdly, the percentage of newly detected cases that are a household
contact of a known leprosy patient differs per country and is possibly determined by the
endemicity level of leprosy in a country or area. In Bangladesh, in the high endemic area where
the COLEP study was conducted, approximately 25% of newly detected cases had a known
index case within the family, whereas in a low endemic area (Thailand) this proportion was 62%
[26]. An intervention aimed at close (household) contacts may therefore be more cost-effective in
countries where relatively many new cases are household contacts. But the background and
implications of such differences on effectiveness of chemoprophylaxis needs further research.
Only a few articles have been published about cost-effectiveness analyses of interventions in
leprosy [27]. Most articles assess small parts of leprosy control, such as footwear provision [28],
MDT delivery costs [29], or the economic aspects of hospitalisation versus ambulatory care of
neuritis in leprosy reactions [30]. Only two studies provided a more general cost-effectiveness analysis. Naik and Ganapati included several costs in their economic evaluation, but a limitation of their study is the lack of detail about how they obtained their cost data [31]. Remme et al. based
the cost calculations in their study on the limited available published cost data, program
expenditure data and expert opinion, and also provide limited insight into how they obtained
certain costs and effects [30]. Neither study describes clearly how the costs were obtained (e.g. real costs, bottom-up or top-down costing). Our article is one of the first structured cost-effectiveness analyses for leprosy; it presents an overview of the costs involved and can be used for the assessment of the costs of leprosy control in general.
This report shows that chemoprophylaxis with single dose rifampicin given to contacts of newly
diagnosed leprosy patients is a cost-effective intervention strategy. Implementation studies in the
field are necessary to establish whether this intervention is acceptable and feasible in other
leprosy endemic areas of the world.

Treatment of leprosy
Several drugs are used in combination in multidrug therapy (MDT). (See table) These drugs must
never be used alone as monotherapy for leprosy.
Dapsone, which is bacteriostatic or weakly bactericidal against M. leprae, was the mainstay
treatment for leprosy for many years until widespread resistant strains appeared. Combination
therapy has become essential to slow or prevent the development of resistance. Rifampicin is
now combined with dapsone to treat paucibacillary leprosy. Rifampicin and clofazimine are
now combined with dapsone to treat multibacillary leprosy.
A single dose of combination therapy has been used to cure single lesion paucibacillary leprosy:
rifampicin (600 mg), ofloxacin (400 mg), and minocycline (100 mg). The child with a single
lesion takes half the adult dose of the 3 medications.

WHO has designed blister pack medication kits for both paucibacillary leprosy and for
multibacillary leprosy. Each easy-to use kit contains medication for 28 days. The blister pack
medication kit for single lesion paucibacillary leprosy contains the necessary medication for the
one time administration of the 3 medications.
Any patient with a positive skin smear must be treated with the MDT regimen for multibacillary
leprosy. The regimen for paucibacillary leprosy should never be given to a patient with
multibacillary leprosy. Therefore, if the diagnosis in a particular patient is uncertain, treat that
patient with the MDT regimen for multibacillary leprosy.
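
The treatment-selection rules above can be summarised as a simple decision sketch. The following Python fragment is illustrative only (it is not clinical guidance); the classifications, drugs and durations are those stated in the text:

def mdt_regimen(classification, positive_skin_smear=False, single_lesion=False):
    # Illustrative sketch of the selection rules described above.
    # classification: "PB", "MB" or "uncertain".
    if positive_skin_smear or classification.lower() in ("mb", "uncertain"):
        # A positive skin smear or an uncertain diagnosis always gets the MB regimen.
        return "12 months of rifampicin + clofazimine + dapsone (MB regimen)"
    if single_lesion:
        return "single dose of rifampicin 600 mg + ofloxacin 400 mg + minocycline 100 mg"
    return "6 months of rifampicin + dapsone (PB regimen)"

print(mdt_regimen("PB", single_lesion=True))
print(mdt_regimen("uncertain"))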
Ideally, the patient should go to the leprosy clinic once a month so that clinic personnel may
supervise administration of the drugs prescribed once a month. However, many countries with
leprosy have poor coverage of health services and monthly supervision of drug administration by
health care workers may not be possible. In these cases, it may be necessary to designate a
responsible third party, such as a family member or a person in the community, to supervise the
monthly drug administration. Where health care service coverage is poor and supervision of the
monthly administration of drugs by health workers is not possible, the patient may be given more than the 28-day supply of multidrug therapy blister packs. This tactic helps make multidrug
therapy easily available, even to those patients who live under difficult conditions or in remote
areas. Patients who ask for diagnosis and treatment are often sufficiently motivated to take full
responsibility for their own treatment of leprosy. In this situation, it is important to educate the
patient regarding the importance of compliance with the regimen and to give the patient
responsibility for taking his or her medication correctly and for reporting any untoward signs and
symptoms promptly. The patient should be warned about possible lepra reactions.

Prevention of dengue fever. What is WHO doing for dengue fever?


Dengue is transmitted by the bite of a mosquito infected with one of the four dengue virus
serotypes. It is a febrile illness that affects infants, young children and adults with symptoms
appearing 3-14 days after the infective bite.
Dengue is not transmitted directly from person to person, and symptoms range from mild fever to incapacitating high fever, with severe headache, pain behind the eyes, muscle and
joint pain, and rash. There is no vaccine or any specific medicine to treat dengue. People who
have dengue fever should rest, drink plenty of fluids and reduce the fever using paracetamol
or see a doctor.
Severe dengue (also known as dengue hemorrhagic fever) is characterized by
fever, abdominal pain, persistent vomiting, bleeding and breathing difficulty
and is a potentially lethal complication, affecting mainly children. Early
clinical diagnosis and careful clinical management by trained physicians and
nurses increase survival of patients.

Personality traits

The five factors


A summary of the factors of the Big Five and their constituent traits, such that they form the
acronym OCEAN:[4]

Openness to experience: (inventive/curious vs. consistent/cautious). Appreciation for art, emotion, adventure, unusual ideas, curiosity, and variety of experience. Openness reflects the degree of intellectual curiosity, creativity and a preference for novelty and variety a person has. It is also described as the extent to which a person is imaginative or independent, and depicts a personal preference for a variety of activities over a strict routine. Some disagreement remains about how to interpret the openness factor, which is sometimes called "intellect" rather than openness to experience.

Conscientiousness: (efficient/organized vs. easy-going/careless). A tendency to be organized and dependable, show self-discipline, act dutifully, aim for achievement, and prefer planned rather than spontaneous behavior.

Extraversion: (outgoing/energetic vs. solitary/reserved). Energy, positive emotions, surgency, assertiveness, sociability and the tendency to seek stimulation in the company of others, and talkativeness.

Agreeableness: (friendly/compassionate vs. analytical/detached). A tendency to be compassionate and cooperative rather than suspicious and antagonistic towards others. It is also a measure of one's trusting and helpful nature, and whether a person is generally well tempered or not.

Neuroticism: (sensitive/nervous vs. secure/confident). The tendency to experience unpleasant emotions easily, such as anger, anxiety, depression, and vulnerability. Neuroticism also refers to the degree of emotional stability and impulse control and is sometimes referred to by its low pole, "emotional stability".

Psychosomatic disorders
Psychosomatic means mind (psyche) and body (soma). A psychosomatic disorder is
a disease which involves both mind and body. Some physical diseases are thought
to be particularly prone to be made worse by mental factors such as stress and
anxiety. Your current mental state can affect how bad a physical disease is at any
given time.

Which diseases are psychosomatic?


To an extent, most diseases are psychosomatic - involving both mind and body.

There is a mental aspect to every physical disease. How we react to and cope
with disease varies greatly from person to person. For example, the rash of
psoriasis may not bother some people very much. However, the rash
covering the same parts of the body in someone else may make them feel
depressed and more ill.
There can be physical effects from mental illness. For example, with some
mental illnesses you may not eat, or take care of yourself, very well which
can cause physical problems.

However, the term psychosomatic disorder is mainly used to mean ... "a physical disease
that is thought to be caused, or made worse, by mental factors".

Some physical diseases are thought to be particularly prone to be made worse by mental
factors such as stress and anxiety. For example, psoriasis, eczema, stomach ulcers, high
blood pressure and heart disease. It is thought that the actual physical part of the illness
(the extent of a rash, the level of the blood pressure, etc) can be affected by mental
factors. This is difficult to prove. However, many people with these and other physical
diseases say that their current mental state can affect how bad their physical disease is at
any given time.

Some people also use the term psychosomatic disorder when mental factors cause
physical symptoms but where there is no physical disease. For example, a chest pain may
be caused by stress and no physical disease can be found. Physical symptoms that are
caused by mental factors are discussed further in another leaflet called
Somatisation/Somatoform Disorders.

How can the mind affect physical diseases?


It is well known that the mind can cause physical symptoms. For example, when we are afraid or
anxious we may develop:

A fast heart rate

A thumping heart (palpitations)


Feeling sick (nauseated)
Shaking (tremor)
Sweating
Dry mouth
Chest pain
Headaches
A knot in the stomach
Fast breathing

These physical symptoms are due to increased activity of nervous impulses sent from the
brain to various parts of the body and to the release of adrenaline (epinephrine) into the
bloodstream when we are anxious.
However, the exact way that the mind can cause certain other symptoms is not clear.
Also, how the mind can affect actual physical diseases (rashes, blood pressure, etc) is not
clear. It may have something to do with nervous impulses going to the body, which we do
not fully understand. There is also some evidence that the brain may be able to affect
certain cells of the immune system, which is involved in various physical diseases.

What are the treatments for psychosomatic disorders?


Each disease has its own treatment options. For physical diseases, physical treatments such as
medication or operations are usually the most important. However, healthcare workers will
usually try to treat a person as a whole and take into account mental and social factors which
may be contributing to a disease. Therefore, treatments to ease stress, anxiety, depression, etc,
may help if they are thought to be contributing to your physical disease.

Psychosocial disorders (tension, depression, anxiety)


Psychological disorders, also known as mental disorders, are patterns of behavioral
or psychological symptoms that impact multiple areas of life. These disorders create
distress for the person experiencing these symptoms. The following list of
psychological disorders includes some of the major categories of psychological
disorders listed in the Diagnostic and Statistical Manual of Mental Disorders as well
as several examples of each type of psychological disorder.

Adjustment Disorders
This classification of mental disorders is related to an identifiable source of stress that causes
significant emotional and behavioral symptoms. The DSM-IV diagnostic criteria included:

(1) Distress that is marked and excessive for what would be expected from
the stressor and
(2) Creates significant impairment in school, work or social environments.

In addition to these requirements, the symptoms must occur within three months of exposure to
the stressor, the symptoms must not meet the criteria for an Axis I or Axis II disorder, the
symptoms must not be related to bereavement and the symptoms must not last for longer than six
months after exposure to the stressor.
The DSM-V (released in May of 2013) moved adjustment disorder to a newly created section of
stress-related syndromes.


Anxiety Disorders
Anxiety disorders are those that are characterized by excessive and abnormal fear, worry and
anxiety. In one recent survey published in the Archives of General Psychiatry, it was estimated
that as many as 18% of American adults suffer from at least one anxiety disorder.
Types of anxiety disorders include:

Generalized anxiety disorder

Agoraphobia

Social anxiety disorder

Phobias

Panic disorder

Post-traumatic stress disorder

Separation anxiety

Dissociative Disorders
Dissociative disorders are psychological disorders that involve a dissociation or interruption in
aspects of consciousness, including identity and memory. Dissociative disorders include:

Dissociative identity disorder (formerly known as multiple personality disorder)

Dissociative fugue

Depersonalization/derealization disorder

Eating Disorders
Eating disorders are characterized by obsessive concerns with weight and disruptive eating
patterns that negatively impact physical and mental health. Types of eating disorders include:

Anorexia nervosa
Bulimia nervosa

Rumination disorder

Factitious Disorders
These psychological disorders are those in which an individual acts as if he or she has an illness,
often by deliberately faking or exaggerating symptoms or even self-inflicting damage to the
body. Types of factitious disorders include:

Munchausen syndrome
Munchausen syndrome by proxy

Ganser syndrome

Impulse-Control Disorders
Impulse-control disorders are those that involve an inability to control impulses, resulting in
harm to oneself or others. Types of impulse-control disorders include:

Kleptomania (stealing)
Pyromania (fire-starting)

Trichotillomania (hair-pulling)

Pathological gambling

Intermittent explosive disorder

Dermatillomania (skin-picking)

Mental Disorders Due to a General Medical Condition


This type of psychological disorder is caused by an underlying medical condition. Medical
conditions can cause psychological symptoms such as catatonia and personality changes.
Examples of mental disorders due to a general medical condition include:

Psychotic disorder due to epilepsy


Depression caused by diabetes

AIDS related psychosis

Personality changes due to brain damage

Neurocognitive Disorders
These psychological disorders are those that involve cognitive abilities such as memory, problem
solving and perception. Some anxiety disorders, mood disorders and psychotic disorders are
classified as cognitive disorders. Types of cognitive disorders include:

Alzheimer's disease
Delirium

Dementia

Amnesia


Mood Disorders
Mood disorder is a term given to a group of mental disorders that are all characterized by
changes in mood. Examples of mood disorders include:

Bipolar disorder
Major depressive disorder

Cyclothymic disorder

Neurodevelopmental Disorders
Developmental disorders, also referred to as childhood disorders, are those that are typically
diagnosed during infancy, childhood, or adolescence. These psychological disorders include:

Intellectual Disability (or Intellectual Developmental Disorder), formerly referred to as mental retardation
Learning disabilities

Communication disorders

Autism

Attention-deficit hyperactivity disorder

Conduct disorder

Oppositional defiant disorder

Psychosocial Disorders

Definition
A psychosocial disorder is a mental illness caused or influenced by life experiences,
as well as maladjusted cognitive and behavioral processes.

Description
The term psychosocial refers to the psychological and social factors that influence
mental health. Social influences such as peer pressure, parental support, cultural
and religious background, socioeconomic status, and interpersonal relationships all
help to shape personality and influence psychological makeup. Individuals with
psychosocial disorders frequently have difficulty functioning in social situations and
may have problems effectively communicating with others.

The American Psychiatric Association distinguishes 16 different subtypes (or
categories) of mental illness. Although psychosocial variables arguably have some
degree of influence on all subtypes of mental illness, the major categories of mental
disorders thought to involve significant psychosocial factors include:

Substance-related disorders. Disorders related to alcohol and drug use, abuse,
dependence, and withdrawal.

Schizophrenia and other psychotic disorders. These include the schizoid
disorders (schizophrenia, schizophreniform, and schizoaffective disorder),
delusional disorder, and psychotic disorders.

Mood disorders. Affective disorders such as depression (major, dysthymic)
and bipolar disorders.

Anxiety disorders. Disorders in which a certain situation or place triggers
excessive fear and/or anxiety symptoms (i.e., dizziness, racing heart), such
as panic disorder, agoraphobia, social phobia, obsessive-compulsive disorder,
post-traumatic stress disorder, and generalized anxiety disorders.

Somatoform disorders. Somatoform disorders involve clinically significant
physical symptoms that cannot be explained by a medical condition (e.g.,
somatization disorder, conversion disorder, pain disorder, hypochondriasis,
and body dysmorphic disorder).

Factitious disorders. Disorders in which an individual creates and complains of
symptoms of a non-existent illness in order to assume the role of a patient (or
sick role).

Sexual and gender identity disorders. Disorders of sexual desire, arousal, and
performance. It should be noted that the categorization of gender identity
disorder as a mental illness has been a point of some contention among
mental health professionals.

Eating disorders. Anorexia and bulimia nervosa.

Adjustment disorders. Adjustment disorders involve an excessive emotional
or behavioral reaction to a stressful event.

Personality disorders. Maladjustments of personality, including paranoid,
schizoid, schizotypal, anti-social, borderline, histrionic, narcissistic, avoidant,
dependent, and obsessive-compulsive personality disorder (not to be
confused with the anxiety disorder OCD).

Disorders usually first diagnosed in infancy, childhood, or adolescence. Some
learning and developmental disorders (i.e., ADHD) may be partially
psychosocial in nature.

Causes and symptoms


It is important to note that the causes of mental illness are diverse and not
completely understood. The majority of psychological disorders are thought to be
caused by a complex combination of biological, genetic (hereditary), familial, and
social factors or biopsychosocial influences. In addition, the role that each of these
plays can differ from person to person, so that a disorder such as depression that is
caused by genetic factors in one person may be caused by a traumatic life event in
another.
The symptoms of psychosocial disorders vary depending on the diagnosis in
question. In addition to disorder-specific symptoms, individuals with psychosocial
dysfunction usually have difficulty functioning normally in social situations and may
have trouble forming and maintaining close interpersonal relationships.

Diagnosis
Patients with symptoms of psychosocial disorders or other mental illness should
undergo a thorough physical examination and patient history to rule out an organic
cause for the illness (such as a neurological disorder). If no organic cause is
suspected, a psychologist or other mental healthcare professional will meet with the
patient to conduct an interview and take a detailed social and medical history. If the
patient is a minor, interviews with a parent or guardian may also be part of the
diagnostic process. The physician may also administer one or more psychological
tests (also called clinical inventories, scales, or assessments).

Treatment
Counseling is typically a front-line treatment for psychosocial disorders. A number of
counseling or talk therapy approaches exist, including psychotherapy, cognitive
therapy, behavioral therapy, and group therapy. Therapy or counseling may be
administered by social workers, nurses, licensed counselors and therapists,
psychologists, or psychiatrists.
Psychoactive medication may also be prescribed for symptom relief in patients with
mental disorders considered psychosocial in nature. For disorders such as major
depression or bipolar disorder, which may have psychosocial aspects but also have
known organic causes, drug therapy is a primary treatment approach. In cases such
as personality disorder that are thought to not have biological roots, psychoactive
medications are usually considered a secondary, or companion treatment to
psychotherapy.
Many individuals are successful in treating psychosocial disorders through regular
attendance in self-help groups or 12-step programs such as Alcoholics Anonymous.
This approach, which allows individuals to seek advice and counsel from others in
similar circumstances, can be extremely effective.
In some cases, treating mental illness requires hospitalization of the patient. This
hospitalization, also known as inpatient treatment, is usually employed in situations
where a controlled therapeutic environment is critical for the patient's recovery
(e.g., rehabilitation treatment for alcoholism or other drug addictions), or when

there is a risk that the patient may harm himself (suicide) or others. It may also be
necessary when the patient's physical health has deteriorated to a point where life-sustaining treatment is necessary, such as with severe malnutrition associated with
anorexia nervosa.

Alternative treatment
Therapeutic approaches such as art therapy that encourage self-discovery and
empowerment may be useful in treating psychosocial disorders. Art therapy, the
use of the creative process to express and understand emotion, encompasses a
broad range of humanistic disciplines, including visual arts, dance, drama, music,
film, writing, literature, and other artistic genres. This use of the creative process is
believed to provide the patient/artist with a means to gain insight to emotions and
thoughts they might otherwise have difficulty expressing. After the artwork is
created, the patient/artist continues the therapeutic journey by interpreting its
meaning under the guidance of a trained therapist.

Key terms
Affective disorder: An emotional disorder involving abnormal highs and/or lows
in mood.
Bipolar disorder: An affective mental illness that causes radical emotional
changes and mood swings, from manic highs to depressive lows. The majority of
bipolar individuals experience alternating episodes of mania and depression.
Bulimia: An eating disorder characterized by binge eating and inappropriate
compensatory behavior such as vomiting, misusing laxatives, or excessive exercise.
Cognitive processes: Thought processes (i.e., reasoning, perception, judgment,
memory).
Learning disorders: Academic difficulties experienced by children and adults of
average to above-average intelligence that involve reading, writing, and/or
mathematics, and which significantly interfere with academic achievement or daily
living.
Schizophrenia: A debilitating mental illness characterized by delusions,
hallucinations, disorganized speech and behavior, and flattened affect (i.e., a lack of
emotions) that seriously hampers normal functioning.

Prognosis
According to the National Institute of Mental Health, more than 90% of Americans
who commit suicide have a diagnosable mental disorder, so swift and appropriate
treatment is important. Because of the diversity of types of mental disorders
influenced by psychosocial factors, and the complexity of diagnosis and treatment,
the prognosis for psychosocial disorders is highly variable. In some cases, they can
be effectively managed with therapy and/or medication. In others, mental illness
can cause long-term disability.

Prevention
Patient education (i.e., therapy or self-help groups) can encourage patients to take
an active part in their treatment program and to recognize symptoms of a relapse of
their condition. In addition, educating friends and family members on the nature of
the psychosocial disorder can assist them in knowing how and when to provide
support to the patient.


There were also many graphs for interpretation.

Polio surveillance graph and its indicators

Surveillance
Acute Flaccid Paralysis (AFP) surveillance
Nationwide AFP (acute flaccid paralysis) surveillance is the gold standard for detecting cases of
poliomyelitis. The four steps of surveillance are:
1. finding and reporting children with acute flaccid paralysis (AFP)
2. transporting stool samples for analysis
3. isolating and identifying poliovirus in the laboratory
4. mapping the virus to determine the origin of the virus strain.

Environmental surveillance
Environmental surveillance involves testing sewage or other environmental samples for the
presence of poliovirus. Environmental surveillance often confirms wild poliovirus infections in
the absence of cases of paralysis. Systematic environmental sampling (e.g. in Egypt and
Mumbai, India) provides important supplementary surveillance data. Ad-hoc environmental
surveillance elsewhere (especially in polio-free regions) provides insights into the international
spread of poliovirus.

Surveillance indicators
Minimum levels for certification standard surveillance, by indicator:

Completeness of reporting: At least 80% of expected routine (weekly or monthly) AFP surveillance
reports should be received on time, including zero reports where no AFP cases are seen. The
distribution of reporting sites should be representative of the geography and demography of the
country.

Sensitivity of surveillance: At least one case of non-polio AFP should be detected annually per
100 000 population aged less than 15 years. In endemic regions, to ensure even higher sensitivity,
this rate should be two per 100 000.

Completeness of case investigation: All AFP cases should have a full clinical and virological
investigation, with at least 80% of AFP cases having adequate stool specimens collected. Adequate
stool specimens are two stool specimens of sufficient quantity for laboratory analysis, collected at
least 24 hours apart, within 14 days after the onset of paralysis, and arriving in the laboratory by
reverse cold chain and with proper documentation.

Completeness of follow-up: At least 80% of AFP cases should have a follow-up examination for
residual paralysis at 60 days after the onset of paralysis.

Laboratory performance: All AFP case specimens must be processed in a WHO-accredited
laboratory within the Global Polio Laboratory Network (GPLN).
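The reporting, sensitivity and stool-adequacy indicators above are simple rates and proportions, so
a programme's performance against the certification thresholds can be checked directly. The
Python sketch below is illustrative only: the thresholds come from the list above, while the input
figures (cases, population, reports) are hypothetical.

# Illustrative check of AFP surveillance performance against the
# certification-standard thresholds listed above (hypothetical inputs).

def afp_surveillance_summary(non_polio_afp_cases, population_under_15,
                             reports_received_on_time, reports_expected,
                             cases_with_adequate_stools, total_afp_cases,
                             endemic=False):
    # Non-polio AFP detection rate per 100 000 children aged <15 years
    npafp_rate = non_polio_afp_cases / population_under_15 * 100_000
    rate_target = 2.0 if endemic else 1.0

    # Completeness of reporting and of case investigation (both should be >= 80%)
    reporting_completeness = reports_received_on_time / reports_expected * 100
    stool_adequacy = cases_with_adequate_stools / total_afp_cases * 100

    return {
        "non_polio_AFP_rate_per_100k": round(npafp_rate, 2),
        "meets_sensitivity_target": npafp_rate >= rate_target,
        "reporting_completeness_%": round(reporting_completeness, 1),
        "meets_reporting_target": reporting_completeness >= 80,
        "stool_adequacy_%": round(stool_adequacy, 1),
        "meets_stool_target": stool_adequacy >= 80,
    }

if __name__ == "__main__":
    # Hypothetical district figures, for illustration only
    print(afp_surveillance_summary(
        non_polio_afp_cases=45,
        population_under_15=3_500_000,
        reports_received_on_time=48,
        reports_expected=52,
        cases_with_adequate_stools=40,
        total_afp_cases=45,
        endemic=True,
    ))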

Graph: Performance of AFP surveillance and incidence of poliomyelitis
- See more at:
http://www.polioeradication.org/Dataandmonitoring/Surveillance.aspx#sthash.06Zrv
BBX.dpuf

Poliomyelitis
Description: Poliomyelitis, or polio, is a crippling disease caused by any one of three related
viruses, poliovirus types 1, 2 or 3. The only way to spread poliovirus is through the faecal/oral

route. The virus enters the body through the mouth when people eat food or drink water that is
contaminated with faeces. The virus then multiplies in the intestine, enters the bloodstream, and
may invade certain types of nerve cells, which it can damage or destroy. Polioviruses spread very
easily in areas with poor hygiene
Prevention: Live oral polio vaccine (OPV) - four doses in endemic countries - or inactivated polio
vaccine (IPV) given by injection - two to three doses depending on country schedule.

The Tail End of Eradication, an Elusive Goal


We are nowhere near eradicating malaria with hundreds of thousands of cases annually
throughout the world. It reappears in Greece, and in subclinical form stymies surveillance
efforts in the Solomon Islands. But eventually we will close in on this parasite. What can we
learn from eradication efforts of another scourge, polio?
Recently the Express Tribune published an article that provided some shock not only in Pakistan,
where the issue was detected, but throughout the polio eradication community. The Prime
Minister's polio cell, the World Health Organisation (WHO), and the United Nations
Children's Fund (UNICEF) confirmed a newly-found strain of the polio virus.
The technical reason for the new strain was explained by the international health agencies:
circulating vaccine-derived poliovirus (cVDPV) cases arise when the type 2 poliovirus from the
oral vaccine mutates and attains a form that can cause paralysis after passing through multiple
children in environments with substandard sanitation. Fortunately, polio associated with vaccines
is extremely rare, but a more damning administrative explanation of why this may have happened
in Pakistan is poor routine immunization coverage that enabled these mutations to occur.
Administrative problems include poor scheduling of the current immunization round during a
sacred religious period, which resulted in four districts not participating; on top of this was a more
pressing problem, the global shortage of oral polio vaccines, especially as anti-polio campaigns
are increasing. This calls into question the upcoming second round of immunization in December.
The problem is persistent, since it was reported earlier this year that "Polio coverage (in Pakistan)
remained sub-optimal during the past year in Islamabad, as revealed by an independent evaluation
report on the post-polio campaign conducted by the World Health Organization."
Four endemic countries remain, as seen in the graph, and Pakistan's performance to date is
actually better than some of the others, but the situation is volatile, as is the civil/political
situation in the remaining affected countries. Interestingly, another eradication-targeted disease,
Guinea Worm, was down to 1058 cases in 2011 and remains in only 4 countries, but this is 17
years after the initial date set for its eradication.
Polio and Guinea Worm offer malaria some lessons for the present in countries approaching
pre-elimination now and those who will hopefully join them over the next decade (if global funding
levels are maintained). One lesson is that surveillance is an active part of current polio
eradication efforts, otherwise these reports on progress and its challenges would not be
published. But the key lesson is that regardless of the effectiveness of the technical intervention
(e.g. a vaccine), deployment of the technical intervention is subject to human, administrative,
managerial and social complications.
Polio focuses on a vaccine; malaria has treatment medicines, preventive medicines, insecticide
sprays, treated bednets, diagnostic tests, and maybe also one day an effective vaccine. It is not
too early to plan on how to coordinate all this into achieving effective disease elimination,
nationally, regionally and globally.

SIAD technique for polio

Food safety and food contamination

Health system performance: financing, stewardship, responsiveness of the health system to
non-health needs of the public

A performance appraisal (PA), also referred to as a performance review, performance
evaluation,[1] (career) development discussion,[2] or employee appraisal[3] is a
method by which the job performance of an employee
is documented and evaluated. Performance appraisals
are a part of career development and consist of
regular reviews of employee performance within
organizations.
Performance Evaluation is an international journal published by Elsevier. The
current Editor-in-chief is Philippe Nain. The journal was previously published by
North-Holland Publisher.

Challenge
Often the biggest challenge facing developing countries is not a lack of information. More
frequently, the challenge is bringing together the many disparate sources and types of data that
are being produced. Developing countries are often overwhelmed with the distinct and
competing requirements for data tied to external program investments, with the greatest burden
falling on the lowest levels of the health system.
In addition, routinely collected data (such as facility-based data) is often not associated with the
intermittently collected data (such as surveys and census data), leaving large gaps in measuring
health systems performance. These challenges often keep policymakers from modifying health
planning and resource allocation based on accurate, timely and relevant health systems data.

Approach
The Health Systems 20/20 strategy for measuring and monitoring health systems is to provide
and maximize the use of a set of established, innovative tools creating a standardized
measurement. A key starting point for countries is to identify the relative strengths and
weaknesses of the health system, priority issues, and potential recommendations by conducting a
health systems assessment.
After an initial assessment, health information systems tools can be implemented to improve
linkages between health care entities at the local, regional, and central levels to increase the flow
of accurate, complete data in a timely manner. HIS strengthening includes leveraging key
analytical tools, such as Geographic Information System technology, to identify trends that
inform program planning and decision making and to correlate service delivery with health
outcomes.

At higher levels, the web-based Health Systems Database allows users to easily compile and
analyze country data from multiple sources to quickly assess the performance of a country's
health system, benchmark performance against other countries on key indicators, and monitor
progress toward system strengthening goals.

Causes of lung cancer in women. Why its incidence is increasing in women

Lung cancer in women differs from lung cancer in men in many ways. Yet, despite obvious
differences in our appearance, we tend to lump men and women together when talking about
lung cancer. This is unfortunate, since the causes, response to various treatments, survival rate,
and even symptoms to watch for differ. What are some facts about lung cancer in women?

Statistics About Lung Cancer in Women


Lung cancer is the leading cause of cancer deaths in women, killing more women
each year than breast cancer, uterine cancer, and ovarian cancer combined. While
smoking is the number one cause, 20% of these women have never touched a
cigarette.

Once considered a man's disease, lung cancer is no longer discriminatory. In 2005, the last
year for which we have statistics, 82,271 women (vs 107,416 men) were diagnosed with lung
cancer, and 69,078 (vs 90,139 men) died.
While lung cancer diagnoses decreased each year from 1991-2005 for men, the incidence
increased 0.5% each year for women. The reason for this is not completely clear.
Lung cancer in women occurs at a slightly younger age, and almost half of lung cancers in
people under 50 occur in women.

Causes
Even though smoking is the number one cause of lung cancer in women, a higher
percentage of women who develop lung cancer are life-long non-smokers. Some of
the causes may include exposure to radon in our homes, secondhand smoke, other
environmental and occupational exposures, or a genetic predisposition. Recent
studies suggest infection with the human papilloma virus (HPV) may also play a
role.

Smoking Status
Some, but not all, studies suggest that women may be more susceptible to the
carcinogens in cigarettes, and women tend to develop lung cancer after fewer years
of smoking.

Lung Cancer Types


Whereas men are more likely to develop squamous cell lung cancer, another form of
non-small cell lung cancer, adenocarcinoma is the most common type of lung
cancer found in women.

BAC (bronchioloalveolar carcinoma) is a rare form of lung cancer that is more common in
women. For unknown reasons, the incidence of BAC appears to be increasing worldwide,
especially among younger, non-smoking women.

Symptoms
We hear about the symptoms of a heart attack being different in women than in
men. The same could hold true for lung cancer. Squamous cell lung cancer (the type
more common in men) grows near the airways, and often presents with the classic
symptoms of lung cancer, such as a cough and coughing up blood.
Adenocarcinoma (the type of lung cancer that is more common in women) often
develops in the outer regions of the lungs. These tumors can grow quite large or
spread before they cause any symptoms. Symptoms of fatigue, the gradual onset of
shortness of breath, or chest and back pain from the spread of lung cancer to bone,
may be the first sign that something is wrong.


Lung Cancer in Women: The Role of Estrogen


It is likely that estrogen plays a role in the development and progression of lung
cancer and research is being done to define this further. Women who have their
ovaries removed surgically before menopause may be at higher risk of developing
lung cancer. Recent research suggests that treatment with estrogen and
progesterone (hormone replacement therapy) after menopause may increase the
risk of dying from lung cancer. In contrast, both the use of birth control pills and
hormone replacement therapy (excepting those who use hormones after surgical
menopause) are associated with a lower risk of developing lung cancer. This
contrast between dying from, and development of, lung cancer, suggests that
estrogen plays more than one role in lung cancer.


Treatment
Women have historically responded better than men to a few of the chemotherapy
medications used for lung cancer. One of the new targeted therapies, erlotinib
(Tarceva), also appears to be more effective for women. Women who are able to be
treated with surgery for lung cancer also tend to fare better. In one study, the median
survival after surgery for lung cancer was twice as long for women as for men.

On the other hand, even though the National Cancer Institute recommends that all patients with
stage 3 lung cancer be considered candidates for clinical trials, women are less likely to be
involved in clinical trials than are men.

Survival
The survival rate for lung cancer in women is higher than for men at all stages of
the disease. Sadly, the overall 5-year survival rate is only 16% (vs 12% for men).

Awareness and Funding


Even though many more women die from lung cancer than breast cancer, much
more funding is devoted to breast cancer research than lung cancer research.
According to the Lung Cancer Alliance, federal research funding in 2007 from the
National Cancer Institute, Department of Defense, and Centers for Disease
Control amounted to $23,754 per breast cancer death, and only $1,414 per lung
cancer death. Due to a lower survival rate, and the symptoms of lung cancer
(many survivors cannot walk and run for the cure), as well as the stigma, private
fundraising also lags significantly behind that of breast cancer.

Growth Charts show you... how your child compares to other children his/her age.

What length of time is needed to see my child's growth pattern?

The more recorded measurements you have the better! Seeing a "pattern" of
growth over several years helps you understand how your child has
progressed. Most Pediatric Endocrinologists (growth specialists) want at least
12 months (measuring at the beginning and end of that year) to establish a
growth pattern.
IMPORTANT NOTE: If you get measurement records from other sources- you
MUST be careful! If they measured your child incorrectly (with his/her shoes
on, or with "items" in their hair, feet not totally flat or without making them
stretch fully etc.)- it will make a big difference on their growth chart as it is
plotted out. So don't panic if some items don't seem to line up correctly.

INTERPRETING / Understanding THE GROWTH CHART (using height as the example)

A growth chart shows how a child's height compares to other children the
exact same age and sex. After the age of 2, most children maintain fairly
steady growth until they hit puberty. They generally follow close to the same
percentile they had at the age of 2. Children over 2 years of age who move
away (losing or gaining more than 15 percentile points) from their
established growth curve should be thoroughly evaluated and followed by a
doctor, no matter how tall they are. Here is an example of a growth chart and
an explanation about how to read/figure it out.

How do I figure out what percentile my child is in?

On each growth chart there is a series of lines curving from the lower left
and climbing up to the right side of the chart. These lines help people follow
along (so to speak) so that you can see where your child falls on a growth
curve.

What do percentiles mean?


Percentiles are the most commonly used markers to determine the size and
growth patterns for children. Percentiles rank a child by showing what percent
of kids would be smaller or taller than your child. If your child is in the 5th
percentile, 95 out of 100 children the same sex and age, would be taller than
your child. If your child is in the 70th percentile, he or she is taller than 70 out
of 100 children the same age and sex.
Please keep in mind that your child's percentile doesn't necessarily indicate
how well they are growing. A child at the 5th percentile can be growing just
as well as a child at the 95th percentile. It is more important to look at your
child's growth over time. If he/she has always been at the 5th percentile, then
he/she is likely growing normally. It would be concerning if your child had
previously been at the 50th or 75th percentile and had now fallen down to
the 25th or lower percentile.
It is not uncommon for children under the age of 2 to change percentiles.
However, after the age of 2.5 to 3 years, children should follow their growth
curves fairly closely. Again, discuss any concerns with your Pediatrician.
Keep in mind that many factors influence how children grow, including their
genetic potential (how tall their parents and other family members are),
underlying medical problems (such as congenital heart disease, kidney
disease, syndromes, etc.), and their overall nutrition, all of which play a big
role in every child's growth and development.

If you are concerned about your child's height or weight, talk with your
Pediatrician. Continue to watch the growth annually (or more frequently if you
see your child falling below a normal pattern). It is also important to make
sure your child is not crossing percentiles in an upward swing because this
too can represent a problem (see Precocious Puberty).

Our imaginary child, Sally Sue, is 6 years old and stands 45.5 inches (115 cm) tall for
this example.

If you look at the very bottom of the chart- you will see numbers starting with
the number 2. Those numbers are the age of the child. In this example, we
listed our sample girl as being 6 years old. Therefore, her growth is on that
line on the bottom.

Next we found the mark on the left side of the page that matched her height
45.5 inches (115cm).

After we had her height and her age...we matched the two points and placed
the blue dot where that information met.

Now.... see the curved lines going from the lower left side upwards towards
the right side? Those lines have numbers too. If you enlarge the picture (or
look at the growth chart generated from our automatic growth chart) you will

see that the lowest line is the 3rd percentile and the top curved line is the
97th percentile. This chart shows that Sally Sue is on the 50th percentile.
That means that out of 100 girls her same age, half are taller and half are
smaller than she is. If she were on the 10th percentile, it would mean that
she was taller than 10 girls and shorter than 90 girls her same age. At the
50th percentile (following the line to age 16 or so) you might guesstimate
that her final adult height would be somewhere between 63.5 and 64.5
inches tall. But that is true guesswork! Genetics and many other factors play a
huge role in this process.
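To illustrate the plotting step above in code, the sketch below interpolates a measured height
between reference centile values to estimate the centile band. The reference heights used here are
made-up placeholder numbers (not actual CDC/WHO data), chosen only so that 115 cm sits on the
50th centile, as in the Sally Sue example.

# Illustrative only: locate a child's height among reference centile values.
# The reference heights below are placeholder numbers (not real CDC/WHO data),
# chosen so that 115 cm falls on the 50th centile, as in the Sally Sue example.
import numpy as np

centiles = np.array([3, 10, 25, 50, 75, 90, 97])
heights_cm = np.array([107.0, 109.5, 112.0, 115.0, 118.0, 120.5, 123.0])

def approximate_centile(height_cm: float) -> float:
    # Linearly interpolate the centile for a measured height
    return float(np.interp(height_cm, heights_cm, centiles))

print(approximate_centile(115.0))   # -> 50.0 (on the 50th centile)
print(approximate_centile(110.0))   # -> roughly the 13th centile

With real chart data, the same interpolation idea is what lets you read off "Sally Sue is on the
50th centile" from the plotted point.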

Toddler and Infant Growth Chart


Information
The toddler and infant growth chart page has links to growth charts for boys and
girls from birth to 2 years of age and for boys and girls over 2 years of age, as well
as head circumference charts for infants under 2 years of age.

You can also read about how to interpret these charts for your child.

The growth charts are those used by the Centers for Disease Control and Prevention (CDC) - the under two year
charts are based on data from the WHO (World Health Organisation) as the WHO data is better
for children under 2 years.

How are growth charts interpreted?


What is normal growth?

What does it mean if my baby has crossed centile lines?

Is length in babies reliable?

What if weight and height centiles are different?

What if weight centile is much more than height centile?

When to see your doctor

Download growth charts

How are Growth Charts Interpreted?

Before you open the links to the infant growth chart of your choice, you will want to understand
how growth charts are made.
Growth Charts are created by looking at a cross section of the
population at one time and then plotting the weight and height of
all the infants and toddlers.
There is a range because we are not all the same size. That range is represented by
centile (or percentile) lines on the child or infant growth chart.
The growth charts here have lines representing the 3rd, 10th, 25th, 50th, 75th,
90th, and 97th centiles (also called percentiles) for the over 2 year olds and lines
representing 2nd, 5th, 10th, 25th, 50th, 75th, 90th, 95th and 98th centiles for under
2 year olds.

The 3rd centile line gives an indication where the lower end of the normal range is - actually 3%
of normal infants and toddlers will be below the 3rd centile (or for the 2nd centile, 2% of the
population will sit below this centile).
The 50th centile is where 50% of the population will sit.
The 97th centile gives an indication where the upper end of the normal range is - actually 3% of normal infants and toddlers will be above the 97th centile.
So anywhere between the 2nd and 98th centiles is appropriate growth. It can be
normal to be slightly above the 98th centile or slightly below the 2nd centile. What
is more important than an individual reading is the trend.

What is normal growth?

It is far more important to look at the toddler or infant growth chart trend than one reading.
Generally infants and toddlers should follow one centile line (or grow parallel to one centile line)
for height and weight.
Trends are easier to see when time has passed so don't be concerned if there isn't
an appropriate increase in weight over 1 week - wait and see what happens over 3
months. Children get lots of viral illnesses so they may have weight that fluctuates
with those illnesses - over time, they will usually manage to put on the required
weight.
Normal growth is a trend that follows a centile line and is similar for height and
weight on the infant growth chart.
What does it mean if my baby crossed centile lines?

Sometimes, there will be a natural moving across the centile lines for weight on the
infant growth chart in the first 6 months or so. This is because babies who are
destined to be small people, because of their genes, can be big babies. They have
to get on their "right" centile line and will do this over the first months.
This is called "Catch Down Growth" but once your baby finds her growth centile,
she should follow that line on the infant growth chart. If she keeps crossing centile
lines, that is not normal. Usually, "catch down growth" involves starting at a high
centile like 90th and then crossing no more than 2 centile lines, say to the 50th on
the infant growth chart.
I often have babies referred to me because their weight is falling away from the
initial centile on the infant growth chart. If the baby is well and is feeding
appropriately, I don't worry too much and just wait and see what happens over the
next month. I don't advocate weekly weighing in these cases because it can be
misleading and stressful. Particularly if you are breast-feeding your baby, you don't
need to be stressed about your baby's weight.

Is length in babies a reliable measurement?

Not usually. It depends how much your baby is stretched out before measuring.
Height is a more reliable measurement when your child can stand up straight.


How do I interpret weight centiles that are different from height centiles?

As well as looking at the trend, it is also important to look at the weight in relation
to the height - being on the 90th centile for weight is not appropriate if your toddler
is on the 2nd or 3rd centile for height.
Often infants and toddlers are one centile apart for weight and height and this is
usually not a problem - so on the 10th centile for height and the 25th centile for
weight or vice versa is fine.
What if my child's weight on a centile line is much more than her height centile
line?

If your toddler's weight is more than 2 centiles above the height centile (weight on
the 50th centile but height on the 10th centile or below), she is overweight. Your
doctor can confirm this by calculating the BMI (Body Mass Index).
Be aware of her eating habits and watch her weight closely to prevent it moving
even further away from her height centile. It is much easier to prevent obesity than
to try and treat it later.
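For reference, BMI is weight in kilograms divided by the square of height in metres. The short
sketch below applies the standard WHO adult cut-off points; for toddlers and children the
calculated BMI is not judged against these adult cut-offs but plotted on a BMI-for-age centile
chart, as your doctor would do.

# BMI = weight (kg) / height (m)^2, with the standard WHO adult categories.
# For children, the BMI value is plotted on BMI-for-age centile charts instead.

def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def adult_bmi_category(value: float) -> str:
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal weight"
    if value < 30:
        return "overweight"
    if value < 35:
        return "obese class I"
    if value < 40:
        return "obese class II"
    return "obese class III"

b = bmi(70, 1.75)                      # 70 kg, 1.75 m -> about 22.9
print(round(b, 1), adult_bmi_category(b))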
When to be concerned about your baby or toddler's growth

You should see your doctor if:

your baby crosses more than 3 centile lines

your child's weight centile is more than 2 centiles greater than her height centile

your baby is less than the 3rd centile for weight and is growing away from the centiles

Case fatality graphs

Demography graphs interpretation: IMR age, female reproductive age and its mortality causes,
male productive age and its mortality causes, old age group 4% and its causes

Accident prevention Haddon matrix

Epidemiological transition pointed out by Omran - 4 stages: age of pestilence and famine, age of
receding pandemics, age of degenerative and man-made diseases (early degenerative disease
stage), age of delayed degenerative diseases (late degenerative disease stage)

BMI Cut off points for under weight, overweight, four levels of obese

SAAL seasonal awareness alert letter, measles and dengue stage measured
in terms of DEWS, DMIS

Food safety and food contamination

A child suffering from polio brought for vaccination to clinic which vaccines
to give same as given at stage 0 after looking for BCG scar

Chlorination graph stage 1, 2, 3, 4

Carriers

Survival paradox, e.g. sometimes the obese live longer

Healthy worker effect

Bioremediation therapy use of biological agents to remove contaminants

Bioterrorism

RODS - real-time outbreak and disease surveillance

Demographic graph preindustrial, early industrial, late industrial, developed

SWOT analysis for situation analysis

Hawthorne effect

Health economics: dollars per life-year gained, dollars per quality-adjusted life-year gained.
These are measures used in cost-effectiveness analysis; outcomes can also be measured in terms
of DALYs

Cost benefit analysis outcome measured in monetary terms- not suitable for
health

MDG

PERT analysis

Personality traits

Psychosomatic disorders

Psychosocial disorders
FCPS

PRISM analysis (performance of routine information management system): improve HMIS,
develop performances, track progress, create awareness, develop interventions for strengthening,
develop targets, monitoring and evaluation, create knowledge

Social marketing models in relation to health: product, price, place, promotion

Health system performance: financing, stewardship, responsiveness of the health system to
non-health needs of the public

Double burden of disease: communicable and non-communicable, due to lifestyle

Ob gene

Hidden hunger

Policy document of health sector reforms

Factorial Randomized Control Trials

Strategic planning

Health planning models

VIVA FCPS-II
I remember only these questions. There were also many graphs for interpretation. They also
asked about different sampling techniques and research designs
Q.1 Qualities of a good leader
2. What is health sector reform?
3. What is validity of a research design? How will you increase the validity of a study?
4. What are types of validity?

5. What social factors are responsible for differences in development of developed &
developing countries?
6. What is motivation & its types?
7. Causes of lung cancer in women. Why its incidence is increasing in women?
8. What is pink ribbon strategy?
9. What is confounding and how it is removed?
10. What percentage of the budget is allocated to health in Pakistan & how much should it
ideally be?
11. What is the chemoprophylaxis & treatment of leprosy?

Introduction
Leprosy (Hansen's disease (HD) ) is a chronic infectious disease, caused by the bacillus
Mycobacterium leprae, which affects the skin and peripheral nerves leading to skin lesions, loss
of sensation, and nerve damage. This in turn can lead to secondary impairments or deformities of
the eyes, hands and feet. For treatment purposes, leprosy is classified as either paucibacillary
(PB) or multibacillary (MB) leprosy. The standard treatment for leprosy is multidrug therapy
(MDT) [1]. PB patients are treated for 6 months with dapsone and rifampicin; MB patients are
treated for 12 months with dapsone, rifampicin and clofazimine.
The World Health Organisation (WHO) had set a goal in the early 1990s to eliminate leprosy as a
public health problem by the year 2000. Elimination was defined as reducing the global
prevalence of the disease to less than 1 case per 10 000 population [2]. The WHO elimination
strategy was based on increasing the geographical coverage of MDT and patients' accessibility to
the treatment. The expectation existed that reduction in prevalence through expanding MDT
coverage would eventually also lead to reduction in incidence of the disease and ultimately to
elimination in terms of zero incidence of the disease. An important assumption underlying the
WHO leprosy elimination strategy was that MDT would reduce transmission of M. leprae
through a reduction of the number of contagious individuals in the community [3].
Unfortunately, there is no convincing evidence for this hypothesis [4].
With a total of 249 007 new patients detected globally in 2008 [5], it remains necessary to
develop new and effective interventions to interrupt the transmission of M. leprae. BCG
vaccination against tuberculosis offers some but not full protection against leprosy and in the
absence of another more specific vaccination against the bacillus other strategies need to be
developed, such as preventive treatment (chemoprophylaxis) of possible sub-clinically infected
people at risk of developing leprosy. Recently, the results were published of a randomised
controlled trial into the effectiveness of single dose rifampicin (SDR) in preventing leprosy in

contacts of patients [6]. It was shown that this intervention is effective at preventing the
development of leprosy at two years and that the initial effect was maintained afterwards.
In order to assess the economic benefits of SDR as an intervention in the control of leprosy, we
performed a cost-effectiveness analysis. We provide an overview of the direct costs of this new
chemoprophylaxis intervention and calculate the cost-effectiveness compared to standard MDT
provision only.

Discussion
Chemoprophylaxis with single dose rifampicin for preventing leprosy among contacts is a
cost-effective prevention strategy. At the program level, an incremental $6 009 was invested and 38
incremental leprosy cases were prevented, resulting in an ICER of $158 per one additional
prevented leprosy case.
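As a quick check of the arithmetic, the ICER here is simply the incremental cost divided by the
incremental number of cases prevented; the short sketch below reproduces the figure quoted above.

# Incremental cost-effectiveness ratio (ICER) for the chemoprophylaxis
# programme, using the figures quoted in the text.
incremental_cost = 6009        # additional programme cost in US$
cases_prevented = 38           # incremental leprosy cases prevented

icer = incremental_cost / cases_prevented
print(f"ICER = ${icer:.0f} per additional leprosy case prevented")   # about $158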
This is the first report on cost-effectiveness of single dose rifampicin as chemoprophylaxis in
contacts of leprosy patients. The analysis is based on the results of a large randomized controlled
trial in Bangladesh [6]. For the analysis, the health care perspective was taken because indirect
cost data were largely unavailable. The health care perspective excludes indirect costs (patient
costs), such as travel costs, loss of income due to illness and clinic visits, and long term
consequences of disability. Estimating these costs was beyond the scope of this study, but
inclusion would have rendered the intervention even more cost-effective. Another limitation of
the study is that a static approach was taken to the analysis, measuring the effect of the
intervention after two years only. After these two years, there was no further reduction of new
cases in the chemoprophylaxis arm of the trial compared to the placebo arm. Because leprosy is
an infectious disease, with person-to-person transmission of M. leprae, one can expect that
prevention of primary cases (as recorded in the trial) will lead to further prevention of secondary
cases. In time, this would lead to further cost-effectiveness of the intervention. Unfortunately, we
could not apply such a dynamic analysis approach because there is insufficient information about
the long term effects of the intervention, including the number of secondary cases prevented and
the number of primary cases prevented after two years that will eventually develop leprosy after
a longer period of time, beyond the 4 years observation period of the trial.
It is also important to understand that the results of the COLEP trial reflect a comparison
between the chemoprophylaxis intervention and standard MDT treatment plus contact surveys at
2-year intervals with treatment of newly diagnosed cases among contacts. A contact survey in
itself is an intervention that reduces transmission in contact groups and thus new leprosy patients
among contacts. The provision of chemoprophylaxis to contacts requires contact tracing, but
contact tracing is not part of leprosy control programs in many countries and doing so would
increase program costs considerably. WHO however, recognizes the importance of contact
tracing and now recommends that it is introduced in all control programs [21]. This would then
also lay a good foundation for introducing chemoprophylaxis.
WHO reports regarding cost-effectiveness analyses recommend using disability adjusted life
years (DALY) as outcome measure for such studies [22]. In leprosy two measures are common to
express disability: WHO grade 1 and 2 [23]. The disability weight for grade 2 disability (visible

deformity) has been determined at 0.153 [24], but no weight is available for grade 1. Of all
newly detected leprosy cases, a relatively low percentage (2 to 35%) have grade 2 disability [25].
In our study we chose the number of leprosy cases prevented as the outcome, because there is
little information available about survival of patients with grade 2 disability and also because the
choice of DALYs would have given a less favourable result due to the low weight of leprosy
disability.
There are a number of issues to take into account when relating the outcome of this study to
other countries. Firstly, the cost level to conduct leprosy control will differ per country, due to
economic standard, budget allocated to primary health care, salaries of health care workers, etc.
In our calculation, program costs were similar for both the standard MDT treatment and
chemoprophylaxis intervention, but these costs will vary per country. The treatment costs are
based on real cost estimates and will vary less between countries and programs. Therefore the
actual costs will differ, but the conclusion that the intervention is cost-effective is very likely to
remain the same. Secondly, the clinical presentation of leprosy differs between countries and
regions. Globally the distribution is around 40% for MB and 60% for PB in newly detected
leprosy cases, but with widely varying ratios between countries [25]. Since costs for treating PB
and MB leprosy are different, these differences are likely to affect the outcome of the cost-effectiveness analysis. Thirdly, the percentage of newly detected cases that are a household
contact of a known leprosy patient differs per country and is possibly determined by the
endemicity level of leprosy in a country or area. In Bangladesh, in the high endemic area where
the COLEP study was conducted, approximately 25% of newly detected cases had a known
index case within the family, whereas in a low endemic area (Thailand) this proportion was 62%
[26]. An intervention aimed at close (household) contacts may therefore be more cost-effective in
countries where relatively many new cases are household contacts. But the background and
implications of such differences on effectiveness of chemoprophylaxis needs further research.
Only a few articles have been published about cost-effectiveness analyses of interventions in
leprosy [27]. Most articles assess small parts of leprosy control, such as footwear provision [28],
MDT delivery costs [29], or the economic aspects of hospitalisation versus ambulatory care of
neuritis in leprosy reactions [30]. Only two studies provided a more general cost-effect analysis.
Naik and Ganapati included several costs in their economic evaluation, but a limitation of the
study is the lack of reference about how they obtained their cost data [31]. Remme et al. based
the cost calculations in their study on the limited available published cost data, program
expenditure data and expert opinion, and also provide limited insight into how they obtained
certain costs and effects [30]. Neither study explains well how the costs were obtained (e.g.
real costs, bottom-up or top-down costs). Our current article is basically one of the first
structured cost-effectiveness analyses for leprosy, presenting an overview of the costs involved, and
can be used for the assessment of the costs of leprosy control in general.
This report shows that chemoprophylaxis with single dose rifampicin given to contacts of newly
diagnosed leprosy patients is a cost-effective intervention strategy. Implementation studies in the
field are necessary to establish whether this intervention is acceptable and feasible in other
leprosy endemic areas of the world.

Treatment of leprosy
Several drugs are used in combination in multidrug therapy (MDT). (See table) These drugs must
never be used alone as monotherapy for leprosy.
Dapsone, which is bacteriostatic or weakly bactericidal against M. leprae, was the mainstay
treatment for leprosy for many years until widespread resistant strains appeared. Combination
therapy has become essential to slow or prevent the development of resistance. Rifampicin is
now combined with dapsone to treat paucibacillary leprosy. Rifampicin and clofazimine are
now combined with dapsone to treat multibacillary leprosy.
A single dose of combination therapy has been used to cure single lesion paucibacillary leprosy:
rifampicin (600 mg), ofloxacin (400 mg), and minocycline (100 mg). The child with a single
lesion takes half the adult dose of the 3 medications.
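Purely as an illustration, the regimens described above can be encoded as a small lookup, with
the rule from the text that an uncertain classification or a positive skin smear defaults to the
multibacillary regimen. This is a sketch of the information in this section, not clinical guidance.

# Illustrative encoding of the WHO multidrug therapy (MDT) regimens described
# above; a sketch only, not clinical guidance.

MDT_REGIMENS = {
    "paucibacillary (PB)": {
        "drugs": ["rifampicin", "dapsone"],
        "duration_months": 6,
    },
    "multibacillary (MB)": {
        "drugs": ["rifampicin", "dapsone", "clofazimine"],
        "duration_months": 12,
    },
    "single-lesion PB": {
        # single dose of rifampicin 600 mg + ofloxacin 400 mg + minocycline 100 mg
        # (a child with a single lesion takes half the adult dose)
        "drugs": ["rifampicin 600 mg", "ofloxacin 400 mg", "minocycline 100 mg"],
        "duration_months": 0,   # one-time administration
    },
}

def regimen_for(classification: str) -> dict:
    # If the classification is uncertain or the skin smear is positive,
    # the text above says to treat as multibacillary leprosy.
    return MDT_REGIMENS.get(classification, MDT_REGIMENS["multibacillary (MB)"])

print(regimen_for("paucibacillary (PB)"))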
WHO has designed blister pack medication kits for both paucibacillary leprosy and for
multibacillary leprosy. Each easy-to-use kit contains medication for 28 days. The blister pack
medication kit for single lesion paucibacillary leprosy contains the necessary medication for the
one time administration of the 3 medications.
Any patient with a positive skin smear must be treated with the MDT regimen for multibacillary
leprosy. The regimen for paucibacillary leprosy should never be given to a patient with
multibacillary leprosy. Therefore, if the diagnosis in a particular patient is uncertain, treat that
patient with the MDT regimen for multibacillary leprosy.
Ideally, the patient should go to the leprosy clinic once a month so that clinic personnel may
supervise administration of the drugs prescribed once a month. However, many countries with
leprosy have poor coverage of health services and monthly supervision of drug administration by
health care workers may not be possible. In these cases, it may be necessary to designate a
responsible third party, such as a family member or a person in the community, to supervise the
monthly drug administration. Where health care service coverage is poor and supervision of the
monthly administration of drugs by health workers is not possible, the patient may be given more
than a 28-day supply of multidrug therapy blister packs. This tactic helps make multidrug
therapy easily available, even to those patients who live under difficult conditions or in remote
areas. Patients who ask for diagnosis and treatment are often sufficiently motivated to take full
responsibility for their own treatment of leprosy. In this situation, it is important to educate the
patient regarding the importance of compliance with the regimen and to give the patient
responsibility for taking his or her medication correctly and for reporting any untoward signs and
symptoms promptly. The patient should be warned about possible lepra reactions.

12. Prevention of dengue fever. What is WHO doing for dengue fever?

Dengue is transmitted by the bite of a mosquito infected with one of the four dengue virus
serotypes. It is a febrile illness that affects infants, young children and adults with symptoms
appearing 3-14 days after the infective bite.
Dengue is not transmitted directly from person-to-person and symptoms range from mild fever,
to incapacitating high fever, with severe headache, pain behind the eyes, muscle and joint pain,
and rash. There is no vaccine or any specific medicine to treat dengue. People who have dengue
fever should rest, drink plenty of fluids and reduce the fever using paracetamol or see a doctor.
Severe dengue (also known as dengue hemorrhagic fever) is characterized by fever, abdominal
pain, persistent vomiting, bleeding and breathing difficulty and is a potentially lethal
complication, affecting mainly children. Early clinical diagnosis and careful clinical management
by trained physicians and nurses increase survival of patients.

14. Causes of air pollution.

Sources

Image captions: dust storm approaching Stratford, Texas; controlled burning of a field outside of
Statesboro, Georgia in preparation for spring planting.

There are various locations, activities or factors which are responsible for releasing pollutants
into the atmosphere. These sources can be classified into two major categories.
Anthropogenic (man-made) sources:
These are mostly related to the burning of multiple types of fuel.

Stationary Sources include smoke stacks of power plants, manufacturing
facilities (factories) and waste incinerators, as well as furnaces and other
types of fuel-burning heating devices. In developing and poor countries,
traditional biomass burning is the major source of air pollutants; traditional
biomass includes wood, crop waste and dung. [6][7]

Mobile Sources include motor vehicles, marine vessels, and aircraft.

Chemicals, dust and controlled burn practices in agriculture and forest
management. Controlled or prescribed burning is a technique sometimes
used in forest management, farming, prairie restoration or greenhouse gas
abatement. Fire is a natural part of both forest and grassland ecology and
controlled fire can be a tool for foresters. Controlled burning stimulates the
germination of some desirable forest trees, thus renewing the forest.

Fumes from paint, hair spray, varnish, aerosol sprays and other solvents

Waste deposition in landfills, which generate methane. Methane is highly
flammable and may form explosive mixtures with air. Methane is also an
asphyxiant and may displace oxygen in an enclosed space. Asphyxia or
suffocation may result if the oxygen concentration is reduced to below 19.5%
by displacement.

Military resources, such as nuclear weapons, toxic gases, germ warfare and rocketry

Natural sources:

Dust from natural sources, usually large areas of land with little or no
vegetation
Methane, emitted by the digestion of food by animals, for example cattle

Radon gas from radioactive decay within the Earth's crust. Radon is a
colorless, odorless, naturally occurring, radioactive noble gas that is formed
from the decay of radium. It is considered to be a health hazard. Radon gas
from natural sources can accumulate in buildings, especially in confined
areas such as the basement and it is the second most frequent cause of lung
cancer, after cigarette smoking.

Smoke and carbon monoxide from wildfires

Vegetation, in some regions, emits environmentally significant amounts of
VOCs on warmer days. These VOCs react with primary anthropogenic
pollutants (specifically NOx, SO2, and anthropogenic organic carbon
compounds) to produce a seasonal haze of secondary pollutants. [8]

Volcanic activity, which produces sulfur, chlorine, and ash particulates

15. What is correlation? What is r²? What is the line of best fit?

Charts mostly include population pyramids, scatter diagrams, line graphs and tables, e.g.
related to nutrition, validity, reliability, etc.

NADRA

NADRA is one of the leading System Integrators in the global identification sector and
boasts extensive experience in designing, implementing and operating solutions for
corporate and public sector clients. NADRA offers its clients a portfolio of
customizable solutions for identification, e-governance and secure documents.
NADRA has successfully implemented the Multi-Biometric National Identity Card &
Multi-Biometric e-Passport solutions for Pakistan, Passport Issuing System for
Kenya, Bangladesh High Security Drivers License, and Civil Registration
Management System for Sudan amongst other projects.

Sustainable Human Development


Education, Food & Nutrition, Governance, Natural Resources, Urban Development

The socioeconomic challenges facing populations, especially in developing and least-developed countries, are enormous. These challenges underscore the need to strengthen
the institutions for sustainable human development in these countries. In the context of
contemporary development discourse and practice, the UNU Institute for Sustainability
and Peace (UNU-ISP) seeks to contribute to strengthening the institutions for sustainable
human development in developing countries. To that end, the Sustainable Human
Development Programme engages in the following: i) research on governance and
transparent management of revenues from extractive minerals in resource-rich
developing countries, including the role of transnational corporations in the extractive
industry; ii) research on how international trade, investment and emerging
biotechnological innovations affect food security; iii) the implications of emerging
public-private sector partnerships for sustainable development; iv) challenges of
sustainable rural/urban livelihoods in Africa; and v) capacity development focusing on
the role of higher education in sustainable development in Africa.
The research of this programme is closely linked to the research and teaching activities of
postgraduate programmes of UNU-ISP, including the Master of Science in Sustainability,
Development and Peace and a forthcoming PhD programme, which is expected to launch
in September 2012. The programme further contributes to other UNU-ISP teaching and
capacity development activities, such as the Postgraduate Course on Building Resilience
to Climate Change and other short-term postgraduate and credited courses.

Twinning
This programme envisages twinning with the UNU Institute for Natural Resources in
Africa (UNU-INRA) to jointly organize and facilitate two project workshops in Africa: i)
Impacts of Trade and Investment-Driven Biotechnological Innovations on Food Safety
Security in Africa, and ii) Governance and Institutional Reform for the Sustainable
Development and Use of Africa's Natural Resources. In the planning of the workshops,
UNU-ISP and UNU-INRA will collaborate to identify relevant African scholars, experts
and policymakers to participate in the workshops in order to make policy-oriented
recommendations tailored to sustainable policy reform on these themes.

Focal Point
Dr. Obijiofor Aginam, Academic Programme Officer, is the focal point for this
programme.

Purpose
This programme seeks to find sustainable solutions to some of the most pressing
development issues facing developing countries: food security/hunger, management of
natural resources, rural/urban development and the role of higher education in sustainable
development in Africa.

Approach
Links with the parallel UNU-ISP programmes, the relevant UNU institutes, and leading
universities mostly in developing countries, as well as civil society, will facilitate a
holistic approach to these international development problems. The programme will
combine perspectives from a range of disciplines in the natural sciences, social sciences
and humanities in researching these problems.

Gender
Equitable geographical and gender representation will be sought in all the programme's
activities, including selection of project participants and access to research outcomes,
with particular attention to developing countries. It is envisaged that there will be a
gender balance, for instance, in the selection of workshop participants and other planned
outreach activities.

Target Audience
The programme aims to advance academic and policy debate that will enrich the
academic and policymaking communities in most developing countries; the UN system;
international, regional and sub-regional organizations; national governments; and civil
society.

Intended Impact
Impact: Influencing policymaking in the United Nations System
Target: This programme targets the United Nations Development Programme (UNDP)
and all other UN agencies working on the UN Millennium Development Goals, United
Nations Conference on Trade and Development (UNCTAD), the Food and Agriculture
Organization (FAO), the World Health Organization (WHO), the United Nations
Environment Programme (UNEP), the Secretariat of the Convention on Biological
Diversity, UN-Habitat, United Nations Industrial Development Organization (UNIDO)
and the United Nations Educational, Scientific and Cultural Organization (UNESCO).

How: The work and mandate of each of these UN agencies relate to aspects of
development. This programme contributes by addressing the gaps and limits of their
policies by generating concrete outputs of the highest quality to catalyze and inform
policy reform in the work and mandates of the relevant UN agencies.
Impact: Influencing policymaking at the national level
Target: The programme targets relevant development-related sectors in developing
countries.
How: This programme generates ideas aimed at policy reform in the relevant
development sectors of most developing countries. Applied policy recommendations are
made through the publication and dissemination of policy briefs and other outreach
activities.
Impact: Furthering knowledge in an academic field
Target: The programme focuses on such topics as governance and management of
Africa's natural resources, biotech, foreign investment/trade and food security,
rural/urban livelihoods, transnational corporations and foreign direct investment, and
higher education and development in Africa.
How: This programme brings together scholars from the relevant disciplines to study and
produce policy briefs and edited books aimed at addressing the gaps in the literature
relevant to these topics.
Impact: Curriculum development
Target: The programme targets selected universities and higher education sectors in
Africa and Asia.
How: The programme identifies common themes in developing joint and collaborative
curricula and research networks between leading Japanese universities and universities in
Africa and other parts of Asia.
Impact: Teaching
Target: This programme relates to one of the core courses offered in the UNU-ISP
master's degree programme.
How: The UNU-ISP Master's Degree on Sustainability, Development and Peace offers a
core course on International Cooperation and Development that focuses on some of the
themes to be covered in this programme.

Research Findings
The programme builds on existing development discourse by leading scholars,
policymakers and civil society activists as well as the work of relevant regional and
international institutions. It is envisaged that the project workshops under this program
will lead to research findings that will be published in policy briefs, special issues of
academic and policy journals and peer reviewed edited books, all aiming to inform the
academic and policy communities, especially in developing countries.

Policy Bridging
The programme targets policy reform in the development sectors of developing countries.
As such, it aims to make knowledge practically accessible and realizable by developing
user-friendly, accessible and policy-oriented recommendations. The programme aims to
bridge the dichotomy between the academic and policy communities by producing
concise policy briefs and manuals that target very specific sectors and the way forward in
addressing the gaps in these sectors.

Value Added
The programme links with relevant development programmes of leading UN agencies:
UNDP, FAO, UNCTAD, UNIDO, UNESCO, and UNEP. It also links with international
development research and activities of the other research and training programmes in the
UNU system, especially the UNU World Institute for Development Economics Research,
the UNU Institute for Environment and Human Security, UNU-INRA, and the UNU
Vice-Rectorate in Europe (through the UNU-ISP Operating Unit in Bonn). It seeks to
build on the expertise and existing capacities in selected universities across the world to
address some of the most pressing socioeconomic development problems facing the
population, especially in developing countries. The programme also draws from available
expertise in civil society organizations.

Dissemination
Research to be undertaken by this programme will result in peer-reviewed academic
publications, policy briefs, and conference presentations that will be widely disseminated
within the relevant academic, (global, regional, national) policy and epistemic
communities.

Timeline/Programme Cycle
Most projects will proceed on a two-year timeline from initiation to completion, resulting
in an academic publication and/or other output within a third year.

Evaluation
Outputs of this programme will be mainly evaluated through an independent academic
peer-review process (for publications), and through student evaluations (for teaching).

Challenges
Poverty reduction strategies and development policies aimed at tackling socioeconomic
inequalities are some of the most pressing policy and governance issues facing
developing countries. One major challenge of this programme is to contribute to the
development discourse and practice in the context of the socioeconomic disparities

between the least-developed, developed and developing countries. To achieve this, the
programme seeks to build on existing good development practices and develop new
interdisciplinary perspectives and approaches to pressing human development problems
in developing countries.

Theory
Disinfection with chlorine is very popular in water and wastewater treatment
because of its low cost, ability to form a residual, and its effectiveness at low
concentrations. Although it is used as a disinfectant, it is a dangerous and
potentially fatal chemical if used improperly.
Although the disinfection process may seem simple, it is actually quite
complicated. Chlorination in wastewater treatment systems is a fairly
complex science which requires knowledge of the plant's effluent characteristics.
When free chlorine is added to the wastewater, it takes on various forms depending
on the pH of the wastewater. It is important to understand the forms of chlorine
which are present because each has a different disinfecting capability. The acid
form, HOCl, is a much stronger disinfectant than the hypochlorite ion, OCl-. The
graph below depicts the chlorine fractions at different pH values (Drawing by Erik
Johnston).
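The split between the two forms at a given pH can be estimated directly from the dissociation
equilibrium. A minimal Python sketch, assuming a pKa of roughly 7.5 for HOCl at 25 C (an
approximate textbook value, not taken from this text):

def hocl_fraction(ph, pka=7.5):
    # Fraction of free chlorine present as hypochlorous acid (HOCl);
    # the remainder is hypochlorite ion (OCl-).
    return 1.0 / (1.0 + 10 ** (ph - pka))

for ph in (6.0, 7.0, 7.5, 8.0, 9.0):
    frac = hocl_fraction(ph)
    print(f"pH {ph}: {frac:.0%} HOCl, {1 - frac:.0%} OCl-")

Consistent with the text, lowering the pH toward 6-7 shifts the balance toward the stronger
disinfectant, HOCl.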

Ammonia present in the effluent can also cause problems as chloramines are
formed, which have very little disinfecting power. Some methods to overcome the
types of chlorine formed are to adjust the pH of the wastewater prior to chlorination
or to simply add a larger amount of chlorine. An adjustment in the pH would allow
the operators to form the most desired form of chlorine, hypochlorous acid, which
has the greatest disinfecting power. Adding larger amounts of chlorine would be an
excellent method to combat the chloramines because the ammonia present would
bond to the chlorine, but further addition of chlorine would stay in the hypochlorous
acid or hypochlorite ion state.
a) Chlorine gas, when exposed to water, reacts readily to form hypochlorous acid,
HOCl, and hydrochloric acid. Cl2 + H2O -> HOCl + HCl
b) If the pH of the wastewater is greater than 8, the hypochlorous acid will dissociate
to yield hypochlorite ion. HOCl <-> H+ + OCl- If, however, the pH is much less than
7, then HOCl will not dissociate.
c) If ammonia is present in the wastewater effluent, then the hypochlorous acid will
react to form one of three types of chloramines depending on the pH, temperature,
and reaction time.
Monochloramine and dichloramine are formed in the pH range of 4.5 to 8.5;
however, monochloramine is most common when the pH is above 8. When the pH
of the wastewater is below 4.5, the most common form of chloramine is
trichloramine (nitrogen trichloride, NCl3), which produces a very foul odor. The
equations for the formation of the different chloramines are as follows
(Reynolds & Richards, 1996):
Monochloramine: NH3 + HOCl -> NH2Cl + H2O
Dichloramine: NH2Cl + HOCl -> NHCl2 + H2O
Trichloramine: NHCl2 + HOCl -> NCl3 + H2O
Chloramines are an effective disinfectant against bacteria but not against viruses.
As a result, it is necessary to add more chlorine to the wastewater to prevent the
formation of chloramines and form other, stronger forms of disinfectants.
d) The final step is that additional free chlorine reacts with the chloramine to
produce hydrogen ions, water, and nitrogen gas, which comes out of solution. In
the case of monochloramine, the following reaction occurs:
2NH2Cl + HOCl -> N2 + 3HCl + H2O
Thus, added free chlorine reduces the concentration of chloramines in the
disinfection process. Instead, the chlorine that is added is allowed to form the
stronger disinfectant, hypochlorous acid.
Perhaps the most important stage of the wastewater treatment process is the
disinfection stage. This stage is most critical because it has the greatest effect on
public health as well as the health of the world's aquatic systems. It is important to
realize that wastewater treatment is not a cut-and-dried process but requires in-depth
knowledge about the type of wastewater being treated and its characteristics to
obtain optimum results. (White, 1972)

The graph shown above depicts the chlorine residual as a function of increasing
chlorine dosage with descriptions of each zone given below (Drawing by Erik
Johnston, adapted from Reynolds and Richards, 1996).
Zone I: Chlorine is reduced to chlorides.
Zone II: Chloramines are formed.
Zone III: Chloramines are broken down and converted to nitrogen gas which
leaves the system (Breakpoint).
Zone IV: Free residual.
Therefore, it is very important to understand the amount and type of chlorine that
must be added to overcome the difficulties in the strength of the disinfectant which
results from the wastewater's characteristics.

Implementation
Water Treatment
The following is a schematic of a water treatment plant (Drawing by Matt Curtis).

In water treatment, pre-chlorination is utilized mainly in situations where the inflow
is taken from a surface water source such as a river, lake, or reservoir. Chlorine is
usually added in the rapid mixing chamber and effectively prevents the majority of
algal growth. Algae are a problem in water treatment plants because they build up on
the filter media and increase the head loss, which means that the filters need to be
backwashed more frequently. In addition, the algal growth on the filter media
causes taste and odor problems in the treated water. (Reynolds & Richards, 1996)
In the picture to the left, a residual monitor checks the chlorine level in the water
leaving the treatment plant. A minimum value is required to prevent regrowth of
bacteria throughout the distribution system, and a maximum value is established to
prevent taste, odor, and health problems (Photo by Matt Curtis).

Post chlorination is almost always done in water treatment, but can be replaced with
chlorine dioxide or chloramines. In this stage chlorine is fed to the drinking water
stream which is then sent to the chlorine contact basin to allow the chlorine a long
enough detention time to kill all viruses, bacteria, and protozoa that were not
removed and rendered inactive in the prior stages of treatment (Photo by Matt
Curtis).
Drinking water requires a large addition of chlorine because there must be a
residual amount of chlorine in the water that will carry through the system until it
reaches the tap of the user. After post chlorination, the water is retained in a clear
well prior to distribution. In the picture to the right, the clear pipe with the floater

designates the height of the water within the clear well. (Reynolds & Richards,
1996)

"line of best fit."


When data is displayed with a scatter plot, it is often useful to
attempt to represent that data with the equation of a straight line for
purposes of predicting values that may not be displayed on the plot.
Such a straight line is called the "line of best fit."
It may also be called a "trend" line.
A line of best fit is a straight line that
best represents the data on a scatter
plot.
This line may pass through some of the points,
none of the points, or all of the points.

Materials for examining line of best fit: graph paper and a


strand of spaghetti

Is there a relationship between the fat grams


and the total calories in fast food?

Sandwich                       Total Fat (g)   Total Calories
Hamburger                      9               260
Cheeseburger                   13              320
Quarter Pounder                21              420
Quarter Pounder with Cheese    30              530
Big Mac                        31              560
Arch Sandwich Special          31              550
Arch Special with Bacon        34              590
Crispy Chicken                 25              500
Fish Fillet                    28              560
Grilled Chicken                20              440
Grilled Chicken Light          (not given)     300

Can we predict the total calories based upon the total fat grams? Let's find out!
(Our assistant, Bibs, helps position the strand of spaghetti.)


1. Prepare a scatter plot of the data.
2. Using a strand of spaghetti, position the spaghetti so that the plotted points
are as close to the strand as possible.
3. Find two points that you think will be on the "best-fit" line.
4. We are choosing the points (9, 260) and (30, 530). You may choose different points.
5. Calculate the slope of the line through your two points, rounded to three decimal places:
   m = (530 - 260) / (30 - 9) = 270/21 = 12.857 (rounded).
6. Write the equation of the line. Using the point (9, 260) and the slope from step 5:
   y = 12.857x + 144.287
7. This equation can now be used to predict information that was not plotted in the
scatter plot.
Question: Predict the total calories based upon 22 grams of fat.

Predicting:
- If you are looking for values that fall within the plotted values, you are interpolating.
- If you are looking for values that fall outside the plotted values, you are
extrapolating. Be careful when extrapolating: the further away from the plotted
values you go, the less reliable your prediction becomes.

ANS: 427.141 calories
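The same two-point calculation can be checked in a few lines of Python; the points (9, 260) and
(30, 530) are the ones chosen in step 4 above:

def line_through(p1, p2):
    # Slope and intercept of the straight line through two points.
    (x1, y1), (x2, y2) = p1, p2
    m = (y2 - y1) / (x2 - x1)
    b = y1 - m * x1
    return m, b

m, b = line_through((9, 260), (30, 530))
print(f"y = {m:.3f}x + {b:.3f}")                    # y = 12.857x + 144.286
print(f"22 g of fat -> {m * 22 + b:.1f} calories")  # about 427.1

(The 427.141 above comes from rounding the slope to 12.857 before substituting.)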

So who has the REAL "line of best fit"?

In step 4 above, we chose two points to form our line of best fit. It is
possible, however, that someone else will choose a different set of
points, and their equation will be slightly different.

Your answer will be considered CORRECT, as long as your
calculations are correct for the two points that you chose. So, if
each answer may be slightly different, which answer is the REAL
"line of best fit"?

Least Squares Best Fit Straight Line Method


The "Least Squares Best Fit Straight Line" method is preferred by most transducer manufacturers
because it provides the closest possible best fit to all data points on the curve, and can be most
readily adapted to computerised calibration systems in common use. Mathematically, it produces
a result of "independent linearity".
The Least Squares Best Fit Straight Line is a statistical method and as such may not be a
"purist's" approach. But provided that the characteristics of the transducers are correctly
optimised at the design and development stage and are represented by a continuous smooth
curve, the assessment is meaningful and accurate.
In practice, 3, 5, 11, or more calibration points are taken over the working range of the
transducer. The measured input and output values at each point are used to provide the data for
each calculation of the slope of the "Least Squares Best Fit Straight Line" using the following
equations.
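As a sketch of the standard closed-form least-squares calculation (not necessarily the exact
presentation a given manufacturer uses), the slope, intercept and r² can be computed from the
sums of x, y, xy, x² and y². The data here are the fat/calorie pairs from the fast-food table
above; the Grilled Chicken Light row is omitted because its fat value is not listed:

import math

def least_squares_fit(xs, ys):
    # Standard closed-form least-squares slope, intercept and r^2.
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    syy = sum(y * y for y in ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    m = (n * sxy - sx * sy) / (n * sxx - sx ** 2)
    b = (sy - m * sx) / n
    r = (n * sxy - sx * sy) / math.sqrt((n * sxx - sx ** 2) * (n * syy - sy ** 2))
    return m, b, r ** 2

fat = [9, 13, 21, 30, 31, 31, 34, 25, 28, 20]
cal = [260, 320, 420, 530, 560, 550, 590, 500, 560, 440]
m, b, r2 = least_squares_fit(fat, cal)
print(f"y = {m:.3f}x + {b:.3f}, r^2 = {r2:.3f}")    # roughly y = 13.2x + 153.6, r^2 about 0.97

r is the correlation coefficient, which measures how strongly the two variables move together;
an r² close to 1 means the fitted line explains most of the variation, which is what makes the
least-squares line the single "real" line of best fit for the data.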

Global Alert and Response (GAR)


WHO guidelines for epidemic preparedness and response to measles outbreaks
WHO Guidelines for Epidemic Preparedness and Response to Measles Outbreaks. Geneva,
Switzerland, May 1999
Part One: The Organism and the Disease
1.1 The Nature and Magnitude of the Problem
1.2 The Organism
1.3 The Disease (Pathogenesis and Clinical Problems)
1.4 Transmission and Immunity
Part Two: Prevention and Control
2.1 Phases of Measles Control
2.2 Measles Control Phase
2.3 Outbreak Prevention Phase
2.4 Measles Elimination Phase
Part Three: Epidemic Control
3.1 Management
3.2 Detection
3.3 Confirmation
3.4 Response

3.4.1 Planning a response


3.4.2 Definition and agreement on response
3.4.3 Management of response
3.4.4 Public information
3.4.5 Predicting further outbreaks

3.4.6 Post-outbreak activities


Annex 1: Case Definitions for Measles
Annex 2: Case Management, Complications and Vitamin A
Annex 3: Measles Vaccine Suppliers to United Nations Agencies
Annex 4: Strategies for the Prevention of Outbreaks
Annex 5: Measles Elimination Strategies
Annex 6: Surveillance and Outbreak Thresholds
Annex 7A: Suspected Measles Case Investigation Form
Annex 7B: Measles Line Listing Form (suspected cases)
Annex 8: Laboratory Diagnostic Methods
Annex 9: Data Analysis and Epidemiological Calculations
Annex 10: Epidemic Response Teams - roles and responsibilities
Annex 11: Prediction of Severe Disease during Outbreaks
Annex 12: Causes of Measles Outbreaks
Annex 13: Supplementary Immunization Activities
Annex 14: Measles in Emergency Situations
Annex 15: Safety of Injections

Human Development Index

Map legend: Very High (55 countries); High (55 countries); Medium (57 countries); Low (47 countries); Not Classified (16 countries)

Human Poverty Index (HPI)

The Human Poverty Index (HPI) was an indication of the standard of living in a country,
developed by the United Nations (UN) to complement the Human Development Index (HDI) and
was first reported as part of the Human Development Report in 1997. It was considered to better
reflect the extent of deprivation in developed countries compared to the HDI. In 2010 it was
supplanted by the UN's Multidimensional Poverty Index.
The HPI concentrates on the deprivation in the three essential elements of human life already
reflected in the HDI: longevity, knowledge and a decent standard of living. The HPI is derived
separately for developing countries (HPI-1) and a group of select high-income OECD countries
(HPI-2) to better reflect socio-economic differences and also the widely different measures of
deprivation in the two groups.

Multidimensional Poverty Index


The Multidimensional Poverty Index (MPI) was developed in 2010 by Oxford Poverty &
Human Development Initiative and the United Nations Development Programme[1] and uses
different factors to determine poverty beyond income-based lists. It replaced the previous Human
Poverty Index.
The MPI is an index of acute multidimensional poverty. It shows the number of people who are
multidimensionally poor (suffering deprivations in at least 33.33% of the weighted indicators) and the
number of deprivations with which poor households typically contend. It reflects deprivations in
very rudimentary services and core human functioning for people across 104 countries. Although
deeply constrained by data limitations, MPI reveals a different pattern of poverty than income
poverty, as it illuminates a different set of deprivations.

Indicators used to calculate the MPI


The following ten indicators are used to calculate the MPI:

Education (each indicator is weighted equally at 1/6)
1. Years of schooling: deprived if no household member has completed five years of schooling
2. Child school attendance: deprived if any school-aged child is not attending school up to class 8

Health (each indicator is weighted equally at 1/6)
3. Child mortality: deprived if any child has died in the family
4. Nutrition: deprived if any adult or child for whom there is nutritional information is malnourished

Standard of Living (each indicator is weighted equally at 1/18)
5. Electricity: deprived if the household has no electricity
6. Sanitation: deprived if the household's sanitation facility is not improved (according to MDG
guidelines), or it is improved but shared with other households
7. Drinking water: deprived if the household does not have access to safe drinking water
(according to MDG guidelines) or safe drinking water is more than a 30-minute walk from home,
round trip
8. Floor: deprived if the household has a dirt, sand or dung floor
9. Cooking fuel: deprived if the household cooks with dung, wood or charcoal
10. Asset ownership: deprived if the household does not own more than one radio, TV, telephone,
bike, motorbike or refrigerator and does not own a car or truck
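With the weights listed above (1/6 for each education and health indicator, 1/18 for each
standard-of-living indicator), a household's weighted deprivation score can be tallied and
compared with the 33.33% cutoff. A minimal Python sketch; the household data are purely
illustrative:

WEIGHTS = {
    "years_of_schooling": 1 / 6,
    "child_school_attendance": 1 / 6,
    "child_mortality": 1 / 6,
    "nutrition": 1 / 6,
    "electricity": 1 / 18,
    "sanitation": 1 / 18,
    "drinking_water": 1 / 18,
    "floor": 1 / 18,
    "cooking_fuel": 1 / 18,
    "assets": 1 / 18,
}

def deprivation_score(deprived_indicators):
    # Sum of the weights of the indicators in which the household is deprived.
    return sum(WEIGHTS[name] for name in deprived_indicators)

household = {"child_mortality", "nutrition", "electricity"}   # illustrative example
score = deprivation_score(household)                          # about 0.389
print(f"score = {score:.3f}, multidimensionally poor: {score >= 1 / 3}")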

Why is the MPI better than the Human Poverty Index (HPI) which was previously
used in the Human Development Reports?
The MPI replaced the HPI, which was published from 1997 to 2009. Pioneering in its
day, the HPI used country averages to reflect aggregate deprivations in health,
education, and standard of living. It could not identify specific individuals,
households or larger groups of people as jointly deprived. The MPI addresses this
shortcoming by capturing how many people experience overlapping deprivations
(prevalence) and how many deprivations they face on average (intensity). The MPI
can be broken down by indicator to show how the composition of multidimensional
poverty changes for different regions, ethnic groups and so on, with useful
implications for policy.

The Human Poverty Index


Starting from these assumptions, the UNDP introduced the Human Poverty Index.

It is a combined measure using the dimensions of human life already considered in the HDI:
life length, knowledge, a decent living standard. The index is calculated annually by the UNDP
for all countries according to the availability of statistical data. It is prepared in two forms,
depending on whether it is a developing (HPI-1) or an industrialised economy (HPI-2).
The human poverty index for developing countries (HPI-1)
The human poverty index for industrialised countries (HPI-2)

The human poverty index for developing countries (HPI-1)


The following three dimensions are taken into account:

deprivation of longevity, measured as the percentage of individuals with a life expectancy
lower than 40 years (P1);
deprivation of knowledge, expressed as the percentage of illiterate adults (P2);
deprivation of decent living standards (P3). This last indicator is made up of the simple
average of three basic variables:
  the percentage of the population without access to drinking water (P31),
  the percentage of the population without access to health services (P32), and lastly,
  the percentage of underweight children aged less than five (P33).

The indicator P3, referring to the living standard, is then obtained as an average of the three
indicators, in this way:
P3 = (P31 + P32 + P33) / 3
The global index HPI-1 is obtained by combining these three dimensions into one single
measure, giving a greater weight to the most disadvantaged situation.
The formula is:

HPI-1 = [(P1^3 + P2^3 + P3^3) / 3]^(1/3)


The human poverty index for industrialised countries (HPI-2)

The human poverty index for industrialised countries uses the same dimensions as the
previous index, but the variables and reference values are different:

deprivation of longevity is measured by the percentage of individuals whose life expectancy
is below 60 years of age (P1);
deprivation of knowledge is based on the percentage of adults functionally illiterate
according to the OECD definition (P2);
deprivation of decent living standards (P3) is the percentage of the population living below
the poverty level, as defined according to the criteria of the International Standard of Poverty
Line, thus being equal to 50% of the per capita average national income.

HPI-2 also considers a fourth dimension, social exclusion, measured by the long-term
unemployment rate (P4), that is to say, the percentage of those unemployed for 12 months or
over compared to the total workforce (the sum of those working and those seeking a job).
The HPI-2 is calculated in a way analogous to that of the HPI-1:

HPI-2 = [(P1^3 + P2^3 + P3^3 + P4^3) / 4]^(1/3)
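Both formulas are the same cube-root-of-mean-cubes aggregation, applied to three components
for HPI-1 and four for HPI-2. A minimal Python sketch with illustrative percentage values (not
real country data):

def hpi(components, alpha=3):
    # Generalised mean of order alpha over the deprivation percentages.
    n = len(components)
    return (sum(p ** alpha for p in components) / n) ** (1 / alpha)

# HPI-1: P1, P2 and P3, where P3 = (P31 + P32 + P33) / 3 (illustrative values)
p1, p2 = 12.0, 25.0
p3 = (20.0 + 15.0 + 10.0) / 3
print(f"HPI-1 = {hpi([p1, p2, p3]):.1f}")

# HPI-2 adds long-term unemployment, P4 (illustrative values)
print(f"HPI-2 = {hpi([9.0, 16.0, 14.0, 6.0]):.1f}")

Because the components are cubed before averaging, the worst deprivation dominates the
result, which is the "greater weight to the most disadvantaged situation" mentioned above.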

Human Poverty Index


The Human Poverty Index (HPI) was first introduced into the Human Development Report by
the United Nations Development Programme (UNDP) in 1997 in an attempt to bring together in
a composite index the different features of deprivation in the quality of life to arrive at an
aggregate judgement on the extent of poverty in a community.
That is, if human development is seen as enlarging choices and expanding freedoms to enjoy a
decent standard of living, freedom, dignity, self-respect and the respect of others, then measures
of poverty should look at the deprivation of these freedoms.
Therefore the HPI looks at deprivations in the three basic dimensions captured in the Human
Development Index: a long and healthy life, as measured by the probability of not surviving past
the age of 40; knowledge, or exclusion from it, as measured by the adult literacy rate; and a
decent standard of living, or lack of essential services, as measured by the percentage of the
population not using an improved water source and the percentage of children underweight for
their age.
There are three indicators of the human poverty index (HPI): [1]

Survival: the likelihood of death at a relatively early age, represented by the probability of
not surviving to age 40 (for HPI-1) or age 60 (for HPI-2).
Knowledge: exclusion from the world of reading and communication, measured by
the percentage of adults who are illiterate.
Decent standard of living: in particular, overall economic provisioning.

Human Development Index (HDI)


The Human Development Index (HDI) is an index published by the United
Nations Development Program. It lists countries in order of human achievement.
According to the United Nations Development Program, the purpose of the report is
to 'stimulate global, regional, and national policy discussions on issues that are
relevant to human development.'

Key Features of Human Development Index (HDI)


The HDI appears in the annual Human Development Report, which is also published by the
United Nations Development Program. The HDI calculates and ranks human achievement by
certain developmental criteria. It uses three dimensions and four indicators to determine human
development in a particular country. The indicators are used to measure the dimensions. The
three dimensions and related indicators can be summarized in the following table:
Dimension          Indicators
Health             Life expectancy at birth
Education          Mean years of schooling & expected years of schooling
Living standards   Gross national income per capita

The Components of the Human Development Index
The important components of the HDI (map colour coding):
Green  - Very High Development
Yellow - High Development
Orange - Medium Development
Red    - Low Development
Grey   - Classification not available

The Human Development Index (HDI) is a summary measure of average
achievement in key dimensions of human development: a long and healthy life,
being knowledgeable and having a decent standard of living. The HDI is the geometric
mean of normalized indices for each of the three dimensions.
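The aggregation step is just a geometric mean. A minimal Python sketch, taking the three
dimension indices as already-normalised values between 0 and 1 (the values below are
illustrative, not official UNDP figures):

def hdi(health_index, education_index, income_index):
    # Geometric mean of the three normalized (0-1) dimension indices.
    return (health_index * education_index * income_index) ** (1 / 3)

print(f"HDI = {hdi(0.85, 0.70, 0.75):.3f}")   # illustrative indices, about 0.764

Using a geometric rather than an arithmetic mean means that a very low score in one dimension
cannot be fully offset by high scores in the others.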

Defining Extreme Poverty

Income Based Definition
Common Criticism/Alternatives
Current Trends
Getting to Zero
Exacerbating Factors
International Conferences
  Millennium Summit
  2005 World Summit
  Post-2015 Development Agenda
  UN LDC Conferences
Organizations Working to End Extreme Poverty
  International Organizations
    o World Bank
    o UN
  Bilateral Organizations
    o USAID
    o DfID
  Non-Governmental Movements
    o NGOs
    o Campaigns

Absolute poverty
Absolute poverty refers to a set standard which is consistent over time and between countries.
First introduced in 1990, the "dollar a day" poverty line measured absolute poverty by the
standards of the world's poorest countries. The World Bank defined the new international
poverty line as $1.25 a day for 2005 (equivalent to $1.00 a day in 1996 US prices); the lines
have since been updated to $1.25 and $2.50 per day. Absolute poverty, extreme poverty, or abject poverty is "a
condition characterized by severe deprivation of basic human needs, including food, safe
drinking water, sanitation facilities, health, shelter, education and information. It depends not
only on income but also on access to services." The term 'absolute poverty', when used in this
fashion, is usually synonymous with 'extreme poverty'.
Extreme poverty, or absolute poverty, was originally defined by the United Nations in 1995 as
a condition characterized by severe deprivation of basic human needs, including food, safe

drinking water, sanitation facilities, health, shelter, education and information. It depends not
only on income but also on access to services.

Relative poverty

Relative poverty views poverty as socially defined and dependent on social context,
hence relative poverty is a measure of income inequality. Usually, relative poverty is
measured as the percentage of population with income less than some fixed
proportion of median income. There are several other different income inequality
metrics, for example the Gini coefficient or the Theil Index.
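Measured this way, the relative poverty rate is simply the share of people whose income falls
below a chosen fraction of the median. A minimal Python sketch, assuming a 60%-of-median
threshold (the exact fraction is a convention, not fixed by the text) and illustrative incomes:

import statistics

def relative_poverty_rate(incomes, fraction=0.6):
    # Share of the population with income below `fraction` of the median income.
    threshold = fraction * statistics.median(incomes)
    return sum(1 for x in incomes if x < threshold) / len(incomes)

incomes = [4000, 9000, 12000, 15000, 18000, 22000, 30000, 45000]   # illustrative
print(f"relative poverty rate: {relative_poverty_rate(incomes):.0%}")   # 25% here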

Poverty reduction measures
Poverty reduction

Increasing the supply of basic needs
  o Food and other goods
  o Health care and education
  o Removing constraints on government services
  o Reversing brain drain
  o Controlling overpopulation
Increasing personal income
  o Income grants
  o Economic freedoms
  o Financial services
  o Cultural factors to productivity

Electronic waste

Electronic waste or e-waste describes discarded electrical or electronic devices. Used electronics
which are destined for reuse, resale, salvage, recycling or disposal are also considered as e-waste.
Informal processing of electronic waste in developing countries may cause serious health and
pollution problems, as these countries have limited regulatory oversight of e-waste processing.
Electronic scrap components, such as CRTs, may contain contaminants such as lead, cadmium,
beryllium, or brominated flame retardants. Even in developed countries recycling and disposal of
e-waste may involve significant risk to workers and communities and great care must be taken to
avoid unsafe exposure in recycling operations and leaking of materials such as heavy metals
from landfills and incinerator ashes. Scrap industry and U.S. EPA officials agree that materials
should be managed with caution[1]
"Electronic waste" may be defined as discarded computers, office electronic equipment,
entertainment device electronics, mobile phones, television sets, and refrigerators. This includes
used electronics which are destined for reuse, resale, salvage, recycling, or disposal. Others consider
re-usables (working and repairable electronics) and secondary scrap (copper, steel, plastic, etc.)
to be "commodities", and reserve the term "waste" for residue or material which is dumped by
the buyer rather than recycled, including residue from reuse and recycling operations. Because
loads of surplus electronics are frequently commingled (good, recyclable, and non-recyclable),
several public policy advocates apply the term "e-waste" broadly to all surplus electronics.
Cathode ray tubes (CRTs) are considered one of the hardest types to recycle.[2]
CRTs have relatively high concentration of lead and phosphors (not to be confused with
phosphorus), both of which are necessary for the display. The United States Environmental
Protection Agency (EPA) includes discarded CRT monitors in its category of "hazardous
household waste"[3] but considers CRTs that have been set aside for testing to be commodities if
they are not discarded, speculatively accumulated, or left unprotected from weather and other
damage.
The EU and its member states operate a system via the European Waste Catalogue (EWC), a
European Council Directive, which is interpreted into member state law. In the UK (an EU
member state), this takes the form of the List of Wastes Directive. However, the list (and EWC)
gives broad definition (EWC Code 16 02 13*) of Hazardous Electronic wastes, requiring "waste
operators" to employ the Hazardous Waste Regulations (Annex 1A, Annex 1B) for refined
definition. Constituent materials in the waste also require assessment via the combination of
Annex II and Annex III, again allowing operators to further determine whether a waste is
hazardous.[4]

Debate continues over the distinction between "commodity" and "waste" electronics definitions.
Some exporters are accused of deliberately leaving difficult-to-recycle, obsolete, or non-repairable equipment mixed in loads of working equipment (though this may also come through
ignorance, or to avoid more costly treatment processes). Protectionists may broaden the
definition of "waste" electronics in order to protect domestic markets from working secondary
equipment.
The high value of the computer recycling subset of electronic waste (working and reusable
laptops, desktops, and components like RAM) can help pay the cost of transportation for a larger
number of worthless pieces than can be achieved with display devices, which have less (or
negative) scrap value. A 2011 report, "Ghana E-Waste Country Assessment",[5] found that of
215,000 tons of electronics imported to Ghana, 30% were brand new and 70% were used. Of the
used product, the study concluded that 15% was not reused and was scrapped or discarded. This
contrasts with published but uncredited claims that 80% of the imports into Ghana were being
burned in primitive conditions.

Amount of Electronic waste world-wide


Rapid changes in technology, changes in media (tapes, software, MP3), falling prices, and
planned obsolescence have resulted in a fast-growing surplus of electronic waste around the
globe. Dave Kruch, CEO of Cash For Laptops, regards electronic waste as a "rapidly expanding"
issue.[6] Technical solutions are available, but in most cases a legal framework, a collection,
logistics, and other services need to be implemented before a technical solution can be applied.
Display units (CRT, LCD, LED monitors), processors (CPU, GPU, or APU chips), memory
(DRAM or SRAM), and audio components have different useful lives. Processors are most
frequently outdated (by software no longer being optimized) and are more likely to become "e-waste", while display units are most often replaced while working without repair attempts, due to
changes in wealthy nation appetites for new display technology.
An estimated 50 million tons of E-waste are produced each year.[1] The USA discards 30 million
computers each year and 100 million phones are disposed of in Europe each year. The
Environmental Protection Agency estimates that only 15-20% of e-waste is recycled; the rest of
these electronics goes directly into landfills and incinerators.[7][8]
According to a report by UNEP titled, "Recycling - from E-Waste to Resources," the amount of
e-waste being produced - including mobile phones and computers - could rise by as much as 500
percent over the next decade in some countries, such as India.[9] The United States is the world
leader in producing electronic waste, tossing away about 3 million tons each year.[10] China
already produces about 2.3 million tons (2010 estimate) domestically, second only to the United
States. And, despite having banned e-waste imports, China remains a major e-waste dumping
ground for developed countries.[10]
Electrical waste contains hazardous but also valuable and scarce materials. Up to 60 elements
can be found in complex electronics.

In the United States, an estimated 70% of heavy metals in landfills comes from discarded
electronics.[11][12]
While there is agreement that the number of discarded electronic devices is increasing, there is
considerable disagreement about the relative risk (compared to automobile scrap, for example),
and strong disagreement whether curtailing trade in used electronics will improve conditions, or
make them worse. According to an article in Motherboard, attempts to restrict the trade have
driven reputable companies out of the supply chain, with unintended consequences.[13]

Global trade issues


Electronic waste is often exported to developing countries.
One theory is that increased regulation of electronic waste and concern over the environmental
harm in mature economies creates an economic disincentive to remove residues prior to export.
Critics of trade in used electronics maintain that it is still too easy for brokers calling themselves
recyclers to export unscreened electronic waste to developing countries, such as China,[14] India
and parts of Africa, thus avoiding the expense of removing items like bad cathode ray tubes (the
processing of which is expensive and difficult). The developing countries have become toxic
dump yards of e-waste. Proponents of international trade point to the success of fair trade
programs in other industries, where cooperation has led to creation of sustainable jobs, and can
bring affordable technology in countries where repair and reuse rates are higher.
Defenders of the trade[who?] in used electronics say that extraction of metals from virgin mining
has been shifted to developing countries. Recycling of copper, silver, gold, and other materials
from discarded electronic devices is considered better for the environment than mining. They
also state that repair and reuse of computers and televisions has become a "lost art" in wealthier
nations, and that refurbishing has traditionally been a path to development.
South Korea, Taiwan, and southern China all excelled in finding "retained value" in used goods,
and in some cases have set up billion-dollar industries in refurbishing used ink cartridges, single-use cameras, and working CRTs. Refurbishing has traditionally been a threat to established
manufacturing, and simple protectionism explains some criticism of the trade. Works like "The
Waste Makers" by Vance Packard explain some of the criticism of exports of working product,
for example the ban on import of tested working Pentium 4 laptops to China, or the bans on
export of used surplus working electronics by Japan.
Opponents of surplus electronics exports argue that lower environmental and labor standards,
cheap labor, and the relatively high value of recovered raw materials leads to a transfer of
pollution-generating activities, such as smelting of copper wire. In China, Malaysia, India,
Kenya, and various African countries, electronic waste is being sent to these countries for
processing, sometimes illegally. Many surplus laptops are routed to developing nations as
"dumping grounds for e-waste".[6]
Because the United States has not ratified the Basel Convention or its Ban Amendment, and has
few domestic federal laws forbidding the export of toxic waste, the Basel Action Network

estimates that about 80% of the electronic waste directed to recycling in the U.S. does not get
recycled there at all, but is put on container ships and sent to countries such as China.[15][16][17][18]
This figure is disputed as an exaggeration by the EPA, the Institute of Scrap Recycling
Industries, and the World Reuse, Repair and Recycling Association.
Independent research by Arizona State University showed that 87-88% of imported used
computers did not have a higher value than the best value of the constituent materials they
contained, and that "the official trade in end-of-life computers is thus driven by reuse as opposed
to recycling".[19]

Electronic Waste Dump of the World: Guiyu, China


The E-waste centre of Agbogbloshie, Ghana, where electronic waste is burnt and disassembled
with no safety or environmental considerations.
Guiyu in the Shantou region of China is a huge electronic waste processing area.[15][20][21] It is
often referred to as the e-waste capital of the world. The city employs over 150,000 e-waste
workers who work through 16-hour days disassembling old computers and recapturing whatever
metals and parts they can reuse or sell. The thousands of individual workshops employ laborers
to snip cables, pry chips from circuit boards, grind plastic computer cases into particles, and dip
circuit boards in acid baths to dissolve the lead, cadmium, and other toxic metals. Others work to
strip insulation from all wiring in an attempt to salvage tiny amounts of copper wire.[22]
Uncontrolled burning, disassembly, and disposal causes a variety of environmental problems
such as groundwater contamination, atmospheric pollution, or even water pollution either by
immediate discharge or due to surface runoff (especially near coastal areas), as well as health
problems including occupational safety and health effects among those directly and indirectly
involved, due to the methods of processing the waste.
Only limited investigations have been carried out on the health effects of Guiyu's poisoned
environment. One of them was carried out by Professor Huo Xia, of the Shantou University
Medical College, which is an hour and a half's drive from Guiyu. She tested 165 children for
concentrations of lead in their blood. 82% of the Guiyu children had blood/lead levels of more
than 100. Anything above that figure is considered unsafe by international health experts. The
average reading for the group was 149.[23]
High levels of lead in young children's blood can impact IQ and the development of the central
nervous system. The highest concentrations of lead were found in the children of parents whose
workshop dealt with circuit boards and the lowest was among those who recycled plastic.[23]
Six of the many villages in Guiyu specialize in circuit-board disassembly, seven in plastics and
metals reprocessing, and two in wire and cable disassembly. About a year ago the environmental
group Greenpeace sampled dust, soil, river sediment and groundwater in Guiyu where e-waste
recycling is done. They found soaring levels of toxic heavy metals and organic contaminants in
both places.[24] Lai Yun, a campaigner for the group found "over 10 poisonous metals, such as
lead, mercury and cadmium, in Guiyu town."

Guiyu is only one example of digital dumps; similar places can be found across the world,
such as in Asia and Africa. With amounts of e-waste growing rapidly each year, urgent solutions are
required. While the waste continues to flow into digital dumps like Guiyu, there are measures that
can help reduce the flow of e-waste.[23]
A preventative step that major electronics firms should take is to remove the worst chemicals in
their products in order to make them safer and easier to recycle. It is important that all companies
take full responsibility for their products and, once they reach the end of their useful life, take
their goods back for re-use or safely recycle them.

Trade
Proponents of the trade say growth of internet access correlates more strongly with the trade than
poverty does. Haiti is poor and closer to the port of New York than southeast Asia, but far more
electronic waste is exported from New York to Asia than to Haiti. Thousands of men, women,
and children are employed in reuse, refurbishing, repair, and remanufacturing, unsustainable
industries in decline in developed countries. Denying developing nations access to used
electronics may deny them sustainable employment, affordable products, and internet access, or
force them to deal with even less scrupulous suppliers. In a series of seven articles for The
Atlantic, Shanghai-based reporter Adam Minter describes many of these computer repair and
scrap separation activities as objectively sustainable.[25]
Opponents of the trade argue that developing countries utilize methods that are more harmful and
more wasteful. An expedient and prevalent method is simply to toss equipment onto an open fire,
in order to melt plastics and to burn away non-valuable metals. This releases carcinogens and
neurotoxins into the air, contributing to an acrid, lingering smog. These noxious fumes include
dioxins and furans.[26] Bonfire refuse can be disposed of quickly into drainage ditches or
waterways feeding the ocean or local water supplies.[18][27]
In June 2008, a container of electronic waste, destined from the Port of Oakland in the U.S. to
Sanshui District in mainland China, was intercepted in Hong Kong by Greenpeace.[28] Concern
over exports of electronic waste were raised in press reports in India,[29][30] Ghana,[31][32][33] Côte
d'Ivoire,[34] and Nigeria.[35]

Environmental Impact of Electronic Waste


Old keyboards
The processes of dismantling and disposing of electronic waste in the third world lead to a
number of environmental impacts as illustrated in the graphic. Liquid and atmospheric releases
end up in bodies of water, groundwater, soil, and air and therefore in land and sea animals both
domesticated and wild, in crops eaten by both animals and human, and in drinking water.[36]
One study of environmental effects in Guiyu, China found the following:

Airborne dioxins: one type found at 100 times the levels previously measured

Levels of carcinogens in duck ponds and rice paddies exceeded international standards
for agricultural areas, and cadmium, copper, nickel, and lead levels in rice paddies were
above international standards

Heavy metals found in road dust: lead over 300 times that of a control village's road
dust, and copper over 100 times[37]

The environmental impact of the processing of different electronic waste components


E-Waste Component: Cathode ray tubes (used in TVs, computer monitors, ATMs, video cameras, and more)
Process Used: Breaking and removal of yoke, then dumping
Potential Environmental Hazard: Lead, barium and other heavy metals leaching into the ground water and release of toxic phosphor

E-Waste Component: Printed circuit board (a thin plate on which chips and other electronic components are placed)
Process Used: De-soldering and removal of computer chips; open burning and acid baths to remove final metals after chips are removed
Potential Environmental Hazard: Air emissions as well as discharge into rivers of glass dust, tin, lead, brominated dioxin, beryllium, cadmium, and mercury

E-Waste Component: Chips and other gold-plated components
Process Used: Chemical stripping using nitric and hydrochloric acid and burning of chips
Potential Environmental Hazard: Hydrocarbons, heavy metals and brominated substances discharged directly into rivers, acidifying fish and flora; tin and lead contamination of surface and groundwater; air emissions of brominated dioxins, heavy metals and hydrocarbons

E-Waste Component: Plastics from printers, keyboards, monitors, etc.
Process Used: Shredding and low-temperature melting to be reused
Potential Environmental Hazard: Emissions of brominated dioxins, heavy metals and hydrocarbons

E-Waste Component: Computer wires
Process Used: Open burning and stripping to remove copper
Potential Environmental Hazard: Hydrocarbon ashes released into air, water and soil

Information security
E-waste presents a potential security threat to individuals and exporting countries. Hard drives
that are not properly erased before the computer is disposed of can be reopened, exposing
sensitive information. Credit card numbers, private financial data, account information, and
records of online transactions can be accessed by most willing individuals. Organized criminals
in Ghana commonly search the drives for information to use in local scams.[39]
Government contracts have been discovered on hard drives found in Agbogbloshie. Multi-million-dollar agreements from United States security institutions such as the Defense
Intelligence Agency (DIA), the Transportation Security Administration and Homeland Security
have all resurfaced in Agbogbloshie.[39][40]

E-waste management

1. Recycling
Computer monitors are typically packed into low stacks on wooden pallets for recycling and then
shrink-wrapped.[26]
See also: Computer recycling
Today the electronic waste recycling business is in all areas of the developed world a large and
rapidly consolidating business. People tend to forget that properly disposing of or reusing
electronics can help prevent health problems, create jobs, and reduce greenhouse-gas emissions.[41]
Part of this evolution has involved greater diversion of electronic waste from energy-intensive
downcycling processes (e.g., conventional recycling), where equipment is reverted to a raw
material form. This recycling is done by sorting, dismantling, and recovery of valuable materials.[42]
This diversion is achieved through reuse and refurbishing. The environmental and social
benefits of reuse include diminished demand for new products and virgin raw materials (with
their own environmental issues); larger quantities of pure water and electricity for associated
manufacturing; less packaging per unit; availability of technology to wider swaths of society due
to greater affordability of products; and diminished use of landfills.
Audiovisual components, televisions, VCRs, stereo equipment, mobile phones, other handheld
devices, and computer components contain valuable elements and substances suitable for
reclamation, including lead, copper, and gold.
One of the major challenges is recycling the printed circuit boards from electronic waste. The circuit boards contain such precious metals as gold, silver, platinum, etc., and such base metals as copper, iron, aluminum, etc. One way e-waste is processed is by melting circuit boards, burning cable sheathing to recover copper wire and open-pit acid leaching for separating metals of value.[43] The conventional method employed is mechanical shredding and separation, but the recycling efficiency is low. Alternative methods such as cryogenic decomposition have been studied for printed circuit board recycling,[44] and some other methods are still under investigation.

Consumer awareness efforts

The U.S. Environmental Protection Agency encourages electronic recyclers to become certified by demonstrating to an accredited, independent third-party auditor that they meet specific standards to safely recycle and manage electronics. This works to ensure the highest environmental standards are being maintained. Two certifications for electronic recyclers currently exist and are endorsed by the EPA, and customers are encouraged to choose certified electronics recyclers. Responsible electronics recycling reduces environmental and human health impacts, increases the use of reusable and refurbished equipment, and reduces energy use while conserving limited resources. The two EPA-endorsed certification programs are Responsible Recyclers Practices (R2) and e-Stewards. Certified companies ensure they are meeting strict environmental standards which maximize reuse and recycling, minimize exposure to human health or the environment, ensure safe management of materials and require destruction of all data used on electronics. Certified electronics recyclers have demonstrated through audits and other means that they continually meet specific high environmental standards and safely manage used electronics. Once certified, the recycler is held to the particular standard by continual oversight by the independent accredited certifying body. A certification accreditation board accredits and oversees certifying bodies to ensure that they meet specific responsibilities and are competent to audit and provide certification. EPA supports and will continue to push for continuous improvement of electronics recycling practices and standards.[45]

e-Cycle, LLC: e-Cycle, LLC is the first mobile buyback and recycling company in the
world to be e-Stewards, R2 and ISO 14001 certified. They work with the largest
organizations in the world, including 16 of the Fortune 20 and 356 of the Fortune 500, to
raise awareness on the global e-waste crisis.[46]

Best Buy: Best Buy accepts electronic items for recycling, even if they were not purchased at Best Buy. For a full list of acceptable items and locations, visit Best Buy's Recycling information page.[47]

Staples: Staples also accepts electronic items for recycling at no additional cost. They
also accept ink and printer toner cartridges. For a full list of acceptable items and
locations, visit the Staples Recycling information page.[48]

In the US, the Consumer Electronics Association (CEA) urges consumers to dispose properly of end-of-life electronics through its recycling locator at www.GreenerGadgets.org. This list only includes manufacturer and retailer programs that use the strictest standards and third-party certified recycling locations, to provide consumers assurance that their products will be recycled safely and responsibly. CEA research has found that 58 percent of consumers know where to take their end-of-life electronics, and the electronics industry would very much like to see that level of awareness increase. Consumer electronics manufacturers and retailers sponsor or operate more than 5,000 recycling locations nationwide and have vowed to recycle one billion pounds annually by 2016,[49] a sharp increase from the 300 million pounds the industry recycled in 2010.

The Sustainable Materials Management Electronic Challenge was created by the United States Environmental Protection Agency (EPA). Participants in the Challenge are manufacturers of electronics and electronic retailers. These companies collect end-of-life (EOL) electronics at various locations and send them to a certified, third-party recycler. Program participants are then able to publicly promote and report 100% responsible recycling for their companies.[50]

AddressTheMess.com is a Comedy Central pro-social campaign that seeks to increase awareness of the dangers of electronic waste and to encourage recycling. Partners in the effort include Earth911.com, ECOInternational.com, and the U.S. Environmental Protection Agency. Many Comedy Central viewers are early adopters of new electronics, and produce a commensurate amount of waste that can be directed towards recycling efforts. The station is also taking steps to reduce its own environmental impact, in partnership with NativeEnergy.com, a company that specializes in renewable energy and carbon offsets.

The Electronics TakeBack Coalition[51] is a campaign aimed at protecting human health and limiting environmental effects where electronics are being produced, used, and discarded. The ETBC aims to place responsibility for disposal of technology products on electronic manufacturers and brand owners, primarily through community promotions and legal enforcement initiatives. It provides recommendations for consumer recycling and a list of recyclers judged environmentally responsible.[52]

The Certified Electronics Recycler program[53] for electronic recyclers is a comprehensive, integrated management system standard that incorporates key operational and continual improvement elements for quality, environmental and health and safety (QEH&S) performance.

The grassroots Silicon Valley Toxics Coalition (svtc.org) focuses on promoting human
health and addresses environmental justice problems resulting from toxins in
technologies.

Basel Action Network (BAN.org) is uniquely focused on addressing global environmental injustices resulting from the global toxic trade. It works for human rights and the environment by preventing the disproportionate, large-scale dumping of hazardous waste on developing countries. Today, BAN is not only the leading global source of information and advocacy on toxic trade and international hazardous waste treaties, but it has also developed market-based solutions that rely on the highest standards for globally responsible recycling and rigorous accredited and independent certification to those standards.

Texas Campaign for the Environment (texasenvironment.org) works to build grassroots support for e-waste recycling and uses community organizing to pressure electronics manufacturers and elected officials to enact producer takeback recycling policies and commit to responsible recycling programs.

The World Reuse, Repair, and Recycling Association (wr3a.org) is an organization dedicated to improving the quality of exported electronics, encouraging better recycling standards in importing countries, and improving practices through "Fair Trade" principles.

Take Back My TV[54] is a project of The Electronics TakeBack Coalition and grades
television manufacturers to find out which are responsible and which are not.

The e-Waste Association of South Africa (eWASA)[55] has been instrumental in building a
network of e-waste recyclers and refurbishers in the country. It continues to drive the
sustainable, environmentally sound management of all e-waste in South Africa.

E-Cycling Central is a website from the Electronic Industry Alliance that allows you to search for electronic recycling programs in your state. It lists recyclers by state, so you can find reuse, recycling, or donation programs across the country.[56]

Ewaste.guide.info is a Switzerland-based website dedicated to improving the e-waste situation in developing and transitioning countries. The site contains news, events, case studies, and more.[57]

StEP (Solving the E-Waste Problem): this website of StEP, an initiative founded by various UN organizations to develop strategies to solve the e-waste problem, follows its activities and programs.[42][58]

Processing techniques
Recycling the lead from batteries.
In many developed countries, electronic waste processing usually first involves dismantling the
equipment into various parts (metal frames, power supplies, circuit boards, plastics), often by
hand, but increasingly by automated shredding equipment. A typical example is the NADIN electronic waste processing plant in Novi Iskar, Bulgaria, the largest facility of its kind in Eastern Europe.[59][60] The advantage of this process is the human ability to recognize and save working and repairable parts, including chips, transistors, RAM, etc. The disadvantage is that the labor is cheapest in countries with the lowest health and safety standards.
In an alternative bulk system,[61] a hopper conveys material for shredding into an unsophisticated
mechanical separator, with screening and granulating machines to separate constituent metal and
plastic fractions, which are sold to smelters or plastics recyclers. Such recycling machinery is
enclosed and employs a dust collection system. Some of the emissions are caught by scrubbers
and screens. Magnets, eddy currents, and trommel screens are employed to separate glass,
plastic, and ferrous and nonferrous metals, which can then be further separated at a smelter.
Leaded glass from CRTs is reused in car batteries, ammunition, and lead wheel weights,[26] or
sold to foundries as a fluxing agent in processing raw lead ore. Copper, gold, palladium, silver
and tin are valuable metals sold to smelters for recycling. Hazardous smoke and gases are
captured, contained and treated to mitigate environmental threat. These methods allow for safe
reclamation of all valuable computer construction materials.[18] Hewlett-Packard product
recycling solutions manager Renee St. Denis describes its process as: "We move them through
giant shredders about 30 feet tall and it shreds everything into pieces about the size of a quarter.
Once your disk drive is shredded into pieces about this big, it's hard to get the data off".[62]
An ideal electronic waste recycling plant combines dismantling for component recovery with
increased cost-effective processing of bulk electronic waste.
Reuse is an alternative option to recycling because it extends the lifespan of a device. Devices
still need eventual recycling, but by allowing others to purchase used electronics, recycling can
be postponed and value gained from device use.

Benefits of recycling
Recycling raw materials from end-of-life electronics is the most effective solution to the growing
e-waste problem. Most electronic devices contain a variety of materials, including metals that
can be recovered for future uses. By dismantling and providing reuse possibilities, intact natural
resources are conserved and air and water pollution caused by hazardous disposal is avoided.
Additionally, recycling reduces the amount of greenhouse gas emissions caused by the
manufacturing of new products.[63]
Benefits of recycling are extended when responsible recycling methods are used. In the U.S.,
responsible recycling aims to minimize the dangers to human health and the environment that
disposed and dismantled electronics can create. Responsible recycling ensures best management
practices of the electronics being recycled, worker health and safety, and consideration for the
environment locally and abroad.[64]

Electronic waste substances


Several sizes of button and coin cells, with two 9 V batteries for size comparison. They are all recycled in many countries, since they contain lead, mercury and cadmium.
Some computer components can be reused in assembling new computer products, while others
are reduced to metals that can be reused in applications as varied as construction, flatware, and
jewelry.[62]
Substances found in large quantities include epoxy resins, fiberglass, PCBs, PVC (polyvinyl
chlorides), thermosetting plastics, lead, tin, copper, silicon, beryllium, carbon, iron and
aluminium.
Elements found in small amounts include cadmium, mercury, and thallium.[65]
Elements found in trace amounts include americium, antimony, arsenic, barium, bismuth, boron,
cobalt, europium, gallium, germanium, gold, indium, lithium, manganese, nickel, niobium,
palladium, platinum, rhodium, ruthenium, selenium, silver, tantalum, terbium, thorium, titanium,
vanadium, and yttrium.
Almost all electronics contain lead and tin (as solder) and copper (as wire and printed circuit
board tracks), though the use of lead-free solder is now spreading rapidly. The following are
ordinary applications:

Hazardous
Recyclers in the street in São Paulo, Brazil with old computers
Americium: The radioactive source in smoke alarms. It is known to be carcinogenic.
Mercury: Found in fluorescent tubes (numerous applications), tilt switches (mechanical
doorbells, thermostats),[66] and flat screen monitors. Health effects include sensory
impairment, dermatitis, memory loss, and muscle weakness. Exposure in-utero causes

fetal deficits in motor function, attention and verbal domains.[67] Environmental effects in
animals include death, reduced fertility, and slower growth and development.

Sulphur: Found in lead-acid batteries. Health effects include liver damage, kidney
damage, heart damage, eye and throat irritation. When released into the environment, it
can create sulphuric acid.

BFRs: Used as flame retardants in plastics in most electronics. Includes PBBs, PBDE, DecaBDE, OctaBDE, PentaBDE. Health effects include impaired development of the nervous system, thyroid problems and liver problems. Environmental effects in animals are similar to the effects in humans. PBBs were banned between 1973 and 1977; PCBs were banned during the 1980s.

Cadmium: Found in light-sensitive resistors, corrosion-resistant alloys for marine and


aviation environments, and nickel-cadmium batteries. The most common form of
cadmium is found in Nickel-cadmium rechargeable batteries. These batteries tend to
contain between 6 and 18% cadmium. The sale of Nickel-Cadmium batteries has been
banned in the European Union except for medical use. When not properly recycled it can
leach into the soil, harming microorganisms and disrupting the soil ecosystem. Exposure
is caused by proximity to hazardous waste sites and factories and workers in the metal
refining industry. The inhalation of cadmium can cause severe damage to the lungs and is
also known to cause kidney damage.[68] Cadmium is also associated with deficits in
cognition, learning, behavior, and neuromotor skills in children.[67]

Lead: Solder, CRT monitor glass, lead-acid batteries, some formulations of PVC.[69] A
typical 15-inch cathode ray tube may contain 1.5 pounds of lead,[3] but other CRTs have
been estimated as having up to 8 pounds of lead.[26] Adverse effects of lead exposure
include impaired cognitive function, behavioral disturbances, attention deficits,
hyperactivity, conduct problems and lower IQ[67]

Beryllium oxide: Filler in some thermal interface materials such as thermal grease used
on heatsinks for CPUs and power transistors,[70] magnetrons, X-ray-transparent ceramic
windows, heat transfer fins in vacuum tubes, and gas lasers.

Perfluorooctanoic acid (PFOA): Found in Non-stick cookware (PTFE), used as an


antistatic additive in industrial applications, and found in electronics. PFOAs are formed
synthetically through environmental degradation and, in mice, after oral uptake. Studies
in mice have found the following health effects: Hepatotoxicity, developmental toxicity,
immunotoxicity, hormonal effects and carcinogenic effects. Studies have found increased
maternal PFOA levels to be associated with an increased risk of spontaneous abortion
(miscarriage) and stillbirth. Increased maternal levels of PFOA are also associated with
decreases in mean gestational age (preterm birth), mean birth weight (low birth weight),
mean birth length (small for gestational age), and mean APGAR score.[71]

Hexavalent chromium: A known carcinogen after occupational inhalation exposure.[67]

There is also evidence of cytotoxic and genotoxic effects of some chemicals, which have been shown to inhibit cell proliferation, cause cell membrane lesions, cause DNA single-strand breaks, and elevate Reactive Oxygen Species (ROS) levels.[72]

DNA breaks can increase the likelihood of developing cancer (if the damage is to a tumor suppressor gene).
Unrepaired DNA damage is a particular problem in non-dividing or slowly dividing cells, where it tends to accumulate over time. In rapidly dividing cells, on the other hand, unrepaired DNA damage that does not kill the cell by blocking replication tends to cause replication errors and thus mutation.
Elevated Reactive Oxygen Species (ROS) levels can cause damage to cell structures (oxidative stress).[72]

Generally non-hazardous
An iMac G4 that has been repurposed into a lamp (photographed next to a Mac Classic and a flip
phone).
Aluminium: nearly all electronic goods using more than a few watts of power (heatsinks),
electrolytic capacitors.
Copper: copper wire, printed circuit board tracks, component leads.

Germanium: 1950s-1960s transistorized electronics (bipolar junction transistors).

Gold: connector plating, primarily in computer equipment.

Iron: steel chassis, cases, and fixings.

Lithium: lithium-ion batteries.

Nickel: nickel-cadmium batteries.

Silicon: glass, transistors, ICs, printed circuit boards.

Tin: solder, coatings on component leads.

Zinc: plating for steel parts.

Computer Recycling
Digger gold

eDay

Electronic waste in Japan

Green computing

Mobile phone recycling

Material safety data sheet

Polychlorinated biphenyls

Retrocomputing

Policy and conventions:

Basel Action Network (BAN)


Basel Convention

China RoHS

e-Stewards

Restriction of Hazardous Substances Directive (RoHS)

Soesterberg Principles

Sustainable Electronics Initiative (SEI)

Waste Electrical and Electronic Equipment Directive

Organizations
Asset Disposal and Information Security Alliance (ADISA)[73]
Empa

IFixit

Institute of Scrap Recycling Industries (ISRI)

Solving the E-waste Problem

World Reuse, Repair and Recycling Association

General:

Retail hazardous waste


Waste

Correlation
Screening curves
Meta analysis

In statistics, a meta-analysis refers to methods that focus on contrasting and combining results
from different studies, in the hope of identifying patterns among study results, sources of
disagreement among those results, or other interesting relationships that may come to light in the
context of multiple studies.[1] In its simplest form, meta-analysis is normally done by
identification of a common measure of effect size. A weighted average of that common measure
is the output of a meta-analysis. The weighting is related to sample sizes within the individual
studies. More generally there are other differences between the studies that need to be allowed
for, but the general aim of a meta-analysis is to more powerfully estimate the true effect size, as opposed to a less precise effect size derived in a single study under a given single set of assumptions and conditions.
A meta-analysis therefore gives a thorough summary of several studies that have been done on
the same topic, and provides the reader with extensive information on whether an effect exists
and what size that effect has.
Meta analysis can be thought of as "conducting research about research."
Meta-analyses are often, but not always, important components of a systematic
review procedure. For instance, a meta-analysis may be conducted on several clinical trials of a
medical treatment, in an effort to obtain a better understanding of how well the treatment works.
Here it is convenient to follow the terminology used by the Cochrane Collaboration,[2] and use
"meta-analysis" to refer to statistical methods of combining evidence, leaving other aspects of
'research synthesis' or 'evidence synthesis', such as combining information from qualitative
studies, for the more general context of systematic reviews.
Meta-analysis forms part of a framework called estimation statistics which relies on effect
sizes, confidence intervals and precision planning to guide data analysis, and is an alternative
to null hypothesis significance testing.

Advantages of meta-analysis

Conceptually, a meta-analysis uses a statistical approach to combine the results from multiple studies in an effort to increase power (over individual studies), improve estimates of the size of the effect and/or resolve uncertainty when reports disagree. Basically, it produces a weighted average of the included study results (a short numeric sketch of this weighting follows the list below), and this approach has several advantages:

Results can be generalized to a larger population.

The precision and accuracy of estimates can be improved as more data is used; this, in turn, may increase the statistical power to detect an effect.

Inconsistency of results across studies can be quantified and analyzed. For instance, does inconsistency arise from sampling error, or are study results (partially) influenced by between-study heterogeneity?

Hypothesis testing can be applied to summary estimates.

Moderators can be included to explain variation between studies.

The presence of publication bias can be investigated.
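As a rough illustration of the weighted-average idea described above, the following Python sketch pools three hypothetical study estimates using inverse-variance (fixed-effect) weights. The effect sizes and standard errors are invented for illustration; they are not taken from any study discussed in this text.

    # Minimal fixed-effect (inverse-variance) pooling sketch; illustrative numbers only.
    import math

    effects = [0.30, 0.45, 0.10]        # per-study effect estimates (e.g., mean differences)
    std_errors = [0.12, 0.20, 0.08]     # per-study standard errors

    weights = [1 / se**2 for se in std_errors]          # inverse-variance weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))

    print(f"pooled effect = {pooled:.3f}")
    print(f"95% CI = ({pooled - 1.96 * pooled_se:.3f}, {pooled + 1.96 * pooled_se:.3f})")

Note how the most precise study (smallest standard error) receives the largest weight, which is exactly the property a simple unweighted average lacks.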

Pitfalls
A meta-analysis of several small studies does not predict the results of a single
large study.[9] Some have argued that a weakness of the method is that sources of
bias are not controlled by the method: a good meta-analysis of badly designed
studies will still result in bad statistics. [10] This would mean that only
methodologically sound studies should be included in a meta-analysis, a practice
called 'best evidence synthesis'.[10]
Other meta-analysts would include weaker studies, and add a study-level predictor
variable that reflects the methodological quality of the studies to examine the effect
of study quality on the effect size.[11] However, others have argued that a better
approach is to preserve information about the variance in the study sample, casting
as wide a net as possible, and that methodological selection criteria introduce
unwanted subjectivity, defeating the purpose of the approach. [12]
Steps of a meta-analysis

1. Formulation of the problem

2. Search of the literature

3. Selection of studies ('incorporation criteria'):

Based on quality criteria, e.g. the requirement of randomization and blinding in a clinical trial

Selection of specific studies on a well-specified subject, e.g. the treatment of breast cancer

Decide whether unpublished studies are included to avoid publication bias (file drawer problem)

4. Decide which dependent variables or summary measures are allowed. For instance:

Differences (discrete data)

Means (continuous data)

Hedges' g is a popular summary measure for continuous data that is standardized in order to eliminate scale differences, but it incorporates an index of variation between groups:

g = (x̄_t - x̄_c) / s

in which x̄_t is the treatment mean, x̄_c is the control mean, and s is the pooled standard deviation (the square root of the pooled variance). (A short worked sketch of this calculation follows this list.)

5. Selection of a meta-regression statistical model, e.g. simple regression, fixed-effect meta-regression or random-effect meta-regression. Meta-regression is a tool used in meta-analysis to examine the impact of moderator variables on study effect size using regression-based techniques. Meta-regression is more effective at this task than are standard regression techniques.
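To make step 4 concrete, here is a minimal Python sketch of the standardized mean difference just defined. The group means, standard deviations and sample sizes are made up for illustration, and the small-sample correction applied at the end is the usual Hedges adjustment rather than anything specific to the sources cited here.

    # Standardized mean difference (Hedges' g style) for one study; illustrative data only.
    import math

    mean_t, sd_t, n_t = 24.0, 6.0, 40   # treatment group mean, SD and size (invented)
    mean_c, sd_c, n_c = 20.0, 5.5, 38   # control group mean, SD and size (invented)

    # pooled standard deviation from the two group variances
    s_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))

    g = (mean_t - mean_c) / s_pooled                 # standardized mean difference
    j = 1 - 3 / (4 * (n_t + n_c) - 9)                # Hedges' small-sample correction factor
    print(f"g = {g:.3f}, bias-corrected g = {j * g:.3f}")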
Meta-analysis combines the results of several studies.
What is meta-analysis?
Meta-analysis is the use of statistical methods to combine results of individual studies. This allows us to make the best use of all the information we have gathered in our systematic review by increasing the power of the analysis. By statistically combining the results of similar studies we can improve the precision of our estimates of treatment effect, and assess whether treatment effects are similar in similar situations. The decision about whether or not the results of individual studies are similar enough to be combined in a meta-analysis is essential to the validity of the result, and will be covered in the next module on heterogeneity. In this module we will look at the process of combining studies and outline the various methods available.
There are many approaches to meta-analysis. We have discussed already
that meta-analysis is not simply a matter of adding up numbers of
participants across studies (although unfortunately some non-Cochrane
reviews do this). This is the 'pooling participants' or 'treat-as-one-trial'
method and we will discuss it in a little more detail now.
Pooling participants (not a valid approach to meta-analysis).
This method effectively considers the participants in all the studies as if
they were part of one big study. Suppose the studies are randomised
controlled trials: we could look at everyone who received the
experimental intervention by adding up the experimental group events
and sample sizes and compare them with everyone who received the
control intervention. This is a tempting way to 'pool results', but let's
demonstrate how it can produce the wrong answer.
A Cochrane review of trials of daycare for pre-school children included
the following two trials. For this example we will focus on the outcome
of whether a child was retained in the same class after a period in either
a daycare treatment group or a non-daycare control group. In the first
trial (Gray 1970), the risk difference is -0.16, so daycare looks
promising:
Gray 1970        Retained    Total    Risk     Risk difference
Daycare             19         36     0.528        -0.16
Control             13         19     0.684

In the second trial (Schweinhart 1993) the absolute risk of being retained in the same class is considerably lower, but the risk difference, while small, still lies on the side of a benefit of daycare:

Schweinhart 1993     Retained    Total    Risk     Risk difference
Daycare                  6         58     0.103        -0.004
Control                  7         65     0.108

What would happen if we pooled all the children as if they were part of a single trial?

Pooled results       Retained    Total    Risk     Risk difference
Daycare                 25         94     0.266        +0.03   WRONG!
Control                 20         84     0.238

(We don't add up patients across trials, and we don't use simple averages to calculate a meta-analysis.)

It suddenly looks as if daycare may be harmful: the risk difference is now bigger than 0. This is called Simpson's paradox (or bias), and it is why we don't pool participants directly across studies. The first rule of meta-analysis is to keep participants within each study grouped together, so as to preserve the effects of randomisation and compare like with like. Therefore, we must take the comparison of risks within each of the two trials and somehow combine these. In practice, this means we need to calculate a single measure of treatment effect from each study before contemplating meta-analysis. For example, for a dichotomous outcome (like being retained in the same class) we calculate a risk ratio, the risk difference or the odds ratio for each study separately, then pool these estimates of effect across the studies. (A short calculation sketch reproducing this example follows this paragraph.)
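The following Python sketch simply reproduces the counts from the two daycare tables above, computes the risk difference within each trial, and then shows what happens when participants are naively pooled across trials.

    # Simpson's paradox demonstration using the daycare counts given above.
    trials = {
        "Gray 1970":        {"daycare": (19, 36), "control": (13, 19)},
        "Schweinhart 1993": {"daycare": (6, 58),  "control": (7, 65)},
    }

    def risk(events, total):
        return events / total

    for name, arms in trials.items():
        rd = risk(*arms["daycare"]) - risk(*arms["control"])
        print(f"{name}: risk difference = {rd:+.3f}")      # both come out negative (favours daycare)

    # naive "treat-as-one-trial" pooling of participants
    d_events = sum(a["daycare"][0] for a in trials.values())
    d_total  = sum(a["daycare"][1] for a in trials.values())
    c_events = sum(a["control"][0] for a in trials.values())
    c_total  = sum(a["control"][1] for a in trials.values())
    pooled_rd = risk(d_events, d_total) - risk(c_events, c_total)
    print(f"pooled-participants risk difference = {pooled_rd:+.3f}")   # about +0.03: wrong direction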
Simple average of treatment effects (not used in Cochrane reviews)
If we obtain a treatment effect separately from each study, what do we do with them in the meta-analysis? How about taking the average? The average of the risk differences in the two trials above is (-0.004 - 0.16) / 2 = -0.082. This may seem fair at first, but the second trial randomised more than twice as many children as the first, yet a simple average gives both trials the same influence, so each randomised child in the larger trial counts for less. It is not uncommon for a meta-analysis to contain trials of vastly different sizes. To give each one the same influence cannot be reasonable, so we need a better method than a simple average: a weighted one, as in the sketch below.
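As one hedged illustration of such a weighted method, the sketch below combines the two risk differences using inverse-variance (fixed-effect) weights, a standard choice closely related to the weighting schemes used in Cochrane reviews; it is not presented as the exact method of any particular review.

    # Inverse-variance weighting of the two daycare risk differences (illustrative sketch).
    studies = [
        ("Gray 1970",        19, 36, 13, 19),
        ("Schweinhart 1993",  6, 58,  7, 65),
    ]

    def rd_and_var(events_t, n_t, events_c, n_c):
        pt, pc = events_t / n_t, events_c / n_c
        rd = pt - pc
        var = pt * (1 - pt) / n_t + pc * (1 - pc) / n_c   # variance of the risk difference
        return rd, var

    num, den = 0.0, 0.0
    for name, et, nt, ec, nc in studies:
        rd, var = rd_and_var(et, nt, ec, nc)
        weight = 1 / var                                   # inverse-variance weight
        num += weight * rd
        den += weight

    print(f"weighted (fixed-effect) risk difference = {num / den:+.3f}")

The larger, more precise trial dominates the weighting, so the pooled value sits much closer to its -0.004 than the unweighted average of -0.082 would suggest.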
Definition:
What is a meta-analysis?
A meta-analysis is a type of research study in which the researcher compiles
numerous previously published studies on a particular research question and reanalyzes the results to find the general trend for results across the studies.
A meta-analysis is a useful tool because it can help overcome the problem of small
sample sizes in the original studies, and can help identify trends in an area of the
research literature that may not be evident by merely reading the published studies.
Graphs
Economic growth

Definition of 'Economic Growth'

An increase in the capacity of an economy to produce goods and services, compared from one period of time to another. Economic growth can be measured in nominal terms, which include inflation, or in real terms, which are adjusted for inflation. For comparing one country's economic growth to another, GDP or GNP per capita should be used, as these take into account population differences between countries.

Increase in a country's productive capacity, as measured by comparing gross national product (GNP) in a year with the GNP in the previous year.
Increases in the capital stock, advances in technology, and improvement in the quality and level of literacy are considered to be the principal causes of economic growth. In recent years, the idea of sustainable development has brought in additional factors, such as environmentally sound processes, that must be taken into account in growing an economy.

Economic growth is the increase in the market value of the goods and services produced by
an economy over time. It is conventionally measured as the percent rate of increase in real gross
domestic product, or real GDP.[1] Of more importance is the growth of the ratio of GDP to
population (GDP per capita), which is also called per capita income. An increase in per capita
income is referred to as intensive growth. GDP growth caused only by increases in population or
territory is called extensive growth.[2]
Growth is usually calculated in real terms, i.e., inflation-adjusted terms, to eliminate the distorting effect of inflation on the prices of goods produced. In economics, "economic growth" or "economic growth theory" typically refers to growth of potential output, i.e., production at "full employment".
As an area of study, economic growth is generally distinguished from development economics. The former is primarily the study of how countries can advance their economies. The latter is the study of the economic aspects of the development process in low-income countries. See also Economic development.
Since economic growth is measured as the annual percent change of gross domestic product (GDP), it has all the advantages and drawbacks of that measure. For example, GDP only measures the market economy, which tends to overstate growth during the changeover from a farming economy with household production.[3] An adjustment was made for food grown on and consumed on farms, but no correction was made for other household production. Also, there is no allowance in GDP calculations for the depletion of natural resources.
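As a small sketch of the real-versus-nominal distinction described above, the snippet below deflates a made-up nominal GDP series with a made-up price index; the figures are purely illustrative, not actual national statistics.

    # Converting nominal GDP growth to real (inflation-adjusted) growth; illustrative numbers.
    nominal_gdp = {"year1": 1000.0, "year2": 1080.0}   # GDP at current prices
    deflator    = {"year1": 100.0,  "year2": 104.0}    # price index, year1 = 100

    real_gdp = {y: nominal_gdp[y] / (deflator[y] / 100) for y in nominal_gdp}

    nominal_growth = nominal_gdp["year2"] / nominal_gdp["year1"] - 1
    real_growth    = real_gdp["year2"] / real_gdp["year1"] - 1
    print(f"nominal growth = {nominal_growth:.1%}, real growth = {real_growth:.1%}")
    # 8.0% nominal growth with 4% inflation leaves roughly 3.8% real growth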

Pros
Quality of life

Cons
Resource depletion
Environmental impact
Global warming

Inflation graphs

Growth rate decreases vs inflation increases?

Inflation and Economic Growth

David Henderson explains:
The idea that an increase in economic growth leads to an increase in inflation, and that decreased growth reduces inflation, is reflected endlessly in the media. On April 28, for example, AP writer Rajesh Mahapatra claimed that high economic growth of more than 8.5% annually in India since 2003 has spurred demand and caused prices to rise. This makes no sense.
All other things being equal, an increase in economic growth must cause inflation to drop, and a reduction in growth must cause inflation to rise. In his congressional testimony yesterday, Federal Reserve chairman Ben Bernanke thankfully did not state that the higher economic growth he expects will lead to higher inflation. Although he didn't connect growth and inflation at all, Mr. Bernanke has long understood that higher growth leads to lower inflation.
Here's why. Inflation, as the old saying goes, is caused by too much money chasing too few goods. Just as more money means higher prices, fewer goods also mean higher prices. The connection between the level of production and the level of prices also holds for the rate of change of production (that is, the rate of economic growth) and the rate of change of prices (that is, the inflation rate).
Some simple arithmetic will clarify. Start with the famous equation of exchange, MV = Py, where M is the money supply; V is the velocity of money, that is, the speed at which money circulates; P is the price level; and y is the real output of the economy (real GDP). A version of this equation, incidentally, was on the license plate of the late economist Milton Friedman, who made a large part of his academic reputation by reviving, and giving evidence for, the role of money growth in causing inflation.
If the growth rate of real GDP increases and the growth rates of M and V are held constant, the growth rate of the price level must fall. But the growth rate of the price level is just another term for the inflation rate; therefore, inflation must fall. An increase in the rate of economic growth means more goods for money to chase, which puts downward pressure on the inflation rate. If, for example, the money supply grows at 7% a year, velocity is constant and annual economic growth is 3%, inflation must be 4% (more exactly, 3.9%). If, however, economic growth rises to 4%, inflation falls to 3% (actually, 2.9%). The short calculation sketch below reproduces this arithmetic.
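The following few lines of Python simply restate the MV = Py arithmetic from the passage above in growth-rate form, using the same illustrative rates (7% money growth, constant velocity, 3% or 4% real growth).

    # Inflation implied by the equation of exchange MV = Py, in growth-rate form.
    def implied_inflation(money_growth, velocity_growth, real_growth):
        """Exact inflation rate implied by MV = Py for the given growth rates."""
        return (1 + money_growth) * (1 + velocity_growth) / (1 + real_growth) - 1

    print(f"{implied_inflation(0.07, 0.0, 0.03):.1%}")   # about 3.9% when real growth is 3%
    print(f"{implied_inflation(0.07, 0.0, 0.04):.1%}")   # about 2.9% when real growth rises to 4%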

The April numbers for the index of industrial production (IIP), released on Thursday, brought some cheer on the growth front. The IIP grew by 3.4 per cent, its highest in a long time. April, of course, was a month in which the entire country was deep in electioneering; therefore, some sort of stimulus from all the campaign spending might have been reasonable to expect. The biggest beneficiary of this was the category of "electrical machinery", which grew by over 66 per cent year on year, reflecting all those campaign rallies, with their generators and audio equipment. The other significant contributor to the growth in the overall index was electricity, which grew by almost 12 per cent year on year, significantly higher than its growth during 2013-14. Typically, a growth acceleration that relies heavily on one or two sectoral surges does not have much staying power. It would require an across-the-board show of resurgence to allow people to conclude that a sustainable recovery was under way. That is clearly not happening yet. However, these numbers do reinforce the perception that things are not getting worse as far as growth is concerned.

Likewise, there was some room for relief on the inflation front. The consumer price
index, or CPI, numbers for May 2014 showed headline inflation declining slightly,
from 8.6 per cent in April to 8.3 per cent in May. The Central Statistical Office is now
separately reporting a sub-index labelled consumer food price index, or CFPI, which
provides some convenience to observers. The index itself, though, offers little cheer.
It came down modestly between April and May, largely explaining the decline in the
headline rate, but is still significantly above nine per cent. At a time when there are
concerns about the performance of the monsoon and the impact of that on food
prices, these numbers should be a major cause of worry for the government. Milk,
eggs, fish and meat, vegetables and fruit contributed to the persistence of food
inflation. But cereals are also kicking in, as they have been for the past couple of
years, and the government must use its large stocks of rice and wheat quickly to
dampen at least this source of food inflation. It would be unconscionable not to do
so when risks of a resurgence of inflation are high. The larger point on inflation,
though, is how stubborn the rate is despite sluggish growth and high interest rates.
The limitations of monetary policy are being repeatedly underscored.
Against this backdrop, the government's prioritisation of its fight against inflation is
an extremely important development. It has to move quickly from intent to action
on a variety of reforms, from procurement policy to subsidies and to investment in
rural infrastructure. Many of these will generate benefits only over the medium
term. So those expecting a growth stimulus from the Reserve Bank of India any time
soon are bound to be in for a disappointment. Even so, room for optimism should
come from the fact that this government does have the capacity to design and
execute long-term strategies with complete credibility. The simple equation that it
needs to keep in mind is that inflation will not subside unless food prices moderate
and growth will not recover unless inflation subsides.

Which study design is good

The Best Study Design For Dummies

When I had those tired looks again, my mother-in-law recommended coenzyme Q, which research had proven to have wondrous effects on tiredness. Indeed, many sites and magazines advocate this natural energy-producing nutrient which mobilizes your mitochondria for cellular energy! Another time she asked me if I thought komkommerslank (cucumber pills for slimming) would work to lose some extra weight. She took my NO for granted.
It is often difficult to explain to people that not all research is equally good, and that outcomes are not always equally significant (both statistically and clinically). It is even more difficult to understand levels of evidence and why we should even care. Pharmaceutical industries (especially the supplement-selling ones) take advantage of this ignorance and are very successful in selling their stories and pills.

If properly conducted, the Randomized Controlled Trial (RCT) is the best study-design to
examine the clinical efficacy of health interventions. An RCT is an experimental study where
individuals who are similar at the beginning are randomly allocated to two or more treatment
groups and the outcomes of the groups are compared after sufficient follow-up time. However an
RCT may not always be feasible, because it may not be ethical or desirable to randomize people
or to expose them to certain interventions.
Observational studies provide weaker empirical evidence, because the allocation of factors is not under the control of the investigator, but just happens or is chosen (e.g. smoking). Of the observational studies, cohort studies provide stronger evidence than case-control studies, because in cohort studies factors are measured before the outcome, whereas in case-control studies factors are measured after the outcome.
Most people find such a description of study types and levels of evidence too theoretical and not appealing.
Last year I was challenged to talk about how doctors search for medical information (central theme = Google) for, and here it comes, the Society of History and ICT.
To explain to the audience why it is important for clinicians to find the best evidence, and how methodological filters can be used to sift through the overwhelming amount of information in, for instance, PubMed, I had to introduce RCTs and the levels of evidence. To explain it to them I used an example that struck me when I first read about it.
I showed them the following slide:

And clarified: Beta-carotene is a vitamin in carrots and many other vegetables, but you can also buy it in pure form as pills. There is reason to believe that beta-carotene might help to prevent lung cancer in cigarette smokers. How do you think you can find out whether beta-carotene will have this effect?

Suppose you have two neighbors, both heavy smokers of the same age, both males. The neighbor who doesn't eat many vegetables gets lung cancer, but the neighbor who eats a lot of vegetables and is fond of carrots doesn't. Do you think this provides good evidence that beta-carotene prevents lung cancer?
There is laughter in the room, so they don't believe in n=1 experiments/case reports. (Still, how many people think smoking does not necessarily do any harm because their chain-smoking father reached his nineties in good health?)
I show them the following slide with the lowest box only.

O.k. What about this study? I've a group of lung cancer patients, who smoke(d) heavily. I ask them to fill in a questionnaire about their eating habits in the past and take a blood sample, and I do the same with a similar group of smokers without cancer (controls). Analysis shows that smokers developing lung cancer eat much less beta-carotene-containing vegetables and have lower blood levels of beta-carotene than the smokers not developing cancer. Does this mean that beta-carotene is preventing lung cancer?
Humming in the audience, till one man says: perhaps some people don't remember exactly what they eat, and then several people object that it is just an association and you do not yet know whether beta-carotene really causes this. Right! I show the box for case-control (patient-control) studies.

Then consider this study design. I follow a large cohort of healthy heavy smokers and look at their eating habits (including use of supplements) and take regular blood samples. After a long follow-up some heavy smokers develop lung cancer whereas others don't. Now it turns out that the group that did not develop lung cancer had significantly more beta-carotene in their blood and ate larger amounts of beta-carotene-containing food. What do you think about that then?
Now the room is a bit quiet; there is some hesitation. Then someone says: well, it is more convincing, and finally the chair says: but it may still not be the carrots, but something else in their food, or they may just have other healthy living habits (including eating carrots). Cohort study appears on the slide. (What a perfect audience!)

O.k., you're not convinced that these study designs give conclusive evidence. How could we then establish that beta-carotene lowers the risk of lung cancer in heavy smokers? Suppose you really wanted to know: how do you set up such a study?
Grinning. Someone says: by giving half of the smokers beta-carotene and the other half nothing. Or a placebo, someone else says. Right! Randomized Controlled Trial is on top of the slide. And there is not much room left for another box, so we are there. I only add that the best way to do it is to do it double-blinded.

Then I reveal that all this research has really been done. There have been numerous observational studies (case-control as well as cohort studies) showing a consistent negative correlation between the intake of beta-carotene and the development of lung cancer in heavy smokers. The same has been shown for vitamin E.
Knowing that, I asked the public: Would you as a heavy smoker participate in a trial where you are randomly assigned to one of the following groups: 1. beta-carotene, 2. vitamin E, 3. both or 4. neither vitamin (placebo)?
The recruitment fails. Some people say they don't believe in supplements, others say that it would be far more effective if smokers quit smoking (laughter). Just 2 individuals said they would at least consider it. But they thought there was a snag in it, and they were right. Such studies have been done, and did not give the expected positive results.
In the first large RCT (appr. 30,000 male smokers!), the ATBC Cancer Prevention Study, beta-carotene actually increased the incidence of lung cancer by 18 percent and overall mortality by 8 percent (although the harmful effects faded after men stopped taking the pills). Similar results were obtained in the CARET study, but not in a third RCT, the Physicians' Health Trial, the only difference being that the latter trial was performed with both smokers and non-smokers.
It is now generally thought that cigarette smoke causes beta-carotene to break down into detrimental products, a process that can be halted by other anti-oxidants (normally present in food). Whether vitamins act positively (anti-oxidant) or negatively (pro-oxidant) depends very much on the dose and the situation, and on whether there is a shortage of such supplements or not.
I found that this way of explaining study designs to well-educated laymen was very effective and fun!
The take-home message is that no matter how reproducibly the observational studies seem to indicate a certain effect, better evidence is obtained by randomized controlled trials. It also shows that scientists should be very prudent about translating observational findings directly into a particular lifestyle advice.
On the other hand, I wonder whether all hypotheses have to be tested in a costly RCT (the costs for the ATBC trial were $46 million). Shouldn't there be very solid grounds to start a prevention study with dietary supplements in healthy individuals? Aren't there any dangers? Personally I think we should be very restrictive about these chemopreventive studies. Till now most chemopreventive studies have not met the high expectations, anyway.
And what about coenzyme Q and komkommerslank? Besides that I do not expect the evidence to be convincing, tiredness can obviously best be combated by rest, and I already eat enough cucumbers. ;)
To be continued

Ecological studies are studies of risk-modifying factors on health or other outcomes based on
populations defined either geographically or temporally. Both risk-modifying factors and
outcomes are averaged for the populations in each geographical or temporal unit and then
compared using standard statistical methods.
Ecological studies have often found links between risk-modifying factors and health outcomes
well in advance of other epidemiological or laboratory approaches. Several examples are given
here.
The study by John Snow regarding a cholera outbreak in London is considered the first
ecological study to solve a health issue. He used a map of deaths from cholera to determine that
the source of the cholera was a pump on Broad Street. He had the pump handle removed in 1854
and people stopped dying there [Newsom, 2006]. It was only when Robert Koch discovered
bacteria years later that the mechanism of cholera transmission was understood.[1]
Dietary risk factors for cancer have also been studied using both geographical and temporal
ecological studies. Multi-country ecological studies of cancer incidence and mortality rates with
respect to national diets have shown that some dietary factors such as animal products (meat,
milk, fish and eggs), added sweeteners/sugar, and some fats appear to be risk factors for many
types of cancer, while cereals/grains and vegetable products as a whole appear to be risk
reduction factors for many types of cancer.[2][3] Temporal changes in Japan in the types of cancer
common in Western developed countries have been linked to the nutrition transition to the
Western diet.[4]
An important advancement in the understanding of risk-modifying factors for cancer was made
by examining maps of cancer mortality rates. The map of colon cancer mortality rates in the
United States was used by the brothers Cedric and Frank C. Garland to propose the hypothesis
that solar ultraviolet B (UVB) radiation, through vitamin D production, reduced the risk of
cancer (the UVB-vitamin D-cancer hypothesis).[5] Since then many ecological studies have been
performed relating the reduction of incidence or mortality rates of over 20 types of cancer to
lower solar UVB doses.[6]
Links between diet and Alzheimer's disease have been studied using both geographical and temporal ecological studies. The first paper linking diet to risk of Alzheimer's disease was a multicountry ecological study published in 1997.[7] It used the prevalence of Alzheimer's disease in 11 countries along with dietary supply factors, finding that total fat and total energy (caloric) supply were strongly correlated with prevalence, while fish and cereals/grains were inversely correlated (i.e., protective). Diet is now considered an important risk-modifying factor for Alzheimer's disease.[8] Recently it was reported that the rapid rise of Alzheimer's disease in Japan between 1985 and 2007 was likely due to the nutrition transition from the traditional Japanese diet to the Western diet.[9]
Another example of the use of temporal ecological studies relates to influenza. John Cannell and
associates hypothesized that the seasonality of influenza was largely driven by seasonal
variations in solar UVB doses and calcidiol levels.[10] A randomized controlled trial involving
Japanese school children found that taking 1000 IU per day vitamin D3 reduced the risk of type
A influenza by two-thirds.[11]
Ecological studies are particularly useful for generating hypotheses since they can use existing
data sets and rapidly test the hypothesis. The advantages of the ecological studies include the
large number of people that can be included in the study and the large number of risk-modifying
factors that can be examined.
The term ecological fallacy means that the findings for the groups may not apply to individuals
in the group. However, this term also applies to observational studies and randomized controlled
trials. All epidemiological studies include some people who have health outcomes related to the
risk-modifying factors studied and some who do not. For example, genetic differences affect how
people respond to pharmaceutical drugs. Thus, concern about the ecological fallacy should not be
used to disparage ecological studies. The more important consideration is that ecological studies
should include as many known risk-modifying factors for any outcome as possible, adding others
if warranted. Then the results should be evaluated by other methods, using, for example, Hill's criteria for causality in a biological system.

The ecological fallacy may occur when conclusions about individuals are drawn from
analyses conducted on grouped data. The nature of this type of analysis tends to
overestimate the degree of association between variables.

Survival rate
Life table

In actuarial science and demography, a life table (also called a mortality table or actuarial table) is a table which shows, for each age, what the probability is that a person of that age will die before his or her next birthday ("probability of death"). From this starting point, a number of inferences can be derived:

the probability of surviving any particular year of age

remaining life expectancy for people at different ages

Life tables are also used extensively in biology and epidemiology. The concept is
also of importance in product life cycle management.
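As a rough sketch of how such a table is used, the Python snippet below starts from a short set of made-up one-year death probabilities q_x (illustrative values only, not real mortality data), builds the survivorship column l_x, and derives a five-year survival probability and the approximate years lived over that interval.

    # Tiny actuarial life-table sketch; the q_x values are invented for illustration.
    qx = {70: 0.020, 71: 0.022, 72: 0.025, 73: 0.028, 74: 0.031}

    lx = {70: 100000.0}                   # survivors alive at exact age 70 (the radix)
    for age in sorted(qx):
        lx[age + 1] = lx[age] * (1 - qx[age])

    # probability that a 70-year-old survives to age 75
    print(f"P(survive 70 -> 75) = {lx[75] / lx[70]:.3f}")

    # expected years lived between 70 and 75 (trapezoid approximation)
    person_years = sum((lx[a] + lx[a + 1]) / 2 for a in sorted(qx))
    print(f"expected years lived between 70 and 75 = {person_years / lx[70]:.2f}")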

[Chart based on Table 1 data: curves of the probability that a person of a given age (20 to 90 years) will live at least a further 5, 10, 15, 20 or 25 years.]
These curves show the probability that someone who has reached a given age will live at least a given number of additional years, and they can be used to discuss annuity issues from the boomer viewpoint, where an increase in group size will have major effects.
For those in the age range covered by the chart, the "5 yr" curve indicates the group that will reach beyond the life expectancy. This curve represents the need for support that covers longevity requirements.
The "20 yr" and "25 yr" curves indicate the continuing diminishing of the life expectancy value as age increases. The differences between the curves are very pronounced starting around the age of 50 to 55 and ought to be used for planning based upon expectation models.
The "10 yr" and "15 yr" curves can be thought of as the trajectory that is followed by the life expectancy curve related to those along the median, which indicates that the age of 90 is not out of the question.

A "life table" is a kind of bookkeeping system that ecologists often use to keep
track of stage-specific mortality in the populations they study.

It is an especially

useful approach in entomology where developmental stages are discrete and


mortality rates may vary widely from one life stage to another.

From a pest

management standpoint, it is very useful to know when (and why) a pest population
suffers high mortality -- this is usually the time when it is most vulnerable.

By

managing the natural environment to maximize this vulnerability, pest populations


can often be suppressed without any other control methods.
To create a life table, an ecologist follows the life history of many individuals in a population, keeping track of how many offspring each female produces, when each one dies, and what caused its death. After amassing data from different populations, different years, and different environmental conditions, the ecologist summarizes this data by calculating average mortality within each developmental stage.
For example, in a hypothetical insect population, an average female will lay 200 eggs before she dies. Half of these eggs (on average) will be consumed by predators, 90% of the larvae will die from parasitization, and three-fifths of the pupae will freeze to death in the winter. (These numbers are averages, but they are based on a large database of observations.) A life table can be created from the above data. Start with a cohort of 200 eggs (the progeny of Mrs. Average Female). This number represents the maximum biotic potential of the species (i.e. the greatest number of offspring that could be produced in one generation under ideal conditions). The first line of the life table lists the main cause(s) of death, the number dying, and the percent mortality during the egg stage. In this example, an average of only 100 individuals survive the egg stage and become larvae.
The second line of the table lists the mortality experience of these 100 larvae: only
10 of them survive to become pupae (90% mortality of the larvae). The third
line of the table lists the mortality experience of the 10 pupae -- three-fifths die of
freezing. This leaves only 4 individuals alive in the adult stage to reproduce. If we
assume a 1:1 sex ratio, then there are 2 males and 2 females to start the next
generation.
If there is no mortality of these females, they will each lay an average of 200 eggs
to start the next generation. Thus there are two females in the cohort to replace
the one original female -- this population is DOUBLING in size each generation!!
In ecology, the symbol "R" (capital R) is known as the replacement rate. It is a
way to measure the change in reproductive capacity from generation to
generation. The value of "R" is simply the number of reproductive daughters that
each female produces over her lifetime:

Number of daughters
R = ------------------------------Number of mothers

If the value of "R" is less than 1, the population is decreasing -- if this


situation persists for any length of time the population becomes extinct.
If the value of "R" is greater than 1, the population is increasing -- if this
situation persists for any length of time the population will grow beyond
the environment's carrying capacity. (Uncontrolled population growth is
usually a sign of a disturbed habitat, an introduced species, or some
other type of human intervention.)
If the value of "R" is equal to 1, the population is stable -- most natural
populations are very close to this value.
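The sketch below simply encodes the hypothetical life table described above (200 eggs, 50% egg predation, 90% larval parasitization, three-fifths of pupae freezing) and computes the replacement rate R; the stage mortalities are the ones given in the example, not new data.

    # Building the hypothetical insect life table from the example and computing R.
    stages = [
        # (stage, main cause of death, fraction dying during the stage)
        ("egg",   "predation",      0.5),
        ("larva", "parasitization", 0.9),
        ("pupa",  "freezing",       0.6),   # three-fifths freeze in winter
    ]

    alive = 200                              # cohort: the progeny of one average female
    print(f"{'stage':<7}{'entering':>9}{'dying':>7}{'% mortality':>13}")
    for stage, cause, mortality in stages:
        dying = round(alive * mortality)
        print(f"{stage:<7}{alive:>9}{dying:>7}{mortality:>12.0%}  ({cause})")
        alive -= dying

    adults = alive                           # 4 adults reach the reproductive stage
    daughters = adults / 2                   # assuming a 1:1 sex ratio
    R = daughters / 1                        # daughters produced per original mother
    print(f"adults = {adults}, R = {R}")     # R = 2.0, so the population doubles each generation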

Practice Problem:
A typical female of the bubble gum maggot (Bubblicious blowhardi Meyer) lays 250 eggs. On average, 32 of these eggs are infertile and 64 are killed by parasites. Of the survivors, 64 die as larvae due to habitat destruction (gum is cleared away by the janitorial staff) and 87 die as pupae because the gum gets too hard. Construct a life table for this species and calculate a value for "R", the replacement rate (assume a 1:1 sex ratio). Is this population increasing, decreasing, or remaining stable?


How to compare life table, survival rate.

Relative Risk
Y-Y analysis
Forest Graph

A forest plot (or blobbogram[1]) is a graphical display designed to illustrate the relative strength of treatment effects in multiple quantitative scientific studies addressing the same question. It was developed for use in medical research as a means of graphically representing a meta-analysis of the results of randomized controlled trials. In the last twenty years, similar meta-analytical techniques have been applied in observational studies (e.g. environmental epidemiology), and forest plots are often used in presenting the results of such studies also.
Although forest plots can take several forms, they are commonly presented with two columns.
The left-hand column lists the names of the studies (frequently randomized controlled
trials or epidemiological studies), commonly in chronological order from the top downwards.
The right-hand column is a plot of the measure of effect (e.g. an odds ratio) for each of these
studies (often represented by a square) incorporating confidence intervals represented by
horizontal lines. The graph may be plotted on a natural logarithmic scale when using odds ratios
or other ratio-based effect measures, so that the confidence intervals are symmetrical about the
means from each study and to ensure undue emphasis is not given to odds ratios greater than 1
when compared to those less than 1. The area of each square is proportional to the study's weight
in the meta-analysis. The overall meta-analysed measure of effect is often represented on the plot
as a dashed vertical line. This meta-analysed measure of effect is commonly plotted as a
diamond, the lateral points of which indicate confidence intervals for this estimate.
A vertical line representing no effect is also plotted. If the confidence interval for an individual
study overlaps with this line, it demonstrates that, at the given level of confidence, that study's
effect size does not differ from no effect. The same applies to the meta-analysed measure of
effect: if the points of the diamond overlap the line of no effect, the overall meta-analysed result
cannot be said to differ from no effect at the given level of confidence.
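As a hedged illustration of the layout just described, the following Python sketch (using matplotlib; all study names, odds ratios, confidence intervals and weights are invented for illustration) draws per-study squares sized by weight with horizontal confidence-interval lines, a vertical line of no effect at an odds ratio of 1, a dashed line at the pooled estimate, and a diamond whose lateral points mark the pooled confidence interval, on a logarithmic axis.

# Minimal forest-plot sketch with matplotlib; all numbers are invented
# for illustration and do not come from the source text.
import numpy as np
import matplotlib.pyplot as plt

studies = ["Study A", "Study B", "Study C", "Study D"]
odds_ratios = np.array([0.70, 0.85, 1.10, 0.60])
ci_low = np.array([0.50, 0.60, 0.80, 0.35])
ci_high = np.array([0.98, 1.20, 1.50, 1.05])
weights = np.array([0.30, 0.25, 0.25, 0.20])        # meta-analysis weights
pooled, pooled_low, pooled_high = 0.80, 0.68, 0.94   # pooled estimate and CI

y = np.arange(len(studies))[::-1]                    # first study at the top
fig, ax = plt.subplots(figsize=(6, 4))

# Per-study confidence-interval whiskers and squares (area ~ weight).
ax.errorbar(odds_ratios, y,
            xerr=[odds_ratios - ci_low, ci_high - odds_ratios],
            fmt="none", ecolor="black", capsize=3)
ax.scatter(odds_ratios, y, marker="s", s=600 * weights, color="black")

# Line of no effect and dashed line at the pooled estimate.
ax.axvline(1.0, color="grey")
ax.axvline(pooled, color="grey", linestyle="--")

# Diamond for the pooled result; its lateral points mark the pooled CI.
ax.fill([pooled_low, pooled, pooled_high, pooled],
        [-1, -0.7, -1, -1.3], color="black")

ax.set_xscale("log")
ax.set_yticks(list(y) + [-1])
ax.set_yticklabels(studies + ["Pooled"])
ax.set_xlabel("Odds ratio (log scale)")
plt.tight_layout()
plt.show()

Plotting on the logarithmic scale keeps each confidence interval symmetrical about its study's estimate, as noted above.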
Forest plots date back to at least the 1970s. One plot is shown in a 1985 book about
meta-analysis.[2]:252 The first use in print of the word "forest plot" may be in an abstract for a
poster at the Pittsburgh (USA) meeting of the Society for Clinical Trials in May 1996.[3] An
informative investigation on the origin of the notion "forest plot" was published in 2001.[4] The
name refers to the forest of lines produced. In September 1990, Richard Peto joked that the plot
was named after a breast cancer researcher called Pat Forrest, and as a result the name has
sometimes been spelt "forrest plot".[4]

Effective human resources management


Strategic Human Resource Management links HRM with strategic goals and
objectives in order to improve business performance and to develop
organizational cultures that foster innovation and flexibility. It involves planning HR
activities and deployment in such a way as to enable organizations to achieve their
goals. Human Resource activities such as recruitment, selection, training and
rewarding personnel are carried out with the company's goals and objectives in
view. Organizations focus on identifying, analyzing and balancing two sorts of
forces: the organization's external opportunities and threats on the one hand,
and its internal strengths and weaknesses on the other. Aligning the Human
Resource system with the strategic goals of the firm has helped organizations
achieve outstanding targets.

Effective Human Resource Management is the Center for Effective Organizations' (CEO) sixth
report of a fifteen-year study of HR management in today's organizations. The only long-term
analysis of its kind, this book compares the findings from CEO's earlier studies to new data
collected in 2010. Edward E. Lawler III and John W. Boudreau measure how HR management is
changing, paying particular attention to what creates a successful HR function: one that
contributes to a strategic partnership and overall organizational effectiveness. Moreover, the
book identifies best practices in areas such as the design of the HR organization and HR metrics.
It clearly points out how the HR function can and should change to meet the future demands of a
global and dynamic labor market.
For the first time, the study features comparisons between U.S.-based firms and companies in
China, Canada, Australia, the United Kingdom, and other European countries. With this new
analysis, organizations can measure their HR organization against a worldwide sample, assessing
their positioning in the global marketplace, while creating an international standard for HR
management.

Policy?
1. Politics: (1) The basic principles by which a government is guided.
(2) The declared objectives that a government or party seeks to achieve and
preserve in the interest of the national community. See also public policy.
2. Insurance: The formal contract issued by an insurer that contains terms and
conditions of the insurance cover and serves as its legal evidence.
3. Management: The set of basic principles and associated guidelines, formulated
and enforced by the governing body of an organization, to direct and limit
its actions in pursuit of long-term goals. See also corporate policy.

A policy is a principle or protocol to guide decisions and achieve rational outcomes. A policy is a
statement of intent, and is implemented as a procedure[1] or protocol. Policies are generally
adopted by the board or senior governance body within an organization, whereas procedures
or protocols are developed and adopted by senior executive officers. Policies can assist in
both subjective and objective decision making. Policies to assist in subjective decision making
usually support senior management with decisions that must weigh the relative merits of a
number of factors, and as a result are often hard to test objectively, e.g. a work-life balance
policy. In contrast, policies to assist in objective decision making are usually operational in
nature and can be objectively tested, e.g. a password policy.
The term may apply to government, private sector organizations and groups, as well as
individuals. Presidential executive orders, corporate privacy policies, and parliamentary rules of
order are all examples of policy. Policy differs from rules or law. While law can compel or
prohibit behaviors (e.g. a law requiring the payment of taxes on income), policy merely guides
actions toward those that are most likely to achieve a desired outcome.
Policy or policy study may also refer to the process of making important organizational
decisions, including the identification of different alternatives such as programs or spending
priorities, and choosing among them on the basis of the impact they will have. Policies can be
understood as political, management, financial, and administrative mechanisms arranged to reach
explicit goals. In public corporate finance, a critical accounting policy is a policy for a
firm/company or an industry which is considered to have a notably high subjective element, and
that has a material impact on the financial statements.

Micro-planning
Micro Planning: A tool to empower people
Micro-planning is a comprehensive planning approach wherein the community
prepares development plans themselves considering the priority needs of the
village. Inclusion and participation of all sections of the community is central to
micro-planning, thus making it an integral component of decentralized governance.
For village development to be sustainable and participatory, it is imperative that the
community owns its village development plans and that the community ensures
that development is in consonance with its needs.
However, from our experience of working with the panchayats in Mewat, we realized
that this bottom-up planning approach had never been followed in making village
development plans in the past. In many cases, the elected panchayat
representatives had not even heard of the term.
Acknowledging the significance of micro-planning for village development, IRRAD's
Capacity Building Center organized a week-long training workshop on micro-planning
for elected representatives of panchayats and IRRAD's staff working with
panchayats in the villages. The aim of this workshop was to educate the
participants about the concept of micro-planning and its importance in a
decentralized governance system.
As part of this workshop, the concept of micro-planning, the why and how of it, and
the difference between micro-planning and traditional planning approaches were
explained to the participants in detail. To give practical exposure to the
participants, a three-day micro-planning exercise was carried out in Untaka Village
of Nuh Block, Mewat. The objective of this exercise was to show participants how
micro-planning is carried out and what challenges may arise during its conduct, and
to prepare the village development plan following the micro-planning approach.
The village sarpanch led the process from the front, and the entire village and
panchayat members participated wholeheartedly in this exercise. The Participatory Rural
Appraisal (PRA) technique, which incorporates the knowledge and opinions of rural
people in the planning and management of development projects and programmes,
was used to gather information and prioritize development works. Resource, social
and development issue prioritization maps were prepared by the villagers after
analyzing the collected information. The villagers further identified the problems
associated with village development and recommended solutions for specific
problems while working in groups. The planning process went on for two days

subsequent to which a Gram Sabha (village committee), the first power unit in the
panchayati raj system, was organized on the third day. About 250 people
participated in the Gram Sabha including 65 women and 185 men. The sarpanch
shared the final village analysis and development plans with the villagers present in
Gram Sabha and asked for their inputs and suggestions. After incorporating the
suggestions received, a plan was prepared and submitted to Block Development
Office for final approval and sanction of funds.
"After the successful conduct of Gram Sabha in our village, we now need to build
synergies with the district level departments to implement the plans drawn in the
meeting," said the satisfied Sarpanch of Untka after experiencing the conduct of
micro planning exercise in their village.

Macro-planning

Macro Planning and Policy Division (MPPD) is responsible for setting macroeconomic
policies and strategies in consultation with key agencies, such as the Reserve Bank
of Fiji (RBF) and Ministry of Finance. The Division analyzes and forecasts movements
in macroeconomic indicators and accounts, including Gross Domestic Product (GDP),
Exports and Imports, and the Balance of Payments (BOP). Macroeconomic
forecasting involves making assessments on production data in the various sectors
of the economy for compilation of quarterly forecasts of the National Accounts.
The Division is also involved in undertaking assessments and research on
macroeconomic indicators, internal and external shocks and structural reform measures,
which include areas such as investment, the labour market, the goods market, trade,
public enterprises, and the public service.
The Macro Policy and Planning Division:
- Provides technical and policy advice;
- Produces macroeconomic forecasts of Gross Domestic Product, Exports,
  Imports and Balance of Payments on a quarterly basis;
- Participates effectively in policy development meetings and consultative
  forums;
- Undertakes research on topical issues and provides pre-budget macroeconomic
  analyses and advice.
1. Macro lesson planning
The term macro comes from Greek makros meaning long, large. For
teachers, macro lesson planning means coming up with the curriculum for the
semester/month/year/etc. Not all teachers feel they are responsible for this as many
schools have set curriculums and/or textbooks determined by the academic
coordinator. However, even in these cases, teachers may be called upon to devise a
curriculum for a new class, modify an older curriculum, or map out themes to match
the target lessons within the curriculum.
At my old school, for instance, I had the chance to develop the curriculum for a
TOEIC Intermediate and a TOEFL Advanced class when they were first introduced at
our school. I've also modified older curricula (or curriculums, if you prefer; both are
acceptable) for various levels because of students' changing needs. And finally, my
old school kindly granted the teachers one day a month of paid prep time/new
student intake, where we'd decide on the themes that we'd be using for our class to
ensure there wasn't too much overlap with other classes. We did have a set
curriculum in terms of grammar points, but themes and supplementary materials
were up to us. Doing a bit of planning before the semester started ensured that we
stayed organized and kept the students' interest throughout the semester.
Another benefit of macro lesson planning is that teachers can share the overall
goals of the course with their students on the first day, and they can reiterate those
goals as the semester progresses. Students often lose sight of the big picture and
get discouraged with their English level, and having clear goals that they see
themselves reaching helps prevent this.
2. Micro lesson planning
The term micro comes from the Greek mikros meaning small, little. In the ELT
industry, micro lesson planning refers to planning one specific lesson based on
one target (e.g., the simple past). It involves choosing a topic or grammar point and
building a full lesson to complement it. A typical lesson plan involves a warm-up
activity, which introduces the topic or elicits the grammar naturally, followed by an
explanation/lesson of the point to be covered. Next, teachers devise a few activities
that allow students to practice the target point, preferably through a mix of skills
(speaking, listening, reading, writing). Finally, teachers should plan a brief wrap-up
activity that brings the lesson to a close. This could be as simple as planning to ask
students to share their answers from the final activity as a class.
Some benefits of micro lesson planning include classes that run smoothly and
students who don't get bored. Lesson planning ensures that you'll be prepared for
every class and that you'll have a variety of activities on hand for whatever
situation may arise (well, the majority of situations; I'm sure we've all had those
classes where an activity we thought would rock ends up as an epic fail).

For more information on micro lesson planning, check out How to Make a Lesson
Plan, a blog post I wrote last year, where I emphasized the importance of planning
fun, interesting fillers so that students stay engaged. I also provided links in that
post to many examples of activities you can use for warm-ups, main activities,
fillers, homework, etc. There is also a good template for a typical lesson plan
at docstoc.
Can anyone think of other benefits of macro or micro lesson planning? Does anyone
have a different definition of these terms? Let us know below.
Happy planning!
Tanya

Macro is big and micro is very small. Macroeconomics depends on big projects like
steel mills, big industrial units, national highway projects, etc., which aim at
producing goods and services in very large quantities and serve a wide area. These
take time to produce results because of the size of the projects. Microeconomics is
on a small scale, limited to a specific area or location and purpose, and normally
produces results in a much shorter time. The best example of microeconomics is the
Grameen Bank of Bangladesh started by Md. Yunus, who also received international
awards for his initiative. The concept of microcredit was pioneered by the
Bangladesh-based Grameen Bank, which broke away from the age-old belief that
low income amounted to low savings and low investment. It started what came to
be a system which followed this sequence: low income, credit, investment, more
income, more credit, more investment, more income. The bank is owned by the poor
borrowers, who are mostly women. Borrowers of Grameen Bank at present own
95 per cent of the total equity, with the balance 5% held by the Government.
Microeconomics was also one of the policies of Mahatma Gandhi, who wanted planning
to start at the local village level and spread through the country; unfortunately this has
not happened, and even now the results of development have not percolated to the
common man, particularly in the rural areas.

Macro planning vs. micro planning


Ideally, lesson planning should be done at two levels: macro planning and micro
planning. The former is planning over time, for instance, the planning for a month, a
term, or the whole course. The latter is planning for a specific lesson, which usually

lasts 40 or 50 minutes. Of course, there is no clear-cut difference between these two
types of planning. Micro planning should be based on macro planning, and macro
planning is apt to be modified as lessons go on.
Read through the following items and decide which belong to macro planning and
which belong to micro planning. Some could belong to both. When you have
finished, compare your decisions with your partner.
Thinking and sharing activity
TASK 2
1. Write down lesson notes to guide teaching.
2. Decide on the overall aims of a course or programme.
3. Design activities and procedures for a lesson.
4. Decide which language points to cover in a lesson.
5. Study the textbooks and syllabus chosen by the institute.
6. Decide which skills are to be practised.
7. Prepare teaching aids.
8. Allocate time for activities.
9. Prepare games or songs for a lesson.
10. Prepare supplementary materials.

In a sense, macro planning is not writing lesson plans for specific lessons but rather
familiarizing oneself with the context in which language teaching is taking place. Macro
planning involves the following:
1) Knowing about the course: The teacher should get to know which language areas
and language skills should be taught or practised in the course, what materials and
teaching aids are available, and what methods and techniques can be used.
2) Knowing about the institution: The teacher should get to know the institution's
arrangements regarding time, length, frequency of lessons, physical conditions of
classrooms, and exam requirements.
3) Knowing about the learners: The teacher should acquire information about the
students' age range, sex ratio, social background, motivation, attitudes, interests,
learning needs and other individual factors.
4) Knowing about the syllabus: The teacher should be clear about the purposes,
requirements and targets specified in the syllabus.


Much of macro planning is done prior to the commencement of a course. However,
macro planning is a job that never really ends until the end of the course.
Macro planning provides general guidance for language teachers. However, most
teachers have more confidence if they have a kind of written plan for each lesson
they teach. All teachers have different personalities and different teaching
strategies, so it is very likely their lesson plans would differ from each other.
However, there are certain guidelines that we can follow and certain elements that
we can incorporate in our plans to help us create purposeful, interesting and
motivating lessons for our learners.

Components of policy/ planning


Five essential components
The five essential components that ensure an effective P&P program include the
- organizational documentation process
- information plan or architecture
- documentation approach
- P&P expertise, and
- technologies (tools).
Definition of P&P program
A policies and procedures (P&P) program refers to the context in which an
organization formally plans, designs, implements, manages, and uses P&P
communication in support of performance-based learning and on-going reference.
Description of components
The five components of a formal P&P program are described below:
- An organizational documentation process, which describes how members of
  the organization interact in the development and maintenance of the life span of
  P&P content
- The information plan or architecture, which identifies the coverage and
  organization of subject matter and related topics to be included
- The documentation approach, which designates how P&P content will be
  designed and presented, including the documentation methods, techniques,
  formats, and styles
- The P&P expertise necessary for planning, designing, developing,
  coordinating, implementing, and publishing P&P content, as well as the expertise
  needed for managing the program and the content development projects
- The designated technologies for developing, publishing, storing, accessing,
  and managing content, as well as for monitoring content usage.
Implementing components
Every organization is usually at a different maturity stage in its P&P investment.
Therefore, before establishing or enhancing a current P&P program, it is important
to obtain an objective assessment of the organizational maturity, including where
your P&P program is now and where it needs to be in the future. Once the maturity
level is established, it is then necessary to develop a strategic P&P program plan.
The strategic plan will enable your organization to achieve the necessary level of
maturity for each component and ensure that your organization will maximize the
value of its P&P investment.
Conclusion
Organizations with informal P&P programs do not usually reap the benefits that
formal P&P programs provide. An effective P&P program must include the five
components described above. It is essential to have an objective P&P program
assessment to determine the existing P&P maturity grade and where it should be.
The P&P strategic plan is the basis for achieving a higher level of performance in
your P&P program.

The following information is provided as a template to assist learners in drafting a policy. However, it
must be remembered that policies are written to address specific issues, and therefore the
structure and components of a policy will differ considerably according to the need. A policy
document may be many pages or it may be a single page with just a few simple statements.
The following template is drawn from an Information Bulletin, "Policy and Planning", by Sport
and Recreation Victoria. It is suggested that there are nine components. The examples given for
each component should not be construed as a complete policy.

Component: A statement of what the organisation seeks to achieve for its clients
Example: The following policy aims to ensure that XYZ Association Inc. fulfills the
expectation of its members for quality services in sport and recreation delivery.

Component: Underpinning principles, values and philosophies
Example: The underpinning principle of this policy is that the provision of quality
services is of the utmost importance in building membership and participation.
Satisfied members are more likely to continue participation, contribute to the
organisation and renew their memberships each year.

Component: Broad service objectives which explain the areas in which the
organisation will be dealing
Example: This policy aims to improve the quality of services provided by XYZ Assoc.
Inc. in:
- The organisation and management of programs and services
- The management of association resources

Component: Strategies to achieve each objective
Example: Strategies to improve the quality of services in program and event
management include:
- Provision of training for event officials
- Implementing a participant survey
- Fostering a culture of continuous improvement
Strategies to improve the quality of services through the better management of
resources include:
- Implementation of best practice consultation and planning processes
- Professional development opportunities for the human resources of the
  organisation
- Instituting a risk management program
- The maintenance of records and databases to assist in the management process

Component: Specific actions to be taken
Example: This policy recommends the following actions:
- Participants are surveyed on a yearly basis for satisfaction with programs and
  services
- The quality of services to participants is reviewed annually as part of the
  strategic planning process
- The operational planning process includes scheduling events for the professional
  development of staff
- The risk management program is reviewed on a yearly basis, and this review
  involves risk management professionals
- All clubs are consulted in the maintenance, distribution and usage of physical and
  financial resources

Component: Desired outcomes of specific actions
Example: The desired outcomes of this policy are as follows:
- Increased satisfaction of participants with the association's events and programs
- The best utilisation of the association's resources in line with the expectations of
  members
- The better management of risks associated with services delivery

Component: Performance indicators
Example: The success of this policy may be measured in terms of:
- An increase in the average membership duration
- An increase in participation in association events
- An increase in the number of volunteer officials
- A reduction in injuries

Component: Management plans and day-to-day operational rules covering all
aspects of services delivery
Example: This section of the policy provides further information and detail on how
the policy is to be implemented and observed on a day-to-day basis.

Component: A review program
Example: This policy should be reviewed annually. The review process should
include an examination of the performance indicators, consultation with members of
the association, and a discussion forum involving the management committee and
risk management professionals.

Note: These hypothetical examples are for illustration. There is no substitute for
research and consultation in the development of effective policies.

Health care financing

Health Care Financing, Efficiency, and Equity


This paper examines the efficiency and equity implications of alternative health
care system financing strategies. Using data across the OECD, I find that almost all
financing choices are compatible with efficiency in the delivery of health care, and
that there has been no consistent and systematic relationship between financing
and cost containment. Using data on expenditures and life expectancy by income
quintile from the Canadian health care system, I find that universal, publicly funded
health insurance is modestly redistributive. Putting $1 of tax funds into the
public health insurance system effectively channels between $0.23 and $0.26
toward people in the lowest income quintile, and about $0.50 to the bottom two
income quintiles. Finally, a review of the literature across the OECD suggests that
the progressivity of financing of the health insurance system has limited
implications for overall income inequality, particularly over time.

Health financing systems are critical for reaching universal health coverage. Health
financing levers to move closer to universal health coverage lie in three interrelated
areas:
- raising funds for health;
- reducing financial barriers to access through prepayment and subsequent
  pooling of funds in preference to direct out-of-pocket payments; and
- allocating or using funds in a way that promotes efficiency and equity.
Developments in these key health financing areas will determine whether health
services exist and are available for everyone and whether people can afford to use
health services when they need them.
Guided by World Health Assembly resolution WHA64.9 of May 2011 and
based on the recommendations of the World Health Report 2010, Health systems
financing: The path to universal coverage, WHO is supporting countries in
developing health financing systems that can bring them closer to universal
coverage.

HEALTH CARE FINANCING


Management Sciences for Health (MSH) helps governments and nongovernmental organizations assess their
current financial situation and systems, understand service costs, develop financing solutions and
use funds more effectively and efficiently. MSH believes in integrated approaches to health
finance and works with sets of policy levers that will produce the best outcomes, including
government regulations, budgeting mechanisms, insurance payment methods and provider and
patient incentives.

Healthcare Financing

The Need
More than 120 million people in Pakistan do not have health coverage. This pushes
the poor into debt and an inevitable medical-poverty trap. Two-thirds of households
surveyed over the last three years reported that they were affected by one or more
health problems and went into debt to finance the cost. Many who cannot afford
treatment, particularly women, forgo medical treatment altogether.

The Solution
To fill this vacuum in healthcare financing, the American Pakistan Foundation has
partnered with Heartfile Health Financing to support their groundbreaking work in
healthcare reform and health financing for the poor in Pakistan.

Heartfile is an innovative program that utilizes a custom-made technology platform


to transfer funds for treatment costs of the poor. The system, founded by Dr. Sania
Nishtar, is highly transparent and effective by providing a direct connection
between the donor, healthcare facility, and beneficiary patient.

Success Stories
At the age of 15, Majid was the only breadwinner of his family. After being hit by a
tractor, he was out of a job with a starving family and no money for an operation.
Through Heartfile he was able to get the treatment he needed and stay out of debt.
-- Majid

The Process

Heartfile is contacted via text or email when a person of dire financial need is
admitted into one of a list of preregistered hospitals.

Within 24 hours a volunteer is mobilized to see the patient and assess poverty status
and eligibility by running their identity card information through the national
database authority.

Once eligibility is established, the patient is sent funds within 72 hours through a
cash transfer to their service provider.

Donors to Heartfile have full control over their donation through a web database
that allows them to decide where they want their funds to go. They are connected
to the people they support through a personal donation page that allows them to
see exactly how their funds were used.
Hill's Criteria of Causation
Hill's Criteria of Causation outline the minimal
conditions needed to establish a causal relationship
between two items. These criteria were originally
presented by Austin Bradford Hill (1897-1991), a British
medical statistician, as a way of determining the causal
link between a specific factor (e.g., cigarette smoking)
and a disease (such as emphysema or lung
cancer). Hill's Criteria form the basis of modern
epidemiological research, which attempts to establish
scientifically valid causal connections between
potential disease agents and the many diseases that
afflict humankind. While the criteria established by
Hill (and elaborated by others) were developed as a
research tool in the medical sciences, they are equally
applicable to sociology, anthropology and other social
sciences, which attempt to establish causal
relationships among social phenomena. Indeed, the
principles set forth by Hill form the basis of evaluation
used in all modern scientific research. While it is quite
easy to claim that agent "A" (e.g., smoking) causes
disease "B" (lung cancer), it is quite another matter to

establish a meaningful, statistically valid connection
between the two phenomena. It is just as necessary to
ask if the claims made within the social and behavioral
sciences live up to Hill's Criteria as it is to ask the
question in epidemiology (which is also a social and
behavioral science). While it is quite easy to claim that
population growth causes poverty or that globalization
causes underdevelopment in Third World countries, it is
quite another thing to demonstrate scientifically that
such causal relationships, in fact, exist. Hill's
Criteria simply provide an additional valuable measure
by which to evaluate the many theories and
explanations proposed within the social sciences.
Hill's Criteria

Hill's Criteria* are presented here as they have been applied in
epidemiological research, followed by examples which illustrate how they
would be applied to research in the social and behavioral sciences.
1. Temporal Relationship:
Exposure always precedes the outcome. If factor "A" is believed to cause
a disease, then it is clear that factor "A" must necessarily always
precede the occurrence of the disease. This is the only absolutely
essential criterion. This criterion negates the validity of all functional
explanations used in the social sciences, including the functionalist
explanations that dominated British social anthropology for so many years
and the ecological functionalism that pervades much American cultural
ecology.
2. Strength:
This is defined by the size of the association as measured by appropriate
statistical tests. The stronger the association, the more likely it is that
the relation of "A" to "B" is causal. For example, the more highly
correlated hypertension is with a high sodium diet, the stronger is the
relation between sodium and hypertension. Similarly, the higher the
correlation between patrilocal residence and the practice of male
circumcision, the stronger is the relation between the two social practices.
(A worked measure-of-association sketch is given after this list.)
3. Dose-Response Relationship:
An increasing amount of exposure increases the risk. If a dose-response
relationship is present, it is strong evidence for a causal relationship.
However, as with specificity (see below), the absence of a dose-response
relationship does not rule out a causal relationship. A threshold may exist
above which a relationship may develop. At the same time, if a specific
factor is the cause of a disease, the incidence of the disease should
decline when exposure to the factor is reduced or eliminated. An
anthropological example of this would be the relationship between
population growth and agricultural intensification. If population growth is
a cause of agricultural intensification, then an increase in the size of a
population within a given area should result in a commensurate increase
in the amount of energy and resources invested in agricultural
production. Conversely, when a population decrease occurs, we should
see a commensurate reduction in the investment of energy and resources
per acre. This is precisely what happened in Europe before and after the
Black Plague. The same analogy can be applied to global temperatures. If
increasing levels of CO2 in the atmosphere cause increasing global
temperatures, then, "other things being equal", we should see both a
commensurate increase and a commensurate decrease in global
temperatures following an increase or decrease respectively in CO2 levels
in the atmosphere.
4. Consistency:
The association is consistent when results are replicated in studies in
different settings using different methods. That is, if a relationship is
causal, we would expect to find it consistently in different studies and
among different populations. This is why numerous experiments have to
be done before meaningful statements can be made about the causal
relationship between two or more factors. For example, it required
thousands of highly technical studies of the relationship between
cigarette smoking and cancer before a definitive conclusion could be made
that cigarette smoking increases the risk of (but does not cause) cancer.
Similarly, it would require numerous studies of the difference between
male and female performance of specific behaviors by a number of
different researchers and under a variety of different circumstances
before a conclusion could be made regarding whether a gender difference
exists in the performance of such behaviors.
5. Plausibility:
The association agrees with currently accepted understanding of
pathological processes. In other words, there needs to be some
theoretical basis for positing an association between a vector and disease,
or one social phenomenon and another. One may, by chance, discover a
correlation between the price of bananas and the election of dog catchers
in a particular community, but there is not likely to be any logical
connection between the two phenomena. On the other hand, the
discovery of a correlation between population growth and the incidence of
warfare among Yanomamo villages would fit well with ecological theories
of conflict under conditions of increasing competition over resources. At
the same time, research that disagrees with established theory is not
necessarily false; it may, in fact, force a reconsideration of accepted
beliefs and principles.
6. Consideration of Alternate Explanations:
In judging whether a reported association is causal, it is necessary to
determine the extent to which researchers have taken other possible
explanations into account and have effectively ruled out such alternate
explanations. In other words, it is always necessary to consider multiple
hypotheses before making conclusions about the causal relationship
between any two items under investigation.
7. Experiment:
The condition can be altered (prevented or ameliorated) by an appropriate
experimental regimen.
8. Specificity:
This is established when a single putative cause produces a specific
effect. This is considered by some to be the weakest of all the criteria.
The diseases attributed to cigarette smoking, for example, do not meet
this criterion. When specificity of an association is found, it provides
additional support for a causal relationship. However, absence of
specificity in no way negates a causal relationship. Because outcomes (be
they the spread of a disease, the incidence of a specific human social
behavior or changes in global temperature) are likely to have multiple
factors influencing them, it is highly unlikely that we will find a one-to-one
cause-effect relationship between two phenomena. Causality is most
often multiple. Therefore, it is necessary to examine specific causal
relationships within a larger systemic perspective.
9. Coherence:
The association should be compatible with existing theory and
knowledge. In other words, it is necessary to evaluate claims of causality
within the context of the current state of knowledge within a given field
and in related fields. What do we have to sacrifice about what we
currently know in order to accept a particular claim of causality? What, for
example, do we have to reject regarding our current knowledge in
geography, physics, biology and anthropology in order to accept the
Creationist claim that the world was created as described in the Bible a
few thousand years ago? Similarly, how consistent are racist and sexist
theories of intelligence with our current understanding of how genes work
and how they are inherited from one generation to the next? However, as
with the issue of plausibility, research that disagrees with established
theory and knowledge is not automatically false. It may, in fact, force
a reconsideration of accepted beliefs and principles. All currently
accepted theories, including Evolution, Relativity and non-Malthusian
population ecology, were at one time new ideas that challenged
orthodoxy. Thomas Kuhn has referred to such changes in accepted
theories as "Paradigm Shifts".

The Bradford Hill criteria, otherwise known as Hill's criteria for causation, are
a group of minimal conditions necessary to provide adequate evidence of a causal
relationship between an incidence and a consequence, established by
the English epidemiologist Sir Austin Bradford Hill (1897-1991) in 1965.
The list of the criteria is as follows:
1. Strength: A small association does not mean that there is not a causal
effect, though the larger the association, the more likely that it is causal.[1]
2. Consistency: Consistent findings observed by different persons in different
places with different samples strengthen the likelihood of an effect.[1]
3. Specificity: Causation is likely if a very specific population at a specific site
has a disease with no other likely explanation. The more specific an
association between a factor and an effect is, the bigger the probability of a
causal relationship.[1]
4. Temporality: The effect has to occur after the cause (and if there is an
expected delay between the cause and expected effect, then the effect must
occur after that delay).[1]
5. Biological gradient: Greater exposure should generally lead to greater
incidence of the effect. However, in some cases, the mere presence of the
factor can trigger the effect. In other cases, an inverse proportion is
observed: greater exposure leads to lower incidence.[1]
6. Plausibility: A plausible mechanism between cause and effect is helpful (but
Hill noted that knowledge of the mechanism is limited by current knowledge).[1]
7. Coherence: Coherence between epidemiological and laboratory findings
increases the likelihood of an effect. However, Hill noted that "... lack of such
[laboratory] evidence cannot nullify the epidemiological effect on
associations".[1]
8. Experiment: "Occasionally it is possible to appeal to experimental
evidence".[1]
9. Analogy: The effect of similar factors may be considered.[1]

Bhopal gas tragedy: later outcomes and effects


The Bhopal disaster and its aftermath: a review
Abstract
On December 3 1984, more than 40 tons of methyl isocyanate gas leaked
from a pesticide plant in Bhopal, India, immediately killing at least 3,800
people and causing significant morbidity and premature death for many
thousands more. The company involved in what became the worst industrial
accident in history immediately tried to dissociate itself from legal
responsibility. Eventually it reached a settlement with the Indian Government
through mediation of that country's Supreme Court and accepted moral
responsibility. It paid $470 million in compensation, a relatively small amount
based on significant underestimations of the long-term health
consequences of exposure and the number of people exposed. The disaster
indicated a need for enforceable international standards for environmental
safety, preventative strategies to avoid similar accidents and industrial
disaster preparedness.

Since the disaster, India has experienced rapid industrialization. While some
positive changes in government policy and behavior of a few industries have
taken place, major threats to the environment from rapid and poorly
regulated industrial growth remain. Widespread environmental degradation
with significant adverse human health consequences continues to occur
throughout India.

December 2004 marked the twentieth anniversary of the massive toxic gas
leak from Union Carbide Corporation's chemical plant in Bhopal in the state of
Madhya Pradesh, India that killed more than 3,800 people. This review
examines the health effects of exposure to the disaster, the legal response,
the lessons learned and whether or not these are put into practice in India in

terms of industrial development, environmental management and public


health.

History
In the 1970s, the Indian government initiated policies to encourage
foreign companies to invest in local industry. Union Carbide Corporation
(UCC) was asked to build a plant for the manufacture of Sevin, a pesticide
commonly used throughout Asia. As part of the deal, India's government
insisted that a significant percentage of the investment come from local
shareholders. The government itself had a 22% stake in the company's
subsidiary, Union Carbide India Limited (UCIL) [1]. The company built the
plant in Bhopal because of its central location and access to transport
infrastructure. The specific site within the city was zoned for light industrial
and commercial use, not for hazardous industry. The plant was initially
approved only for formulation of pesticides from component chemicals, such
as MIC imported from the parent company, in relatively small quantities.
However, pressure from competition in the chemical industry led UCIL to
implement "backward integration", the manufacture of raw materials and
intermediate products for formulation of the final product within one facility.
This was inherently a more sophisticated and hazardous process [2].

In 1984, the plant was manufacturing Sevin at one quarter of its production capacity due
to decreased demand for pesticides. Widespread crop failures and famine on the
subcontinent in the 1980s led to increased indebtedness and decreased capital for farmers
to invest in pesticides. Local managers were directed to close the plant and prepare it for
sale in July 1984 due to decreased profitability [3]. When no ready buyer was found,
UCIL made plans to dismantle key production units of the facility for shipment to another
developing country. In the meantime, the facility continued to operate with safety
equipment and procedures far below the standards found in its sister plant in Institute,
West Virginia. The local government was aware of safety problems but was reticent to

place heavy industrial safety and pollution control burdens on the struggling industry
because it feared the economic effects of the loss of such a large employer [3].

At 11.00 PM on December 2 1984, while most of the one million residents of Bhopal
slept, an operator at the plant noticed a small leak of methyl isocyanate (MIC) gas and
increasing pressure inside a storage tank. The vent-gas scrubber, a safety device designed
to neutralize toxic discharge from the MIC system, had been turned off three weeks prior
[3]. Apparently a faulty valve had allowed one ton of water for cleaning internal pipes to
mix with forty tons of MIC [1]. A 30 ton refrigeration unit that normally served as a
safety component to cool the MIC storage tank had been drained of its coolant for use in
another part of the plant [3]. Pressure and heat from the vigorous exothermic reaction in
the tank continued to build. The gas flare safety system was out of action and had been
for three months. At around 1.00 AM, December 3, loud rumbling reverberated around
the plant as a safety valve gave way sending a plume of MIC gas into the early morning
air [4]. Within hours, the streets of Bhopal were littered with human corpses and the
carcasses of buffaloes, cows, dogs and birds. An estimated 3,800 people died
immediately, mostly in the poor slum colony adjacent to the UCC plant [1,5]. Local
hospitals were soon overwhelmed with the injured, a crisis further compounded by a lack
of knowledge of exactly what gas was involved and what its effects were [1]. It became
one of the worst chemical disasters in history and the name Bhopal became synonymous
with industrial catastrophe [5].

Estimates of the number of people killed in the first few days by the plume
from the UCC plant run as high as 10,000, with 15,000 to 20,000 premature
deaths reportedly occurring in the subsequent two decades [6]. The Indian
government reported that more than half a million people were exposed to
the gas [7]. Several epidemiological studies conducted soon after the
accident showed significant morbidity and increased mortality in the exposed
population. Table 1 summarizes early and late effects on health.
These data are likely to under-represent the true extent of adverse health

effects because many exposed individuals left Bhopal immediately following
the disaster, never to return, and were therefore lost to follow-up [8].

Aftermath
Immediately after the disaster, UCC began attempts to dissociate itself from
responsibility for the gas leak. Its principal tactic was to shift culpability to
UCIL, stating the plant was wholly built and operated by the Indian subsidiary.
It also fabricated scenarios involving sabotage by previously unknown Sikh
extremist groups and disgruntled employees but this theory was impugned
by numerous independent sources [1].

The toxic plume had barely cleared when, on December 7, the first multi-billion
dollar lawsuit was filed by an American attorney in a U.S. court. This
was the beginning of years of legal machinations in which the ethical
implications of the tragedy and its effect on Bhopal's people were largely
ignored. In March 1985, the Indian government enacted the Bhopal Gas Leak
Disaster Act as a way of ensuring that claims arising from the accident would
be dealt with speedily and equitably. The Act made the government the sole
representative of the victims in legal proceedings both within and outside
India. Eventually all cases were taken out of the U.S. legal system under the
ruling of the presiding American judge and placed entirely under Indian
jurisdiction much to the detriment of the injured parties.

In a settlement mediated by the Indian Supreme Court, UCC accepted moral
responsibility and agreed to pay $470 million to the Indian government to be distributed
to claimants as a full and final settlement. The figure was partly based on the disputed
claim that only 3000 people died and 102,000 suffered permanent disabilities [9]. Upon
announcing this settlement, shares of UCC rose $2 per share, or 7%, in value [1]. Had
compensation in Bhopal been paid at the same rate that asbestosis victims were being
awarded in US courts by defendants, including UCC, which mined asbestos from 1963 to
1985, the liability would have been greater than the $10 billion the company was worth

and insured for in 1984 [10]. By the end of October 2003, according to the Bhopal Gas
Tragedy Relief and Rehabilitation Department, compensation had been awarded to
554,895 people for injuries received and 15,310 survivors of those killed. The average
amount to families of the dead was $2,200 [9].

At every turn, UCC has attempted to manipulate, obfuscate and withhold scientific data
to the detriment of victims. Even to this date, the company has not stated exactly what
was in the toxic cloud that enveloped the city on that December night [8]. When MIC is
exposed to 200°C heat, it forms degraded MIC that contains the more deadly hydrogen
cyanide (HCN). There was clear evidence that the storage tank temperature did reach this
level in the disaster. The cherry-red color of the blood and viscera of some victims was
characteristic of acute cyanide poisoning [11]. Moreover, many responded well to
administration of sodium thiosulfate, an effective therapy for cyanide poisoning but not
MIC exposure [11]. UCC initially recommended use of sodium thiosulfate but withdrew
the statement later prompting suggestions that it attempted to cover up evidence of HCN
in the gas leak. The presence of HCN was vigorously denied by UCC and was a point of
conjecture among researchers [8,11-13].

As a further insult, UCC discontinued operations at its Bhopal plant following the disaster
but failed to clean up the industrial site completely. The plant continues to leak several
toxic chemicals and heavy metals that have found their way into local aquifers.
Dangerously contaminated water has now been added to the legacy left by the company
for the people of Bhopal [1,14].

Lessons learned
The events in Bhopal revealed that expanding industrialization in developing countries
without concurrent evolution in safety regulations could have catastrophic consequences
[4]. The disaster demonstrated that seemingly local problems of industrial hazards and
toxic contamination are often tied to global market dynamics. UCC's Sevin production
plant was built in Madhya Pradesh not to avoid environmental regulations in the U.S. but
to exploit the large and growing Indian pesticide market. However the manner in which

the project was executed suggests the existence of a double standard for multinational
corporations operating in developing countries [1]. Enforceable uniform international
operating regulations for hazardous industries would have provided a mechanism for
significantly improved safety in Bhopal. Even without enforcement, international
standards could provide norms for measuring performance of individual companies
engaged in hazardous activities such as the manufacture of pesticides and other toxic
chemicals in India [15]. National governments and international agencies should focus on
widely applicable techniques for corporate responsibility and accident prevention as
much in the developing world context as in advanced industrial nations [16]. Specifically,
prevention should include risk reduction in plant location and design and safety
legislation [17].

Local governments clearly cannot allow industrial facilities to be situated within urban
areas, regardless of the evolution of land use over time. Industry and government need to
bring proper financial support to local communities so they can provide medical and
other necessary services to reduce morbidity, mortality and material loss in the case of
industrial accidents.

Public health infrastructure was very weak in Bhopal in 1984. Tap water was available
for only a few hours a day and was of very poor quality. With no functioning sewage
system, untreated human waste was dumped into two nearby lakes, one a source of
drinking water. The city had four major hospitals but there was a shortage of physicians
and hospital beds. There was also no mass casualty emergency response system in place
in the city [3]. Existing public health infrastructure needs to be taken into account when
hazardous industries choose sites for manufacturing plants. Future management of
industrial development requires that appropriate resources be devoted to advance
planning before any disaster occurs [18]. Communities that do not possess infrastructure
and technical expertise to respond adequately to such industrial accidents should not be
chosen as sites for hazardous industry.

Since 1984
Following the events of December 3 1984 environmental awareness and activism in India
increased significantly. The Environment Protection Act was passed in 1986, creating the
Ministry of Environment and Forests (MoEF) and strengthening India's commitment to
the environment. Under the new act, the MoEF was given overall responsibility for
administering and enforcing environmental laws and policies. It established the
importance of integrating environmental strategies into all industrial development plans
for the country. However, despite greater government commitment to protect public
health, forests, and wildlife, policies geared to developing the country's economy have
taken precedence in the last 20 years [19].

India has undergone tremendous economic growth in the two decades since the Bhopal
disaster. Gross domestic product (GDP) per capita has increased from $1,000 in 1984 to
$2,900 in 2004 and it continues to grow at a rate of over 8% per year [20]. Rapid
industrial development has contributed greatly to economic growth but there has been
significant cost in environmental degradation and increased public health risks. Since
abatement efforts consume a large portion of India's GDP, MoEF faces an uphill battle as
it tries to fulfill its mandate of reducing industrial pollution [19]. Heavy reliance on coal-fired
power plants and poor enforcement of vehicle emission laws have resulted from
economic concerns taking precedence over environmental protection [19].

With the industrial growth since 1984, there has been an increase in small scale industries
(SSIs) that are clustered about major urban areas in India. There are generally less
stringent rules for the treatment of waste produced by SSIs due to less waste generation
within each individual industry. This has allowed SSIs to dispose of untreated wastewater
into drainage systems that flow directly into rivers. New Delhi's Yamuna River is
illustrative. Dangerously high levels of heavy metals such as lead, cobalt, cadmium,
chrome, nickel and zinc have been detected in this river which is a major supply of
potable water to India's capital thus posing a potential health risk to the people living
there and areas downstream [21].

Land pollution due to uncontrolled disposal of industrial solid and hazardous waste is
also a problem throughout India. With rapid industrialization, the generation of industrial
solid and hazardous waste has increased appreciably and the environmental impact is
significant [22].

India relaxed its controls on foreign investment in order to accede to WTO rules and
thereby attract an increasing flow of capital. In the process, a number of environmental
regulations are being rolled back as growing foreign investments continue to roll in. The
Indian experience is comparable to that of a number of developing countries that are
experiencing the environmental impacts of structural adjustment. Exploitation and export
of natural resources has accelerated on the subcontinent. Prohibitions against locating
industrial facilities in ecologically sensitive zones have been eliminated while
conservation zones are being stripped of their status so that pesticide, cement and bauxite
mines can be built [23]. Heavy reliance on coal-fired power plants and poor enforcement
of vehicle emission laws are other consequences of economic concerns taking precedence
over environmental protection [19].

In March 2001, residents of Kodaikanal in southern India caught the Anglo-Dutch company, Unilever, red-handed when they discovered a dumpsite with toxic mercury-laced waste from a thermometer factory run by the company's Indian subsidiary,
Hindustan Lever. The 7.4 ton stockpile of mercury-laden glass was found in torn stacks
spilling onto the ground in a scrap metal yard located near a school. In the fall of 2001,
steel from the ruins of the World Trade Center was exported to India apparently without
first being tested for contamination from asbestos and heavy metals present in the twin
tower debris. Other examples of poor environmental stewardship and economic
considerations taking precedence over public health concerns abound [24].

The Bhopal disaster could have changed the nature of the chemical industry and caused a
reexamination of the necessity to produce such potentially harmful products in the first
place. However, the lessons of the acute and chronic effects of exposure to pesticides and their precursors in Bhopal have not changed agricultural practice patterns. An estimated 3 million people per year suffer the consequences of pesticide poisoning, with most exposure occurring in the agricultural developing world. It is reported to be the cause of at least 22,000 deaths in India each year. In the state of Kerala, significant mortality and
morbidity have been reported following exposure to Endosulfan, a toxic pesticide whose
use continued for 15 years after the events of Bhopal [25].

Aggressive marketing of asbestos continues in developing countries as a result of restrictions being placed on its use in developed nations due to the well-established link
between asbestos products and respiratory diseases. India has become a major consumer,
using around 100,000 tons of asbestos per year, 80% of which is imported with Canada
being the largest overseas supplier. Mining, production and use of asbestos in India is
very loosely regulated despite the health hazards. Reports have shown morbidity and
mortality from asbestos related disease will continue in India without enforcement of a
ban or significantly tighter controls [26,27].

UCC has shrunk to one sixth of its size since the Bhopal disaster in an effort to restructure and divest itself. By doing so, the company avoided a hostile takeover, placed a significant portion of UCC's assets out of legal reach of the victims and gave its shareholders and top executives bountiful profits [1]. The company still operates under the ownership of Dow Chemical and still states on its website that the Bhopal disaster was "caused by deliberate sabotage" [28].

Some positive changes were seen following the Bhopal disaster. The British chemical
company, ICI, whose Indian subsidiary manufactured pesticides, increased attention to
health, safety and environmental issues following the events of December 1984. The
subsidiary now spends 30–40% of its capital expenditures on environment-related projects. However, it still does not adhere to standards as strict as those of its parent company in the UK [24].

The US chemical giant DuPont learned its lesson of Bhopal in a different way. The
company attempted for a decade to export a nylon plant from Richmond, VA to Goa,
India. In its early negotiations with the Indian government, DuPont had sought and won a
remarkable clause in its investment agreement that absolved it from all liabilities in case of an accident. But the people of Goa were not willing to acquiesce while an important ecological site was cleared for a heavily polluting industry. After nearly a decade of protests by Goa's residents, DuPont was forced to scuttle its plans there. Chennai was the next proposed site for the plastics plant. The state government there made significantly greater demands on DuPont for concessions on public health and environmental
protection. Eventually, these plans were also aborted due to what the company called
"financial concerns". [29].

Conclusion
The tragedy of Bhopal continues to be a warning sign at once ignored and heeded.
Bhopal and its aftermath were a warning that the path to industrialization, for developing
countries in general and India in particular, is fraught with human, environmental and
economic perils. Some moves by the Indian government, including the formation of the MoEF, have served to offer some protection of the public's health from the harmful practices of local and multinational heavy industry. Grassroots organizations have also played a part in opposing rampant development. The Indian economy is growing at
a tremendous rate but at significant cost in environmental health and public safety as
large and small companies throughout the subcontinent continue to pollute. Far more
remains to be done for public health in the context of industrialization to show that the
lessons of the countless thousands dead in Bhopal have truly been heeded.

Thar disaster
Arid areas of the world are always prone to famines whenever the average annual rainfall is less than
250mm. The Thar region of Sindh, which has climatic and ecological conditions similar to the Indian state of Rajasthan's portion of Thar, faces severe droughts for two to three years in every 10-year cycle.
These areas have been witnessing famine-like conditions for ages. The average annual rainfall is less
than 250mm, which is usually uneven and erratic.
The northern sandy area, known as Achro Thar in districts Sanghar, Khairpur, Sukkur and Ghotki,
receives average annual precipitation of less than seven inches and therefore is termed hyper-arid. But
even then it is not a desert like the Sahara, as it has reasonable vegetative cover.
Some patches of sand in Sanghar district are barren and are termed dhain. The total geographical area of
Thar in Sindh is 48,000 square kilometres, out of which 25,000sq km is in Tharparkar and Umerkot
districts.
It is a potentially productive and vegetative sandy area which is turning into desert due to overexploitation, although it still has sufficient tree cover and shrubs, and if properly protected and managed
will remain productive.
A realistic approach is needed to make Thar less prone to disaster.
The sandy arid area with high wind velocity has indeed a fragile ecosystem. If its vegetative cover is
overexploited and marginal lands on the slopes of sandy dunes are brought under cultivation, the area will
turn into a barren desert.
The sandy arid area of Cholistan in Bahawalpur division of Punjab is also similar to Thar in its
geomorphology, but the desertification process is less as wind velocity is not as high as that of Thar, and
much of its area has been developed and brought under canal irrigation.
The sandy arid area of Rajasthan has been properly managed by the Indian government since 1953,
when the Central Arid Zone Research Institute was established. Unfortunately, no concerted efforts were
made to conserve and develop the potential of our portion of Thar through a scientific and institutional
approach and no government research and development institute has done any appreciable work in the
area.
The Sindh Arid Zone Development Authority, formed in 1985, was assigned multidisciplinary duties of all
the line departments of the Sindh government and due to its major role in civil works and services, it could
not carry out any sustainable development to ameliorate the suffering of Thar's people. The main
emphasis of Sazda should have been aimed at income-generating activities through livestock
development, silvopastoral development and desertification control. But the resources were wasted in civil
works.

Lack of honesty and commitment among the functionaries was also a major cause of its failure. Sazda
was wound up in 2003 after the implementation of the devolved local bodies district government system.
Yet despite Sazda's questionable role, reasonable achievements were made in the groundwater
investigation sector. The credit for this goes to late Abdul Khalique Sheikh, chief hydro-geologist of Sazda,
whose dedicated efforts made it possible to explore groundwater sources beyond the depth of some 300
metres.
The area is now approachable through metalled roads connecting all taluka headquarters and main
localities, while communication of information has also become faster and easier. This is why the
electronic media has been able to cover the Thar area.
The media has done well to highlight the sufferings of the people of Thar. But droughts and famines are
not new for the people of the region. Old-timers in the area are witness to the misery and death wrought
by the droughts and famines of 1951, 1968, 1969, 1987 and 1988. Similarly, the destruction and death in
the famines of 1899 and 1939 are also remembered in Thar and Rajasthan, when there was not a single
drop of rain throughout the years.
Those at the helm of affairs must adopt a realistic approach to make this area less prone to famines,
otherwise in the present global village and the age of free media the reputation of the government will be
jeopardized.
From my own experience of the area, I would suggest that the government should establish an
independent and autonomous institute of research and development to carry out research in agroforestry, range and livestock development, saline water use, fisheries, desertification control, ecology,
saline groundwater use for crops, rainwater harvesting and salt-resistant plants and grasses as lasting
solutions to Thar's problems.
All government departments should continue their usual activities in the area with better funding by the
state. The agriculture, forest and livestock departments should strengthen their extension services and
carry the benefits of the research results to farmers. Nothing is impossible if there is an honest approach
and dedication to find solutions to problems.

Planning Models and Why You Need Them


Planning Models: What Are They?

Planned Approach to Community Health


During the 1980s, the CDC, encouraged by the evidence that
community-based prevention programs were effective in reducing
coronary heart disease risk factors, developed a protocol that could be
locally applied to develop community-based health promotion
programs.
The Planned Approach to Community Health (PATCH) was designed as
a working partnership between the CDC, SHAs and communities to
focus resources and activities on health promotion.
There are five phases to the PATCH process:
1. Mobilizing the communityestablishing a strong core of
representative local support and participation in the process
2. Collecting and organizing datagathering and analyzing local
community opinion and health data for the purpose of identifying
health priorities
3. Choosing health prioritiessetting objectives and standards to
denote progress and success
4. Interventiondesign and implementation of multiple intervention
strategies to meet objectives
5. Evaluationcontinued monitoring of problems and intervention
strategies to evaluate progress and detect need for
change
The PATCH process shows how communities can use public health
surveillance to define the baseline of the health problems they face.

PRECEDE-PROCEED is perhaps the most widely recognized model. Developed and tested by Green and colleagues over a number of years, this model has been widely used and is well recognized thanks to the authors' many texts used in health education and public health courses. PRECEDE includes five phases: social, epidemiological, behavioral and environmental, educational and ecological, and administrative
assessment. The PROCEED phases include implementation,
followed by process, impact, and outcome evaluations.

The Multilevel Approach to Community Health (MATCH) was developed by the CDC in the form of intervention handbooks. It includes five phases:

1. health goal selection,
2. intervention planning,
3. development,
4. implementation, and
5. evaluation.

Each phase is broken down into steps. One strength of this model is
the explicit recognition of interventions that focus on the multiple
levels of individuals, organizations, and governments/communities.

Mobilizing for Action through Planning and Partnership (MAPP) has been developed by the National Association of County and City Health Officials (NACCHO) with CDC and Health Resources and Services Administration (HRSA) funding.

The MAPP model emphasizes the role of public health agencies in building community participation in planning and implementing effective, sustainable solutions to complex problems.

Nine MAPP steps include:

1. organizing for action,
2. developing objectives and establishing accountability,
3. developing action plans,
4. reviewing action plans for opportunities for coordination,
5. implementing and monitoring action plans,
6. preparing for evaluation activities,
7. focusing the evaluation,
8. gathering credible evidence and justifying conclusions, and lastly,
9. sharing lessons learned and celebrating successes.

This model builds on the Assessment Protocol for Excellence in Public Health (APEX-PH) that was introduced by NACCHO in 1991. The original
APEX model was especially useful for building the planning capacity
of a local health department as it prepares to work with the local
hospitals and community-based organizations. Additional APEX
steps include community engagement and completing the cycle of
implementation and evaluation. Many of the methods and lessons
from APEX have been subsumed by the MAPP model.

The Protocol for Assessing Community Excellence in Environmental Health (PACE EH) is a model which focuses on environmental health planning and was also developed by NACCHO.

The Community Tool Box is an expansive website developed by the Work Group on Health Promotion and Community Development at the University of Kansas. Developed as a resource for Healthy Communities projects, the Tool Box has been online since 1995. It includes a model for community health planning and development that is similar to MATCH and PRECEDE-PROCEED, but with an emphasis on organizational and leadership competencies needed to progress through the community health improvement cycle.

Resources are arranged to provide guidance for tasks necessary to promote community health and development. Essentially an online textbook,
sections include leadership, strategic planning, community
assessment, grant writing, and evaluation. A framework for community
health planning and improvement similar to other comprehensive
models described above is provided on this website. Tools
include step-by-step guidelines, case examples, checklists of points,
and training materials. Also of use is a section on best practices
with links to other knowledge bases that have collected information
on best practices and evidence-based practices for general community
health and focused areas such as HIV, chronic diseases, and substance
abuse.

A comprehensive health communication model popularized by the CDC is known as CDCynergy or Cynergy. Available on
CD-ROM, its emphasis is on understanding communication audiences,
segmentation techniques, and targeted communication strategies.
Similar to comprehensive planning models described, it
includes six phases, each with detailed steps. The CD-ROM includes
extensive examples and supporting material to assist in developing
targeted health communication campaigns. CDCynergy is by no
means the only comprehensive communications model. For a number
of years, the National Cancer Institute has promoted a model
entitled Making Health Communications Work.

Another example is the Social Marketing Assessment and Response Tool (SMART) model. These and other models have been developed based on
experiences in multiple settings. NACCHO has also created a
Public Health Communications toolkit to help local public health
agencies develop messages. The website also includes links to
promotional
materials that have been developed by other public health
departments.

Strategic Planning
Strategic planning implies that the planning process is significant. In concept, it is usually done by higher-level decision makers within the organization. The adjective strategic is often coupled with long term or long range to convey a sense of importance. The result of this planning will be setting the organization's overall directions and prioritizing major initiatives.

In concept, strategic planning is a periodic, information-driven, proactive, and systematic process that sets the overall business strategy of the organization for the years ahead. In reality, an organization's strategic decisions are often made by distant legislators or regulators in far-off bureaucracies. Often, organizational leaders can only plan their reaction to these decisions.

During the process, planners usually undertake what is commonly termed strengths, weaknesses, opportunities, and threats (SWOT) analysis: the assessment of the strengths and weaknesses of the organization as well as the threats and opportunities presented in the operating environment or market. Most texts are based on a competitive model and devote considerable effort to the assessment of competitors and what initiatives are likely to advance the planning organization's position at the expense of competing organizations.

Strategic planning models include provisions for developing action plans, performance information, accountability, and periodic evaluation.

Subsistence agriculture
Subsistence agriculture is self-sufficiency farming in which the farmers
focus on growing enough food to feed themselves and their families.
The typical subsistence farm has a range of crops and animals needed
by the family to feed and clothe themselves during the year. Planting
decisions are made principally with an eye toward what the family will
need during the coming year, and secondarily toward market prices.

Demographic dividend
Demographic dividend refers to a period, usually 20 to 30 years, when fertility rates fall due to significant reductions in child and infant mortality rates. This fall is often accompanied by an extension in average life expectancy that increases the portion of the population that is in the working age group. This cuts spending on dependents and spurs economic growth. As women and families realize that fewer children will die during infancy or childhood, they begin to have fewer children to reach their desired number of offspring, further reducing the proportion of non-productive dependents.
However, this drop in fertility rates is not immediate. The lag between the fall in mortality and the fall in fertility produces a generational population bulge that surges through society. For a period of time this bulge is a burden on society and increases the dependency ratio. Eventually this group begins to enter the productive labor force. With fertility rates continuing to fall and older generations having shorter life expectancies, the dependency ratio declines dramatically. This demographic shift initiates the demographic dividend. With fewer younger dependents, due to declining fertility and child mortality rates, fewer older dependents, due to the older generations having shorter life expectancies, and the largest segment of the population of productive working age, the dependency ratio declines dramatically, leading to the demographic dividend. Combined with effective public policies, this period of the demographic dividend can help facilitate more rapid economic growth and put less strain on families. This is also a time period when many women enter the labor force for the first time. In many countries this period has led to increasingly smaller families, rising income, and rising life expectancy rates. However, dramatic social changes can also occur during this time, such as increasing divorce rates, postponement of marriage, and single-person households.

Demographic dividend
The freeing up of resources for a country's economic development and
the future prosperity of its populace as it switches from an agrarian to
an industrial economy. In the initial stages of this transition, fertility
rates fall, leading to a labor force that is temporarily growing faster
than the population dependent on it. All else being equal, per capita
income grows more rapidly during this time too.

Demographic window is defined to be that period of time in a nation's demographic evolution when the proportion of the population of working age is particularly prominent. This occurs when the demographic architecture of a population becomes younger and the percentage of people able to work reaches its height. Typically, the demographic window of opportunity lasts for 30–40 years depending upon the country. Because of the mechanical link between fertility levels and age structures, the timing and duration of this period are closely associated with those of fertility decline: when birth rates fall, the age pyramid first shrinks with gradually lower proportions of young population (under 15s) and the dependency ratio decreases, as is happening (or happened) in various parts of East Asia over several decades. After a few decades, however, low fertility causes the population to get older and the growing proportion of elderly people inflates the dependency ratio again, as is observed in present-day Europe. The exact technical boundaries of the definition may vary. The UN Population Department has defined it as the period when the proportion of children and youth under 15 years falls below 30 per cent and the proportion of people 65 years and older is still below 15 per cent.

Four mechanisms for growth in the demographic dividend

During the course of the demographic dividend there are four mechanisms through which
the benefits are delivered. The first is the increased labor supply. However, the magnitude
of this benefit appears to be dependent on the ability of the economy to absorb and
productively employ the extra workers rather than be a pure demographic gift. The
second mechanism is the increase in savings. As the number of dependents decreases, individuals can save more. This increase in national savings rates increases the stock of capital in developing countries already facing shortages of capital and leads to higher productivity as the accumulated capital is invested. The third mechanism is human capital. Decreases in fertility rates result in healthier women and fewer economic
pressures at home. This also allows parents to invest more resources per child, leading to
better health and educational outcomes. The fourth mechanism for growth is the
increasing domestic demand brought about by the increasing GDP per capita and the
decreasing dependency ratio.
Low fertility initially leads to low youth dependency and a high ratio of working age to total population. However, as the relatively large working age cohort grows older, population aging sets in. The graph shows the ratio of working age to dependent population (those 15 to 64 years old, divided by those above or below this age range - the inverse of the dependency ratio) based on data and projections from the United Nations.
For most countries there is a strategic urgency to put in place policies which take advantage of the demographic dividend. This urgency stems from the relatively small window of opportunity countries have to plan for the demographic dividend when many in their population are still young, prior to entering the work force. During this short window of opportunity, countries traditionally try to promote investments which will help these young people be more productive during their working years. Failure to provide opportunities to the growing young population will result in rising unemployment and an increased risk of social upheaval.
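The dependency-ratio arithmetic described above is straightforward. The following is a minimal sketch in Python, using invented age-group counts (not real UN figures); all names and numbers are illustrative only.

# Hypothetical population counts in millions -- illustrative only.
under_15 = 30.0       # young dependents (ages 0-14)
working_age = 55.0    # ages 15-64
aged_65_plus = 8.0    # old dependents (ages 65 and over)

dependents = under_15 + aged_65_plus

# Total dependency ratio: dependents per 100 people of working age.
dependency_ratio = 100 * dependents / working_age

# The inverse quantity used in the text: working-age people per dependent.
support_ratio = working_age / dependents

print(f"dependency ratio = {dependency_ratio:.1f} dependents per 100 of working age")
print(f"working age to dependent ratio = {support_ratio:.2f}")

A falling dependency ratio (equivalently, a rising working-age-to-dependent ratio) is the numerical signature of the window described above.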

POPULATION PYRAMID/ AGE PYRAMID / AGE PICTURE DIAGRAM


A population pyramid, also called an age pyramid or age picture diagram, is a graphical
illustration that shows the distribution of various age groups in a population (typically
that of a country or region of the world), which forms the shape of a pyramid when the
population is growing. It is also used in ecology to determine the overall age distribution
of a population; an indication of the reproductive capabilities and likelihood of the
continuation of a species.
It typically consists of two back-to-back bar graphs, with the population plotted on the X-axis and age on the Y-axis, one showing the number of males and one showing the number of females in a particular population in five-year age groups (also called cohorts). Males are conventionally shown on the left and females on the right, and they may be measured by raw number or as a percentage of the total population.
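As an illustration of that back-to-back layout, here is a minimal sketch in Python using matplotlib (an assumed dependency); the cohorts and counts are invented, not census data.

import matplotlib.pyplot as plt

# Hypothetical five-year cohorts and counts in thousands -- illustrative only.
age_groups = ["0-4", "5-9", "10-14", "15-19", "20-24", "25-29", "30-34"]
males = [520, 500, 470, 430, 400, 360, 320]
females = [500, 485, 460, 425, 398, 362, 330]

fig, ax = plt.subplots()
# Males conventionally on the left: plot their counts as negative values.
ax.barh(age_groups, [-m for m in males], color="steelblue", label="Male")
ax.barh(age_groups, females, color="salmon", label="Female")

# Relabel the x-axis so both halves read as positive population counts.
ticks = ax.get_xticks()
ax.set_xticks(ticks)
ax.set_xticklabels([str(abs(int(t))) for t in ticks])
ax.set_xlabel("Population (thousands)")
ax.set_ylabel("Age group")
ax.legend()
plt.show()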
Population pyramids are often viewed as the most effective way to graphically depict the
age and sex distribution of a population, partly because of the very clear image these
pyramids present.
A great deal of information about the population broken down by age and sex can be read
from a population pyramid, and this can shed light on the extent of development and
other aspects of the population. A population pyramid also tells how many people of each
age range live in the area. There tends to be more females than males in the older age
groups, due to females' longer life expectancy.

Types of population pyramid


While all countries' population pyramids differ, four general types have
been identified by the fertility and mortality rates of a country.
Stationary pyramid
A population pyramid typical of countries with low fertility and low
mortality, very similar to a constrictive pyramid.

Expansive pyramid
A population pyramid that is very wide at the base, indicating high
birth and death rates.

Constrictive pyramid
A population pyramid that comes in at the bottom. The population is
generally older on average, as the country has long life expectancy, a
low death rate, but also a low birth rate. This pyramid is becoming more common, especially when immigrants are factored out, and is a typical pattern for a very developed country with a high level of education, easy access to and incentive to use birth control, good health care, and few negative environmental factors.

Demographic transition
Demographic transition (DT) refers to the transition from high birth and death rates to
low birth and death rates as a country develops from a pre-industrial to an industrialized
economic system. This is typically demonstrated through a demographic transition model
(DTM). The theory is based on an interpretation of demographic history developed in
1929 by the American demographer Warren Thompson (1887–1973). Thompson
observed changes, or transitions, in birth and death rates in industrialized societies over
the previous 200 years. Most developed countries are in stage 3 or 4 of the model; the
majority of developing countries have reached stage 2 or stage 3. The major (relative)
exceptions are some poor countries, mainly in sub-Saharan Africa and some Middle
Eastern countries, which are poor or affected by government policy or civil strife, notably
Pakistan, Palestinian Territories, Yemen and Afghanistan.
Although this model predicts ever decreasing fertility rates, recent data show that beyond
a certain level of development fertility rates increase again.
A correlation matching the demographic transition has been established; however, it is
not certain whether industrialization and higher incomes lead to lower population or if
lower populations lead to industrialization and higher incomes. In countries that are now
developed this demographic transition began in the 18th century and continues today. In
less developed countries, this demographic transition started later and is still at an earlier
stage

The transition involves four stages, or possibly five.

In stage one (pre-industrial society), death rates and birth rates are high and roughly in balance. All human populations are
believed to have had this balance until the late 18th century,
when this balance ended in Western Europe. In fact, growth rates
were less than 0.05% at least since the Agricultural Revolution
over 10,000 years ago. Birth and death rates both tend to be very
high in this stage. Because both rates are approximately in
balance, population growth is typically very slow in stage one.

In stage two, that of a developing country, the death rates drop rapidly due to improvements in food supply and sanitation, which
increase life spans and reduce disease. The improvements
specific to food supply typically include selective breeding and

crop rotation and farming techniques. Other improvements generally include access to technology, basic healthcare, and education. For example, numerous improvements in public health reduce mortality, especially childhood mortality. Prior to the mid-20th century, these improvements in public health were primarily
in the areas of food handling, water supply, sewage, and personal
hygiene. One of the variables often cited is the increase in female
literacy combined with public health education programs which
emerged in the late 19th and early 20th centuries. [6] In Europe,
the death rate decline started in the late 18th century in
northwestern Europe and spread to the south and east over
approximately the next 100 years. [6] Without a corresponding fall
in birth rates this produces an imbalance, and the countries in
this stage experience a large increase in population.

In stage three, birth rates fall due to access to contraception, increases in wages, urbanization, a reduction in subsistence
agriculture, an increase in the status and education of women, a
reduction in the value of children's work, an increase in parental
investment in the education of children and other social changes.
Population growth begins to level off. The birth rate decline in
developed countries started in the late 19th century in northern
Europe. While improvements in contraception do play a role in
birth rate decline, it should be noted that contraceptives were not
generally available nor widely used in the 19th century and as a
result likely did not play a significant role in the decline then. It is
important to note that birth rate decline is caused also by a
transition in values; not just because of the availability of
contraceptives.

During stage four there are both low birth rates and low death
rates. Birth rates may drop to well below replacement level as
has happened in countries like Germany, Italy, and Japan, leading
to a shrinking population, a threat to many industries that rely on
population growth. As the large group born during stage two
ages, it creates an economic burden on the shrinking working
population. Death rates may remain consistently low or increase
slightly due to increases in lifestyle diseases due to low exercise
levels and high obesity and an aging population in developed
countries. By the late 20th century, birth rates and death rates in
developed countries leveled off at lower rates.

Demographic gift
Demographic gift is a term used to describe the initially favorable effect of falling fertility rates on the age dependency ratio, the fraction of children and aged as compared to that of the working population.
Fertility declines in a population combined with falls in mortality rates (the so-called "demographic transition") produce a typical sequence of effects on age structures. The
child-dependency ratio (the ratio of children to those who support them) at first rises
somewhat due to more children surviving, then falls sharply as average family size
decreases. Later, the overall population ages rapidly, as currently seen in many developed
and rapidly developing nations. Between these two periods is a long interval of favorable
age distributions, known as the "demographic gift," with low and falling total dependency
ratios (including both children and aged persons).
The term was used by David Bloom and Jeffrey Williamson to signify the economic
benefits of a high ratio of working-age to dependent population during the demographic
transition. Bloom et al. introduced the term demographic dividend to emphasize the idea

that the effect is not automatic but must be earned by the presence of suitable economic
policies that allow a relatively large workforce to be productively employed.
The term has also been used by the Middle East Youth Initiative to describe the current
youth bulge in the Middle East and North Africa in which 15-29 year olds comprise
around 30% of the total population. It is believed that, through education and employment, the current youth population in the Middle East could fuel economic growth and development as young East Asians were able to for the Asian Tigers.

FECUNDITY
Fecundity, derived from the word fecund, generally refers to the ability to reproduce. In
demography, fecundity is the potential reproductive capacity of an individual or
population. In biology, the definition is more equivalent to fertility, or the actual
reproductive rate of an organism or population, measured by the number of gametes
(eggs), seed set, or asexual propagules. This difference is because demography considers
human fecundity which is often intentionally limited, while biology assumes that
organisms do not limit fertility. Fecundity is under both genetic and environmental
control, and is the major measure of fitness. Fecundation is another term for fertilization.
Super fecundity refers to an organism's ability to store another organism's sperm (after
copulation) and fertilize its own eggs from that store after a period of time, essentially
making it appear as though fertilization occurred without sperm (i.e. parthenogenesis).
Fecundity is important and well studied in the field of population ecology. Fecundity can
increase or decrease in a population according to current conditions and certain regulating
factors. For instance, in times of hardship for a population, such as a lack of food,
juvenile and eventually adult fecundity has been shown to decrease (i.e. due to a lack of
resources the juvenile individuals are unable to reproduce, eventually the adults will run
out of resources and reproduction will cease).
Fecundity has also been shown to increase in ungulates with relation to warmer weather.
In sexual evolutionary biology, especially in sexual selection, fecundity is contrasted to
reproductivity.
In obstetrics and gynecology, fecundability is the probability of being pregnant in a single
menstrual cycle, and fecundity is the probability of achieving a live birth within a single
cycle.

FERTILITY
Fertility is the natural capability to produce offspring. As a measure, "fertility rate" is the
number of offspring born per mating pair, individual or population. Fertility differs from
fecundity, which is defined as the potential for reproduction (influenced by gamete production, fertilization and carrying a pregnancy to term). A lack of fertility is infertility while a lack of fecundity would be called sterility.
Human fertility depends on factors of nutrition, sexual behavior, culture, instinct,
endocrinology, timing, economics, way of life, and emotions.
Infertility
Infertility primarily refers to the biological inability of a person to contribute to
conception. Infertility may also refer to the state of a woman who is unable to
carry a pregnancy to full term. There are many biological causes of infertility,
including some that medical intervention can treat
Period measures

Crude birth rate (CBR) - the number of live births in a given year per 1,000 people alive at the middle of that year. One disadvantage of this indicator is that it is influenced by the age structure of the population.

General fertility rate (GFR) - the number of births in a year divided by the number of women aged 15-44, times 1000. It focuses on the potential mothers only, and takes the age distribution into account.

Child-Woman Ratio (CWR) - the ratio of the number of children under 5 to the number of women 15-49, times 1000. It is especially useful in historical data as it does not require counting births. This measure is actually a hybrid, because it involves deaths as well as births. (That is, because of infant mortality some of the births are not included; and because of adult mortality, some of the women who gave birth are not counted either.) The first three measures are illustrated in the sketch after this list.

Coale's Index of Fertility - a special device used in historical research
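The first three period measures above reduce to simple ratios. A minimal sketch in Python, with invented counts standing in for real vital-registration data:

# Hypothetical counts for a single year -- illustrative only.
live_births = 21_000
mid_year_population = 1_000_000
women_15_44 = 230_000
children_under_5 = 95_000
women_15_49 = 260_000

# Crude birth rate: live births per 1,000 total mid-year population.
cbr = 1000 * live_births / mid_year_population

# General fertility rate: births per 1,000 women aged 15-44.
gfr = 1000 * live_births / women_15_44

# Child-woman ratio: children under 5 per 1,000 women aged 15-49.
cwr = 1000 * children_under_5 / women_15_49

print(f"CBR = {cbr:.1f}  GFR = {gfr:.1f}  CWR = {cwr:.1f}")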

Cohort measures

Age-specific fertility rate (ASFR) - the number of births in a year to women in a 5-year age group, divided by the number of all women in that age group, times 1000. The usual age groups are 10-14, 15-19, 20-24, etc.

Total fertility rate (TFR) - the total number of children a woman would bear during her lifetime if she were to experience the prevailing age-specific fertility rates of women. TFR equals the sum for all age groups of 5 times each ASFR rate (see the sketch after this list).

Gross Reproduction Rate (GRR) - the number of girl babies a synthetic cohort will have. It assumes that all of the baby girls will grow up and live to at least age 50.

Net Reproduction Rate (NRR) - the NRR starts with the GRR and adds the realistic assumption that some of the women will die before age 49; therefore they will not be alive to bear some of the potential babies that were counted in the GRR. NRR is always lower than GRR, but in countries where mortality is very low, almost all the baby girls grow up to be potential mothers, and the NRR is practically the same as GRR. In countries with high mortality, NRR can be as low as 70% of GRR. When NRR = 1.0, each generation of 1000 baby girls grows up and gives birth to exactly 1000 girls. When NRR is less than one, each generation is smaller than the previous one. When NRR is greater than 1 each generation is larger than the one before. NRR is a measure of the long-term future potential for growth, but it usually is different from the current population growth rate.
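The TFR, GRR and NRR definitions above translate directly into arithmetic over age-specific rates. A minimal sketch in Python; the ASFRs, the share of female births and the survival probabilities are all invented for illustration, not taken from any real life table.

# Hypothetical ASFRs (births per 1,000 women per year) by 5-year age group.
asfr = {"15-19": 40, "20-24": 150, "25-29": 160,
        "30-34": 100, "35-39": 50, "40-44": 15, "45-49": 3}

# TFR: sum over age groups of 5 times each ASFR, expressed per woman.
tfr = 5 * sum(asfr.values()) / 1000

# GRR: daughters only; assume a hypothetical 48.8% of births are girls.
proportion_female = 0.488
grr = tfr * proportion_female

# NRR: discount each age group's daughters by the (hypothetical) probability
# that a newborn girl survives to that age group.
survival = {"15-19": 0.97, "20-24": 0.96, "25-29": 0.95,
            "30-34": 0.94, "35-39": 0.93, "40-44": 0.92, "45-49": 0.90}
nrr = sum(5 * rate / 1000 * proportion_female * survival[group]
          for group, rate in asfr.items())

print(f"TFR = {tfr:.2f}  GRR = {grr:.2f}  NRR = {nrr:.2f}")

As the text notes, the NRR comes out below the GRR, and the gap widens as the assumed survival probabilities fall.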

Four sets of data with the same correlation of 0.816

The Pearson correlation coefficient indicates the strength of a linear relationship between two variables, but its value generally does not completely characterize their relationship [16]. In particular, if the conditional mean of Y given X, denoted E(Y|X), is not linear in X, the correlation coefficient will not fully determine the form of E(Y|X).
The image on the right shows scatterplots of Anscombe's quartet, a set of four different pairs of variables
created by Francis Anscombe.[17] The four y variables have the same mean (7.5), variance (4.12),
correlation (0.816) and regression line (y = 3 + 0.5x). However, as can be seen on the plots, the
distribution of the variables is very different. The first one (top left) seems to be distributed normally, and
corresponds to what one would expect when considering two variables correlated and following the
assumption of normality. The second one (top right) is not distributed normally; while an obvious
relationship between the two variables can be observed, it is not linear. In this case the Pearson
correlation coefficient does not indicate that there is an exact functional relationship: only the extent to
which that relationship can be approximated by a linear relationship. In the third case (bottom left), the
linear relationship is perfect, except for one outlier which exerts enough influence to lower the correlation
coefficient from 1 to 0.816. Finally, the fourth example (bottom right) shows another example when one
outlier is enough to produce a high correlation coefficient, even though the relationship between the two
variables is not linear.

These examples indicate that the correlation coefficient, as a summary statistic, cannot replace visual
examination of the data. Note that the examples are sometimes said to demonstrate that the Pearson
correlation assumes that the data follow a normal distribution, but this is not correct.[4]
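The point is easy to reproduce numerically. A minimal sketch in Python with NumPy (an assumed dependency), using the quartet's values as they are commonly tabulated:

import numpy as np

x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
quartet = {
    "I":   (x123, [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]),
    "II":  (x123, [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]),
    "III": (x123, [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]),
    "IV":  ([8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8],
            [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]),
}

for name, (x, y) in quartet.items():
    r = np.corrcoef(x, y)[0, 1]             # Pearson correlation coefficient
    slope, intercept = np.polyfit(x, y, 1)  # least-squares regression line
    print(f"Set {name}: r = {r:.3f}, fitted line y = {intercept:.2f} + {slope:.2f}x")

All four sets print r of roughly 0.816 and essentially the same fitted line, even though only the first scatterplot looks like a well-behaved linear relationship.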

Several sets of (x, y) points, with the Pearson correlation coefficient of x and y for each set. Note
that the correlation reflects the noisiness and direction of a linear relationship (top row), but not
the slope of that relationship (middle), nor many aspects of nonlinear relationships (bottom).
N.B.: the figure in the center has a slope of 0 but in that case the correlation coefficient is
undefined because the variance of Y is zero.

In statistics, an ecological correlation is a correlation between two variables that are group means, in contrast to a correlation between two variables that describe
individuals. For example, one might study the correlation between physical activity and
weight among sixth-grade children. A study at the individual level might make use of 100
children, then measure both physical activity and weight; the correlation between the
two variables would be at the individual level. By contrast, another study might make
use of 100 classes of sixth-grade students, then measure the mean physical activity and
the mean weight of each of the 100 classes. A correlation between these group means
would be an example of an ecological correlation.
Because a correlation describes the measured strength of a relationship, correlations at
the group level can be much higher than those at the individual level. Thinking both are
equal is an example of ecological fallacy.
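A minimal simulated sketch of that contrast in Python with NumPy (an assumed dependency); the classes, effect sizes and noise levels are all invented for illustration.

import numpy as np

rng = np.random.default_rng(0)
n_classes, n_pupils = 100, 30

# A shared class-level factor pushes both activity and weight in the same direction.
class_effect = rng.normal(0.0, 5.0, size=(n_classes, 1))
activity = rng.normal(60, 15, size=(n_classes, n_pupils)) + class_effect   # minutes per day
weight = 40 - 0.05 * activity + class_effect + rng.normal(0, 6, size=(n_classes, n_pupils))

# Individual-level correlation: every pupil is one observation.
r_individual = np.corrcoef(activity.ravel(), weight.ravel())[0, 1]

# Ecological correlation: each class contributes only its mean activity and mean weight.
r_ecological = np.corrcoef(activity.mean(axis=1), weight.mean(axis=1))[0, 1]

print(f"individual-level r = {r_individual:.2f}")
print(f"group-mean (ecological) r = {r_ecological:.2f}")

Because the shared class-level factor dominates the class means, the ecological correlation comes out much higher than the pooled individual-level correlation, which is exactly the gap the ecological fallacy warns against.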

The French paradox is a catchphrase, first used in the late 1980s, which summarizes
the apparently paradoxical epidemiological observation that French people have a
relatively low incidence of coronary heart disease (CHD), while having a diet relatively
rich in saturated fats,[1] in apparent contradiction to the widely held belief that the high
consumption of such fats is a risk factor for CHD. The paradox is that if the thesis linking
saturated fats to CHD is valid, the French ought to have a higher rate of CHD than
comparable countries where the per capita consumption of such fats is lower.
The French paradox implies two important possibilities. The first is that the hypothesis
linking saturated fats to CHD is not completely valid (or, at the extreme, is entirely
invalid). The second possibility is that the link between saturated fats and CHD is valid,
but that some additional factor in the French diet or lifestyle mitigates this risk, presumably with the implication that if this factor can be identified, it can be incorporated
into the diet and lifestyle of other countries, with the same lifesaving implications
observed in France. Both possibilities have generated considerable media interest, as
well as some scientific research.

The Israeli paradox is a catchphrase, first used in 1996, to summarize the apparently
paradoxical epidemiological observation that Israeli Jews have a relatively
high incidence of coronary heart disease (CHD), despite having a diet relatively low
in saturated fats, in apparent contradiction to the widely held belief that the high
consumption of such fats is a risk factor for CHD. The paradox is that if the thesis linking
saturated fats to CHD is valid, the Israelis ought to have a lower rate of CHD than
comparable countries where the per capita consumption of such fats is higher.
The observation of Israel's paradoxically high rate of CHD is one of a number of
paradoxical outcomes for which a literature now exists, regarding the thesis that a high
consumption of saturated fats ought to lead to an increase in CHD incidence, and that a
lower consumption ought to lead to the reverse outcome. The most famous of these
paradoxes is known as the "French paradox": France enjoys a relatively low incidence
of CHD despite a high per-capita consumption of saturated fat.
The Israeli paradox implies two important possibilities. The first is that the hypothesis
linking saturated fats to CHD is not completely valid (or, at the extreme, is entirely

invalid). The second possibility is that the link between saturated fats and CHD is valid,
but that some additional factor in the Israeli diet or lifestyle creates another CHD risk, presumably with the implication that if this factor can be identified, it can be isolated in
the diet and / or lifestyle of other countries, thereby allowing both the Israelis, and
others, to avoid that particular risk.

Stroke Belt or Stroke Alley is a name given to a region in the southeastern United
States that has been recognized by public health authorities for having an unusually
high incidence of stroke and other forms of cardiovascular disease. It is typically defined
as an 11-state region consisting of Alabama, Arkansas, Georgia, Indiana, Kentucky, Louisiana, Mississippi, North Carolina, South Carolina, Tennessee, and Virginia.
Although many possible causes for the high stroke incidence have been investigated,
the reasons for the phenomenon have not been determined.

Simpson's paradox, or the Yule–Simpson effect, is a paradox in probability and statistics, in which a trend that appears in different groups of data disappears when these groups are combined. It is sometimes given the impersonal title reversal paradox or amalgamation paradox.[1]
This result is often encountered in social-science and medical-science statistics, [2] and is
particularly confounding when frequency data are unduly given causal interpretations.
[3]

Simpson's Paradox disappears when causal relations are brought into consideration.

Many statisticians believe that the mainstream public should be informed of the counterintuitive results in statistics such as Simpson's paradox. [4][5]

The low birth-weight paradox is an apparently paradoxical observation relating to the birth weights and mortality rate of children born to tobacco-smoking mothers. Low birth-weight children born to smoking mothers have a lower infant mortality rate than the low birth-weight children of non-smokers. It is an example of Simpson's paradox.

Low birth weight paradox


The low birth weight paradox is an apparently paradoxical observation relating to the
birth weights and mortality of children born to tobacco smoking mothers. As a usual practice,
babies weighing less than a certain amount (which varies between different countries) have
been classified as having low birth weight. In a given population, babies with low birth weights
have had a significantly higher infant mortality rate than others. Normal birth weight infants of
smokers have about the same mortality rate as normal birth weight infants of non-smokers, and
low birth weight infants of smokers have a much lower mortality rate than low birth weight
infants of non-smokers, but infants of smokers overall have a much higher mortality rate than
infants of non-smokers. This is because many more infants of smokers are low birth weight, and
low birth weight babies have a much higher mortality rate than normal birth weight babies.[14]

Examples
Kidney stone treatment
This is a real-life example from a medical study[10] comparing the success rates of two
treatments for kidney stones.[11]
The table below shows the success rates and numbers of treatments for treatments involving
both small and large kidney stones, where Treatment A includes all open surgical procedures
and Treatment B is percutaneous nephrolithotomy (which involves only a small puncture). The
numbers in parentheses indicate the number of success cases over the total size of the group.
(For example, 93% equals 81 divided by 87.)

                  Small Stones              Large Stones              Both
Treatment A       Group 1: 93% (81/87)      Group 3: 73% (192/263)    78% (273/350)
Treatment B       Group 2: 87% (234/270)    Group 4: 69% (55/80)      83% (289/350)

The paradoxical conclusion is that treatment A is more effective when used on small stones, and
also when used on large stones, yet treatment B is more effective when considering both sizes
at the same time. In this example the "lurking" variable (or confounding variable) of the stone
size was not previously known to be important until its effects were included.
Which treatment is considered better is determined by an inequality between two ratios
(successes/total). The reversal of the inequality between the ratios, which creates Simpson's
paradox, happens because two effects occur together:
1. The sizes of the groups, which are combined when the lurking variable is ignored, are
very different. Doctors tend to give the severe cases (large stones) the better treatment
(A), and the milder cases (small stones) the inferior treatment (B). Therefore, the totals
are dominated by groups 3 and 2, and not by the two much smaller groups 1 and 4.
2. The lurking variable has a large effect on the ratios, i.e. the success rate is more strongly
influenced by the severity of the case than by the choice of treatment. Therefore, the
group of patients with large stones using treatment A (group 3) does worse than the
group with small stones, even if the latter used the inferior treatment B (group 2).
Based on these effects, the paradoxical result can be rephrased more intuitively as follows:
Treatment A, when applied to a patient population consisting mainly of patients with large
stones, is less successful than Treatment B applied to a patient population consisting mainly of
patients with small stones.
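The reversal can be verified directly from the counts in the table. A minimal sketch in Python with no external dependencies:

# Success counts and group sizes from the table above.
results = {
    ("A", "small"): (81, 87),    # Group 1
    ("A", "large"): (192, 263),  # Group 3
    ("B", "small"): (234, 270),  # Group 2
    ("B", "large"): (55, 80),    # Group 4
}

# Within each stone size, Treatment A has the higher success rate.
for size in ("small", "large"):
    rate_a = results[("A", size)][0] / results[("A", size)][1]
    rate_b = results[("B", size)][0] / results[("B", size)][1]
    print(f"{size} stones: A {rate_a:.0%} vs B {rate_b:.0%}")

# Combined across sizes, the ordering reverses.
for treatment in ("A", "B"):
    s = sum(results[(treatment, size)][0] for size in ("small", "large"))
    n = sum(results[(treatment, size)][1] for size in ("small", "large"))
    print(f"Treatment {treatment} overall: {s}/{n} = {s / n:.0%}")

The printout matches the table: A wins 93% to 87% on small stones and 73% to 69% on large stones, yet B wins 83% to 78% overall.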

June 2014 viva

Maternal mortality
Maternal morbidity
Zoonotic diseases; anthrax, workers
Rabies injection schedule
Poliomyelitis; border restrictions; SIAD technique per week two doses;
not to go again as it's a hard-to-reach area
Polio strategy
TB; DOT
Disability indicators

Correlation; r? value of correlation; positive value;


Survival rate
Time trend
Meta-analysis; graphs; interpretation
Forest graph

Epidemiology
Surveillance
Screening

Policy? Types of polio? Policy building components?


Policy; HRM; costing
Cost/ unit

Management graphs
Bar graphs
Components bar chart

Difference between developed and developing countries by means of :


GNP, GDP
Population growth rate
Life expectancy
Maternal and infant mortality rates

The Declaration of Helsinki is a set of ethical principles regarding human experimentation developed for the medical community by the World Medical Association (WMA). It is widely regarded as the cornerstone document on human research ethics.
It is not a legally binding instrument under international law, but instead draws its authority
from the degree to which it has been codified in, or influenced, national or regional legislation
and regulations. Its role was described by a Brazilian forum in 2000 in these words "Even though
the Declaration of Helsinki is the responsibility of the World Medical Association, the document
should be considered the property of all humanity".
Ethical Review Committee Terms of Reference
1.0 Preamble

The Ethical Review Committee (ERC) shall be concerned with ethical issues involved
in proposals for research on human subjects. The terms of reference have taken into
consideration recommendations of a sub-committee of the Bio-ethics Group of
Faculty of Health Sciences (FHS) and particularly the report of the Royal College of
Physicians of London (1996) titled "Guidelines on the Practice of Ethics Committees
in Medical Research Involving Human Subjects". The terms have been derived
mainly from principles and generalised for application to both bio-medical and social
science research. A deliberate attempt has been made to avoid detail, with the
expectation that experience will determine the need for revision and elaboration.
2.0 Terms of Reference

2.1
All research projects involving human subjects, whether as individuals or
communities, including the use of foetal material, embryos and tissues from the
recently dead, undertaken or supported by Aga Khan University (AKU) faculty, staff
or students, wherever conducted, shall be reviewed by the ERC before a study can
begin.
2.2
The duration of approval for a study shall be limited. Any change in conditions
that could affect the rights of subjects during a study must be approved for the
study to continue.
2.3
The Committee shall provide written guidelines on ethical considerations for
research involving humans and review them at least once in two years. The
guidelines shall be based on but not restricted to the following principles:

Respect for an individual's capacity to make reasoned decisions, and protection of those
whose capacity is impaired or who are in some way dependent or vulnerable.
The risks of the proposed research in respect of expected benefits, the research design
and competence of the investigators having been assessed.

A proposal must state the purpose of the research; the reasons for using humans as the
subjects; the nature and degree of all known risks to the subjects; and the means for
ensuring that the subjects' consent will be adequately informed and voluntary.

The subjects of research should be clearly aware of the nature of the research and their
position in respect of it.

Consent must be valid. The participants must be sufficiently informed and have
adequate time to decide without pressure. Consent must be obtained from the subjects,
preferably written.

Subjects must be able to easily withdraw from a research protocol without giving
reasons and without incurring any penalty or alteration in their relationship with providers
of services.

Further guidance should be obtained from publications, such as the World Medical
Association Declaration of Helsinki: Recommendations Guiding Medical Doctors in
Biomedical Research Involving Human Subjects (1989), consultation with experts and
other sources, according to need.

Specify procedures, including periodic appraisal of the progress of approved projects, for
ensuring that subjects of research are protected from harm, their confidentiality is
maintained, and their rights are respected.

2.4
The Committee shall report annually or more frequently, as necessary, to the
URC and the Chief Academic Officer.
2.5 Method of Working

2.5.1
The Committee will need substantial administrative and secretarial
assistance from the Research Office.
2.5.2
Authors of research proposals may be invited to attend meetings of the ERC
when their study is being reviewed.
2.5.3
Some business may be conducted by mail, but reasonably frequent
meetings are essential to allow a committee ethos to develop.
2.5.4
A quorum should include a layperson, a research oriented member who is
broadly familiar with the proposed field of study, and a member of each gender.
2.5.5 Decisions will be made normally by consensus.

2.5.6
The Chairman's approval may be given for studies that pose no ethical
problems. Such approvals shall be reported to the next meeting of ERC for
ratification.

2.5.7
Investigators are entitled to have an adverse decision reviewed, and to
make written and/or oral representations to the Committee.
2.5.8
The Committee may withdraw approval if it is not satisfied with the conduct
of the investigation.
2.5.9
The ERC should approve amendments to protocols that affect human
subjects.
2.5.10 Confidentiality of the Committee's proceedings should be preserved.

2.5.11
Members of the Committee should declare their own interests, for
example when testing of the product of a company of which the member is an
advisor.
2.5.12 Serious adverse events should be reported promptly to the Chair.

2.5.13
Members should not be paid; however, if honorarium is necessary, then it
should be modest.
2.6 Membership

2.6.1
Membership shall include representation from researchers, the professional
disciplines currently found in the University, discerning public, legal expertise, and
both genders.
2.6.2
The Chief Academic Officer shall appoint the chair and members, in
consultation with the URC.
2.6.3
The tenure of AKU members on ERC shall be 3 years which may be renewed
for one term.
2.6.4
The tenure of External Members on ERC would be of one year which may be
renewed for another term of one year at the discretion of the Chair ERC based on
the attendance and quality of input by the member. Further extension would require
URC's approval.
2.6.5
For faculty, it is essential to select persons who are not members of other
research related committees.
2.6.6
The total number of members should not exceed 15, including the chair. In
addition, there will be one adjunct member from AKU-East Africa and one from ISMC
who would be called upon as required by the Chair to give input for projects falling
into their respective areas of expertise.

Terms of reference
Terms of reference show how the scope will be defined, developed, and verified. They should
also provide a documented basis for making future decisions and for confirming or developing a
common understanding of the scope among stakeholders. To meet these criteria, success
factors, risks, and constraints should be treated as fundamental elements. Terms of reference are particularly important for project proposals.
Creating detailed terms of reference is critical, as they define the:

Vision, objectives, scope and deliverables (i.e. what has to be achieved)


Stakeholders, roles and responsibilities (i.e. who will take part in it)

Resource, financial and quality plans (i.e. how it will be achieved)

Work breakdown structure and schedule (i.e. when it will be achieved)

Health System Reform in Asia Conference


Selected presentations from the 2011 conference available to download
now!
Health System Reform in Asia is a new conference, in association with Social
Science & Medicine - the world's most cited social science journal. It is an
interdisciplinary conference focusing on the health system reforms Asian countries
have adopted, or are considering, during rapid economic, social, demographic and
epidemiologic change in the region.
Comparisons of reform experiences within and outside Asia are particularly
encouraged.
The conference welcomes empirical examinations of health outcomes, social,
economic and political analyses as well as theoretical and philosophical
contributions on these themes.
The Conference Committee welcomes submissions on any of the following topics:

Governance and regulation


Financing and payment arrangements

Service delivery, quality assurance and organisation of care

Knowledge management for policy; policy capacity and policy process

Monitoring, evaluating and researching the impact of reform

Health workforce development

Medical pluralism and mainstreaming of traditional healthcare

Health access and improvement, particularly for vulnerable populations, and equity impact

Aid management and the role of donors in health sector reform

Consumer and community participation

Use of Information Communication Technologies (ICTs); innovation and ehealth

Emerging ecological, social and epidemiological challenges (inc. chronic illness)

Risk reduction and illness prevention

Fertility is the natural capability to produce offspring. As a measure, "fertility rate" is the
number of offspring born per mating pair, individual or population. Fertility differs from
fecundity, which is defined as the potential for reproduction (influenced by gamete production,
fertilization and carrying a pregnancy to term). A lack of fertility is infertility while a
lack of fecundity would be called sterility.
Fecundity, derived from the word fecund, generally refers to the ability to reproduce. In
demography,[1][2] fecundity is the potential reproductive capacity of an individual or population.
In biology, the definition is more equivalent to fertility, or the actual reproductive rate of an
organism or population, measured by the number of gametes (eggs), seed set, or asexual
propagules. This difference is because demography considers human fecundity which is often
intentionally limited through contraception, while biology assumes that organisms do not limit
fertility. Fecundity is under both genetic and environmental control, and is the major measure of
fitness. Fecundation is another term for fertilization. Superfecundity refers to an organism's
ability to store another organism's sperm (after copulation) and fertilize its own eggs from that
store after a period of time, essentially making it appear as though fertilization occurred without
sperm (i.e. parthenogenesis).
Fecundity is important and well studied in the field of population ecology. Fecundity can
increase or decrease in a population according to current conditions and certain regulating
factors. For instance, in times of hardship for a population, such as a lack of food, juvenile and
eventually adult fecundity has been shown to decrease (i.e. due to a lack of resources the juvenile
individuals are unable to reproduce, eventually the adults will run out of resources and
reproduction will cease).
Demographic transition (DT) refers to the transition from high birth and death rates to low
birth and death rates as a country develops from a pre-industrial to an industrialized economic
system. This is typically demonstrated through a demographic transition model (DTM). The
theory is based on an interpretation of demographic history developed in 1929 by the American
demographer Warren Thompson (1887–1973).[1] Thompson observed changes, or transitions, in
birth and death rates in industrialized societies over the previous 200 years. Most developed
countries are in stage 3 or 4 of the model; the majority of developing countries have reached
stage 2 or stage 3. The major (relative) exceptions are some poor countries, mainly in sub-Saharan Africa and some Middle Eastern countries, which are poor or affected by government
policy or civil strife, notably Pakistan, Palestinian Territories, Yemen and Afghanistan.[2]
Population ageing is a phenomenon that occurs when the median age of a country or region
rises due to rising life expectancy and/or declining birth rates. There has been, initially in the
more economically developed countries (MEDCs) but also more recently in LEDCs (less
economically developed countries), an increase in life expectancy, which causes an ageing
population. This is the case for every country in the world except the 18 countries designated as
"demographic outliers" by the UN.[1][2] For the entirety of recorded human history, the world has
never seen as aged a population as currently exists globally.[3] The UN predicts the rate of
population ageing in the 21st century will exceed that of the previous century.[3] Countries vary
significantly in terms of the degree, and the pace, of these changes, and the UN expects

populations that began ageing later to have less time to adapt to the many implications of these
changes.[3]
Subsistence agriculture is self-sufficiency farming in which the farmers focus on growing
enough food to feed themselves and their families. The typical subsistence farm has a range of
crops and animals needed by the family to feed and clothe themselves during the year. Planting
decisions are made principally with an eye toward what the family will need during the coming
year, and secondarily toward market prices. Tony Waters [1] writes: "Subsistence peasants are
people who grow what they eat, build their own houses, and live without regularly making
purchases in the marketplace."
According to the Encyclopedia of International Development, the term demographic trap is
used by demographers "to describe the combination of high fertility (birth rates) and declining
mortality (death rates) in developing countries, resulting in a period of high population growth
rate (PGR)."[1] High fertility combined with declining mortality happens when a developing
country moves through the demographic transition of becoming developed.
During "stage 2" of the demographic transition, quality of health care improves and death rates
fall, but birth rates still remain high, resulting in a period of high population growth.[1] The term
"demographic trap" is used by some demographers to describe a situation where stage 2
persists because "falling living standards reinforce the prevailing high fertility, which in turn
reinforces the decline in living standards."[2] This results in more poverty, where people rely on
more children to provide them with economic security. Social scientist John Avery explains that
this results because the high birth rates and low death rates "lead to population growth so rapid
that the development that could have slowed population is impossible."[3]
Demographic dividend refers to a period, usually 20 to 30 years, when fertility rates fall due
to significant reductions in child and infant mortality rates. This fall is often accompanied by an
extension in average life expectancy that increases the portion of the population that is in the
working age-group. This cuts spending on dependents and spurs economic growth. As women
and families realize that fewer children will die during infancy or childhood, they will begin to
have fewer children to reach their desired number of offspring, further reducing the proportion of
non-productive dependents.
However, this drop in fertility rates is not immediate. The lag between the fall in mortality and the
later fall in fertility produces a generational population bulge that surges through society. For a period of time this bulge is a burden on
society and increases the dependency ratio. Eventually this group begins to enter the productive
labor force. With fewer younger dependents, due to declining fertility and child mortality rates,
fewer older dependents, due to the shorter life expectancies of older generations, and the largest
segment of the population of productive working age, the dependency ratio declines dramatically.
This demographic shift initiates the demographic dividend. Combined with effective public
policies, this period can help facilitate more rapid economic growth and put less strain on
families. This is also a time period when many
women enter the labor force for the first time.[1] In many countries this time period has led to

increasingly smaller families, rising income, and rising life expectancy rates.[2] However,
dramatic social changes can also occur during this time, such as increasing divorce rates,
postponement of marriage, and single-person households.[3]
The Human Development Index (HDI) is a composite statistic of life expectancy, education,
and income indices used to rank countries into four tiers of human development. It was created
by the Pakistani economist Mahbub ul Haq and the Indian economist Amartya Sen in 1990[1] and
was published by the United Nations Development Programme.[2]
In the 2010 Human Development Report a further Inequality-adjusted Human Development
Index (IHDI) was introduced. While the simple HDI remains useful, it stated that "the IHDI is
the actual level of human development (accounting for inequality)" and "the HDI can be viewed
as an index of "potential" human development (or the maximum IHDI that could be achieved if
there were no inequality)".[3]
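As an illustration of how a composite index of this kind is assembled, the sketch below combines three already-normalised dimension indices (health, education, income), each scaled between 0 and 1, using the geometric mean adopted in the post-2010 UNDP methodology. The function name and the index values are hypothetical.

# Minimal sketch of combining normalised dimension indices into an HDI-style
# composite, using the geometric mean of the post-2010 UNDP methodology.
# The three index values below are hypothetical, purely for illustration.

def hdi(health_index: float, education_index: float, income_index: float) -> float:
    """Geometric mean of three dimension indices, each expected in [0, 1]."""
    return (health_index * education_index * income_index) ** (1.0 / 3.0)

if __name__ == "__main__":
    example = hdi(health_index=0.85, education_index=0.70, income_index=0.75)
    print(f"Illustrative HDI: {example:.3f}")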
Demographic window is defined to be that period of time in a nation's demographic evolution
when the proportion of population of working age group is particularly prominent. This occurs
when the demographic architecture of a population becomes younger and the percentage of
people able to work reaches its height.[1] Typically, the demographic window of opportunity lasts
for 30–40 years, depending upon the country. Because of the mechanical link between fertility
levels and age structures, the timing and duration of this period is closely associated to those of
fertility decline: when birth rates fall, the age pyramid first shrinks with gradually lower
proportions of young population (under 15s) and the dependency ratio decreases as is happening
(or happened) in various parts of East Asia over several decades. After a few decades, low
fertility however causes the population to get older and the growing proportion of elderly people
again inflates the dependency ratio, as is observed in present-day Europe.
The exact technical boundaries of definition may vary. The UN Population Department has
defined it as period when the proportion of children and youth under 15 years falls below 30 per
cent and the proportion of people 65 years and older is still below 15 per cent.
Europe's demographic window lasted from 1950 to 2000. It began in China in 1990 and is
expected to last until 2015. India is expected to enter the demographic window in 2010, which
may last until the middle of the present century. Much of Africa will not enter the demographic
window until 2045 or later.
Societies who have entered the demographic window have smaller dependency ratio (ratio of
dependents to working-age population) and therefore the demographic potential for high
economic growth as favorable dependency ratios tend to boost savings and investments in human
capital. But this so-called "demographic bonus" (or demographic dividend) remains only a
potential advantage as low participation rates (for instance among women) or rampant
unemployment may limit the impact of favorable age structures.
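A minimal sketch, assuming hypothetical population counts by broad age group, of the UN-style check described above: it computes the under-15 and 65-plus shares and the dependency ratio, and flags whether the window criteria (under-15 share below 30 per cent and 65-plus share below 15 per cent) are met. The function name and the input figures are illustrative.

# Sketch of the demographic-window check described above, using the UN-style
# thresholds quoted in the text (under-15 share < 30%, 65+ share < 15%).
# The population counts are hypothetical, for illustration only.

def demographic_window(pop_0_14: float, pop_15_64: float, pop_65_plus: float) -> dict:
    total = pop_0_14 + pop_15_64 + pop_65_plus
    young_share = pop_0_14 / total
    old_share = pop_65_plus / total
    dependency_ratio = (pop_0_14 + pop_65_plus) / pop_15_64
    return {
        "young_share": young_share,
        "old_share": old_share,
        "dependency_ratio": dependency_ratio,
        "in_window": young_share < 0.30 and old_share < 0.15,
    }

if __name__ == "__main__":
    result = demographic_window(pop_0_14=28.0, pop_15_64=65.0, pop_65_plus=7.0)  # millions, hypothetical
    for key, value in result.items():
        print(f"{key}: {value}")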
In demography and medical geography, epidemiological transition is a phase of development
witnessed by a sudden and stark increase in population growth rates brought about by medical
innovation in disease or sickness therapy and treatment, followed by a re-leveling of population

growth from subsequent declines in fertility rates. "Epidemiological transition" accounts for the
replacement of infectious diseases by chronic diseases over time due to expanded public health
and sanitation.[1] This theory was originally posited by Abdel Omran in 1971.[2]
A Malthusian catastrophe (also known as Malthusian check) was originally foreseen to be a
forced return to subsistence-level conditions once population growth had outpaced agricultural
production.
Demographic economics or population economics is the application of economic analysis to
demography, the study of human populations, including size, growth, density, distribution, and
vital statistics.[1][2]
Human overpopulation occurs if the number of people in a group exceeds the carrying capacity
of the region occupied by the group. The term often refers to the relationship between the entire
human population and its environment: the Earth,[1] or to smaller geographical areas such as
countries. Overpopulation can result from an increase in births, a decline in mortality rates, an
increase in immigration, or an unsustainable biome and depletion of resources. It is possible for
very sparsely populated areas to be overpopulated if the area has a meager or non-existent
capability to sustain life (e.g. a desert). Quality of life issues, rather than sheer carrying capacity
or risk of starvation, are a basis to argue against continuing high human population growth.
Demographic gift is a term used to describe the initially favorable effect of falling fertility
rates on the age dependency ratio, the fraction of children and aged as compared to that of the
working population.
Overview
Fertility declines in a population, combined with falls in mortality rates (the so-called
"demographic transition"), produce a typical sequence of effects on age structures.
The child-dependency ratio (the ratio of children to those who support them) at first
rises somewhat due to more children surviving, then falls sharply as average family
size decreases. Later, the overall population ages rapidly, as currently seen in many
developed and rapidly developing nations. Between these two periods is a long
interval of favorable age distributions, known as the "demographic gift," with low
and falling total dependency ratios (including both children and aged persons).

Use of the term


The term was used by David Bloom and Jeffrey Williamson [1] to signify the economic benefits
of a high ratio of working-age to dependent population during the demographic transition. Bloom
et al.[2] introduced the term demographic dividend to emphasize the idea that the effect is not
automatic but must be earned by the presence of suitable economic policies that allow a
relatively large workforce to be productively employed.
The term has also been used by the Middle East Youth Initiative to describe the current youth
bulge in the Middle East and North Africa in which 15-29 year olds comprise around 30% of the

total population.[3] It is believed that, through education and employment, the current youth
population in the Middle East could fuel economic growth and development as young East
Asians did for the Asian Tigers.
The Preston curve is an empirical cross-sectional relationship between life expectancy and real
per capita income. It is named after Samuel H. Preston who described it in his article "The
Changing Relation between Mortality and Level of Economic Development" published in the
journal Population Studies in 1975.[1][2] Preston studied the relationship for the 1900s, 1930s and
the 1960s and found it held for each of the three decades. More recent work has updated this
research.[3]

[Figure: The Preston curve, using cross-country data for 2005. The x-axis shows GDP per capita in 2005 international dollars; the y-axis shows life expectancy at birth. Each dot represents a particular country.]

[Figure: Improvements in health technology shift the Preston curve upwards. In panel A, the new technology is equally applicable in all countries regardless of their level of income. In panel B, the new technology has a disproportionately larger effect in rich countries. In panel C, poorer countries benefit more.]
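The curve is typically drawn with life expectancy rising steeply at low incomes and flattening out at high incomes; a logarithmic functional form is one common way to summarise that shape. The sketch below shows how such a form could be fitted with scipy; the data arrays are placeholders rather than real observations, and the function name is illustrative.

# Sketch of fitting a logarithmic Preston-curve-style relationship
# (life expectancy = a + b * ln(GDP per capita)) with scipy.
# The arrays below are placeholders, NOT real cross-country data.

import numpy as np
from scipy.optimize import curve_fit

def preston_form(gdp_per_capita, a, b):
    """One common functional form used to summarise the Preston curve."""
    return a + b * np.log(gdp_per_capita)

# Placeholder (hypothetical) observations: GDP per capita and life expectancy.
gdp = np.array([1_000, 5_000, 10_000, 20_000, 40_000], dtype=float)
life_exp = np.array([55.0, 68.0, 72.0, 76.0, 80.0])

params, _ = curve_fit(preston_form, gdp, life_exp)
a_hat, b_hat = params
print(f"Fitted: life expectancy = {a_hat:.1f} + {b_hat:.2f} * ln(GDP per capita)")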

Waithood (a portmanteau of "wait" and "adulthood") is a period of stagnation in the lives of
young unemployed college graduates in the Middle East, India, North Africa, and elsewhere, particularly the MENA
region, described as "a kind of prolonged adolescence",[1] and "the bewildering time in which
large proportions of Middle Eastern youth spend their best years waiting. It is a phase in which
the difficulties youth face in each of these interrelated spheres of life result in a debilitating state
of helplessness and dependency. Waithood can be best understood by examining outcomes and
linkages across five different sectors: education, employment, housing, credit, and marriage."[2]
Waithood is applicable only to college educated people who are not compelled to settle in blue
collar jobs due to the support from family elders or resources. Waithood is considered to be a
difficult and unpleasant period in life; without work, young people are unable to progress in
other areas of their development, such as purchasing a home and getting married.[3][4]
Birth dearth is a neologism referring to falling fertility rates. In the late 1980s, the term was
used in the context of American and European society.[1] The use of the term has since been
expanded to include many other industrialized nations. It is often cited as a response to
overpopulation, but is not incompatible with it. The term was coined by Ben J. Wattenberg in his
1987 book of the same name.
Countries and geographic regions that are currently experiencing falling population include
Russia, Europe, Japan, and populations of people of these descents in other countries such as in
the United States.
Reclaimed water or recycled water, is former wastewater (sewage) that is treated to remove
solids and certain impurities, and used in sustainable landscaping irrigation or to recharge
groundwater aquifers. The purpose of these processes is sustainability and water conservation,
rather than discharging the treated water to surface waters such as rivers and oceans. In some
cases, recycled water can be used for streamflow augmentation to benefit ecosystems and
improve aesthetics.[1] One example of this is along Calera Creek in the City of Pacifica, CA. [2]
Reclaimed water, as defined by Levine and Asano, is "The end product of
wastewater reclamation that meets water quality requirements for biodegradable materials,
suspended matter and pathogens."[3] In more recent conventional use, the term refers to water that
is not treated as highly, in order to offer a way to conserve drinking water. This water is put to
uses such as agriculture and various industrial applications.
Cycled repeatedly through the planetary hydrosphere, all water on Earth is recycled water, but
the terms "recycled water" or "reclaimed water" typically mean wastewater sent from a home or
business through a pipeline system to a treatment facility, where it is treated to a level consistent
with its intended use. The water is then routed directly to a recycled water system for uses such
as irrigation or industrial cooling.
The recycling and recharging is often done by using the treated wastewater for designated
municipal sustainable gardening irrigation applications. In most locations, it is intended to only
be used for nonpotable uses, such as irrigation, dust control, and fire suppression.

There are examples of communities that have safely used recycled water for many years. Los
Angeles County's sanitation districts have provided treated wastewater for landscape irrigation in
parks and golf courses since 1929. The first reclaimed water facility in California was built at
San Francisco's Golden Gate Park in 1932. The Irvine Ranch Water District (IRWD) was the first
water district in California to receive an unrestricted use permit from the state for its recycled
water; such a permit means that water can be used for any purpose except drinking. IRWD
maintains one of the largest recycled water systems in the nation with more than 400 miles
serving more than 4,500 metered connections. The Irvine Ranch Water District and Orange
County Water District in Southern California are established leaders in recycled water. Further,
the Orange County Water District, located in Orange County, and in other locations throughout
the world such as Singapore, water is given more advanced treatments and is used indirectly for
drinking.[4]
Despite the availability of quite simple methods that incorporate the principles of water-sensitive
urban design (WSUD)[5] for easy recovery of stormwater runoff, there remains a common perception that
reclaimed water must involve sophisticated and technically complex treatment systems that attempt
to recover the most degraded types of sewage. Since this effort is supposedly driven by
sustainability factors, this type of implementation should be associated with point-source
solutions, where it is most economical to achieve the expected outcomes. Harvesting of stormwater
or rainwater can be an extremely simple alternative to the comparatively complex, as well as
energy- and chemical-intensive, recovery of more contaminated sewage.
Strategy (from Greek στρατηγία stratēgia, "art of troop leader; office of general, command,
generalship"[1]) is a high-level plan to achieve one or more goals under conditions of uncertainty.
Strategy is important because the resources available to achieve these goals are usually limited.
Henry Mintzberg from McGill University defined strategy as "a pattern in a stream of decisions"
to contrast with a view of strategy as planning,[2] while Max McKeown (2011) argues that
"strategy is about shaping the future" and is the human attempt to get to "desirable ends with
available means". Dr. Vladimir Kvint defines strategy as "a system of finding, formulating, and
developing a doctrine that will ensure long-term success if followed faithfully." [3]
HACCP Principles
Hazard Analysis Critical Control Points (HACCP) is a tool that can be
useful in the prevention of food safety hazards. While
extremely important, HACCP is only one part of a multi-component food safety system.
HACCP is not a stand-alone program. Other parts must include: good manufacturing
practices, sanitation standard operating procedures, and a
personal hygiene program.
Safety of the food supply is key to consumer confidence. In the past, periodic plant
inspections and sample testing have been used to ensure the quality and safety of
food products. Inspection and testing, however, are like a photographic snapshot.
They provide information about the product that is relevant only for the specific

time the product was inspected and tested. What happened before or after? That
information is not known! From a public health and safety point of view, these
traditional methods offer little protection or assurance.

New concepts have emerged which are far more promising for controlling food safety hazards
from production to consumption.
HACCP was introduced as a system to control safety as the product is manufactured, rather than
trying to detect problems by testing the finished product. This new system is based on assessing
the inherent hazards or risks in a particular product or process and designing a system to control
them. Specific points where the hazards can be controlled in the process are identified.
The HACCP system has been successfully applied in the food industry. The system fits in well
with modern quality and management techniques. It is especially compatible with the ISO 9000
quality assurance system and just in time delivery of ingredients. In this environment,
manufacturers are assured of receiving quality products matching their specifications. There is
little need for special receiving tests and usually time does not allow for extensive quality tests.
The general principles of HACCP are as follows:
Principle #1 Hazard Analysis

Hazards (biological, chemical, and physical) are conditions which may pose an unacceptable
health risk to the consumer. A flow diagram of the complete process is important in conducting
the hazard analysis. The significant hazards associated with each specific step of the
manufacturing process are listed. Preventive measures (temperature, pH, moisture level, etc.) to
control the hazards are also listed.
Principle #2 Identify Critical Control Points
Critical Control Points (CCP) are steps at which control can be applied and a food safety hazard
can be prevented, eliminated or reduced to acceptable levels. Examples would be cooking,
acidification or drying steps in a food process.
Principle #3 Establish Critical Limits
All CCP's must have preventive measures which are measurable! Critical limits are the
operational boundaries of the CCPs which control the food safety hazard(s). The criteria for the
critical limits are determined ahead of time in consultation with competent authorities. If the
critical limit criteria are not met, the process is "out of control", thus the food safety hazard(s) are
not being prevented, eliminated, or reduced to acceptable levels.
Principle #4 Monitor the CCP's
Monitoring is a planned sequence of measurements or observations to ensure the product or
process is in control (critical limits are being met). It allows processors to assess trends before a

loss of control occurs. Adjustments can be made while continuing the process. The monitoring
interval must be adequate to ensure reliable control of the process.
Principle #5 Establish Corrective Action
HACCP is intended to prevent product or process deviations. However, should loss of control
occur, there must be definite steps in place for disposition of the product and for correction of the
process. These must be pre-planned and written. If, for instance, a cooking step must result in a
product center temperature between 165°F and 175°F, and the temperature is 163°F, the
corrective action could require a second pass through the cooking step with an increase in the
temperature of the cooker.
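A minimal sketch of the monitoring and corrective-action logic for the cooking-step example above, using the 165°F to 175°F critical limits and the re-cook corrective action quoted in the text; the function and constant names are hypothetical.

# Sketch of monitoring a cooking-step CCP against the critical limits used in
# the example above (product centre temperature between 165 and 175 degrees F).
# Function and constant names are hypothetical illustrations.

LOWER_LIMIT_F = 165.0
UPPER_LIMIT_F = 175.0

def check_cooking_ccp(centre_temp_f: float) -> str:
    """Return the action indicated by a single monitoring measurement."""
    if LOWER_LIMIT_F <= centre_temp_f <= UPPER_LIMIT_F:
        return "within critical limits - continue process and record the reading"
    # Pre-planned corrective action from the text: a second pass through the
    # cooking step with an increased cooker temperature, plus a record entry.
    return "out of control - hold product, re-cook at higher temperature, record corrective action"

if __name__ == "__main__":
    print(check_cooking_ccp(163.0))  # the deviation used in the example above
    print(check_cooking_ccp(170.0))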
Principle #6 Record keeping
The HACCP system requires the preparation and maintenance of a written HACCP plan together
with other documentation. This must include all records generated during the monitoring of each
CCP and notations of corrective actions taken. Usually, the simplest record keeping system
possible to ensure effectiveness is the most desirable.
Principle #7 Verification
Verification has several steps. The scientific or technical validity of the hazard analysis and the
adequacy of the CCP's should be documented. Verification of the effectiveness of the HACCP
plan is also necessary. The system should be subject to periodic revalidation using independent
audits or other verification procedures.
HACCP offers continuous and systematic approaches to assure food safety. In light of recent
food safety related incidents, there is a renewed interest in HACCP from a regulatory point of
view. Both FDA and USDA are proposing umbrella regulations which will require HACCP plans
of industry. The industry will do well to adopt HACCP approaches to food safety whether or not
it is required.
HACCP is a Tool
HACCP is merely a tool and is not designed to be a stand-alone program. To be
effective other tools must include adherence to Good Manufacturing Practices, use
of Sanitation Standard Operating Procedures, and Personal Hygiene Programs.

The Seven HACCP Principles


Principle 1: Conduct a hazard analysis.

Plants determine the food safety hazards and identify the preventive measures
the plant can apply to control these hazards.

Principle 2: Identify critical control points.

A critical control point (CCP) is a point, step, or procedure in a food process at


which control can be applied and, as a result, a food safety hazard can be
prevented, eliminated, or reduced to an acceptable level. A food safety
hazard is any biological, chemical, or physical property that may cause a food
to be unsafe for human consumption.

Principle 3: Establish critical limits for each critical control point.

A critical limit is the maximum or minimum value to which a physical,


biological, or chemical hazard must be controlled at a critical control point to
prevent, eliminate, or reduce to an acceptable level.

Principle 4: Establish critical control point monitoring requirements.

Monitoring activities are necessary to ensure that the process is under control
at each critical control point. FSIS is requiring that each monitoring procedure
and its frequency be listed in the HACCP plan.

Principle 5: Establish corrective actions.

These are actions to be taken when monitoring indicates a deviation from an


established critical limit. The final rule requires a plant's HACCP plan to
identify the corrective actions to be taken if a critical limit is not met.
Corrective actions are intended to ensure that no product injurious to health
or otherwise adulterated as a result of the deviation enters commerce.

Principle 6: Establish record keeping procedures.

The HACCP regulation requires that all plants maintain certain documents,


including their hazard analysis and written HACCP plan, and records
documenting the monitoring of critical control points, critical limits,
verification activities, and the handling of processing deviations.

Principle 7: Establish procedures for verifying the HACCP system is working as intended.

Validation ensures that the plans do what they were designed to do; that is,
they are successful in ensuring the production of safe product. Plants will be
required to validate their own HACCP plans. FSIS will not approve HACCP
plans in advance, but will review them for conformance with the final rule.

Verification ensures the HACCP plan is adequate, that is, working as intended.
Verification procedures may include such activities as review of HACCP plans,
CCP records, critical limits and microbial sampling and analysis. FSIS is
requiring that the HACCP plan include verification tasks to be performed by
plant personnel. Verification tasks would also be performed by FSIS
inspectors. Both FSIS and industry will undertake microbial testing as one of
several verification activities to confirm control of the identified food safety
hazard.

PPP diagrams

What is Disinfection?

Before water treatment became common, waterborne diseases


could spread quickly through a population, killing or harming
hundreds of people. The table below shows some common, water-transmitted diseases as well as the organisms (pathogens) which
cause each disease.
Pathogen                      Disease caused

Bacteria:
  Bacillus anthracis          anthrax
  Escherichia coli            E. coli infection
  Mycobacterium tuberculosis  tuberculosis
  Salmonella                  salmonellosis, paratyphoid
  Vibrio cholerae             cholera

Viruses:
  Hepatitis A virus           hepatitis A
  Polio virus                 polio

Parasites:
  Cryptosporidium             cryptosporidiosis
  Giardia lamblia             giardiasis

The primary goal of water treatment is to ensure that the water is


safe to drink and does not contain any disease-causing
microorganisms. The best way to ensure pathogen-free drinking
water is to make sure that the pathogens never enter the water in
the first place. However, this may be a difficult matter in a surface
water supply which is fed by a large watershed. Most treatment
plants choose to remove or kill pathogens in water rather than to
ensure that the entire watershed is free of pathogens.
Pathogens can be removed from water through physical or chemical
processes. Sedimentation and filtration can remove a large
percentage of bacteria and other microorganisms from the water by
physical means. Storage can also kill a portion of the disease-causing bacteria in water.
Disinfection is the process of selectively destroying or inactivating
pathogenic organisms in water usually by chemical means.
Disinfection is different from sterilization, which is the complete
destruction of all organisms found in water and which is usually
expensive and unnecessary. Disinfection is a required part of the
water treatment process while sterilization is not.

Purpose
Chlorination is the application of chlorine to water to accomplish
some definite purpose. We will be concerned with the application of
chlorine for the purpose of disinfection, but you should be aware
that chlorination can also be used for taste and odor control, iron
and manganese removal, and to remove some gases such as
ammonia and hydrogen sulfide.
Chlorination is currently the most frequently used form of
disinfection in the water treatment field. However, other
disinfection processes have been developed.

Prechlorination and Postchlorination

Like several other water treatment processes, chlorination can be


used as a pretreatment process (prechlorination) or as part of the
primary treatment of water (postchlorination). Treatment usually
involves either postchlorination only or a combination of
prechlorination and postchlorination.
Prechlorination is the act of adding chlorine to the raw water. The
residual chlorine is useful in several stages of the treatment process
- aiding in coagulation, controlling algae problems in basins,
reducing odor problems, and controlling mudball formation. In
addition, the chlorine has a much longer contact time when added
at the beginning of the treatment process, so prechlorination
increases safety in disinfecting heavily contaminated water.
Postchlorination is the application of chlorine after water has been
treated but before the water reaches the distribution system. At this
stage, chlorination is meant to kill pathogens and to provide a
chlorine residual in the distribution system. Postchlorination is
nearly always part of the treatment process, either used in
combination with prechlorination or used as the sole disinfection
process.
Until the middle of the 1970s, water treatment plants typically used
both prechlorination and postchlorination. However, the longer
contact time provided by prechlorination allows the chlorine to react
with the organics in the water and produce carcinogenic substances
known as trihalomethanes. As a result of concerns over
trihalomethanes, prechlorination has become much less common in
the United States. Currently, prechlorination is only used in plants
where trihalomethane formation is not a problem.

Location in the Treatment Process

During prechlorination, chlorine is usually added to raw water after


screening and before flash mixing. Postchlorination, in contrast, is
often the last stage in the treatment process. After flowing through
the filter, water is chlorinated and then pumped to the clearwell to
allow a sufficient contact time for the chlorine to act. From the
clearwell, the water may be pumped into a large, outdoor storage
tank such as the one shown below. Finally, the water is released to
the customer.

Forms of Chlorine
Elemental Chlorine
Elemental chlorine is either liquid or gaseous in form.
In its liquid form, it must be under extreme pressure.
In its gaseous form, it is 2.5 times as heavy as air.

Liquid chlorine rapidly vaporizes to gas when unpressurized.


One volume of liquid yields about 450 volumes of gas.

Forms of Chlorine in Solution


There are two forms of chlorine in solution:
Hypochlorous Acid

The chemical symbol for hypochlorous acid is HOCl


HOCl retains the oxidizing and disinfecting property of chlorine.
Based on this principle, the disinfecting action of aqueous
chlorine solution occurs.

Hypochlorite Ion

The chemical symbol for hypochlorite is OCl-. The hypochlorite ion (OCl-) is not the same as the salts calcium
hypochlorite and sodium hypochlorite although the term is
commonly used for both the ion and the salts.

Liquid Chlorine
Liquid chlorine is a clear, amber colored liquid.
Common properties of chlorine are listed in the following table:

Vapor Pressure
Vapor pressure is a function of temperature and is independent of
volume. The gage pressure of a container with 1 pound of chlorine
will be essentially the same as if it contained 100 pounds, at the
same temperature conditions.
Vapor pressure increases as the temperature increases, as
demonstrated in the following figure:

Gaseous Chlorine
Gaseous chlorine is a greenish, yellow gas.
Common properties of gaseous chlorine are listed in the following
table:

Reactions in Aqueous Solution


Chlorine added to chemically pure water forms a mixture of
hypochlorous (HOCl) and hydrochloric (HCl) acids, as indicated in the
following chemical equation:
Cl2 + H2O → HOCl + H+ + Cl-

At ordinary temperatures, the reaction is essentially complete within


a few seconds.
Hypochlorous acid dissociates into hydrogen and hypochlorite ions
almost instantaneously:
HOCl ⇌ H+ + OCl-

The degree of dissociation is dependent on both pH and


temperature.

HOCl dissociates poorly at pH levels below 6; therefore,


predominately HOCl exists at relatively low pH levels.
At pH levels between 6 and 8.5, there is a sharp change from
undissociated HOCl to almost complete dissociation. At 20°C
above pH 7.5 and at 0°C above pH 7.8, hypochlorite ions (OCl-)
predominate.
OCl- exists almost exclusively above pH 9.5.

The normal pH of water supplies is within the range where chlorine may


exist as both hypochlorous acid and hypochlorite ion. This is
indicated in the following figure. HOCl is a stronger oxidant and

disinfectant than OCl-, which is why disinfection is more effective at


lower pHs.
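The pH dependence described above can be approximated with the acid-dissociation equilibrium of HOCl. The sketch below assumes a pKa of about 7.5 (roughly the value near 20°C, consistent with the statement that hypochlorite ions predominate above pH 7.5 at that temperature) and estimates the fraction of free chlorine present as HOCl at a few pH values.

# Sketch of the HOCl / OCl- speciation described above, assuming a single
# acid-dissociation equilibrium with pKa ~ 7.5 (approximate value near 20 C).
#   fraction_HOCl = 1 / (1 + 10**(pH - pKa))

PKA_HOCL = 7.5  # assumed value; it varies slightly with temperature

def fraction_hocl(ph: float, pka: float = PKA_HOCL) -> float:
    """Fraction of free chlorine present as hypochlorous acid at a given pH."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

if __name__ == "__main__":
    for ph in (6.0, 7.0, 7.5, 8.5, 9.5):
        print(f"pH {ph:>4}: {fraction_hocl(ph) * 100:5.1f}% HOCl")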

Chlorine Handling and Safety


Personnel Safety Protection
Basic Equipment
Forced air ventilation is required for all chlorine storage and feed
rooms.

Exhausts should be located near the floor since chlorine is


heavier than air.
Equipment should have capacity to replace all air in the room
within 3 minutes.
The switch should be located outside of the chlorine area.

There are two types of gas masks: a canister type with a full face
piece and a self-contained breathing apparatus.

The canister-type should be used only for short exposures and


for chlorine concentrations of less than 1%. It requires
sufficient oxygen (more than 16%).
The self-contained breathing apparatus is available for longer
exposures and higher chlorine concentrations. It is the
preferred means of respiratory protection.

Protective clothing.
Emergency showers and eye-wash stations.
Automatic leak detection.

First Aid
The following guidelines should be adhered to in the event of
exposure to chlorine.

Inhalation
Remove the injured party to an uncontaminated outdoor area. Use
appropriate respiratory equipment during rescue; do not become
another victim.

Check for breathing and pulse. If not breathing, give artificial


respiration. If breathing is difficult, have trained personnel
administer oxygen as soon as possible. If no pulse, perform CPR.
Call for medical assistance as soon as possible.
Check for other injuries.
Keep the injured party warm and at rest.

Skin Contact
Immediately shower with large quantities of water.
Remove protective clothing and equipment while in shower.
Flush skin with water for at least 5 minutes.
Call for medical assistance.
Keep affected area cool.

Eye Contact
Immediately shower with large quantities of water while holding
eyes open.
Call a physician immediately.
Transfer promptly to medical facility.

Ingestion
Do not induce vomiting.
Give large quantities of water.
Call physician immediately.
Transfer promptly to a medical facility.

Chlorine Leaks and Response


Potential Points of Chlorine Leaks
Leaks can occur anywhere in the pressurized supply, including
connections and piping joints, cylinders or containers and feed
equipment.

Leak Detection
The sense of smell can detect chlorine concentrations as low as 4
parts per million (ppm).
Portable and permanent automatic chlorine detection devices can
detect at concentrations of 1 ppm or less.
A rag saturated with strong ammonia solution will indicate leaks by
the presence of white fumes.

Leak Repair
In the event of a chlorine leak, the following guidelines should be
followed.
Activate the chlorine leak absorption system, if available.

The system uses alkaline solution to react with and absorb


chlorine.

Repair leaks immediately or they will become worse.

Repair work should be performed by properly trained operators


wearing proper safety equipment.
Always work in pairs during chlorine leak detection and repair.

All other persons should leave the danger area until conditions
are safe again.

If the leak is large, evacuate the area and obtain help from the
local fire company. They have self-contained breathing
equipment and can assist with evacuation efforts. The local
police can also assist in the event there are curious sightseers.
Keep in mind that emergency vehicles and vehicle engines
may quit operating due to a lack of oxygen.

If the leak is in the chlorine supply piping:

Close the container valve to isolate the leak.


Repair as required by tightening the packing gland nut for
leaks around valve stems, replacing the gasket for leaks at the
discharge valve outlet and/or using emergency repair kits for
leaks at fusible plugs and cylinder valves.

Clean, dry and test repair for leak prior to returning the system
to service.

If the leak is in the equipment:

Close the container valve to the equipment.


Continue to operate the equipment (without chlorine feed
supply) until all chlorine has been displaced.

Repair as required.

Clean, dry and test repair prior to returning system to service.

If the leak is in a cylinder or container:

Increase the feed rate if possible, and cool the tank to reduce
leak rate.
Turn, if possible, so that gas escapes rather than liquid. The
quantity of chlorine that escapes as gas is 1/15 that which
escapes as liquid through the same size hole.

Use the emergency repair kit appropriate to the container size:


Kit A for 100 and 150 pound cylinders
Kit B for 1 ton containers
Kit C for tank cars and tank trucks

Do not immerse a leaking container in water because the acid


formed will increase corrosion at the leak location and make
leak worse and gas will be released at water surface.

Call the supplier for instructions for returning leaking


containers or containers with leaking valves. DO NOT SHIP
leaking cylinders.

Other Chlorine Emergency Measures

Fire
Chlorine will not burn in air. It is a strong oxidizer and contact with
combustible materials may cause fire. When heated, chlorine is
dangerous and emits highly toxic fumes.
In the event of a fire caused by chlorine, the following fire fighting
measures should be adhered to:

Use appropriate extinguishing media for combustibles in the


area.
Move chlorine containers away from the fire source if possible.

Cool the container with water spray; however, do not apply


water to a leak.

Be sure to wear full protective equipment, including self-contained breathing equipment.

Risk Management Plan


An emergency plan for chlorine is essential and should include the
following:
Training of personnel.
Periodic training drills.
A list of assistance available in the event of an emergency. The
supplier's name, address and emergency telephone number should
be posted.

Quantities

Storage Requirements
Separate rooms for storage and feed facilities should be provided.
Storage and feed rooms need to be separate from other operating
areas.
Rooms should have an inspection window to permit equipment to be
viewed without entering the room.
All openings between rooms and the remainder of the plant need to
be sealed.
Storage for a 30 day supply should be available.

Types of Storage Containers


100 and 150 lb. Cylinders
Position and store vertically.
Restraint chains are necessary to prevent accidents

Ton Containers
Provide storage area with 2 ton capacity monorail or crane for
cylinder movement and placement.
Roller trunnions are necessary to properly position cylinders.
Cylinder valves must be positioned vertically. Gas flows from the top
valve and liquid flows from the bottom valve.

Tank Cars
Tank cars are generally only provided for the largest plants.
Rail siding is required.

Hypochlorites
Instead of using chlorine gas, some plants apply chlorine to water as
a hypochlorite, also known as bleach. Hypochlorites are less pure
than chlorine gas, which means that they are also less dangerous.
However, they have the major disadvantage that they decompose in
strength over time while in storage. Temperature, light, and

physical energy can all break down hypochlorites before they are
able to react with pathogens in water.
There are three types of hypochlorites - sodium hypochlorite,
calcium hypochlorite, and commercial bleach:

Sodium hypochlorite (NaOCl) comes in a liquid form which


contains up to 12% chlorine.
Calcium hypochlorite (Ca(OCl)2), also known as HTH, is a
solid which is mixed with water to form a hypochlorite
solution. Calcium hypochlorite is 65-70% concentrated.
Commercial bleach is the bleach which you buy in a grocery
store. The concentration of commercial bleach varies
depending on the brand - Clorox bleach is 5% chlorine while
some other brands are 3.5% concentrated.

Hypochlorites and bleaches work in the same general manner as


chlorine gas. They react with water and form the disinfectant
hypochlorous acid. The reactions of sodium hypochlorite and
calcium hypochlorite with water are shown below:

Calcium hypochlorite + Water → Hypochlorous Acid + Calcium Hydroxide
Ca(OCl)2 + 2 H2O → 2 HOCl + Ca(OH)2

Sodium hypochlorite + Water → Hypochlorous Acid + Sodium Hydroxide
NaOCl + H2O → HOCl + NaOH

In general, disinfection using chlorine gas and hypochlorites occurs


in the same manner. The differences lie in how the chlorine is fed
into the water and on handling and storage of the chlorine
compounds. In addition, the amount of each type of chlorine added
to water will vary since each compound has a different
concentration of chlorine.
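Because each product carries a different fraction of available chlorine (roughly 12% for sodium hypochlorite, 65% for calcium hypochlorite and about 5% for commercial bleach, per the figures above), the amount of product needed to deliver a given quantity of chlorine differs. The sketch below compares the three; the 10 lb/day chlorine requirement is a hypothetical figure used only for illustration.

# Sketch comparing how much of each hypochlorite product supplies a given
# amount of chlorine, using the approximate available-chlorine percentages
# quoted above. The 10 lb/day chlorine requirement is hypothetical.

AVAILABLE_CHLORINE = {
    "sodium hypochlorite (12%)": 0.12,
    "calcium hypochlorite / HTH (65%)": 0.65,
    "commercial bleach (5%)": 0.05,
}

def product_needed(chlorine_lb: float, available_fraction: float) -> float:
    """Pounds of product required to supply the given pounds of chlorine."""
    return chlorine_lb / available_fraction

if __name__ == "__main__":
    required_chlorine_lb = 10.0  # hypothetical daily requirement
    for name, fraction in AVAILABLE_CHLORINE.items():
        print(f"{name}: {product_needed(required_chlorine_lb, fraction):.1f} lb/day")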

Storage Facilities
Basic Facilities and Housing

Storage should be in a clean, cool, well ventilated area.


Interior rooms should be of fire-resistant construction and
isolated from other areas of the plant.

Storage facilities should be away from heat sources, flammable


substances and other compressed gasses.

Exterior storage is not recommended since containment of


emergency spills is not available. A spill would result in the free
release of chlorine gas.

If exterior storage has been provided, the areas should be


shielded from direct sunlight and protected from rain, ice and
snow.

In service containers (both on-line and in-reserve) should


be located inside where temperature can be controlled.

Cylinders should be moved inside sufficiently in advance of use


to allow temperature to stabilize.

Cylinder withdrawal rate must be considered if exterior storage


is used.

Decreased temperatures will decrease available withdrawal


rate.

Cylinder storage and the chemical feed area should be in


separate rooms. A window should be available to permit
operator to view the storage and feed rooms without entering.

Entry and Exit Requirements

Access should be from the exterior only. It should be designed


such that personnel can exit quickly under emergency
conditions.
Doors should open outward, be provided with panic hardware
and lead to an unobstructed outside area.

Heating

Storage and feed rooms should be heated when the outside


temperature falls below 50°F.
The recommended comfortable working temperature for
chlorine feed room is 65 to 70°F.
The temperature in the storage/supply room should be 5 to 10°F cooler.

Ventilation

Forced air ventilation should be provided.

Exterior switch or door interlock should be provided so that the


ventilation system can be started prior to entering area

Lighting

Storage and feed rooms should be well lit.

Chlorine Scrubbers
Description of Equipment

A chlorine scrubber is a type of equipment that is available to


neutralize liquid and gas chlorine spills.
In the event of a chlorine release, fresh air from outside is
introduced at the top of the storage room and chlorine is pulled
from the floor level through the unit. The chlorine scrubber
maintains negative pressure in the storage room during the
entire chlorine release event and exchanges chlorine in the
atmosphere with fresh air.

The chlorine scrubbing process neutralizes chlorine and vents


the inert gases into the atmosphere.

The size of the equipment is dependent on the size of the


chlorine containers used at the facility.

The equipment size is based on the release of the entire


contents of one container within 30 minutes and on Uniform
Fire Code Guidelines.

Description of Process
Two chlorine scrubbing processes are available: one uses a caustic
solution and the other uses solid media.

Caustic Solution Type


A caustic soda is used to neutralize the chlorine:
Cl2 + 2 NaOH → NaOCl + NaCl + H2O

This process produces sodium hypochlorite (NaOCl) and salt


(NaCl). It requires 1.13 pounds of NaOH to react with 1 pound
of chlorine (Cl2) and produces 1.05 pounds of NaOCl.
This process requires the removal of hypochlorite and
replacement with fresh caustic after use.
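The 1.13 pounds of NaOH per pound of chlorine quoted above follows from the 2:1 molar ratio in the reaction; a quick check using approximate molar masses is sketched below.

# Stoichiometry check for the caustic scrubber reaction above:
#   Cl2 + 2 NaOH -> NaOCl + NaCl + H2O
# Approximate molar masses (g/mol) are used.

M_CL2 = 70.9
M_NAOH = 40.0
M_NAOCL = 74.4

naoh_per_lb_cl2 = 2 * M_NAOH / M_CL2   # about 1.13 lb NaOH per lb Cl2
naocl_per_lb_cl2 = M_NAOCL / M_CL2     # about 1.05 lb NaOCl produced per lb Cl2

print(f"NaOH required per lb Cl2:  {naoh_per_lb_cl2:.2f} lb")
print(f"NaOCl produced per lb Cl2: {naocl_per_lb_cl2:.2f} lb")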

Solid Media Type

Uses a resin to absorb chlorine.


Chlorine remains on the media and air is discharged to the
atmosphere.

The media must be replaced after use.

Solid media is somewhat safer than the caustic solution


system, since the storage of caustic solution is not necessary.

Chlorination Mechanics and Terminology


An enteric virus is one that lives in the intestines.
Chlorine demand is the amount of chlorine required to react with
all the organic and inorganic material. In practice, the chlorine
demand is the difference between the amount of chlorine added and
the amount remaining after a given contact time. Some reactive
compounds have disinfecting properties and others do not.
Chlorine residual is the total of all compounds with disinfecting
properties and any remaining free chlorine.
Chlorine Residual (mg/l) = Combined Chlorine Forms (mg/l)
+ Free Chlorine (mg/l)
The residual should contain free chlorine since it has the highest
disinfecting ability.
The presence of measurable chlorine residual indicates that all
chemical reactions have been satisfied and that sufficient chlorine is
present to kill microorganisms.
Chlorine dose is the amount of chlorine needed to satisfy the
chlorine demand plus the amount of chlorine residual needed for
disinfection.
Chlorine Dose (mg/l) = Chlorine Demand (mg/l) + Chlorine
Residual (mg/l)
Breakpoint chlorination is the addition of chlorine until all
chlorine demand has been satisfied. It is used to determine how
much chlorine is required for disinfection.
The graph below shows what happens when chlorine (either chlorine
gas or a hypochlorite) is added to water. First (between points 1 and
2), the water reacts with reducing compounds in the water, such as
hydrogen sulfide. These compounds use up the chlorine, producing
no chlorine residual.

Next, between points 2 and 3, the chlorine reacts with organics and
ammonia naturally found in the water. Some combined chlorine
residual is formed - chloramines. Note that if chloramines were to
be used as the disinfecting agent, more ammonia would be added to
the water to react with the chlorine. The process would be stopped
at point 3. Using chloramine as the disinfecting agent results in
little trihalomethane production but causes taste and odor problems
since chloramines typically give a "swimming pool" odor to water.
In contrast, if hypochlorous acid is to be used as the chlorine
residual, then chlorine will be added past point 3. Between points 3
and 4, the chlorine will break down most of the chloramines in the
water, actually lowering the chlorine residual.
Finally, the water reaches the breakpoint, shown at point 4.
The breakpoint is the point at which the chlorine demand has been

totally satisfied - the chlorine has reacted with all reducing agents,
organics, and ammonia in the water. When more chlorine is added
past the breakpoint, the chlorine reacts with water and forms
hypochlorous acid in direct proportion to the amount of chlorine
added. This process, known as breakpoint chlorination, is the
most common form of chlorination, in which enough chlorine is
added to the water to bring it past the breakpoint and to create
some free chlorine residual.

Efficiency
Residual and Dosage
A variety of factors can influence disinfection efficiency when using
breakpoint chlorination or chloramines. One of the most important
of these is the concentration of chlorine residual in the water.
The chlorine residual in the clearwell should be at least 0.5 mg/L.
This residual, consisting of hypochlorous acid and/or chloramines,
must kill microorganisms already present in the water and must also
kill any pathogens which may enter the distribution system through
cross-connections or leakage. In order to ensure that the water is
free of microorganisms when it reaches the customer, the chlorine
residual should be about 0.2 mg/L at the extreme ends of the
distribution system. This residual in the distribution system will also
act to control microorganisms in the distribution system which
produce slimes, tastes, or odors.
Determining the correct dosage of chlorine to add to water will
depend on the quantity and type of substances in the water creating
a chlorine demand. The chlorine dose is calculated as follows:

Chlorine Dose = Chlorine Demand + Chlorine Residual

So, if the required chlorine residual is 0.5 mg/L and the chlorine
demand is known to be 2 mg/L, then 2.5 mg/L of chlorine will have
to be added to treat the water.
The chlorine demand will typically vary over time as the
characteristics of the water change. By testing the chlorine residual,
the operator can determine whether a sufficient dose of chlorine is
being added to treat the water. In a large system, chlorine must be
sampled every two hours at the plant and at various points in the
distribution system.
It is also important to understand the breakpoint curve when
changing chlorine dosages. If the water smells strongly of chlorine,
it may not mean that too much chlorine is being added. More likely,
chloramines are being produced, and more chlorine needs to be
added to pass the breakpoint.

Contact Time
Contact time is just as important as the chlorine residual in
determining the efficiency of chlorination. Contact time is the
amount of time which the chlorine has to react with the
microorganisms in the water, which will equal the time between the
moment when chlorine is added to the water and the moment when
that water is used by the customer. The longer the contact time,
the more efficient the disinfection process is. When using chlorine
for disinfection a minimum contact time of 30 minutes is required for
adequate disinfection.

The CT value is used as a measurement of the degree of pathogen


inactivation due to chlorination. The CT value is calculated as
follows:
CT = (Chlorine residual, mg/L) × (Contact time, minutes)

The CT is the Concentration multiplied by the Time. As the formula


suggests, a reduced chlorine residual can still provide adequate kill
of microorganisms if a longer contact time is provided; conversely, a
shorter contact time can be acceptable if a higher chlorine residual
is maintained.
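A short sketch of the CT calculation above, using the 0.5 mg/L clearwell residual and 30-minute minimum contact time quoted earlier in this section; the required CT against which such a value would be compared comes from the regulatory tables and is not reproduced here.

# Sketch of the CT calculation described above:
#   CT (mg*min/L) = chlorine residual (mg/L) * contact time (minutes)
# The residual and contact time follow figures quoted in the text.

def ct_value(residual_mg_per_l: float, contact_time_min: float) -> float:
    return residual_mg_per_l * contact_time_min

if __name__ == "__main__":
    print(f"CT = {ct_value(0.5, 30.0):.1f} mg*min/L")
    # A lower residual can be offset by a longer contact time (and vice versa):
    print(f"CT = {ct_value(0.25, 60.0):.1f} mg*min/L")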

Other Influencing Factors


Within the disinfection process, efficiency is influenced by the
chlorine residual, the type of chemical used for chlorination, the
contact time, the initial mixing of chlorine into the water, and the
location of chlorination within the treatment process. The most
efficient process will have a high chlorine residual, a long contact
time, and thorough mixing.
Characteristics of the water will also affect efficiency of chlorination.
As you will recall, at a high pH, the hypochlorous acid becomes
dissociated into the much less effective hypochlorite ion. So lower pH values
result in more efficient disinfection.
Temperature influences chlorination just as it does any other
chemical reaction. Warmer water can be treated more efficiently
since the reactions occur more quickly. At a lower water
temperature, longer contact times or higher concentrations of
chemicals must be used to ensure adequate disinfection.
Turbidity of the water influences disinfection primarily through
influencing the chlorine demand. Turbid water tends to contain
particles which react with chlorine, reducing the concentration of
chlorine residual which is formed. Since the turbidity of the water
depends to a large extent on upstream processes (coagulation,
flocculation, sedimentation, and filtration), changes in these
upstream processes will influence the efficiency of chlorination.
Turbidity is also influenced by the source water: groundwater
turbidity tends to change slowly or not at all, while the turbidity and
chlorine demand of surface water can change continuously, especially
during storms and the snow melt season.
Finally, and most intuitively, the number and type of microorganisms
in the water will influence chlorination efficiency. Since cyst-forming
microorganisms and viruses are very difficult to kill using
chlorination, the disinfection process will be less efficient if these
pathogens are found in the water.

Regulatory Requirements
Continuous disinfection is required of all public water systems.
For surface water supplies:

The disinfection process must achieve 99.9% (3 log)
inactivation of Giardia cysts and 99.99% (4 log) inactivation of
enteric viruses.

Log inactivation is defined as follows:

1 log inactivation = 90%
2 log inactivation = 99%
3 log inactivation = 99.9%
4 log inactivation = 99.99%
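The figures above follow the relationship percent inactivation =
(1 - 10^-n) x 100, where n is the number of log credits. A small
Python sketch of that conversion (illustrative only):

def percent_inactivation(log_credits):
    """Convert log inactivation credits to the percentage of organisms inactivated."""
    return (1 - 10 ** (-log_credits)) * 100

for n in (1, 2, 3, 4):
    print(n, "log =", round(percent_inactivation(n), 2), "%")  # 90.0, 99.0, 99.9, 99.99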

Chlorination equipment must be capable of maintaining a chlorine
residual which achieves a minimum of 1 log Giardia cyst inactivation
following filtration.
Contact time can be thought of as a residual disinfectant
concentration C in mg/L which is multiplied by a time T in minutes.
The time T is measured between the point of application of the
disinfectant and the measurement of the residual.
For groundwater supplies not under the influence of surface water
intrusion:

A minimum of 20 minutes of contact time must be provided.

For chlorine residual requirements:

The minimum free, combined, or chlorine dioxide residual entering
the distribution system must exceed 0.2 mg/l, and a residual of
0.02 mg/l must be maintained at the most distant points in the
system. Compliance must be determined by Contact Time (CT) factors
and measurement methods established by EPA. Refer to EPA's Guidance
Manual for Compliance with the Filtration and Disinfection
Requirements for Public Water Systems Using Surface Water Sources,
which establishes procedures and guidance for complying with the
EPA Surface Water Treatment Rule (SWTR).

The exact mechanism of chlorine disinfection is not fully known.
One theory is that chlorine directly destroys the bacterial cell.
Another theory is that chlorine inactivates the enzymes which
enable the cells to use food, thus starving the organisms.
Chlorine added to water containing organic and inorganic chemicals
reacts with these materials to form chlorine compounds.

Process Calculations
There are two basic chlorination process calculations: chlorine
dosage and chlorine demand.

Chlorine Dosage Calculation

To perform the calculation, you will need to know the amount of
chlorine being added and the amount of water being treated.

Chlorine Dosage (mg/l) = Chlorine Feed (lb/day) / [Flow (MGD) x 8.34 (lb/gal)]

Chlorine Demand Calculation

A sufficient amount of chlorine must be added so that the chlorine
demand is met and the desired chlorine residual is provided.

Chlorine Demand (mg/l) = Chlorine Dose (mg/l) - Chlorine Residual (mg/l)

Sample Calculations
Example 1.
The chlorinator at a water treatment plant operating at a flowrate of
1.0 million gallons per day is set to feed 20 pounds in a 24 hour
period. The chlorine residual in the finished water leaving the plant
after a 20 minute contact period is 0.5 mg/l. Calculate the chlorine
demand of the water.
Known: Flow, (mgd) = 1.0 MGD
Chlorinator setting = 20 pounds/day
Finished water chlorine residual = 0.5 mg/l
Find: Chlorine Dosage (mg/l) and Chlorine Demand (mg/l)

Step 1: Calculate chlorine dosage in mg/l

Chlorine Dose (mg/l) = Chlorine Feed (lb/day) / [Flow (MGD) x 8.34 (lb/gal)]
                     = [20 lb Cl/day] / [1.0 MGD x 8.34 lb/gal]
                     = 20 lb Cl/day / 8.34 million lb water/day
                     = 2.4 lb Cl/million lb water
                     = 2.4 parts per million (ppm)
                     = 2.4 mg/l

Step 2: Calculate chlorine demand in mg/l

Chlorine Demand (mg/l) = Chlorine Dose (mg/l) - Chlorine Residual (mg/l)
                       = 2.4 mg/l - 0.5 mg/l
                       = 1.9 mg/l
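The same arithmetic can be scripted. A minimal Python sketch that
reproduces Example 1 (the function names are illustrative; 8.34 lb/gal
is the standard weight of water used in the "pounds formula"):

LBS_PER_GALLON = 8.34  # weight of one gallon of water

def chlorine_dose_mg_per_l(feed_lb_per_day, flow_mgd):
    """Dose (mg/L) = feed (lb/day) / [flow (MGD) x 8.34 (lb/gal)]."""
    return feed_lb_per_day / (flow_mgd * LBS_PER_GALLON)

def chlorine_demand_mg_per_l(dose, residual):
    """Demand (mg/L) = dose (mg/L) - residual (mg/L)."""
    return dose - residual

dose = chlorine_dose_mg_per_l(feed_lb_per_day=20, flow_mgd=1.0)
demand = chlorine_demand_mg_per_l(dose, residual=0.5)
print(round(dose, 1), round(demand, 1))  # 2.4 1.9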

VIVA VOCE JUNE 2014


Correlation
Screening curves
Meta analysis
Graphs
Economic growth
Inflation graphs
Growth rate decreases vs inflation increases?
Which study design is good?
Survival rate.
Life table.....
How to compare life table, survival rate.
Relative Risk
Y-Y analysis
Forest Graph
Effective human resources management
Policy ?
Micro-planning
Macro-planning
Components of policy/ planning
Health care financing
Systematic reviews
Demographic transition
Demographic trap

Demographic fatigue
Demographic dividend (working population, productive group)
Occupational Zoonotic diseases
Anthrax (leather industry, skin problems etc)
Measles
Poliomyelitis
MCH to improve
MMR to reduce
IMR to reduce
Non-Parametric Tests

Biostatistics books: Kuzma; Betty Kirkwood; Land

Epidemiology: Leon Gordis

Correlation
Screening curves
Meta analysis
In statistics, a meta-analysis refers to methods that focus on contrasting and combining results
from different studies, in the hope of identifying patterns among study results, sources of
disagreement among those results, or other interesting relationships that may come to light in
the context of multiple studies.[1] In its simplest form, meta-analysis is normally done by
identification of a common measure of effect size. A weighted average of that common measure
is the output of a meta-analysis. The weighting is related to sample sizes within the individual
studies. More generally there are other differences between the studies that need to be allowed
for, but the general aim of a meta-analysis is to more powerfully estimate the true effect size as
opposed to a less precise effect size derived in a single study under a given single set of
assumptions and conditions. A meta-analysis therefore gives a thorough summary of several
studies that have been done on the same topic, and provides the reader with extensive
information on whether an effect exists and what size that effect has.
Meta analysis can be thought of as "conducting research about research."
Meta-analyses are often, but not always, important components of a systematic
review procedure. For instance, a meta-analysis may be conducted on several clinical trials of a
medical treatment, in an effort to obtain a better understanding of how well the treatment works.
Here it is convenient to follow the terminology used by the Cochrane Collaboration,[2] and use
"meta-analysis" to refer to statistical methods of combining evidence, leaving other aspects of
'research synthesis' or 'evidence synthesis', such as combining information from qualitative
studies, for the more general context of systematic reviews.
Meta-analysis forms part of a framework called estimation statistics which relies on effect
sizes, confidence intervals and precision planning to guide data analysis, and is an alternative
to null hypothesis significance testing.
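To make the "weighted average of a common effect measure" idea concrete, here is a minimal
fixed-effect, inverse-variance sketch in Python. The study estimates and standard errors are
hypothetical, and a real meta-analysis would normally use a dedicated package and also consider
a random-effects model:

import math

# Hypothetical per-study effect estimates (e.g. log odds ratios) and standard errors
effects = [0.20, 0.35, 0.10]
std_errors = [0.10, 0.15, 0.08]

# Fixed-effect inverse-variance weighting: w_i = 1 / se_i^2
weights = [1 / se ** 2 for se in std_errors]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print("pooled effect:", round(pooled, 3))
print("95% CI:", round(pooled - 1.96 * pooled_se, 3), "to", round(pooled + 1.96 * pooled_se, 3))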

Advantages of meta-analysis

Conceptually, a meta-analysis uses a statistical approach to combine the results from multiple
studies in an effort to increase power (over individual studies), improve estimates of the size of
the effect and/or to resolve uncertainty when reports disagree. Basically, it produces a weighted
average of the included study results, and this approach has several advantages:

Results can be generalized to a larger population.

The precision and accuracy of estimates can be improved as more data are used. This, in
turn, may increase the statistical power to detect an effect.

Inconsistency of results across studies can be quantified and analyzed. For instance,
does inconsistency arise from sampling error, or are study results (partially) influenced by
between-study heterogeneity?

Hypothesis testing can be applied to summary estimates.

Moderators can be included to explain variation between studies.

The presence of publication bias can be investigated.

Pitfalls
A meta-analysis of several small studies does not predict the results of a single large study.[9]
Some have argued that a weakness of the method is that sources of bias are not controlled
by the method: a good meta-analysis of badly designed studies will still result in bad statistics.[10]
This would mean that only methodologically sound studies should be included in a
meta-analysis, a practice called 'best evidence synthesis'.[10] Other meta-analysts would include
weaker studies, and add a study-level predictor variable that reflects the methodological quality
of the studies to examine the effect of study quality on the effect size.[11] However, others have
argued that a better approach is to preserve information about the variance in the study sample,
casting as wide a net as possible, and that methodological selection criteria introduce unwanted
subjectivity, defeating the purpose of the approach.[12]

Steps of meta-analysis

1. Formulation of the problem
2. Search of literature
3. Selection of studies ('incorporation criteria'):

   Based on quality criteria, e.g. the requirement of randomization and blinding in a clinical
   trial

   Selection of specific studies on a well-specified subject, e.g. the treatment of breast
   cancer

   Decide whether unpublished studies are included to avoid publication bias (file drawer
   problem)

4. Decide which dependent variables or summary measures are allowed. For instance:

   Differences (discrete data)

   Means (continuous data)

   Hedges' g is a popular summary measure for continuous data that is standardized in
   order to eliminate scale differences, but it incorporates an index of variation between
   groups:

       g = (x̄_t - x̄_c) / s*

   in which x̄_t is the treatment mean, x̄_c is the control mean, and (s*)² the pooled
   variance (a small computational sketch follows after step 5 below).
5. Selection of a meta-regression statistical model, e.g. simple regression, fixed-effect
   meta-regression or random-effect meta-regression. Meta-regression is a tool used in
   meta-analysis to examine the impact of moderator variables on study effect size using
   regression-based techniques. Meta-regression is more effective at this task than are
   standard regression techniques.
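As noted under step 4, here is a minimal Python sketch of a standardized mean difference in the
spirit of Hedges' g (this simple version omits the small-sample correction factor that the full
Hedges' g applies; all numbers are hypothetical):

import math

def pooled_sd(sd_t, n_t, sd_c, n_c):
    """Pooled standard deviation of the treatment and control groups."""
    return math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))

def standardized_mean_difference(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """g ~ (treatment mean - control mean) / pooled SD."""
    return (mean_t - mean_c) / pooled_sd(sd_t, n_t, sd_c, n_c)

# Hypothetical example: treatment group vs control group
print(round(standardized_mean_difference(12.0, 4.0, 30, 10.0, 4.5, 28), 2))  # about 0.47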

Meta-analysis combines the results of several studies.
What is meta-analysis?
Meta-analysis is the use of statistical methods to combine
results of individual studies. This allows us to make the best
use of all the information we have gathered in our systematic
review by increasing the power of the analysis. By statistically
combining the results of similar studies we can improve the
precision of our estimates of treatment effect, and assess
whether treatment effects are similar in similar situations. The
decision about whether or not the results of individual studies
are similar enough to be combined in a meta-analysis is
essential to the validity of the result, and will be covered in
the next module on heterogeneity. In this module we will look
at the process of combining studies and outline the various
methods available.
There are many approaches to meta-analysis. We have
discussed already that meta-analysis is not simply a matter of
adding up numbers of participants across studies (although
unfortunately some non-Cochrane reviews do this). This is the
'pooling participants' or 'treat-as-one-trial' method and we will
discuss it in a little more detail now.
Pooling participants (not a valid approach to meta-analysis).
This method effectively considers the participants in all the
studies as if they were part of one big study. Suppose the
studies are randomised controlled trials: we could look at
everyone who received the experimental intervention by
adding up the experimental group events and sample sizes
and compare them with everyone who received the control
intervention. This is a tempting way to 'pool results', but let's
demonstrate how it can produce the wrong answer.
A Cochrane review of trials of daycare for pre-school children
included the following two trials. For this example we will
focus on the outcome of whether a child was retained in the
same class after a period in either a daycare treatment group
or a non-daycare control group. In the first trial (Gray 1970),
the risk difference is -0.16, so daycare looks promising:
Gray 1970   Retained   Total   Risk    Risk difference
Daycare     19         36      0.528   -0.16
Control     13         19      0.684

In the second trial (Schweinhart 1993) the absolute risk of
being retained in the same class is considerably lower, but the
risk difference, while small, still lies on the side of a benefit of
daycare:

Schweinhart 1993   Retained   Total   Risk     Risk difference
Daycare            6          58      0.1034   -0.004
Control            7          65      0.1077
What would happen if we pooled all the children as if they
were part of a single trial?

(We don't add up patients across trials, and we don't use simple
averages to calculate a meta-analysis.)

Pooled results   Retained   Total   Risk    Risk difference
Daycare          25         94      0.266   +0.03   WRONG!
Control          20         84      0.238

It suddenly looks as if daycare may be harmful: the risk
difference is now bigger than 0. This is called Simpson's
paradox (or bias), and is why we don't pool participants
directly across studies. The first rule of meta-analysis is to
keep participants within each study grouped together, so as to
preserve the effects of randomisation and compare like with
like. Therefore, we must take the comparison of
risks within each of the two trials and somehow combine
these. In practice, this means we need to calculate a single
measure of treatment effect from each study before
contemplating meta-analysis. For example, for a dichotomous
outcome (like being retained in the same class) we calculate a
risk ratio, the risk difference or the odds ratio for each study
separately, then pool these estimates of effect across the
studies.
Simple average of treatment effects (not used in Cochrane
reviews)
If we obtain a treatment effect separately from each study,
what do we do with them in the meta-analysis? How about
taking the average? The average of the risk differences in the
two trials above is (-0.004 - 0.16) / 2 = - 0.082. This may
seem fair at first, but the second trial randomised more than
twice as many children as the first, so the contribution of each
randomised child in the first trial is diminished. It is not
uncommon for a meta-analysis to contain trials of vastly
different sizes. To give each one the same influence cannot be
reasonable. So we need a better method than a simple
average.
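The numbers in this example are easy to check in a few lines of
Python. The sketch below reproduces the within-trial risk
differences, the misleading "pool the participants" answer, and a
simple size-weighted average of the risk differences (Cochrane
reviews actually use inverse-variance or Mantel-Haenszel weighting
rather than this crude weighting by total sample size, but the
principle of keeping within-trial comparisons intact is the same):

# Retained / total counts from the two daycare trials quoted above
trials = {
    "Gray 1970":        {"daycare": (19, 36), "control": (13, 19)},
    "Schweinhart 1993": {"daycare": (6, 58),  "control": (7, 65)},
}

def risk_difference(daycare, control):
    (e1, n1), (e0, n0) = daycare, control
    return e1 / n1 - e0 / n0

# Within-trial risk differences (both favour daycare)
rds = {name: risk_difference(t["daycare"], t["control"]) for name, t in trials.items()}
print({k: round(v, 3) for k, v in rds.items()})  # {'Gray 1970': -0.156, 'Schweinhart 1993': -0.004}

# Naive 'pool the participants' approach reproduces the sign flip (Simpson's paradox)
e1 = sum(t["daycare"][0] for t in trials.values()); n1 = sum(t["daycare"][1] for t in trials.values())
e0 = sum(t["control"][0] for t in trials.values()); n0 = sum(t["control"][1] for t in trials.values())
print(round(e1 / n1 - e0 / n0, 3))               # +0.028 -- daycare now looks harmful

# A simple size-weighted average of the within-trial risk differences keeps the right sign
weights = {name: sum(n for _, n in t.values()) for name, t in trials.items()}
weighted = sum(weights[k] * rds[k] for k in trials) / sum(weights.values())
print(round(weighted, 3))                        # about -0.051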

Definition:
What is a meta-analysis? A meta-analysis is a type of research study in which the
researcher compiles numerous previously published studies on a particular research
question and re-analyzes the results to find the general trend for results across the
studies.
A meta-analysis is a useful tool because it can help overcome the problem of small
sample sizes in the original studies, and can help identify trends in an area of the
research literature that may not be evident by merely reading the published
studies.
Graphs
Economic growth
Definition of 'Economic Growth'

An increase in the capacity of an economy to produce goods and services,
compared from one period of time to another. Economic growth can be measured in
nominal terms, which include inflation, or in real terms, which are adjusted for
inflation. For comparing one country's economic growth to another, GDP or GNP per
capita should be used as these take into account population differences between
countries.

Increase in a country's productive capacity, as measured by comparing gross
national product (GNP) in a year with the GNP in the previous year.
Increase in the capital stock, advances in technology, and improvement in
the quality and level of literacy are considered to be the principal causes of
economic growth. In recent years, the idea of sustainable development has brought
in additional factors such as environmentally sound processes that must be taken
into account in growing an economy.

Economic growth is the increase in the market value of the goods and services produced by
an economy over time. It is conventionally measured as the percent rate of increase
in real gross domestic product, or real GDP.[1] Of more importance is the growth of the ratio of
GDP to population (GDP per capita), which is also called per capita income. An increase in per
capita income is referred to as intensive growth. GDP growth caused only by increases in
population or territory is called extensive growth.[2]
Growth is usually calculated in real terms, i.e., inflation-adjusted terms, to eliminate the
distorting effect of inflation on the price of goods produced. In economics, "economic growth" or
"economic growth theory" typically refers to growth of potential output, i.e., production at "full
employment".
As an area of study, economic growth is generally distinguished from development economics.
The former is primarily the study of how countries can advance their economies. The latter is
the study of the economic aspects of the development process in low-income countries. See
also Economic development.
Since economic growth is measured as the annual percent change of gross domestic product
(GDP), it has all the advantages and drawbacks of that measure. For example, GDP only
measures the market economy, which tends to overstate growth during the changeover from a
farming economy with household production.[3] An adjustment was made for food grown and
consumed on farms, but no correction was made for other household production. Also, there is
no allowance in GDP calculations for depletion of natural resources.

Pros:
Quality of life

Cons:
Resource depletion
Environmental impact
Global warming

Inflation graphs

Growth rate decreases vs inflation increases?


Inflation and Economic Growth

David Henderson explains:

The idea that an increase in economic growth leads to an increase in inflation
and that decreased growth reduces inflation is reflected endlessly in the media.
On April 28, for example, AP writer Rajesh Mahapatra claimed that high economic
growth of more than 8.5% annually in India since 2003 has spurred demand and
caused prices to rise. This makes no sense.

All other things being equal, an increase in economic growth must cause inflation to
drop, and a reduction in growth must cause inflation to rise. In his congressional
testimony yesterday, Federal Reserve chairman Ben Bernanke thankfully did not
state that the higher economic growth he expects will lead to higher inflation.
Although he didn't connect growth and inflation at all, Mr. Bernanke has long
understood that higher growth leads to lower inflation.

Here's why. Inflation, as the old saying goes, is caused by too much money
chasing too few goods. Just as more money means higher prices, fewer goods
also mean higher prices. The connection between the level of production and the
level of prices also holds for the rate of change of production (that is, the rate of
economic growth) and the rate of change of prices (that is, the inflation rate).

Some simple arithmetic will clarify. Start with the famous equation of exchange, MV
= Py, where M is the money supply; V is the velocity of money, that is, the speed
at which money circulates; P is the price level; and y is the real output of the
economy (real GDP). A version of this equation, incidentally, was on the license
plate of the late economist Milton Friedman, who made a large part of his academic
reputation by reviving, and giving evidence for, the role of money growth in causing
inflation.

If the growth rate of real GDP increases and the growth rates of M and V are held
constant, the growth rate of the price level must fall. But the growth rate of the
price level is just another term for the inflation rate; therefore, inflation must fall.

An increase in the rate of economic growth means more goods for money to
chase, which puts downward pressure on the inflation rate. If for example the
money supply grows at 7% a year and velocity is constant and if annual economic
growth is 3%, inflation must be 4% (more exactly, 3.9%). If, however, economic
growth rises to 4%, inflation falls to 3% (actually, 2.9%.)
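The arithmetic in the passage can be checked directly from the
equation of exchange written in growth-rate form, (1 + money
growth)(1 + velocity growth) = (1 + inflation)(1 + real growth). A
small Python sketch (illustrative only):

def implied_inflation(money_growth, velocity_growth, real_growth):
    """Solve MV = Py in growth-rate form for the inflation rate."""
    return (1 + money_growth) * (1 + velocity_growth) / (1 + real_growth) - 1

# 7% money growth, constant velocity, 3% vs 4% real growth
print(round(implied_inflation(0.07, 0.0, 0.03) * 100, 1))  # 3.9 (%)
print(round(implied_inflation(0.07, 0.0, 0.04) * 100, 1))  # 2.9 (%)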

The April numbers for the index of industrial production (IIP), released on Thursday,
brought some cheer on the growth front. The IIP grew by 3.4 per cent, its highest in
a long time. April, of course, was a month in which the entire country was deep in
electioneering. Therefore, some sort of stimulus from all the campaign spending
might have been reasonable to expect. The biggest beneficiary of this was the
category of "electrical machinery", which grew by over 66 per cent year on year,
reflecting all those campaign rallies, with their generators and audio equipment.
The other significant contributor to the growth in the overall index was electricity,
which grew by almost 12 per cent year on year, significantly higher than its growth
during 2013-14. Typically, a growth acceleration that relies heavily on one or two
sectoral surges does not have much staying power. It would require an across-the-board
show of resurgence to allow people to conclude that a sustainable recovery
was under way. That is clearly not happening yet. However, these numbers do
reinforce the perception that things are not getting worse as far as growth is
concerned.
Likewise, there was some room for relief on the inflation front. The consumer price
index, or CPI, numbers for May 2014 showed headline inflation declining slightly,
from 8.6 per cent in April to 8.3 per cent in May. The Central Statistical Office is now
separately reporting a sub-index labelled consumer food price index, or CFPI, which
provides some convenience to observers. The index itself, though, offers little cheer.
It came down modestly between April and May, largely explaining the decline in the
headline rate, but is still significantly above nine per cent. At a time when there are
concerns about the performance of the monsoon and the impact of that on food
prices, these numbers should be a major cause of worry for the government. Milk,
eggs, fish and meat, vegetables and fruit contributed to the persistence of food
inflation. But cereals are also kicking in, as they have been for the past couple of
years, and the government must use its large stocks of rice and wheat quickly to
dampen at least this source of food inflation. It would be unconscionable not to do
so when risks of a resurgence of inflation are high. The larger point on inflation,
though, is how stubborn the rate is despite sluggish growth and high interest rates.
The limitations of monetary policy are being repeatedly underscored.
Against this backdrop, the government's prioritisation of its fight against inflation is
an extremely important development. It has to move quickly from intent to action
on a variety of reforms, from procurement policy to subsidies and to investment in
rural infrastructure. Many of these will generate benefits only over the medium
term. So those expecting a growth stimulus from the Reserve Bank of India any time
soon are bound to be in for a disappointment. Even so, room for optimism should
come from the fact that this government does have the capacity to design and
execute long-term strategies with complete credibility. The simple equation that it
needs to keep in mind is that inflation will not subside unless food prices moderate
and growth will not recover unless inflation subsides.

Which study design is good


The Best Study Design For Dummies
When I had those tired looks again, my mother-in-law recommended coenzyme Q,
which research had proven to have wondrous effects on tiredness. Indeed many
sites and magazines advocate this natural energy-producing nutrient which
mobilizes your mitochondria for cellular energy! Another time she asked me if I
thought komkommerslank (cucumber pills for slimming) would work to lose some
extra weight. She took my NO for granted.
It is often difficult to explain to people that not all research is equally good, and that
outcomes are not always equally significant (both statistically and clinically). It is
even more difficult to understand levels of evidence and why we should even
care. Pharmaceutical Industries (especially the supplement-selling ones) take
advantage of this ignorance and are very successful in selling their stories and pills.
If properly conducted, the Randomized Controlled Trial (RCT) is the best study design to
examine the clinical efficacy of health interventions. An RCT is an
experimental study where individuals who are similar at the beginning are randomly
allocated to two or more treatment groups and the outcomes of the groups are
compared after sufficient follow-up time. However an RCT may not always be
feasible, because it may not be ethical or desirable to randomize people or to
expose them to certain interventions.
Observational studies provide weaker empirical evidence, because the allocation
of factors is not under control of the investigator, but just happen or are chosen
(e.g. smoking). Of the observational studies, cohort studies provide stronger
evidence than case-control studies, because in cohort studies factors are
measured before the outcome, whereas in case-control studies factors are
measured after the outcome.
Most people find such a description of study types and levels of evidence too
theoretical and not appealing.

Last year I was challenged to talk about how doctors search medical
information (central theme = Google) for, and here it comes, the Society of
History and ICT.
To explain to the audience why it is important for clinicians to find the best evidence
and how methodological filters can be used to sift through the overwhelming
amount of information in, for instance, PubMed, I had to introduce RCTs and the
levels of evidence. To explain it to them I used an example that struck me when I
first read about it.
I showed them the following slide :

And clarified: Beta-carotene is a vitamin in carrots and many other vegetables, but
you can also buy it in pure form as pills. There is reason to believe that beta-carotene
might help to prevent lung cancer in cigarette smokers. How do you think
you can find out whether beta-carotene will have this effect?

Suppose you have two neighbors, both heavy smokers of the same age, both
males. The neighbor who doesn't eat many vegetables gets lung cancer, but the
neighbor who eats a lot of vegetables and is fond of carrots doesn't. Do you think
this provides good evidence that beta-carotene prevents lung cancer?
There is laughter in the room, so they don't believe in n=1 experiments/case
reports. (Still, how many people think smoking does not necessarily do any
harm because their chain-smoking father reached his nineties in good health?)
I show them the following slide with the lowest box only.

O.k. What about this study? I've a group of lung cancer patients,
who smoke(d) heavily. I ask them to fill in a questionnaire about their eating habits
in the past and take a blood sample, and I do the same with a similar group of
smokers without cancer (controls). Analysis shows that smokers developing lung
cancer eat much fewer beta-carotene-containing vegetables and have lower
blood levels of beta-carotene than the smokers not developing cancer. Does this
mean that beta-carotene is preventing lung cancer?
Humming in the audience, till one man says: perhaps some people don't remember
exactly what they eat, and then several people object that it is just an association
and you do not yet know whether beta-carotene really causes this. Right! I show
the box patient-control studies.

Then consider this study design. I follow a large cohort of healthy heavy
smokers and look at their eating habits (including use of supplements) and take
regular blood samples. After a long follow-up some heavy smokers develop lung
cancer whereas others don't. Now it turns out that the group that did not develop
lung cancer had significantly more beta-carotene in their blood and ate larger
amounts of beta-carotene-containing food. What do you think about that then?
Now the room is a bit quiet, there is some hesitation. Then someone says: well it is
more convincing, and finally the chair says: but it may still not be the carrots, but
something else in their food, or they may just have other healthy living habits
(including eating carrots). Cohort study appears on the slide (what a perfect
audience!)

O.k. you're not convinced that these study designs give conclusive evidence.
How could we then establish that beta-carotene lowers the risk of lung cancer in
heavy smokers? Suppose you really wanted to know, how do you set up such a
study?
Grinning. Someone says by giving half of the smokers beta-carotene and the other
half nothing. Or a placebo, someone else says. Right! Randomized Controlled
Trial is on top of the slide. And there is not much room left for another box, so we
are there. I only add that the best way to do it is to do it double blinded.
Then I reveal that all this research has really been done. There have been numerous
observational studies (case-control as well as cohort studies) showing a consistent
negative correlation between the intake of beta-carotene and the development of
lung cancer in heavy smokers. The same has been shown for vitamin E.
Knowing that, I asked the public: Would you as a heavy smoker participate in a
trial where you are randomly assigned to one of the following groups: 1. beta-carotene,
2. vitamin E, 3. both or 4. neither vitamin (placebo)?
The recruitment fails. Some people say they don't believe in supplements, others
say that it would be far more effective if smokers quit smoking (laughter). Just 2
individuals said they would at least consider it. But they thought there was a snag in
it and they were right. Such studies have been done, and did not give the expected
positive results.
In the first large RCT (appr. 30,000 male smokers!), the ATBC Cancer Prevention
Study, beta-carotene rather increased the incidence of lung cancer by 18 percent
and overall mortality by 8 percent (although harmful effects faded after men
stopped taking the pills). Similar results were obtained in the CARET study, but not
in a 3rd RCT, the Physicians' Health Trial, the only difference being that the latter
trial was performed with both smokers and non-smokers.
It is now generally thought that cigarette smoke causes beta-carotene to break down
into detrimental products, a process that can be halted by other anti-oxidants
(normally present in food). Whether vitamins act positively (anti-oxidant) or
negatively (pro-oxidant) depends very much on the dose and the situation and on
whether there is a shortage of such supplements or not.
I found that this way of explaining study designs to well-educated laymen was very
effective and fun!
The take-home message is that no matter how reproducibly the observational
studies seem to indicate a certain effect, better evidence is obtained by randomized
controlled trials. It also shows that scientists should be very prudent about translating
observational findings directly into particular lifestyle advice.
On the other hand, I wonder whether all hypotheses have to be tested in a costly
RCT (the costs for the ATBC trial were $46 million). Shouldn't there be very, very
solid grounds to start a prevention study with dietary supplements in healthy
individuals? Aren't there any dangers? Personally I think we should be very
restrictive about these chemopreventive studies. Till now most chemopreventive
studies have not met the high expectations, anyway.
And what about coenzyme Q and komkommerslank? Besides that I do not expect
the evidence to be convincing, tiredness can obviously be best combated by rest,
and I already eat enough cucumbers. ;)
To be continued

Ecological studies are studies of risk-modifying factors on health or other outcomes based on
populations defined either geographically or temporally. Both risk-modifying factors and
outcomes are averaged for the populations in each geographical or temporal unit and then
compared using standard statistical methods.
Ecological studies have often found links between risk-modifying factors and health outcomes
well in advance of other epidemiological or laboratory approaches. Several examples are given
here.
The study by John Snow regarding a cholera outbreak in London is considered the first
ecological study to solve a health issue. He used a map of deaths from cholera to determine
that the source of the cholera was a pump on Broad Street. He had the pump handle removed
in 1854 and people stopped dying there [Newsom, 2006]. It was only when Robert
Koch discovered bacteria years later that the mechanism of cholera transmission was
understood.[1]
Dietary risk factors for cancer have also been studied using both geographical and temporal
ecological studies. Multi-country ecological studies of cancer incidence and mortality rates with
respect to national diets have shown that some dietary factors such as animal products (meat,
milk, fish and eggs), added sweeteners/sugar, and some fats appear to be risk factors for many
types of cancer, while cereals/grains and vegetable products as a whole appear to be risk
reduction factors for many types of cancer.[2][3] Temporal changes in Japan in the types of cancer
common in Western developed countries have been linked to the nutrition transition to the
Western diet.[4]
An important advancement in the understanding of risk-modifying factors for cancer was made
by examining maps of cancer mortality rates. The map of colon cancer mortality rates in the
United States was used by the brothers Cedric and Frank C. Garland to propose the hypothesis
that solar ultraviolet B (UVB) radiation, through vitamin D production, reduced the risk of cancer
(the UVB-vitamin D-cancer hypothesis).[5] Since then many ecological studies have been
performed relating the reduction of incidence or mortality rates of over 20 types of cancer to
lower solar UVB doses.[6]

Links between diet and Alzheimer's disease have been studied using both geographical and
temporal ecological studies. The first paper linking diet to risk of Alzheimer's disease was a
multicountry ecological study published in 1997.[7] It used prevalence of Alzheimer's disease in
11 countries along with dietary supply factors, finding that total fat and total energy (caloric)
supply were strongly correlated with prevalence, while fish and cereals/grains were inversely
correlated (i.e., protective). Diet is now considered an important risk-modifying factor for
Alzheimer's disease.[8] Recently it was reported that the rapid rise of Alzheimer's disease in
Japan between 1985 and 2007 was likely due to the nutrition transition from the traditional
Japanese diet to the Western diet.[9]
Another example of the use of temporal ecological studies relates to influenza. John
Cannell and associates hypothesized that the seasonality of influenza was largely driven by
seasonal variations in solar UVB doses and calcidiol levels.[10] A randomized controlled
trial involving Japanese school children found that taking 1000 IU per day vitamin D3 reduced
the risk of type A influenza by two-thirds.[11]
Ecological studies are particularly useful for generating hypotheses since they can use existing
data sets and rapidly test the hypothesis. The advantages of the ecological studies include the
large number of people that can be included in the study and the large number of risk-modifying
factors that can be examined.
The term ecological fallacy means that the findings for the groups may not apply to individuals
in the group. However, this term also applies to observational studies and randomized controlled
trials. All epidemiological studies include some people who have health outcomes related to the
risk-modifying factors studied and some who do not. For example, genetic differences affect
how people respond to pharmaceutical drugs. Thus, concern about the ecological fallacy should
not be used to disparage ecological studies. The more important consideration is that ecological
studies should include as many known risk-modifying factors for any outcome as possible,
adding others if warranted. Then the results should be evaluated by other methods, using, for
example, Hill's criteria for causality in a biological system.

The ecological fallacy may occur when conclusions about individuals are drawn from analyses
conducted on grouped data. The nature of this type of analysis tends to overestimate the
degree of association between variables.

Survival rate.
Life table.....
In actuarial science and demography, a life table (also called a mortality table or actuarial
table) is a table which shows, for each age, what the probability is that a person of that age will
die before his or her next birthday ("probability of death"). From this starting point, a number of
inferences can be derived.

the probability of surviving any particular year of age

remaining life expectancy for people at different ages

Life tables are also used extensively in biology and epidemiology. The concept is also of
importance in product life cycle management.

[Chart based on Table 1 data: survival-probability curves, with age ranging from 20 to
90 years and the number of additional years ranging from 5 to 25.] These curves show
the probability that someone who has reached a given age will live at least that many
more years, and can be used to discuss annuity issues from the boomer viewpoint, where an
increase in group size will have major effects.


For those in the age range covered by the chart, the "5 yr" curve indicates the group that will
reach beyond the life expectancy. This curve represents the need for support that
covers longevity requirements.
The "20 yr" and "25 yr" curves indicate the continuing diminishing of the life expectancy value as
"age" increases. The differences between the curves are very pronounced starting around the
age of 50 to 55 and ought to be used for planning based upon expectation models.
The "10 yr" and "15 yr" curves can be thought of as the trajectory that is followed by the life
expectancy curve related to those along the median which indicates that the age of 90 is not out
of the question.

A "life table" is a kind of bookkeeping system that ecologists often use to keep
track of stage-specific mortality in the populations they study.

It is an especially

useful approach in entomology where developmental stages are discrete and


mortality rates may vary widely from one life stage to another.

From a pest

management standpoint, it is very useful to know when (and why) a pest


population suffers high mortality -- this is usually the time when it is most
vulnerable.

By managing the natural environment to maximize this vulnerability,

pest populations can often be suppressed without any other control methods.
To create a life table, an ecologist follows the life history of many individuals in a
population, keeping track of how many offspring each female produces, when each
one dies, and what caused its death.

After amassing data from different

populations, different years, and different environmental conditions, the ecologist


summarizes this data by calculating average mortality within each developmental
stage.
For example, in a hypothetical insect population, an average female will lay 200
eggs before she dies. Half of these eggs (on average) will be consumed by
predators, 90% of the larvae will die from parasitization, and three-fifths of the
pupae will freeze to death in the winter. (These numbers are averages, but they
are based on a large database of observations.) A life table can be created from
the above data. Start with a cohort of 200 eggs (the progeny of Mrs. Average
Female). This number represents the maximum biotic potential of the species (i.e. the
greatest number of offspring that could be produced in one generation under ideal
conditions). The first line of the life table lists the main cause(s) of death, the
number dying, and the percent mortality during the egg stage. In this example,
an average of only 100 individuals survive the egg stage and become larvae.
The second line of the table lists the mortality experience of these 100 larvae: only
10 of them survive to become pupae (90% mortality of the larvae). The third
line of the table lists the mortality experience of the 10 pupae -- three-fifths die of
freezing. This leaves only 4 individuals alive in the adult stage to reproduce. If
we assume a 1:1 sex ratio, then there are 2 males and 2 females to start the next
generation. If there is no mortality of these females, they will each lay an average of
200 eggs to start the next generation. Thus there are two females in the cohort to
replace the one original female -- this population is DOUBLING in size each generation!!
In ecology, the symbol "R" (capital R) is known as the replacement rate.

It is a

way to measure the change in reproductive capacity from generation to


generation.

The value of "R" is simply the number of reproductive daughters that

each female produces over her lifetime:

Number of daughters
R = ------------------------------Number of mothers

If the value of "R" is less than 1, the population is decreasing -- if this


situation persists for any length of time the population becomes extinct.
If the value of "R" is greater than 1, the population is increasing -- if this
situation persists for any length of time the population will grow beyond
the environment's carrying capacity. (Uncontrolled population growth is
usually a sign of a disturbed habitat, an introduced species, or some
other type of human intervention.)
If the value of "R" is equal to 1, the population is stable -- most natural
populations are very close to this value.
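As an illustration, a minimal Python sketch that builds the life table for the hypothetical
insect described above and computes R (the stage names and mortality fractions are simply
those of the worked example, and a 1:1 sex ratio is assumed):

# Stage-specific mortality for the hypothetical insect population described above
cohort = 200                      # eggs laid by one average female
stages = [
    ("egg",   0.50),              # half eaten by predators
    ("larva", 0.90),              # 90% die from parasitization
    ("pupa",  0.60),              # three-fifths freeze in winter
]

alive = cohort
for stage, mortality in stages:
    dying = alive * mortality
    print(f"{stage}: {alive:.0f} entering, {dying:.0f} dying ({mortality:.0%} mortality)")
    alive -= dying

adults = alive                    # 4 adults
daughters = adults / 2            # assume a 1:1 sex ratio -> 2 females
R = daughters / 1                 # one original mother
print("R =", R)                   # 2.0 -> population doubles each generation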

Practice Problem:
A typical female of the bubble gum maggot (Bubblicious blowhardi Meyer) lays 250
eggs. On average, 32 of these eggs are infertile and 64 are killed by parasites.
Of the survivors, 64 die as larvae due to habitat destruction (gum is cleared away
by the janitorial staff) and 87 die as pupae because the gum gets too hard.
Construct a life table for this species and calculate a value for "R", the replacement
rate (assume a 1:1 sex ratio). Is this population increasing, decreasing, or
remaining stable?

How to compare life table, survival rate.


Relative Risk
Y-Y analysis
Forest Graph
A forest plot (or blobbogram[1]) is a graphical display designed to illustrate the relative strength
of treatment effects in multiple quantitative scientific studies addressing the same question. It
was developed for use in medical research as a means of graphically representing a
meta-analysis of the results of randomized controlled trials. In the last twenty years, similar
meta-analytical techniques have been applied in observational studies (e.g. environmental
epidemiology) and forest plots are often used in presenting the results of such studies also.
Although forest plots can take several forms, they are commonly presented with two columns.
The left-hand column lists the names of the studies (frequently randomized controlled
trials or epidemiological studies), commonly in chronological order from the top downwards. The
right-hand column is a plot of the measure of effect (e.g. an odds ratio) for each of these studies
(often represented by a square) incorporating confidence intervals represented by horizontal
lines. The graph may be plotted on a natural logarithmic scale when using odds ratios or other
ratio-based effect measures, so that the confidence intervals are symmetrical about the means
from each study and to ensure undue emphasis is not given to odds ratios greater than 1 when
compared to those less than 1. The area of each square is proportional to the study's weight in
the meta-analysis. The overall meta-analysed measure of effect is often represented on the plot
as a dashed vertical line. This meta-analysed measure of effect is commonly plotted as a
diamond, the lateral points of which indicate confidence intervals for this estimate.
A vertical line representing no effect is also plotted. If the confidence intervals for individual
studies overlap with this line, it demonstrates that at the given level of confidence their effect
sizes do not differ from no effect for the individual study. The same applies for the meta-analysed
measure of effect: if the points of the diamond overlap the line of no effect, the overall
meta-analysed result cannot be said to differ from no effect at the given level of confidence.
Forest plots date back to at least the 1970s. One plot is shown in a 1985 book about
meta-analysis.[2]:252 The first use in print of the word "forest plot" may be in an abstract for a
poster at the Pittsburgh (USA) meeting of the Society for Clinical Trials in May 1996.[3] An
informative investigation on the origin of the notion "forest plot" was published in 2001.[4] The
name refers to the forest of lines produced. In September 1990, Richard Peto joked that the plot
was named after a breast cancer researcher called Pat Forrest and as a result the name has
sometimes been spelt "forrest plot".[4]

Effective human resources management


Strategic Human Resource Management is done by linking HRM with strategic goals and
objectives in order to improve business performance and develop organizational cultures that
foster innovation and flexibility. It involves planning HR activities and deployment in such a way
to enable organizations to achieve their goals. Human Resource activities such as recruitment,
selection, training and rewarding personnel are done by keeping in view the company's goals
and objectives. Organizations focus on identifying, analyzing and balancing two sorts of
forces, that is, the organization's external opportunities and threats on one hand and its internal
strengths and weaknesses on the other. Alignment of the Human Resource system with the
strategic goals of the firm has helped organizations achieve superb targets.

Effective Human Resource Management is the Center for Effective Organizations' (CEO) sixth
report of a fifteen-year study of HR management in today's organizations. The only long-term
analysis of its kind, this book compares the findings from CEO's earlier studies to new data
collected in 2010. Edward E. Lawler III and John W. Boudreau measure how HR management
is changing, paying particular attention to what creates a successful HR function, one that
contributes to a strategic partnership and overall organizational effectiveness. Moreover, the
book identifies best practices in areas such as the design of the HR organization and HR metrics.
It clearly points out how the HR function can and should change to meet the future demands of
a global and dynamic labor market.
For the first time, the study features comparisons between U.S.-based firms and companies in
China, Canada, Australia, the United Kingdom, and other European countries. With this new
analysis, organizations can measure their HR organization against a worldwide sample,
assessing their positioning in the global marketplace, while creating an international standard
for HR management.
(PDF 2 docs)

Policy?
1. Politics: (1) The basic principles by which a government is guided.
(2) The declared objectives that a government or party seeks to achieve and
preserve in the interest of national community. See also public policy.
2. Insurance: The formal contract issued by an insurer that contains terms and
conditions of the insurance cover and serves as its legal evidence.
3. Management: The set of basic principles and associated guidelines, formulated
and enforced by the governing body of an organization, to direct and limit
its actions in pursuit of long-term goals. See also corporate policy.

A policy is a principle or protocol to guide decisions and achieve rational outcomes. A policy is
a statement of intent, and is implemented as a procedure[1] or protocol. Policies are generally
adopted by the board or senior governance body within an organization, whereas procedures
or protocols would be developed and adopted by senior executive officers. Policies can assist in
both subjective and objective decision making. Policies to assist in subjective decision making
would usually assist senior management with decisions that must consider the relative merits of
a number of factors before making decisions, and as a result are often hard to objectively test,
e.g. a work-life balance policy. In contrast, policies to assist in objective decision making are
usually operational in nature and can be objectively tested, e.g. a password policy.
The term may apply to government, private sector organizations and groups, as well as
individuals. Presidential executive orders, corporate privacy policies, and parliamentary rules of
order are all examples of policy. Policy differs from rules or law. While law can compel or
prohibit behaviors (e.g. a law requiring the payment of taxes on income), policy merely guides
actions toward those that are most likely to achieve a desired outcome.
Policy or policy study may also refer to the process of making important organizational
decisions, including the identification of different alternatives such as programs or spending
priorities, and choosing among them on the basis of the impact they will have. Policies can be
understood as political, management, financial, and administrative mechanisms arranged to
reach explicit goals. In public corporate finance, a critical accounting policy is a policy for a
firm/company or an industry which is considered to have a notably high subjective element, and
that has a material impact on the financial statements.

Micro-planning
Micro Planning: A tool to empower people
Micro-planning is a comprehensive planning approach wherein the community
prepares development plans itself, considering the priority needs of the
village. Inclusion and participation of all sections of the community is central to
micro-planning, thus making it an integral component of
decentralized governance. For village development to be sustainable and
participatory, it is imperative that the community owns its village development
plans and that the community ensures that development is in consonance with its
needs.
However, from our experience of working with the panchayats in Mewat, we realized
that this bottom-up planning approach was never followed in making village
development plans in the past. Many a time, the elected panchayat
representatives had not even heard of this term.

Acknowledging the significance of micro-planning for
village development, IRRAD's Capacity Building Center organized a week-long
training workshop on micro-planning for elected representatives of panchayats and
IRRAD's staff working with panchayats in the villages. The aim of this workshop
was to educate the participants about the concept of micro-planning and its
importance in the decentralized governance system.
As part of this workshop the participants were given a detailed explanation of the
concept, the why and how of micro-planning, and the difference between micro-planning
and the traditional planning approaches. To give practical exposure to the
participants, a three day micro-planning exercise was carried out in Untaka Village

of Nuh Block, Mewat. The objective of this exposure was to show participants how
micro-planning is carried out and what challenges may arise during its conduct and
prepare the village development plan following the micro-planning approach.
The village sarpanch led the process from the front, and the entire village and
panchayat members participated wholeheartedly in this exercise. Participatory Rural
Appraisal (PRA) technique which incorporates the knowledge and opinions of rural
people in the planning and management of development projects and programmes
was used to gather information and prioritize development works. Resource, social
and development issue prioritization maps were prepared by the villagers after
analyzing the collected information. The villagers further identified the problems
associated with village development and recommended solutions for specific
problems while working in groups. The planning process went on for two days
subsequent to which a Gram Sabha (village committee), the first power unit in the
panchayati raj system, was organized on the third day. About 250 people
participated in the Gram Sabha including 65 women and 185 men. The sarpanch
shared the final village analysis and development plans with the villagers present in
Gram Sabha and asked for their inputs and suggestions. After incorporating the
suggestions received, a plan was prepared and submitted to Block Development
Office for final approval and sanction of funds.
"After the successful conduct of Gram Sabha in our village, we now need to build
synergies with the district level departments to implement the plans drawn in the
meeting," said the satisfied Sarpanch of Untka after experiencing the conduct of
micro planning exercise in their village.

Macro-planning
Macro Planning and Policy Division (MPPD) is responsible for setting macroeconomic policies
and strategies in consultation with key agencies, such as the Reserve Bank of Fiji (RBF) and
Ministry of Finance. The Division analyzes and forecasts movements in macroeconomic
indicators and accounts, including Gross Domestic Product (GDP), Exports and Imports, and
the Balance of Payments (BOP). Macroeconomic forecasting involves making assessments on
production data in the various sectors of the economy for compilation of quarterly forecasts of
the National Accounts.

The Division is also involved in undertaking assessments and research on macroeconomic
indicators, internal and external shocks, and structural reform measures, which include areas
such as investment, labour market, goods market, trade, public enterprises, and public service.
The Macro Policy and Planning Division:

Provides technical and policy advice;

Produces macroeconomic forecasts of Gross Domestic Product, Exports, Imports and
Balance of Payments on a quarterly basis;

Participates effectively in policy development meetings and consultative forums;

Undertakes research on topical issues and provides pre-budget macroeconomic analyses
and advice.
1. Macro lesson planning
The term macro comes from Greek makros meaning long, large. For teachers, macro lesson
planning means coming up with the curriculum for the semester/month/year/etc. Not all
teachers feel they are responsible for this as many schools have set curriculums and/or
textbooks determined by the academic coordinator. However, even in these cases, teachers may
be called upon to devise a curriculum for a new class, modify an older curriculum, or map out
themes to match the target lessons within the curriculum.
At my old school, for instance, I had the chance to develop the curriculum for a TOEIC
Intermediate and a TOEFL Advanced class when they were first introduced at our school. I've
also modified older curricula (or curriculums, if you prefer; both are acceptable) for various levels
because of students' changing needs. And finally, my old school kindly granted the teachers one
day a month of paid prep time/new student intake, where we'd decide on the themes that we'd
be using for our class to ensure there wasn't too much overlap with other classes. We did have a
set curriculum in terms of grammar points, but themes and supplementary materials were up to
us. Doing a bit of planning before the semester started ensured that we stayed organized and
kept the students' interest throughout the semester.
Another benefit of macro lesson planning is that teachers can share the overall goals of the
course with their students on the first day, and they can reiterate those goals as the semester
progresses. Students often lose sight of the big picture and get discouraged with their English
level, and having clear goals that they see themselves reaching helps prevent this.
2. Micro lesson planning
The term micro comes from the Greek mikros meaning small, little. In the ELT industry, micro
lesson planning refers to planning one specific lesson based on one target (e.g., the simple
past). It involves choosing a topic or grammar point and building a full lesson to complement it.
A typical lesson plan involves a warm-up activity, which introduces the topic or elicits the
grammar naturally, followed by an explanation/lesson of the point to be covered. Next, teachers
devise a few activities that allow students to practice the target point, preferably through a mix of
skills (speaking, listening, reading, writing). Finally, teachers should plan a brief wrap-up activity
that brings the lesson to a close. This could be as simple as planning to ask students to share
their answers from the final activity as a class.
Some benefits of micro lesson planning include classes that run smoothly and students who
don't get bored. Lesson planning ensures that you'll be prepared for every class and that you'll
have a variety of activities on hand for whatever situation may arise (well, the majority of
situations; I'm sure we've all had those classes where an activity we thought would rock ends
up as an epic fail).
For more information on micro lesson planning, check out How to Make a Lesson Plan, a blog
post I wrote last year, where I emphasized the importance of planning fun, interesting fillers so
that students stay engaged. I also provided links in that post to many examples of activities you
can use for warm-ups, main activities, fillers, homework, etc. There is also a good template for a
typical lesson plan at Docstoc.
Can anyone think of other benefits of macro or micro lesson planning? Does anyone have a
different definition of these terms? Let us know below.
Happy planning!
Tanya

Macro is big and micro is very small. Macroeconomics depends on big projects like steel mills,
big industrial units, national highway projects, etc., which aim at producing goods and services in
very large quantities and serving a wide area. These take time to produce results because of the
size of the projects. Microeconomics is on a small scale, limited to a specific area, location and
purpose, and normally produces results in a much shorter time. The best example of
microeconomics is the Grameen Bank of Bangladesh started by Md. Yunus, who also received
international awards for his initiative. The concept of microcredit was pioneered by the
Bangladesh-based Grameen Bank, which broke away from the age-old belief that low income
amounted to low savings and low investment. It started what came to be a system which
followed this sequence: low income, credit, investment, more income, more credit, more
investment, more income. The bank is owned by its poor borrowers, who are mostly women.
Borrowers of Grameen Bank at present own 95 per cent of the total equity, and the balance 5 per cent
is held by the Government. Microeconomics was also one of the policies of Mahatma Gandhi, who wanted
planning to start from the local village level and spread through the country; unfortunately this has not
happened, and even now the benefits of development have not percolated to the common man,
particularly in the rural areas.

Macro planning vs. micro planning


Ideally, lesson planning should be done at two levels: macro planning and micro
planning. The former is planning over time, for instance, the planning for a month, a
term, or the whole course. The latter is planning for a specific lesson, which usually
lasts 40 or 50 minutes. Of course, there is no clear cut difference between these two
types of planning. Micro planning should be based on macro planning, and macro
planning is apt to be modified as lessons go on.
Read through the following items and decide which belong to macro planning and which belong
to micro planning. Some could belong to both. When you have finished, compare your decisions
with your partner.

TASK 2: Thinking and sharing activity
1. Write down lesson notes to guide teaching.
2. Decide on the overall aims of a course or programme.
3. Design activities and procedures for a lesson.
4. Decide which language points to cover in a lesson.
5. Study the textbooks and syllabus chosen by the institute.
6. Decide which skills are to be practised.
7. Prepare teaching aids.
8. Allocate time for activities.
9. Prepare games or songs for a lesson.
10. Prepare supplementary materials.

In a sense, macro planning is not writing lesson plans for specific lessons but rather
familiarizing oneself with the context in which language teaching is taking place. Macro
planning involves the following:
1) Knowing about the course: The teacher should get to know which language areas
and language skills should be taught or practised in the course, what materials and
teaching aids are available, and what methods and techniques can be used.
2) Knowing about the institution: The teacher should get to know the institution's
arrangements regarding time, length, frequency of lessons, physical conditions of
classrooms, and exam requirements.
3) Knowing about the learners: The teacher should acquire information about the
students' age range, sex ratio, social background, motivation, attitudes, interests,
learning needs and other individual factors.
4) Knowing about the syllabus: The teacher should be clear about the purposes,
requirements and targets specified in the syllabus.
Much of macro planning is done prior to the commencement of a course. However,
macro planning is a job that never really ends until the end of the course.
Macro planning provides general guidance for language teachers. However, most

teachers have more confidence if they have a kind of written plan for each lesson
they teach. All teachers have different personalities and different teaching
strategies, so it is very likely their lesson plans would differ from each other.
However, there are certain guidelines that we can follow and certain elements that
we can incorporate in our plans to help us create purposeful, interesting and
motivating lessons for our learners.

Components of policy/ planning


Five essential components
The five essential components that ensure an effective P&P program include:
- the organizational documentation process
- the information plan or architecture
- the documentation approach
- P&P expertise, and
- technologies (tools).
Definition of P&P program
A policies and procedures (P&P) program refers to the context in which an organization formally
plans, designs, implements, manages, and uses P&P communication in support of
performance-based learning and on-going reference.
Description of components
The five components of a formal P&P program are described below:

- An organizational documentation process, which describes how members of the
organization interact in the development and maintenance of the life span of P&P content
- The information plan or architecture, which identifies the coverage and organization of
subject matter and related topics to be included
- The documentation approach, which designates how P&P content will be designed and
presented, including the documentation methods, techniques, formats, and styles
- The P&P expertise necessary for planning, designing, developing, coordinating,
implementing, and publishing P&P content, as well as the expertise needed for managing the
program and the content development projects
- The designated technologies for developing, publishing, storing, accessing, and
managing content, as well as for monitoring content usage.
Implementing components
Every organization is usually at a different maturity stage in its P&P investment. Therefore,
before establishing or enhancing a current P&P program, it is important to obtain an objective
assessment of the organizational maturity, including where your P&P program is now and where
it needs to be in the future. Once the maturity level is established, it is then necessary to

develop a strategic P&P program plan. The strategic plan will enable your organization to
achieve the necessary level of maturity for each component and ensure that your organization
will maximize the value of its P&P investment.
Conclusion
Organizations with informal P&P programs do not usually reap the benefits that formal P&P
programs provide. An effective P&P program must include five components. It is essential to
have an objective P&P program assessment to determine the existing P&P maturity grade and
where it should be. The P&P strategic plan is the basis for achieving a higher level of
performance in your P&P program.

The following information is provided as a template to assist learners in drafting a policy. However,
it must be remembered that policies are written to address specific issues, and therefore the
structure and components of a policy will differ considerably according to the need. A policy
document may be many pages, or it may be a single page with just a few simple statements.
The following template is drawn from an Information Bulletin "Policy and Planning" by Sport
and Recreation Victoria. It is suggested that there are nine components. The brief examples given
in the table should not be construed as a complete policy.

Component 1: A statement of what the organisation seeks to achieve for its clients
Brief example: The following policy aims to ensure that XYZ Association Inc. fulfills the
expectation of its members for quality services in sport and recreation delivery.

Component 2: Underpinning principles, values and philosophies
Brief example: The underpinning principle of this policy is that the provision of quality
services is of the utmost importance in building membership and participation. Satisfied
members are more likely to continue participation, contribute to the organisation and renew
their memberships each year.

Component 3: Broad service objectives which explain the areas in which the organisation will
be dealing
Brief example: This policy aims to improve the quality of services provided by XYZ Assoc. Inc. in:
- The organisation and management of programs and services
- The management of association resources

Component 4: Strategies to achieve each objective
Brief example: Strategies to improve the quality of services in program and event
management include:
- Provision of training for event officials
- Implementing a participant survey
- Fostering a culture of continuous improvement
Strategies to improve the quality of services through the better management of resources include:
- Implementation of best practice consultation and planning processes
- Professional development opportunities for the human resources of the organisation
- Instituting a risk management program
- The maintenance of records and databases to assist in the management process

Component 5: Specific actions to be taken
Brief example: This policy recommends the following actions:
- Participants are surveyed on a yearly basis for satisfaction with programs and services
- The quality of services to participants is reviewed annually as part of the strategic
planning process
- The operational planning process includes scheduling events for the professional
development of staff
- The risk management program is reviewed on a yearly basis, and this review involves
risk management professionals
- All clubs are consulted in the maintenance, distribution and usage of physical and
financial resources

Component 6: Desired outcomes of specific actions
Brief example: The desired outcomes of this policy are as follows:
- Increased satisfaction of participants with the association's events and programs
- The best utilisation of the association's resources in line with the expectations of members
- The better management of risks associated with service delivery

Component 7: Performance indicators
Brief example: The success of this policy may be measured in terms of:
- An increase in the average membership duration
- An increase in participation in association events
- An increase in the number of volunteer officials
- A reduction in injuries

Component 8: Management plans and day-to-day operational rules covering all aspects of
service delivery
Brief example: This section of the policy provides further information and detail on how the
policy is to be implemented and observed on a day-to-day basis.

Component 9: A review program
Brief example: This policy should be reviewed annually. The review process should include an
examination of the performance indicators, consultation with members of the association, and
a discussion forum involving the management committee and risk management professionals.

Note: These hypothetical examples are for illustration. There is no substitute for research and
consultation in the development of effective policies.

Health care financing


Health Care Financing, Efficiency, and Equity
This paper examines the efficiency and equity implications of
alternative health care system financing strategies. Using data across
the OECD, I find that almost all financing choices are compatible with
efficiency in the delivery of health care, and that there has been no
consistent and systematic relationship between financing and cost
containment. Using data on expenditures and life expectancy by
income quintile from the Canadian health care system, I find that
universal, publicly-funded health insurance is modestly redistributive.
Putting $1 of tax funds into the public health insurance system
effectively channels between $0.23 and $0.26 toward people in the lowest
income quintile, and about $0.50 to the bottom two income
quintiles. Finally, a review of the literature across the OECD suggests
that the progressivity of financing of the health insurance system has
limited implications for overall income inequality, particularly over time.

Health financing systems are critical for reaching universal health coverage. Health financing
levers to move closer to universal health coverage lie in three interrelated areas:

raising funds for health;


reducing financial barriers to access through prepayment and subsequent
pooling of funds in preference to direct out-of-pocket payments; and
allocating or using funds in a way that promotes efficiency and equity.
Developments in these key health financing areas will determine whether health services exist
and are available for everyone and whether people can afford to use health services when they
need them.
Guided by World Health Assembly resolution WHA64.9 of May 2011 and based on the
recommendations of the World Health Report 2010, "Health systems financing: the path to
universal coverage", WHO is supporting countries in developing health financing systems that
can bring them closer to universal coverage.

HEALTH CARE FINANCING


Management Sciences for Health (MSH) helps governments and nongovernmental
organizations assess their current financial situation and systems, understand
service costs, develop financing solutions, and use funds more effectively and
efficiently. MSH believes in integrated approaches to health finance and works with
sets of policy levers that will produce the best outcomes, including government
regulations, budgeting mechanisms, insurance payment methods and provider and
patient incentives.

Healthcare Financing

The Need
More than 120 million people in Pakistan do not have health coverage. This pushes the poor
into debt and an inevitable medical-poverty trap. Two-thirds of households surveyed over the
last three years reported that they were affected by one or more health problems and went into
debt to finance the cost. Many who cannot afford treatment, particularly women, forego medical
treatment altogether.

The Solution
To fill this vacuum in healthcare financing, the American Pakistan Foundation has partnered with
Heartfile Health Financing to support their groundbreaking work in healthcare reform and health
financing for the poor in Pakistan.

Heartfile is an innovative program that utilizes a custom-made technology platform to transfer


funds for treatment costs of the poor. The system, founded by Dr. Sania Nishtar, is highly
transparent and effective, providing a direct connection between the donor, the healthcare facility,
and the beneficiary patient.

Success Stories

At the age of 15, Majid was the only breadwinner of his family. After being hit by a tractor, he
was out of a job, with a starving family and no money for an operation. Through Heartfile he was
able to get the treatment he needed and stay out of debt.
Majid
The Process

Heartfile is contacted via text or email when a person of dire financial need is admitted into one
of a list of preregistered hospitals.

Within 24 hours a volunteer is mobilized to see the patient and assess poverty status and
eligibility by running their identity card information through the national database authority.

Once eligibility is established, the patient is sent funds within 72 hours through a cash transfer
to their service provider.

Donors to Heartfile have full control over their donation through a web database that allows
them to decide where they want their funds to go. They are connected to the people they
support through a personal donation page that allows them to see exactly how their funds were
used.

Sampling techniques: Advantages and disadvantages

Simple random
  Description: Random sample from the whole population.
  Advantages: Highly representative if all subjects participate; the ideal.
  Disadvantages: Not possible without a complete list of population members; potentially
  uneconomical to achieve; can be disruptive to isolate members from a group; the time-scale
  may be too long, and the data/sample could change.

Stratified random
  Description: Random sample from identifiable groups (strata), subgroups, etc.
  Advantages: Can ensure that specific groups are represented, even proportionally, in the
  sample(s) (e.g., by gender), by selecting individuals from strata lists.
  Disadvantages: More complex, requires greater effort than simple random; strata must be
  carefully defined.

Cluster
  Description: Random samples of successive clusters of subjects (e.g., by institution) until
  small groups are chosen as units.
  Advantages: Possible to select randomly when no single list of population members exists,
  but local lists do; data collected on groups may avoid introduction of confounding by
  isolating members.
  Disadvantages: Clusters in a level must be equivalent, and some natural ones are not for
  essential characteristics (e.g., geographic: numbers equal, but unemployment rates differ).

Stage
  Description: Combination of cluster sampling (randomly selecting clusters) and random or
  stratified random sampling of individuals.
  Advantages: Can make up a probability sample by random selection at stages and within
  groups; possible to select a random sample when population lists are very localized.
  Disadvantages: Complex; combines the limitations of cluster and stratified random sampling.

Purposive
  Description: Hand-pick subjects on the basis of specific characteristics.
  Advantages: Ensures balance of group sizes when multiple groups are to be selected.
  Disadvantages: Samples are not easily defensible as being representative of populations due
  to potential subjectivity of the researcher.

Quota
  Description: Select individuals as they come to fill a quota by characteristics proportional to
  populations.
  Advantages: Ensures selection of adequate numbers of subjects with appropriate
  characteristics.
  Disadvantages: Not possible to prove that the sample is representative of the designated
  population.

Snowball
  Description: Subjects with desired traits or characteristics give names of further appropriate
  subjects.
  Advantages: Possible to include members of groups where no lists or identifiable clusters
  even exist (e.g., drug abusers, criminals).
  Disadvantages: No way of knowing whether the sample is representative of the population.

Volunteer, accidental, convenience
  Description: Either asking for volunteers, or the consequence of not all those selected finally
  participating, or a set of subjects who just happen to be available.
  Advantages: Inexpensive way of ensuring sufficient numbers for a study.
  Disadvantages: Can be highly unrepresentative.
Source: Black, T. R. (1999). Doing quantitative research in the social sciences: An integrated
approach to research design, measurement, and statistics. Thousand Oaks, CA: SAGE
Publications, Inc. (p. 118)
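
To make the first two techniques in the table concrete, the short Python sketch below draws a simple random sample and a proportionally allocated stratified random sample from a small, entirely hypothetical sampling frame (the frame, the "sex" stratum and the sample size of 20 are invented for illustration and are not taken from the source above).

    import random
    from collections import defaultdict

    # Hypothetical sampling frame of 200 people, each labelled with a stratum ("sex").
    frame = [{"id": i, "sex": "M" if i % 3 == 0 else "F"} for i in range(200)]

    def simple_random_sample(population, n, seed=1):
        """Simple random sampling: every subject has an equal chance of selection."""
        rng = random.Random(seed)
        return rng.sample(population, n)

    def stratified_random_sample(population, n, key, seed=1):
        """Proportional stratified sampling: sample within each stratum in proportion to its size."""
        rng = random.Random(seed)
        strata = defaultdict(list)
        for person in population:
            strata[person[key]].append(person)
        sample = []
        for members in strata.values():
            k = round(n * len(members) / len(population))  # proportional allocation
            sample.extend(rng.sample(members, k))
        return sample

    srs = simple_random_sample(frame, 20)
    strat = stratified_random_sample(frame, 20, key="sex")
    print("simple random:", [p["sex"] for p in srs].count("F"), "F of", len(srs))
    print("stratified:   ", [p["sex"] for p in strat].count("F"), "F of", len(strat))

In the stratified version the number of females sampled is fixed by the proportional allocation, whereas in the simple random sample it varies from draw to draw.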

Viva Questions Oct 2013 FCPS

- Comparison of dispersion of two data sets
- Sample size formula: specifications for two means and two proportions (see the formulas
sketched after this list)
- Sample size formula for a randomized controlled design
- One-tailed and two-tailed tests, and the conditions in which each is used
- Screening test cut-off points: significance and importance of false positives and negatives
- Outcome measures of the effectiveness of a screening program
- Randomized block design
- Analysis of qualitative research
- MDG 1 and its indicators
- MATCH
- PATCH
- MAPP
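
For the sample-size items above, the formulas most commonly quoted for comparing two independent, equally sized groups are sketched below in LaTeX. They are standard textbook results rather than something given in the source notes, and they assume a two-sided significance level alpha and power 1 - beta (for alpha = 0.05 and 80% power, z_{1-\alpha/2} = 1.96 and z_{1-\beta} = 0.84).

    % Comparing two means (common variance sigma^2, clinically important difference d):
    n_{\text{per group}} = \frac{2\,(z_{1-\alpha/2} + z_{1-\beta})^{2}\,\sigma^{2}}{d^{2}}

    % Comparing two proportions p_1 and p_2:
    n_{\text{per group}} = \frac{(z_{1-\alpha/2} + z_{1-\beta})^{2}\,\bigl[p_1(1-p_1) + p_2(1-p_2)\bigr]}{(p_1 - p_2)^{2}}

For an individually randomized controlled design with two equal arms, the same two-mean or two-proportion formula is typically applied, inflated to allow for the anticipated drop-out rate.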

Acronym  Definition
PATCH    Personalized Access to Cultural Heritage
PATCH    People Allied to Combat Hunger
PATCH    Participatory Awareness Through Community Help
PATCH    Nickname for HQ USAF Programs Connected With Supply
PATCH    Planned Approach To Community Health

Components of geriatric services

- Preventive: prevention (prophylaxis) of dental, cardiac and kidney disease, DM, HTN, obesity; nutrition, etc.
- Promotive: health behaviours like exercise, sleep, diet, smoking, driving
- Curative: treatment of minor ailments, teeth, osteoarthritis, gout, kidney, eyesight, lungs,
skin, walking, APH, DM, HTN, etc.
- Rehabilitative: geriatric nursing, talking, sharing ideas, consulting, etc.

Prime Minister's Youth Loan Scheme - SMEDA


Prime Minister's Youth Business Loan - Introduction
Prime Minister's Youth Business Loan, for young entrepreneurs in the age group of 21 to 45 years,
is designed to provide subsidised financing at 8.0% mark-up per annum for one
hundred thousand (100,000) beneficiaries, through designated financial institutions, initially
through National Bank of Pakistan (NBP) and First Women Bank Ltd. (FWBL).
Small business loans with a tenure of up to 8 years, a first-year grace period, and a debt-to-equity
ratio of 90:10 will be disbursed to SME beneficiaries across Pakistan, covering Punjab, Sindh, Khyber
Pakhtunkhwa, Balochistan, Gilgit Baltistan, Azad Jammu & Kashmir and the Federally
Administered Tribal Areas (FATA). The scheme has a 50% quota for women and a 5% quota for families of
Shaheeds, widows and disabled persons.
SMEDA has been tasked with an advisory role in the implementation of the PM's scheme by
providing more than fifty-five updated pre-feasibility studies for referencing by loan beneficiaries and
participating banks, to optimally utilize their financial resources. SMEDA shall continue to add
further pre-feasibilities. However, it is not necessary to develop a project on these
pre-feasibilities; other projects will also be entertained by the banks.
Prime Minister's loan scheme: anxious youths rush to SMEDA, NBP
branches

The anxious youth of Khyber-Pakhtunkhwa rushed to the Regional Office of Small


and Medium Enterprise Development Authority (SMEDA) and National Bank of
Pakistan (NBP) to get facilitation and forms for the scheme. A large number of youth
were found visiting the special desk established at the SMEDA Regional Office in the State
Life Building to get feasibility studies for joining entrepreneurship and to take
advantage of the opportunity provided by the government.
The youth were also found seeking guidance regarding starting new businesses.
Similarly, long queues of youth were also found in front of the specified counters of
National Bank branches. The officers of the bank are giving them guidance
regarding the scheme and the filling in of the application form.
Talking to this scribe, the in-charge of the PM Youth Business Loan Programme desk at SMEDA
Peshawar, Gohar Ali, said that the response of women from across Khyber-Pakhtunkhwa
and FATA is tremendous and very positive. He said men and women alike are jubilant
over the scheme.
He said youth from the far-flung districts of Chitral to DI Khan and Kohistan are contacting
SMEDA offices to get information about different aspects of this youth-friendly
scheme.
He said that during the last two days a large number of youth had either visited or phoned
them regarding the scheme. He said most of the women showed keen interest
in opening beauty clinics and boutiques, and in spice processing, packing and
marketing programmes.
He said SMEDA has floated a list of 56 pre-feasibility studies for the guidance and
facilitation of interested entrepreneurs under the PM programme.
To facilitate people in obtaining loans under the scheme, SMEDA has
established information and help desks at the Khyber-Pakhtunkhwa Chamber of
Commerce and Industry Peshawar, the Swat Chamber of Commerce and Industry in
Mingora, the Women Business Development Centre Peshawar and the Regional Business
Co-ordination Office Abbottabad for proper guidance and facilitation of youth.
He said that, as in the settled areas, the response from tribesmen of FATA is also tremendous,
and they are contacting SMEDA for information regarding different features and plans of the PM
Youth Business Loan programme. Most of the tribesmen of Mohmand Agency have
shown interest in marble and onyx products manufacturing, marble tiles
manufacturing units and marble mosaic development centres, as this area is rich in
this resource.
He said tribesmen of South and North Waziristan, besides Khyber, Kurram and
Orakzai Agencies, are also showing keen interest in the PM's business loan programme.
The SMEDA official assured that all-out technical support would be provided to
interested youth in this programme and urged the candidates to come with solid
business plans in their own interest.
Similarly, a senior official of NBP said the application form for the scheme is very simple
and comprehensive. He said the scheme would usher in an era of progress and
development and help overcome the growing problem of unemployment. He said that
attaching a business plan to the application form is a prerequisite.

Earthquake 2008: outcomes and later effects

Vertical and integrated health programs
ISIS model of a sustainable health system
INFORMATION SOCIETY INTEGRATED SYSTEMS MODEL (ISIS Model)
{Pdf to print}

Population projection graph and role of unmet need


Polio surveillance graph and its indicators
SIAD technique for polio:
What is SIAD?
The Short Interval Additional Dose (SIAD) is an intensified approach to deliver two
successive doses (passages) of monovalent Oral Polio Vaccine (mOPV) within a
period of a few days (usually less than 2 weeks). The objective is to build up
population immunity rapidly by taking advantage of the better sero-conversion with
mOPV, together with intensive supervision and monitoring to ensure a campaign of
the highest possible quality.

Why use the SIAD approach?


There are situations such as new outbreaks and importations of wild poliovirus when
it is necessary to achieve a high level of immunity in a short time to limit the
outbreak quickly. Also even if there is no new outbreak, security issues may only
permit occasional rapid and intensive supplementary immunization contacts when
the situation permits.

Background
Before monovalent OPV (mOPV) became available, the Global Polio Eradication
Initiative (GPEI) had used trivalent OPV (tOPV) in 2 rounds with an interval of 4-6 weeks. This
interval was adopted because vaccine virus persists in the gut of recipients for that
period of time, risking interference between the three different virus types in the
OPV while an immune response is being generated in the vaccinated child. However, most
of the interference between serotypes in tOPV is due to type 2. Since the licensing
of mOPV, studies in Nigeria have shown that approximately 67% of children will
develop immunity to poliovirus type 1 after the first dose of mOPV1, as compared
with approximately 35% of children after the first dose of tOPV. Similarly, better sero-conversion
rates have been estimated for immunity to poliovirus type 3 using mOPV3 as compared with tOPV.

The SIAD approach is founded on the principle that when monovalent vaccines
are used there will be no interference in the process of sero-conversion when
successive doses are administered. The advantage is that successive doses of
mOPV enhance sero-conversion, and the time interval between doses or rounds
can be reduced.
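
As a rough back-of-the-envelope illustration (not taken from the source, and assuming each dose acts independently with the per-dose sero-conversion of roughly 67% quoted above for mOPV1), the expected proportion of children immune after the two passages of a SIAD round is approximately

    1 - (1 - 0.67)^{2} \approx 0.89

compared with about 1 - (1 - 0.35)^{2} \approx 0.58 for two doses of tOPV under the same simplifying assumption, which is the arithmetic behind building population immunity rapidly with successive mOPV doses.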

One SIAD round consists of two passages, providing two opportunities for the
target population to be reached during a short period of time, typically two
passages of 5 days each separated by a two-day interval.

SIAD has the added operational advantage of enabling a field concentration of
supervisors from other areas to be brought in to support quality implementation
over the whole 2-week period.

The Short Interval Additional Dose (SIAD): an intensified campaign approach to
deliver monovalent Oral Polio Vaccine (mOPV)
- Recruit competent international and national supervisors who can dedicate 2 weeks
for the duration of the SIAD.
- Monitor during each passage and make adjustments between the 1st and 2nd
passages. During the break between the 1st and 2nd passages, supervisors should collect and
analyze data, and meet with teams and communities to take corrective action for
the 2nd passage.

Global Polio Eradication Initiative; Objectives

OBJECTIVE 1: POLIOVIRUS DETECTION AND INTERRUPTION


This objective is to stop all WPV transmission by the end of 2014 by enhancing global poliovirus
surveillance, effectively implementing national emergency plans to improve OPV campaign
quality in the remaining endemic countries, and ensuring rapid outbreak response. This area of
work gives particular attention to addressing the risks that emerged as increasingly important in
late 2012, particularly insecurity, as the programme began to reach chronically underserved
places and populations more systematically. This objective also includes stopping any new polio
outbreaks due to VDPVs within 6 months of the index case. The primary geographic focus of
this objective is in the three endemic countries and the countries at highest risk of importation in
Africa and southern Asia.
Strengthening Global Surveillance

Maintaining appropriate supplementary OPV immunization schedules

Enhancing OPV Campaign Quality to Interrupt Endemic Transmission

Enhancing the safety of OPV campaign operations in insecure areas

Preventing and Responding to Polio Outbreaks

OBJECTIVE 2: ROUTINE IMMUNIZATION STRENGTHENING & OPV WITHDRAWAL
This objective will help hasten the interruption of all poliovirus transmission, and help build a
stronger system for the delivery of other lifesaving vaccines. To eliminate all vaccine-derived
poliovirus risks, in the long-term all OPV must be removed from routine immunization
programmes. As wild poliovirus type 2 was eradicated in 1999, and the main cause of VDPV
outbreaks is currently the type 2 component of OPV, this component must be removed from the
vaccine by mid-2016. Preparation for this removal entails strengthening routine immunization
systems especially in areas at highest risk, introducing at least one dose of IPV into routine
immunization programmes globally, and then replacing the trivalent OPV with bivalent OPV in
all OPV-using countries. This objective affects all 144 countries worldwide which currently use
OPV in their routine immunization programmes.
Increasing Routine Immunization Coverage

Ensuring Appropriate IPV, bOPV and mOPV Products

Introducing IPV

Withdrawing OPV from Routine and Supplementary Immunization activities

OBJECTIVE 3: CONTAINMENT AND CERTIFICATION


This objective encompasses the certification of the eradication and containment of all wild
polioviruses in all WHO Regions by end-2018, recognizing that a small number of facilities will
need to retain poliovirus stocks in the post-eradication era for the purposes of vaccine

production, diagnostics and research. Criteria for the safe handling and bio-containment of such
polioviruses, and processes to monitor their application, are essential to minimize the risk of
poliovirus re-introduction in the post-eradication era. Consequently, this area of work includes
finalizing international consensus on long-term bio-containment requirements for polioviruses
and the timelines for their application. Verifying application of those requirements, under the
oversight of the Global Certification Commission, will be a key aspect of the processes for
certifying global eradication. All 194 Member States of the World Health Organization are
affected by work towards this objective.

Containing Poliovirus Stocks

Certifying the Eradication of WPVs

OBJECTIVE 4: LEGACY PLANNING


There are three principal aspects of the polio legacy work:
- Mainstreaming essential long-term polio immunization, surveillance, communication,
response and containment functions into other ongoing public health programmes in order to
protect a polio-free world;
- Ensuring that the knowledge generated and lessons learned during more than 20 years of
polio eradication activities are shared with other health initiatives; and
- Where feasible, desirable and appropriate, transitioning the capacities, processes and assets
that the Global Polio Eradication Initiative has created to support other health priorities.
As the polio programme approaches key eradication milestones, successful legacy planning will
include the mainstreaming of essential polio functions into on-going public health programmes at
national and international levels, ensuring the transfer of lessons learned to other relevant
programmes and/or initiatives, and the transition of assets and infrastructure to benefit other
development goals and global health priorities. This will require thorough consultation and
planning and implementation processes to ensure the investments made in polio eradication
provide public health dividends for years to come. Work under this objective will lead to the
development of a comprehensive legacy plan by end-2015.

Mainstreaming Polio Functions

Leveraging the Knowledge and Lessons Learned

Transitioning the Assets and Infrastructure

The goal of the 2013-2018 Polio Eradication and Endgame Strategic Plan is
to complete the eradication and containment of all wild, vaccine-related
and Sabin polioviruses, such that no child ever again suffers paralytic
poliomyelitis.

Endgame strategic polio plan

Strategic approaches to all polio disease (wild and vaccine-related)


An urgent emphasis on improving routine immunization systems in key
geographies
The introduction of new IPV options for managing long-term poliovirus risks
and potentially accelerating wild poliovirus eradication
Risk mitigation strategies to address the emerging importance of new risks,
particularly insecurity, in some endemic areas, and contingency plans should
there be a delay in interrupting transmission in such reservoirs
A timeline to complete the Global Polio Eradication Initiative

India was able to interrupt transmission because of its ability to apply a comprehensive set of
tactics and tools to reach and immunize all children that included innovations in:
o Microplanning
o Operations
o Monitoring & accountability
o Technology (e.g. bOPV)
o Social mobilization
o Surge support

RISKS, RISK MITIGATION AND CONTINGENCY PLANNING


The Polio Eradication and Endgame Strategic Plan has been designed to achieve
polio eradication taking into account the specific challenges of each of the four
major objectives. Unexpected factors and external risks can delay or undermine the
GPEI's ability to achieve the Plan's four major objectives. Recognizing risks,
identifying mitigation options and articulating contingency plans enhances the GPEI's
ability to react rapidly to problems.

INPUT RISKS
1. Insufficient funding
2. Inability to recruit and retain the right people
3. Insufficient Supply of Appropriate Vaccines
IMPLEMENTATION RISKS
4. Inability to Operate in areas of insecurity
5. Decline in Political and/or social will
6. Lack of Accountability for Quality Activities
ENABLING FUNCTIONS

Successful execution of the 2013-2018 Polio Eradication and Endgame Strategic Plan will
require collaboration across the GPEI partners, national governments, donors and other relevant
organizations and institutions. Whilst national governments will be primarily responsible for
successful execution of the Plan at the local level, the GPEI and its partners will lead on a set of
enabling functions to facilitate successful execution of country operations. These functions
include:

Strategic Planning and Priority Setting

Resource Mobilization and Advocacy

Financial Resources and Management

Vaccine Security and Supply

Research and Policy Development

- Cumulative frequency: cumulative cases are high but cumulative mortality is
low for influenza
- Growth chart preparation and interpretation
- Case fatality graphs
- Demography graph interpretation: IMR age, female reproductive age and
its mortality causes, male productive age and its mortality causes, old-age
group (4%) and its causes
- Accident prevention: Haddon matrix

The Haddon Matrix is the most commonly used paradigm in the injury prevention field.
Developed by William Haddon in 1970, the matrix looks at factors related to personal attributes,
vector or agent attributes, and environmental attributes before, during and after an injury
or death. By utilizing this framework, one can then think about evaluating the relative
importance of different factors and design interventions.[1]

A typical Haddon Matrix:

Pre-crash phase
  Human factors: information, attitudes, impairment, police enforcement
  Vehicle and equipment factors: roadworthiness, lighting, braking, speed management
  Environmental factors: road design and road layout, speed limits, pedestrian facilities

Crash phase
  Human factors: use of restraints, impairment
  Vehicle and equipment factors: occupant restraints, other safety devices, crash-protective design
  Environmental factors: crash-protective roadside objects

Post-crash phase
  Human factors: first-aid skills, access to medics
  Vehicle and equipment factors: ease of access, fire risk
  Environmental factors: rescue facilities, congestion
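
To show how the matrix can be handled programmatically, for example to enumerate every cell as a candidate intervention point, the short Python sketch below encodes the table above as a nested dictionary; the data structure and the printed listing are only an illustration and are not part of Haddon's original framework.

    # The Haddon Matrix above encoded as phase -> factor group -> contributing factors.
    haddon_matrix = {
        "pre-crash": {
            "human": ["information", "attitudes", "impairment", "police enforcement"],
            "vehicle/equipment": ["roadworthiness", "lighting", "braking", "speed management"],
            "environment": ["road design and layout", "speed limits", "pedestrian facilities"],
        },
        "crash": {
            "human": ["use of restraints", "impairment"],
            "vehicle/equipment": ["occupant restraints", "other safety devices", "crash-protective design"],
            "environment": ["crash-protective roadside objects"],
        },
        "post-crash": {
            "human": ["first-aid skills", "access to medics"],
            "vehicle/equipment": ["ease of access", "fire risk"],
            "environment": ["rescue facilities", "congestion"],
        },
    }

    # List every (phase, factor group, factor) cell so each can be weighed as an intervention target.
    for phase, groups in haddon_matrix.items():
        for group, factors in groups.items():
            for factor in factors:
                print(f"{phase:10s} | {group:18s} | {factor}")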

- Epidemiological transition (Omran), 4 stages: pestilence (and famine), receding pandemics,
early degenerative disease stage, late degenerative disease stage
- BMI cut-off points for underweight, overweight, and the four levels of obesity
- SAAL (Seasonal Awareness and Alert Letter); measles and dengue stages measured
in terms of DEWS, DMIS
- Food safety and food contamination
- A child suffering from polio is brought to the clinic for vaccination: which vaccines
to give (the same as given at stage 0), after looking for a BCG scar
- Chlorination graph: stages 1, 2, 3, 4
- Carriers
- Survival paradox, e.g. sometimes the obese live longer

There is increasing evidence that patients, especially the elderly, with several chronic diseases
and elevated BMI may demonstrate lower all-cause and cardiovascular mortality
compared with patients of normal weight.
Obesity paradox in overweight and obese patients with coronary heart disease
Ten years ago, Gruberg and coworkers observed better outcomes in overweight and obese
patients with coronary heart disease undergoing percutaneous coronary intervention compared
with their normal-weight counterparts. This unexpected phenomenon was described as an
obesity paradox (2). Normal-weight patients had higher incidence of major in-hospital
complications, including cardiac death. Moreover, at 1-year follow-up, significantly higher
mortality rates were observed in low- and normal-weight patients compared with obese and
overweight patients.
Obesity paradox in patients with chronic heart failure
Investigations carried out in patients with chronic heart failure show a paradoxical decrease in
mortality in those with higher BMI. This observation has been referred to as a reverse
epidemiology (10). Consequently, several other studies in patients with both chronic and acute
heart failure confirmed lower mortality in those with higher BMI
Proposed explanations for the obesity paradox include:
- Enlarged muscle mass and better nutritional status
- Body composition
- Thromboxane production
- Endothelial progenitor cells: less coronary atherosclerosis demonstrated in
autopsies of severely obese subjects is another example of the obesity paradox
- Increased muscle strength
- Ghrelin sensitivity: ghrelin is a gastric peptide hormone, initially described as the endogenous
ligand for the growth hormone secretagogue receptor. Ghrelin stimulates growth hormone release
and food intake, promotes a positive energy balance/weight gain, and improves cardiac contractility
- Cardiorespiratory fitness: obese subjects with increased cardiorespiratory
fitness have lower all-cause mortality and a lower risk of cardiovascular and metabolic
diseases and certain cancers

Conclusions
Despite the fact that obesity is recognized as a major risk factor in the development of
cardiovascular diseases and diabetes, a higher BMI may be associated with a lower mortality and
a better outcome in several chronic diseases and health circumstances. This protective effect of
obesity has been described as the "obesity paradox" or "reverse epidemiology". However, it
should be emphasized that BMI is a crude and flawed anthropometric biomarker that does
not take into account the fat mass/fat-free mass ratio, nutritional status, cardiorespiratory fitness,
body fat distribution, or other factors affecting health risks and the patient's mortality.
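
Because the discussion above (and the earlier viva item on BMI cut-off points) turns on how BMI is computed and categorized, here is a minimal Python sketch using the widely quoted WHO adult cut-offs (underweight below 18.5, normal 18.5-24.9, overweight 25-29.9, and obesity classes I, II and III starting at 30, 35 and 40 kg/m2). These conventions are general background rather than something stated in the source text.

    def bmi(weight_kg: float, height_m: float) -> float:
        """Body mass index: weight in kilograms divided by the square of height in metres."""
        return weight_kg / (height_m ** 2)

    def who_category(bmi_value: float) -> str:
        """Classify an adult BMI using the commonly cited WHO cut-off points."""
        if bmi_value < 18.5:
            return "underweight"
        if bmi_value < 25:
            return "normal weight"
        if bmi_value < 30:
            return "overweight"
        if bmi_value < 35:
            return "obese class I"
        if bmi_value < 40:
            return "obese class II"
        return "obese class III"

    value = bmi(85, 1.70)                         # 85 kg at 1.70 m -> about 29.4 kg/m^2
    print(round(value, 1), who_category(value))   # prints: 29.4 overweight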

Healthy worker effect

HWE is a phenomenon initially observed in studies of occupational diseases:


Workers usually exhibit lower overall death rates than the general population
because the severely ill and chronically disabled are ordinarily excluded from
employment
However, other occupational epidemiologists describe HWE as the reduction of
mortality or morbidity of occupational cohorts when compared with the general
population.
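
Such worker-versus-general-population comparisons are usually summarized with a standardized mortality ratio (SMR); the formula below is standard epidemiological background rather than something stated in the source, and an SMR below 1 (or below 100 when expressed as a percentage) in an occupational cohort is the typical numerical signature of the healthy worker effect.

    \text{SMR} = \frac{\text{observed deaths in the occupational cohort}}{\text{expected deaths (reference-population age- and sex-specific rates applied to the cohort's person-years)}}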

Bioremediation therapy: the use of biological agents to remove contaminants

Medical bioremediation is the technique of applying microbial xenoenzymes in human


therapy. The process involves screening for enzymes capable of catabolizing the
target pathogenic substrate, engineering microbes to express sufficient quantities
of the enzyme and finally delivering the enzyme to the appropriate tissue and cell
types.

Bioremediation, Microbial Synthesis and Gale's Infallibility Hypothesis


Bioremediation is the technique of using organisms to catabolize toxic waste such as oil spills or
industrial runoff. The most commonly used organisms are microbes, though phytoremediation is
also used.[1] Wild-type microbes have proven capable of digesting highly toxic and stable
compounds, but organisms can be genetically engineered to augment their ability. For
example, Deinococcus radiodurans, the most radio-resistant organism known, has been modified
to digest toluene and ionic mercury.[2]
Microbes are the source of approximately 22,500 bioactive drug compounds. Of these, 17% were
from unicellular bacteria (mainly Pseudomonas and Bacillus), 45% from filamentous bacteria
(actinomycetes) and 38% from fungi.[3] Microbes are the predominant source of manufactured
protein, ever since microbial human insulin production began 25 years ago. There are presently
more than 130 protein therapeutics used worldwide and many more undergoing clinical trials.[4]

Medical bioremediation: prospects for the application of microbial catabolic diversity to


aging and several major age-related diseases.
Several major diseases of old age, including atherosclerosis, macular degeneration and
neurodegenerative diseases are associated with the intracellular accumulation of substances that
impair cellular function and viability. Moreover, the accumulation of lipofuscin, a substance that
may have similarly deleterious effects, is one of the most universal markers of aging in
postmitotic cells. Reversing this accumulation may thus be valuable, but has proven challenging,
doubtless because substances resistant to cellular catabolism are inherently hard to degrade. We
suggest a radically new approach: augmenting humans' natural catabolic machinery with
microbial enzymes. Many recalcitrant organic molecules are naturally degraded in the soil. Since
the soil in certain environments - graveyards, for example - is enriched in human remains but
does not accumulate these substances, it presumably harbours microbes that degrade them. The
enzymes responsible could be identified and engineered to metabolise these substances in vivo.
Here, we survey a range of such substances, their putative roles in age-related diseases and the
possible benefits of their removal. We discuss how microbes capable of degrading them can be
isolated, characterised and their relevant enzymes engineered for this purpose and ways to avoid
potential side-effects.

Bioterrorism

Bioterrorism is terrorism involving the intentional release or dissemination
of biological agents. These agents are bacteria, viruses, or toxins, and may be in a
naturally occurring or a human-modified form. For the use of this method in warfare,
see biological warfare.

According to the U.S. Centers for Disease Control and Prevention a bioterrorism attack is the
deliberate release of viruses, bacteria, toxins or other harmful agents used to cause illness or
death in people, animals, or plants. These agents are typically found in nature, but it is possible
that they could be mutated or altered to increase their ability to cause disease, make them
resistant to current medicines, or to increase their ability to be spread into the environment.
Biological agents can be spread through the air, water, or in food. Terrorists tend to use
biological agents because they are extremely difficult to detect and do not cause illness for
several hours to several days. Some bioterrorism agents, like the smallpox virus, can be spread
from person to person and some, like anthrax, cannot.[1]
Bioterrorism is an attractive weapon because biological agents are relatively easy and
inexpensive to obtain, can be easily disseminated, and can cause widespread fear and panic
beyond the actual physical damage.[2] Military leaders, however, have learned that, as a military
asset, bioterrorism has some important limitations; it is difficult to employ a bioweapon in a way
that only the enemy is affected and not friendly forces. A biological weapon is useful
to terrorists mainly as a method of creating mass panic and disruption to a state or a country.
However, technologists such as Bill Joy have warned of the potential power which genetic
engineering might place in the hands of future bio-terrorists.[3]
The use of agents that do not cause harm to humans but disrupt the economy has been
discussed. A highly relevant pathogen in this context is the foot-and-mouth
disease (FMD) virus, which is capable of causing widespread economic damage and public
concern (as witnessed in the 2001 and 2007 FMD outbreaks in the UK), whilst having almost no
capacity to infect humans.
Category A
Tularemia
Tularemia, or rabbit fever,[9] has a very low fatality rate if treated, but can
severely incapacitate. The disease is caused by the Francisella
tularensis bacterium, and can be contracted through contact with the fur,
inhalation, ingestion of contaminated water or insect bites. Francisella
tularensis is very infectious. A small number (10-50 or so organisms) can
cause disease. If F. tularensis were used as a weapon, the bacteria would
likely be made airborne for exposure by inhalation. People who inhale an
infectious aerosol would generally experience severe respiratory illness,
including life-threatening pneumonia and systemic infection, if they are not
treated. The bacteria that cause tularemia occur widely in nature and could
be isolated and grown in quantity in a laboratory, although manufacturing an
effective aerosol weapon would require considerable sophistication. [10]
Anthrax
Anthrax is a non-contagious disease caused by the spore-forming
bacterium Bacillus anthracis. An anthrax vaccine does exist but requires
many injections for stable use. When discovered early, anthrax can be cured
by administering antibiotics (such as ciprofloxacin).[11] Its first modern
incidence in biological warfare was when Scandinavian "freedom fighters"
supplied by the German General Staff used anthrax with unknown results
against the Imperial Russian Army in Finland in 1916.[12] In 1993, the Aum
Shinrikyo used anthrax in an unsuccessful attempt in Tokyo with zero
fatalities.[8] Anthrax was used in a series of attacks on the offices of several
United States Senators in late 2001. The anthrax was in a powder form and it
was delivered by the mail.[13] Anthrax is one of the few biological agents that
federal employees have been vaccinated for. The strain used in the 2001
anthrax attack was identical to the strain used by USAMRIID.[14]
Smallpox
Smallpox is a highly contagious virus. It is transmitted easily through the
atmosphere and has a high mortality rate (20-40%).[15] Smallpox
was eradicated in the world in the 1970s, thanks to a worldwide vaccination
program.[16] However, some virus samples are still available
in Russian and American laboratories. Some believe that after the collapse of
the Soviet Union, cultures of smallpox have become available in other
countries. Although people born pre-1970 will have been vaccinated for
smallpox under the WHO program, the effectiveness of vaccination is limited
since the vaccine provides a high level of immunity for only 3 to 5 years.
Revaccination's protection lasts longer.[17] As a biological weapon smallpox is
dangerous because of the highly contagious nature of both the infected and
their pox. Also, the infrequency with which vaccines are administered among
the general population since the eradication of the disease would leave most
people unprotected in the event of an outbreak. Smallpox occurs only in
humans, and has no external hosts or vectors.
Botulinum toxin
The neurotoxin[18] Botulinum[19] is one of the deadliest toxins known, and is
produced by the bacterium Clostridium botulinum. Botulism causes death
by respiratory failure and paralysis.[20] Furthermore, the toxin is readily
available worldwide due to its cosmetic applications in injections.
Bubonic plague
Plague is a disease caused by the Yersinia pestis bacterium.[21] Rodents are
the normal host of plague, and the disease is transmitted to humans
by flea bites and occasionally by aerosol in the form of pneumonic plague.[22]
The disease has a history of use in biological warfare dating back many
centuries, and is considered a threat due to its ease of culture and ability to
remain in circulation among local rodents for a long period of time. The
weaponized threat comes mainly in the form of pneumonic plague (infection
by inhalation).[23] It was the disease that caused the Black Death in Medieval
Europe.
Viral hemorrhagic fevers
This includes hemorrhagic fevers caused by members of the
family Filoviridae (Marburg virus and Ebola virus), and by the
family Arenaviridae (for example Lassa virus and Machupo virus).[24] Ebola virus
disease has fatality rates ranging from 50-90%. No cure currently exists,
although vaccines are in development. The Soviet Union investigated the use
of filoviruses for biological warfare, and the Aum Shinrikyo group
unsuccessfully attempted to obtain cultures of Ebola virus. Death
from Ebola virus disease is commonly due to multiple organ
failure and hypovolemic shock. Marburg virus was first discovered in Marburg,
Germany. No treatments currently exist aside from supportive care. The
arenaviruses have a somewhat reduced case-fatality rate compared to
disease caused by filoviruses, but are more widely distributed, chiefly in
central Africa and South America.

Category B
Category B agents are moderately easy to disseminate and have low mortality rates.
- Brucellosis (Brucella species)[25]
- Epsilon toxin of Clostridium perfringens
- Food safety threats (for example, Salmonella species, E. coli O157:H7, Shigella,
Staphylococcus aureus)
- Glanders[26] (Burkholderia mallei)
- Melioidosis (Burkholderia pseudomallei)[27][28]
- Psittacosis (Chlamydia psittaci)
- Q fever (Coxiella burnetii)[29]
- Ricin[30] toxin from Ricinus communis (castor beans)
- Abrin toxin from Abrus precatorius (rosary peas)
- Staphylococcal enterotoxin B
- Typhus (Rickettsia prowazekii)
- Viral encephalitis (alphaviruses, for example, Venezuelan equine encephalitis,
eastern equine encephalitis, western equine encephalitis)
- Water supply threats (for example, Vibrio cholerae,[31] Cryptosporidium parvum)

Category C
Category C agents are emerging pathogens that might be engineered for mass dissemination
because of their availability, ease of production and dissemination, high mortality rate, or ability
to cause a major health impact.
- Nipah virus
- Hantavirus
- SARS
- H1N1, a strain of influenza (flu)
- HIV/AIDS

Anthrax

Anthrax can enter the human body through the intestines (ingestion), lungs (inhalation), or skin
(cutaneous) and causes distinct clinical symptoms based on its site of entry. In general, an
infected human will be quarantined. However, anthrax does not usually spread from an infected
human to a noninfected human. But if the disease is fatal, the person's body and its mass of
anthrax bacilli become a potential source of infection to others, and special precautions should
be used to prevent further contamination. Inhalational anthrax, if left untreated until obvious
symptoms occur, may be fatal.
Anthrax can be contracted in laboratory accidents or by handling infected animals or their wool
or hides. It has also been used as a biological warfare agent and by terrorists to intentionally
infect people, as exemplified by the 2001 anthrax attacks.

Occupational exposure to infected animals or their products (such as skin, wool, and
meat) is the usual pathway of exposure for humans.

Demographic graph: pre-industrial, early industrial, late industrial, developed

Situation analysis
Situation analysis refers to a collection of methods that managers use to analyze
an organization's internal and external environment to understand the
organization's capabilities, customers, and business environment.[1] The situation
analysis consists of several methods of analysis: the 5Cs analysis, SWOT
analysis and Porter's five forces analysis.[2] A marketing plan is created to guide
businesses on how to communicate the benefits of their products to the needs of
potential customers. The situation analysis is the second step in the marketing plan
and is a critical step in establishing a long-term relationship with customers.
The situation analysis looks at both the macro-environmental factors that affect
many firms within the environment and the micro-environmental factors that
specifically affect the firm. The purpose of the situation analysis is to indicate to a
company its organizational and product position, as well as the overall
survival of the business, within the environment. Companies must be able to
summarize opportunities and problems within the environment so they can
understand their capabilities within the market.

SWOT analysis for situation analysis

SWOT
A SWOT analysis is another method under the situation analysis that examines
the Strengths and Weaknesses of a company (internal environment) as well as
the Opportunities and Threats within the market (external environment). A SWOT analysis looks
at both current and future situations: companies analyze their current strengths and weaknesses
while looking for future opportunities and threats. The goal is to build on strengths as much as
possible while reducing weaknesses. A future threat can be a potential weakness, while a future
opportunity can be a potential strength.[13] This analysis helps a company come up with a plan
that keeps it prepared for a number of potential scenarios.
5C analysis for situation analysis

The 5C analysis is considered the most useful and common way to analyze the market
environment, because of the extensive information it provides.[5]
Company
The company analysis involves evaluation of the company's objectives, strategy, and capabilities.
These indicate to an organization the strength of the business model, whether there are areas for
improvement, and how well an organization fits the external environment.[6]
- Goals & objectives: an analysis of the mission of the business, the industry
of the business and the stated goals required to achieve the mission.
- Position: an analysis of the marketing strategy and the marketing mix.
- Performance: an analysis of how effectively the business is achieving its
stated mission and goals.
- Product line: an analysis of the products manufactured by the business and
how successful they are in the market.[5]

Competitors
The competitor analysis takes into consideration the competitors' position within the industry and
the potential threat they may pose to other businesses. The main purpose of the competitor analysis
is for businesses to analyze a competitor's current and potential nature and capabilities so they
can prepare against competition. The competitor analysis looks at the following criteria:
- Identify competitors: businesses must be able to identify competitors within
their industry. Identifying whether competitors provide the same services or
products to the same customer base is useful in gaining knowledge of direct
competitors. Both direct and indirect competitors must be identified, as well as
potential future competitors.
- Assessment of competitors: the competitor analysis looks at competitor
goals, mission, strategies and resources. This supports a thorough comparison of
goals and strategies of competitors and the organization.
- Predict future initiatives of competitors: an early insight into the potential
activity of a competitor helps a company prepare against competition.[6]

Customers
Customer analysis can be vast and complicated. Some of the important areas that a company analyzes include:[5]

Demographics

Advertising that is most suitable for the demographic

Market size and potential growth

Customer wants and needs

Motivation to buy the product

Distribution channels (online, retail, wholesale, etc...)

Quantity and frequency of purchase

Income level of customer

Collaborators
Collaborators are useful for businesses as they allow for an increase in the creation of ideas, as well as an increase in the likelihood of gaining more business opportunities.[7] The main types of collaborators are:

Agencies: agencies are the middlemen of the business world. When businesses need a specific worker who specializes in a trade, they go to a recruitment agency.[8]

Suppliers: suppliers provide the raw materials required to build products. There are 7 different types of suppliers: manufacturers, wholesalers, merchants, franchisors, importers and exporters, independent craftspeople, and drop shippers. Each category of suppliers can bring different skills and experience to the company.[9]

Distributors: distributors are important as they are the 'holding areas for inventory'. Distributors can help manage manufacturer relationships as well as handle vendor relationships.[10]

Partnerships: business partners share assets and liabilities, allowing for a new source of capital and skills.[11]

Businesses must be able to identify whether a collaborator has the capabilities needed to help run the business, and to analyze the level of commitment needed for a collaborator-business relationship.[6]
Climate
To fully understand the business climate and environment, the many factors that can affect the business must be researched and understood. An analysis of the climate is also known as the PEST analysis. The types of climate/environment firms have to analyze are:

Political and regulatory environment: an analysis of how actively the government regulates the market with its policies and how this would affect the production, distribution and sale of the goods and services.

Economic environment: an analysis of macroeconomic trends, such as exchange rates and the inflation rate, which can influence businesses.[5]

Social/cultural environment: interpreting the trends of society,[5] which includes the study of demographics, education, culture, etc.

Technological analysis: an analysis of technology helps improve on old routines and suggests new methods for being cost efficient. To stay competitive and gain an advantage over competitors, businesses must sufficiently understand technological advances.[12]

RODS: Real-time Outbreak and Disease Surveillance system

Demographic transition graph: pre-industrial, early industrial, late industrial, developed stages

Hawthorne effect

The Hawthorne effect (also referred to as the observer effect) refers to a phenomenon whereby individuals improve or modify an aspect of their behavior in response to their awareness of being observed.[1][2] The original "Hawthorne effect" study suggested that the novelty of being research subjects and the increased attention from such could lead to temporary increases in workers' productivity.

The Hawthorne effect is a term referring to the tendency of some people to work
harder and perform better when they are participants in an experiment. Individuals
may change their behavior due to the attention they are receiving from researchers
rather than because of any manipulation of independent variables
A placebo is a simulated or otherwise medically ineffectual treatment for a disease
or other medical condition intended to deceive the recipient. Sometimes patients
given a placebo treatment will have a perceived or actual improvement in a medical
condition, a phenomenon commonly called the placebo effect.

Simpson's paradox
Simpson's paradox, or the Yule-Simpson effect, is a paradox in probability and statistics in which a trend that appears in different groups of data disappears or reverses when these groups are combined. It is sometimes given the impersonal title reversal paradox or amalgamation paradox.[1]
This result is often encountered in social-science and medical-science statistics,[2] and is particularly confounding when frequency data are unduly given causal interpretations.[3]
Simpson's paradox disappears when causal relations are brought into consideration. Many statisticians believe that the mainstream public should be informed of counter-intuitive results in statistics such as Simpson's paradox.

Figure: Simpson's paradox for continuous data. A positive trend appears for two separate groups (blue and red), while a negative trend (black, dashed) appears when the data are combined.
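
To make the reversal concrete, the following is a minimal, illustrative Python sketch using the standard textbook kidney-stone treatment figures often quoted to demonstrate the paradox; the numbers are not data from this document.

# Simpson's paradox: treatment A wins within each subgroup,
# yet treatment B looks better when the subgroups are pooled.
cases = {
    "small stones": {"A": (81, 87),   "B": (234, 270)},   # (successes, total)
    "large stones": {"A": (192, 263), "B": (55, 80)},
}

def rate(successes, total):
    return successes / total

for group, arms in cases.items():
    for arm, (s, n) in arms.items():
        print(f"{group:12s} treatment {arm}: {rate(s, n):.1%}")

# Pool the subgroups and the ranking flips.
for arm in ("A", "B"):
    s = sum(cases[g][arm][0] for g in cases)
    n = sum(cases[g][arm][1] for g in cases)
    print(f"combined     treatment {arm}: {rate(s, n):.1%}")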

Health economics: dollars per life-year gained, dollars per quality-adjusted life-year (QALY) gained. These are cost-effectiveness (cost-utility) measures; outcomes may also be expressed in DALYs averted.

Cost-benefit analysis: outcome is measured in monetary terms; not suitable for health.
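
As a quick illustration of how "dollars per QALY gained" is computed, here is a minimal Python sketch of an incremental cost-effectiveness ratio (ICER); all figures are hypothetical.

# Hypothetical comparison of a new intervention against current care.
cost_new, qaly_new = 12_000.0, 6.5   # cost (USD) and QALYs per patient
cost_old, qaly_old = 8_000.0, 6.0

# Incremental cost-effectiveness ratio: extra dollars per extra QALY gained.
icer = (cost_new - cost_old) / (qaly_new - qaly_old)
print(f"ICER = {icer:,.0f} USD per QALY gained")   # 8,000 USD/QALY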

MDG

PERT analysis

Personality traits

Psychosomatic disorders

Psychosocial disorders
FCPS

PRISM analysis: Performance of Routine Information System Management. Used to improve the HMIS, develop performance, track progress, create awareness, develop interventions for strengthening, set targets, support monitoring and evaluation, and create knowledge.

Social marketing models in relation to health: product, price, place, promotion.

Health system performance: financing, stewardship, and responsiveness of the health system to the non-health needs of the public.

Double burden of disease: communicable and non-communicable diseases, the latter due to lifestyle.

Ob gene

The human obese (OB) gene: RNA expression pattern and mapping on the physical,
cytogenetic, and genetic maps of chromosome 7.
The recently identified mouse obese (ob) gene apparently encodes a secreted protein that may
function in the signaling pathway of adipose tissue. Mutations in the mouse ob gene are
associated with the early development of gross obesity.

Furthermore, ob gene expression was found to be increased in human obesity, which led to the postulation of the concept of leptin resistance.

Leptin, the "satiety hormone", is a hormone made by fat cells which regulates the amount of fat stored in the body. It does this by adjusting both the sensation of hunger and energy expenditure. Hunger is inhibited (satiety) when the amount of fat stored reaches a certain level: leptin is then secreted and circulates through the body, eventually activating leptin receptors in the arcuate nucleus of the hypothalamus. Energy expenditure is increased both by the signal to the brain and directly via leptin receptors on peripheral targets.

Ghrelin, the "hunger hormone", is a peptide produced by ghrelin cells in the gastrointestinal
tract[1][2] which functions as a neuropeptide in the central nervous system.[3] Beyond regulating
hunger, ghrelin also plays a significant role in regulating the distribution and rate of use of
energy.
When the stomach is empty, ghrelin is secreted. When the stomach is stretched, secretion stops.
It acts on hypothalamic brain cells both to increase hunger, and to increase gastric acid secretion
and gastrointestinal motility to prepare the body for food intake.[5]
The receptor for ghrelin is found on the same cells in the brain as the receptor for leptin, the
satiety hormone that has opposite effects from ghrelin.[6] Ghrelin also plays an important role in
regulating reward perception in dopamine neurons that link the ventral tegmental area to
the nucleus accumbens[7] (a site that plays a role in processing sexual desire, reward,
and reinforcement, and in developing addictions) through its colocalized receptors and
interaction with dopamine and acetylcholine.[3][8] Ghrelin is encoded by the GHRL gene and is
produced from the presumed cleavage of the prepropeptide ghrelin/obestatin. Full-length
preproghrelin is homologous to promotilin and both are members of the motilin family.

Hidden hunger

Calculation of the Index


The Index ranks countries on a 100-point scale, with 0 being the best score ("no hunger") and 100 being the worst, though neither of these extremes is reached in practice. The higher the score, the worse the food situation of a country. Values less than 4.9 reflect "low hunger", values between 5 and 9.9 reflect "moderate hunger", values between 10 and 19.9 indicate a "serious" hunger problem, values between 20 and 29.9 are "alarming", and values exceeding 30 are "extremely alarming".[15]
The GHI combines three equally weighted indicators: 1) the proportion of undernourished people as a percentage of the population; 2) the prevalence of underweight children under the age of five; 3) the mortality rate of children under the age of five.
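
A minimal Python sketch of the equal-weight calculation described above follows; the indicator values are hypothetical, and the simple average reflects the pre-2015 GHI formula implied by the text.

# Global Hunger Index as the simple average of three equally weighted
# indicators, each expressed as a percentage (hypothetical values).
undernourished_pct = 18.0        # proportion of the population undernourished
child_underweight_pct = 22.0     # underweight children under five
child_mortality_pct = 7.5        # under-five mortality rate

ghi = (undernourished_pct + child_underweight_pct + child_mortality_pct) / 3

def severity(score):
    # Severity bands as described in the text above.
    if score < 5:    return "low"
    if score < 10:   return "moderate"
    if score < 20:   return "serious"
    if score < 30:   return "alarming"
    return "extremely alarming"

print(f"GHI = {ghi:.1f} ({severity(ghi)})")   # GHI = 15.8 (serious)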

The Global Hunger Index (GHI) is a multidimensional statistical tool used to describe the state of countries' hunger situation. The GHI measures progress and failures in the global fight against hunger.[1] The GHI is updated once a year.

The Index was adopted and further developed by the International Food Policy Research Institute (IFPRI), and was first published in 2006 together with Welthungerhilfe, a German non-profit organization (NGO). In 2007, the Irish NGO Concern Worldwide joined the group as co-publisher.[2][3][4][5][6][7][8][9][10]

The 2014 GHI was calculated for 120 developing countries and countries in transition, 55 of which have a serious or worse hunger situation.[11]

In addition to the ranking, the Global Hunger Index report every year focuses on a main
topic: in 2014 the thematic focus was on hidden hunger, a form of undernutrition
characterized by micronutrient deficiencies.[11]

Focus of the GHI 2014: Hidden Hunger


Hidden hunger affects over 2 billion people worldwide. This micronutrient deficiency develops when humans do not take in enough micronutrients such as zinc, iodine and iron, and vitamins, or when their bodies cannot absorb them. Reasons include an unbalanced diet, a higher need for micronutrients (e.g. during pregnancy or while breastfeeding), but also health issues related to sickness, infections or parasites.
The consequences for the individual can be devastating: mental impairment, poor health, low productivity and death caused by sickness. Children are hit especially hard if they do not absorb enough micronutrients in the first 1,000 days of their lives (from conception to their 2nd birthday).[GHI2014 4]
Micronutrient deficiencies are responsible for an estimated 1.1 million of the yearly 3.1 million deaths caused by undernutrition in children. Despite the size of the problem, it is still not easy to get precise data on the spread of hidden hunger. Macro- and micronutrient deficiencies cause a loss in global productivity of 1.4 to 2.1 bn US dollars per year.[20]
Different measures exist to prevent hidden hunger. It is very effective to ensure that people get a diverse diet: the quality of the produce is as important as the quantity (measured in calories). This can be achieved by promoting the production of a wide variety of nutrient-rich plants and the creation of house gardens.
Other possible solutions are the industrial enrichment (fortification) of food or the biofortification of food plants (e.g. vitamin A-rich sweet potatoes). In cases of acute nutrient deficiency and in specific life phases, food supplements can be used. In particular, the addition of vitamin A leads to a better survival rate of children.[GHI2014 5]
Generally, the situation concerning hidden hunger can only be improved when many measures intermesh. In addition to the direct measures described above, this includes education and the empowerment of women, the creation of better sanitation and adequate hygiene, access to clean drinking water, and access to health services.
Simply eating until satisfied is not enough. Every woman, man and child has the right to food of a culturally adequate amount and of adequate quality to cover their food needs. The international community has to ensure that hidden hunger is not overlooked and that the post-2015 agenda includes a comprehensive goal for the elimination of hunger and malnutrition of any type.[GHI2014 6]
Focus of the GHI 2013: Resilience to build food and nutrition security
Many of the countries in which the hunger situation is "alarming" or "extremely alarming" are particularly prone to crises: in the African Sahel, people experience yearly droughts. On top of that, they have to deal with violent conflict and natural calamities. At the same time, the global context becomes more and more volatile (financial and economic crises, food price crises).
The inability to cope with these crises leads to the destruction of many development successes that had been achieved over the years. In addition, people have even fewer resources to withstand the next shock or crisis. 2.6 billion people in the world live on less than 2 USD per day. For them, a sickness in the family, crop failure after a drought or the interruption of remittances from relatives who live abroad can set in motion a downward spiral from which they cannot free themselves on their own.
It is therefore not enough to support people in emergencies and, once the crisis is over, to start longer-term development efforts. Instead, emergency and development assistance has to be conceptualized with the goal of increasing the resilience of poor people against these shocks.
The Global Hunger Index differentiates three coping strategies. The lower the intensity of the crisis, the fewer resources have to be used to cope with the consequences:[GHI2013 1]

Absorption: skills or resources which are used to reduce the impact of a crisis without changing the lifestyle (e.g. selling some livestock).

Adaptation: once the capacity to absorb is exhausted, steps are taken to adapt the lifestyle to the situation without making drastic changes (e.g. using drought-resistant seeds).

Transformation: if the adaptation strategies do not suffice to deal with the negative impact of the crisis, fundamental, longer-lasting changes to life and behavior have to be made (e.g. nomadic tribes become sedentary and become farmers because they cannot keep their herds).

Based on this analysis the authors present several policy recommendations:[GHI2013 2]

Overcoming the institutional, financial and conceptual boundaries between humanitarian aid and development assistance.

Elimination of policies that undermine people's resilience, using the Right to Food as a basis for the development of new policies.

Implementation of multi-year, flexible programs, financed in a way that enables multi-sectoral approaches to overcome chronic food crises.

Communicating that improving resilience is cost-effective and improves food and nutrition security, especially in fragile contexts.

Scientific monitoring and evaluation of measures and programs with the goal of increasing resilience.

Active involvement of the local population in the planning and implementation of resilience-increasing programs.

Improvement of the nutrition of mothers and children in particular, through nutrition-specific and nutrition-sensitive interventions, to prevent short-term crises from leading to nutrition-related problems later in life or across generations.

Focus of the GHI 2012: Pressures on land, water and energy resources
Increasingly, hunger is related to how we use land, water and energy. The growing scarcity of these resources puts more and more pressure on food security. Several factors contribute to an increasing shortage of natural resources:[GHI2012 1]
1. Demographic change: the world population is expected to exceed 9 billion by 2050. Additionally, more and more people live in cities. Urban populations feed themselves differently than inhabitants of rural areas; they tend to consume less staple food and more meat and dairy products.
2. Higher income and non-sustainable use of resources: as the global economy grows, wealthy people consume more food and goods, which have to be produced with a lot of water and energy. They can afford to be inefficient and wasteful in their use of resources.
3. Bad policies and weak institutions: when policies, for example energy policy, are not tested for the consequences they have on the availability of land and water, this can lead to failures. An example is the biofuel policies of industrialized countries: as corn and sugar are increasingly used for the production of fuels, there is less land and water for the production of food.

Signs of an increasing scarcity of energy, land and water resources include: growing prices for food and energy; a massive increase in large-scale investment in arable land (so-called land grabbing); increasing degradation of arable land because of too-intensive land use (for example, increasing desertification); an increasing number of people who live in regions with falling groundwater levels; and the loss of arable land as a consequence of climate change.
The analysis of the global conditions led the authors of the GHI 2012 to recommend several policy actions:[14]

Securing land and water rights

Gradual lowering of subsidies

Creation of a positive macroeconomic framework

Investment in agricultural technology development to promote a more efficient use of land, water and energy

Support for approaches that lead to a more efficient use of land, water and energy along the whole value chain

Preventing the overuse of natural resources through monitoring strategies for water, land and energy, and for agricultural systems

Improvement of access to education for women and strengthening of their reproductive rights to address demographic change

Increasing incomes, reducing social and economic inequality, and promoting sustainable lifestyles

Climate change mitigation and adaptation through a reorientation of agriculture

Focus of the GHI 2011: Rising and volatile food prices

The report cites three factors as the main reasons for high volatility (price changes) and price spikes of food:

Use of so-called biofuels, promoted by high oil prices, subsidies in the United States (over one third of the corn harvest of 2009 and 2010 respectively) and quotas for biofuel in gasoline in the European Union, India and elsewhere.

Extreme weather events as a result of climate change.

Futures trading of agricultural commodities, for instance investments in funds which speculate on price changes of agricultural products (2003: 13 bn US dollars; 2008: 260 bn US dollars), as well as the increasing trade volume of these goods.

Volatility and price increases are worsened, according to the report, by the concentration of staple food production in a few countries and export restrictions on these goods, the historical low of worldwide cereal reserves, and the lack of timely information on food products, reserves and price developments. This lack of information in particular can lead to overreactions in the markets. Moreover, seasonal limitations on production possibilities, limited land for agricultural production, limited access to fertilizers and water, as well as the increasing demand resulting from population growth, put pressure on food prices.
According to the Global Hunger Index 2011, these price trends have especially harsh consequences for poor and undernourished people, because they are not able to react to price spikes and price changes. Reactions following these developments can include: reduced calorie intake, no longer sending children to school, riskier income generation such as prostitution, criminality or searching landfills, and sending away household members who can no longer be fed. In addition, the report sees an all-time high in the instability and unpredictability of food prices, which, after decades of slight decrease, increasingly show price spikes (strong and short-term increases).[GHI2011 1][GHI2011 2]
At a national level, food-importing countries (those with a negative food trade balance) are especially affected by the changing prices.
Focus of the GHI 2010: Early Childhood Under-nutrition
Under-nutrition among children has reached terrible levels. About 195 million children under the age of five in the developing world (about one in three children) are too small and thus underdeveloped. Nearly one in four children under age five (129 million) is underweight, and one in 10 is severely underweight. The problem of child under-nutrition is concentrated in a few countries and regions, with more than 90 percent of stunted children living in Africa and Asia. 42% of the world's undernourished children live in India alone.
The evidence presented in the report[21][22] shows that the window of opportunity for improving nutrition spans the 1,000 days between conception and a child's second birthday (that is, the period from -9 to +24 months). Children who do not receive adequate nutrition during this period face increased risks of lifelong damage, including poor physical and cognitive development, poor health, and even early death. The consequences of malnutrition that occurs after 24 months of a child's life are, by contrast, largely reversible.[6]
Gomez
In 1956, Gómez and Galván studied factors associated with death in a group of malnourished children in a hospital in Mexico City, Mexico, and defined categories of malnutrition: first, second, and third degree.[29] The degrees were based on weight below a specified percentage of median weight for age.[30] The risk of death increases with increasing degree of malnutrition.[29]
An adaptation of Gómez's original classification is still used today. While it provides a way to compare malnutrition within and between populations, the classification has been criticized for being "arbitrary" and for not considering overweight as a form of malnutrition. Also, height alone may not be the best indicator of malnutrition; children who are born prematurely may be considered short for their age even if they have good nutrition.[31]
Degree of PEM (% of desired body weight for age and sex):
Normal: 90%-100%
Mild, Grade I (1st degree): 75%-89%
Moderate, Grade II (2nd degree): 60%-74%
Severe, Grade III (3rd degree): <60%

SOURCE: "Serum Total Protein and Albumin Levels in Different Grades of Protein Energy Malnutrition"[28]
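
A minimal Python sketch applying the Gomez cut-offs listed above to a weight-for-age percentage; the function name and example value are illustrative only.

def gomez_grade(percent_of_median_weight_for_age):
    # Cut-offs taken from the Gomez classification listed above.
    p = percent_of_median_weight_for_age
    if p >= 90:
        return "Normal"
    if p >= 75:
        return "Mild (Grade I)"
    if p >= 60:
        return "Moderate (Grade II)"
    return "Severe (Grade III)"

# Example: a child at 68% of the median weight for age and sex.
print(gomez_grade(68))   # Moderate (Grade II)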

Waterlow
John Conrad Waterlow established a new classification for malnutrition.[32] Instead of using just weight-for-age measurements, the classification established by Waterlow combines weight-for-height (indicating acute episodes of malnutrition) with height-for-age to show the stunting that results from chronic malnutrition.[33] One advantage of the Waterlow classification over the Gomez classification is that weight for height can be examined even if ages are not known.[32]

Degree of PEM, with stunting (% height-for-age) and wasting (% weight-for-height):
Normal, Grade 0: stunting >95%, wasting >90%
Mild, Grade I: stunting 87.5%-95%, wasting 80%-90%
Moderate, Grade II: stunting 80%-87.5%, wasting 70%-80%
Severe, Grade III: stunting <80%, wasting <70%

SOURCE: "Classification and definition of protein-calorie malnutrition" by Waterlow, 1972[32]
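
A minimal Python sketch of the two-dimensional Waterlow grading above, returning separate stunting and wasting grades; the function names and example values are illustrative.

def waterlow_stunting(height_for_age_pct):
    # Height-for-age: chronic malnutrition (stunting).
    h = height_for_age_pct
    if h > 95:   return "Grade 0 (normal)"
    if h > 87.5: return "Grade I (mild)"
    if h > 80:   return "Grade II (moderate)"
    return "Grade III (severe)"

def waterlow_wasting(weight_for_height_pct):
    # Weight-for-height: acute malnutrition (wasting).
    w = weight_for_height_pct
    if w > 90:   return "Grade 0 (normal)"
    if w > 80:   return "Grade I (mild)"
    if w > 70:   return "Grade II (moderate)"
    return "Grade III (severe)"

# Example: a child at 85% height-for-age and 75% weight-for-height.
print(waterlow_stunting(85), "|", waterlow_wasting(75))
# Grade II (moderate) | Grade II (moderate)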

These classifications of malnutrition are commonly used with some modifications by WHO.[30]

Policy document of health sector reforms

Factorial Randomized Control Trials

Strategic planning

Health planning models

21. Box and whisker plot


Statistics assumes that your data points (the numbers in your list) are clustered around some central
value. The "box" in the box-and-whisker plot contains, and thereby highlights, the middle half of these
data points.
To create a box-and-whisker plot, you start by ordering your data (putting the values in numerical order), if
they aren't ordered already. Then you find the median of your data. The median divides the data into two
halves. To divide the data into quarters, you then find the medians of these two halves. Note: If you have
an even number of values, so the first median was the average of the two middle values, then you include
the middle values in your sub-median computations. If you have an odd number of values, so the first
median was an actual data point, then you do not include that value in your sub-median computations.
That is, to find the sub-medians, you're only looking at the values that haven't yet been used.
You have three points: the first middle point (the median), and the middle points of the two halves (what I
call the "sub-medians"). These three points divide the entire data set into quarters, called "quartiles". The
top point of each quartile has a name, being a "Q" followed by the number of the quarter. So the top point
of the first quarter of the data points is "Q1", and so forth. Note that Q1 is also the middle number for the
first half of the list, Q2 is also the middle number for the whole list, Q3 is the middle number for the second
half of the list, and Q4 is the largest value in the list.

Once you have these three points, Q1, Q2, and Q3, you have all you need in order to draw a simple box-and-whisker plot. Here's an example of how it works.

Draw a box-and-whisker plot for the following data set:

4.3, 5.1, 3.9, 4.5, 4.4, 4.9, 5.0, 4.7, 4.1, 4.6, 4.4, 4.3, 4.8, 4.4, 4.2, 4.5, 4.4

My first step is to order the set. This gives me:

3.9, 4.1, 4.2, 4.3, 4.3, 4.4, 4.4, 4.4, 4.4, 4.5, 4.5, 4.6, 4.7, 4.8, 4.9, 5.0, 5.1
The first number I need is the median of the entire set. Since there are seventeen values in this
list, I need the ninth value:

3.9, 4.1, 4.2, 4.3, 4.3, 4.4, 4.4, 4.4, 4.4, 4.5, 4.5, 4.6, 4.7, 4.8, 4.9, 5.0, 5.1
The median is

Q2 = 4.4.

The next two numbers I need are the medians of the two halves. Since I used the "4.4" in the
middle of the list, I can't re-use it, so my two remaining data sets are:

3.9, 4.1, 4.2, 4.3, 4.3, 4.4, 4.4, 4.4 and 4.5, 4.5, 4.6, 4.7, 4.8, 4.9, 5.0, 5.1
The first half has eight values, so the median is the average of the middle two:

Q1 = (4.3 + 4.3)/2 = 4.3


The median of the second half is:


Q3 = (4.7 + 4.8)/2 = 4.75


Since my list values have one decimal place and range from 3.9 to 5.1, I won't use a scale of, say, zero to ten, marked off by ones. Instead, I'll draw a number line from 3.5 to 5.5, and mark off by tenths.
Now I'll mark off the minimum and maximum values, and Q1, Q2, and Q3. The "box" part of the plot goes from Q1 to Q3, and then the "whiskers" are drawn out to the endpoints (the minimum and maximum).

By the way, box-and-whisker plots don't have to be drawn horizontally as I did above; they can be vertical,
too.
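
The quartile method walked through above can be checked with a short Python sketch; this is a minimal illustration that reuses the example data and the exclude-the-median convention described in the text.

data = [4.3, 5.1, 3.9, 4.5, 4.4, 4.9, 5.0, 4.7, 4.1,
        4.6, 4.4, 4.3, 4.8, 4.4, 4.2, 4.5, 4.4]

def median(values):
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

s = sorted(data)
n = len(s)
q2 = median(s)
# Odd count: drop the middle value before splitting into halves,
# exactly as in the worked example above.
lower = s[:n // 2]
upper = s[(n + 1) // 2:] if n % 2 else s[n // 2:]
q1, q3 = median(lower), median(upper)
print(q1, q2, q3)   # 4.3 4.4 4.75

For drawing, matplotlib's plt.boxplot(data) would produce the corresponding figure, though its default quartile rule (linear interpolation) can differ slightly from this hand method.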

22. Validity and reliability

In logic, an argument is valid if and only if its conclusion is logically entailed by its
premises. A formula is valid if and only if it is true under every interpretation, and
an argument form (or schema) is valid if and only if every argument of that logical
form is valid.

EXPLORING RELIABILITY IN ACADEMIC ASSESSMENT
Written by Colin Phelan and Julie Wren, Graduate Assistants, UNI Office of Academic Assessment (2005-06)

Reliability is the degree to which an assessment tool produces stable and consistent results.

Types of Reliability

1. Test-retest reliability is a measure of reliability obtained by administering the same test twice over a period of time to a group of individuals. The scores from Time 1 and Time 2 can then be correlated in order to evaluate the test for stability over time.
Example: A test designed to assess student learning in psychology could be given to a group of students twice, with the second administration perhaps coming a week after the first. The obtained correlation coefficient would indicate the stability of the scores.

2. Parallel forms reliability is a measure of reliability obtained by administering different versions of an assessment tool (both versions must contain items that probe the same construct, skill, knowledge base, etc.) to the same group of individuals. The scores from the two versions can then be correlated in order to evaluate the consistency of results across alternate versions.
Example: If you wanted to evaluate the reliability of a critical thinking assessment, you might create a large set of items that all pertain to critical thinking and then randomly split the questions up into two sets, which would represent the parallel forms.
3. Inter-rater reliability is a measure of reliability used to assess the degree to which different judges or raters agree in their assessment decisions. Inter-rater reliability is useful because human observers will not necessarily interpret answers the same way; raters may disagree as to how well certain responses or material demonstrate knowledge of the construct or skill being assessed.
Example: Inter-rater reliability might be employed when different judges are evaluating the degree to which art portfolios meet certain standards. Inter-rater reliability is especially useful when judgments can be considered relatively subjective. Thus, the use of this type of reliability would probably be more likely when evaluating artwork as opposed to math problems.
4. Internal consistency reliability is a measure of reliability used to evaluate the degree to which different test items that probe the same construct produce similar results.

a. Average inter-item correlation is a subtype of internal consistency reliability. It is obtained by taking all of the items on a test that probe the same construct (e.g., reading comprehension), determining the correlation coefficient for each pair of items, and finally taking the average of all of these correlation coefficients. This final step yields the average inter-item correlation.

b. Split-half reliability is another subtype of internal consistency reliability. The process of obtaining split-half reliability is begun by splitting in half all items of a test that are intended to probe the same area of knowledge (e.g., World War II) in order to form two sets of items. The entire test is administered to a group of individuals, the total score for each set is computed, and finally the split-half reliability is obtained by determining the correlation between the two total set scores.
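
A minimal Python sketch of the split-half idea just described: scores on the two item sets are totalled per student and correlated. The data and the use of numpy.corrcoef are illustrative, and this stops at the raw correlation without any length correction.

import numpy as np

# Item scores for 6 students on a 6-item test (rows = students), illustrative data.
items = np.array([
    [4, 5, 3, 4, 5, 4],
    [2, 3, 2, 3, 2, 3],
    [5, 5, 4, 5, 4, 5],
    [3, 2, 3, 3, 3, 2],
    [4, 4, 5, 4, 5, 5],
    [1, 2, 1, 2, 1, 2],
])

half_a = items[:, ::2].sum(axis=1)   # total score on odd-numbered items
half_b = items[:, 1::2].sum(axis=1)  # total score on even-numbered items

split_half_r = np.corrcoef(half_a, half_b)[0, 1]
print(f"split-half reliability (correlation) = {split_half_r:.2f}")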

Validity refers to how well a test measures what it is purported to measure.

Why is it necessary?
While reliability is necessary, it alone is not sufficient: a test can be reliable without being valid. For example, if your scale is off by 5 lbs, it reads your weight every day with an excess of 5 lbs. The scale is reliable because it consistently reports the same weight every day, but it is not valid because it adds 5 lbs to your true weight. It is not a valid measure of your weight.
Types of Validity
1. Face validity ascertains that the measure appears to be assessing the intended construct under study. The stakeholders can easily assess face validity. Although this is not a very scientific type of validity, it may be an essential component in enlisting the motivation of stakeholders. If the stakeholders do not believe the measure is an accurate assessment of the ability, they may become disengaged from the task.
Example: If a measure of art appreciation is created, all of the items should be related to the different components and types of art. If the questions are about historical time periods, with no reference to any artistic movement, stakeholders may not be motivated to give their best effort or invest in this measure because they do not believe it is a true assessment of art appreciation.
2. Construct validity is used to ensure that the measure is actually measuring what it is intended to measure (i.e. the construct), and not other variables. Using a panel of experts familiar with the construct is one way in which this type of validity can be assessed. The experts can examine the items and decide what each specific item is intended to measure. Students can be involved in this process to obtain their feedback.
Example: A women's studies program may design a cumulative assessment of learning throughout the major. If the questions are written with complicated wording and phrasing, the test can inadvertently become a test of reading comprehension rather than a test of women's studies. It is important that the measure is actually assessing the intended construct, rather than an extraneous factor.
3. Criterion-related validity is used to predict future or current performance: it correlates test results with another criterion of interest.
Example: A physics program might design a measure to assess cumulative student learning throughout the major. The new measure could be correlated with a standardized measure of ability in this discipline, such as an ETS field test or the GRE subject test. The higher the correlation between the established measure and the new measure, the more faith stakeholders can have in the new assessment tool.
4. Formative validity, when applied to outcomes assessment, is used to assess how well a measure is able to provide information to help improve the program under study.
Example: When designing a rubric for history, one could assess students' knowledge across the discipline. If the measure can provide information that students are lacking knowledge in a certain area, for instance the Civil Rights Movement, then that assessment tool is providing meaningful information that can be used to improve the course or program requirements.

5. Sampling validity (similar to content validity) ensures that the measure covers the broad range of areas within the concept under study. Not everything can be covered, so items need to be sampled from all of the domains. This may need to be completed using a panel of experts to ensure that the content area is adequately sampled. Additionally, a panel can help limit expert bias (i.e. a test reflecting what an individual personally feels are the most important or relevant areas).
Example: When designing an assessment of learning in the theatre department, it would not be sufficient to only cover issues related to acting. Other areas of theatre, such as lighting, sound, and the functions of stage managers, should all be included. The assessment should reflect the content area in its entirety.
What are some ways to improve validity?
1. Make sure your goals and objectives are clearly defined and operationalized. Expectations of students should be written down.
2. Match your assessment measure to your goals and objectives. Additionally, have the test reviewed by faculty at other schools to obtain feedback from an outside party who is less invested in the instrument.
3. Get students involved; have the students look over the assessment for troublesome wording or other difficulties.
4. If possible, compare your measure with other measures, or data that may be available.

23. Skewed curves: where the mean, median and mode lie

In probability theory and statistics, skewness is a measure of the asymmetry of the probability distribution of a real-valued random variable about its mean. The skewness value can be positive or negative, or even undefined.
The qualitative interpretation of the skew is complicated. For a unimodal distribution, negative skew indicates that the tail on the left side of the probability density function is longer or fatter than the right side; it does not distinguish these shapes. Conversely, positive skew indicates that the tail on the right side is longer or fatter than the left side. In cases where one tail is long but the other tail is fat, skewness does not obey a simple rule. For example, a zero value indicates that the tails on both sides of the mean balance out, which is the case both for a symmetric distribution and for asymmetric distributions where the asymmetries even out, such as one tail being long but thin and the other being short but fat. Further, in multimodal distributions and discrete distributions, skewness is also difficult to interpret. Importantly, the skewness does not determine the relationship of mean and median.
Skewed curves are asymmetrical curves; their skewness is caused by "outliers". (An outlier is a number that's much smaller or much larger than all the other numbers in a data set.) One or just a few outliers in a data set can cause these curves to have a "tail". Data is not normally distributed in skewed curves.
USMLE high-yield concepts include knowing, for example, whether the mean is less than or more than the mode when a curve is skewed positively, or what happens to the mean, median and mode if the largest number in a data set is removed (i.e., if an outlier is removed).

Core concepts of skewed curves

Skewed curves are asymmetrical curves: they "skew" negatively (the tail points left) or positively (the tail points right). Skewed curves NEVER have the mean, median and mode in the same location. This is distinctly different from the bell curve, which is symmetrical.

Also, a negatively skewed curve can consist entirely of positive numbers, and a positively skewed curve can consist entirely of negative numbers. "Positive" and "negative" give you the direction of the curve's tail and the direction that numbers are moving on the x-axis.

Negative skew: the tail points in the negative direction. Numbers on the x-axis under the tail are less than the numbers under the hump; negatively skewed curves do NOT necessarily contain negative numbers.

Positive skew: the tail points in the positive direction. Numbers on the x-axis under the tail are greater than the numbers under the hump; positively skewed curves do NOT necessarily contain positive numbers.
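
To see how an outlier drags the mean past the median, here is a minimal Python sketch; the data are invented and the skewness is computed directly from its standard moment definition rather than from any particular library routine.

import numpy as np

# A mostly symmetric data set with one large outlier pulling the tail right.
x = np.array([4, 5, 5, 6, 6, 6, 7, 7, 8, 40], dtype=float)

mean, median = x.mean(), np.median(x)
# Moment-based (population) skewness: average cubed z-score.
z = (x - mean) / x.std()
skewness = (z ** 3).mean()

print(f"mean={mean:.1f}  median={median:.1f}  skewness={skewness:.2f}")
# The mean (9.4) exceeds the median (6.0): a positively skewed data set.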

33. Correlation, types of correlation and its interpretation

34. What is a correlation?

Thus far we've covered the key descriptive statistics (the mean, median, mode, and standard deviation) and we've learned how to test the difference between means. But often we want to know how two things (usually called "variables" because they vary from high to low) are related to each other. For example, we might want to know whether reading scores are related to math scores, i.e., whether students who have high reading scores also have high math scores, and vice versa. The statistical technique for determining the degree to which two variables are related (i.e., the degree to which they co-vary) is, not surprisingly, called correlation.
There are several different types of correlation, and we'll talk about them later, but in this lesson we're going to spend most of the time on the most commonly used type of correlation: the Pearson product moment correlation. This correlation, signified by the symbol r, ranges from -1.00 to +1.00. A correlation of 1.00, whether it's positive or negative, is a perfect correlation. It means that as scores on one of the two variables increase or decrease, the scores on the other variable increase or decrease by the same magnitude, something you'll probably never see in the real world. A correlation of 0 means there's no relationship between the two variables, i.e., when scores on one of the variables go up, scores on the other variable may go up, down, or whatever. You'll see a lot of those.
Thus, a correlation of .8 or .9 is regarded as a high correlation, i.e., there is a very close relationship between scores on one of the variables and scores on the other. Correlations of .2 or .3 are regarded as low correlations, i.e., there is some relationship between the two variables, but it's a weak one. Knowing people's scores on one variable wouldn't allow you to predict their scores on the other.
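
A minimal Python sketch of the Pearson product moment correlation for the reading/math example mentioned above; the scores are invented for illustration and numpy.corrcoef is used for the computation.

import numpy as np

# Invented reading and math scores for 8 students.
reading = np.array([62, 70, 75, 80, 85, 88, 92, 95], dtype=float)
math    = np.array([58, 66, 72, 79, 81, 90, 89, 97], dtype=float)

r = np.corrcoef(reading, math)[0, 1]
print(f"Pearson r = {r:.2f}")   # close to +1: a strong positive correlation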

Population pyramid, baby boom, what advantages are currently available to baby boomers; echo boom?

A population pyramid, also called an age pyramid or age picture diagram, is a graphical
illustration that shows the distribution of various age groups in a population (typically that of
a country or region of the world), which forms the shape of a pyramid when the population is
growing.[1] It is also used in ecology to determine the overall age distribution of a population;
an indication of the reproductive capabilities and likelihood of the continuation of a species.
It typically consists of two back-to-back bar graphs, with the population plotted on the X-axis
and age on the Y-axis, one showing the number of males and one showing females in a
particular population in five-year age groups (also called cohorts). Males are conventionally
shown on the left and females on the right, and they may be measured by raw number or as a
percentage of the total population.
Population pyramids are often viewed as the most effective way to graphically depict the age
and sex distribution of a population, partly because of the very clear image these pyramids
present.[2]
A great deal of information about the population broken down by age and sex can be read
from a population pyramid, and this can shed light on the extent of development and other
aspects of the population. A population pyramid also tells how many people of each age
range live in the area. There tends to be more females than males in the older age groups, due
to females' longer life expectancy.
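
As a quick illustration of the back-to-back bar construction just described, here is a minimal matplotlib sketch; the age groups and counts are invented, and males are plotted as negative values so their bars extend to the left.

import matplotlib.pyplot as plt

# Invented population counts (thousands) by five-year age group.
ages    = ["0-4", "5-9", "10-14", "15-19", "20-24", "25-29", "30-34"]
males   = [420, 400, 380, 350, 330, 300, 270]
females = [405, 390, 375, 355, 340, 315, 290]

y = range(len(ages))
plt.barh(y, [-m for m in males], color="steelblue", label="Males")
plt.barh(y, females, color="salmon", label="Females")
plt.yticks(y, ages)
plt.xlabel("Population (thousands): males left, females right")
plt.legend()
plt.title("Population pyramid (illustrative data)")
plt.show()
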
A baby boom is any period marked by a greatly increased birth rate. This demographic phenomenon is usually ascribed within certain geographical bounds. People born during such a period are often called baby boomers; however, some experts distinguish between those born during such demographic baby booms and those who identify with the overlapping cultural generations. Conventional wisdom states that baby booms signify good times and periods of general economic growth and stability; however, in circumstances where baby booms lead to very large numbers of children per family unit, such as in lower-income regions of the world, the outcome may be different. One well-known baby boom occurred right after World War II, during the early Cold War period.

35. Population momentum: what is it and why is it difficult to control?

Population Momentum Across the Demographic Transition


A typical consequence of the demographic transition (a population's shift from high mortality and high fertility to low mortality and low fertility) is a period of robust population growth. This
growth occurs once survival has improved but before fertility has fallen to or below replacement
level, so that the birth rate substantially exceeds the death rate. During the second half of the
twentieth century, the world experienced unprecedented population growth as developing
countries underwent a demographic transition. It was during this period that Nathan Keyfitz
demonstrated how an immediate drop to replacement fertility in high-fertility populations could
still result in decades of population growth. Building on work by Paul Vincent (1945), he called
this outcome "population momentum". Keyfitz wrote, "The phenomenon occurs because a history of high fertility has resulted in a high proportion of women in the reproductive ages, and these ensure high crude birth rates long after the age-specific rates have dropped" (Keyfitz 1971:
71).
For societies today that have not yet completed their demographic transitions, population
momentum is still expected to contribute significantly to future growth, as relatively large
cohorts of children enter their reproductive years and bear children. John Bongaarts (1994, 1999) calculated that population momentum will account for about half of the developing world's
projected twenty-first-century population growth. However, even though momentum is a useful
concept precisely because of the non-stationary age structures that exist in populations in the
midst of demographic transition, no research has examined trends in momentum or documented
the highly regular pattern of population momentum across the demographic transition. This
article sets out to do so.
We describe the arc of population momentum over time in 16 populations: five in the now-developed world and 11 in the developing world. Because population momentum identifies the cumulative future contribution of today's age distribution to a population's growth and size,
adding momentum to our understanding of demographic transition means that we do not treat
changes in age distribution merely as a consequence of demographic transition, as is usually the
case (Lee 2003). Instead, we also illustrate the impact that these age-distribution changes have
themselves had in producing key features of the demographic transition. Age composition exerts
an independent influence on crude birth and crude death rates so that for given vital rate
schedules, population growth rates are typically highest in those populations with a middle-heavy age distribution. During demographic transition (or even during a demographic crisis), any change in a population's age distribution will have repercussions for future population
growth potential and future population size.
We also trace the course of two recently defined measures of population momentum.
Espenshade, Olgiati, and Levin (2011) decompose total momentum into two constituent and
multiplicative parts: stable momentum measures deviations between the stable age distribution
implied by the population's mortality and fertility and the stationary age distribution implied by the population's death rates; and nonstable momentum measures deviations between the
observed population age distribution and the implied stable age distribution.
To understand the usefulness of stable and nonstable momentum, consider the case of a
population with unchanging vital rates. Over time, stable momentum remains constant as both
the stable age distribution and the stationary age distribution are unchanging. In this sense we
may consider stable momentum to be the permanent component of population momentum; it
persists as long as mortality and fertility do not change. In contrast, nonstable momentum in this
population gradually becomes weaker and eventually vanishes as the population's age
distribution conforms to the stable age distribution. In this sense we may consider nonstable
momentum to be the temporary or transitory component of population momentum. Of course,
most populations exhibit some year-to-year fluctuation in fertility and mortality, so in empirical
analyses we commonly observe concurrent changes in both the permanent and the temporary
components of momentum. Nevertheless, how overall momentum is composed and what part is
contributed by stable versus nonstable momentum have implications for future population
growth or decline.1
In showing patterns over time in total population momentum, stable momentum, and nonstable
momentum, we pursue three distinct ends. First and most simply, we trace how momentum
dynamics have historically unfolded, not only across demographic transitions but also in the
midst of fertility swings and other demographic cycles. This is a straightforward task that has not
yet been undertaken. Second, we demonstrate some previously ignored empirical regularities of
the demographic transition, as it has occurred around the globe and at various times over the last
three centuries. Third, although population momentum is by definition a static measure, our
results suggest that momentum can also be considered a dynamic process. Across the
demographic transition, momentum typically increases and then decreases as survival first
improves and fertility rates later fall. This dynamic view of momentum is further supported by
trends in stable and nonstable momentum. A change in stable momentum induced by a change in
fertility will initiate a demographic chain reaction that affects nonstable momentum both
immediately and subsequently.

The demographic transition


Historical roots
Demographic transition first occurred in Europe: in parts of the continent, death rates began a
steady decline at some point during the seventeenth or eighteenth century. Because the
transitions occurred before the age of reliable vital statistics, the causes of these earliest mortality
declines are unclear. By the early nineteenth century, as industrialization took hold and paved the
way for even greater advances in health, mortality crises became less common in England,
France, and other parts of northern and western Europe (Vallin 1991; Livi-Bacci 2007). Child
survival was improving, and life expectancy at birth was inching upward.
As a result of these early mortality declines, the population of Europe began a long period of
robust growth, also beginning sometime in the seventeenth or eighteenth century. Although death
rates were declining, birth rates remained more or less stable, or at least they declined much

more slowly, so that year after year, for decades if not centuries, the number of births exceeded
the number of deaths by a substantial margin. In 1700 the population of Europe was an estimated
30 million. By 1900 it had more than quadrupled to 127 million (Livi-Bacci 2007). Europeans
also migrated to North America and Australia by the millions. The population continued to grow
despite this out-migration, since most of Europe did not experience substantial declines in the
number of children per woman until sometime in the late nineteenth or early twentieth century.
Fertility reached replacement in many parts of Europe around the mid-twentieth century, and
since then has fallen well below replacement in much of the continent.
Demographic transition has occurred much faster in the developing world than it did in Europe.
In 1950-55, for example, life expectancy at birth in India was about 38 years for both sexes
combined; 15 years later, life expectancy was nearly 47 (United Nations 2009b). Over the same
period in Kenya, life expectancy rose from 42 to 51 years, while in Mexico it rose from 51 to 60
(United Nations 2009b). This rapid mortality decline, brought about in part by technology
adopted from the West and accompanied initially by little or no decrease in fertility, led not to the
long period of steady population expansion that Europe experienced starting more than a century
earlier, but rather to rapid population growth, especially in the third quarter of the twentieth
century. Following World War II, developing countries grew at an average annual rate of more
than 2 percent, with some countries posting yearly population gains of more than 3 or even 4
percent, as in Ivory Coast, Jordan, and Libya (United Nations 2009b).
Unlike in Europe, rapid fertility decline often followed within just a few decades. Although much
of sub-Saharan Africa still has fertility well above replacement, most of the rest of the world
appears to have completed the demographic transition. Today every country in East Asia has sub-replacement fertility, and even in countries like Bangladesh and Indonesia, once the cause of
much hand-wringing among population-control advocates (Connelly 2008: 11, 305), fertility is
now barely above replacement (United Nations 2009b). The concept of a demographic transition
therefore describes developing-world experience about as well as it seems to have portrayed
earlier developed-world experience. The major differences between these two situations are the
speed of mortality decline, the speed of fertility decline, and, as has received most attention both
then and now, the rate of population growth. Today it is very unusual to see the kind of
population doubling times (in some cases less than 20 years) that were so alarming to
policymakers and scholars throughout the 1960s and 1970s (Ehrlich 1968).

36. ROC curves: why are they used, how is the AUC determined, and what does 1-specificity mean?

In statistics, a receiver operating characteristic (ROC), or ROC curve, is a graphical plot that illustrates the performance of a binary classifier system as its discrimination threshold is varied. The curve is created by plotting the true positive rate against the false positive rate at various threshold settings. (The true-positive rate is also known as sensitivity in biomedical informatics, or recall in machine learning. The false-positive rate is also known as the fall-out and can be calculated as 1 - specificity.) The ROC curve is thus the sensitivity as a function of fall-out. In general, if the probability distributions for both detection and false alarm are known, the ROC curve can be generated by plotting the cumulative distribution function (area under the probability distribution from minus infinity to the discrimination threshold) of the detection probability on the y-axis versus the cumulative distribution function of the false-alarm probability on the x-axis.

ROC analysis provides tools to select possibly optimal models and to discard suboptimal
ones independently from (and prior to specifying) the cost context or the class
distribution. ROC analysis is related in a direct and natural way to cost/benefit analysis of
diagnostic decision making.
The ROC curve was first developed by electrical engineers and radar engineers during
World War II for detecting enemy objects in battlefields and was soon introduced to
psychology to account for perceptual detection of stimuli. ROC analysis since then has
been used in medicine, radiology, biometrics, and other areas for many decades and is
increasingly used in machine learning and data mining research.
The ROC is also known as a relative operating characteristic curve, because it is a comparison of two operating characteristics (TPR and FPR) as the criterion changes.[1]
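
A minimal Python sketch of how a ROC curve and its AUC can be computed from scores and labels; the data are invented, and trapezoidal integration is one common choice rather than a prescribed method.

import numpy as np

# Invented classifier scores and true labels (1 = diseased, 0 = healthy).
scores = np.array([0.95, 0.90, 0.80, 0.70, 0.60, 0.55, 0.40, 0.30, 0.20, 0.10])
labels = np.array([1,    1,    1,    0,    1,    0,    0,    1,    0,    0   ])

# Sweep the threshold over every observed score, from highest to lowest.
thresholds = np.sort(np.unique(scores))[::-1]
tpr = [np.mean(scores[labels == 1] >= t) for t in thresholds]   # sensitivity
fpr = [np.mean(scores[labels == 0] >= t) for t in thresholds]   # 1 - specificity

# Prepend the (0, 0) corner and integrate with the trapezoidal rule.
fpr, tpr = np.concatenate(([0.0], fpr)), np.concatenate(([0.0], tpr))
auc = np.trapz(tpr, fpr)
print(f"AUC = {auc:.2f}")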

38. Type I and II errors: how to minimize them?

Type I error is often referred to as a 'false positive', and is the process of incorrectly
rejecting the null hypothesis in favor of the alternative. In the example of an HIV test, the null hypothesis refers to the natural state of things, stating that the patient is not HIV positive.
The alternative hypothesis states that the patient does carry the virus. A Type I error would
indicate that the patient has the virus when they do not, a false rejection of the null.

Type II Error
A Type II error is the opposite of a Type I error and is the false acceptance of the null
hypothesis. A Type II error, also known as a false negative, would imply that the patient is
free of HIV when they are not, a dangerous diagnosis.
In most fields of science, Type II errors are not seen to be as problematic as a Type I error.
With the Type II error, a chance to reject the null hypothesis was lost, and no conclusion is
inferred from a non-rejected null. The Type I error is more serious, because you have
wrongly rejected the null hypothesis.
Medicine, however, is one exception; telling a patient that they are free of disease, when
they are not, is potentially dangerous.
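
One practical way to reason about minimizing these errors is to simulate them: the Type I rate is controlled directly by the chosen significance level, while the Type II rate falls as the sample size (power) rises. The following Python sketch is illustrative only; the effect size, sample sizes and use of scipy's t-test are assumptions, not part of the text above.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, trials = 0.05, 2000

def rejection_rate(effect, n):
    # Fraction of simulated two-sample t-tests that reject the null at `alpha`.
    rejections = 0
    for _ in range(trials):
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(effect, 1.0, n)
        if stats.ttest_ind(a, b).pvalue < alpha:
            rejections += 1
    return rejections / trials

print("Type I error rate (no true effect):", rejection_rate(0.0, 30))   # about alpha
for n in (20, 80):
    power = rejection_rate(0.5, n)
    print(f"n={n}: power={power:.2f}, Type II error rate={1 - power:.2f}")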

h) Legionnaire's disease: causes, how to prevent and treat?
i) Growth chart (other name, interpretation, how to prepare and present)
j) Global warming: what policies and strategies to reduce it?
k) Hidden hunger: what types of micronutrient deficiency can occur in children?
l) Types of accidents and what strategies to control them?
m) Polio, GPEI, traveller restrictions for polio, what policies and strategies for polio?

n) Plague: types, D/D of bubonic plague, control?

12. Within one year, what measures can be taken to reduce MMR?

13. How will you evaluate the cost of an intervention, and what types of cost analysis techniques are there?

14. What is surveillance, what are its types, and what are the criteria for conducting surveillance?

Surveillance (/sərˈveɪ.əns/ or /sərˈveɪləns/)[1] is the monitoring of the behavior,
activities, or other changing information, usually of people, for the purpose of influencing,
managing, directing, or protecting them.[2] This can include observation from a distance
by means of electronic equipment (such as CCTV cameras), or interception of
electronically transmitted information (such as Internet traffic or phone calls); and it can
include simple, relatively no- or low-technology methods such as human intelligence
agents and postal interception. The word surveillance comes from a French phrase for
"watching over" ("sur" means "from above" and "veiller" means "to watch"), and is in
contrast to more recent developments such as sousveillance.[3][4][5]
Surveillance is used for intelligence gathering, the prevention of crime, the protection of
a process, person, group or object, or for the investigation of crime. Surveillance can
achieve this by three means: by deterrence, by observation and by reconstruction.
Surveillance can deter by increasing the chance of being caught, and by revealing the
modus operandi and accomplices. This requires a minimal level of invasiveness.[6]
Surveillance can detect by giving human operatives accurate and live situational
awareness, and / or through the use of automated processes, i.e. video analytics.
Surveillance can help reconstruct an incident through the availability of footage for
forensics experts, perhaps again helped by video analytics. Surveillance can also
influence subjective security if surveillance resources are visible or if the consequences
of surveillance can be felt. In order to determine whether surveillance technology is
actually improving surveillance, the effectiveness of surveillance must be expressed in
terms of these higher purposes.
With the advent of programs such as the Total Information Awareness program and
ADVISE, technologies such as high speed surveillance computers and biometrics
software, and laws such as the Communications Assistance for Law Enforcement Act,
governments now possess an unprecedented ability to monitor the activities of their
subjects.[7] Many civil rights and privacy groups, such as the Electronic Frontier
Foundation and American Civil Liberties Union, have expressed concern that by allowing
continual increases in government surveillance of citizens we will end up in a mass
surveillance society, with extremely limited, or non-existent political and/or personal
freedoms. Fears such as this have led to numerous lawsuits such as Hepting v. AT&T.[7][8]

Bhopal gas tragedy: later outcomes and effects


The Bhopal disaster and its aftermath: a review
Abstract
On December 3 1984, more than 40 tons of methyl isocyanate
gas leaked from a pesticide plant in Bhopal, India, immediately
killing at least 3,800 people and causing significant morbidity and
premature death for many thousands more. The company
involved in what became the worst industrial accident in history
immediately tried to dissociate itself from legal responsibility.
Eventually it reached a settlement with the Indian Government
through mediation of that country's Supreme Court and accepted
moral responsibility. It paid $470 million in compensation, a
relatively small amount based on significant underestimations
of the long-term health consequences of exposure and the
number of people exposed. The disaster indicated a need for
enforceable international standards for environmental safety,
preventative strategies to avoid similar accidents and industrial
disaster preparedness.
Since the disaster, India has experienced rapid industrialization.
While some positive changes in government policy and behavior
of a few industries have taken place, major threats to the
environment from rapid and poorly regulated industrial growth
remain. Widespread environmental degradation with significant adverse human health
consequences continues to occur throughout India.
December 2004 marked the twentieth anniversary of the massive
toxic gas leak from Union Carbide Corporation's chemical plant in
Bhopal in the state of Madhya Pradesh, India, that killed more than 3,800 people.
This review examines the health effects of exposure to the disaster, the legal response, the lessons learned
and whether or not these are put into practice in India in terms of
industrial development, environmental management and public
health.
History
In the 1970s, the Indian government initiated policies to
encourage foreign companies to invest in local industry. Union
Carbide Corporation (UCC) was asked to build a plant for the
manufacture of Sevin, a pesticide commonly used throughout
Asia. As part of the deal, India's government insisted that a
significant percentage of the investment come from local
shareholders. The government itself had a 22% stake in the
company's subsidiary, Union Carbide India Limited (UCIL) [1]. The
company built the plant in Bhopal because of its central location
and access to transport infrastructure. The specific site within the
city was zoned for light industrial and commercial use, not for
hazardous industry. The plant was initially approved only for
formulation of pesticides from component chemicals, such as MIC
imported from the parent company, in relatively small quantities.
However, pressure from competition in the chemical industry led
UCIL to implement "backward integration": the manufacture of
raw materials and intermediate products for formulation of the
final product within one facility. This was inherently a more
sophisticated and hazardous process [2].
In 1984, the plant was manufacturing Sevin at one quarter of its production
capacity due to decreased demand for pesticides. Widespread crop failures and
famine on the subcontinent in the 1980s led to increased indebtedness and
decreased capital for farmers to invest in pesticides. Local managers were directed

to close the plant and prepare it for sale in July 1984 due to decreased profitability
[3]. When no ready buyer was found, UCIL made plans to dismantle key
production units of the facility for shipment to another developing country. In the
meantime, the facility continued to operate with safety equipment and procedures
far below the standards found in its sister plant in Institute, West Virginia. The
local government was aware of safety problems but was reticent to place heavy
industrial safety and pollution control burdens on the struggling industry because
it feared the economic effects of the loss of such a large employer [3].
At 11.00 PM on December 2 1984, while most of the one million residents of
Bhopal slept, an operator at the plant noticed a small leak of methyl isocyanate
(MIC) gas and increasing pressure inside a storage tank. The vent-gas scrubber, a
safety device designed to neutralize toxic discharge from the MIC system, had
been turned off three weeks prior [3]. Apparently a faulty valve had allowed one
ton of water for cleaning internal pipes to mix with forty tons of MIC [1]. A 30 ton
refrigeration unit that normally served as a safety component to cool the MIC
storage tank had been drained of its coolant for use in another part of the plant [3].
Pressure and heat from the vigorous exothermic reaction in the tank continued to
build. The gas flare safety system was out of action and had been for three months.
At around 1.00 AM, December 3, loud rumbling reverberated around the plant as a
safety valve gave way sending a plume of MIC gas into the early morning air [4].
Within hours, the streets of Bhopal were littered with human corpses and the
carcasses of buffaloes, cows, dogs and birds. An estimated 3,800 people died
immediately, mostly in the poor slum colony adjacent to the UCC plant [1,5].
Local hospitals were soon overwhelmed with the injured, a crisis further
compounded by a lack of knowledge of exactly what gas was involved and what
its effects were [1]. It became one of the worst chemical disasters in history and
the name Bhopal became synonymous with industrial catastrophe [5].
Estimates of the number of people killed in the first few days by the plume from the UCC
plant run as high as 10,000, with 15,000 to 20,000 premature deaths reportedly occurring in
the subsequent two decades [6]. The Indian government reported that more than half a
million people were exposed to the gas [7].
Several epidemiological studies conducted soon after the accident showed significant
morbidity and increased mortality in the exposed population. Table 1 summarizes early and
late effects on health. These data are likely to under-represent the true extent of adverse
health effects because many exposed individuals left Bhopal immediately following the
disaster, never to return, and were therefore lost to follow-up [8].
Table 1. Health effects of the Bhopal methyl isocyanate gas leak exposure.
Aftermath
Immediately after the disaster, UCC began attempts to dissociate
itself from responsibility for the gas leak. Its principal tactic was
to shift culpability to UCIL, stating the plant was wholly built and
operated by the Indian subsidiary. It also fabricated scenarios
involving sabotage by previously unknown Sikh extremist groups
and disgruntled employees but this theory was impugned by
numerous independent sources [1].
The toxic plume had barely cleared when, on December 7, the
first multi-billion dollar lawsuit was filed by an American attorney
in a U.S. court. This was the beginning of years of legal
machinations in which the ethical implications of the tragedy and
its effect on Bhopal's people were largely ignored. In March 1985,
the Indian government enacted the Bhopal Gas Leak Disaster Act
as a way of ensuring that claims arising from the accident would
be dealt with speedily and equitably. The Act made the
government the sole representative of the victims in legal
proceedings both within and outside India. Eventually all cases

were taken out of the U.S. legal system under the ruling of the
presiding American judge and placed entirely under Indian
jurisdiction much to the detriment of the injured parties.
In a settlement mediated by the Indian Supreme Court, UCC accepted moral
responsibility and agreed to pay $470 million to the Indian government to be
distributed to claimants as a full and final settlement. The figure was partly based
on the disputed claim that only 3000 people died and 102,000 suffered permanent
disabilities [9]. Upon announcing this settlement, shares of UCC rose $2 per share
or 7% in value [1]. Had compensation in Bhopal been paid at the same rate that
asbestosis victims were being awarded in US courts by defendants (including UCC,
which mined asbestos from 1963 to 1985), the liability would have been greater
than the $10 billion the company was worth and insured for in 1984 [10]. By the
end of October 2003, according to the Bhopal Gas Tragedy Relief and
Rehabilitation Department, compensation had been awarded to 554,895 people for
injuries received and 15,310 survivors of those killed. The average amount to
families of the dead was $2,200 [9].
At every turn, UCC has attempted to manipulate, obfuscate and withhold scientific
data to the detriment of victims. Even to this date, the company has not stated
exactly what was in the toxic cloud that enveloped the city on that December night
[8]. When MIC is exposed to 200°C heat, it forms degraded MIC that contains the
more deadly hydrogen cyanide (HCN). There was clear evidence that the storage
tank temperature did reach this level in the disaster. The cherry-red color of blood
and viscera of some victims was characteristic of acute cyanide poisoning [11].
Moreover, many responded well to administration of sodium thiosulfate, an
effective therapy for cyanide poisoning but not MIC exposure [11]. UCC initially
recommended use of sodium thiosulfate but withdrew the statement later
prompting suggestions that it attempted to cover up evidence of HCN in the gas
leak. The presence of HCN was vigorously denied by UCC and was a point of
conjecture among researchers [8,11-13].

As further insult, UCC discontinued operation at its Bhopal plant following the
disaster but failed to clean up the industrial site completely. The plant continues to
leak several toxic chemicals and heavy metals that have found their way into local
aquifers. Dangerously contaminated water has now been added to the legacy left
by the company for the people of Bhopal [1,14].
Lessons learned
The events in Bhopal revealed that expanding industrialization in developing
countries without concurrent evolution in safety regulations could have
catastrophic consequences [4]. The disaster demonstrated that seemingly local
problems of industrial hazards and toxic contamination are often tied to global
market dynamics. UCC's Sevin production plant was built in Madhya Pradesh not
to avoid environmental regulations in the U.S. but to exploit the large and growing
Indian pesticide market. However the manner in which the project was executed
suggests the existence of a double standard for multinational corporations
operating in developing countries [1]. Enforceable uniform international operating
regulations for hazardous industries would have provided a mechanism for
significantly improved safety in Bhopal. Even without enforcement,
international standards could provide norms for measuring performance of
individual companies engaged in hazardous activities such as the manufacture of
pesticides and other toxic chemicals in India [15]. National governments and
international agencies should focus on widely applicable techniques for corporate
responsibility and accident prevention as much in the developing world context as
in advanced industrial nations [16]. Specifically, prevention should include risk
reduction in plant location and design and safety legislation [17].
Local governments clearly cannot allow industrial facilities to be situated within
urban areas, regardless of the evolution of land use over time. Industry and
government need to bring proper financial support to local communities so they
can provide medical and other necessary services to reduce morbidity, mortality
and material loss in the case of industrial accidents.

Public health infrastructure was very weak in Bhopal in 1984. Tap water was
available for only a few hours a day and was of very poor quality. With no
functioning sewage system, untreated human waste was dumped into two nearby
lakes, one a source of drinking water. The city had four major hospitals but there
was a shortage of physicians and hospital beds. There was also no mass casualty
emergency response system in place in the city [3]. Existing public health
infrastructure needs to be taken into account when hazardous industries choose
sites for manufacturing plants. Future management of industrial development
requires that appropriate resources be devoted to advance planning before any
disaster occurs [18]. Communities that do not possess infrastructure and technical
expertise to respond adequately to such industrial accidents should not be chosen
as sites for hazardous industry.
Since 1984
Following the events of December 3 1984 environmental awareness and activism
in India increased significantly. The Environment Protection Act was passed in
1986, creating the Ministry of Environment and Forests (MoEF) and strengthening
India's commitment to the environment. Under the new act, the MoEF was given
overall responsibility for administering and enforcing environmental laws and
policies. It established the importance of integrating environmental strategies into
all industrial development plans for the country. However, despite greater
government commitment to protect public health, forests, and wildlife, policies
geared to developing the country's economy have taken precedence in the last 20
years [19].
India has undergone tremendous economic growth in the two decades since the
Bhopal disaster. Gross domestic product (GDP) per capita has increased from
$1,000 in 1984 to $2,900 in 2004 and it continues to grow at a rate of over 8% per
year [20]. Rapid industrial development has contributed greatly to economic
growth but there has been significant cost in environmental degradation and
increased public health risks. Since abatement efforts consume a large portion of

India's GDP, MoEF faces an uphill battle as it tries to fulfill its mandate of
reducing industrial pollution [19]. Heavy reliance on coal-fired power plants and
poor enforcement of vehicle emission laws have resulted from economic concerns
taking precedence over environmental protection [19].
With the industrial growth since 1984, there has been an increase in small scale
industries (SSIs) that are clustered about major urban areas in India. There are
generally less stringent rules for the treatment of waste produced by SSIs due to
less waste generation within each individual industry. This has allowed SSIs to
dispose of untreated wastewater into drainage systems that flow directly into
rivers. New Delhi's Yamuna River is illustrative. Dangerously high levels of heavy
metals such as lead, cobalt, cadmium, chrome, nickel and zinc have been detected
in this river which is a major supply of potable water to India's capital thus posing
a potential health risk to the people living there and areas downstream [21].
Land pollution due to uncontrolled disposal of industrial solid and hazardous
waste is also a problem throughout India. With rapid industrialization, the
generation of industrial solid and hazardous waste has increased appreciably and
the environmental impact is significant [22].
India relaxed its controls on foreign investment in order to accede to WTO
rules and thereby attract an increasing flow of capital. In the process, a number of
environmental regulations are being rolled back as growing foreign investments
continue to roll in. The Indian experience is comparable to that of a number of
developing countries that are experiencing the environmental impacts of structural
adjustment. Exploitation and export of natural resources has accelerated on the
subcontinent. Prohibitions against locating industrial facilities in ecologically
sensitive zones have been eliminated while conservation zones are being stripped
of their status so that pesticide, cement and bauxite mines can be built [23]. Heavy
reliance on coal-fired power plants and poor enforcement of vehicle emission laws
are other consequences of economic concerns taking precedence over
environmental protection [19].

In March 2001, residents of Kodaikanal in southern India caught the Anglo-Dutch
company, Unilever, red-handed when they discovered a dumpsite with toxic
mercury-laced waste from a thermometer factory run by the company's Indian
subsidiary, Hindustan Lever. The 7.4 ton stockpile of mercury-laden glass was
found in torn stacks spilling onto the ground in a scrap metal yard located near a
school. In the fall of 2001, steel from the ruins of the World Trade Center was
exported to India apparently without first being tested for contamination from
asbestos and heavy metals present in the twin tower debris. Other examples of
poor environmental stewardship and economic considerations taking precedence
over public health concerns abound [24].
The Bhopal disaster could have changed the nature of the chemical industry and
caused a reexamination of the necessity to produce such potentially harmful
products in the first place. However the lessons of acute and chronic effects of
exposure to pesticides and their precursors in Bhopal has not changed agricultural
practice patterns. An estimated 3 million people per year suffer the consequences
of pesticide poisoning with most exposure occurring in the agricultural developing
world. It is reported to be the cause of at least 22,000 deaths in India each year. In
the state of Kerala, significant mortality and morbidity have been reported
following exposure to Endosulfan, a toxic pesticide whose use continued for 15
years after the events of Bhopal [25].
Aggressive marketing of asbestos continues in developing countries as a result of
restrictions being placed on its use in developed nations due to the well-established link between asbestos products and respiratory diseases. India has
become a major consumer, using around 100,000 tons of asbestos per year, 80% of
which is imported with Canada being the largest overseas supplier. Mining,
production and use of asbestos in India is very loosely regulated despite the health
hazards. Reports have shown morbidity and mortality from asbestos related
disease will continue in India without enforcement of a ban or significantly tighter
controls [26,27].

UCC has shrunk to one sixth of its size since the Bhopal disaster in an effort to
restructure and divest itself. By doing so, the company avoided a hostile takeover,
placed a significant portion of UCC's assets out of legal reach of the victims and
gave its shareholders and top executives bountiful profits [1]. The company still
operates under the ownership of Dow Chemical and still states on its website that
the Bhopal disaster was "caused by deliberate sabotage" [28].
Some positive changes were seen following the Bhopal disaster. The British
chemical company, ICI, whose Indian subsidiary manufactured pesticides,
increased attention to health, safety and environmental issues following the events
of December 1984. The subsidiary now spends 30-40% of its capital
expenditures on environment-related projects. However, it still does not adhere
to standards as strict as those of its parent company in the UK [24].
The US chemical giant DuPont learned its lesson of Bhopal in a different way. The
company attempted for a decade to export a nylon plant from Richmond, VA to
Goa, India. In its early negotiations with the Indian government, DuPont had
sought and won a remarkable clause in its investment agreement that absolved it
from all liabilities in case of an accident. But the people of Goa were not willing to
acquiesce while an important ecological site was cleared for a heavy polluting
industry. After nearly a decade of protesting by Goa's residents, DuPont was
forced to scuttle plans there. Chennai was the next proposed site for the plastics
plant. The state government there made significantly greater demands on DuPont
for concessions on public health and environmental protection. Eventually, these
plans were also aborted due to what the company called "financial concerns" [29].
Conclusion
The tragedy of Bhopal continues to be a warning sign at once ignored and heeded.
Bhopal and its aftermath were a warning that the path to industrialization, for
developing countries in general and India in particular, is fraught with human,
environmental and economic perils. Some moves by the Indian government,
including the formation of the MoEF, have served to offer some protection of the
public's health from the harmful practices of local and multinational heavy
industry, and grassroots organizations have also played a part in opposing
rampant development. The Indian economy is growing at a tremendous rate but at
significant cost in environmental health and public safety as large and small
companies throughout the subcontinent continue to pollute. Far more remains to
be done for public health in the context of industrialization to show that the
lessons of the countless thousands dead in Bhopal have truly been heeded.
